
Some thoughts on defining walkthrough targets

One of the first tasks for an inspection team is to decide just what it is going to inspect. Real-life instances of the products under consideration are diverse in version, configuration, customization, and local enhancements and extensions. These are what real-world users experience and interact with.

While it is true that the target environment for component development should be the head end of the main development trunk of each product, it may not be the appropriate target for the inspection process. There is likely much to be learned from the existing local deployments. For example, in the uPortal realm, the inspection team must ask: "Can we learn everything we need to know from a laboratory demonstration version of uPortal3 - which doesn't match any production implementation - or should we extend our exploration to instances of the product that people are actually using?"

It may be that the real inspection target needs to be a composite, covering a number of typical implementations, making the assumption that some usability/accessibility problems are revealed only in actual operational services built from the product. It may turn out that the inspection team can express their results using a single non-production baseline instance of a product - this would be ideal.  But it is likely that they will have to examine real operational instances to determine if their baseline instance is representative. To make an analogy: the target instance will be demonstrably representative of real-world environments, in much the same way as a persona is representative of a user population.

Whatever is selected as the inspection target, it should allow for the following:

  • It must remain relevant over time
  • Walkthrough results must be replicable in future walkthroughs
  • There must be a means of demonstrating improvement when component solutions are applied

These observations suggest that there must be a certain stability to the target as the Fluid project progresses. We will need to show achievement against a meaningful baseline.



  1. Great list, Paul!  These are important questions and issues to think about, and I agree that they may end up being answered in different ways for the different products.  Thinking about Sakai, we can probably test the "out of the box" version.  However, even getting a current, stable version of it where data is persistent can be pretty difficult.  And just Friday one of our technical architects spent almost a full day getting Sakai to run on a designer's machine - no small feat, apparently.

    Should the Fluid community be responsible for having versions of the Sakai, Moodle & uPortal apps running?  Obviously Fluid doesn't have servers and manpower hanging around to support this but perhaps it's something community members would be willing to take on?

  2. Regarding Daphne's question about community members supplying the test-bed instances of the applications, I'm hoping that this is a real possibility.  A site might be willing to offer a production instance for inspection, or clone one for added stability.