...

  • Mike (coordinator)
  • Rich
  • Colin
  • Julie

The Walkthrough Process

"To Do" list - User Experience Walkthrough Protocol

...

Within the "application" teams:

  1. Agree on user profiles
  2. Define scenarios for cognitive walkthroughs
  3. Define priority settings for reporting out (evaluators will go away to do their evaluations, then come back together to synthesize the results)

...

The process we're putting together has a lot of autonomy in it, and great potential for imaginative discovery.  But we also want to achieve as much cross-pollination as we can between the teams, both in setting up the inspection plans and in publishing the results.  The intention is that each of the teams makes its own plan, but with lots of open discussion so that everyone can see what everyone else is doing and borrow each other's best ideas. This should be easy if we all use the wiki as our common forum for planning and discussing.
Let's consider what is common to all the teams. We can start with a description of what a "protocol" means in this context. Roughly, a protocol is a specification of the tests, processes, scenarios, profiles, and goals that determine the procedure for an inspection or walkthrough, together with a description of what information is to be captured.
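As a purely illustrative sketch, a team's protocol could be recorded on the wiki in a structured form like the one below. The field names are hypothetical, not an agreed schema; they simply mirror the elements listed above (tests, scenarios, profiles, and what is captured):

```python
from dataclasses import dataclass

@dataclass
class Protocol:
    """Hypothetical record of an inspection protocol; field names are illustrative only."""
    heuristics: list           # usability heuristics the team has agreed to apply
    accessibility_tools: list  # accessibility measures/tools to be used
    user_profiles: list        # user profiles to be assumed
    scenarios: list            # cognitive walkthrough scenarios to be attempted
    captured_info: list        # what information each inspection must record

# Example protocol record (contents invented for illustration)
example = Protocol(
    heuristics=["visibility of system status"],
    accessibility_tools=["screen-reader pass"],
    user_profiles=["first-time student"],
    scenarios=["submit an assignment"],
    captured_info=["severity rating", "screenshot"],
)
```

Keeping each team's protocol in a common shape like this would make it easier to compare plans across teams and to aggregate results later.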

...

  • Teams assemble their protocols from the lists outlined by Clayton. If they plan to adopt or use others, these should be added to the common set.
  • Each team picks the best target for their investigation and defines it as precisely as possible
  • All teams start with the same reporting template.  If it is found to be constraining during the "team plan building" phase, enhancements can be made to the common version.

It's worth mentioning here that the inspections are bound to produce interesting information about each product that may not fit the report template and it will be up to the teams to find a way to communicate it to the rest of us - probably by recording their observations in the wiki.

Selecting a Target Instance of a Product for Inspection

With complex and flexible products such as uPortal, Sakai, and Moodle, which are highly configurable, customizable, extendable, and responsive to their local institutional environment, defining a test-bed environment for inspection presents some challenges. Some thoughts and suggestions are expressed in: Defining Walkthrough Targets.

Working Group Coordination

...

  1. The coordinator arranges an initial team meeting, using the Breeze meeting room or other convenient venue.
  2. The team members identify their areas of experience, expertise, and interest in:
    1. Accessibility - cognitive, visual, etc.
    2. Usability
    3. Cognitive walkthroughs
  3. The team discusses the protocol (See Clayton's outline)
    1. What usability heuristics do the members find most suitable?
    2. What accessibility measures/tools are to be used?
    3. What user profiles are to be assumed?
    4. What cognitive walkthrough scenarios are to be attempted?
    5. What refinements are required in the protocol?
  4. The team assesses coverage. What areas are covered, and with how much (desirable) redundancy? What areas aren't covered? Each inspection should be done by more than one evaluator – ideally by as many as possible.
  5. Team member partnerships are arranged where possible to address usability and accessibility synchronously. Team leads are assigned in areas of expertise.
  6. The team discusses the logistics of actual inspection activities:
    1. What are the problems with geographically distributed teams?
    2. Can the Breeze facility help to overcome the problems?
  7. The team discusses reporting:
    1. Does the proposed template meet the team's needs?
    2. Are refinements to the template required?
    3. What additional information will be reported?
    4. How can results be aggregated with those from other teams (consistency, style, references to heuristic principles, etc.)?

Setup for the Evaluation Process

  1. Each team determines its test target and records a clear definition of it, sufficient to permit repeatability of the assessments. (See Defining Walkthrough Targets)
  2. The target application is broken into chunks for evaluation (highest priority areas first)
  3. Usage scenarios for cognitive walkthroughs are created
  4. User profiles are created
  5. The reporting template is selected and finalized
  6. The protocol description is refined and finalized
  7. The team creates a test plan covering:
    1. Activity assignments: who will perform what steps
    2. Schedule: When activities will be done
    3. Selected heuristics and cognitive walkthrough methods (the protocol)
    4. Scenarios
    5. User profiles
    6. Deliverables (what is to be captured from the inspections)
    7. Reporting template
  8. The test plan is published in the wiki.
  9. The team commences the evaluation process.
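The test-plan contents listed in step 7 above could be sketched as a simple structured record for the wiki. Everything below (names, dates, keys) is invented for illustration, not a prescribed format:

```python
# Hypothetical test plan record; all names and contents are illustrative only.
test_plan = {
    "assignments": {                      # who will perform what steps
        "heuristic evaluation": ["Rich", "Colin"],
        "accessibility pass": ["Julie"],
    },
    "schedule": {                         # when activities will be done
        "individual evaluations": "week 1",
        "synthesis meeting": "week 2",
    },
    "protocol": ["selected heuristics", "cognitive walkthrough methods"],
    "scenarios": ["submit an assignment"],
    "user_profiles": ["first-time student"],
    "deliverables": ["findings list", "severity ratings"],
    "reporting_template": "common template",
}
```

Publishing the plan in a consistent shape like this keeps the teams' plans comparable during the open-discussion phase.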

Evaluation Process

...

  1. Break application into "chunks" for evaluation (highest priority areas first)
  2. Create usage scenarios for cognitive walkthroughs
  3. Individual evaluations are performed by 3-5 evaluators. (The 3-5 range is good practice rather than an absolute requirement; an evaluation can be done by a single person if only one is available.)

Results Processing

The following steps may be done in several iterations, with ongoing communication of early drafts across the groups.

  1. Synthesize and prioritize findings
  2. Hold a brainstorm design session (identify conceptual solutions to high-priority issues).  Are there good component candidates?
  3. Write and share out the report.
  4. Incorporate findings into the community (see below)
  5. Look for pain points across applications. Are there issues that a component (or components) can address well?
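One simple way to synthesize and prioritize the individual findings from step 1 (a sketch, not a prescribed method) is to group duplicate reports and rank issues by severity, breaking ties by how many evaluators reported each issue. The findings and names below are invented for illustration:

```python
from collections import defaultdict

# (issue description, severity 1-4, evaluator) tuples from individual evaluations;
# contents are hypothetical examples.
findings = [
    ("login link hard to find", 3, "Rich"),
    ("login link hard to find", 3, "Julie"),
    ("error text too small", 2, "Colin"),
]

# Merge duplicate reports: keep the highest severity and the set of reporters.
grouped = defaultdict(lambda: {"severity": 0, "evaluators": set()})
for issue, severity, evaluator in findings:
    grouped[issue]["severity"] = max(grouped[issue]["severity"], severity)
    grouped[issue]["evaluators"].add(evaluator)

# Rank: highest severity first, then by number of evaluators who saw the issue.
ranked = sorted(
    grouped.items(),
    key=lambda kv: (kv[1]["severity"], len(kv[1]["evaluators"])),
    reverse=True,
)
```

With this ordering, issues that are both severe and independently confirmed by several evaluators rise to the top of the synthesis discussion.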

Incorporating Findings into the Community

Some findings will drive component development

...

, others can be used for general product development

...

within the communities

...

  1. Sakai - Integrate into the requirements group.  Do we need to create Jira tickets?  Are these really "design bugs" conceptually, and thus have a different status than requirements?
  2. Moodle - How does this get fed back into the process?
  3. uPortal - How do we integrate into their requirements process?  Deliver findings to the community?
  4. Look for pain points across applications. Are there issues that a component (or components) can address well?

Selecting a Target Instance of a Product for Inspection

...

Heuristic evaluation & Cognitive walkthrough reference material:

...