
  1. In Progress - Create a list of usability and accessibility heuristics - use the draft list to get started. In the spirit of agile, we'll refactor the list as we learn from the experience of combining the usability and accessibility heuristics with the cognitive walkthrough methods.
  2. Agree on evaluation reporting format

Within the "application" teams:

  1. Agree on specifics of protocol and test-bed for the application under evaluation
  2. Agree on user profiles
  3. Define scenarios for cognitive walkthroughs
  4. Define priority settings for reporting out (evaluators will go away, do their evaluations, and come back together to synthesize the results)


  1. The coordinator arranges an initial team meeting, using the Breeze meeting room or other convenient venue.
  2. The team members identify their areas of experience, expertise, and interest in:
    1. Accessibility - cognitive, visual, etc.
    2. Usability
    3. Cognitive walkthroughs
  3. The team discusses the protocol (See Clayton's outline)
    1. What usability heuristics do the members find most suitable?
    2. What accessibility measures/tools are to be used?
    3. What user profiles are to be assumed?
    4. What cognitive walkthrough scenarios are to be attempted?
    5. What refinements are required in the protocol?
  4. The team assesses coverage. What areas are covered, and with how much (desirable) redundancy? What areas aren't covered? Each inspection should be done by more than one evaluator – ideally by as many as possible.
  5. Team member partnerships are arranged where possible to address usability and accessibility synchronously. Team leads are assigned in areas of expertise.
  6. The team discusses the logistics of actual inspection activities:
    1. What are the problems with geographically distributed teams?
    2. Can the Breeze facility help to overcome the problems?
  7. The team discusses reporting: (see Proposed Template)
    1. Does the proposed template meet the team's needs?
    2. Are refinements to the template required?
    3. What additional information will be reported?
    4. How can results be aggregated with those from other teams (consistency, style, references to heuristic principles, etc.)?


  1. Individual evaluations are performed by 3–5 evaluators. (The 3–5 number is not an absolute requirement – only good practice, and strongly encouraged. An evaluation can be done by a single person if only one is available.)
  2. Findings are recorded.
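As an illustration of what "findings are recorded" might look like in a structured form, here is a minimal Python sketch. The field names (evaluator, location, heuristic, severity, etc.) are assumptions for illustration only; the team's agreed reporting template would define the actual fields.

```python
from dataclasses import dataclass, asdict

# Hypothetical record structure for one inspection finding.
# Field names are illustrative, not taken from the proposed template.
@dataclass
class Finding:
    evaluator: str    # who observed the issue
    location: str     # page or component inspected
    heuristic: str    # usability/accessibility principle violated
    description: str  # what the evaluator observed
    severity: int     # e.g. 1 (cosmetic) .. 4 (blocker)

def record_finding(log, **fields):
    """Append a finding to the shared log as a plain dict."""
    finding = Finding(**fields)
    log.append(asdict(finding))
    return finding

log = []
record_finding(log, evaluator="A", location="login page",
               heuristic="error prevention",
               description="No confirmation before a destructive action",
               severity=3)
```

Recording each finding as a uniform record, rather than free text, is what later makes cross-team aggregation and consistent priority reporting straightforward.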

Results Processing

The following steps may be done in several iterations, with ongoing communication of early drafts across the groups.
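One way the cross-team aggregation mentioned earlier could work is sketched below: findings from several evaluators or teams are grouped by a shared key, corroborations are counted, and the highest reported severity is kept. The grouping key and field names are assumptions, not part of the proposed template.

```python
from collections import defaultdict

def aggregate(findings):
    """Merge findings reported by multiple evaluators on the same issue:
    group by (location, heuristic), count corroborations, and keep the
    highest severity reported for each issue."""
    merged = defaultdict(lambda: {"count": 0, "severity": 0})
    for f in findings:
        key = (f["location"], f["heuristic"])
        entry = merged[key]
        entry["count"] += 1
        entry["severity"] = max(entry["severity"], f["severity"])
    return dict(merged)

# Two teams independently report the same issue with different severities.
team_a = [{"location": "login", "heuristic": "error prevention", "severity": 3}]
team_b = [{"location": "login", "heuristic": "error prevention", "severity": 2}]
result = aggregate(team_a + team_b)
```

A corroboration count like this gives one simple, consistent basis for the priority settings each team agreed to report out.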


Some findings will drive component development, while others can be used for general product development within the communities.

  1. Sakai - Integrate into the requirements group. Do we need to create Jira tickets? Are these really "design bugs" conceptually, and thus have a different status than requirements?
  2. Moodle - how does this get fed back into the process?
  3. uPortal - how do we integrate into their requirements process?  Deliver findings to the community?