
User Experience Walkthroughs of uPortal, Moodle, and Sakai

(Usability and Accessibility Heuristic Evaluations and Cognitive Walkthroughs)

One of the first design activities we are working on in the Fluid project is to identify current user "pain points" by performing heuristic evaluation and cognitive walk-throughs of uPortal, Sakai and Moodle.

"Heuristic evaluation is a discount usability engineering method for quick, cheap, and easy evaluation of a user interface design" (Jakob Nielsen, http://www.useit.com/papers/heuristic/). Design, usability, and accessibility experts will engage in systematic inspections of the user interface with the goal of identifying usability and accessibility problems based on recognized principles ("heuristics"). Our particular technique will combine heuristic evaluations with cognitive walk-throughs of the user interface so that we also look at expected user flows through the system and identify potential workflow problems. Heuristic evaluation isn't, however, meant to be a replacement for watching real users' activity on a system, so we intend to use heuristics in conjunction with user testing.

These evaluations will help us identify areas of the user interface that are most in need of improvement. We can thus prioritize our work on the most important usability and accessibility problems. Based on the findings of these evaluations, we will focus on UI issues that can be solved by designing well-tested, user-centered UI components. These components will encompass common interactions that can be reused within and across community source applications. On the other hand, we don't expect that all problems can be solved by creating UI components. We'll also ensure that findings and identified solutions outside the component realm will be shared with the communities. Heuristic evaluations and walk-throughs will identify areas of focus; we will engage in solid user-centered design practices and user testing to create the right solution.

The working list of heuristics and cognitive walk-through questions for the Fluid project is being compiled at the User Experience Walkthrough Protocol wiki page. We welcome your input, and would very much appreciate additional volunteers to help with the evaluation process.

See the UX Walkthrough Protocol Working Page and UX Walkthrough Guidelines page.

The protocol and guidelines are organic and will continue to be refined as we learn from doing the hybrid inspections/evaluations.

The Walkthrough Process

"To Do" list - User Experience Walkthrough Protocol

As a group:

  1. In Progress: Create a list of usability and accessibility heuristics, using the draft list to get started. In the spirit of agile, we'll refactor the list as we learn from the experience of combining the usability and accessibility heuristics with the cognitive walkthrough methods.
  2. Agree on an evaluation reporting format (see Proposed Draft)

Within the "application" teams:

  1. Agree on specifics of protocol and test-bed for the application under evaluation
  2. Agree on user profiles
  3. Define scenarios for cognitive walkthroughs
  4. Define priority settings for reporting out (Evaluators will go away, do their evaluation, and come back together to synthesize the results.)

A Framework for the Walkthrough Process

The process we're putting together has a lot of autonomy in it, and great potential for imaginative discovery.  But we also want to achieve as much cross-pollination as we can between the teams, both in setting up the inspection plans and in publishing the results.  The intention is that each of the teams makes its own plan, but with lots of open discussion so that everyone can see what everyone else is doing and borrow each other's best ideas. This should be easy if we all use the wiki as our common forum for planning and discussing. Let's consider what is common to all the teams. We can start with a description of what a "protocol" means in this context. Roughly, a protocol is a specification of the tests, processes, scenarios, profiles, and goals that determine the procedure for an inspection or walkthrough, together with a description of what information is to be captured.

From a team's point of view, the inspection process rests on three elements:

  1. A protocol that clearly specifies what we are going to do, and what information we are going to capture along the way.
  2. A predetermined, clearly specified target that we are going to inspect (product, version, instance, set of chunks) - the thing we're going to do it to.
  3. A report template specifying the format, style and content of the report of the information we capture.

Equipped with these three elements, the teams will be able to:

  • Establish a baseline for each product, enabling us to measure the value of future enhancements
  • Identify pain points which can be addressed with near-term solutions
  • Merge their results so that common issues susceptible to common solutions through common components can be identified.

The discovery of common issues with common solutions will be a big win for Fluid and should be kept in mind by everyone at all times as something to strive for.  General and reusable solutions have a value that goes far beyond addressing a pain point in a particular product.

Ideally, we can set things up so that each team can work with the three elements (protocol, target, report template) that work best for their product, but with as much sharing as possible and a minimum of re-invention. 

  • Teams assemble their protocols from the lists outlined by Clayton. If they plan to adopt or use others, these should be added to the common set.
  • Each team picks the best target for their investigation and defines it as precisely as possible
  • All teams start with the same reporting template (see Draft from Daphne). If it is found to be constraining during the "team plan building" phase, enhancements can be made to the common version.

It's worth mentioning here that the inspections are bound to produce interesting information about each product that may not fit the report template, and it will be up to the teams to find a way to communicate it to the rest of us - probably by recording their observations in the wiki.

Selecting a Target Instance of a Product for Inspection

With complex and flexible products such as uPortal, Sakai, and Moodle, which are highly configurable, customizable, extendable, and responsive to their local institutional environment, defining a test-bed environment for inspection presents some challenges. Some thoughts and suggestions are expressed in: Defining Walkthrough Targets.

Working Group Coordination

There is one inspection team per project/application. Teams are expected to be self-organizing and to form their own plans on how to proceed, but to communicate actively with the other teams on their plans and decisions - primarily through the wiki. Much in our approach is experimental, and it will be valuable to record what works, and what does not.  Here is an outline for consideration by the team members and coordinators:

  1. The coordinator arranges an initial team meeting, using the Breeze meeting room or other convenient venue.
  2. The team members identify their areas of experience, expertise, and interest in:
    • Accessibility - cognitive, visual, etc.
    • Usability
    • Cognitive walkthroughs
  3. The team discusses the protocol (See Clayton's outline)
    • What usability heuristics do the members find most suitable?
    • What accessibility measures/tools are to be used?
    • What user profiles are to be assumed?
    • What cognitive walkthrough scenarios are to be attempted?
    • What refinements are required in the protocol?
  4. The team assesses coverage. What areas are covered, and with how much (desirable) redundancy? What areas aren't covered? Each inspection should be done by more than one evaluator - ideally by as many as possible.
  5. Team member partnerships are arranged where possible to address usability and accessibility synchronously. Team leads are assigned in areas of expertise.
  6. The team discusses the logistics of actual inspection activities:
    • What are the problems with geographically distributed teams?
    • Can the Breeze facility help to overcome the problems?
  7. The team discusses reporting: (see Proposed Template from Daphne)
    • Does the proposed template meet the team's needs?
    • Are refinements to the template required?
    • What additional information will be reported?
    • How can results be aggregated with those from other teams (consistency, style, references to heuristic principles, etc.)?

Note: The outline is presented as a set of sequential steps, but there is clear value in iterating over some of them and doing some in parallel. This is also true for the other lists in this document.

Setup for the Evaluation Process

  1. Each team determines its test target and records a clear definition of it, sufficient to permit repeatability of the assessments. (See Defining Walkthrough Targets)
  2. The target application is broken into chunks for evaluation (highest priority areas first)
  3. Usage scenarios for cognitive walkthroughs are created
  4. User profiles are created
  5. The reporting template is selected and finalized
  6. The protocol description is refined and finalized
  7. The team creates a test plan covering:
    • Activity assignments: who will perform what steps
    • Schedule: When activities will be done
    • Selected heuristics and cognitive walkthrough methods (the protocol)
    • Scenarios
    • User profiles
    • Deliverables (what is to be captured from the inspections)
    • Reporting template
  8. The test plan is published in the wiki.
  9. The team commences the evaluation process.

Evaluation Process

  1. Individual evaluations are performed by 3-5 evaluators using the guidelines (or a subset of them) synthesized by this group. (The 3-5 number is not an absolute requirement, only good practice, and strongly encouraged. An evaluation can be done by a single person if only one is available.)
  2. Findings are recorded.
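As a concrete illustration of what a recorded finding might contain, here is a minimal Python sketch. The field names (application, location, heuristic, severity, and so on) are assumptions for illustration only, not the agreed Fluid reporting template:

```python
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    """One recorded walkthrough finding (illustrative structure only)."""
    application: str   # e.g. "Sakai", "Moodle", "uPortal"
    location: str      # screen or chunk where the problem appears
    heuristic: str     # heuristic or walkthrough question involved
    severity: int      # e.g. 1 (cosmetic) to 4 (catastrophic)
    description: str
    evaluator: str

# Example record from a hypothetical evaluation session.
f = Finding(
    application="Sakai",
    location="Gradebook import",
    heuristic="Visibility of system status",
    severity=3,
    description="No progress feedback during a long file upload.",
    evaluator="evaluator-1",
)

# Keeping severity numeric lets findings be sorted and aggregated later.
print(asdict(f))
```

Recording findings in a consistent, structured form like this makes the later synthesis step (merging results across evaluators and teams) far easier than free-form notes alone.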

Results Processing

The following steps may be done in several iterations, with ongoing communication of early drafts across the groups.

  1. Synthesize and prioritize findings
  2. Hold a brainstorming design session (identify conceptual solutions to high-priority issues).  Are there good component candidates?
  3. Write and share out the report.
  4. Incorporate findings into the community (see below)
  5. Look for pain points across applications. Are there issues that one or more components can address well?
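The synthesis steps above can be sketched as a simple aggregation: group findings by heuristic across applications, then rank issues by how widespread and how severe they are. The grouping key, the sample findings, and the ranking rule below are all illustrative assumptions, not a prescribed method:

```python
from collections import defaultdict

# Hypothetical findings: (application, heuristic, severity 1-4).
findings = [
    ("Sakai",   "Visibility of system status", 3),
    ("Moodle",  "Visibility of system status", 4),
    ("uPortal", "Visibility of system status", 2),
    ("Sakai",   "Consistency and standards",   2),
    ("Moodle",  "Error prevention",            3),
]

# Group by heuristic: which applications share the issue, and how bad is it?
by_issue = defaultdict(lambda: {"apps": set(), "max_severity": 0})
for app, heuristic, severity in findings:
    entry = by_issue[heuristic]
    entry["apps"].add(app)
    entry["max_severity"] = max(entry["max_severity"], severity)

# Issues hitting several applications with high severity are the best
# candidates for common, reusable components.
ranked = sorted(
    by_issue.items(),
    key=lambda kv: (len(kv[1]["apps"]), kv[1]["max_severity"]),
    reverse=True,
)
for heuristic, info in ranked:
    print(heuristic, sorted(info["apps"]), info["max_severity"])
```

In this toy data, a "visibility of system status" problem appears in all three applications, so it would float to the top of the list as a candidate for a shared component.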

Incorporating Findings into the Community

Some findings will drive component development, others can be used for general product development within the communities.

  1. Sakai: Integrate into the requirements group.  Do we need to create JIRA tickets?  Are these conceptually "design bugs", and thus do they have a different status than requirements?
  2. Moodle: how does this get fed back into the process?
  3. uPortal: how do we integrate into their requirements process?  Deliver findings to the community?

Heuristic evaluation & Cognitive walkthrough reference material:

Jakob Nielsen's description and overview

Applying Heuristics to Perform a Rigorous Accessibility Inspection in a Commercial Context
SitePoint's suggested step-by-step process for a heuristic review
Proposed UX Walkthrough Report

User Experience (UX) Evaluation Guidelines

Meetings

The UX Walk-through group meets every other Friday at 8am PDT / 10am MDT / 11am EDT / 4pm GMT.

Next Meeting: Friday, August 17, 2007

Teleconference
Local Dial-in number: 416-406-5118
Conference ID: 5220834

Agenda:

  • Discuss & Review Content Management Scenarios & Scope of Evaluations
  • Updates from product teams
  • Other? Add your suggestions here.