
Introduction

"User states & contexts" (interim name) is a user modelling tool for conceptualizing, designing, and evaluating the ability of a design to be consumed and operated by users in a wide range of states and contexts. That is, given a design or design idea, will users, in the most inclusive sense, be able to use it?

This tool is motivated by the need to keep up with the considerations of basic use (consumption and operation) for all the many states and contexts users can be in. State and context are defined as follows:

State: personal factors that may affect the individual's ability to use a product or system.

Context: physically external context that may affect an individual's ability to use a product or system.

Each dimension we consider is a faculty of the user (i.e., state) or their environment (i.e., context) that may affect the use of a given product or system. In particular, we are interested in the effect of a user's state and context, not its cause. For instance, Sally may not be able to see well; that effect is what interests us. The cause may be that she has inherently low vision (i.e., a state) or that it is too bright outside (i.e., a context); the cause itself does not.

Dimensions of considerations include factors that may be personal or environmental. Dimensions of relevance depend on the product or system of interest.

Examples of dimensions include: vision, hearing, speech, dexterity, cognition, language ability, social context, internet access availability.

Because a user's state and context shift throughout the day, week, and life of a user, what is true at one moment may not be true at another. For instance, a user's needs for a given product in the morning at the office may differ from those in the afternoon in the car. This tool attempts to capture the space of these varying states and contexts in order to make the considerations for designing for all needs more transparent.

This tool should be used in conjunction with personas or other user modelling techniques. See section "Why not personas?".

About the name, "user states and contexts"

"User states and contexts" refers to the cause, whereas each of the elements of consideration is an effect. The tool's name is being temporarily used mainly as a nod to the underlying understanding of abilities/disabilities. It is not, however, what populates the elements of the framework. As such, it is a misnomer that should be corrected once we think of something more appropriate.

Introduction (older draft)

What are "user states and contexts"?

User states and contexts are an articulated understanding of users, used to: 1) ensure designers, developers, and evaluators are on the same page with respect to users and their needs, and 2) give designers and developers a reference artifact to work against while generating and evaluating design ideas (i.e., mindfulness of user needs).

We are attempting to enumerate the various user states and contexts that may have a direct effect on the use of, and access to, a given OER, whether physical or cognitive (e.g., vision-related), contextual/environmental (e.g., a noisy environment), or otherwise.

A user "state" refers to a personal factors (e.g., sensory, motor, cognitive), while a user "context" refers to his or her environmental considerations (e.g., regional context, technological availability, etc.).

These "states and contexts" are factors that relate directly to a user experience (e.g., online video lecture, interactive simulations, etc.). In particular, these factors can be classified under the following buckets (three of which are borrowed from WCAG's own principles):

  • Perceivability (e.g. sensory)
  • Operability (e.g. dexterity)
  • Understandability (e.g. cognitive, communication, cultural)
  • Availability (e.g. internet access)
  • Portability (e.g. device platform)
  • and a catch-all, Other
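As an illustrative sketch only (the bucket and dimension names below are drawn from this page, but the mapping itself is hypothetical, not an official FLOE schema), the buckets could be modelled as a simple lookup:

```python
# Hypothetical sketch: the classification buckets above as a simple mapping
# from bucket to example dimensions. Names are illustrative only.
BUCKETS = {
    "Perceivability": ["vision", "hearing"],
    "Operability": ["dexterity"],
    "Understandability": ["cognition", "communication", "cultural"],
    "Availability": ["internet access"],
    "Portability": ["device platform"],
    "Other": [],  # catch-all
}

def bucket_for(dimension):
    """Return the bucket a dimension falls under, defaulting to 'Other'."""
    for bucket, dimensions in BUCKETS.items():
        if dimension in dimensions:
            return bucket
    return "Other"
```

For example, `bucket_for("dexterity")` returns `"Operability"`, while an unclassified dimension falls through to the catch-all.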

Why not personas?

Initially, we considered constructing personas to distill, articulate, and externalize our understanding of users. Upon further consideration, we realized this was both impractical and undesirable for the FLOE project, for at least two reasons: 1) the breadth of target OER users is extremely wide, and appropriately capturing relevant groups within the population is impractical; and, perhaps more importantly, 2) a key goal of FLOE is to be inclusive of the diversity of humans and their varying needs and preferences. Personas, which deliberately group common needs and preferences into archetypal users, are not well-suited to capturing the diversity and variance in user needs and preferences at a granular level.

Advantages (vs. personas)

* Bypasses the problem of insufficient granularity in personas, as an actual user would be represented by a combination of these states and contexts.

* Avoids typecasting or classification of users into deterministic character + goal-based buckets (which downplays the diversity and multiplicity of user needs and preferences).

* Provides us with a reference point for bringing stakeholders to a similar understanding about user needs.

* Provides us with a reference point during design conceptualization and prototyping, keeping us mindful of user contexts and giving us something to evaluate ideas against (i.e., a sounding board for ideas).

Disadvantages (vs. personas)

* Lacks details around the personality, history, emotions, immediate goals, medium/long-term aspirations, etc., of the user.

* Less "compact" than personas (personas are often created in handfuls, whereas the combination of enumerated states/contexts could number extremely high; there is a practical/logistical benefit to categorizing users).

* Doesn't "humanize" the users (e.g., doesn't tell a story about archetypal users, provide emotional or sociological context, etc.).

* Doesn't assist in prioritizing common user needs (no character biography to compare against).

* Focuses primarily on functional requirements.

Mapping user stories using user states and contexts (very drafty)

Structure of story:

  • For a given product/system,
  • a user with a set of abilities/disabilities,
  • in a particular context,
  • accomplishing particular activities/tasks,
  • on a particular platform
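The story structure above can be sketched as a record, one field per element. This is a minimal, hypothetical sketch; the field names and the example values (drawn from the OER examples elsewhere on this page) are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One user story per the structure above; field names are illustrative."""
    product: str      # the given product/system
    states: list      # the user's abilities/disabilities (personal factors)
    context: str      # the particular context
    tasks: list       # the activities/tasks being accomplished
    platform: str     # the particular platform

# Usage sketch: a story combining entries from the dimensions on this page.
story = UserStory(
    product="online video lecture",
    states=["needs to enhance audio"],
    context="noisy environment",
    tasks=["watch lecture", "take notes"],
    platform="smartphone",
)
```

A set of such records, rather than a handful of archetypes, is what replaces personas here: each story is one point in the space of states and contexts.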

Examples of user states and contexts in action

See (Floe) User states and contexts - Explorations for previous drafts

Draft 3 (being edited)





User states and contexts

Sensory

Can see well. e.g., Susan wears corrective contact lenses, which bring her sight to near 20/20.

Needs an alternative to all visual information. e.g., Casey is blind; Haylee is at home at night with the power out.

Needs to enhance visibility. e.g., Susan's contact lenses fell out; Wendell just had laser eye surgery; Tristram is at the movie theatre.

Needs an alternative to colours. e.g., Gilbert is wearing sunglasses; Jami is colour blind.

Needs an alternative to all audio. e.g., Dennis is drilling into the cement; Cordelia is deaf.

Needs to enhance audio. e.g., Galen is at a bar; Marty has a hearing impairment.

Needs Braille.

Needs tactile graphics.

Dexterity

Can use a pointing device and keyboard well.

Needs an alternative to a pointing device. e.g., Byron has severe cerebral palsy; Jason's wireless mouse battery died.

Needs a modified pointing device. e.g., Krystal has arthritis; Jason is left-handed, but the mouse at the library is set up for right-handed users.

Needs an alternative to a keyboard. e.g., Byron has severe cerebral palsy; Judy's keyboard is on the fritz.

Needs a modified keyboard. e.g., Garrett never learned to type.

Needs a way to stabilize while pointing.

Needs an alternative way to hold down two keys at a time.

Needs an alternative keyboard layout.

Uses automatic scanning.

Uses manual scanning.

Uses directed scanning.

Uses binary code.

Uses quadrant code.

Uses voice recognition.

Uses a touchscreen. e.g., Jim has an iPad or tablet.

Uses eye tracking to point and select.

Cognitive

Finds the page (layout) readable.

Needs simplified layout. e.g., Owen had one too many drinks.

Finds the content understandable.

Needs simplified content. e.g., Leila pulled an all-nighter studying for her exam.

Communication

Is fluent in this language.

Understands this language.

Needs a translation of this language.

Needs sign language.

Needs text to be read aloud.

Cultural

Internet access

Has ready access to high-speed internet.

Has occasional access to high-speed internet.

Has ready access to low-speed internet.

Has occasional access to low-speed internet.

Needs to interact without internet.

Device platform

Has a modern computer.

Has an "antiquated" computer.

Doesn't have a computer at all.

Has a smartphone.

Has a non-smartphone.

Has no phone.

Has a device with a small display. e.g., Otis is using a non-smartphone with a 1.5" screen.

Has a device with a large display. e.g., Oliver is using a 19" monitor; Otto uses an interactive SMART multitouch table.

Temporal

Prefers longer interaction sessions. e.g., Robert has several hour blocks of free time on weekends to engage in an online workshop; Gary is able to sustain his concentration when he is in a focused state of learning and prefers to study for long periods with minimal disruption.

Prefers shorter interaction sessions. e.g., Robin can only spare 10-minute blocks at a time to engage in online lesson modules; Greg finds it daunting to commit to an hour-long lesson module; Terry feels more motivated to work when he is able to complete small chunks or meet milestones as he progresses through a lesson.

Learning arrangement [FLOE-specific]

Instructor-facilitated arrangement. e.g., Sam is sitting with his four classmates listening to his teacher as she teaches a lesson using an online module; Stanley's tutor is teaching him with the help of an online lesson module.

Constructive, collaborative arrangement. e.g., Sidney and his three classmates are collaboratively working on exercises accompanying an online lesson module without an instructor.

Self-directed arrangement. e.g., Stuart is going through a lesson module at home for leisure.
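As noted under "Advantages (vs. personas)", an actual user at a given moment is represented by a combination of these states and contexts, roughly one entry per relevant dimension, and that combination shifts through the day. A minimal sketch (dimension names from this page; the snapshots and the helper are hypothetical):

```python
# Hypothetical sketch: a user's momentary snapshot is one selection per
# relevant dimension; snapshots change as state and context shift.
morning_at_office = {
    "Sensory": "Can see well",
    "Dexterity": "Can use a pointing device and keyboard well",
    "Internet access": "Has ready access to high-speed internet",
}

# In the afternoon, in the car, two dimensions shift for the same user.
afternoon_in_car = dict(morning_at_office)
afternoon_in_car["Dexterity"] = "Needs an alternative to a keyboard"
afternoon_in_car["Internet access"] = "Has occasional access to low-speed internet"

def changed_dimensions(a, b):
    """Dimensions whose state/context differs between two snapshots."""
    return sorted(k for k in a if a.get(k) != b.get(k))
```

Comparing the two snapshots surfaces exactly which dimensions a design must accommodate across the user's day, which is the transparency this tool aims for.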


4 Comments

  1. I like this kind of visualization very much. It could be supplemented by a parallel view of the performance requirements of the task and tools. That is, for each of the "productivity engagements" in Susan's day (e.g., editing a document), the demands of the task itself and the tools used for it would be displayed in the same circle. Wherever the polygons overlapped would be a sign of jeopardized functionality.

    In addition, perhaps the dots representing the user state are a bit fuzzy in the radial dimension, allowing us to illustrate not just a binary "can do" vs. "can't do" but the range of comfort and productivity inherent in having to accommodate to a less-than-optimal situation. This would show more overlaps between the user's state and the environment's demands.

    AT could be represented as "pushing" the user's state further out on the radius.

  2. Another point is that any collisions between the demands of the product in its use context, and the user's needs and preferences show up best in fine-grained analyses. That is, "driving" must be broken down into many separate tasks, so that we can see which tasks are easy for Susan with her sunglasses on, and which pose a problem. This lets us understand the performance chokepoints better, so that we can aim the solution as narrowly as possible.

  3. I have some animated visualizations for related concepts, looking at user abilities vs. product demands: http://inclusive.com/the-design-gap/

  4. Check out the visualization in NFL Stats Lab.