
Visual sketches (PDF):




As is, these images are not necessarily self-explanatory; they are artifacts that Dana and I used in a recent design conversation. They are labeled starting with the number '2' to match the numbering at the bottom of the PDF.

Image 2:

In three columns I tried to capture the points of engagement with different tools (or different layers in one tool).


Column 1: Initial/Primary Engagement

Solo or with helper

Tools may include:

- Pre-sets
- Pre-tool (assistant)
- Games (1)
- Wizards (2)
- Guided questions (2) (what are the numbers?)
- Redo (to check)

Just Get Me In

- Some users could stop at this step and might not re-engage with the tool.


Column 2: Secondary Engagement

Solo or with helper

- Functional tool
- This is where the commonalities will be
- This might resemble some of the functionality of the PMT
- Q1: does the functional tool look the same for everyone?

I Want More

The user has come to this “tool” to dive in deeper…


Column 3: (Not sure how this differs from Column 2, except for the self-verification part, which could happen in Column 1 or 2, I think)

- Self-verification
- Anytime tweaking
- Could be a history or something similar with a √ or X next to changes

Luxury of Time

These “activities” are the sorts of things users will do if they have time and motivation.
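The Column 3 idea of a change history with a √ or X next to each entry could be sketched roughly as below. This is a hypothetical illustration only; the class and field names (`SettingChange`, `ChangeHistory`, `record`, `mark`, `render`) are assumptions, not part of any existing tool.

```python
# Minimal sketch of the Column 3 idea: a history of setting changes that the
# user can later self-verify with a √ (keep) or X (reject) mark.
# All names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SettingChange:
    setting: str                      # e.g. "font-size"
    old_value: str
    new_value: str
    verified: Optional[bool] = None   # None = not yet reviewed

@dataclass
class ChangeHistory:
    entries: List[SettingChange] = field(default_factory=list)

    def record(self, setting: str, old_value: str, new_value: str) -> None:
        """Anytime tweaking: every change is appended to the history."""
        self.entries.append(SettingChange(setting, old_value, new_value))

    def mark(self, index: int, keep: bool) -> None:
        """Self-verification: mark a past change as kept (√) or rejected (X)."""
        self.entries[index].verified = keep

    def render(self) -> List[str]:
        """One line per change, prefixed with its verification mark."""
        marks = {True: "√", False: "X", None: " "}
        return [f"{marks[e.verified]} {e.setting}: {e.old_value} -> {e.new_value}"
                for e in self.entries]
```

For example, after recording a font-size change and marking it kept, `render()` would show a line like `√ font-size: 12 -> 18`.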


Image 3:

This image is a chart with three columns and four rows; the first column holds the row labels.

The rows are the 4 topic areas (voting, OER, OA, Elderly).

The columns are ‘Guided’ and ‘Games’.

Here we were discussing what it might mean to create interfaces for each of the four areas that include interactions along the spectrum from guided “wizards” to games.


We listed some of the features a game might have:

- progress
- achievement
- score
- delight
- fun
- liberation or collection of things


Image 4:

This image is a chart with three columns and four rows; the first column holds the row labels.

The rows are the 4 topic areas (voting, OER, OA, Elderly).

The columns are ‘good enough + leave alone’ and ‘tweaker’.


The point here is that a user who just wants to achieve ‘good enough’ is unlikely to come back and tweak; or rather, they can move forward with ‘good enough’, and there is no requirement for them to come back.


First needs:




(don’t seem to need ASL at first)






  1. Image 2: "Just Get Me In" – is this a request to skip the tool altogether and proceed to the 'target activity' such as voting?

  2. Image 2: “I Want More” – do they want an opportunity to make finer choices?

  3. Image 2: I could see Column 3 as the natural home of all refinements and extensions: conditional selections (high verbosity screen reader setting for ballot initiatives); "I want to use my settings elsewhere"; etc.

  4. Image 3: we could add 'social effects', especially to games. This could be connected to peers, or seeking assistance. For example, you could choose to see the interface your friend uses (with permission, of course). With a headset this could be really cool!

  5. Image 3: "liberation or collection of things" is great! This would work well with having an avatar, which the user selects and customizes as part of the game.

  6. Image 4: I'm more concerned about users abandoning the tool before completing anything than about their not returning to tweak. I think our goal should be to show value right away, after minimal effort. Let them absorb that, and give them opportunities to return, without nagging them. Of course, it's easier in non-voluntary settings like educational assessment. But we shouldn't rely on force, right? (smile) 

    We may be able to use the social effect to get users to return for tweaking. We found a paper in Y1 that said that individuals did more personalization when they knew how much personalization was being done by others:

    Banovic, N., Chevalier, F., Grossman, T., and Fitzmaurice, G. (2012). “Triggering triggers and burying barriers to customizing software.” Paper presented at the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2012), Austin, Texas.
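The conditional selections mentioned in comment 3 (e.g. a high-verbosity screen reader setting only for ballot initiatives) could be sketched as a base preference set plus context-triggered overrides. Everything below (the preference names, the context keys, the merge rule) is a hypothetical illustration, not a proposed format.

```python
# Sketch of comment 3's "conditional selections": base preferences plus
# overrides that apply only when a context condition holds, e.g. higher
# screen-reader verbosity for ballot initiatives. All names are assumptions.
base_prefs = {"screen-reader-verbosity": "low", "font-size": 14}

conditional_prefs = [
    # (condition on the current context, overrides to apply when it holds)
    (lambda ctx: ctx.get("content-type") == "ballot-initiative",
     {"screen-reader-verbosity": "high"}),
]

def effective_prefs(context: dict) -> dict:
    """Merge base preferences with any overrides whose condition matches."""
    prefs = dict(base_prefs)
    for condition, overrides in conditional_prefs:
        if condition(context):
            prefs.update(overrides)
    return prefs
```

The same shape would also cover “I want to use my settings elsewhere”: the base set travels with the user, and each site supplies the context.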