
Please find annotated wireframes here: First Discovery Nov 20 14 copy.pdf

More detailed text description to follow.

Major refinements:

  • the switch-scanning option has been removed, bringing the total number of preference screens down to four: language, screen reader, magnification, and contrast/colour
  • the tool responds immediately to changes, with an Undo option that is itself not visually transformed by the change; Undo remains available until the user confirms the change by selecting Next (see the sketch after this list)
    • an alternative option is shown, with a separate preview window and an embedded Confirm button
  • an idea for friendly prompts is shown, with things like "Want to try one? You can always undo it", and for the screen reader "Don't like the way it sounds? You can adjust it later", which lets the user know that they can make refinements
  • simplified and plainer language is used throughout
  • navigation features are all in one place on the bottom of the screen (next and back buttons along with a progress indicator)
  • focus, hover, and currently selected states are demonstrated
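
As a rough illustration of the immediate-response behaviour above, here is a minimal sketch of how a screen might apply changes at once while keeping Undo bound to the last confirmed state. All names are hypothetical, not the tool's actual API:

```typescript
// Hypothetical sketch: changes apply immediately; Undo reverts to the
// state last confirmed by "Next" rather than stepping back one change.

type Preferences = Record<string, unknown>;

class PreferenceScreen {
  private confirmed: Preferences;  // state as of the last "Next"
  private current: Preferences;    // live state, shown immediately

  constructor(initial: Preferences) {
    this.confirmed = { ...initial };
    this.current = { ...initial };
  }

  set(key: string, value: unknown): void {
    this.current[key] = value;
    this.apply(this.current);      // user sees the effect right away
  }

  undo(): void {
    this.current = { ...this.confirmed };
    this.apply(this.current);
  }

  next(): void {
    this.confirmed = { ...this.current };  // "Next" confirms the change
  }

  private apply(prefs: Preferences): void {
    // Re-render the screen; the Undo control itself would be excluded
    // from visual transformations so it stays recognizable.
  }
}
```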

4 Comments

    • These are really nice. I like the progress indicators on the left side a lot.
    • Language screen #1: Another option to the “current selection is shown with arrows, yellow box follows keyboard focus” could be to have the box outline stay on the current choice, and the yellow arrows (or maybe one yellow arrow instead of two?) move to indicate the different options. Having the bolder of the two indicators (which seems to me to be the box outline) be the stationary one, and the less bold (the arrows) be the moving one, would make the intent clearer to me. Since the arrows would be moving, they are likely to get attention from anyone who can see them, and the box outline feels “standard” in some way to me as an indicator of “this is the option you are currently selecting”.
    • Language screen #2: Is this screen being read to the user? Could you put the prompt inside each language option? Something like: “Do you prefer English?” “Préférez-vous le français ?” “¿Prefiere español?” This might help get around having the language of the whole screen change a lot – which might be confusing/overwhelming. The hiccup is…how do you ask whether the user prefers other languages? Maybe it’s not that big of a problem – since the current mockup would only show the phrase “other languages” in the three language options anyway. Perhaps “other languages” would just show in the selected language in my example change?
    • Screen Reader Screen: I guess the default would have to be “yes”, since presumably the device has already been talking to you! It would be nice if there was a required interaction here that would allow for the volume and smiley face prompt to appear based on a user action. If the default is already “yes”, then I’m guessing the volume buttons would already be displaying. If the user gets to this screen, wants text reading but does not change volume, then the only interaction would be “next” – which may be a little confusing. Might be useful to think about a way to present this where either a “yes” or a “no” option is required – this way a user could signal clear interest in the text reading, and trigger the appearance of volume controls and the little smiley face comment…and know they made a choice and are ready to move on to “next”.
    • Magnifier Screen: I like the smiley face suggestion.
    • Colour/Contrast Screen: Is this presenting a second option for indicating the “selected” box? Should one of these have an outline (like the Language Screen)?
    • Colour/Contrast use example: Does anyone have user experience with colour/contrast-changing options? I don’t think I would vote for the preview window – it seems like that could get complicated on smaller devices, and could confuse some users even on larger devices (it departs significantly from the previous interaction). On the other hand, I’m not sure how easy it will be to understand what has happened when the whole screen changes except for the undo icon and text. Perhaps that whole lower bar could stay consistent with the previous option – to delineate it as a clear “safe zone”.
  1. Thanks for your comments, Emily.

    Regarding the keyboard-focus state vs. the currently-selected state, I went with the box-frame for keyboard focus as that is what is typically used in most browsers. I am not totally satisfied with the arrows for showing the currently-selected state, but also not comfortable using the box-frame for this.  I wanted to get away from using radio buttons because I'm not sure they would be obvious to someone who is not particularly computer-literate. At one point I had check-marks but they interfered with the progress-indicator check-marks in the toolbar.
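
    To make the two states concrete, here is a small hedged sketch (class names are made up) showing that keyboard focus and the currently-selected option are independent pieces of state, which CSS can then style differently (box frame vs. arrows):

    ```typescript
    // Hypothetical sketch: selection is a class toggled by the tool,
    // while keyboard focus is tracked separately by the browser.

    function select(option: HTMLElement): void {
      // Clear any previous selection among the sibling options.
      option.parentElement
        ?.querySelectorAll(".selected")
        .forEach(el => el.classList.remove("selected"));
      option.classList.add("selected");  // styled as arrows in the mockups
      // The box-frame outline comes from :focus styling and is not
      // touched here, so the two indicators never collide.
    }
    ```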

    I think we could label the language buttons to read the prompt every time (even if they are separated visually), but am not entirely sure.

    We talked a bit at the last meeting about whether an explicit user-action indicating confirmation is necessary, or whether selecting "Next" without modifying the default is enough (as in your example: screen reader is ON, volume level is acceptable, and user selects Next because they like it as-is). There seemed to be some strong agreement based on user-testing research that this was sufficient.

    As you point out, in one of my wireframes there is no keyboard-focus visible. It's possible that until the user tabs to the main screen, keyboard-focus could be elsewhere in the tool, but I think we could design it such that keyboard-focus always falls on the default option on each screen. This would probably be best.
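
    A minimal sketch of that behaviour, assuming plain DOM markup where the default option carries a marker attribute (the attribute name is an assumption):

    ```typescript
    // Hypothetical: whenever a preference screen is shown, move keyboard
    // focus to its default option so a visible focus state always exists.

    function showScreen(screen: HTMLElement): void {
      // Assumed convention: the default option is marked in the markup,
      // e.g. <button data-default>...</button>
      const defaultOption =
        screen.querySelector<HTMLElement>("[data-default]");
      defaultOption?.focus();
    }
    ```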

    I hope that answers some of your questions and I look forward to discussing further at tomorrow's meeting.

  2. This is great.  I agree with Emily's suggestion to use the box to highlight the currently selected option, and the arrows for the current focus.  The interface already uses arrows in a similar way pointing into the left hand bar, so this would be consistent.

    Here is a walk-through with a persona based on someone I know.  I’ve called him Ray.  He has trouble reading, partly due to visual impairment, and partly low literacy.  His hearing is also impaired. His motor control is good enough to use a tablet. He does accidentally touch the screen sometimes. He knows a computer can read things out to him and change colors and sizes. He wants to set preferences for voting and accessing online education. He is using the First Discovery Tool at a day habilitation center on an iPad belonging to the center.   

    First screen - Ray can see the screen but not read the words easily.  He would be able to recognize a flag if one were present.  He can tell there is speech happening but needs to turn up the volume to hear the spoken words. He sees a volume symbol, so he taps on that.  The language selection would be on English by default and he didn’t make any other selection, so he has the right language.

    He gets to the speech screen, still having trouble hearing the speech.  He sees the big plus and minus signs and taps on plus until he can hear the speech.  Now he wants the whole screen to be read to him - how will that happen?  Does the speech read through the whole page then start again from the beginning?  Are the buttons and other items visually highlighted as they are read out?  (This would be helpful.)  
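
    One hedged way to answer the highlighting question, sketched with the browser's Web Speech API (the real tool may well use a different speech engine): read the labelled controls in order and highlight each one while it is spoken.

    ```typescript
    // Illustrative sketch: speak each control's text in sequence and
    // highlight the control currently being read. Class name is made up.

    function readAloud(elements: HTMLElement[], index = 0): void {
      if (index >= elements.length) return;  // could loop back to 0 instead
      const el = elements[index];
      el.classList.add("being-read");        // visual highlight while spoken
      const utterance = new SpeechSynthesisUtterance(el.innerText);
      utterance.onend = () => {
        el.classList.remove("being-read");
        readAloud(elements, index + 1);      // move on to the next control
      };
      speechSynthesis.speak(utterance);
    }
    ```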

    Happy with the volume, now he selects ‘Next’ and gets to the size page. I think he would be able to make a size selection.  Larger text would be helpful for him. How does this page change as he makes the text 4x larger?  Are the three choices still the same but the UI text enlarged?  Does the layout change?
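
    For the layout question, one hypothetical approach is to scale the tool's own root font size and let rem-based layout grow with it, so the three choices stay the same, only larger:

    ```typescript
    // Sketch only: scale the UI by adjusting the root font size.
    // Assumes the tool's layout is expressed in rem units.

    function setMagnification(factor: number): void {
      const basePx = 16;  // assumed browser default font size
      document.documentElement.style.fontSize = `${basePx * factor}px`;
      // With dimensions in rem, the three choices keep their layout,
      // just enlarged; absolute-pixel layouts would need reflow instead.
    }
    ```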

    The tablet starts to slip on his lap and he accidentally touches the ‘reset all’ button.  This leads to a confirmation dialog, and he is able to cancel it without losing all his settings.  This sequence might even raise an internal flag that button size and the filtering of accidental actions are topics to pursue, if Ray ever uses a post-First-Discovery tool.

    From the text size page, Ray selects ‘Next’ again.  With the large text and the speech output, he decides which text is easiest to try and read. (I notice we are using Canadian spelling for ‘colors’. This might be unfamiliar to Ray.) He taps on the ‘white on black’ option.  It might be helpful if the labels for each option were unique (e.g. ‘I prefer black on white’, ‘I prefer white on black’).  The whole screen changes, and it doesn’t look how he expected.  He still sees the same three options.  At this point the ‘no change’ option might be confusing.  Perhaps ‘Original colors’ would be clearer?  He tries another option and sees the screen change again.  
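
    A hedged sketch of that interaction, with made-up class names: the chosen theme is applied to the whole screen, while CSS excludes the bottom bar so it can act as the “safe zone” Emily suggested.

    ```typescript
    // Illustrative only: theme names and classes are assumptions.

    function applyTheme(theme: "black-on-white" | "white-on-black"): void {
      document.body.classList.remove("black-on-white", "white-on-black");
      document.body.classList.add(theme);
      // The stylesheet would scope the colour rules so the bar holding
      // Next/Back/Undo keeps its original palette and stays recognizable.
    }
    ```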

    Then he presses ‘Next’.  That might lead to a confirmation/preview step.  He’s done.  I think that Ray could succeed with an interaction like this, and it would provide him with the essentials he needs.

    .. one thing that we haven't covered yet is how the preferences will be stored, and how they will be associated with Ray.  He hasn’t had to enter his name.  It isn’t his iPad. Has someone done this for him in advance (which would be feasible in this scenario)?
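
    To make the storage question concrete, here is one possible shape for Ray's saved preferences, keyed by an anonymous token rather than his name. The term URLs loosely follow the GPII common-terms convention, but the exact keys and values here are assumptions:

    ```typescript
    // Illustrative only: token format and term URLs are assumptions.

    const rayPreferences = {
      token: "ray-anon-1234",  // anonymous key provisioned for him in advance
      preferences: {
        "http://registry.gpii.net/common/language": "en",
        "http://registry.gpii.net/common/speakText": true,
        "http://registry.gpii.net/common/volume": 0.8,
        "http://registry.gpii.net/common/magnification": 4,
        "http://registry.gpii.net/common/highContrastTheme": "white-black"
      }
    };
    ```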

    .. I wonder if we need to establish anything about button size before leaving First Discovery.  Ray needs fairly large buttons, like the ones here.  If his button taps in this interaction are long and contain slips, or there are screen touches in non-target areas, the tool could go on to offer a choice of target sizes.  If this was being used with a mouse, we’d be able to tell a lot from the cursor path about the user’s motor control.  It might be important to adjust the mouse gain to give the user enough control to make selections with the mouse.
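
    A hypothetical heuristic for that, with purely illustrative thresholds: flag taps that are long, travel far, or land outside any target, and offer larger target sizes once enough evidence accumulates.

    ```typescript
    // Sketch only: thresholds and the evidence count are made up.

    const MAX_TAP_MS = 500;
    const MAX_SLIP_PX = 20;
    let slipCount = 0;

    function classifyTap(start: PointerEvent, end: PointerEvent,
                         hitTarget: boolean): void {
      const duration = end.timeStamp - start.timeStamp;
      const distance = Math.hypot(end.clientX - start.clientX,
                                  end.clientY - start.clientY);
      if (!hitTarget || duration > MAX_TAP_MS || distance > MAX_SLIP_PX) {
        slipCount += 1;
        if (slipCount >= 3) {
          // Enough evidence of imprecise touches: a later tool could offer
          // a choice of target sizes (or adjust mouse gain for pointer use).
        }
      }
    }
    ```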

    .. interesting observation - I didn’t find any need for ‘undo last action’ - Ray can simply choose a different option if he doesn’t like a change. 


  3. This looks good – very easy to follow. A few comments:

    1. I'm hoping that language can be safely inferred, or handled as part of the early assistance that comes naturally with the environment. I feel that we don't have consensus on how the user arrived at the tool, or even on whether that's a design-relevant question that needs answering.
    2. Although there's a lot of value in the left nav bar, it may look cluttered or confusing to some. Maybe it doesn't show up until the 2nd screen? Maybe its presence is a preference itself (ugh).
    3. I like the preview window a lot. Where did this real estate come from? Alternately, what was on it during the other selection screens? I'm still in favor of having a target program screen and a preference setting screen, or if not on separate devices, clearly and permanently divided space on a single screen.