
This sketch starts to flesh out some thoughts around a "pre-tool": a tool that gathers some basic information about the prospective user from a friend or family member. The pre-tool user is not the eventual First Discovery tool user.

The goal of the tool is NOT to start creating a set of preferences; it is to gather just enough basic information about the user's capabilities to get past the initial hurdle of knowing nothing about them.

The information gathered will be used to define, control, direct, and filter the flow of the First Discovery tool. For example, if the pre-tool establishes that the user does not understand sign language, then the First Discovery tool will not bother to ask the user if they would prefer sign language. Most information given by the friend/family member will be verified by the user during the First Discovery process.
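The filtering described above could be as simple as attaching a predicate over the pre-tool answers to each First Discovery step. The answer keys and step names below are invented for illustration; the real tool would define its own schema.

```python
# Hypothetical sketch: pre-tool answers filtering the First Discovery flow.
# All keys, step names, and answer values are illustrative only.

# Answers gathered from the friend/family member
pretool_answers = {
    "can_hear": "not at all",
    "understands_sign_language": False,
    "can_see": "just fine",
}

# Each First Discovery step carries a predicate over the pre-tool answers;
# the step is presented only if the predicate holds. Missing answers
# default to presenting the step, since nothing is known yet.
first_discovery_steps = [
    ("text-size",     lambda a: a.get("can_see") != "not at all"),
    ("sign-language", lambda a: a.get("understands_sign_language", True)),
    ("volume",        lambda a: a.get("can_hear") != "not at all"),
]

def filter_steps(steps, answers):
    """Return only the First Discovery steps that still make sense."""
    return [name for name, applies in steps if applies(answers)]

selected = filter_steps(first_discovery_steps, pretool_answers)
# With the answers above, "sign-language" and "volume" are skipped,
# leaving only "text-size".
```

With an empty answer set, every step is presented, which matches the current behaviour of running First Discovery without a pre-tool.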

This sketch separates the pre-tool information gathering from the First Discovery process. There is no assumption that the two will happen at the same time, or that the First Discovery tool user will be present when the pre-tool user is answering the questions. This arose from the use case prepared during the seniors break-out session, where our user was doing First Discovery at home, alone, using a machine her family had purchased and pre-configured for her. (How realistic is this scenario?)



  • the pre-tool user is someone who knows the First Discovery tool user (i.e. a friend or family member) and is familiar with the person's capabilities; (I recognize that this does not handle cases such as a library employee helping a patron)
  • the pre-tool user may not be filling in the pre-tool questions on the same computer that the First Discovery tool will be using; (is this reasonable?)
  • the pre-tool user is fully capable of using the pre-tool without any accommodations.



  • While this sketch was developed with the Seniors application setting in mind, I imagine the basic ideas would be applicable in other application settings.
  • The answers to earlier pre-tool questions might actually modify later pre-tool questions; for example, if the expected device is identified, questions about physical capabilities could be tailored to the requirements of that device.
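The device-tailoring idea in the last bullet could be sketched as a lookup from the device answer to a set of follow-up questions, falling back to a generic question when the device is unknown. The question wording and device names here are invented for illustration.

```python
# Hypothetical sketch: tailoring later pre-tool questions to the answer
# from the earlier "type of system" question. All strings are illustrative.

DEVICE_QUESTIONS = {
    "touch-screen tablet": [
        "Can Maude tap accurately on a screen?",
        "Can Maude swipe with one finger?",
    ],
    "desktop computer": [
        "Can Maude use a mouse?",
        "Can Maude use a keyboard?",
    ],
}

GENERIC_QUESTIONS = ["Can Maude use her hands?"]

def physical_questions(device):
    # Fall back to the generic hand-use question when no device was named.
    return DEVICE_QUESTIONS.get(device, GENERIC_QUESTIONS)
```

This keeps the pre-tool short: a known device yields two concrete, answerable questions instead of speculation about unfamiliar input methods.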


In the sketch below:

  • the numbered points represent a single screen;
  • the bullet points after a question are the answers that the user can choose from (likely radio buttons or check boxes);
  • the first iteration of the pre-tool might omit some of the questions or options, based on the expected capabilities of the first iteration of the First Discovery tool. For example, if we don't expect the First Discovery tool to support voice input, we might omit the pre-tool questions about whether the user can speak;
  • the question marks represent places where more or different options might be useful, but I don't know what those might be;
  • the italicized text in brackets are my thoughts, questions or comments.




1. Personal Identification of First Discovery Tool User

First name

Gender


Preferred language

(include caveats that this information is only for personalizing the wording in the following screens, e.g. name vs "the user," "his" vs "her," etc.)


2. System Identification (assuming a female user named Maude)

Is Maude going to be working on a particular type of system?

  • desktop computer
  • touch-screen tablet
  • touch-screen phone
  • ?

Has Maude ever seen/used this type of system before?

  • never seen
  • seen
  • seen, watched others use
  • tried once or twice
  • used a few times
  • used often
  • ?

Has Maude ever seen/used other types of computers before?

  • ?


3. Perception

Can Maude see?

  • just fine
  • some difficulty with:
    • small print
    • colour
    • peripheral vision
    • ?
  • very poorly
  • not at all

If Maude can see: Can she read text?

  • just fine
  • with reading glasses
  • if it's large print or she has some kind of magnifying glass
  • she can read it, but she doesn't really understand what she reads

Can Maude hear?

  • just fine
  • if you speak loudly
  • if you yell
  • not at all

If Maude can't hear:

  • Does she understand sign language?
    • if so, is it her first language?


4. Operation

Can Maude use her hands?

  • just fine
  • with a little bit of difficulty
  • with a lot of difficulty
  • not very well
  • not at all

(perhaps accompany the options with example tasks that avoid speculation about potential capabilities but rather focus on known capabilities. For example: if Maude has never used a computer, the person can't say whether or not she can use a keyboard. The examples could refer to similar tasks that can be known at this point, for example using the buttons on a phone.)
(this might want to be check boxes rather than radio buttons, depending on the choices)

Can Maude speak?

  • just fine
  • quietly
  • slowly
  • not very clearly
  • not at all

(these options could be presented with examples)
(this might want to be check boxes rather than radio buttons, depending on the choices)



  1. This is great, Anastasia, and I agree that it could serve as a template for a cross-setting pre-tool. And while I agree with the concern expressed about the pre-tool operator stepping on the target user's toes, I'm also concerned about low completion rates. So for me, that's one main goal of a pre-tool: making the tool experience shorter by 'pre-personalization'. Note that this works just as well for a target user him/herself.

    Another goal is to finesse the problem of getting answers from someone whose input and output NPs are more complex, such as your sign language example.

    Not sure about the gender question. Language is important, and I think a safe assumption is that the pre-tool operator and the target user share this language.

    System ID: do you think it's reasonable to ask if Maude will be using the very device the pre-tool and tool are on? This would be great to know. We could ask for permission to snapshot the accessibility settings, for example. Possibly at the end of the tool we could do some configurations, pairing the NP set with the device. If this is a good way to go, we should definitely encourage people to do their PGA tool session this way, or as close as possible. (this stuff may not belong here eventually, but I just wanted to get it down.)

    I think the functional dimension questions are a good start, and of course the cognition one is going to be hardest, both in terms of what questions to ask and how much to trust the tool operator's answers. "she can read it, but she doesn't really understand what she reads" probably belongs there. In fact, it may be better to separate reading from vision. Can we ask about seeing people or objects?


  2. Thank you Anastasia.

    I don't think a questionnaire hurts at all. It makes getting in the door a more enjoyable experience for those who complete it and for those who do have access to assistance. It could be administered by anyone. Although I do think we need something in here about whether a person is able to read as well. We dodged the literacy discussion at the face-to-face, but if we knew that a person could not read we could at least voice the user interface. This would be particularly valuable for electronic voting.

    We might also ask if someone had been diagnosed with a reading impairment like dyslexia, as long as it does not become too intrusive. In fact, this is one area where some identification of medical diagnosis may be of value, as people (especially seniors) are accustomed to answering questionnaires at doctors' offices. I am not a big fan of this, as you know, but there might be some value here.

    We might also ask if people prefer to have information read aloud.

    What I do like is that the questionnaire was short. Nothing annoys people more than having to answer long questionnaires, and they may in fact punt.

  3. Thank you Anastasia,

    I agree with your comments under the "Can Maude use her hands" section; it would be helpful to have additional options indicating known skills, such as using a keyboard, mouse, or touchscreen, or comparable capabilities.

  4. One way of addressing the "I know better than Maude" problem of the pre-tool operator is to ask the questions in terms of certainty: 

    1. I'm certain Maude can read
    2. I'm pretty sure Maude can read
    3. I think Maude can read
    4. I don't know if Maude can read

    This works well for can/can't questions, but less well for gradations. We'd have to create a nice interface to make it easy to say things like "I'm pretty sure Maude has some difficulty using her hands."
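The certainty scale proposed in the last comment could be stored alongside each answer so the First Discovery tool knows how much to trust it and what to re-verify with the user. The scale values and field names below are invented for illustration.

```python
# Hypothetical sketch: pairing each pre-tool answer with the operator's
# stated certainty. The numeric weights are illustrative only.

CERTAINTY = {
    "certain": 1.0,
    "pretty sure": 0.75,
    "think so": 0.5,
    "don't know": 0.0,
}

def record_answer(question, value, certainty):
    """Pair an answer with a numeric confidence weight."""
    return {
        "question": question,
        "value": value,
        "confidence": CERTAINTY[certainty],
    }

answer = record_answer("can_read", True, "pretty sure")
# The First Discovery tool might re-verify any answer whose
# confidence falls below 1.0, and ignore "don't know" answers entirely.
needs_verification = answer["confidence"] < 1.0
```

For graded questions ("some difficulty with small print"), the same structure works if `value` holds the chosen gradation, though the interface challenge noted in the comment remains.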