
Discovery Tool Sketch - V2

Open Educational Resources (OER)

Phil Kragnes



  • My sketch assumes that no preconfiguration has taken place
  • I have a tendency to think in terms of systems and algorithms, so please forgive the techie nature of my sketch


  1. Automatic Detection

    1. System language setting.

    2. Display type

      1. LCD monitor.

      2. Touch screen.

      3. Refreshable braille.

    3. Sound output

      1. Audio output device.

        1. Speakers.

        2. Headphones.

        3. Coupled to hearing aid(s) or cochlear implant(s).

        4. Other.

      2. Audio output state.

        1. Muted.

        2. Unmuted.
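The detection pass above could feed the rest of the session through a small record like the following sketch. The field names and the stubbed `detect_environment` are my assumptions, not part of the tool; a real implementation would query platform or browser APIs (system locale, touch/braille device enumeration, audio device and mute state).

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DetectedEnvironment:
    """Result of the automatic-detection pass (all field names hypothetical)."""
    language: str = "en"                       # system language setting
    display_types: List[str] = field(default_factory=list)  # "lcd", "touch", "braille"
    audio_device: Optional[str] = None         # "speakers", "headphones", "hearing-aid", "other"
    audio_muted: bool = True                   # audio output state

def detect_environment() -> DetectedEnvironment:
    # Placeholder: real detection would interrogate the OS/browser rather
    # than return fixed values.
    return DetectedEnvironment(language="en", display_types=["lcd"],
                               audio_device="speakers", audio_muted=False)
```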

  2. Personalization

    1. Preferred name.

    2. Preferred gender pronoun.

    3. Age.

      1. Specific number?

      2. Range?

      3. Category?

        1. Child.

        2. Young adult.

        3. Adult.

        4. Senior.

    4. Education — may be used to adjust the instruction and reading level of the tool, with the prompt based on the age specification.

      1. Grade?

      2. Level of education completed?

    5. Preferred Interaction Methods (more than one in the super-ordinate categories may be selected).
      Jim: we may want to refer to Access4All or another input/output schema.

      1. Output

        1. Audio only.

        2. Standard text.

        3. Large print.

        4. Captioned media.

        5. Media with ASL.

      2. Input

        1. Keyboard.

        2. Mouse/trackball.
        3. Camera or other device for free-space gesture recognition.
        4. Switch/scanning system.
        5. Speech.

        6. Touch-screen.
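Since several selections per category are allowed, the preference capture might validate against small sets of tokens. The tokens below simply mirror the outline's categories and are placeholders; per Jim's note, an implementation might instead adopt an existing schema such as Access4All.

```python
# Placeholder vocabularies mirroring the outline; not a fixed schema.
OUTPUT_METHODS = {"audio-only", "standard-text", "large-print",
                  "captioned-media", "asl-media"}
INPUT_METHODS = {"keyboard", "mouse-trackball", "gesture-camera",
                 "switch-scanning", "speech", "touch-screen"}

def validate_interaction_prefs(outputs, inputs):
    """Return the selections as sets, allowing several per category."""
    outputs, inputs = set(outputs), set(inputs)
    if not outputs <= OUTPUT_METHODS:
        raise ValueError("unknown output method(s): %s" % (outputs - OUTPUT_METHODS))
    if not inputs <= INPUT_METHODS:
        raise ValueError("unknown input method(s): %s" % (inputs - INPUT_METHODS))
    return outputs, inputs
```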

  3. Welcome and Introduction
    Jim: this could come first, embedding all the initial options and giving the user an entertaining selection experience from the beginning.
    Phil: A welcome that includes captioning, ASL, and audio description can be very overwhelming. I think it essential to determine some basic IO needs/preferences before a multimedia presentation.

    1. Multimedia

      1. Overview.

      2. Expectations for user participation.

      3. Estimated completion time.

  4. Participation Options
    Jim: Are these choices we're asking the user to make? May be too complex or even unnecessary if we plan for all possible session flows.
    Phil: I think the questions for getting at this type of information could be more social in nature. For instance: "This session takes about x minutes. Would you like to go straight through, or would you like to take short breaks every so often?" Response options: "I would like to continue straight through" | "I would like to take a break every [select] minutes."

    1. Full/continuous.

    2. Sections with breaks.

    3. Save and resume.

    4. Assisted.

      1. Assistant present.

      2. Assistance needed/desired.

        1. Real-time video or text chat.

        2. Resources for obtaining assistance.

  5. Audio and Visual Comfort Adjustment

    1. Play audio content

      1. Instruct individual to press up and down arrow keys until listening volume is comfortable.

      2. Instruct individual to press Enter when satisfied.

    2. Present text content

      1. Text Size.

        1. Instruct individual to press up and down arrow keys until text size is comfortable.

        2. Instruct individual to press Enter when satisfied.

      2. Contrast

        1. Instruct individual to adjust the color of the text (up and down arrows) and the background color (left and right arrows) until they find a comfortable color combination.

        2. Instruct individual to press Enter when satisfied.
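The volume, text-size, and contrast steps above share one interaction pattern: arrow keys nudge a setting until Enter confirms it. A minimal sketch of that loop, with the key names, step size, and bounds as illustrative assumptions:

```python
def apply_adjustment_keys(value, keys, step=1, lo=0, hi=10):
    """Apply a sequence of 'up'/'down' presses to a setting, stopping at 'enter'.

    Models the comfort-adjustment loop: arrows move the value within
    [lo, hi]; 'enter' confirms the current value.  Key names are stand-ins
    for real key events.
    """
    for key in keys:
        if key == "enter":
            break                       # user is satisfied with the setting
        if key == "up":
            value = min(hi, value + step)
        elif key == "down":
            value = max(lo, value - step)
    return value
```

The same function could drive volume (section 5.1), text size (5.2.1), and either axis of the contrast exercise (5.2.2) by mapping left/right arrows onto the same up/down logic.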

  6. Input Comfort Adjustment

    1. Audio input/speech recognition
      Jim: do all users get this?
      Phil: This raises a good point. Earlier in the session, the user should be prompted regarding a disability or medical condition.

      1. Instruct individual to introduce themselves, tell a story, or provide other speech input at a normal conversational volume.

      2. Set microphone level.

      3. Create speech file.
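Setting the microphone level from the speech sample might be approximated by scaling the gain toward a reference loudness. The target RMS here is an illustrative reference, not a standard, and the whole function is a sketch of the idea rather than a calibrated procedure:

```python
import math

def microphone_gain(samples, target_rms=0.1, current_gain=1.0):
    """Suggest a gain so a conversational-volume sample hits a target RMS.

    samples: normalized floats in [-1, 1] captured during the speech
    exercise.  target_rms is an assumed reference level.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return current_gain              # pure silence: leave gain unchanged
    return current_gain * target_rms / rms
```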

    2. Keyboard input

      1. Instruct individual to type a couple of sentences

        1. Repeated occurrence of pressing two keys simultaneously — a key guard may be beneficial; implement and repeat the exercise.

        2. Key press frequently sustained — filter keys may be beneficial; implement and repeat the exercise.
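The two keyboard heuristics above could be screened from a timestamped log of the typing sample. The overlap and hold thresholds below are illustrative guesses, not validated values:

```python
def analyze_typing(events, overlap_s=0.05, hold_s=0.5):
    """Heuristic screening of a typing sample (thresholds are illustrative).

    events: list of (key, press_time, release_time) tuples in seconds,
    ordered by press time.  Suggests a key guard if two keys are often
    down at once, and filter keys if presses are often sustained.
    """
    overlaps = sum(
        1
        for (_, p1, r1), (_, p2, _) in zip(events, events[1:])
        if p2 < r1 - overlap_s   # next key pressed well before previous released
    )
    long_holds = sum(1 for (_, p, r) in events if r - p > hold_s)
    n = max(len(events), 1)
    return {
        "suggest_key_guard": overlaps / n > 0.2,
        "suggest_filter_keys": long_holds / n > 0.2,
    }
```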

    3. Mouse

      1. Instruct individual to target objects with the mouse as they appear on screen — consistent over-shooting of the target may indicate that a reduction in mouse speed would be beneficial; implement and repeat the exercise.

      2. Instruct individual to select objects (single mouse click) and activate objects (double mouse click) — consistent errors may indicate that a reduction in click-speed would be beneficial; implement and repeat the exercise.
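The overshoot check could be reduced to a simple rule over the targeting trials. This sketch models each trial in one dimension along the approach axis; the overshoot rate threshold is an assumption, not an empirically derived value:

```python
def suggest_pointer_speed_reduction(trials, radius, min_rate=0.5):
    """Flag consistent overshoot in a simplified 1-D targeting exercise.

    trials: (target_pos, first_stop_pos) pairs along the approach axis,
    with the pointer assumed to approach from below target_pos.  A trial
    overshoots when the pointer first stops past the target's far edge.
    """
    if not trials:
        return False
    overshoots = sum(1 for target, stop in trials if stop > target + radius)
    return overshoots / len(trials) >= min_rate
```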

    4. Touch-screen

      1. Present target and instruct the individual to change the target to a preferred size by sliding their finger up and down the screen.

      2. Instruct individual to touch the target when satisfied.

        1. If touch is accurate and direct, repeat the exercise multiple times with the same size target to confirm.

        2. If touch is inaccurate/indirect (individual touches screen and slides finger to the target or only touches the periphery of the target), repeat the exercise multiple times, adjusting the target size until consistent, accurate and direct touch is achieved.

      3. Repeat with other gestures such as pinch, 2-finger tap, etc.
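The touch calibration above amounts to growing the target until the individual lands several consecutive direct hits. A minimal sketch, where `touch_trial`, the starting size, step, and streak length are all assumed placeholders:

```python
def calibrate_target_size(touch_trial, size=40, max_size=200, step=20, streak=3):
    """Grow the touch target until `streak` consecutive direct hits (a sketch).

    touch_trial(size) -> bool stands in for presenting one target of the
    given size and reporting whether the touch was accurate and direct.
    """
    hits = 0
    while size <= max_size:
        if touch_trial(size):
            hits += 1
            if hits >= streak:
                return size          # consistent accurate touch achieved
        else:
            hits = 0
            size += step             # enlarge the target and retry
    return max_size                  # stop growing; settle on the largest size
```

The same loop could be rerun per gesture (tap, pinch, 2-finger tap), since accuracy may differ across gestures.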

  7. Verification Exercise

    1. Incorporate determined output selections and settings.

    2. Incorporate determined input selections and settings.

    3. Query the individual's comfort level.

    4. Provide option for returning to any or all of the evaluation exercises.

  8. Summary

    1. Thank individual for participating.

    2. Summarize determined accommodations and settings.

      1. Delineated list.

      2. Inform individual that the list can be printed or E-mailed, and provide instructions.

      3. Instruct the individual how to respond in order to have their settings saved to the cloud, indicating which applications the settings can be applied to.
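The delineated list and the cloud-save payload could come from one serialization step. The setting keys below are hypothetical examples, not a fixed schema:

```python
import json

def settings_summary(settings):
    """Render determined settings as a delineated list plus a JSON blob
    that could be printed, emailed, or saved to the cloud."""
    lines = ["- %s: %s" % (key.replace("_", " "), value)
             for key, value in settings.items()]
    return "\n".join(lines), json.dumps(settings, indent=2)
```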



  1. To trim down the session length, I think most of the time, without any professional pre-configuration or a pre-tool, we will know at least this about a user in the OER setting:

    • Educational level
    • Age (more or less)
    • Language
    • Name
    • Gender pronoun (although I'm not sure if we need this – everything is going to be "I" or "you", right?)
  2. I've always been unclear on how we can implement audio volume, because of the interaction among the system volume setting, application volume setting, amp/speaker volume setting, the user's angle to and distance from the speakers, and environmental noise. We could try to control all of these by tool instructions or by requiring/providing a complete audio hardware kit, but it might be better to finesse this problem by asking a simple social-type question:

    1. I generally like audio volume about the same as other people.
    2. I generally like audio volume a bit louder than other people do.
    3. I generally like audio volume much louder than other people do.
    4. I don't use audio at all.
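The four social-style answers above could map directly onto starting presets, sidestepping the hardware-volume tangle. The response tokens and decibel offsets below are illustrative placeholders, not calibrated values:

```python
# Hypothetical mapping from the four responses to a relative gain offset
# in decibels; None means audio output is disabled entirely.
VOLUME_PRESETS = {
    "same-as-others": 0.0,
    "bit-louder": 6.0,
    "much-louder": 12.0,
    "no-audio": None,
}

def volume_offset(response):
    """Translate a social-style volume answer into a starting gain offset."""
    if response not in VOLUME_PRESETS:
        raise ValueError("unknown response: %r" % (response,))
    return VOLUME_PRESETS[response]
```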