
Location: WebEx

Audio connection: +1-408-792-6300 (call-in toll number, US/Canada); access code: 576 089 623


  • Dana Ayotte
  • Anastasia Cheetham
  • Cynthia Jimes
  • Kate Katz
  • Jess Mitchell
  • Emily Moore
  • Madeleine Rothberg
  • Sepideh Shahi
  • Gregg Vanderheiden
  • Shari Twain


Objective: The next Co-Design meeting will focus on the requirements doc and project scope.

Resources: NIDRR's comments on our Requirements doc, the comment-able version of the Requirements doc, and the Performance Work Statement (Contract) are linked here as well as in the Working Docs section of the wiki. We will be referencing these documents frequently during the meeting.

  1. Revising requirements doc in response to NIDRR feedback
    • Application-setting specific requirements
      • Addressing NIDRR comment, "it appears that the requirements for the tool are identical in each of the domains"
      • Gregg's suggestion: adding discussion about "weighing preferences"
      • Should we add new sections for technical and circumstantial requirements for OER, Assessment, and Elderly?
    • Discuss and come to a decision around needs versus preferences in general.
    • Review glossary in requirements doc; determine who can lead the organization and final revision of the glossary
  2. Scope and timeline
    • Determine (through the discussion) a process and framework for deciding which application setting to use for our prototype
    • Review current design/development timeline - any revisions needed?
  3. How to determine default settings
    • Establish a framework for determining each of the following:
      • What is the default size of the text?
      • What is the default volume for audio?
      • Will these values differ by application setting?
      • Do we actually need to know the minimum need? The goal is to get people in the door to use and access content within the application setting; we should check all design decisions back against that requirement.
  4. Determine inclusion and exclusion criteria for deciding which candidate preferences will be presented by the First Discovery tools.
    • Determine how and where we can capture variations in inclusion and exclusion criteria across application settings.
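One way to think about the default-settings questions above is a shared baseline with per-setting overrides. The sketch below is purely illustrative: the value names, numbers, and setting keys are assumptions for discussion, not actual First Discovery defaults.

```python
# Hypothetical sketch of default preferences: a shared baseline plus
# per-application-setting overrides. All names and values are illustrative.

BASELINE_DEFAULTS = {
    "fontSize": 16,   # px on a typical desktop display
    "volume": 0.5,    # fraction of system maximum
}

# A setting only appears here if the group decides its defaults should
# differ from the baseline (e.g. assessment hardware constraints).
SETTING_OVERRIDES = {
    "assessment": {"fontSize": 18},
    "voting": {},
    "oer": {},
}

def defaults_for(setting: str) -> dict:
    """Merge the shared baseline with any setting-specific overrides."""
    merged = dict(BASELINE_DEFAULTS)
    merged.update(SETTING_OVERRIDES.get(setting, {}))
    return merged
```

This structure also answers the "will these values differ by application setting?" question operationally: a value differs only where an override is recorded, and the rationale for each override can be noted alongside it.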


Kate's notes:

Anastasia's notes:

setting-specific requirements becoming cross-setting: how?

  • Madeleine: cross-setting doesn't always mean "all" settings, just "more than one"
  • Anastasia: should we identify the reqs that "changed" and make a specific note of that explicitly?
  • Kate: make a note of how it's applicable to more than one setting
  • Shari: make note of when a req is NOT applicable to a specific setting? e.g. adding modules not relevant to voting?
  • Gregg: later discussions said "that's true for my area, too" Other than voting, we couldn't think of anything that never applied in the other settings. Regarding "using the same language as in the IEP," not sure what that means...
  • Madeleine: IEP might specify what accommodations the student is entitled to
  • Madeleine: The prefs we seem to be settling on are insufficient to support assessment; they're not enough to allow the student to go on and take a math or English test. For assessment, you need more thorough prefs checking, so that you're optimizing better; we also didn't include various educational-supports preferences – they're not for getting in the door. But in assessment, the "door" is much further away than in other areas.
  • Gregg: in assessment, operating the test itself can't be more difficult than actually answering the test questions
  • Madeleine: yes, that's a requirement of assessment
  • but this is really a "second level" concern, beyond first discovery
  • the fact that FD is only about getting in the door is basically why the requirements are the same across settings
  • Kate: how can we capture all the rich work that was done for assessment in the requirements doc, even though it is out of scope for FD?
  • this is actually a reason not to use assessment for our prototype, since we can't really test an assessment scenario after only FD; in this setting, you do need that second level
  • The workshop did come up with a very rich set of requirements for assessment.
  • Maybe put these "second order" requirements in an appendix, since they are application specific but not in scope of FD?
  • We would have to imagine that a preference could never be relevant in other contexts for it to be something that wouldn't be in common; that's the burden: coming up with the never.
  • Also, re-articulate our emphasis on the notion of tools, and contexts being so important to any particular implementation of this solution
  • Kate: are there technical constraints? e.g. in OER, integration with the LMS?
  • Anastasia: this is a web app, and we're only saving prefs, not acting on them, so basically: no (other than voting, of course).
  • Kate: but SOW says show a prototype which would save prefs and allow them to use their prefs in an application setting, so we may need to find an app setting where we can show that
  • Gregg: saving in cloud is enough, it puts them in a position to use the prefs; pulling down and transforming not in scope of this work order?
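Gregg's point that "saving in cloud is enough" suggests the prototype's deliverable is the saved preference set itself. As a discussion aid, here is a minimal sketch of what a prototype might serialize and send to a preferences server; the key names, token format, and payload shape are assumptions, not the actual preferences-server schema.

```python
import json

def build_prefs_payload(user_token: str, prefs: dict) -> str:
    """Serialize a discovered preference set for saving to a cloud
    preferences server. Hypothetical shape: the "userToken" and
    "preferences" keys are illustrative assumptions."""
    payload = {
        "userToken": user_token,
        "preferences": prefs,
    }
    # sort_keys makes the output stable, which helps when diffing saved sets
    return json.dumps(payload, sort_keys=True)
```

Pulling the saved set back down and transforming it for a particular application setting would be the "second level" work that, per the discussion, may be out of scope for this work order.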

Criteria for selecting application setting for prototype

  • Should "being able to show end-to-end" be a criterion? i.e., a lot of work has already been done to set up the application of people's preferences in the OER realm, so we could build on the existing work there
  • Shari: another criterion could be the typical hardware that people are using in an environment and how well that matches what we've implemented on

When to get this response back to NIDRR?

  • At the January meeting with NIDRR, we can express "this is our plan of action, and this is where we are right now" but we don't want to take much time on this – we don't want to hold up development
  • May 1, we need to submit the Objective 2 report, describing the architecture and implementation of the tool.

Needs vs Prefs in general

  • difference between a need and a preference
  • does the FD tool need to figure out both? no agreement
  • Jess: two other points, too: 1) how does the FD tool fit in the ecosystem of other tools that we are developing in the GPII, and 2) we know that this tool isn't going to solve everything for everyone; how do we articulate where the limits are? need to make those hard decisions that are going to have an effect on design in time for development to start tackling some of this
  • Gregg: yes this refers to "failing gracefully"
  • Dana: worried about scope of FD tool, not making it a long, arduous process.
  • Kate: could Gregg articulate, specifically, which prefs we'd need to identify the need as well as the pref, that might help our discussions
  • Gregg: list might be: font size, volume (but we can't really do that), reach (mobility). Gregg will add to response document

How to define boundaries?

  • Kate: we need to find some research/literature to support our decisions; an existing framework?
  • Gregg: WCAG, etc. has some of this, but: talk in terms of real-world measurements, not points (12pt on a desktop screen is not equal to 12pt on an iPhone)
  • Gregg knows someone at Lighthouse for the Blind who might have those numbers
  • Gregg: our decision might be limited by the hardware we're using and the interface designs we implement; they might only support a certain maximum font size. We will just need to use all the input of our members and our co-developers to determine the sweet spot.
  • Shari: some keyboard settings can be set in ways that help some people without inconveniencing others
  • Gregg: we should record our rationale for our design decisions, so we can explain why we did things the way we did

Goals for next meeting:

  • agree on criteria for selecting application setting for prototype, try to prepare rest of response to NIDRR feedback
  • spend meeting focusing on designs, not more requirements discussions


  • Add additional language explaining why we didn't include things; flesh out the section expressing that there were multiple things we felt would occur in more than one application setting. Turn that sentence into a section, i.e. a paragraph or two that's more explicit about the examples we thought would cross application settings: Anastasia
  • Glossary: update, respond to NIDRR feedback, add IEP, assessment, others. Gregg and Anastasia
  • Draft list of which prefs require both need and pref: Gregg





