
NIDRR Feedback on the Workshop Report

Received 12/7/2014


We have looked over the workshop report, and the team should be commended on the job that has been done; a lot has been achieved. It is clear that the workshop was very well conducted and that the subsequent analysis has been very fruitful. The response to question 9, on the user interface, seems particularly strong and informative.

The analysis has raised a few questions on which we would like some clarification.

It seems that the educational assessment application poses some unique challenges which are brought up in the report, but whose implications are not clear.  For instance, it is indicated that a student’s Individualized Education Plan (IEP) may permit some preference settings and not others in the context of assessment (see last lines on pages 6 and 7, and the middle of page 9).  How does this affect the first discovery process?  E.g., must information from the IEP on allowable settings be an input to first discovery?  What does that imply about the first discovery process? In one sense, the information in the IEP puts constraints on allowable preference settings – however, unlike in the voting case cited in the report (under question 5), these constraints are individualized (and may be different from what the student would select for that setting).  How can such individualized constraints be taken into account in the first-discovery tool?  Do they imply a different kind of architecture for the tool than when they are not present?
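One way the individualized constraints described above could be handled is to treat them as a per-user data input to the first-discovery tool, which then filters the preference options it exposes. The following is a minimal sketch only, under that assumption; the preference names and the `allowable_preferences` function are hypothetical, not part of any existing tool.

```python
# A minimal sketch (not an actual GPII API) of supplying individualized
# constraints, such as those in a student's IEP, as input to a
# first-discovery tool. All names here are hypothetical.

DEFAULT_PREFERENCES = {
    "text-to-speech": ["off", "on"],
    "magnification": ["1x", "1.5x", "2x"],
    "contrast": ["default", "high"],
}

def allowable_preferences(constraints):
    """Return the preference options the tool may expose, given
    per-user constraints mapping a preference name to the subset of
    values the user's plan permits (None means 'not permitted at all')."""
    result = {}
    for name, options in DEFAULT_PREFERENCES.items():
        if name not in constraints:
            result[name] = options  # unconstrained: offer every option
        elif constraints[name] is not None:
            # offer only the permitted subset, in the default order
            result[name] = [o for o in options if o in constraints[name]]
        # a None constraint suppresses the preference entirely
    return result

# An IEP that disallows text-to-speech during assessment but permits
# limited magnification:
iep = {"text-to-speech": None, "magnification": ["1x", "1.5x"]}
print(allowable_preferences(iep))
# {'magnification': ['1x', '1.5x'], 'contrast': ['default', 'high']}
```

Under this framing, the tool's architecture need not change when constraints are absent: an empty constraint set simply yields the full default preference space.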

Similarly, table 2 identifies “Subject-specific preferences” as preferences that should be supported for Online Assessments as identified in the workshop, but we do not find the implications of this in the document. Can those implications be elaborated? E.g., would it mean a different first-discovery process for each subject (hopefully not)?

The voting setting also poses some interesting questions whose implications could be worth elaborating. First, it is noted that voting laws differ by state. Presumably this means that the allowable accommodations differ by state. Would this imply that a different first-discovery tool would potentially be needed on a state-by-state basis? It was also noted that in the voting domain it would be useful to have the first-discovery tool show what the actual ballot would look like under the various settings. Because ballots differ at the township level (i.e., a level much more specific than a state), what does this imply about the tool? Must the ballot be considered “input” to the first-discovery tool? This question is, of course, not unique to voting: it appeared that having content of interest to the user in the first-discovery process is important generally. How could that be accomplished?
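The questions above suggest one possible answer: jurisdiction rules and ballot content could both be treated as data inputs to a single first-discovery tool, rather than building a separate tool per state or township. This is a hypothetical sketch only; the accommodation lists and the `build_discovery_session` function are illustrative, not drawn from the report.

```python
# A hypothetical sketch of treating state accommodation rules and
# township-level ballot content as *data* supplied to one
# first-discovery tool, instead of building a tool per jurisdiction.

STATE_ACCOMMODATIONS = {
    "WI": ["large-print", "audio"],
    "MN": ["large-print", "audio", "sip-and-puff"],
}

def build_discovery_session(state, ballot):
    """Assemble a first-discovery session from jurisdiction data.
    `ballot` is whatever township-level content the tool should
    preview under each candidate setting."""
    return {
        "accommodations": STATE_ACCOMMODATIONS.get(state, []),
        "preview_content": ballot,  # shown under each trial setting
    }

session = build_discovery_session("WI", ballot=["Mayor", "School Board"])
```

On this design, moving the tool between jurisdictions means changing the data it is given, not its architecture; the open question is how that data would be authored and delivered.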

The proposal for the design and development of first discovery tools is approved, but the team should consider how any of the questions raised above could be addressed in this context. For example, please consider how individual constraints (such as from an IEP) could be handled in this framework.

A second question to consider (or perhaps elaborate at a later time) concerns the generality and scalability of the first discovery process. For instance, how much would the configuration of the tool have to change from one polling place to another (in the same state versus in different states)? What about from a voting application to an OER application? How much programming skill would be needed for these types of changes?
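One possible answer to the scalability question is to confine deployment-to-deployment differences to declarative configuration, so that moving between polling places, states, or application domains requires editing data rather than writing code. The configurations below are invented for illustration only.

```python
# A hypothetical illustration of confining per-deployment differences
# to declarative configuration; the tool code itself stays fixed.

VOTING_CONFIG = {
    "domain": "voting",
    "preview": "sample-ballot",
    "preferences": ["large-print", "audio", "contrast"],
}

OER_CONFIG = {
    "domain": "oer",
    "preview": "sample-lesson",
    "preferences": ["text-to-speech", "captions", "contrast"],
}

def configure_tool(config):
    """The same tool, parameterized only by its configuration."""
    return f"first-discovery[{config['domain']}]: " + ", ".join(config["preferences"])

print(configure_tool(VOTING_CONFIG))
print(configure_tool(OER_CONFIG))
```

If the project can keep changes of this kind at the configuration level, the programming skill needed per deployment would be minimal; how far that holds in practice is exactly the question we would like elaborated.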

It is clear that using the existing GPII technologies will be a great benefit, especially in enabling customization of the interface without affecting underlying functionality and in supporting iterative design (not to mention that the project team is very familiar and comfortable with this environment). At the end of the project, it will be useful to understand what we have learned about first discovery independent of any particular implementation, so that this work can inform the development of first-discovery tools in other programming environments and/or other preference-based initiatives.
