Exploring inference from visual cues in ESP to inform textual description and sonification (sandbox)
Visual cues (before engaging the skater on the track):
What is perceived and important in supporting subconscious inference?
• top of track is starting point
• height of track suggests potential speed
• shape of track suggests trajectory
• skater standing on board with wheels suggests weight and rolling potential
• outside scenery, and a skater with a helmet and hands out for balance, suggest this can happen for “real” (2D line drawings with flat colour and outlines suggest that “realness” has parameters within a simulation)
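The "height of track suggests potential speed" inference rests on energy conservation: potential energy at the top converts to kinetic energy on the way down. A minimal sketch of that relationship (function and parameter names are illustrative, not taken from the sim):

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def speed_at_bottom(drop_height_m: float, friction_loss_fraction: float = 0.0) -> float:
    """Speed after descending drop_height_m, assuming a fraction of the
    potential energy is lost to friction (0 = frictionless track)."""
    usable_fraction = 1.0 - friction_loss_fraction
    # m*g*h * usable = 0.5*m*v^2  ->  v = sqrt(2*g*h*usable); mass cancels
    return math.sqrt(2 * G * drop_height_m * usable_fraction)

print(round(speed_at_bottom(4.0), 2))       # 8.85  (frictionless, 4 m drop)
print(round(speed_at_bottom(4.0, 0.3), 2))  # 7.41  (30% lost to friction)
```

Note that mass cancels out of the frictionless case, which is why height alone is such a strong visual cue for potential speed.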
Is spatial relationship important?
Audio, tactile, and haptic cues
When user is engaging skater, options for:
• simultaneous bimodal or trimodal output for a collaborative workspace
• audio: alerts and notifications (start, end, fail...), sonification (location of skater, earcons, track shapes), soundscapes (wheels on surface, which changes depending on weight and friction)
• Delight in the experience? Both visual and audio.
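One way to prototype the sonification options above is simple parameter mapping: skater location drives pitch, while weight and friction shape the wheel soundscape. A sketch under assumed ranges (all names, ranges, and constants here are hypothetical, not PhET's actual audio design):

```python
def height_to_pitch_hz(height_m: float, max_height_m: float = 10.0,
                       low_hz: float = 220.0, high_hz: float = 880.0) -> float:
    """Map skater height linearly onto a two-octave pitch range,
    so higher positions on the track sound higher."""
    t = max(0.0, min(1.0, height_m / max_height_m))  # normalize and clamp
    return low_hz + t * (high_hz - low_hz)

def wheel_noise_gain(speed_ms: float, friction_coeff: float, mass_kg: float) -> float:
    """Louder wheel noise for faster, heavier skaters on rougher surfaces.
    Clamped to [0, 1] so it could feed an audio gain control directly."""
    raw = speed_ms * friction_coeff * (mass_kg / 75.0)  # 75 kg reference mass
    return max(0.0, min(1.0, raw / 10.0))

print(height_to_pitch_hz(0.0))   # 220.0 (bottom of track)
print(height_to_pitch_hz(10.0))  # 880.0 (top of track)
```

In a web sim these mapped values would feed something like a Web Audio oscillator frequency and gain node; the sketch keeps only the mapping logic so the design choices (linear vs. logarithmic pitch, clamping) are visible.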
Exploring/deconstructing the data manipulation dialogue boxes
Moving away from the track build to the data collection and manipulation aspect of the sim (the pedagogical goal).
This is the initial sketch used to deconstruct and understand each part of the dialogue box.
This is a graphical rendering of the discoveries from the deconstruction exercise.
This is the dialogue box elements/tools placed onto the active area of the simulation.
Initial observations and questions:
Order of elements on screen does not parallel the visual order/logic in the dialogue box.
Does this pose a problem for semantics if using a structure such as a table, list, etc.?
Are there any potential problems when a group is engaging the sim and the users simultaneously require different modalities: mouse, screen, keyboard navigation, screen reader?
What is box A’s role? Is it an important scaffold? Is it an adjustment tool to affect results?
Mapping a walkthrough to understand static and dynamic points