• top of track is starting point
• height of track suggests potential speed
• shape of track suggests trajectory
• skater standing on board with wheels suggests weight and rolling potential
• outside scenery, and a skater with a helmet and hands out for balance, suggest this could happen for "real" (the 2D line drawings with flat colour and outlines suggest that "realness" has parameters within a simulation)
When the user is engaging the skater, there are options for:
• simultaneous bimodal or trimodal output for a collaborative workspace
• audio: alerts and notifications (start, end, fail...), sonification (location of skater, earcons, track shapes), soundscapes (wheels on a surface, changing with weight and friction)
• Delight in the experience? Both visual and audio.
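One way to sketch the sonification idea above (locating the skater by stereo panning, and suggesting speed/energy by pitch) is a pure mapping from simulation state to audio parameters. This is a minimal sketch: the function name, parameter names, and value ranges are all assumptions for illustration, not part of any real simulation API.

```python
def skater_earcon(x, speed, track_width=10.0,
                  f_min=220.0, f_max=880.0, speed_max=20.0):
    """Map skater state to sonification parameters (hypothetical mapping).

    x: horizontal position on the track, 0..track_width
    speed: current speed, 0..speed_max
    Returns (pan, frequency_hz):
      pan in [-1, 1] places the skater left/right in the stereo field;
      frequency rises linearly with speed to suggest energy.
    """
    # Clamp inputs so the mapping stays within the audible/pannable range.
    x = max(0.0, min(x, track_width))
    speed = max(0.0, min(speed, speed_max))

    pan = 2.0 * (x / track_width) - 1.0   # -1 = far left, +1 = far right
    frequency = f_min + (f_max - f_min) * (speed / speed_max)
    return pan, frequency


# Skater at the left edge, at rest: hard-left pan, lowest pitch.
print(skater_earcon(0.0, 0.0))    # (-1.0, 220.0)
# Skater mid-track at half speed: centred pan, mid pitch.
print(skater_earcon(5.0, 10.0))   # (0.0, 550.0)
```

A soundscape layer (wheels on a surface) could be driven the same way, with weight and friction modulating, say, the noise amplitude of a rolling-sound loop rather than pitch.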
This is the initial sketch to deconstruct and understand each part of the dialogue box
This is a graphical rendering of the discovery from the deconstruction exercise
This is the dialogue box elements/tools placed onto the active area of the simulation
The order of elements on screen does not parallel the visual order/logic of the dialogue box.
Does this pose a problem for semantics if using a structure such as a table, list, etc.?
Are there any potential problems when a group is engaging the sim and the users simultaneously require different modalities: mouse, screen, keyboard navigation, screen reader?
What is box A's role? Is it an important scaffold? Is it an adjustment tool to affect results?