We're looking for students to collaborate with us on the Google Summer of Code 2016 program. Working with Fluid gives you a chance to learn more about accessibility and usability while writing code with cutting-edge open web technologies. Create cool stuff and make a real impact on users at the same time!
For information about the various ways we communicate with each other, see our Get Involved wiki page.
Game for First Discovery of Preferences
The goal of this project is to build a game (or game-like tool) that allows for the first-time discovery of digital preferences.
Students and other users of this tool will engage in a process of "learning to learn" - that is, discovering and choosing the preferences that work best for them. This may be directly applicable to a learning environment (for example, preferences that might help someone learn math), or it may be more general (preferences that help someone fill out a form on the internet). In many cases these preferences are the same (for example, high contrast helps me see the screen better, so I can complete an online math course). Preferences may include things like high or low contrast, text-to-speech, or simplified content. For a list of preferences, please refer to the following links: Cloud 4 All Common Terms, PGA Preference Categorisation.
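As a rough illustration only, a discovered preference set might be stored as a simple structure like the sketch below. The term names here are invented for the example and are not the actual vocabulary; see the Cloud 4 All Common Terms document linked above for the real terms.

```javascript
// A hypothetical preference set a player might discover through the game.
// These property names are illustrative, not the registry's actual terms.
var preferences = {
    highContrast: true,        // render the UI in a high-contrast theme
    textToSpeech: false,       // read page content aloud
    simplifiedContent: true,   // hide secondary content and navigation
    textSize: 1.5              // text size multiplier
};
```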
The goal of this tool is to introduce the user/player/learner to the experience of setting digital preferences, and to provide a method for doing so in a playful and engaging way, aimed at user groups who may not have much experience with digital devices. Providing a fun way to discover preferences and try them out in a non-intimidating environment is an important part of the learning-to-learn process. The Discovery Cats design mockup shows one approach, where preferences are set upfront in a game-like interface; once this process is complete, the user enters the game itself. Another approach would be to integrate preference discovery and selection into the game itself in a creative way (e.g. how can setting preferences help a player reach a specific goal or fulfill a quest?).
This project will involve working with the design team to make any necessary refinements to the designs and to implement a fully functional game. Students are expected to make use of Fluid Infusion and any other appropriate web technologies and frameworks to implement the web-based game. The game should be fully controllable through mouse, keyboard, and touch interfaces, and should make use of ARIA attributes to support assistive technologies.
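As a very rough sketch of the kind of model-driven wiring Infusion encourages, the following defines a game screen as an Infusion view component whose model carries a discovered preference and reacts when the player changes it. The grade name, CSS class, and container selector are all hypothetical, not a prescribed design.

```javascript
// A minimal sketch, assuming Infusion is loaded: a game screen whose model
// holds a preference, re-theming the container when the player toggles it.
fluid.defaults("fluid.demo.discoveryGame", {
    gradeNames: ["fluid.viewComponent"],
    model: {
        highContrast: false
    },
    modelListeners: {
        // Re-theme the game whenever the preference changes in the model.
        highContrast: {
            "this": "{that}.container",
            method: "toggleClass",
            args: ["demo-theme-high-contrast", "{change}.value"]
        }
    }
});

// Attach the game to an element and flip the preference programmatically.
var game = fluid.demo.discoveryGame(".demo-game");
game.applier.change("highContrast", true);
```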
Stretch goal: Consider also the addition of a "dashboard" or other interface that allows the user/player to keep track of their progress over time. This would enable users to measure and track data regarding their own performance in relation to the preferences they set.
See also: Preference Framework and First Discovery Tool for more information and examples of preference editing tools.
For the latest updates on the project: Google Summer of Code (GSoC) 2016 Project Progress Repository
Mentors: Dana Ayotte (design), Justin Obara (dev)
IRC: danayo, Justin_o
Data Visualization and Sonification with Infusion
Building from Fluid's fluid.model.transformWithRules API and following its Model Relay system for connecting component endpoints, this project will build a method of connecting an Infusion app to an arbitrary data source and transforming that data in preparation for rendering. Too often, data pipelines bake in a representational schema that a downstream rendering engine cannot escape; alternatively, data is fed into a representational framework (e.g., a visualization library) that ties the data transformations to specific rendering elements.
The goal of this project is to build a functional I/O platform for data rendering, so that common data sources can be transferred into an application model, transformed into a generic JSON schema, and then given further rules that transpose the data into a representation. Whether the representation is audio or visual, the platform will treat the data uniformly, establishing a pattern for building representational templates that are agnostic to their data sources. This will lead to a friendlier, more accessible approach to representing data usefully to end users.
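For a concrete sense of the starting point, fluid.model.transformWithRules applies declarative rules to a source document. Something like the following (the document shape and rule are invented for the example) could re-shape an incoming record into a generic form before it is handed to a visual or sonic renderer:

```javascript
// Illustrative use of Infusion's model transformation API: re-shape a raw
// record into a generic form that a renderer can consume.
var source = {
    reading: {
        celsius: 21
    }
};

var rules = {
    display: {
        fahrenheit: {
            // Scale the raw Celsius reading via a stock transform.
            transform: {
                type: "fluid.transforms.linearScale",
                inputPath: "reading.celsius",
                factor: 1.8,
                offset: 32
            }
        }
    }
};

var result = fluid.model.transformWithRules(source, rules);
// result: { display: { fahrenheit: 69.8 } }
```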
SoundFonts provide a means for packaging and distributing audio samples for use in wavetable synthesizers and samplers. They typically provide a variety of instrument sounds sampled at different pitches and octaves, making it easy to create realistic-sounding digital instruments. SoundFonts are particularly useful for data sonification, since they provide a simple and low-cost way to give users the ability to choose from a variety of instrumental sounds when creating their sound designs.
The Floe Project (Flexible Learning for Open Education) is developing new tools for sonification and data presentation using audio. These tools are based on Flocking, a framework for audio signal processing, synthesis, and music composition, which uses the Web Audio API now built into most modern web browsers.
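As an informal sketch of where such a pipeline might terminate, Flocking lets a synth's inputs be driven from data at runtime. The following (with an invented data series and simplified timing) maps values onto the frequency of a sine tone:

```javascript
// A minimal Flocking sonification sketch: sweep a sine tone's frequency
// through a small, invented data series.
var enviro = flock.init();

var synth = flock.synth({
    synthDef: {
        id: "carrier",
        ugen: "flock.ugen.sin",
        freq: 440,
        mul: 0.25
    }
});

enviro.start();

var data = [220, 330, 440, 550, 660];
data.forEach(function (value, i) {
    setTimeout(function () {
        // Retune the oscillator to the next data point.
        synth.set("carrier.freq", value);
    }, i * 500);
});
```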
Mentor: Colin Clark
Accessible, Responsive Music UI Controls
With the introduction of the Web Audio API and music frameworks such as Flocking, it's possible to make music and develop custom instruments entirely using Web technologies.
A variety of user interface component libraries, such as Nexus UI, jQuery Kontrol, Interface.js and G200K's Polymer controls, have been developed to assist in the creation of musical interfaces. However, the majority of them aren't very "web-like": many are based on Canvas or bitmap images, aren't compatible with responsive design techniques, can't easily be re-styled or customized with tools like CSS, and aren't accessible via the keyboard or assistive technologies such as screen readers.
This project will involve the creation of a small collection of high-quality, responsive, SVG- or DOM-based musical user interface controls such as knobs, sliders, x/y pads, button matrices, envelope editors, or waveform viewers. The student is free to choose which components to build, but each component will support extensive customization via CSS, will work on mobile, tablet, and desktop devices, will include ARIA markup for assistive technologies, and will be fully controllable with the keyboard. Where visual presentations convey real-world controls (such as rotary knobs), the mouse and touch interactions will be consistent with the metaphor (e.g. rotary knobs should support a circular gesture for increasing and decreasing the value, not just a linear up/down mapping). An interaction designer from the Fluid Project community will be available throughout the project to help the student with visual and interaction design questions. These controls should be compatible with Flocking and Fluid Infusion.
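As a hint of the accessibility wiring involved, the sketch below (the element, class name, and value range are invented) exposes a knob to assistive technologies as a slider and makes it keyboard-operable; a real component would add rendering and the rotary pointer/touch gesture on top of this:

```javascript
// Sketch of the ARIA and keyboard plumbing for a rotary knob control.
function makeKnobAccessible(knob, min, max, initial, onChange) {
    var value = initial;

    knob.setAttribute("role", "slider");
    knob.setAttribute("tabindex", "0");       // focusable with the keyboard
    knob.setAttribute("aria-valuemin", min);
    knob.setAttribute("aria-valuemax", max);
    knob.setAttribute("aria-valuenow", value);

    knob.addEventListener("keydown", function (e) {
        var step = (e.key === "ArrowUp" || e.key === "ArrowRight") ? 1 :
                   (e.key === "ArrowDown" || e.key === "ArrowLeft") ? -1 : 0;
        if (step !== 0) {
            e.preventDefault();
            value = Math.min(max, Math.max(min, value + step));
            knob.setAttribute("aria-valuenow", value);
            onChange(value);  // let the host (e.g. a Flocking synth) react
        }
    });
}

makeKnobAccessible(document.querySelector(".demo-knob"), 0, 127, 64, function (v) {
    console.log("knob value:", v);
});
```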
Mentors: Simon Bates, Michelle D'Souza
IRC: simonjb, michelled
Implement User Interface / Learner Options Responsive Design
Difficulty: low to medium
Mentor: Jonathan Hung
WebRTC Echo/Sound Test Application
Vidyo is a videoconferencing solution that enables high-definition, low-latency, error-resilient, multi-point video communication to both desktop and room-system endpoints across general-purpose IP networks. It was the first industry solution to support the H.264 SVC (Scalable Video Coding) standard for video compression and was part of the initial design of Google Hangouts. Vidyo also offers a WebRTC server that allows web browsers to make calls and join conferences without installing any software. This means that participants joining through WebRTC can interoperate with clients on other platforms supported by Vidyo, such as native Vidyo endpoints as well as third-party H.323, SIP, and Microsoft Lync clients.
One common issue in video conferences is adjusting volume levels across participants. It's often the case that a participant sounds too quiet or too loud, even with the automatic volume configuration some clients provide. Participants then have to blindly adjust their microphone's volume level and ask other participants whether they now sound okay. This is a costly process that often delays web conferences and causes unnecessary distraction, making for inefficient and sometimes embarrassing experiences for remote users. We want everyone to feel welcome and heard.
The goal of this project is to build an application (and accompanying HTML5 website) using the WebRTC API that allows participants to connect to a video conference and test their volume levels by having their voice echoed back. This could be done by asking the participant to speak for a pre-defined amount of time and then echoing it back to her. Another approach is to echo continuously with a small delay, so the participant can keep speaking and hear how she sounds in near real-time.
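As a starting point for the delayed-echo variant, the browser side can be sketched with getUserMedia and a Web Audio DelayNode. The real application would route the echo through the WebRTC conference server rather than locally, so treat this purely as a local loopback experiment:

```javascript
// Local loopback sketch of the "constant echo with a small delay" idea:
// capture the microphone and play it back through a short Web Audio delay.
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var context = new AudioContext();
    var source = context.createMediaStreamSource(stream);
    var delay = context.createDelay(5.0);    // allow delays up to 5 seconds
    delay.delayTime.value = 1.5;             // echo the voice back after 1.5s

    source.connect(delay);
    delay.connect(context.destination);      // hear yourself, slightly delayed
}).catch(function (err) {
    console.error("Microphone access failed:", err);
});
```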
Mentor: Giovanni Tirloni
Most of the work we do here either uses or directly involves the Infusion Framework and Component Library. These links should get you started learning about Infusion, and should lead you to many more pages.
Contributing Code To Infusion
Tutorial - Getting started with Infusion
Infusion Framework Best Practices
Good First Bugs