We're looking for students to collaborate with us on the Google Summer of Code 2016 program. Working with Fluid gives you a chance to learn more about accessibility and usability while writing code with cutting-edge open web technologies. Create cool stuff and make a real impact on users at the same time!
For information about the various ways we communicate with each other, see our Get Involved wiki page.
Game for First Discovery of Preferences
The goal of this project is to build a game (or game-like tool) that allows for the first-time discovery of digital preferences.
Students and other users of this tool will engage in a process of "learning to learn" - that is, discovering and choosing the preferences that work best for them. This may be directly applicable to a learning environment (for example, preferences that might help someone learn math), or it may be more general (preferences that help someone fill out a form on the internet). In many cases, these preferences are the same (for example, high contrast helps me to see the screen better, so I can complete an online math course). Preferences may include things like high or low contrast, text-to-speech, or simplify-content. For a list of preferences, please refer to the following links: Cloud 4 All Common Terms, PGA Preference Categorisation.
The goal of this tool is to introduce the user/player/learner to the experience of setting digital preferences, and to provide a method for doing so in a playful and engaging way, aimed at user groups who may not have much experience using digital devices. Providing a fun way to discover preferences and try them out in a non-intimidating environment is an important part of the learning-to-learn process. The Discovery Cats design mockup shows one approach, where preferences are set upfront in a game-like interface; once this process is complete the user enters the game itself. Another approach would be to integrate preference discovery and selection into the game itself in a creative way (e.g. how can setting preferences help a player reach a specific goal or fulfill a quest?).
This project will involve working with the design team to make any necessary refinements to the designs and to implement a fully functional game. It is expected that students will make use of Fluid Infusion and any other appropriate web technologies and frameworks to implement the web-based game. The game should be implemented such that it is fully controllable through mouse, keyboard, and touch interfaces, and should make use of ARIA attributes for assistive technologies.
Stretch goal: Consider also the addition of a "dashboard" or other interface that allows the user/player to keep track of their progress over time. This would enable users to measure and track data regarding their own performance in relation to the preferences they set.
See also: Preference Framework and First Discovery Tool for more information and examples of preference-editing tools.
For latest updates on the project: Google Summer of Code (GSoC) 2016 Project Progress Repository
Mentor: Dana Ayotte (design), Justin Obara (dev)
IRC: danayo, Justin_o
Data Visualization and Sonification with Infusion
Building from Fluid's fluid.model.transformWithRules API and following its Model Relay system for connecting component endpoints, this project will build a method of connecting an Infusion app to an arbitrary data source and transforming that data in preparation for rendering. Too often, data pipelines bake in a representational schema that a downstream rendering engine cannot escape. Alternatively, data is put into a representational framework (i.e., a visualization library) that ties the data transformations to specific rendering elements.
The goal of this project will be to build a functional I/O platform for data rendering, so that common types of data sources can be transferred into an application model, transformed into a generic JSON schema, and then given further rules that transpose the data into a representation. Whether the representation is auditory or visual, the platform will treat the data uniformly, establishing a pattern for representational templates that are agnostic to their data sources. This will lead to a friendlier, more accessible approach to representing data usefully to end users.
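As a rough illustration of the declarative idea (this is a toy re-implementation for the example only, NOT the real Infusion API, which supports far richer transform specifications than plain path mappings), rules can map output paths in a generic schema to input paths in an arbitrary source model:

```javascript
// Toy transformer in the spirit of fluid.model.transformWithRules:
// each rule maps an output path to an input path in the source model.
// Illustration only - the real Infusion API is much more capable.
function getPath(obj, path) {
    return path.split(".").reduce(function (o, seg) {
        return o == null ? undefined : o[seg];
    }, obj);
}

function transformWithRules(source, rules) {
    var output = {};
    Object.keys(rules).forEach(function (outPath) {
        var segs = outPath.split(".");
        var node = output;
        for (var i = 0; i < segs.length - 1; i++) {
            node = node[segs[i]] = node[segs[i]] || {};
        }
        node[segs[segs.length - 1]] = getPath(source, rules[outPath]);
    });
    return output;
}

// A source record reshaped into a hypothetical generic series schema
// that a visual or sonic renderer could then consume:
var record = { city: "Toronto", temps: { high: 26, low: 18 } };
var series = transformWithRules(record, {
    "label": "city",
    "values.max": "temps.high",
    "values.min": "temps.low"
});
// series is { label: "Toronto", values: { max: 26, min: 18 } }
```

Because the rules are plain data rather than code, the same source model can be transposed into different representational schemas simply by swapping rule sets.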
SoundFonts provide a means for packaging and distributing audio samples for use in wavetable synthesizers and samplers. They typically provide a variety of instrument sounds sampled at different pitches and octaves, making it easy to create realistic-sounding digital instruments. SoundFonts are particularly useful for data sonification, since they provide a simple and low-cost way to give users the ability to choose from a variety of instrumental sounds when creating their sound designs.
The Floe Project (Flexible Learning for Open Education) is developing new tools for sonification and data presentation using audio. These tools are based on Flocking, a framework for audio signal processing, synthesis, and music composition, which uses the Web Audio APIs now built into most modern web browsers.
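As a tiny sketch of the kind of mapping a sonification layer performs (plain JavaScript, independent of Flocking or any SoundFont; the pitch range used here is an arbitrary choice for the example, not anything these tools prescribe), data values can be scaled onto MIDI note numbers and converted to frequencies:

```javascript
// Equal-temperament conversion, A4 (MIDI note 69) = 440 Hz.
function midiToFreq(note) {
    return 440 * Math.pow(2, (note - 69) / 12);
}

// Linearly scale an array of data values into a MIDI note range,
// rounding to whole notes so a sampled instrument (e.g. one loaded
// from a SoundFont) can play them directly, then convert to Hz.
function sonifyToFreqs(values, loNote, hiNote) {
    var min = Math.min.apply(null, values);
    var max = Math.max.apply(null, values);
    var span = max - min || 1; // avoid dividing by zero for flat data
    return values.map(function (v) {
        var note = Math.round(loNote + (v - min) / span * (hiNote - loNote));
        return midiToFreq(note);
    });
}
```

In a real pipeline these frequencies would be fed to a Flocking synth rather than computed in isolation; the point is that the data-to-pitch mapping is a small, testable transformation separate from the rendering engine.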
Mentor: Colin Clark
Accessible, Responsive Music UI Controls
With the introduction of the Web Audio API and music frameworks such as Flocking, it's possible to make music and develop custom instruments entirely using Web technologies.
A variety of user interface component libraries, such as Nexus UI, jQuery Kontrol, Interface.js and G200K's Polymer controls, have been developed to assist in the creation of musical interfaces. However, the majority of them aren't very "web-like": many are based on Canvas or bitmap images, aren't compatible with responsive design techniques, can't easily be re-styled or customized using tools like CSS, and aren't accessible via the keyboard or assistive technologies such as a screen reader.
This project will involve the creation of a small collection of high-quality, responsive, SVG or DOM-based musical user interface controls such as knobs, sliders, x/y pads, button matrices, envelope editors, or waveform viewers. The student is free to choose which components to build, but each component will support extensive customization via CSS, will support use on mobile, tablet, and desktop devices, will include ARIA markup for assistive technologies, and will be fully controllable with the keyboard. Where visual presentations convey real-world controls (such as rotary knobs), the mouse and touch interactions will be consistent with the metaphor (e.g. rotary knobs should support a circular gesture for increasing and decreasing the value, not just a linear up/down mapping). An interaction designer from the Fluid Project community will be available to help with visual and interaction questions from the student throughout the project. These controls should be compatible with Flocking and Fluid Infusion.
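As one illustration of the rotary-knob metaphor described above, the core of a circular gesture is mapping the pointer's angle around the knob's centre to a normalized value. The sketch below assumes a usable arc from -135° (value 0) to +135° (value 1), measured clockwise from straight up - a common convention, not something mandated by any of the libraries mentioned:

```javascript
// Map a pointer position over a rotary knob to a value in [0, 1].
// (cx, cy) is the knob's centre; (px, py) is the pointer, both in
// screen coordinates (y grows downward, hence the negated dy).
function knobValueFromPointer(cx, cy, px, py) {
    // Angle in degrees, 0 pointing up, positive running clockwise.
    var angle = Math.atan2(px - cx, -(py - cy)) * 180 / Math.PI;
    // Clamp to the usable arc (-135 to +135 degrees) and normalize.
    var clamped = Math.max(-135, Math.min(135, angle));
    return (clamped + 135) / 270;
}
```

A complete control would layer this onto an SVG dial, mirror the value into aria-valuenow/aria-valuemin/aria-valuemax, and also accept arrow-key input so keyboard users get the same range without the gesture.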
Mentor: Simon Bates & Michelle D'Souza
IRC: simonjb michelled
Implement User Interface / Learner Options Responsive Design
Difficulty: low - medium
Mentor: Jonathan Hung
WebRTC Echo/Sound Test Application
Vidyo is a videoconferencing solution that enables high definition, low-latency, error resilient, multi-point video communication to both desktop and room system endpoints across general purpose IP networks. It was the first industry solution to support the H.264 SVC (Scalable Video Coding) standard for video compression and was part of the initial design of Google Hangouts. Vidyo also offers a WebRTC server that allows web browsers to make calls and join conferences without any software installation. This means that participants joining through WebRTC can interoperate with clients on other platforms supported by Vidyo, like native Vidyo endpoints as well as third party H.323, SIP, and Microsoft Lync clients.
One common issue in video conferences is adjusting volume levels across participants. It's often the case that a participant will sound too quiet or too loud, even with automatic volume configuration being provided by some clients. Participants then have to blindly adjust their microphone's volume level and ask other participants whether they now sound okay. This is a costly process that often delays web conferences and causes unnecessary distraction. It makes for inefficient, sometimes embarrassing experiences for remote users. We want everyone to feel welcome and heard.
The goal of this project will be to build an application (and accompanying HTML5 website) using the WebRTC API that allows participants to connect to a video conference and test their volume levels by having their voice echoed back. This could be done by asking the participant to say something for a pre-defined amount of time and echoing it back to her. Another solution is to make the echoing constant but with a small delay so the participant can keep saying words and hearing back how she sounds in almost real-time.
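One building block such an app needs, whatever framework it uses, is a volume meter. In a browser, the samples would come from a Web Audio AnalyserNode (via getFloatTimeDomainData) on the getUserMedia stream, and the delayed echo could be produced with a DelayNode; the level computation itself is a small pure function, sketched here in plain JavaScript:

```javascript
// Compute the RMS level of a buffer of audio samples (values in
// [-1, 1]) and express it in dBFS (decibels relative to full scale:
// 0 dBFS is the loudest possible level; silence tends to -Infinity).
function rmsLevelDb(samples) {
    var sumSquares = 0;
    for (var i = 0; i < samples.length; i++) {
        sumSquares += samples[i] * samples[i];
    }
    var rms = Math.sqrt(sumSquares / samples.length);
    return 20 * Math.log10(rms);
}
```

The app could poll this a few times per second and tell the participant plainly whether they are in a comfortable range, too quiet, or clipping, instead of leaving them to guess.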
Mentor: Giovanni Tirloni
Most of the work we do here either uses or directly involves the Infusion Framework and Component Library. These links should get you started learning about Infusion, and should lead you to many more pages.
Contributing Code To Infusion
Tutorial - Getting started with Infusion
Infusion Framework Best Practices
Good First Bugs
I am Arnold Chuenffo, a Master's student at the Faculty of Engineering and Technology, University of Buea, Cameroon.
I went through all the selected organisations for GSoC 2016 and the Fluid Project matched my interests.
I am particularly interested in the ideas "Implement User Interface/Learner Options Responsive Design" and "Accessible, Responsive Music UI Controls".
Frameworks: AngularJS, jQuery, Laravel, CodeIgniter, Ionic (for mobile)
Others: RESTful APIs, Grunt, Node.js, Git, Firebase, Parse, Stylus.
Hi Arnold Tagne.
Thanks for your interest in the UI/Learner Options Responsive Design project. Here's a link to the UI Options wiki page: (Floe) User Interface Options (aka. Learner Options). You should find links to a lot of information about the design and the API.
To get involved, you can take a look at the current list of known bugs (see above) and see if there is anything there you want to help fix.
Feel free to ask me questions by leaving a comment here using the "@<username>" command in your response, or by finding me in the #fluid-work IRC channel (see instructions here: IRC Channel). My username in IRC is
Also if you have anything interesting you want to show me, feel free to share.
- Jonathan (Inclusive Designer)
I am Abhishek Bansal, an undergrad student at IIIT-Hyderabad, India (http://iiit.ac.in).
I am very much interested in the work done by Fluid, so I want to contribute to the Fluid Project as part of a GSoC '16 project.
In particular, I am interested in the project idea 'Game for First Discovery of Preferences'.
I have looked at the resources mentioned for this project idea. Are there any patch requirements or anything else needed to proceed further?
I know C, C++, Python, HTML, CSS, and JS well, and I also have some experience with web frameworks like web2py and AngularJS.
You can see some of my work at - https://github.com/abhibansal530
My name is Harsh Gautam and I am a CS student. I'd love to work with Fluid for my GSoC project.
I wanted to know more about the technical aspects of the 'WebRTC Echo/Sound Test Application' project. I couldn't find a way to communicate with the given mentor (tbd).
Any feedback would be highly appreciated. :)
Hi @Jonathan Hung,
Hope you are doing great. Thanks for your detailed reply.
I went through the mockups, read the documentation, and tried the demos online. It's quite interesting and a good challenge, and I think I am up to those tasks.
I have some apps on the windows phone store (http://www.windowsphone.com/enus/search?q=arnold+chuenffo.) and my music app (afrizik.firebaseapp.com).
I will get back to you if I have any question.
Enjoy your day
My name is Alecu Marian Alexandru (you can call me Alex) and I am in my second year of Computer Science at Babes-Bolyai University, Romania (http://www.cs.ubbcluj.ro/).
I'm very interested in the project "Game for First Discovery of Preferences" - maybe because games are the reason I first started to learn programming.
Before we proceed further, I would like to fully understand the project.
Firstly, looking at your demo with cats, the first question that comes to mind is: will this be a 2D or 3D game?
Secondly, is the question at a given step in the game related to the previous one? For example, will choosing an option at the current step affect what the next question is?
Will any AI algorithm be involved, or will the questions always be the same (until someone changes them in the database)?
I'm very enthusiastic to be a part of GSOC project and it is nice to meet all of you.
Thank you, have a nice day!
Hello @Jonathan Hung,
Aman Vashney, Alexandru, Arnold Tagne, Harsh Gautam, Jitesh Madan, Abhishek Bansal, and anyone else: please contact us in the fluid-work IRC channel to discuss your project ideas.
Hi Jonathan Hung,
I am Winston, a final year Computer Science student at National University of Singapore and I am interested in working on "Implement UI/Learner Option Responsive Design" project. I have a year of experience in Silicon Valley working as a front-end developer using AngularJS and Bootstrap. I also have experience working with Sass and MaterializeCSS. Feel free to visit my site here. Cheers!
Winston Goh thanks for your interest. Please contact us in the fluid-work IRC channel. See prior comment.
Hi @Jonathan Hung,
My name is Arshad Khan; I am a second-year student pursuing a B.Tech. in Information Technology at Bharati Vidyapeeth College of Engineering, New Delhi. I am currently interning as a front-end and UX developer at Broomberg Cleaning Services, New Delhi.
I am willing to work on the Implement User Interface / Learner Options Responsive Design project because I have the required skills and, most importantly, for the learning experience. In addition, I have a great interest in implementing user interfaces and design.
Personal Website: www.collegenerd.in
Broomberg Services: https://www.broomberg.in
arshad khan please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.
Hello Jonathan Hung,
My name is Himank Bhalla and I am a third-year undergraduate in Information Technology Engineering at USICT, GGSIPU, Delhi.
I find the Implement UI/Learner Options Responsive Design and Accessible, Responsive Music UI Controls projects very interesting, probably because I have experience in front-end design.
I have also worked at a Delhi-based startup, http://www.clickgarage.in/, as a front-end developer.
I am really interested in working with the Fluid Project.
himank bhalla please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.
Hi Jonathan Hung,
I am Ngwa Pius, a 4th-year Computer Engineering student in the Faculty of Engineering and Technology, University of Buea, Cameroon.
After browsing through the projects carried out by Fluid, I am particularly interested in working on the "WebRTC Echo/Sound Test Application". I have worked with two companies as an intern (at PIAR Inc for 3 months and at Go-Groups Ltd for 6 months) as both a front-end and back-end developer.
I possess skills that would greatly enhance my development of this project. These include:
I also program in C, Java, and PHP. Hope to connect with you soon to share ideas.
Ngwa Pius please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.
Wepngong Ngeh Benaiah
Hello Justin Obara,
I have built web and desktop applications and also a demo game. While browsing the projects I got interested in "Game for First Discovery of Preferences" and would love to discuss it further with the devs. Any pointers would be appreciated. I would love to contribute to Fluid through GSoC 2016.
I am Abhisek Panda, an undergrad student at CET Bhubaneswar, India (http://www.cet.edu.in/).
I am a Node.js developer and I want to work with you on the project mentioned below. How do I prove myself as a worthy candidate for this project? I have prepared a sub-project, RPI-Webcam, related to your project idea "WebRTC Echo/Sound Test Application": https://github.com/Abi-Abhisek/RPI-Webcam.git
My name is Blagoj Dimovski and I am a second-year student at the Faculty of Computer Science and Engineering in Skopje, Macedonia. I've looked through the projects, and I found this one really interesting: Implement User Interface / Learner Options Responsive Design. I have some knowledge and experience with user interface design and user experience from courses at my faculty, but also from practical work. In the past few years, I've also worked on a few projects as a freelancer in the area of Web Design and Development, and I've attended a few workshops. You can check some of my work at my small portfolio here: http://blagojdimovski.com/, as well as on freelancer.com: https://www.freelancer.com/u/BlagojD1.html.
I am really interested in working on and contributing to the project. I have some ideas for improving it, and I will try to work on the bugs as my first step.
Abhisek Panda and Blagoj please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.
Edmund Alwin de Leon
Hi Jonathan Hung!
I am Alwin de Leon. I have already shared the link to my draft proposal with IDI. I hope you can give comments on it.
Thank you so much,
Edmund Alwin de Leon please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.