Fluid is an open source community of designers and developers who help improve the usability and accessibility of the open web. We contribute to a variety of open source projects (such as jQuery UI), and we work on a few projects of our own: the Design Handbook, a guidebook of techniques for improving usability, and Infusion, a JavaScript application framework for developing flexible user interfaces.

Fluid Infusion is built on top of jQuery, providing everything you need to create user interfaces that are incredibly flexible, accessible, and easy to use. Infusion is an application framework and a suite of user interface components built with HTML, CSS, and JavaScript. In contrast to many other user interface toolkits, Infusion components aren't black boxes: they're built to be modified, adapted, and changed to suit your application or context. Taking a "one size fits one" approach, Infusion even lets end users customize their experience with the UI Options component.

We're looking for students to collaborate with us on the Google Summer of Code 2016 program. Working with Fluid gives you a chance to learn more about accessibility and usability while writing code with cutting-edge open web technologies. Create cool stuff and make a real impact on users at the same time!

For information about the various ways we communicate with each other, see our Get Involved wiki page.


Game for First Discovery of Preferences

The goal of this project is to build a game (or game-like tool) that allows for the first-time discovery of digital preferences. 

Students and other users of this tool will engage in a process of "learning to learn" - that is, discovering and choosing the preferences that work best for them. This may be directly applicable to a learning environment (for example, preferences that might help someone learn math), or it may be more general (preferences that help someone fill out a form on the internet). In many cases, these preferences are the same (for example, high contrast helps me see the screen better, so I can complete an online math course). Preferences may include things like high or low contrast, text-to-speech, or simplified content. For a list of preferences, please refer to the following links: Cloud 4 All Common Terms, PGA Preference Categorisation.
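To make the idea concrete, here is a minimal sketch of what a discovered preference set might look like as plain data. The key names and value types are illustrative assumptions loosely inspired by the kinds of terms in the registries linked above; they are not the registries' actual schema.

```javascript
// A hypothetical preference set. Every key and value type here is an
// assumption for illustration, not the Cloud4All common terms schema.
const examplePreferences = {
  highContrast: true,        // e.g. a white-on-black display theme
  fontSize: 1.5,             // scaling factor relative to the page default
  textToSpeech: true,        // read page content aloud
  simplifiedContent: false   // reduce page content to its essentials
};

// A consumer might report which preferences the user has switched on.
function describe(prefs) {
  return Object.keys(prefs).filter(function (k) { return prefs[k]; });
}
```

A game could build up such an object incrementally as the player makes choices, then hand it to the preference framework for persistence.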

The goal of this tool is to introduce the user/player/learner to the experience of setting digital preferences, and to provide a method for doing so in a playful and engaging way, aimed at user groups who may not have a lot of experience in using digital devices. Providing a fun way to discover preferences and try them out in a non-intimidating environment is an important part of the learning-to-learn process. The Discovery Cats design mockup shows one approach, where preferences are set upfront in a game-like interface; once this process is complete the user enters the game itself. Another approach would be to integrate preference discovery and selection into the game itself in a creative way (e.g. how can setting preferences help a player reach a specific goal or fulfill a quest?).

This project will involve working with the design team to make any necessary refinements to the designs and to implement a fully functional game. It is expected that students will make use of Fluid Infusion and any other appropriate web technologies and frameworks to implement the web-based game. The game should be fully controllable via mouse, keyboard, and touch, and should make use of ARIA attributes to support assistive technologies.
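The keyboard side of that requirement can be sketched as a shared action layer: mouse clicks, taps, and key presses all resolve to the same game actions, so no input modality is privileged. The action names and key choices below are illustrative assumptions, not a prescribed design.

```javascript
// Hypothetical sketch: route keyboard input into the same set of game
// actions that mouse and touch handlers would trigger. Enter and Space
// both activate, matching the standard ARIA button keyboard behaviour.
const KEY_ACTIONS = {
  "Enter": "activate",
  " ": "activate",
  "ArrowLeft": "previous",
  "ArrowRight": "next"
};

function actionForKey(key) {
  return KEY_ACTIONS[key] || null;
}
```

In the game, a `keydown` listener would call `actionForKey(event.key)` and dispatch the result to the same handler that click and touch events use.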

Stretch goal: Consider also the addition of a "dashboard" or other interface that allows the user/player to keep track of their progress over time. This would enable users to measure and track data regarding their own performance in relation to the preferences they set. 

See also: Preference Framework and First Discovery Tool for more information and examples of preference-editing tools.

For latest updates on the project: Google Summer of Code (GSoC) 2016 Project Progress Repository


Difficulty: medium

Mentor: Dana Ayotte (design), Justin Obara (dev)

IRC: danayo, Justin_o

Skills required: JavaScript, CSS, and HTML. Familiarity with canvas, SVG, and/or JavaScript game engines is a bonus.

Data Visualization and Sonification with Infusion

Building on Fluid's fluid.model.transformWithRules API and its Model Relay system for connecting component endpoints, this project will build a method of connecting an Infusion app to an arbitrary data source and transforming that data in preparation for rendering. Too often, data pipelines bake in a representational schema that a downstream rendering engine cannot escape. Alternatively, data is put into a representational framework (i.e., a visualization library) that ties the data transformations to specific rendering elements.

The goal of this project is to build a functional I/O platform for data rendering, so that common data sources can be loaded into an application model and transformed into a generic JSON schema, which can in turn be given rules that transpose the data into a representation. Whether the representation is audio or visual, the platform will treat the data uniformly, establishing a pattern for developing representational templates that are agnostic to their data sources. This will lead to a friendlier, more accessible approach to representing data usefully to end users.
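The core idea of rules-driven transformation can be sketched in a few lines. This is not Infusion's actual transformWithRules API: the rule format here (output key mapping to an input path and a function) is a simplified assumption made purely for illustration of the pattern.

```javascript
// Minimal sketch of declarative, rules-driven model transformation, in
// the spirit of fluid.model.transformWithRules. NOT Infusion's API:
// the rule format {inputPath, fn} is an assumption for illustration.
function transformWithRules(source, rules) {
  const out = {};
  Object.keys(rules).forEach(function (outputPath) {
    const rule = rules[outputPath];
    const raw = source[rule.inputPath];
    out[outputPath] = rule.fn ? rule.fn(raw) : raw;
  });
  return out;
}

// Example: reshape a datasource record into a generic schema that a
// renderer (visual or sonic) can consume without knowing the source.
const record = { tempF: 212 };
const generic = transformWithRules(record, {
  value: { inputPath: "tempF", fn: function (f) { return (f - 32) * 5 / 9; } },
  unit:  { inputPath: "tempF", fn: function () { return "celsius"; } }
});
```

Because the rules are plain data, they can be swapped per data source while the rendering templates stay fixed, which is the decoupling the project aims for.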


JavaScript SoundFont 2 Parser and Synthesizer Engine

SoundFonts provide a means for packaging and distributing audio samples for use in wavetable synthesizers and samplers. They typically provide a variety of instrument sounds sampled at different pitches and octaves, making it easy to create realistic-sounding digital instruments. SoundFonts are particularly useful for data sonification, since they provide a simple and low-cost way to give users the ability to choose from a variety of instrumental sounds when creating their sound designs.
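SoundFont 2 files are built on the RIFF container format: each chunk begins with a four-character ID followed by a 32-bit little-endian byte length. The sketch below reads a single chunk header from a buffer; a real parser would then recurse into the file's LIST chunks. The function name is hypothetical.

```javascript
// Read one RIFF chunk header (4-char ID + 32-bit little-endian size)
// from an ArrayBuffer. SoundFont 2 files are RIFF containers, so this
// is the first step of any .sf2 parser; deeper chunk handling omitted.
function readChunkHeader(buffer, offset) {
  const bytes = new Uint8Array(buffer, offset, 4);
  let id = "";
  for (let i = 0; i < 4; i++) {
    id += String.fromCharCode(bytes[i]);
  }
  const size = new DataView(buffer).getUint32(offset + 4, true);
  return { id: id, size: size, bodyOffset: offset + 8 };
}

// Build a tiny fake chunk ("RIFF", body length 4) to exercise the reader.
const buf = new ArrayBuffer(12);
const view = new DataView(buf);
"RIFF".split("").forEach(function (c, i) { view.setUint8(i, c.charCodeAt(0)); });
view.setUint32(4, 4, true);
const header = readChunkHeader(buf, 0);
```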

The Floe Project (Flexible Learning for Open Education) is developing new tools for sonification and data presentation using audio. These tools are based on Flocking, a framework for audio signal processing, synthesis, and music composition, which uses the Web Audio APIs now built into most modern web browsers.

However, neither Flocking nor the Web Audio API currently provides any support for parsing or playing back SoundFont-based instruments. While there are JavaScript libraries (such as MIDI.js and the soundfont-player library) that claim to provide SoundFont support, these are all based on an approach that extracts the sound files from a SoundFont with a fixed duration and envelope, significantly limiting the usefulness and expressiveness of the SoundFonts.

This project will entail the development of a robust, pure JavaScript SoundFont 2 parser as well as a Flocking unit generator that is capable of expressively playing a SoundFont. The student may choose to take the sf2-parser library written by Gree and modified by Colin Clark as the basis of the project. The parser should support all the major features of the SoundFont 2 specification, and should be able to parse standard .sf2 files and expose their contents in a data-oriented, JSON-style data structure. The playback unit generator should provide inputs that can control and modulate all of the essential parameters of a SoundFont. All code written should be accompanied by unit tests. Along the way, the student is encouraged to create an awesome musical demo of their work.
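One small piece of the expressive-playback problem is standard synthesis math rather than anything Flocking-specific: a sampler plays one recorded note at many pitches by resampling, and the playback-rate ratio between the requested MIDI note and the sample's root note follows the equal-temperament rule.

```javascript
// Equal-temperament resampling ratio: ratio = 2^(semitones / 12).
// Standard wavetable/sampler math, not a specific Flocking API.
function playbackRate(targetNote, rootNote) {
  return Math.pow(2, (targetNote - rootNote) / 12);
}
```

A SoundFont playback unit generator would combine this ratio with the per-zone root key, tuning, and loop points parsed from the .sf2 data.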


Difficulty: Medium

Mentor: Colin Clark

IRC: colinclark

Skills required: In-depth knowledge of JavaScript and web development. Knowledge of digital audio and MIDI.

Accessible, Responsive Music UI Controls

With the introduction of the Web Audio API and music frameworks such as Flocking, it's possible to make music and develop custom instruments entirely using Web technologies.

A variety of user interface component libraries, such as Nexus UI, jQuery Kontrol, Interface.js, and G200K's Polymer controls, have been developed to assist in the creation of musical interfaces. However, the majority of them aren't very "web-like." Many are based on Canvas or bitmap images; they aren't compatible with responsive design techniques, can't easily be restyled or customized using tools like CSS, and aren't accessible via the keyboard or with assistive technologies such as a screen reader.

This project will involve the creation of a small collection of high-quality, responsive, SVG or DOM-based musical user interface controls such as knobs, sliders, x/y pads, button matrices, envelope editors, or waveform viewers. The student is free to choose which components to build, but each component will support extensive customization via CSS, will support use on mobile, tablet, and desktop devices, will include ARIA markup for assistive technologies, and will be fully controllable with the keyboard. Where visual presentations convey real-world controls (such as rotary knobs), the mouse and touch interactions will be consistent with the metaphor (e.g. rotary knobs should support a circular gesture for increasing and decreasing the value, not just a linear up/down mapping). An interaction designer from the Fluid Project community will be available to help with visual and interaction questions from the student throughout the project. These controls should be compatible with Flocking and Fluid Infusion.
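The rotary-knob metaphor mentioned above comes down to a small piece of mapping logic: a pointer angle over the knob's sweep range is converted to a normalized value. The 270-degree sweep starting at -135 degrees used below is a common knob convention, assumed here only for illustration; the function name is hypothetical.

```javascript
// Map a pointer angle (degrees) over a knob's sweep range to a
// normalized 0..1 value, clamped at both ends. A circular drag gesture
// would compute the angle from the pointer position relative to the
// knob's centre and feed it through this mapping.
function knobValueFromAngle(angleDeg, startDeg, sweepDeg) {
  const t = (angleDeg - startDeg) / sweepDeg;
  return Math.min(1, Math.max(0, t));
}
```

The same normalized value would also back the keyboard interaction (arrow keys nudging it in steps) and the ARIA attributes (`aria-valuenow` against `aria-valuemin`/`aria-valuemax`), keeping all modalities in sync.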

Difficulty: medium

Mentor: Simon Bates & Michelle D'Souza

IRC: simonjb, michelled

Skills required: In-depth knowledge of JavaScript, CSS, and HTML or SVG. Familiarity with common music applications and user interfaces on desktop and mobile.

Implement User Interface / Learner Options Responsive Design

User Interface Options (UIO), also referred to as Learner Options, is a tool which allows a user to customize a web page or application to their own specific needs and preferences. The current implementation has been designed for a traditional desktop experience; however, as the trend toward mobile devices continues, it is necessary to provide a responsively designed implementation that works equally well on a device of any size. An initial set of designs has been mocked up by the design team, providing a starting point for the work.

This project will involve working with the design team to make any necessary refinements to the designs and to implement a fully functional responsive design that will allow UIO to work in a user-friendly manner on screens/devices of all sizes (e.g. mobile, tablet, desktop) and respond appropriately to the preference adjustments that a user makes. The student should expect to make use of CSS, in particular media queries, for implementing the responsive design. Ideally the CSS will be written using the Stylus CSS preprocessor. There may be additional JavaScript coding and HTML changes required to facilitate the new designs.
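As a rough illustration of the media-query approach, a mobile-first breakpoint scheme might look like the following. The class name and breakpoint values are assumptions for illustration only, not UIO's actual markup or design tokens; in practice this would be authored in Stylus and compiled to CSS.

```css
/* Illustrative sketch only: hypothetical class name and breakpoints. */
.uio-panel {
  width: 100%;                   /* small screens: panel fills the viewport */
}

@media (min-width: 640px) {
  .uio-panel { width: 50%; }     /* tablets: share the screen */
}

@media (min-width: 1024px) {
  .uio-panel { width: 33%; }     /* desktops: a side panel */
}
```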

Difficulty: low - medium

Mentor: Jonathan Hung

IRC: jhung

Skills required: JavaScript, CSS, Stylus and HTML. Familiarity with responsive layout techniques (e.g. fluid layouts, media queries).

WebRTC Echo/Sound Test Application

WebRTC is an open framework for the web that enables real-time communications in the browser. It includes the fundamental building blocks for high-quality communications on the web, such as the network, audio, and video components used in voice and video chat applications. These components can be accessed through a JavaScript API, enabling developers to easily implement their own RTC web applications.

Vidyo is a videoconferencing solution that enables high definition, low-latency, error resilient, multi-point video communication to both desktop and room system end points across general purpose IP networks. It was the first industry solution to support the H.264 SVC (Scalable Video Coding) standard for video compression and was part of the initial design of Google Hangouts. Vidyo also offers a WebRTC server that allows web browsers to make calls and join conferences without any software installation. This means that participants joining through WebRTC can interoperate with clients in other platforms supported by Vidyo, like native Vidyo endpoints as well as third party H.323, SIP, and Microsoft Lync clients.

One common issue in video conferences is adjusting volume levels across participants. It's often the case that a participant will sound too quiet or too loud, even with the automatic volume configuration provided by some clients. Participants then have to blindly adjust their microphone's volume level and ask other participants whether they now sound okay. This is a costly process that often delays web conferences and causes unnecessary distraction, making for inefficient, sometimes embarrassing experiences for remote users. We want everyone to feel welcome and heard.

The goal of this project will be to build an application (and accompanying HTML5 website) using the WebRTC API that allows participants to connect to a video conference and test their volume levels by having their voice echoed back. This could be done by asking participants to say something for a predefined amount of time and echoing it back to them. Another solution is to make the echoing constant but with a small delay, so participants can keep saying words and hear how they sound in near real time.
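Alongside the echoed audio, the test could show an objective level readout. The sketch below computes the RMS level of a block of samples and converts it to dBFS; in a real app the samples would come from a browser audio analysis node (browser APIs omitted here), and the function name is hypothetical.

```javascript
// Compute the RMS level of a block of audio samples and express it in
// dBFS (0 dB for a full-scale constant signal). Standard audio-metering
// math; the sample source (e.g. a Web Audio analyser) is omitted.
function rmsLevelDb(samples) {
  let sumSquares = 0;
  for (let i = 0; i < samples.length; i++) {
    sumSquares += samples[i] * samples[i];
  }
  const rms = Math.sqrt(sumSquares / samples.length);
  return 20 * Math.log10(rms);
}
```

Displaying this number (or a meter driven by it) while the participant speaks would let them adjust their microphone without relying on other participants' judgment.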

Difficulty: medium

Mentor: Giovanni Tirloni

IRC: gtirloni

Skills required: In-depth knowledge of JavaScript. Familiarity with web conferencing technology.

Infusion Documentation

Most of the work we do here either uses or directly involves the Infusion Framework and Component Library. These links should get you started learning about Infusion, and should lead you to many more pages.

Contributing Code To Infusion
Infusion Documentation
Tutorial - Getting started with Infusion
Infusion Framework Best Practices

Good First Bugs

T Key Summary Assignee Reporter P Status Resolution Created Updated Due


  1. Hi everyone,

    I am Arnold Chuenffo, a Master's student at the Faculty of Engineering and Technology, University of Buea, Cameroon.

    I went through all the selected organisations for GSoC 2016, and the Fluid Project matched my interests.

    I am particularly interested in the ideas "Implement User Interface/Learner Options Responsive Design" and "Accessible, Responsive Music UI Controls".

    My skills:

    Frameworks: AngularJS, jQuery, Laravel, CodeIgniter, Ionic (for mobile)

    Languages: JavaScript, PHP, Java, CSS3, HTML5.

    Others: RESTful APIs, Grunt, Node.js, Git, Firebase, Parse, Stylus.

  2. Hi Arnold Tagne.

    Thanks for your interest in the UI/Learner Options Responsive Design project. Here's a link to the UI Options wiki page: (Floe) User Interface Options (aka. Learner Options). You should find links to a lot of information about the design and the API.

    To get involved, you can take a look at the current list of known bugs (see above) and see if there is anything there you want to help fix.

    Feel free to ask me questions by leaving a comment here by using the "@<username>" command in your response, or by finding me in the #fluid-work IRC channel (see instructions here: IRC Channel). My username in IRC is jhung.

    Also if you have anything interesting you want to show me, feel free to share.

    . Jonathan (Inclusive Designer)

  3. Hi everyone,

    I am Abhishek Bansal, an undergrad student at IIIT Hyderabad, India (http://iiit.ac.in).

    I am very much interested in the work done by Fluid, so I want to contribute to the Fluid Project as part of GSoC '16.

    In particular, I am interested in the project idea 'Game for First Discovery of Preferences'.

    I have looked at the resources mentioned for this project idea. Are there any patch requirements or similar steps to proceed further?

    I know C, C++, Python, HTML, CSS, and JS well, and I also have some experience with web frameworks like web2py and AngularJS.

    You can see some of my work at - https://github.com/abhibansal530


  4. Greetings Everyone 


    I am Jitesh Madan, a Master's student at the Netaji Subash Institute of Technology, University of Delhi, Delhi, India (http://www.nsit.ac.in/).
    My topics of interest from the above list are "Game for First Discovery of Preferences" & "WebRTC Echo/Sound Test Application".
    My skills:
    • JavaScript libraries: AngularJS, Node.js, Socket.IO, jQuery
    • Languages: HTML5, CSS3, Java, JavaScript
    • Miscellaneous/others: RESTful APIs, Git
    I also have experience with HTML5, Canvas, and other technologies.
    I want to contribute to GSoC '16, and the projects from Fluid fit my interests.




  5. Hi, Jonathan.

    My name is Harsh Gautam and I am a CS student. I'd love to work with Fluid as my GSoC project.

    I wanted to know more about the technical aspects of the project 'WebRTC Echo/Sound Test Application'. I couldn't find a way to communicate with the given mentor (tbd).

    1. Is the sound test application to be built as a feature in Vidyo, or will it be an independent application?
      1. If it is to be made as a feature, where can I start contributing to Vidyo?
      2. If it is to be made from scratch independently, will it have a web app, a native app, or both? Would Electron be a nice way to do it?
    Please introduce me to the working environment here and guide me on the next steps, so that we can maximize the amount of work I can do.

    Any feedback would be highly appreciated. :)

  6. Hi @Jonathan Hung,

    Hope you are doing great. Thank you for your detailed reply.

    I went through the mockups, and I also read the documentation and tried the demos online. It's quite interesting and a good challenge, and I think I can be up to those tasks.

    I have some apps on the Windows Phone store (http://www.windowsphone.com/enus/search?q=arnold+chuenffo) and my music app (afrizik.firebaseapp.com).

    I will get back to you if I have any question.

    Enjoy your day

  7. Hello everyone,

    My name is Alecu Marian Alexandru (you can call me Alex) and I am in my second year of Computer Science at University Babes-Bolyai, Romania (http://www.cs.ubbcluj.ro/).

    I'm very interested in the project "Game for First Discovery of Preferences" - maybe because games are the reason I first started to learn programming.

    Before we proceed further, I would like to fully understand the project.

    Firstly, looking at your demo with cats, the first question that comes to my mind is: will this be a 2D or 3D game?

    Secondly, is the current question at a given step in the game related to the previous one? For example, will choosing one option at the current step affect what the next question is?

    Will any AI algorithm be involved, or will the questions always be the same (until someone changes them in the database)?

    About my skills: I started learning web development at a young age - 15. I have worked in web development for 5+ years, and in these years I have worked with a wide variety of frameworks and libraries. I know HTML5, CSS3, and JavaScript (jQuery 4+ years, AngularJS, ReactJS) very well, as well as PHP (Symfony, CodeIgniter, Yii), and I am a junior in Java, Python, and C++. I also worked with Three.js, but that was 2 years ago. For more details, please check my LinkedIn profile: https://ro.linkedin.com/in/marian-alecu-a7459093.

    I'm very enthusiastic to be part of the GSoC program, and it is nice to meet all of you.

    Thank you, have a nice day!

  8. Hello @Jonathan Hung,

    I am Aman (amanvar on GitHub), a computer science student at IIIT Hyderabad, India. I went through the ideas for GSoC '16 and found "Game for First Discovery of Preferences" quite interesting, and I would like to work on it. As I am a newbie in this org, I would need some guidance to start working on this project. Please provide your guidance and help me understand and get involved in this project.
    I have previously done some projects at my institute using JavaScript, HTML, and CSS.
    Please consider this and help me to start.
    Aman Varshney


  9. Aman Varshney, Alexandru, Arnold Tagne, Harsh Gautam, Jitesh Madan, Abhishek Bansal, and anyone else: please contact us in the fluid-work IRC channel to discuss your project ideas.

  10. Hi Jonathan Hung,

    I am Winston, a final year Computer Science student at National University of Singapore and I am interested in working on "Implement UI/Learner Option Responsive Design" project. I have a year of experience in Silicon Valley working as a front-end developer using AngularJS and Bootstrap. I also have experience working with Sass and MaterializeCSS. Feel free to visit my site here. Cheers!


  11. Winston Goh thanks for your interest. Please contact us in the fluid-work IRC channel. See prior comment.


  13. arshad khan please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.

  14. Hello Jonathan Hung,

    My name is Himank Bhalla, and I am a third-year UG student in information technology engineering at USICT, GGSIPU, Delhi.

    I find the "Implement UI/Learner Options Responsive Design" and "Accessible, Responsive Music UI Controls" projects very interesting, probably because I have experience in frontend design.

    Although I am a full-stack developer (backend in the Django framework on Python), I have worked with a wide variety of frontend tools, frameworks, and libraries. I know HTML5, CSS3, Bootstrap, Materialize, Sass, JavaScript, jQuery, jQuery Mobile, jQuery Steps, EmberJS, and Ajax very well. I have also worked with the HTML5 canvas API.

    I have also worked at a Delhi-based startup (http://www.clickgarage.in/) as a frontend developer.

    I am really interested in working with the Fluid Project.

    thank you,

  15. himank bhalla please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.

  16. Hi Jonathan Hung,

    I am Ngwa Pius, a 4th-year Computer Engineering student in the Faculty of Engineering and Technology, University of Buea, Cameroon.

    After browsing through the projects carried out by Fluid, I am particularly interested in working on "WebRTC Echo/Sound Test Application". I have worked with two companies as an intern (at PIAR Inc for 3 months and at Go-Groups Ltd for 6 months), as both a front-end and back-end developer.

    I possess skills that would greatly enhance my development of this project. These include:

    JavaScript, CSS3, and HTML5, with experience in multimedia programming from course projects.

    I also program in C, Java, and PHP. I hope to get in touch with you soon to share ideas.




  17. Nowa Pius please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.

  18. Hello Justin Obara,

    I am a 4th-year Computer Engineering (Software Engineering) student at the University of Buea. I have skills in JavaScript, CSS, HTML5, PHP, the Yii 2 framework, and Java.

    I have built web and desktop applications, as well as a demo game. While browsing the projects I got interested in "Game for First Discovery of Preferences" and would love to discuss it further with the devs. Any pointers on this would be appreciated. I would love to contribute to Fluid by means of GSoC 2016.


    Wepngong B.N

  19. @Giovanni Tirloni

    I am Abhisek Panda, an undergrad student at CET Bhubaneswar, India (http://www.cet.edu.in/).

    I am a Node.js developer, and I want to work with you on the below-mentioned project. How do I prove myself a worthy candidate for this project? I have prepared a sub-project, RPI-Webcam, related to your project idea (WebRTC Echo/Sound Test Application): https://github.com/Abi-Abhisek/RPI-Webcam.git

  20. Hello,

    My name is Blagoj Dimovski, and I am a second-year student at the Faculty of Computer Science and Engineering in Skopje, Macedonia. I've looked through the projects, and I found this one really interesting: Implement User Interface / Learner Options Responsive Design. I have some knowledge of and experience with designing user interfaces and user experience, from courses at my faculty but also from practical work. In the past few years, I've also worked on a few projects as a freelancer in the area of web design and development, and I've attended a few workshops. You can check some of my work in my small portfolio here: http://blagojdimovski.com/, as well as on freelancer.com: https://www.freelancer.com/u/BlagojD1.html.

    My project-related skills: HTML5, CSS3, JavaScript (jQuery, AngularJS, Mobile Angular UI), CSS preprocessors (Stylus included), and Adobe Photoshop. I also have some experience designing user interfaces and websites with a focus on user experience for both desktop and mobile devices, as I mentioned.

    I am really interested in working on and contributing to the project. I have some ideas for improving the project, and I will try to work on the bugs as my first step.



  21. Abhisek Panda and Blagoj please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.

  22. Hi Jonathan Hung!

    I am Alwin de Leon. I have already shared the link to my draft proposal with IDI. I hope you can give comments on it (smile)

    Thank you so much,

  23. Edmund Alwin de Leon please contact us in the fluid-work IRC channel. See the IRC Channel page for information on how to connect.