This document outlines the specific technologies we propose to use in the Fluid Engage services layer. More details about the overall architecture of Engage can be found on the Engage Architecture page.
The Engage services layer aims to weave together the various resources and sources of information required to create a compelling and interactive exhibit experience. We'll talk to content and collections management systems, image stores and asset systems, as well as social networking services on the Web such as YouTube, Wikipedia, and Twitter. Engage will share these resources back out across the system, making it easy to create great user interfaces for both the Web and mobile devices using these services.
For database persistence and searching, we'll use Apache's CouchDB and Lucene products. CouchDB provides us with a Web-friendly, document-oriented database solution that won't force strict schema requirements on our users and implementers.
Goals & Criteria
- Broadly approachable
- Hit the ground running
- Scalable and forward-looking
- Fits the approach of the Fluid community: functional, declarative, Web-oriented
A broadly approachable services layer technology lets us reach out to a large number of Web developers, allowing them to get involved in the Engage community and technology using skills and languages they may already be familiar with. Similarly, the current Fluid community itself needs to hit the ground running with Engage, building the services layer in technologies that are familiar and will get us coding with minimal ramp-up time.
Interoperability is a primary goal of Engage, ensuring that our technologies will work with a wide range of existing authoring tools, content management systems, and other tools and databases commonly found in museums and on the Web. The Engage services layer needs to fit in well, play nicely, and leverage open standards wherever possible.
At the same time, our choice of server-side technology needs to be scalable and forward-looking, allowing museums to invest in Engage over the long run without fear that our services will slow down impossibly when confronted with large collections. Engage services also need to be future-proof enough to not quickly become obsolete as the technology landscape evolves.
Lastly, the Fluid community has built up, over the years, a set of techniques and philosophies for writing software: accessibility, the Open Web, functional programming, and markup agnosticism are common themes throughout our technologies. As a community, we embrace diversity and a wide range of styles; shared patterns and techniques in code help make our solutions more coherent and consistent to develop with.
Some Background Information
We explored a variety of potential technologies for the Engage services layer. In each case, we considered both a programming language and an accompanying Web framework, recognizing that much of the advantage of a particular language comes from good tools we can reuse. Here's a list of the options we considered:
- Ruby + Merb
- Python + CherryPy
We explored each of these options through a series of conversations with the Engage development team and the wider Fluid community. A compilation of notes and ideas from these conversations has been maintained by Antranig Basman and Colin Clark at the Fluid Engage Server-side Technology page.
Some Background Thinking
- MVC and the proliferation of controllers
- Goal of ubiquity
- Infrastructure and being at the bleeding edge
- Portability and JSGI
- Next Generation Runtimes
- Rhino and the JVM
Our aim was to find a language and accompanying framework that fit both the technological needs of our users and the culture of our development community. Throughout the process, we carefully examined both features and the wider context: the associated community, documentation, and support.
In-depth information about our approach to evaluating technologies, along with our observations, is available at the Fluid Engage Server-side Technology Notes page.
CouchDB and Lucene
Data Feeds and CouchDB
There is an incredible amount of diversity among museums in terms of how they each organize their collections and structure their data. Each museum's collection is different, and it is a daunting and risky task to attempt to force all types of collections into a single schema. Engage's technology needs to embrace this diversity. Rather than creating a "one size fits all" approach at the database level, we'll treat schemas as flexible.
Given that we can't pre-bake the data model completely, a standard SQL-based relational database is inappropriate for Engage. A new crop of document-oriented databases has emerged that are well-suited to handling dynamic schemas. Foremost among these is CouchDB, a highly scalable and Web-friendly database written in Erlang.
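To make the contrast concrete, here is a sketch of two very different collection records living side by side in the same document database. The field names and values are purely illustrative, not a proposed Engage schema:

```javascript
// Two hypothetical collection records with no shared schema. A
// document-oriented database stores both as-is; a fixed relational
// schema would force one shape onto both.
var painting = {
    _id: "obj-1021",
    type: "painting",
    title: "The Jack Pine",
    artist: "Tom Thomson",
    dateCreated: 1917
};

var specimen = {
    _id: "obj-2284",
    type: "fossil",
    taxon: "Tyrannosaurus rex",
    excavationSite: "Drumheller, Alberta",
    periodMa: 66
};

// Both serialize to plain JSON documents, ready to PUT into the database.
var docs = [painting, specimen].map(function (doc) {
    return JSON.stringify(doc);
});
```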
Data in CouchDB is queried and saved using a RESTful API: fetches use a standard HTTP GET request, while additions and updates use simple PUT or POST operations. This means that Couch can be used with any programming language, without requiring extra native database drivers. It's also an excellent fit for Engage's RESTful service-oriented architecture. For museums that already have their own databases or CMS systems, Couch can be replaced with a custom service that provides the same operations and feeds as the Couch Views that will ship out of the box with Engage.
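Couch views themselves are defined as plain JavaScript map functions, which Couch runs over every document to build an index. The sketch below simulates that behaviour outside of Couch; the field names and the stand-in emit() are illustrative assumptions, not Engage's actual view definitions:

```javascript
// Stand-in for CouchDB's emit(): inside Couch, emitted key/value pairs
// become rows in the view's index. Here we just collect them.
var rows = [];
function emit(key, value) {
    rows.push({ key: key, value: value });
}

// A hypothetical view map function: Couch calls it once per document.
// Documents without an artist field simply contribute no rows.
function mapByArtist(doc) {
    if (doc.artist) {
        emit(doc.artist, { title: doc.title });
    }
}

// Simulating Couch indexing two documents with different shapes:
mapByArtist({ artist: "Emily Carr", title: "Big Raven" });
mapByArtist({ eventName: "Members' Night" }); // no artist: skipped
```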
Couch's Erlang foundation also makes it extremely scalable, able to meet the requirements of massive, complex collections and exhibit data.
Free Text Searching With Lucene
Users have come to expect natural, Google-like text searches from most modern Web experiences. As a result, Engage will need to include a free text search engine service.
Hands down, the best open source technology for this is Apache's Lucene. Today, it's the fastest and most reliable open source text searching software available, and it has few competitors. Lucene is a JVM-based tool, and should be relatively easy to install in most server environments. Lucene can be easily connected to CouchDB, allowing new documents in Couch to seamlessly be added to the indexes.
We will continue to assess Lucene's associated Solr project, which provides a RESTful API and JSON-based wire protocol ideal for Engage's architecture.
Rhino and the JVM
Rhino's deployment within the JVM brings with it a number of advantages. We can leverage the Servlet API to provide us with a proven interface to Apache and other Web servers. Since threading in Java is well-established, and the Servlet API offers clear semantics for concurrency, the JVM offers an easy way to establish a container and framework that is scalable and concurrency-safe.
Where necessary, we can also leverage other libraries that are available in Java. However, to be very clear, we will limit our use of Java within the Engage services layer to only situations requiring extremely fast performance (such as converting data in bulk) or to clear, easy-to-port algorithms.
Next Generation Runtimes
Portability and JSGI
With this plan to build first on Rhino and the JVM, and then move to a next-generation runtime as soon as one is ready, the obvious problem is portability. If we commit to Rhino now, will we be able to move to a V8- or TraceMonkey-based environment later?
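JSGI addresses this by defining an application as a plain function from a request object to a response object. The sketch below follows that shape; the request fields shown are a minimal assumption about what a container would pass, not a complete rendering of the spec:

```javascript
// A minimal JSGI-style application: a pure function of the request.
// Because it touches no Rhino- or V8-specific APIs, the same function
// can be hosted by any JSGI-compatible container.
function app(request) {
    return {
        status: 200,
        headers: { "Content-Type": "text/plain" },
        body: ["Hello from " + request.pathInfo]
    };
}

// Simulating a container invoking the app with a sketch of a request:
var response = app({ method: "GET", pathInfo: "/artifacts" });
```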
Env.js for Browser Compatibility
Tying it all Together: Kettle
The Engage services layer will weave together a handful of existing technologies to create a new server-side Web development framework called Kettle:
- Rhino + Servlets for the Web container
- JSGI for portability
- Env.js for standard library features and browser compatibility
- Infusion for application development
The only substantial new code we'll need to write is a URL routing scheme, most likely inspired by CherryPy's approach. With this, Kettle will offer a full-featured framework for server-side development, one that:
- is familiar and broadly approachable, both by our community and by the majority of Web developers
- allows us to reuse existing code and infrastructure such as Fluid Infusion, allowing us to get started fast
- enables us to deploy components and markup either on the client or the server, based on device or app requirements
- can scale to meet the needs of museums with large collections and many users
- is unlikely to become obsolete in the future
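To give a feel for the CherryPy-inspired routing mentioned above, here is one possible sketch: URL segments are walked as properties of a handler object until a function is found, with remaining segments passed as arguments. This is an assumption about Kettle's eventual design, not its actual API:

```javascript
// Hypothetical handler tree: /artifacts maps to artifacts.index,
// /artifacts/view/42 maps to artifacts.view("42").
var handlers = {
    artifacts: {
        index: function () { return "all artifacts"; },
        view: function (id) { return "artifact " + id; }
    }
};

// Walks the path segment by segment, CherryPy-style: descend through
// the handler tree until a function is reached, then treat any
// remaining segments as positional arguments.
function route(path) {
    var segments = path.split("/").filter(function (s) { return s; });
    var node = handlers;
    var args = [];
    for (var i = 0; i < segments.length; i++) {
        if (typeof node === "function") {
            args.push(segments[i]);
        } else if (node[segments[i]]) {
            node = node[segments[i]];
        } else {
            return "404";
        }
    }
    if (typeof node === "object" && node.index) {
        node = node.index;
    }
    return typeof node === "function" ? node.apply(null, args) : "404";
}
```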