A broadly approachable services layer technology lets us reach out to a large number of Web developers, allowing them to get involved in the Engage community and technology using skills and languages they may already be familiar with. Similarly, the Fluid community itself needs to hit the ground running with Engage, building the services layer in technologies that are familiar and will get us coding with minimum ramp-up time.
Interoperability is a primary goal of Engage, ensuring that our technologies will work with a wide range of existing authoring tools, content management systems, and other tools commonly found in museums and on the Web. The Engage services layer needs to fit in well and leverage open standards wherever possible.
At the same time, our choice of server-side technology needs to be scalable and forward-looking, allowing museums to invest in Engage over the long run without fear that our services will bog down when confronted with large collections, or that they will quickly become obsolete.
Lastly, the Fluid community has built up, over the years, a set of techniques and philosophies for writing software: accessibility, the Open Web, functional programming, and markup agnosticism are common themes throughout our technologies. As a community, we embrace diversity and a wide range of styles; shared patterns and techniques in code help make our solutions more coherent and consistent to develop with.
Our aim was to find a language and accompanying framework that fit both the technological needs of our users and the culture of our development community. Throughout the process, we carefully examined both features and the wider context: the associated community, documentation, and support.
In-depth information about our approach to evaluating technologies, along with our observations, is available at the Fluid Engage Server-side Technology Notes page.
There is an incredible amount of diversity among museums in terms of how they each organize their collections and structure their data. Each museum's collection is different, and it is a daunting and risky task to attempt to force all types of collections into a single schema. Engage will need to embrace this diversity; rather than creating a "one size fits all" approach at the database level, we'll take a flexible approach to schemas in Engage.
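As an illustration of the schema flexibility we're after, records from two very different collections can sit side by side in the same document store without sharing a schema. This is only a sketch; the field names and ids below are invented for illustration, not a proposed Engage data model:

```javascript
// Hypothetical object records from two museums; field names and ids
// are invented for illustration and do not reflect any real schema.
var paintingRecord = {
    _id: "artmuseum-1984.12.3",
    title: "View of the Harbour",
    artist: "Unknown",
    medium: "Oil on canvas"
};

var specimenRecord = {
    _id: "naturalhistory-77-210",
    taxon: "Ursus arctos",
    collectedBy: "J. Smith",
    locality: { region: "Yukon", coordinates: [60.72, -135.05] }
};

// A document-oriented database stores both as-is: no shared table
// structure or up-front schema is required.
var collection = [paintingRecord, specimenRecord];
```

The painting record and the specimen record have no fields in common beyond an id, which is exactly the situation a single relational schema handles poorly.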
Given that we can't completely pre-bake the data model, a standard SQL-based relational database is inappropriate for Engage. A new crop of document-oriented databases has emerged that are well-suited to handling dynamic schemas. Foremost among these is CouchDB, a highly scalable and Web-friendly database written in Erlang.
Data in CouchDB is queried and saved using a RESTful API: fetching data involves a standard HTTP GET request, while adding or updating documents requires just a PUT or POST operation. This means that Couch can be used from any programming language without extra native database drivers. It's also an excellent fit for a RESTful service-oriented architecture. For museums that already have their own databases or content management systems, Couch can be replaced with a custom service that provides the same RESTful operations and feeds as provided by the Couch Views that will ship out of the box with Engage.
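The shape of these operations can be sketched as plain HTTP request descriptions. The host, port, and database name ("engage") below are assumptions for illustration, not Engage configuration:

```javascript
// Sketch of the RESTful operations CouchDB exposes, built as plain
// request descriptions. Host, port, and the "engage" database name
// are illustrative assumptions.
var couch = {
    base: "http://localhost:5984/engage",

    // Fetch a document by id: GET /db/docId
    get: function (id) {
        return { method: "GET",
                 url: this.base + "/" + encodeURIComponent(id) };
    },

    // Create or update a document with a known id: PUT /db/docId
    put: function (id, doc) {
        return { method: "PUT",
                 url: this.base + "/" + encodeURIComponent(id),
                 body: JSON.stringify(doc) };
    },

    // Create a document with a server-assigned id: POST /db
    post: function (doc) {
        return { method: "POST",
                 url: this.base,
                 body: JSON.stringify(doc) };
    }
};
```

Because every operation reduces to an ordinary HTTP request with a JSON body, any language with an HTTP client can talk to Couch, and a museum's custom service need only answer the same requests to slot into the same place in the architecture.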
Written in Erlang, Couch is extremely scalable and can meet the requirements of massive, complex collections and exhibit data.
Users have come to expect natural, Google-like text searches from most modern Web experiences. As a result, Engage will need to include a free text search engine service.
Hands down, the most viable open source technology for this is Apache's Lucene. Today, it's the fastest and most reliable open source text searching software available, and it has few competitors. Lucene is a JVM-based tool, and should be relatively easy to install in most server environments. Lucene can be easily connected to CouchDB, allowing new documents in Couch to seamlessly be added to the indexes.
We will continue to assess Lucene's associated Solr project, which provides a RESTful API and JSON-based wire protocol ideal for Engage's architecture.
Rhino's deployment within the JVM brings with it a number of advantages. We can leverage the Servlet API to provide a proven interface to Apache and other Web servers. Since threading in Java is well-established, and the Servlet API offers clear semantics for concurrency, the JVM offers an easy way to establish a container and framework that is scalable and concurrency-safe.
Where necessary, we can also leverage other libraries that are available in Java. However, to be very clear, we will limit our use of Java within the Engage services layer to only situations requiring extremely fast performance (such as converting data in bulk) or to clear, easy-to-port algorithms.
With a plan to build first on Rhino and the JVM, and to move to a next-generation runtime as soon as the required Web infrastructure is established for one of them, the obvious concern is portability. If we commit to Rhino now, will we be able to move to V8 or TraceMonkey later?