Before that scares the hell out of anyone using Solr, the Solr bundle I am talking about is a small shim OSGi bundle that takes content from a Social Content Repository system called Sparse Map and indexes the content using Solr, either embedded or as a remote Solr cluster. The Solr used is a snapshot from the current 4.0 development branch of Solr. Right, now that's cleared up, I suspect 90% of the readers will leave the page to go and read something else?

So Solr 4 works just great. The applications using Sparse Map, like Sakai OAE, have a high update rate and are adding to the index continuously. The bundle queues updates and processes them via a single-threaded queue reader into the index, which is configured to accept soft commits and perform periodic flushes to disk. The Solr instance is unmodified from the standard Solr 4 snapshot and we have had no problems with it. Provided the cardinality of the fields that the application indexes is not insane, and the queries are achievable, there are no performance issues: queries come back in the sub-10ms range we have all become accustomed to from Solr. Obviously if you do stupid things you can make a query in Solr take seconds.
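As a sketch of that arrangement (class and method names here are hypothetical, not the actual bundle's API): producers enqueue content events, and a single reader thread drains them into the index in batches, issuing a soft commit per batch for visibility while leaving hard flushes to a periodic schedule.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of a queued, single-reader indexing pipeline.
// A List stands in for the Solr index so the sketch is self-contained.
class IndexQueue {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> indexed = new ArrayList<>();

    // Called by any thread handling a content update.
    void enqueue(String docId) {
        queue.add(docId);
    }

    // Called by the single queue-reader thread: drain whatever is
    // queued as one batch, then make it visible with a soft commit.
    int drainBatch() {
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
        for (String docId : batch) {
            indexed.add(docId); // solrServer.add(doc) in a real bundle
        }
        if (!batch.isEmpty()) {
            softCommit();
        }
        return batch.size();
    }

    // Soft commit: searcher visibility without an fsync; a separate
    // periodic hard commit flushes segments to disk.
    private void softCommit() { }

    List<String> indexedDocs() { return indexed; }
}
```

The single reader keeps index writes serialised without making the producers wait on the index itself.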

There are however some issues with the way the bundle works, certainly when deployed into production on a real cluster. No one would seriously run Sparse Map with this Solr bundle on a single app server node for anything other than development or testing, so the default embedded Solr configuration is a distraction. If you're not writing code with the intention of deploying into production, then why write the code? Life is too short, unless you're an academic on track to a Nobel prize. When deployed, the bundle connects to a remote Solr master for indexing, with one or more Solr slaves hanging off the master (polling, not being pushed to). There are several problems with this configuration. If the master goes down, no index updates can happen. This doesn't break the Solr bundle, since it queues and recovers from master failure with a local write-ahead transaction log or queue. It does break the indexed data on the master, since anything in memory on the master will be lost, and only those segments on disk will get propagated to the Solr slaves when the master recovers. This is a rock and a hard place: 1s commits with propagation cause unsustainable segment traffic with high segment merge activity, while infrequent commits will just lose data and destroy data propagation rates. The slaves, being read only, are expendable provided there are always enough to service the load. That sounds like the definition of a slave; I would not like to be one, but then I wouldn't know if I was.
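For reference, the master/slave wiring described above is just Solr's stock ReplicationHandler configuration; the host name below is hypothetical, and the poll interval is the kind of value that sets the tension between segment traffic and data loss:

```xml
<!-- solrconfig.xml on the master: publish index files after each commit -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
</requestHandler>

<!-- solrconfig.xml on each slave: poll the master (pull, not push) -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://solr-master:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

Shorten the commit/poll cycle and you multiply segment traffic; lengthen it and the in-memory window you can lose on a master crash grows.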

Solr, in this configuration, wasn't really designed for this type of load. If we were indexing new documents at the rate of one batch an hour then Solr in this configuration would be perfect. However, the updates can come through at thousands per second. So although it works fine, when it breaks it will leave the index in some unknown state. The problem is rooted in how the indexing is done and where the write-ahead log or queue is stored. It's fine for a single instance, since the write-ahead log is local to the embedded Solr instance, but no good for a cluster.

Other approaches

There are lots of ways to solve this problem. It was solved in Sakai 2 (CLE) search, which treated segments as immutable and sent them to a central location for distribution to each app server. Writers on each app server wrote to local indexes, and on commit the segment was pushed to a central location, from where it was pushed to all other app server nodes. The implementation was less than perfect and there were all sorts of timing issues, especially when it came to merging and optimising. That code was written in 2006 on a very old version of Lucene (1.9.1 IIRC). So old it didn't have commit, let alone soft commits, and it was only used for relatively slow rates of update supporting non-critical user functionality. It's in production at many Sakai 2 schools. Every now and again a segment gets corrupted and that corruption propagates slowly over the whole cluster with each merge and optimise. Eventually full index rebuilds are needed, which can be carried out in full production but are best done overnight when the levels of concurrency are lower.

At the time we had considered using the DB-based IndexReaders and IndexWriters from the Compass project. These were readers and writers that used a DB BLOB as the backing store. Lucene depends heavily on seek performance, and doing seeks over a network into a DB blob doesn't work. The IO required to retrieve sections of the segments to pull terms is so high that search speed is a bit low (British understatement, stiff upper lip and all that). After tests those drivers were rejected for the Sakai 2 work. It might have worked on an Oracle DB, where seeks within blobs are supported and you can do some local caching, but on MySQL it was a non-starter.

The next approach is that used by Jackrabbit. The Lucene index is embedded in the repo. Every repo node has a local index, with updates written directly to the local index and synchronised across the cluster. This works well on one app node, but suffers in a cluster, since every modification to the local index has to be serialised over the entire cluster. Depending on the implementation of that synchronisation it can make the whole cluster serialise on update. That's OK if the use case is mostly read, as it is with the Enterprise Content Management use case, but in a Social Content Repository the use case is much higher update. App servers can't wait in a queue to get a lock on the virtual cluster-wide index before making their insert, then insert a pointer into a list to tell all the others they're done.

Since 2006 the world has not stood still and there have been lots of people looking at this space. LinkedIn open-sourced Zoie and Bobo, which deliver batched updates into distributed indexes and then build faceted search from those indexes. Although these would work for a Social Content Repository, my feeling was that the quality of data service (the time it takes from a content item update to its presence in the index) was too high, and required lots of discipline in the coding of the application to ensure that data local to the user was published directly to the content system rather than discovered via the search index. The area of immediate impact of data for LinkedIn is well defined, the user's view of their own profile etc, so the QoDS can be higher than in a system where an update might have to instantly propagate to 100s of users. The types of use cases I was targeting with Sparse were more like Google+, where groups take a greater prominence. Except that in Education the group interaction is real time, which pushes the QoDS down into the second or sub-second range. So Zoie was ground breaking, but not enough. The work on this application, now Sakai OAE, started in 2008 when there was nothing else (visible) around. We started with Sling based on Jackrabbit and used its search capabilities, until we realised that a Social Content Repository has to support wide shallow hierarchies with high levels of concurrent update, whereas the Enterprise Content Management model is deep narrow hierarchies with lower levels of concurrent update. See this for detail.

Roll forward to 2010, when we pulled in Solr 4, which was just about to get the NRT patches applied. It looked, bar the small issue of cluster reliability, like an OK choice. Now we are up to date in 2012, the world of distributed search has moved on, and I want to solve the major blocker of reliability. I don't want to have to write a distributed index as I did for Sakai 2, partly because there are many others out there doing the same thing better than I have time to. I could use SolrCloud, although IIUC that deals with the cloud deployment of shards of Solr slaves rather than addressing the reliability of high-volume updates to those shards.

Terms, Documents or Segments

What to shard and replicate? The ability to shard will ensure scalability in the index, which turns the throughput model from a task compute farm into a parallel machine using the simplest of gather-scatter algorithms (my PhD and early research was numerical parallel solutions on early MPP hardware; we always looked down on gather-scatter since it never worked for highly interconnected and dynamic problem sets; sorry if that's offensive to MapReduce aficionados, but btw gather-scatter is the right algorithm here). The ability to replicate, many times, will ensure that we don't have to think about making hardware resilient. But what to shard and replicate? The Compass IndexReader and IndexWriter DB implementation proved that inverted indexes need high seek speeds to minimise the cost of scanning segments for terms. Putting latency between the workings of the inverted index and its storage was always going to slow an index down, and even if you made segments and terms local to processing, processing queries on partial documents (shards of terms) creates imbalance in the processing load of a parallel machine and dependence on the queries. The reason for less than perfect parallel speedup on numerical problems in 1990 was almost always imperfect load balance in the algorithm. Pausing the solution for a moment to wait for other machines to finish is a simple bottleneck. Even if sharding and replication of partial documents or terms balances over the cluster of search machines, the IO to perform anything but the simplest query is probably going to dominate.
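To make the gather-scatter shape concrete, here is a minimal sketch, with entirely hypothetical names, of querying document shards: because each shard holds whole documents, every shard can score a query independently, and the coordinator only has to merge the per-shard top-k lists. (A map of docId to score stands in for a real per-shard index.)

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Hypothetical gather-scatter over document shards.
class ScatterGather {

    record Hit(String docId, double score) {}

    // Scatter: each shard scores the query locally and returns its top-k.
    // The Map<docId, score> stands in for a shard's local inverted index.
    static List<Hit> topK(Map<String, Double> shard, int k) {
        return shard.entrySet().stream()
                .map(e -> new Hit(e.getKey(), e.getValue()))
                .sorted(Comparator.comparingDouble(Hit::score).reversed())
                .limit(k)
                .toList();
    }

    // Gather: merge the small per-shard lists into a global top-k.
    static List<Hit> merge(List<List<Hit>> perShard, int k) {
        List<Hit> all = new ArrayList<>();
        perShard.forEach(all::addAll);
        all.sort(Comparator.comparingDouble(Hit::score).reversed());
        return all.subList(0, Math.min(k, all.size()));
    }
}
```

The load-balance point above shows up here directly: the slowest shard gates the merge, which is tolerable when each shard does comparable whole-document work, and much worse when shards hold partial documents or terms and do wildly different amounts of IO per query.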

So I need an index implementation that shards and replicates documents. It's 2012 and a lot has happened. The author of Compass, Shay Banon (@kimchy), went on to write ElasticSearch with a bunch of other veterans. It looks stable and has considerable uptake, with drivers for most languages. It abandons the store-segments-centrally model of Compass and Sakai 2 and replicates the indexing operation, so that documents are sharded and replicated. Transporting a segment over the network after a merge operation, as Solr Master/Slave does, is time consuming, especially if you have everything in a single core and your merged segment set has become many GB in size. This looks like a prime contender for replacing the search capability, since it's simple to run, self-configuring and self-discovering, and ticks all the boxes as far as scaling, reliability and ease of use.

Another contender is Lucandra. Initially this was just Lucene on top of Cassandra. It implemented the IndexReader and IndexWriter inside Cassandra without segments, eliminating the need to manage segments but also losing most of the optimisations of memory-mapped data. Unlike the Compass IndexReader and IndexWriter, which wrote segments to DB blobs, the structure of the index is columns and rows inside Cassandra. Not dissimilar from the Sparse Map Cassandra driver that indexes by writing its own inverted index as it goes. There are some performance gains, since if you put the Lucandra classes into the Cassandra JVM the data is supposedly local; however, Cassandra data is replicated and sharded, so there is still significant IO between the nodes. The solution may benefit from Cassandra's ability to cache, but will still suffer from the same problem that all term-based or partial-document sharding suffers from: poor performance due to IO. When Lucandra became Solandra a year later, the authors reported the performance issues, but also reported a switch to sharding by document.

There will be more out there, but these examples show that for large distributed indexes a document-based shard-and-replicate approach is the right one to follow. (Hmm, isn't that what the 1998 paper from some upstarts titled "The Anatomy of a Large-Scale Hypertextual Web Search Engine" said?) The authors of Solandra admit that it still loses many of the optimisations of the segment, but rightly point out that if you're deploying infrastructure to manage millions of small independent indexes then file system storage becomes problematic, which is where the management of storage by Cassandra becomes an advantage. As of September 2011 I get the impression that ElasticSearch is more mature than Solandra, and although everyone itches these days to play with a new tool in production (like a column DB) and throw away the old and reliable file system, I am not convinced that I want to move just yet. Old and reliable is good; sexy and new always gets me into trouble.

I think I am going to deprecate the Solr bundle used for indexing content in Sparse Map and write a new bundle targeting ElasticSearch. It will be simpler, since I can use the write-ahead transaction log already inside ElasticSearch, and it's already real time (1s latency to commits, and faster than that for non-flushed indexes). I have also found references to it supporting bitmap bloom filter fields, which means I can now embed much larger scale ACL reader indexing within the index itself. A post to follow on that later. Watch this space.
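By way of illustration, the sharding, replication and refresh behaviour discussed above is all per-index configuration in ElasticSearch; something like the following settings when creating an index (index layout and values here are illustrative, not a recommendation):

```json
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 2,
      "refresh_interval": "1s"
    }
  }
}
```

Documents are hashed across the shards, each shard gets two replicas for resilience, and the 1s refresh gives the near-real-time visibility, with the per-shard transaction log covering anything not yet flushed.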
