Monitoring SparseMap

24 02 2012

Yesterday I started to add monitoring to SparseMap to show what impact client operations were having and to give users some feedback on why, and whether, certain actions were slow. There is now a service, StatsService, with an implementation, StatsServiceImpl, that collects counts and timings for storage layer calls and API calls. Storage layer calls are generally expensive: if SQL storage is used, each one means a network operation; if a column DB storage layer is used, it may mean a network operation, but it is certainly more expensive than not making the call at all. Storage calls also indicate a failure in the caching. The StatsServiceImpl can also be configured in OSGi to report the usage of individual Sparse sessions when the session is logged out, normally at the end of a request.

So the long term application stats should tell a user of SparseMap why certain areas are going slow and which storage operations need attention, and, if using SQL, give the DBA an instant indication of where the SQL needs to be tuned. The short term session stats should enable users of SparseMap to tune their usage to avoid thousands of storage layer calls and to eliminate unnecessary API calls. Currently output is to the log file; soon there will be some JMX beans and maybe some form of streamed output of stats.
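To give a feel for how the numbers get collected, here is a minimal sketch of timing an API call. The method names on StatsService below (apiCall and storageOp) are my own placeholders for illustration, not the real SparseMap interface:

// Sketch only: placeholder interface, not the actual SparseMap StatsService API.
public interface StatsService {
    void apiCall(String method, long durationMs);   // eg ContentManagerImpl.update
    void storageOp(String op, long durationMs);     // eg cn:select
}

public class TimedContentManager {
    private final StatsService stats;

    public TimedContentManager(StatsService stats) {
        this.stats = stats;
    }

    public void update(Object content) {
        long start = System.currentTimeMillis();
        try {
            // ... perform the real update, including any storage layer calls,
            // each of which would report itself via stats.storageOp(...)
        } finally {
            stats.apiCall("ContentManagerImpl.update",
                    System.currentTimeMillis() - start);
        }
    }
}

The log output is as follows.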

app Key Slow Queries, T, N, Q 
app Slow Queries, 55, 1, cn:index_update:update cn_css_w set created = ? where rid = ?
app Slow Queries, 121, 1, ac:select:select b from ac_css_b where rid = ?
app Slow Queries, 272, 3, cn:select:select b from cn_css_b where rid = ?
app Slow Queries, 212, 3, cn:update:update cn_css_b set b = ? where rid = ?
app, Key Counters, ncalls, login, loginFailed, logout, slowOp, op, api
app, Counters, 2000, 2000, 0, 2000, 0, 4005, 0
app Key Storage, T, N, OP
app Storage, 2529, 2057, cn:select
app Storage, 1241, 1023, cn:index_update
app Storage, 888, 1004, cn:update
app Storage, 11, 18, cn:insert
app Storage, 21, 32, ac:select
app Storage, 10, 6, au:select
app Storage, 4224, 4001, n/a:validate
app Key Api Call, T, N, M
app Api Call, 11699, 999, org.sakaiproject.nakamura.lite.content.ContentManagerImpl.update
app Api Call, 217, 995, org.sakaiproject.nakamura.lite.accesscontrol.AuthenticatorImpl.signContentToken
app Api Call, 11, 1, org.sakaiproject.nakamura.lite.content.ContentManagerImpl.getInputStream
app Api Call, 691, 6061, org.sakaiproject.nakamura.lite.accesscontrol.AccessControlManagerImpl.signContentToken
app Api Call, 122, 2, org.sakaiproject.nakamura.lite.content.ContentManagerImpl.writeBody
app Api Call, 1250, 5059, org.sakaiproject.nakamura.lite.content.ContentManagerImpl.get
Output is in CSV form so the logs can be processed later. The column headings have the following meanings:

T: total time in ms
N: number of times
M: method
OP: operation
Q: query

OP is parsed as columnFamily:operation, so cn:select means a select on the Content columnFamily.
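Since the format is plain CSV, post-processing is trivial. A minimal sketch that sums total time per operation from the storage lines (the line layout is as shown above; everything else is just scaffolding):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Sums total time per columnFamily:operation from "app Storage" log lines.
public class StorageStatsSummary {
    public static void main(String[] args) throws IOException {
        java.util.Map<String, Long> totals = new java.util.HashMap<String, Long>();
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        for (String line = in.readLine(); line != null; line = in.readLine()) {
            if (!line.startsWith("app Storage,")) {
                continue; // only the storage stats lines
            }
            String[] cols = line.split(",\\s*"); // app Storage, T, N, OP
            long t = Long.parseLong(cols[1]);
            String op = cols[3]; // columnFamily:operation, eg cn:select
            Long current = totals.get(op);
            totals.put(op, current == null ? t : current + t);
        }
        in.close();
        System.out.println(totals);
    }
}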

For the DBA the Slow Queries are the ones to watch.

In this instance, at some point during the lifetime of this JVM

select b from ac_css_b where rid = ?

took 121 ms to run, but it happened only once. (The laptop hung while performing a Time Machine backup.)

This feature will be in version 1.6.





SparseMap Content 1.5 released

19 02 2012

Sparse Map version 1.5 has been tagged (org.sakaiproject.nakamura.core-1.5) and released. Downloads of the source tree in Zip and TarGZ form are available from GitHub.

In this release 14 issues were addressed; the details are in the issue tracker. This release contains critical fixes to the 1.3 and 1.4 releases that made those releases unusable for all but the smallest deployments. As with previous releases, the SPI bundles for Derby and PostgreSQL have been released; however, the MySQL and Oracle SPI implementations have not been released in binary form due to the licenses on the JDBC drivers for those databases. There are tags in the source repositories for these releases. Again, I have also refrained from releasing the SPI implementations for Cassandra, HBase and MongoDB as I am not satisfied those implementations are sufficiently tested or complete.

If you find any issues, please mention them to me or, better still, add an issue to the issue tracker. Unless otherwise stated the license is Apache 2. Thanks to everyone who made this release possible.

Tag:  https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.core-1.5
Derby SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.derby-driver-10.6.2.1-1.5
PostgreSQL SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.postgres-driver-9.0-801-1.5
MySQL SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.mysql-driver-5.1.13-1.5
Oracle SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.oracle-driver-1.4-1.5
Issues Fixed: https://github.com/ieb/sparsemapcontent/issues?sort=created&direction=desc&state=closed&page=1&milestone=6

To use

<dependency>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.core</artifactId>
  <version>1.5</version>    
</dependency>

The jar is an OSGi bundle, complete with manifest and bundled services. To use it you will need to select an SPI implementation fragment bundle and deploy that with the core bundle. Normally this is done when the OSGi standalone application jar is constructed. In addition to the core SparseMap bundle you will now need one of the SPI implementation fragments.

Derby

<bundle>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.derby-driver</artifactId>
  <version>10.6.2.1-1.5</version>    
</bundle>

PostgreSQL

<bundle>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.postgres-driver</artifactId>
  <version>9.0-801-1.5</version>    
</bundle>

MySQL, Oracle

Please check out the source code at the appropriate tag and build locally.

 





Search ACLs Part 2: Simple is always best

14 02 2012

I can’t take any credit for what I say next; all of that goes to Mark Triggs, with whom I spent some time chatting last week. Simple is always best, and although a Bloom filter might be the CS way of solving membership resolution, it’s not the way to do it in an inverted index where we are really looking for the intersection between two sets. We already have the list of principals that can read each document, associated with that document as a multivalued keyword. This list is created by analysing the fully resolved ACL for each content item at the point of indexing. In general, content items don’t have crazy numbers of ACLs, so although the number of unique sets of read principals may be high, the number of read principals is not, and in that form the cardinality of the index is not overloaded.

The problem we encounter is that the simple query causes many lookup operations inside Solr, so as the number of principals a user has rises, the time taken for each of their queries also rises, into seconds. Not exactly much hope of web scale with that, and eventually the number of terms is too great for the Solr query parser, although by then I would expect the query to be taking minutes.

The solution, like all good things, is simple. Encode the read principals as a parseable token in the query (eg comma separated) and then use a custom query component that compiles and caches a bitmap for the read principals, which are sorted. That part of the operation can be slow: for 1M documents, 100 readers per document and 10,000 possible readers, if a user has 1000 principals Mark reports it takes on the order of 250ms to build the bitmap the first time, from about 10 segments. Segments are immutable, so as the current segment changes only parts of the bitmap need to be rebuilt. With 2M documents and 2000 principals per user it takes about 600ms to build the bitmap; again, each segment takes on the order of 10ms, dependent on the size of the segment.

Once the bitmap is built and cached, subsequent queries are dominated by the other terms in the query. This looks like a solution that can be retrofitted to the existing index, and it should not require excessive cache space or additional load, provided we cache with a key of segment/sorted-list-of-principals and are able to invalidate segment bitmaps as the segments get updated. In our case there are two classes of principal list: the anon user, and logged-in users. If we include the userID of the logged-in user then we will probably get one cached bitmap per user; if we look for patterns in the lists of principals we can share the bitmaps more widely.
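In outline, the caching looks something like the sketch below. This is only my reading of the idea, not Mark’s code (linked at the end of this post); the Segment interface is a stand-in for the real Lucene machinery:

import java.util.BitSet;
import java.util.Map;
import java.util.SortedSet;
import java.util.concurrent.ConcurrentHashMap;

// Caches one readers bitmap per (segment, sorted principal list).
// Segments are immutable, so an entry stays valid until its segment is replaced.
public class ReaderBitmapCache {

    // Stand-in for a Lucene segment: exposes its doc count and, per doc,
    // which principals were indexed as readers of that doc.
    public interface Segment {
        String name();                 // unique; changes when the segment is rewritten
        int maxDoc();
        SortedSet<String> readers(int doc);
    }

    private final Map<String, BitSet> cache = new ConcurrentHashMap<String, BitSet>();

    public BitSet readable(Segment segment, SortedSet<String> principals) {
        String key = segment.name() + "/" + principals.toString();
        BitSet bits = cache.get(key);
        if (bits == null) {
            bits = new BitSet(segment.maxDoc());
            for (int doc = 0; doc < segment.maxDoc(); doc++) {
                // set the bit if the user's principals intersect the doc's readers
                SortedSet<String> readers = segment.readers(doc);
                for (String p : principals) {
                    if (readers.contains(p)) {
                        bits.set(doc);
                        break;
                    }
                }
            }
            cache.put(key, bits); // old segment entries age out as names stop appearing
        }
        return bits;
    }
}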

Mark can be found at http://dishevelled.net/, and the code at http://dishevelled.net/MarkQueryComponent.txt





Access Control Lists in Solr/Lucene

8 02 2012

This isn’t so much about access control lists in Solr or Lucene as about access control lists in an inverted index in general. The problem is as follows. We have a large set of data that is access controlled. The access control is managed by users, and they can make individual items closed, open, or anywhere in between. The access control list on the content, which may be files or simply bundles of metadata, takes the form of a pair of bitmaps representing the permissions granted and denied; each pair of bitmaps is associated with a principal, and the set of principal/bitmap pairs is associated with each content item. A side complication is that the content is organised hierarchically, and permissions for any one user inherit following the hierarchy back to the root of the tree. Users have many principals: through membership of groups, through directly granted static principals, and through dynamically acquired principals. All of this is implemented outside Solr, in a content system. It is Solr’s task to index the content in such a way that a query for an item is efficient and returns a dense result set, from which the one or two content items the user can’t read can be filtered out before the user sees the list. Ie, we can tolerate a few items the user can’t read being found by a Solr query, but we can’t tolerate most of them being unreadable. In the ACL bitmaps we are only interested in the read permission.

The approach I have taken to date was to look at each content item or set of metadata when it’s updated, calculate the set of all principals that can read the item, and add those principals as a multivalued keyword property of the Solr document. A query performed by a user computes the principals that the user has at the time they are making the query, and builds a Solr query that gets any document matching the query and with a reading principal in that set. Where the use of principals is moderate, this works well and overloads neither the cardinality of the inverted index where the reader principals are stored nor the size of the Solr query. In these cases the query can be serviced as any other low cardinality query would be, by looking up and accumulating the bitmap representing the full set of documents for each reader principal in turn. The query then requires n lookup-and-accumulate operations, where n is the number of principals the user has, to resolve the permissions part of the query.
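Built with SolrJ, the query side of that approach is a single filter query over the multivalued field. A sketch; the field name "readers" is my label for illustration, not anything fixed:

import org.apache.solr.client.solrj.SolrQuery;

// Builds the readers clause: match any document whose readers field
// contains at least one of the user's principals.
public class ReaderQueryBuilder {
    public static SolrQuery build(String userQuery, Iterable<String> principals) {
        StringBuilder fq = new StringBuilder("readers:(");
        String sep = "";
        for (String p : principals) {
            fq.append(sep).append('"').append(p).append('"');
            sep = " OR ";
        }
        fq.append(')');
        SolrQuery q = new SolrQuery(userQuery);
        q.addFilterQuery(fq.toString()); // one term lookup per principal
        return q;
    }
}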

However, and this is the reason for this post, where this fails is where the cardinality of the reader principals becomes too high, or the number of principals that a user has is too high. Unfortunately those two metrics are connected: the more principals there are in a system, the more a user will need to access information, and so the reader principal mechanism can begin to break down. The alternative is just as unpleasant, where the user only has a single principal, their own. In that scenario active management of ACLs in the content system becomes unscalable in both compute and human terms, which is why principals representing groups were introduced in the first place. Even if there were no limits to the size of a Solr query, the cost of processing 1024 terms is prohibitive for anything other than offline processing.

[Image via Wikipedia: example of a Bloom filter]

One solution that has been suggested is to use a Bloom filter to represent the set of principals that the user has, and to test each indexed principal against this filter. If this is done as part of the query, as the result set is being created, there is no gain over scanning all documents, since the inverted index would not be used. There could be some benefit in using this approach once a potential set of documents has been generated and sorted, since the cost of performing sufficient hashes to fill the appropriate set of Bloom buckets is low enough that it could be used as a post-query filter. I think the term in Solr is a Collector. In this scenario we are already verifying the user can read a content item or its metadata before delivering it to the user, hence it’s acceptable to have a less than perfect set of pointers emitted from Solr, provided that the set of pointers we retrieve is dense. We can’t afford a situation where the set of pointers is sparse, say several million items, and the only item the user can read is the last one. In that scenario any permissions checking performed without the benefit of Solr would be super expensive.

So, the Bloom filter applied within Solr has the potential to filter most result sets rapidly enough to create a result set that is dense enough for more thorough checking. How dense does it need to be, and how large does the Bloom filter need to be? That is an open-ended question; however if, on average, you had to read 20% more elements than you returned, that might not be excessive if result sets were generally limited to no more than the first 100 items. If that’s the case then 80% density is enough. Bloom provides a guarantee of no false negatives but a probability of a percentage of false positives, ie items that a Bloom filter indicates are present in a set but which are not. For classical Bloom filters 20% is an extremely high probability of false positives, certainly not acceptable for most applications. It has been reported in a number of places that the quality of the hash function used to fill the buckets of the filter also matters for the number of false positives, since an uneven distribution of hashes over the number space represented by the Bloom bitmap will result in inefficient usage of that bitmap.

Rather than doing the math, which you will find on Wikipedia, knowing that all fast hashes are less than perfect, and being a pragmatist, I did an experiment. Apache Hadoop has several implementations of the Bloom filter with two evenly distributed and efficient hash functions, Jenkins and Murmur2, so I used that implementation. What I am interested in is how big a filter I would need to get 80% density in a set of results, and how big that bitmap needs to be as the number of inputs to the Bloom filter (the user’s principals) rises. It turns out that very small bitmap lengths will give sufficient density where the number of input terms is small, even if the number of tested reader principals is high. So 32 bytes of Bloom filter is plenty large enough to test with < 20 principals. Unfortunately, however, the cardinality of these bitmaps is too high to be a keyword in an inverted index. For example, if the system contained 256K principals, and we expected users on average to have no more than 64 principals, we would need a Bloom filter of no more than 256 bits to generate, on average, 80% density. Since we are not attempting to index that Bloom filter, its cardinality of 2^256 is not an issue; had we tried to index it, we would almost certainly have generated an unusable inverted index. Also, since the Bloom filter is constructed for each user’s query, we can dynamically scale it to suit the conditions at the time of the query (the number of items in the system, and the number of principals the user has).

Real systems have more principals, and sometimes users with more principals. A system with 1M principals that has on average 1024 principals per user will need a Bloom filter of about 8Kbits. Certainly adding an 8Kbit token (or a 1KB byte[]) as a single parameter to a Solr query circumvents our earlier issue with the number of terms, but it’s absolutely clear that a cardinality of 2^8192 is going to be well beyond indexing, which means that the only way this will work is to post-filter a potentially sparse set of results. That does avoid rebuilding the index.
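The experiment is easy to reproduce with the Hadoop implementation. A minimal sketch; the sizes match the 8Kbit case above, and everything else (names, population sizes) is my own scaffolding:

import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;
import org.apache.hadoop.util.hash.Hash;

// Fills a Bloom filter with a user's principals, then tests a population of
// other principals against it to measure the false positive rate, which
// drives the density of the filtered result set.
public class BloomDensityExperiment {
    public static void main(String[] args) {
        int vectorSize = 8192;      // bits in the filter (the 8Kbit case above)
        int nbHash = 20;            // hashes per key
        int userPrincipals = 1024;  // principals the user actually has
        int otherPrincipals = 100000;

        BloomFilter filter = new BloomFilter(vectorSize, nbHash, Hash.MURMUR_HASH);
        for (int i = 0; i < userPrincipals; i++) {
            filter.add(new Key(("user-principal-" + i).getBytes()));
        }

        // principals the user does NOT have; any hit is a false positive
        int falsePositives = 0;
        for (int i = 0; i < otherPrincipals; i++) {
            if (filter.membershipTest(new Key(("other-principal-" + i).getBytes()))) {
                falsePositives++;
            }
        }
        System.out.println("false positive rate = "
                + ((double) falsePositives / otherPrincipals));
    }
}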

From this small experiment I have some questions unanswered:

  • Will converting a potentially sparse set of results be quick enough, or will it just expose another DoS vector?
  • What will be the practical cost of performing 100 (principals) x 20 (I was using 20 hashes) hash operations into an 8Kbit filter to filter each returned doc item?
  • Will processing queries this way present a DoS vector?




Deprecate Solr Bundle

2 02 2012

Before that scares the hell out of anyone using Solr: the Solr bundle I am talking about is a small shim OSGi bundle that takes content from a Social Content Repository system called Sparse Map and indexes the content using Solr, either embedded or as a remote Solr cluster. The Solr used is a snapshot from the current 4.0 development branch. Right, now that’s cleared up I suspect 90% of the readers will leave the page to go and read something else.

So Solr 4 works just great. The applications using Sparse Map, like Sakai OAE, have a high update rate and are adding to the index continuously. The bundle queues updates and processes them via a single threaded queue reader into the index, which is configured to accept soft commits and perform periodic flushes to disk. The Solr instance is unmodified from the standard Solr 4 snapshot and we have had no problems with it. Provided the cardinality of the fields the application indexes is not insane, and the queries are also achievable, there are no performance issues: queries come back in the sub-10ms we have all become accustomed to from Solr. Obviously if you do stupid things you can make a query in Solr take seconds.
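For reference, the soft commit and periodic flush behaviour is solrconfig.xml territory, roughly along these lines; the times here are illustrative, not what we actually ship:

<!-- hard commit: flush to disk periodically, without opening a new searcher -->
<autoCommit>
  <maxTime>300000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- soft commit: make updates visible to searches quickly -->
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>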

There are, however, some issues with the way the bundle works, certainly when deployed into production in a real cluster. No one would seriously run Sparse Map with this Solr bundle on a single app server node for anything other than development or testing, so the default embedded Solr configuration is a distraction. If you’re not writing code with the intention of deploying it into production, then why write the code? Life is too short, unless you’re an academic on track to a Nobel prize. When deployed, the bundle connects to a remote Solr master for indexing, with one or more Solr slaves hanging off the master (polling, not being pushed to). There are several problems with this configuration. If the master goes down, no index updates can happen. This doesn’t break the Solr bundle, since it queues and recovers from master failure with a local write ahead transaction log, or queue. It does break the indexed data on the master, since anything in memory on the master will be lost, and only those segments on disk will get propagated to the Solr slaves when the master recovers. This is a rock and a hard place: 1s commits with propagation cause unsustainable segment traffic with high segment merge activity, while infrequent commits will just lose data and destroy data propagation rates. The slaves, being read only, are expendable provided there are always enough to service the load. That sounds like the definition of a slave. I would not like to be one, but then I wouldn’t know if I was.

Solr, in this configuration, wasn’t really designed for this type of load. If we were indexing new documents at the rate of one batch an hour then Solr in this configuration would be perfect. However, the updates can come through at thousands per second. So it works, and it’s fine, until it breaks, at which point it will leave the index in some unknown state. The problem is rooted in how the indexing is done and where the write ahead log, or queue, is stored. It’s fine for a single instance, since the write ahead log is local to the embedded Solr instance, but no good for a cluster.

Other approaches

There are lots of ways to solve this problem. It was solved in Sakai 2 (CLE) search, which treated segments as immutable and sent them to a central location for distribution to each app server. Writers on each app server wrote to local indexes, and on commit the segment was pushed to a central location from which it was pushed to all other app server nodes. The implementation was less than perfect and there were all sorts of timing issues, especially when it came to merging and optimising. That code was written in 2006 on a very old version of Lucene (1.9.1 IIRC), so old it didn’t have commit, let alone soft commits, and it was only used for relatively slow rates of update supporting non-critical user functionality. It’s in production at many Sakai 2 schools. Every now and again a segment gets corrupted, and that corruption propagates slowly over the whole cluster with each merge and optimise. Eventually full index rebuilds are needed, which can be carried out in full production but are best done overnight when the levels of concurrency are lower.

At the time we had considered using the DB based IndexReaders and IndexWriters from the Compass project. These were readers and writers that used a DB BLOB as the backing store. Lucene depends heavily on seek performance, and doing seeks over a network into a DB blob doesn’t work: the IO required to retrieve sections of the segments to pull terms is so high that search speed is a bit low (British understatement, stiff upper lip and all that). After tests, those drivers were rejected for the Sakai 2 work. It might have worked on an Oracle DB, where seeks within blobs are supported and you can do some local caching, but on MySQL it was a non starter.

The next approach is that used by Jackrabbit. The Lucene index is embedded in the repo: every repo has a local index, with updates written directly to that local index and synchronised across the cluster. It works well on one app node but suffers in a cluster, since every modification to the local index has to be serialised over the entire cluster. Depending on the implementation of that synchronisation, it can make the whole cluster serialise on update. That’s OK if the use case is mostly read, as it is with the Enterprise Content Management use case, but a Social Content Repository has a much higher rate of update. App servers can’t wait in a queue to get a lock on the virtual cluster-wide index before making their insert and inserting a pointer into a list to tell all the others they’re done.

Since 2006 the world has not stood still, and there have been lots of people looking at this space. LinkedIn open-sourced Zoie and Bobo, which deliver batched updates into distributed indexes and then build faceted search from those indexes. Although these would work for a Social Content Repository, my feeling was that the quality of data service (the time it takes from a content item update to its presence in the index) was too high, and that it required a lot of discipline in the coding of the application to ensure that data local to the user was published directly to the content system rather than discovered via the search index. The area of immediate impact of data for LinkedIn is well defined, the user’s view of their own profile etc, so the QoDS can be higher than where an update might have to instantly propagate to hundreds of users. The types of use cases I was targeting with Sparse were more like Google+, where groups take a greater prominence. In Education especially, the group interaction is real time, which pushed the QoDS down into the second or sub-second range. So Zoie was ground breaking, but not enough. The work on this application, now Sakai OAE, started in 2008 when there was nothing else (visible) around. We started with Sling, based on Jackrabbit, and used its search capabilities, until we realised that a Social Content Repository has to support wide, shallow hierarchies with high levels of concurrent update, whereas the Enterprise Content Management model is deep, narrow hierarchies with lower levels of concurrent update. See this for detail.

Roll forward to 2010, when we pulled in Solr 4, which was just about to get the NRT patches applied. It looked, bar the small issue of cluster reliability, like an OK choice. Now we’re up to date in 2012, the world of distributed search has moved on, and I want to solve the major blocker of reliability. I don’t want to have to write a distributed index as I did for Sakai 2, partly because there are many others out there doing the same thing better than I have time to. I could use SolrCloud, although IIUC that deals with the cloud deployment of shards of Solr slaves rather than addressing the reliability of high volume updates to those shards.

Terms, Documents or Segments

What to shard and replicate? The ability to shard will ensure scalability in the index, turning the throughput model from a task compute farm into a parallel machine using the simplest of gather scatter algorithms (my PhD and early research was numerical parallel solutions on early MPP hardware; we always looked down on gather scatter since it never worked for highly interconnected and dynamic problem sets, sorry if that’s offensive to MapReduce aficionados, but gather scatter is the right algorithm here). The ability to replicate, many times, will ensure that we don’t have to think about making hardware resilient. But what to shard and replicate? The Compass IndexReader and IndexWriter DB implementation proved that inverted indexes need high seek speeds to minimise the cost of scanning segments for terms. Putting latency between the workings of the inverted index and its storage was always going to slow an index down, and even if you made segments and terms local to processing, processing queries on partial documents (shards of terms) creates imbalance in the processing load of a parallel machine and dependence on the queries. The reason for less than perfect parallel speedup on numerical problems in 1990 was almost always imperfect load balance in the algorithm; pausing the solution for a moment to wait for other machines to finish is a simple bottleneck. Even if sharding and replication of partial documents or terms balances over the cluster of search machines, the IO to perform anything but the simplest query is probably going to dominate.

So I need an index implementation that shards and replicates documents. It’s 2012 and a lot has happened. The author of Compass, Shay Banon (@kimchy), went on to write ElasticSearch with a bunch of other veterans. It looks stable and has considerable uptake, with drivers for most languages. It abandons the store-segments-centrally model of Compass and Sakai 2 and replicates the indexing operation, so that documents are sharded and replicated. Transporting a segment over the network after a merge operation, as Solr Master/Slave does, is time consuming, especially if you have everything in a single core and your merged segment set has become many GB in size. This looks like a prime contender for replacing the search capability, since it’s simple to run, self configuring and discovering, and ticks all the boxes as far as scaling, reliability and ease of use.

Another contender is Lucandra. Initially this was just Lucene on top of Cassandra. It implemented the IndexReader and IndexWriter inside Cassandra without segments, eliminating the need to manage segments but also losing most of the optimisations of memory mapped data. Unlike the Compass IndexReader and IndexWriter, which wrote segments to DB blobs, the structure of the index is columns and rows inside Cassandra; not dissimilar to the Sparse Map Cassandra driver, which indexes by writing its own inverted index as it goes. There are some performance gains since, if you put the Lucandra classes into the Cassandra JVM, the data is supposedly local. However, Cassandra data is replicated and sharded, so there is still significant IO between the nodes, and although the solution may benefit from Cassandra’s ability to cache, it will still suffer from the problem that all term based or partial document sharding suffers from: poor performance due to IO. When Lucandra became Solandra a year later, the authors reported the performance issues, but also reported a switch to sharding by document.

There will be more out there, but these examples show that sharding and replicating by document, the approach converged on by those implementing large distributed indexes, is the right one to follow. (Hmm, isn’t that what the 1998 paper from some upstarts titled “The Anatomy of a Large-Scale Hypertextual Web Search Engine” said?) The authors of Solandra admit that it still loses many of the optimisations of the segment, but rightly point out that if you’re deploying infrastructure to manage millions of small independent indexes then file system storage becomes problematic, which is where the management of storage by Cassandra becomes an advantage. As of September 2011 I get the impression that ElasticSearch is more mature than Solandra, and although everyone itches these days to play with a new tool in production (like a column DB) and throw away the old and reliable file system, I am not convinced that I want to move just yet. Old and reliable is good; sexy and new always gets me into trouble.

I think I am going to deprecate the Solr bundle used for indexing content in Sparse Map and write a new bundle targeting ElasticSearch. It will be simpler, since I can use the write ahead transaction log already inside ElasticSearch, and it’s already real time (1s latency to commits, and faster than that for non flushed indexes). I have also found references to it supporting bitmap bloom filter fields, which means I can now embed much larger scale ACL reader indexing within the index itself. A post to follow on that later. Watch this space.






Rogue Gadgets

5 01 2012

I have long thought one of the problems with OpenSocial is its openness to enable any Gadget based app anywhere. Even if there is a technical solution to the problem of a rogue app in the browser sandbox afforded by the iframe, that simply defers the issue. Sure, the Gadget code that is the app can’t escape the iframe sandbox and interfere with the in-browser container or other iframe hosted apps from the same source. Unfortunately, wonderful technical solutions are of little interest to a user whose experience is impacted by the safe but rogue app. The app may be technically well behaved, but downright offensive and inappropriate on many other levels, and this is an area which has given many institutions food for thought when considering gadget based platforms like Google Apps for Education. A survey of the gadgets that a user could deploy via an open gadget rendering endpoint reveals that many violate the internal policies of most organizations. Racial and sexual equality are often compromised, even basic decency. It’s the openness of the gadget renderer that causes the problem: in many cases, when deployed, it will render anything it’s given. It’s not hard to find gadgets providing porn in gmodules, the gadget source behind iGoogle, not exactly what an institution would want to endorse on its staff/student home pages.

For too long there has been an assumption that it’s the responsibility of the user to self police. That’s fine where the environment is offered by an organisation that can claim to be “only the messenger”, but when an environment is offered by an organization that is more than a messenger, self policing doesn’t hold water. The weakness of the OpenSocial gadget environment is its openness: it’s hard, if not impossible, to control what gadgets are available, and the onus is put on the container to control what is loaded.

Trusting Mobile Apps

There is a parallel to this problem in the mobile device industry, seen in the difference between Android and iOS. Android is open: the environment allows developers to do almost anything they like and have full access to all features of the phone. The Android Market, with over 400K apps on it, is often reported as being “wild west”, to quote: “…Unlike Apple’s strict approval policy, the Android Market is seen a little like the Wild West of the mobile, with many applications getting through which would never make the cut on iOS…. “. That leaves the user with plenty of choice but exposed to a lot of risk. It’s spawning an industry of FUD, based on real fears and dangers, generating a new revenue stream for those that profited from virus and malware explosions on PCs. This time it’s a mobile device where the user may have placed far more trust in the device than they know (money, bank details, authentication, liability), and has far less ability to do anything about it (there I go, adding to the FUD).

Don’t get me wrong: as a developer I don’t like the iOS approval process, but I think it’s a necessary evil to ensure that those providing the marketplace or store know that what they are pushing onto the unsuspecting public won’t do harm. Firstly, the iOS platform protects the device from the rogue developer. Secondly, the approval process ensures that the app conforms to the guidelines, not eating the battery or using up all of the user’s monthly bandwidth allowance in a day. Thirdly, although not always the case, the approval process ensures that the soft factors of the app are acceptable. I haven’t tried, but I suspect an app that worked as a terrorist bomb trigger, and gave step by step instructions how to use it, would not pass the soft factors inspection. Consequently users of the iOS platform feel that they can trust the apps they are being sold. There is no aftermarket industry in end-user protection, because there is no business case to support it.

In the Gadget environment, it’s the gadget renderer that is the equivalent of the store. By rendering a gadget, the renderer is not just a “messenger” not to be blamed; it’s saying something about what it’s rendering. If the gadget renderer doesn’t do that, then I have to argue that you should not trust the gadgets it renders. It could be pushing anything at you; you might trust it, but if it doesn’t trust what it’s sending you, how can you trust what it sends? Would you accept a package from a person in a uniform before boarding a plane, just because the uniform had a badge with the word “security” on it? No, neither would I. If they had a gun and ID, I would still ask them why I should trust them enough to carry it.

OCLC WorldShare

There are some OpenSocial gadget renderers that care about their reputation. Most libraries are considered to be trusted sources of information, and OCLC, with a membership of 72,000 libraries, museums and archives in 170 countries, has a reputation that it and its membership care about. OCLC recently launched WorldShare, an OpenSocial based platform that uses Apache Shindig to render gadgets and provide those gadgets with access to a wealth of additional information feeds. It does not provide the container in which to mount the gadgets, but it provides a trusted and respected source of rendered gadgets. This turns the OpenSocial model on its head: a not for profit organisation delivering access to vast stores of information via OpenSocial and the gadget feeds. Suddenly the rendered gadget feed is the only thing that matters. The container could be provided by OCLC, but equally by members. OCLC has wisely decided to certify any gadget that it is prepared to serve. Like the iOS certification and approval process, WorldShare’s certification is based on technical and soft criteria. That process will hopefully ensure quality, add value and protect its users from the wild west. Just as we trust our libraries to truthfully hold and classify knowledge, I hope that WorldShare’s realisation that the vendor has a responsibility will give us all the confidence to continue to trust OCLC as a source.





SparseMap Content version 1.4 released

14 12 2011

Sparse Map version 1.4 has been tagged (org.sakaiproject.nakamura.core-1.4) and released. Downloads of the source tree in Zip and TarGZ form are available from GitHub.

In this release 6 issues were addressed; the details are in the issue tracker. The main difference you will notice in this release is the size of the core: the jar has shrunk from over 2MB to just over 200KB. This is due to the introduction of a Service Provider Interface for the raw storage layer. Implementations of the Service Provider Interface have been released separately for Derby, MySQL and PostgreSQL. Due to the licensing surrounding the Oracle JDBC driver I have not released a binary of the Oracle SPI implementation; however, there is a tagged release in the source repository. I have also refrained from releasing the SPI implementations for Cassandra, HBase and MongoDB as I am not satisfied those implementations are sufficiently tested or complete.

If you find any issues, please mention them to me or, better still, add an issue to the issue tracker. Unless otherwise stated the license is Apache 2. Thanks to everyone who made this release possible.

Tag:  https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.core-1.4
Derby SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.derby-driver-10.6.2.1-1.4
PostgreSQL SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.postgres-driver-9.0-801-1.4
MySQL SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.mysql-driver-5.1.13-1.4
Oracle SPI Tag: https://github.com/ieb/sparsemapcontent/tree/org.sakaiproject.nakamura.oracle-driver-1.4-1.4
Issues Fixed: https://github.com/ieb/sparsemapcontent/issues?sort=created&direction=desc&state=closed&page=1&milestone=5

To use

<dependency>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.core</artifactId>
  <version>1.4</version>    
</dependency>

The jar is an OSGi bundle, complete with manifest and bundled services. To use it you will need to select an SPI implementation fragment bundle and deploy that with the core bundle. Normally this is done when the OSGi standalone application jar is constructed. In addition to the core SparseMap bundle you will now need one of the SPI implementation fragments.

Derby

<bundle>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.derby-driver</artifactId>
  <version>10.6.2.1-1.4</version>    
</bundle>

PostgreSQL

<bundle>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.postgres-driver</artifactId>
  <version>9.0-801-1.4</version>    
</bundle>

MySQL

<bundle>
  <groupId>org.sakaiproject.nakamura</groupId>
  <artifactId>org.sakaiproject.nakamura.mysql-driver</artifactId>
  <version>5.1.13-1.4</version>    
</bundle>




OSGi and SPI

13 12 2011

OSGi provides a nice simple model to build components in, and the classloader policies enable reasonably sophisticated isolation between packages and versions, making it possible to run multiple versions of an API, and implementations of those APIs, within a single container. Where OSGi starts to come unstuck is with SPIs, or Service Provider Interfaces. It’s not so much the SPI that’s a problem, rather the implementation. SPIs normally allow a deployer to replace the internal implementation of some feature of a service. In Shindig there is an SPI for the various social services that allows deployers to take Shindig’s implementation of OpenSocial and graft it onto their existing social graph. In other places the SPI might cover a lower level concept, something as simple as storage. In almost all cases the SPI implementation needs some sort of access to the internals of the service that it is supporting, and that’s where the problem starts. In most of the models I have seen, OSGi bundles export packages that represent the APIs they provide. Those APIs provide a communications conduit to the internal implementation of the services that the API describes, without exposing that implementation. That allows the developer of the API to stabilise the API whilst allowing the implementation to evolve. The OSGi classloader policy gives that developer some certainty that well-behaved clients (ie the ones that don’t circumvent the OSGi classloader policies) won’t be binding to the internals of the implementation.

SPIs, by contrast, are part of the internal implementation. Exposing an SPI as an export from a bundle is one approach; however, it would allow any client to bind to the internal workings of the service implementation as if it were an API, and that would probably be a mistake. Normal, well-behaved clients could easily become clients of the SPI. That places additional, unwanted burdens on the SPI interface, as it can no longer be fully trusted by the consumer of the SPI or its implementation.

A workable solution appears to be to use OSGi Fragment bundles that bind to a Fragment Host: the service implementation bundle containing the SPI to be implemented. Fragment bundles differ from normal bundles in nature. It’s probably best to think of a Fragment as a jar that gets added to the classpath of the bundle identified as its Fragment Host on activation, so that the Fragment bundle’s contents become available to the Fragment Host’s classloader. Naturally there are some rules that need to be observed.

Unlike an OSGi bundle, a Fragment bundle can’t make any changes to the imports and exports of the Fragment Host classloader. In fact, if the manifest of the Fragment contains any Import-Package or Export-Package statements, the Fragment will not be bound to the Fragment Host. The Fragment can’t perform activation, and it can’t provide classes in a package that already exists in the Fragment Host bundle, although it appears that a Fragment can provide unique resources in the same package location. This combination of restrictions cuts off almost all the possible routes for extension, converting the OSGi bundle from something that can be activated into a simple jar on the classloader’s search path.
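So the manifest of an SPI Fragment ends up very sparse indeed: no Import-Package, no Export-Package, no activator, just an identity and a Fragment-Host header. Roughly this shape; the symbolic names here are my assumption of what the SparseMap bundles use, check the real manifests for the actual values:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.sakaiproject.nakamura.derby-driver
Fragment-Host: org.sakaiproject.nakamura.core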

There is one loophole that does appear to work. If the Fragment Host bundle specifies a Service-Component manifest entry naming a service component xml file that is not in the Fragment Host bundle, then that file can be provided by the Fragment bundle. If you are using the BND tool (or the Felix Bundle plugin) to specify the Service-Component header, either explicitly or implicitly, you will find that your route is blocked: the tool checks that any file specified exists, and if the file does not exist when the bundle is being built, BND refuses to generate the manifest. There may be some logic somewhere in that decision, but I haven’t found an official BND way of overriding the behaviour. The solution is to ask the BND tool to put in an empty Service-Component manifest header, then merge the manifest produced with some supplied headers when the jar is constructed. This allows you to build the bundle leveraging the analysis tools within BND, and still have a Service-Component header that references service component xml files that do not exist in the host bundle.

On startup, if there is no Fragment bundle adding the extra service component xml file to the Fragment Host classloader, an error is logged and loading continues. If the Fragment bundle provides the extra service component xml file, then it’s loaded by the standard Declarative Services manager that comes with OSGi. In that xml file, the implementor of the SPI can specify the internal services that implement the SPI, and allow the services inside the Fragment Host to satisfy their references from those components. This way, a relatively simple OSGi Fragment bundle can be used to provide an SPI implementation that has access to the full internal packages of the Fragment Host bundle, while avoiding exposing the SPI interfaces to all bundles.
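The service component xml the Fragment supplies is ordinary Declarative Services markup. A sketch of its shape; the class and interface names here are made up for illustration, not taken from SparseMap:

<?xml version="1.0" encoding="UTF-8"?>
<!-- OSGI-INF/serviceComponent.xml, shipped inside the Fragment bundle -->
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.0.0"
    name="org.example.driver.DerbyStorageClientPool">
  <!-- implementation class lives in the Fragment, loaded by the host classloader -->
  <implementation class="org.example.driver.DerbyStorageClientPool"/>
  <service>
    <!-- SPI interface internal to the Fragment Host bundle -->
    <provide interface="org.example.spi.StorageClientPool"/>
  </service>
</scr:component>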

In SparseMap, I am using this mechanism to provide storage drivers for several RDBMSs via JDBC, and for a handful of column DBs (Cassandra, HBase, MongoDB). The JDBC based Fragment bundles simply contain SQL and DDL configuration, a simple declarative service, and the relevant JDBC driver jar; this is because the JDBC driver implementation itself is part of the Fragment Host bundle, where it lies inactive. The column DB Fragment bundles all contain the relevant implementation and client libraries to make the driver work. SparseMap was beginning to be a dumping ground for every dependency under the sun; formalising a storage SPI and extracting implementations into SPI Fragment bundles has made SparseMap storage independently extensible without having to expose the SPI to all bundles.

This will be in the 1.4 release of SparseMap, due in a few days. Those using SparseMap will have to ensure that an SPI Fragment bundle is present in the OSGi container when the SparseMap Fragment Host bundle becomes active. If it’s not present, the repository in SparseMap will fail to start and an error will be logged indicating that OSGI-INF/serviceComponent.xml is missing.





Solr Search Bundle 1.3 Released

13 12 2011

The Solr Search v1.3 bundle developed for Nakamura has been released. This is not to be confused with Apache Solr 4: the bundle wraps a snapshot version of Apache Solr 4 at revision 1162474 and exposes a number of OSGi components that allow a SolrJ client to interact with the Solr server.

In this release 2 bugs were identified and fixed. These bugs relate to the reliability of remote servers in a Solr cluster and the reliability of the indexing queues.

As always, thanks goes to everyone who contributed and helped to get this release out.

Issues Fixed: https://github.com/ieb/solr/issues?sort=created&direction=desc&state=closed&page=1&milestone=4
Release Tag: https://github.com/ieb/solr/tree/org.sakaiproject.nakamura.solr-1.3

Downloads are available from the release tag.

To Use from a maven2 project

 
    <dependency>
        <groupId>org.sakaiproject.nakamura</groupId>
        <artifactId>org.sakaiproject.nakamura.solr</artifactId>
        <version>1.3</version>
    </dependency>




Minimalism

25 11 2011

In spare moments between real work, I’ve been experimenting with a lightweight content server for user generated content. In short, that means content in a hierarchical tree that is shallow and very wide. It doesn’t preclude deep narrow trees, but wide and shallow is what it does best. Here are some of the things I wanted to do.

I wanted to support the same type of RESTful interface seen in Sakai OAE’s Nakamura and in standards like Atom. By that I mean the URL points to a resource, and actions expressed by the HTTP methods, the HTTP protocol and markers in the URL modify what a RESTful request does; in short, along the lines of the arguments in http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven, which probably drove the thinking behind Sling, on which Nakamura is based. I mention Atom simply because when you read the standard it talks about the payload of a response but makes no mention of how the URL should be structured to get that payload, which reinforces the earlier desire.

I wanted the server to start as quickly as possible and use as little memory as possible; ideally < 10s and < 20MB. Java applications have got a bad name for bloat, but there is no reason they have to be huge to serve load. Why so small (in Java terms)? Why not? Contrary to what most apps appear to do, memory is not there to waste.

I wanted the core server to support some standard protocols, eg WebDAV, but I wanted to make it easy to extend: JAX-RS (RESTEasy) inside OSGi (minimal Sling bootstrap + Apache Felix).

I wanted the request processing to be efficient. Stream all requests (commons-fileupload 1.2.1 with streaming: no writing to intermediate files or byte[], both of which mean high GC traffic and slow processing), everything processed only once and available via an Adaptable pattern, a concept strong in Sling. And requests handled by response objects, not servlets. Why? So the response state can be thread unsafe, and a request can be suspended in memory and unbound from the thread. And the resolution binding requests to resources to responses is handled entirely in memory by pointer, avoiding iteration. OK, so the lookup of a resource might go through a cache, but the resolution through to resource is an in-memory pointer operation.

Where content is static, I wanted to keep it static. OSes have file systems that are efficient at storing files, efficient at loading those files from disk, and good at eliminating disk access completely, so if the bulk of the static files that my application needs really are static, why not use the filesystem? Many applications seem to confuse statically deterministic and dynamic: if all the possibilities can be computed at build time, and the resources required to create and serve them are not excessive, then the content is static. What’s excessive? A production build that takes 15 minutes to process all possibilities once a day is better than continually wasting heat and power doing it all the time. I might be a bit more extreme in that view, accepting that filling a TB disk with compiled state is better than continually rebuilding that state incrementally in user facing production requests. If a deployer wants to do something special (SAN, NAS, something cloud like) with that filesystem, there are plenty of options. All of Httpd/Tomcat/Jetty are capable of serving static files at high thousands of concurrent requests per second, so why not use that ability? Browser based apps need every bit of speed they can get for static data.

The downside of all of this minimalism is a server that doesn’t have lots of ways of doing the same thing. Unlike Nakamura, you can’t write JSPs or JRuby servlets. It barely uses the OSGi Event system and has none of the sophistication of Jackrabbit. The core container is Apache Felix, with the Felix HttpService running a minimalist Jetty. The content system is Sparse Content Map, the search component is Solr as an OSGi bundle, WebDAV is provided by Milton, and JAX-RS by RESTEasy. Caching is provided by EhCache. It starts in 8MB in 12s, and after load drops back to about 10MB.

Additional RESTful services are created in one of three ways.

  1. Registering a servlet with the Felix Http Service (whiteboard), which binds to a URL, breaking the desire that nothing should bind to fixed URLs.
  2. Creating a component that provides a marker service, picked up by the OSGi extension to RESTEasy that registers that service as a JAX-RS bean.
  3. Creating a factory service that emits JAX-RS annotated classes that act as response objects. The factory is annotated with the type of requests it can deal with, and the response objects tell JAX-RS what they can do with the request. The annotations are discovered when the factory is registered with OSGi, and compiled into a one step memory lookup (a single concurrent hashmap get).

Methods 1 and 2 have complete control over the protocol and are wide open to abuse; method 3 follows a processing pattern closely related to Sling, sketched below.
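To make method 3 concrete, here is a sketch of the shape of such a factory. All the names (ContentResponseFactory, forResourceTypes, the resource type string) are illustrative guesses at the pattern, not the actual API:

import javax.ws.rs.GET;
import javax.ws.rs.Produces;

// Sketch of method 3: an OSGi-registered factory that emits JAX-RS
// annotated response objects. The factory is shared; each response
// object is thread-unsafe and owns the state of one request.
public class ContentResponseFactory {

    // Hypothetical marker, read when the factory is registered with OSGi
    // and compiled into the one-step lookup: resource type -> factory.
    public String[] forResourceTypes() {
        return new String[] { "sparse/content" };
    }

    public ContentResponse createResponse(String resolvedPath) {
        return new ContentResponse(resolvedPath);
    }

    // The response object tells JAX-RS what it can do with the request.
    public static class ContentResponse {
        private final String path; // per-request state, deliberately not thread safe

        ContentResponse(String path) {
            this.path = path;
        }

        @GET
        @Produces("application/json")
        public String doGet() {
            // render the resolved resource; because state lives here,
            // the request can be suspended and resumed off-thread
            return "{\"path\":\"" + path + "\"}";
        }
    }
}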

Integration testing

Well, unit testing is obvious: we do it, and we try to get 100% coverage of every use case that matters. In fact, if you work on a time and materials basis for anyone, you should read your contract carefully to work out whether you have to fix mistakes at your own expense. If you do, then you will probably start writing more tests to prove to your client that what you did works. It’s no surprise that in other branches of engineering acceptance testing is part of many contracts. I don’t think an airline would take delivery of a new plane without starting the engines, or a shipping line take delivery of a super tanker without checking it floats. I am bemused that software engineers often get away with saying “it’s done” when clearly it’s not. Sure, we all make mistakes, but delivering code without test coverage is like handing over a ship that sinks.

Integration testing is less obvious. In Sling there is a set of integration tests that test just about everything against a running server. It’s part of the standard build but lives in its own project. It’s absolutely solid and ensures that nothing that is broken gets built, but as an average mortal I found it scary, since when things did break I had to work hard to find out why. That’s why in Nakamura we wrote all integration tests in scripts, initially bash and perl, later Ruby. With hindsight this was a huge mistake. First, you had to configure your machine to run Ruby and all the extensions needed. Not too hard on Linux, but for a time those on OSX would wait forever for ports to finish building some base library. Dependencies gone mad. Fine if you were one of the few who created the system and pulled everything in over many months, but hell for the newcomer. Mostly, the newcomer walks away, or tweets something that everyone ignores.

The devs also get off the hook. New ones don’t know where to write the tests, or have to learn Ruby (replace Ruby with whatever the script is). Old devs can sweep them under the carpet, and when it gets to release time ignore the fact that 10% of the tests are still broken… because they didn’t have time to maintain them three Fridays ago at 18:45, just before they went to a party. The party where they zapped 1% of their brain cells, including the ones that were remembering what they should have done at 18:49. Still, they had a good time, the evening raised their morale and started a great weekend ready for the next week, and besides, they had no intention of boarding the ship.

So the integration testing here is done as Java unit tests. If this were a C++ project they would be C++ unit tests. They live in the bundle with the code they test. They are run by “mvn -Pintegration test”; even the command says what is going to happen. It starts a full instance of the server (now 12s becomes an age), or uses one that’s already running, and runs the tests. If you’re in Eclipse they can be run in Eclipse, just as any other test might be, and being OSGi, the new code in the bundle can be redeployed to the running OSGi container. That way the dev creating the bundle can put their tests in their bundle and do integration testing with the same tools they did unit testing with. No excuse. “find . -type d -name integration | grep src/test” finds all integration tests, and by omission, ships that sink.
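The profile itself is unremarkable Maven. Something of this shape; the package layout convention is taken from the find command above, the rest is my assumption:

<profile>
  <id>integration</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <!-- run only the tests under src/test/**/integration -->
          <includes>
            <include>**/integration/**/*Test.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>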