Section Group Support for Wiki in Sakai

9 06 2006

There is already some basic support for Groups and Sections in Sakai RWiki: it connects a Wiki SubSpace to a Worksite group. If the connection is made (by using the name of the group as the SubSpace name), permissions are taken from the group’s permissions. There is also a wiki macro that will generate links to all the potential Group/Section SubSpaces in a Worksite (see the list of macros in the editing help page).
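
To make the convention concrete, here is a minimal sketch of the lookup it implies. The class and method names are my own invention for illustration, not the actual RWiki code:

```java
import java.util.Set;

/**
 * Hypothetical sketch: if a wiki SubSpace shares its name with a worksite
 * group, permissions are resolved against that group rather than the site.
 * Names here are illustrative assumptions, not the real RWiki classes.
 */
class SubSpacePermissionResolver {

    /** The group names defined in the worksite, however they are obtained. */
    private final Set<String> worksiteGroupNames;

    SubSpacePermissionResolver(Set<String> worksiteGroupNames) {
        this.worksiteGroupNames = worksiteGroupNames;
    }

    /**
     * Returns the group whose permissions should apply to the SubSpace,
     * or null to fall back to the site-wide wiki permissions.
     */
    String resolveGroupFor(String subSpaceName) {
        return worksiteGroupNames.contains(subSpaceName) ? subSpaceName : null;
    }
}
```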

This is a simple approach that is probably easy to understand, but it’s not exactly sophisticated or flexible. So, being a glutton for UI punishment, we have started to open up the concept further.

The concept is that for any node in the wiki hierarchy (that is, wiki pages or wiki subspaces), you (a maintain or admin user) can configure which permissions ‘realm’ is associated with the node, edit the permissions on the roles, add and delete roles in that ‘realm’, modify the permissions associated with a role, and add or remove users from a role.

A can of worms! The challenge is not in creating the functionality; anything is possible. The challenge is in creating a UI that doesn’t confuse the hell out of anyone other than the developer who created it.

One view on this is that it’s better to stick with simple statements that control the permissions, such as ‘Lock this page’, and not expose the full power of the underlying permissions system. I think I agree with that for access-type users, but for a user who is maintaining a worksite this may not be enough power. I am going to have to do many mock-ups to uncover all the issues. The advanced permissions editing may not make 2.2.





LGPL: What is an acceptable extension?

6 06 2006

Sesame is LGPL-licensed, with a clarification on Java binding. The net result of the statement in the README is that you can use Sesame in another project without that project having to be LGPL. That’s great! Well, it’s great if you want to use the LGPL library in the way the developers intended. When it comes to re-implementing an underlying driver you are faced with three choices: implement the driver so that it’s compatible with the internal implementation, implement your own algorithm, or use something else.

Implementing your own driver, keeping it compatible with the original driver and underlying storage structure, is almost certainly an LGPL extension that should also be licensed as LGPL and released back to the net.

Implementing your own driver with its own algorithm will probably give you the right to claim that the code is just using the LGPL code as a library. Then you can choose your own license.

If your project already has a non-xGPL license, then neither of the above options is palatable: one means a fork in the ‘virtual’ code base, the other means changing your license. I don’t know the answer, so I’m choosing the do-nothing option. I won’t be using the DataSource-based, non-DDL Sesame RDBMS driver or fixing any of the bugs in it, since it’s just too close to the original, uses too many of its ideas, and would have to be licensed LGPL. So Sesame will be available inside Sakai search as an add-in module that uses its own database connection. I guess that not many will want to use this connection strategy, which is non-standard in Sakai terms.

There is hope: if an RDF triple store becomes as widespread and accepted as an RDBMS, then we will be able to treat it just like MySQL or any other database. Time will tell.





Sesame RDBMS Drivers

6 06 2006

I’ve written a DataSource-based Sesame driver, but one thing occurs to me about the Sakai environment: most production deployments do not allow the application servers to perform DDL operations on the database. Looking at the default implementations (that is, the non-DataSource ones), they all perform lots of DDL on the database at startup and in operation. This could be a problem for embedding Sesame inside Sakai. I think I am going to have to re-implement the schema generation model from scratch. It might even be worth using Hibernate to build the schema, although it’s not going to make sense to use Hibernate to derive the object model; the queries are just too complex and optimized.
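
One direction this could take: detect a schema that the DBA has already created and skip the DDL entirely. Here is a minimal sketch using only standard JDBC metadata calls; the class itself is hypothetical, not part of Sesame or Sakai:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

/**
 * Hypothetical sketch: before running any DDL at startup, check whether the
 * schema already exists and skip the DDL if so. Only the JDBC metadata
 * calls are standard; the class is an illustrative assumption.
 */
public class SchemaChecker {

    private final DataSource dataSource;

    public SchemaChecker(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** True if the named table already exists (name case may be DB-specific). */
    public boolean tableExists(String tableName) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             ResultSet rs = conn.getMetaData()
                     .getTables(null, null, tableName, new String[] { "TABLE" })) {
            return rs.next();
        }
    }
}
```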





Sesame RDBMS Implementation

5 06 2006

It looks like there are some interesting features in the Sesame default RDBMS implementation. Since it uses its own connection pooling, it tends to commit on close. If the standard connection pool that is used by default is replaced by a javax.sql.DataSource, things like commits don’t happen when Sesame thinks they should have happened. The net result is a bunch of exceptions associated with lock timeouts, as one connection coming out of the DataSource blocks subsequent connections. The solution looks like it’s going to be to re-implement most of the RDBMS layer with one that is aware of a DataSource rather than a connection pool.
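
To illustrate the mismatch: with a DataSource-backed pool, closing a connection just returns it to the pool, so uncommitted work keeps its row locks and blocks the next borrower. A DataSource-aware layer has to commit (or roll back) explicitly. This is a sketch of that idea only, not Sesame’s actual code:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

/**
 * Hypothetical sketch: commit explicitly before a connection goes back to
 * the DataSource pool, since the pool will not commit on close the way
 * Sesame's own pool effectively did.
 */
public class DataSourceAwareWork {

    private final DataSource dataSource;

    public DataSourceAwareWork(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void runInTransaction(SqlWork work) throws SQLException {
        Connection conn = dataSource.getConnection();
        try {
            conn.setAutoCommit(false);
            work.execute(conn);
            conn.commit(); // Sesame's own pool effectively did this on close
        } catch (SQLException e) {
            conn.rollback(); // release locks so other connections are not blocked
            throw e;
        } finally {
            conn.close(); // returns the connection to the pool
        }
    }

    /** Simple callback for the JDBC work to run inside the transaction. */
    public interface SqlWork {
        void execute(Connection conn) throws SQLException;
    }
}
```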





Sesame in a Clustered environment

4 06 2006

Sesame has one major advantage in a clustered environment: it stores its content in a database. I’m not saying this is a good thing, but it does make it easier to deploy in a clustered environment where the only thing that is shared is the database. It should be relatively easy to make it work OOTB with Sakai… however, it looks like the default implementation of the Sesame RDBMS Sail driver (this is the RDF repository abstraction layer) likes to be given a JDBC URL, username and password. This would be OK, except that Sakai likes to use a DataSource.

The solution appears to be to extend various classes within the Sesame core RDBMS implementation so that whenever a connection is required it comes from the Sakai DataSource rather than some separately managed JDBC pool.
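
The shape of the extension point might look something like the sketch below: a single place that answers “where do connections come from”, which can then be backed by Sakai’s DataSource. These names are assumptions for illustration, not Sesame’s real classes:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

/**
 * Hypothetical sketch: one abstraction for connection acquisition, so the
 * RDBMS layer no longer needs a url/user/password pool of its own.
 */
interface ConnectionProvider {
    Connection getConnection() throws SQLException;
}

/** Implementation backed by the container-managed DataSource Sakai uses. */
class SakaiConnectionProvider implements ConnectionProvider {

    private final DataSource dataSource;

    SakaiConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }
}
```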

It’s not clear at the moment whether Sesame is scalable enough to handle the potential number of triples that Sakai will generate. The tests of the Lucene part of the search engine indexed about 5GB of data representing about 100,000 documents, and performance was perfectly acceptable. If the same document set were put into a triple store, we would see at least 2M triples (around 20 triples per document), and that’s before we start to add in any worksite ontology beyond the standard Sakai ontology.

If we get to this size of RDF store, we should also consider using Kowari, but with its entirely native index format we might have to employ techniques similar to those used in the Lucene clustered search to make it work. Alternatively we could look at a dedicated RDF server… although I suspect that this would be too much deployment effort for most users.





Wiki Sub-Sites Groups and Sections

4 06 2006

In general the Wiki tool was well received, and the presentations by Harriet, Andrew and Frances Tracy provoked thought. It was especially good to see faculty members relaying real teaching and research experience of Sakai in use.

However, we still have lots of questions about how sections/groups are going to relate to Wiki sub-sites. The mapping idea, where a wiki sub-site maps to a section/realm group, appears to make sense, but the UI for controlling the permissions and the way in which roles might be added to realms just isn’t clear enough yet. I hope to see this in 2.2, but it might slip.





Exploding Content Hosting Service

4 06 2006

We had some extremely productive conversations on the ContentHostingService towards the end of the Vancouver Conference. The basic idea of the ContentHostingPlugin was extended, and it looks like it might be worth attempting to restructure the ContentHostingService to separate out the implementation of the default storage mechanism, so that node properties are stored centrally but all handling mechanisms become plug-ins. This will mean merging ContentResource and ContentCollection into a single ContentEntity more fully, so that the core of ContentHosting can treat them the same regardless of where and how they are stored. This opens up the potential to have tools inject ContentHostingHandlers into the ContentHostingService: repositories, DSpace, IMS-CP and so on. It’s not going to be 2.2, but maybe 2.3. I will be working with Jim Eng on this, as it’s his code.
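
As a very rough sketch of the plug-in shape being discussed: a unified entity plus a handler that the core service could delegate to. The interfaces and method names below are assumptions for illustration, not the real Sakai API:

```java
/**
 * Hypothetical sketch of the ContentHostingHandler idea. Nothing here is
 * the actual Sakai interface; it just illustrates the delegation model.
 */
interface ContentEntity {
    String getId();
    boolean isCollection(); // one type covers both resources and collections
}

interface ContentHostingHandler {
    /** True if this handler owns storage for the given entity id. */
    boolean handles(String entityId);

    /** Resolve an id to an entity, wherever and however it is stored. */
    ContentEntity resolve(String entityId);
}
```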