Comments : 1 Comment
Categories : Uncategorized
You can't, well, not natively on OSX. But you can bring up a Parallels desktop and install Debian Etch from the network install ISO (make certain you give it 1G of swap to keep Oracle happy, and don't add any extra packages), then edit /etc/apt/sources.list:
deb http://debian.virginmedia.com/ etch contrib main
deb http://security.debian.org/ etch/updates main contrib
deb-src http://security.debian.org/ etch/updates main contrib
deb http://oss.oracle.com/debian unstable main non-free
to include the Oracle OSS repository. Then run:
wget http://oss.oracle.com/el4/RPM-GPG-KEY-oracle -O- | sudo apt-key add -
apt-get install oracle-xe
Once that is all done, and you have been through the Oracle DB configure step, you should apt-get install ssh so that you can create a TCP tunnel to localhost:8080 from your OSX box. That done, you can open the http://localhost:8080/apex URL on your OSX box, log in to the Oracle XE web interface and configure it.
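For illustration, the tunnel from the OSX side might look like this (the guest address and username are assumptions; substitute whatever IP Parallels assigned your VM):

```shell
# Forward local port 8080 on the OSX host to port 8080 inside the VM.
# 10.211.55.3 is a typical Parallels guest address; yours may differ.
ssh -L 8080:localhost:8080 user@10.211.55.3
```

While the tunnel is up, http://localhost:8080/apex on the OSX side reaches the Oracle XE web interface inside the VM.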
The nice thing about this is, if you keep the kernel small, it's almost like running Oracle as a native app on OSX. I did try with Ubuntu, but it was far too clever, made a huge mess of the VGA console, and would only run with X enabled... not very lightweight.
Comments : Comments Off on Running Oracle On OSX
Categories : Uncategorized
Being slightly desperate for a test instance of Oracle on OSX, I had a brainwave: run Linux inside Parallels Desktop and run Oracle inside that VM. Since Parallels uses VT and can go direct to hardware, it looks like an option. However, to get Ubuntu 7.04 running you have to tell Parallels your OS is Solaris, something to do with the VGA setup. Apparently the boot parameter vga=771 also works.
Comments : Comments Off on JSR-170 does provide a level of interoperability.
Categories : Thought
Well, that's a bit obvious; standards are supposed to do that. However, they frequently fail to generate interoperability.
Sakai now has an experimental Content Hosting Service that uses JSR-170. There are a few bugs in it, but in general it works, and I have uploaded a block of 500MB/1600 files a number of times.
This CHS-JCR implementation in the 2.5 trunk binds to the JCRService API that is also in the 2.5 trunk. All it needs is an implementation of the JCRService API. This API is really just a session factory that manages a JSR-170 repository and produces sessions.
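To make the session-factory idea concrete, here is a minimal sketch of what such an API might look like. The interface and class names below are hypothetical stand-ins, not the actual Sakai API; the real one hands out javax.jcr types, while this sketch defines its own so it is self-contained.

```java
// Hypothetical sketch of a JCRService-style session factory: one object
// manages the repository and produces sessions on demand.
import java.util.concurrent.atomic.AtomicInteger;

public class JcrServiceSketch {

    // Stand-ins for javax.jcr.Session and javax.jcr.Repository.
    interface Session { String getUserID(); void logout(); }
    interface Repository { String getDescriptor(String key); }

    /** The factory: manages one JSR-170 repository and produces sessions. */
    interface JCRService {
        Session login(String userId);
        Repository getRepository();
    }

    /** Trivial in-memory implementation standing in for Jackrabbit or Xythos. */
    static class InMemoryJcrService implements JCRService {
        private final AtomicInteger openSessions = new AtomicInteger();

        public Session login(String userId) {
            openSessions.incrementAndGet();
            return new Session() {
                public String getUserID() { return userId; }
                public void logout() { openSessions.decrementAndGet(); }
            };
        }

        public Repository getRepository() {
            return key -> "in-memory";
        }

        int openSessionCount() { return openSessions.get(); }
    }

    public static void main(String[] args) {
        InMemoryJcrService service = new InMemoryJcrService();
        Session s = service.login("admin");
        System.out.println(s.getUserID());
        s.logout();
        System.out.println(service.openSessionCount());
    }
}
```

The point of the pattern is that the consumer (here, the Content Hosting Service) only ever sees the factory interface, so swapping Jackrabbit for Xythos is a deployment choice rather than a code change.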
We currently have two working implementations: a Jackrabbit implementation and a Xythos implementation. The cool bit is that you can deploy your chosen implementation and it works with no code changes. There are obviously some configuration settings, but these are all in sakai.properties. The Xythos implementation also requires a working Xythos server, a Xythos properties file and a license key. The Xythos JCR implementation has only just been released, so perhaps Sakai is the first educational app to bind in this way.
I also have an Alfresco JCRService implementation; however, the dependency list is quite large, and it depends on Hibernate 3.2.1, which might conflict with Sakai's Hibernate.
Comments : Comments Off on Sakai Search working in a cluster again
Categories : Thought
Not so long ago I realised that Sakai search was corrupting its indexes every month or so due to NFS latencies in propagating changes in the shared segments. I had incorrectly assumed that these latencies would be on the order of 10 seconds at most, but this appears not to be the case.
So I have rewritten the indexer stack. It is now built from four XA transaction managers. The first takes index items from a queue, attaches them to the transaction, and processes the list; the result is a Lucene segment, which is sent to the shared storage area as a save point as part of the two-phase commit protocol.
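A save point only works if other nodes never observe a half-written segment. One common way to get that property, sketched below under the assumption that the shared filesystem supports atomic rename, is to stage the file under a temporary name and rename it into place. The paths and naming scheme here are illustrative, not the actual Sakai layout.

```java
// Sketch: publish a finished local segment to shared storage as a save
// point, staging under a .tmp name and renaming so readers never see a
// partially written file.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SavePointPublisher {

    /** Copy a local segment into shared storage, making it visible atomically. */
    static Path publish(Path localSegment, Path sharedDir, long txId)
            throws IOException {
        Files.createDirectories(sharedDir);
        // Stage under a temporary name first...
        Path tmp = sharedDir.resolve("savepoint-" + txId + ".tmp");
        Files.copy(localSegment, tmp, StandardCopyOption.REPLACE_EXISTING);
        // ...then rename into place; readers only ever see complete files.
        Path finalName = sharedDir.resolve("savepoint-" + txId + ".seg");
        return Files.move(tmp, finalName, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path work = Files.createTempDirectory("search");
        Path segment = Files.write(work.resolve("local.seg"),
                "segment bytes".getBytes());
        Path shared = work.resolve("shared");
        Path saved = publish(segment, shared, 42L);
        System.out.println(saved.getFileName());
    }
}
```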
The next transaction in the pipeline is a merge operation that retrieves shared save points and adds them to the local index. The Lucene index searcher is reopened in the background.
The third transaction performs a merge and optimization of the save point segments added to the index in the second transaction. Since this involves reorganization of the index, we also post file deletion requests to a delayed queue, to ensure that no files are removed while they are still in active use by searchers. The result of this operation is a local index, built up over time from a sequence of save points.
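The delayed deletion queue can be sketched with the JDK's own DelayQueue: a deletion request only becomes takeable after its hold-off expires, so files that a searcher may still have open are not unlinked underneath it. The 50 ms delay and file name here are purely for demonstration; a real hold-off would be much longer.

```java
// Sketch of a delayed file-deletion queue: requests become eligible only
// after a hold-off period, protecting files still open in active searches.
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayedDelete {

    static class DeleteRequest implements Delayed {
        final String path;
        final long deadlineNanos;

        DeleteRequest(String path, long delayMillis) {
            this.path = path;
            this.deadlineNanos = System.nanoTime()
                    + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        public long getDelay(TimeUnit unit) {
            return unit.convert(deadlineNanos - System.nanoTime(),
                    TimeUnit.NANOSECONDS);
        }

        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                    other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DeleteRequest> queue = new DelayQueue<>();
        queue.add(new DeleteRequest("segment-0001.cfs", 50));

        // take() blocks until the hold-off expires; only then is the
        // file actually safe to unlink.
        DeleteRequest ready = queue.take();
        System.out.println("deleting " + ready.path);
    }
}
```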
The final transaction manager performs periodic merges on the central index store, merging past save points into a single save point once all active nodes have loaded the later save points. Unlike the previous transactions, which are completely decoupled and run in parallel, this transaction acquires a cluster-wide lock on the segments it is merging.
So far, in soak tests and unit tests, I have not seen failures of the form that were previously seen in search in 2.4.