Log4J and Chainsaw

5 09 2006

After years of using tail -f or less on log4j log files, and then trying to do clever things in Excel with the output, I realized there is this thing called Chainsaw, which accepts a feed of log4j events over IP.

So you can turn off most of the output in the log files and use Chainsaw to drill into the log stream, turning DEBUG on and off for selected classes at runtime.

As with all log4j, it's a pain to set up. I'm using a Simple Receiver on port 4445, so my log4j.properties file looks like this:

log4j.rootLogger=INFO, stdout, Chainsaw

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{hh:mm:ss.SSS} %-5p[%-20c] %m%n
log4j.appender.stdout.Threshold=WARN

log4j.appender.Chainsaw=org.apache.log4j.net.SocketAppender
log4j.appender.Chainsaw.remoteHost=localhost
log4j.appender.Chainsaw.port=4445
log4j.appender.Chainsaw.locationInfo=true

log4j.logger.org.sakaiproject=WARN
log4j.logger.org.sakaiproject.content=DEBUG
log4j.logger.org.sakaiproject.dav=DEBUG
log4j.logger.org.apache=ERROR
log4j.logger.org.springframework=ERROR
log4j.logger.org.hibernate=ERROR
log4j.logger.vm.none=FATAL
log4j.logger.com.sun.faces=FATAL

The trick is the Threshold setting, which reduces the level of output sent to stdout.
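On the Chainsaw side, the receiver has to be listening on the same port the SocketAppender publishes to. This is a sketch of what I understand a Chainsaw 2 receiver configuration to look like; the plugin class name and parameter names are my best guess, so check them against your Chainsaw version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a Chainsaw receiver config: listens on the same
     port (4445) that the SocketAppender above sends to. -->
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <plugin name="SocketReceiver" class="org.apache.log4j.net.SocketReceiver">
    <param name="Port" value="4445"/>
  </plugin>
</log4j:configuration>
```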

If I drop this into webapps/dav/WEB-INF/classes/log4j.properties it gets loaded by the base loader and changes logging for the whole of Sakai.


JCR Session Startup

4 09 2006

Is good, and quick.

JCR Sessions

4 09 2006

I had thought, given the structure of JCR, that it would be a good idea to open and attach one session to each request thread, avoiding the session creation mechanism. Before you think that this is totally daft, remember that the JCR persistence manager in Jackrabbit manages persistence for the entire JVM. So a managed session attached to the request thread is not so dumb…. well, perhaps, except that if anything goes wrong with the session, that error state persists with the session to the next request. The interesting bit is that the error hangs around until an eden GC collection cycle takes place…. at which point any objects that were left uncommitted in the session are finalized. If the finalization ‘rolls back’ the JCR object transaction, the session recovers, but it looks like everything that was ‘committed’ after the error state is also rolled back.

In retrospect, creating sessions per request is going to be a better approach. However, it's going to need some sort of hook into the request cycle to ensure that the Session is created and destroyed. Lazy construction is OK, but there is no unbind hook at the component level.
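The obvious place for that hook is a servlet filter. The sketch below shows the pattern only: `JcrSessionHolder`, `SessionFactory` and the minimal `Session` interface are stand-ins of my own invention for the real javax.jcr types, where the factory would call `Repository.login()` and release would call `Session.logout()`.

```java
// Sketch of per-request session management via a ThreadLocal.
// "Session" and "SessionFactory" are minimal stand-ins for the
// javax.jcr interfaces, just to show the bind/unbind shape.
public class JcrSessionHolder {

    /** Stand-in for javax.jcr.Session. */
    public interface Session {
        void logout();
    }

    /** Stand-in for javax.jcr.Repository#login(). */
    public interface SessionFactory {
        Session newSession();
    }

    private final SessionFactory factory;
    private final ThreadLocal<Session> current = new ThreadLocal<Session>();

    public JcrSessionHolder(SessionFactory factory) {
        this.factory = factory;
    }

    /** Lazily bind a session to the current request thread. */
    public Session get() {
        Session s = current.get();
        if (s == null) {
            s = factory.newSession();
            current.set(s);
        }
        return s;
    }

    /** Unbind hook for the end of the request cycle, so any
        error state is thrown away with the session. */
    public void release() {
        Session s = current.get();
        if (s != null) {
            try {
                s.logout();
            } finally {
                current.remove();
            }
        }
    }
}
```

In a servlet container the filter's doFilter would wrap the chain in a try/finally and call release() in the finally block, so a broken session never survives into the next request.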

Jackrabbit Cluster

1 09 2006

There is one thing that I must have forgotten to check out completely with Jackrabbit….. clustering! Although it uses a DB and has a Persistence Manager, there is a cache sitting above the Persistence Manager which, in a cluster, risks becoming stale. There is work in this area under https://issues.apache.org/jira/browse/JCR-169 but no indication of when it's going to be implemented. I have also seen a jackrabbit-dev discussion on the implementation of cache concurrency in a cluster, which appears to be sensible and almost working.

So, although this looks promising, the only way to run Jackrabbit with the cache turned on is to have a single JCR node. This is not ideal, as I already know that the CHS API hits the underlying storage very hard. Turning the cache off is probably as bad as running on a single node.

There are a whole load of issues to be solved in this area if Jackrabbit is to work in a cluster as we would like it to.