Fouling

3 05 2017

[Image: the new antifouling controller board]

Fouling, for any boat large or small, eats boat speed, fuel and satisfaction. Most boats haul out every year to pressure wash or scrape off the marine flora and fauna that have taken up residence. Owners then repaint the hull with a toxic antifouling paint: toxic to marine life, and unpleasant for humans. For small boats the toxicity has been greatly reduced over the years; some would argue so has the effectiveness. The problem is that the paint erodes and the toxins leach out. High concentrations of boats lead to high concentrations of toxins.

For larger ships the cost of antifouling is significant: interruption to service and the cost of a period in dry dock. Antifouling for large ships is considerably more toxic than anything available to the pleasure boat industry, in order to extend the period between maintenance to several years.

About 10 years ago I coated my boat with copper particles embedded in epoxy. The exposed copper surface of the particles reacts with sea water to create an insoluble copper compound. This doesn't leach, but, like the solid copper sheets on clipper ships, it discourages marine fouling. I have not painted since. Until a few years ago I did still need to scrub off: I would see a few barnacles and some marine growth, but no more than would be normal with toxic paint.

A few years ago I added 2 ultrasonic antifouling units. These send low power ultrasonic pulses into the hull and the water. According to research performed at Singapore University, barnacle larvae use antennae to feel the surface they are about to attach to. Once attached, the antennae stick to the surface and convert into the shell, and the barnacle never falls off. Ultrasound at various frequencies excites the antennae, which disrupts the sensing process, and the larvae swim past to attach elsewhere. There is also some evidence that ultrasound reduces algae growth. The phenomenon was first discovered testing submarine sonar over 50 years ago; my uncle did submarine research in various Scottish lochs during the war.

The ultrasonic antifouling I have fitted currently was a punt: 2 low cost self-assembly units from Jaycar, first published in an Australian electronics magazine. They are driven by an 8-pin PIC driving 2 MOSFETs. I think it has made a difference. After a year in the water I have no barnacles and a bit of soft slime. There are commercial units available at considerably more cost, but the companies selling them seem to come and go. I am not convinced enough to spend that sort of money, but I am curious and prepared to design and build a unit.

The new unit (board above), for the new boat, is a bit more sophisticated. It's a 6 channel unit controlled by an Arduino Mega with custom code, controlling a MOSFET signal generator and 6 pairs of MOSFETs. It outputs 15–150 kHz at up to 60W per channel. A prototype on the bench works and has my kids running out of the house (they can hear the harmonics); my ears are a little older so can't, but they still ring a bit. I won't run it at those levels as that would likely cavitate the water and kill things, as well as eat power. It remains to be seen if the production board works; I have just ordered a batch of 5 from an offshore fabrication shop. A sketch of how one channel's drive signal can be generated is below.
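For illustration, here is a minimal sketch of how a single channel's square wave could be swept across the band using a hardware timer on an ATmega2560 (Arduino Mega). It is not the code on the board: the pin choice (OC1A, digital pin 11), the sweep step and the dwell time are assumptions, and the real unit drives its 6 MOSFET pairs through gate drivers rather than directly from a pin.

// Sketch: swept square wave on one channel using Timer1 on an ATmega2560.
// OC1A is digital pin 11 on the Mega. Assumed values throughout.

const uint32_t F_CPU_HZ = 16000000UL; // Mega clock
const uint32_t F_MIN = 15000;         // 15 kHz
const uint32_t F_MAX = 150000;        // 150 kHz

void setFrequency(uint32_t hz) {
  // CTC mode, toggle OC1A on compare match, no prescaler:
  // f_out = F_CPU / (2 * (OCR1A + 1))
  uint16_t top = (uint16_t)(F_CPU_HZ / (2UL * hz)) - 1;
  noInterrupts();
  TCCR1A = _BV(COM1A0);             // toggle OC1A on compare match
  TCCR1B = _BV(WGM12) | _BV(CS10);  // CTC with OCR1A as top, clk/1
  OCR1A = top;
  interrupts();
}

void setup() {
  pinMode(11, OUTPUT); // OC1A
}

void loop() {
  // Step through the band, dwelling briefly at each frequency so the hull
  // and transducer see a range of excitation rather than a single tone.
  for (uint32_t f = F_MIN; f <= F_MAX; f += 1000) {
    setFrequency(f);
    delay(20);
  }
}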





Metrics off Grid

10 04 2017

I sail, as many of those who have met me know. I have had the same boat for the past 25 years and I am in the process of changing it. Seeing Isador eventually sold will be a sad day, as she predates even my wife and was the site of our first date. My kids have grown up on her, strapped into car seats as babies, laughing at their parents soaked and at times terrified (the parents that is, not the kids). See https://hallbergrassy38forsale.wordpress.com/ for info.

The replacement will be a new boat, something completely different: a Class 40 hull provisioned for cruising, a Pogo 1250. To get the most out of the new boat and to keep within budget (or to spend more on sails) I am installing the bulk of the electronics myself. In the spare moments I get away from work I have been building a system over the past few months. I want to get the same sort of detailed evidence that I get at work. At work I would expect to record 1000s of metrics in real time from 1000s of systems. A sailing boat is just as real time, except it's off grid: no significant internet, no cloud, only a limited bank of 12V batteries for power and a finite supply of diesel to generate electricity, in reality no 240V, but plenty of wind and solar at times. That focuses the mind and forces the implementation to be efficient. The budget is measured in Amps or mA, as it was for Apollo 13, but hopefully without the consequences.

Modern marine instruments communicate using a variant (NMEA2000) of the CAN Bus present in almost every car for the past 10 years and loved by car hackers. The marine variant adds some physical standards, mostly aimed at easing amateur installation and waterproofing. The underlying protocol and electrical characteristics are the same as a standard CAN Bus. The NMEA2000 standard also adds message types, or PGNs, specific to the marine industry. The standard is private, available only to members, but open source projects like CanBoat on GitHub have reverse engineered most of the messages.

Electrically the CAN Bus is a twisted pair with a 120 Ohm resistor at each end to make the pair appear like an infinite length transmission line (i.e. no reflections). The marine versions of the resistors, or terminators, come with a marine price tag even though they often have poor tolerances. Precision 120 Ohm resistors cost a few pence and, once soldered and encapsulated, will exceed any IP rating that could be applied to the marine versions. The marine bus runs at 250 kb/s, slower than many vehicle CAN bus implementations. Manufacturers add their own variants, for instance Raymarine SeatalkNG, which adds proprietary plugs, wires and connectors.

My new instruments are Raymarine: a few heads and sensors, a chart plotter and an autopilot. They are basic with limited functionality. Had I added a few 0s to the instrument budget I would have gone for NKE or B&G, which have a "Race Processor" and sophisticated algorithms to improve the sensor quality, including wind corrections for heel and mast tip velocity, except that the corrections allowed in the consumer versions are limited to a simple linear correction table. I would be really interested to see the code for those extra 0s, assuming the money doesn't mostly go on the carbon fibre box.

This is where it gets interesting for me. In addition to the CanBoat project on GitHub there is an excellent NMEA2000 project targeting Arduino style boards in C++, and SignalK, which runs on Linux. The NMEA2000 project running on an Arduino Due (ARM Cortex-M3 processor) allows read/write interfacing to the CAN Bus, converting the CAN protocol into a serial protocol that SignalK running on a Pi Zero W can consume. SignalK acts as a conduit to an instance of InfluxDB, which captures boat metrics in a time series database. The metrics are then viewed in Grafana in a web browser on a tablet or mobile device.

I mentioned power earlier. The setup runs on the Pi Zero W with a load average below 1, the board consuming about 200mA (5V). The Arduino Due consumes around 80mA@5V most of the time. There are probably some optimisations to IO that can be performed in InfluxDB to minimise the power consumption further. Details of the setup are in https://github.com/ieb/nmea2000. Below is an example dashboard, taken from the Pi Zero W, showing apparent wind and boat speed from a test dataset.

[Image: Grafana dashboard showing apparent wind and boat speed]

Remember those expensive race processors? The marketing documentation talks of multiple ARM processors. The Pi Zero W has one ARM and the Arduino Due has another. Programmed in C++, the Arduino has ample spare cycles to do interesting things. On sailing boats, performance predicted by the designer, and refined through experience, is presented as a polar performance plot.

[Image: Pogo 1250 polar performance plot]

A polar plot shows expected boat speed for varying true wind angles and speeds; it's a 3D surface. Interpolating that surface of points using bilinear surface interpolation is relatively cheap on the ARM, giving me real time target boat speed and real time % performance at a rate of well above 25Hz. Unfortunately the sensors do not produce data at 25Hz, but the electrical protocol is simple. Boat speed is presented as pulses at 5.5Hz per kn and wind speed as 1.045Hz per kn. Wind angle is a sine/cosine pair centred on 4V with a max of 6V and a min of 2V. Converting all that to AWA, AWS and STW is relatively simple; a sketch of the conversion and the polar interpolation is below.

That data is uncorrected. Observing the Raymarine messages with simulated electrical data, I found its data is also uncorrected: the autopilot outputs attitude information, which is ignored by the wind device. I think I can do better. There are plenty of 9DOF sensors (see SparkFun) available speaking I2C that are easy to attach to an Arduino. Easy, because SparkFun, Adafruit and others have written C++ libraries. 3D gyro and 3D acceleration will allow me to correct the wind instrument for wind shear, heel and mast head velocity (the mast is 18m tall, and cup anemometers have non linear errors with respect to angle of heel). There are several published papers detailing the nature of these errors. I should have enough spare cycles to do this at 25Hz, to clean the sensors and provide some reliable KPIs to hit while at sea.
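As an illustration, the sketch below converts the raw pulse frequencies and the sine/cosine pair into STW, AWS and AWA using the calibration figures above, then bilinearly interpolates a small polar table to produce a target speed and a % performance. The table values, grid points and function names are placeholders for illustration only; they are not the Pogo 1250's actual polar or my production code.

// Sensor conversion and polar interpolation sketch. The calibration constants
// match the text (5.5 Hz/kn boat speed, 1.045 Hz/kn wind speed, sin/cos pair
// centred on 4V with a 2V-6V swing); the polar table is an illustrative placeholder.
#include <math.h>

// Raw sensor conversions.
float boatSpeedKn(float pulseHz)  { return pulseHz / 5.5f; }    // STW
float windSpeedKn(float pulseHz)  { return pulseHz / 1.045f; }  // AWS
float windAngleRad(float sinV, float cosV) {                    // AWA
  // Both channels swing 2V-6V around a 4V centre; the common scale cancels in atan2.
  return atan2f(sinV - 4.0f, cosV - 4.0f);
}

// Tiny illustrative polar: target boat speed (kn) indexed by TWA and TWS.
const int NANGLES = 4, NSPEEDS = 4;
const float twa[NANGLES] = { 40, 60, 90, 120 };   // degrees
const float tws[NSPEEDS] = {  6, 10, 14, 20 };    // knots
const float polar[NANGLES][NSPEEDS] = {
  { 4.5, 6.0, 6.8, 7.2 },
  { 5.5, 7.2, 8.0, 8.6 },
  { 6.0, 7.8, 9.0, 10.5 },
  { 5.8, 7.6, 9.5, 12.0 },
};

// Find the lower index of the grid cell containing v (extrapolates at the edges).
static int bracket(const float *axis, int n, float v) {
  int i = 0;
  while (i < n - 2 && v > axis[i + 1]) i++;
  return i;
}

// Bilinear interpolation over the polar surface.
float targetSpeed(float angleDeg, float speedKn) {
  int ia = bracket(twa, NANGLES, angleDeg);
  int is = bracket(tws, NSPEEDS, speedKn);
  float ta = (angleDeg - twa[ia]) / (twa[ia + 1] - twa[ia]);
  float ts = (speedKn  - tws[is]) / (tws[is + 1] - tws[is]);
  float lo = polar[ia][is]     + ta * (polar[ia + 1][is]     - polar[ia][is]);
  float hi = polar[ia][is + 1] + ta * (polar[ia + 1][is + 1] - polar[ia][is + 1]);
  return lo + ts * (hi - lo);
}

// Performance as a percentage of the polar target.
float performancePct(float stw, float twaDeg, float twsKn) {
  return 100.0f * stw / targetSpeed(twaDeg, twsKn);
}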

A longer term project might be to teach a neural net to steer by letting it watch how I steer, once I have mastered that. Most owners complain their autopilots can't steer as well as a human; reinforcement learning in the style of AlphaGo could change that. I think I heard the Vendee Globe boats got special autopilot software for a fee.

All of this leaves me with more budget to spend on sails, hopefully not batteries. I will only have to try and remember not to hit anything while looking at the data.

 





Referendums are binary so should be advisory

28 06 2016

If you ask for the solution to a multi-faceted question with a binary question, you will get the wrong answer with a probability of 50%. Like a quantum bit, the general population can be in any state based on the last input or observation, and so a referendum, like the EU Referendum just held in the UK, should only ever be advisory. In that referendum there were several themes: immigration, the economy and UK sovereignty. The inputs the general population were given, by various politicians on both sides of the argument, were exaggerated or untrue. It was no real surprise to hear some promises retracted once the winning side had to commit to deliver on them. No £350m per week for the NHS. No free trade deal with the EU without the same rights for EU workers as before. Migration unchanged. The economy was hit; we don't know how much it will be hit over the coming years, and we are all, globally, hoping that in spite of a shock more severe than Lehman Brothers in 2008, the central banks have quietly taken their own experts' advice and put in place sufficient plans to deal with the situation. Had the Bank of England not intervened on Friday morning, the sheer cliff the FTSE 100 was falling off would have continued to near 0. When it did intervene, the index did an impression of a base jumper, parachute open, drifting gently upwards.

[Chart: the FTSE 100 around the referendum result]

The remaining theme is UK sovereignty. Geoffrey Robertson QC makes an interesting argument in the Guardian newspaper that, in order to exit the EU, the UK must under its unwritten constitution vote in parliament to enact Article 50 of the Lisbon Treaty. He argues that the Referendum was always advisory. It will be interesting, given that many of those who voted now regret their decision, to see if they try and abandon the last theme that caused so many to want to leave; the one remaining thing so close to their heart that they were prepared to ignore all the experts and believe the most charismatic individuals willing to tell them what they wanted to hear: UK sovereignty, enacted by parliament by grant of the Sovereign. I watched with interest, not least because the characters involved have many of the characteristics of one of the US presidential candidates.

If you live in the UK, and have time to read the opinion, please make your own mind up how you will ask your MP to vote on your behalf. That is democracy and sovereignty in action. Something we all hold dear.





AI in FM

3 02 2016

Limited experience in either of these fields does not stop thought or research. At the risk of being corrected, from which I will learn, I’ll share those thoughts.

Early AI in FM was broadly expert systems, used to advise on hedging to minimise overnight risk or to identify certain trends based on historical information. Like the early symbolic maths programs (1980s) that revolutionised the way in which theoretical problems could be solved (transformed) without error in a fraction of the time, early AI in FM put an expert with a probability of correctness on every desk. This is not the AI I am interested in. It is only artificial in the sense that it artificially encapsulates the knowledge of an expert; the intelligence is not artificially generated or acquired.

Machine learning covers many techniques. A learning system takes a set of inputs and performs actions based on a set of policies to produce an output. Reinforcement learning https://en.wikipedia.org/wiki/Reinforcement_learning favours the more successful policies by reinforcing the action: good machine, bad machine. The assumption is that the environment is stochastic https://en.wikipedia.org/wiki/Stochastic, or unpredictable due to the influence of randomness.

Inputs and outputs are simple: they are a replay of historical prices. There is no guarantee that future prices will behave in the same way as historical ones, but that is in the nature of a stochastic system. Reward is simple: profit or loss. What is not simple is the machine learning policies. AFAICT, machine learning for a stochastic system with a large amount of randomness can't magic the policies out of thin air. Speech has rules, image processing too, and although there is randomness, policies can be defined. At the purest level, excluding wrappers, financial markets are driven by millions of human brains attempting to make a profit out of buying and selling the same thing without adding any value to that same thing. They are driven by emotion, fear and every aspect of human nature, rationalised by economics, risk, a desire to exploit every new opportunity and a desire to be a part of the crowd. Dominating means trading on infinitesimal margins, exploiting perfect arbitrage as the randomness exposes differences. That doesn't mean the smaller trader can't make money, as the smaller trader does not need to dominate, but it does mean that the larger the trader becomes, the more extreme the trades have to become to maintain the level of expected profits. I said excluding wrappers because they do add value: they adjust the risk, for which the buyer pays a premium over the core assets. That premium allows the inventor of the wrapper to make a service profit in the belief that they can mitigate the risk. It is, when carefully chosen, a fair trade.

The key to machine learning is to find a successful set of policies: a model for success, or a model for the game. The game of Go has a simple model, the rules of the game, so it is possible to have a policy of "try every move". Go is a very large but ultimately bounded Markov Decision Process (MDP) https://en.wikipedia.org/wiki/Markov_decision_process. By trying every move, every theoretical policy can be tested; with feedback and iteration, input patterns can be recognised and successful outcomes can be found. The number of combinations is so large that classical methods are not feasible, but it is not infinite, so reinforcement learning becomes viable. A toy example of a bounded MDP follows.
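To make the terms concrete, here is a toy, bounded MDP solved by value iteration. The states, transition probabilities and rewards are invented purely for illustration; it has nothing to do with Go or with any market model.

// Toy bounded MDP: three states, two actions, known transitions and rewards,
// solved by value iteration. Only to make "policy" and "bounded MDP" concrete.
#include <cstdio>

const int NS = 3, NA = 2;
// P[s][a][s'] = transition probability, R[s][a] = expected reward (made up).
const double P[NS][NA][NS] = {
  {{0.8, 0.2, 0.0}, {0.1, 0.9, 0.0}},
  {{0.0, 0.5, 0.5}, {0.3, 0.0, 0.7}},
  {{1.0, 0.0, 0.0}, {0.0, 0.0, 1.0}},
};
const double R[NS][NA] = { {0.0, 1.0}, {2.0, 0.0}, {0.0, 5.0} };
const double discount = 0.9;

int main() {
  double V[NS] = {0, 0, 0};
  int policy[NS] = {0, 0, 0};
  for (int iter = 0; iter < 200; iter++) {   // iterate towards the fixed point
    for (int s = 0; s < NS; s++) {
      double best = -1e9;
      for (int a = 0; a < NA; a++) {
        double q = R[s][a];
        for (int s2 = 0; s2 < NS; s2++) q += discount * P[s][a][s2] * V[s2];
        if (q > best) { best = q; policy[s] = a; }
      }
      V[s] = best;
    }
  }
  for (int s = 0; s < NS; s++)
    printf("state %d: value %.2f, best action %d\n", s, V[s], policy[s]);
  return 0;
}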

The MDP governing financial markets may be near infinite in size. While attempts to formalise it will appear to be successful, the events of 2007 have shown us that whenever we believe we have found the finite boundaries of an MDP representing trade, an event from just beyond those boundaries proves we have not: the nasty surprise just over the horizon.





What to do when your ISP blocks VPN IKE packets on port 500

12 11 2015

VPN IKE packets are the first phase of establishing a VPN. UDP versions of this packet go out on port 500. Some ISPs (PlusNet) block packets to routers on port 500, probably because they don't want you to run a VPN end point on your home router. However this also breaks a normal 500<->500 UDP IKE conversation. Some routers rewrite the source port of the IKE packet so that they can support more than one VPN; the feature is often called an IPSec application gateway. The router keeps a list of the UDP port mappings using the MAC address of the internal machine, so the first machine to send a VPN IKE packet will get 500<->500, the second 1500<->500, the third 2500<->500 and so on. If your ISP filters packets inbound to your router on UDP 500, the VPN on the first machine will always fail to work. You can trick your router into thinking your machine is the second or later machine by changing the MAC address before you send the first packet. On OS X:

To see the current MAC address use ifconfig, and take a note of it.

Then, on the interface you are using to connect to your network, do:

sudo ifconfig en1 ether 00:23:22:23:87:75

Then try and establish a VPN. This will fail, as your ISP will block the response to your port 500. Then reset your MAC address to its original value:

sudo ifconfig en1 ether 00:23:22:23:87:74

Now when you try and establish a VPN it will send an IKE packet out on 500<->500. The router will rewrite that to 1500<->500 and the VPN server will respond 500<->1500, which will get rewritten to 500<->500 with your machine's IP address.

How to debug

If you still have problems establishing a VPN then tcpdump will show you what is happening. You need to run tcpdump on the local machine and ideally on a network tap between the router and the modem. If you're on fibre or cable, then a hub can be used to establish a tap. If on ADSL, you will need something harder.

On your machine.

sudo tcpdump -i en1 port 500

On the network tap, assuming eth0 is unconfigured and tapping into the hub (this assumes that your connection to the ISP is using PPPoE; tcpdump will decode PPPoE session packets if you tell it to):

sudo tcpdump -i eth0 -n pppoes and port 500

If your router won’t support more than 1 IPSec session, and uses port 500 externally, then you won’t be able to use UDP 500 IKE unless you can persuade your ISP to change their filtering config.





Scaling streaming from a threaded app server

3 01 2013

One of the criticisms often levelled against threaded servers, where a thread or process is bound to a request for the lifetime of that request, is that they don't scale when presented with a classical web scalability problem. In many applications the criticism is justified, not because the architecture is at fault, but because some fundamental rules of implementation have been broken. Threaded servers are good at serving requests where the application thread is bound to the request for the shortest possible time and, while it is bound, no IO waits are encountered. If that rule is adhered to, then some necessities of reliable web applications are relatively trivial to achieve and the server will be capable of delivering throughput that saturates all the resources of the hardware. Unfortunately, all too often application developers break that rule and think the only solution is a much more complex environment that requires event based programming to interleave the IO wait states of thousands of in-progress requests. In the process they dispose of transactions, since the storage system they were using (an RDBMS) can't possibly manage several hundred thousand in-progress transactions, even if there were sufficient memory on the app server to manage the resources associated with each request-to-transaction mapping; unless, that is, they have an infinite hardware budget and there is no such thing as physics.

A typical situation where this happens is where large files are streamed to users over slow connections. The typical web application implementation spins up a thread that performs some queries to validate ACLs on the item, perhaps via SQL or via some in-memory structure. Once the request is validated, that thread, with all its baggage and resources, laboriously copies blocks of bytes out to the client while staying associated with the request. The request-to-thread association is essentially long lived. If the connector managing the http connection knows about keep-alives, it might release the thread-to-connection association at the end of the response, but it can't do that until the response is complete. So a typical application serving large files to users will rapidly run out of spare threads, giving threaded servers a bad name. That's bad in so many ways. Trickled responses can't be cached, so they have to be regenerated every time. The application runs like a dog, because a tiny part of its behaviour is always a resource hog. Anyone deploying in production will find simple DoS attacks are easy to execute by just holding down the refresh button on a browser.

It doesn't have to be like that. The time taken for the application to process the request and send the very first byte should be no greater than for any other request processed by the application. Most Java based applications can get that response time below 10ms, and responses below 1ms are not too hard on modern hardware with a well structured application. To do this with a streamed body is relatively simple: validate the request, generate a response header in the threaded application server that instructs the connector handling the front end http connection to deliver content from an internal location, commit the response with no body, and detach the thread servicing the request, freeing it to service the next request. Implemented efficiently, that operation involves hardly any IO waits, so threads and CPU cores are not left idle waiting for IO.

If the bitstream to be sent is stored as a file, then you can use X-Sendfile, which originated in Lighttpd, with close implementations in Apache Httpd (mod_xsendfile) and nginx (X-Accel-Redirect). If the file is stored at a remote http location then some other delivery mechanism can be used. Obviously the http connector (any of the above) should be configured to handle a long lived connection delivering bytes slowly.

In the blog post prior to this I mentioned that DSpace 3 could be made to serve public content via a cache, exposing literally thousands of assets to slow download. I am using this approach to ensure that the back end DSpace server does not get involved with streaming content, which might be small PDFs but could just as well be multi-GB video files or research datasets. The assets in DSpace have been stored on a mountable file system, allowing a front end http server to deliver the content without reference to the application server. I have used the following snippets to set and commit the response headers after ACLs have been processed. I also deliver such content via an HMAC-secured redirect to ensure that user-submitted content in the Digital Repository can't maliciously steal administrative sessions. Generation of the HMAC-secured redirect takes in the region of 50ms, during which time resources are dedicated; if the target is public, the redirect pointer may be cached. Conversion of the HMAC-secured redirect into an X-Sendfile header takes in the region of 1ms with no requirement for database access. Serving the bitstream itself introduces IO waits, but the redirects can be sent to simple evented httpd servers in a farm. If all the app server is doing is processing the HMAC-secured redirects, then a few hundred threads at 1ms per request can handle significant traffic in the app server layer. I'll leave you to do the math.

The same technique could be used for any long lived http request, eliminating the need to use an evented application server stack and abandon transactions. Obviously, if your application server code has become so complex that the non-streaming requests take so long they limit throughput, then this isn't going to help.

For Apache mod_xsendfile:

protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
  response.setHeader("X-Sendfile", assetStoreBase + path);
  response.setHeader("Content-Type", (String) meta.get("content-type"));
  if (meta.has("filename")) {
    response.setHeader("Content-Disposition", "attachment; filename=\"" + meta.get("filename") + "\"");
  }
  // That's it, the response can be committed.
}


For nginx:

 

protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
    response.setHeader("X-Accel-Redirect", assetStoreBase + path);
    response.setHeader("X-Accel-Buffering", buffering);
    response.setHeader("X-Accel-Limit-Rate", rateLimit);
    response.setHeader("X-Accel-Expires", cacheExpires);
    response.setHeader("Content-Type", (String) meta.get("content-type"));
    if (meta.has("filename")) {
        response.setHeader("Content-Disposition", "attachment; filename=\"" + meta.get("filename") + "\"");
    }
}


For Lighttpd:

 

protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
   response.setHeader("X-LIGHTTPD-send-file", assetStoreBase + path);
   response.setHeader("Content-Type", (String) meta.get("content-type"));
   if (meta.has("filename")) {
       response.setHeader("Content-Disposition", "attachment; filename=\"" + meta.get("filename") + "\"");
   }
}




Making the Digital Repository at Cambridge Fast(er)

18 12 2012

For the past month or so I have been working on upgrading the Digital Repository at the University of Cambridge Library from a heavily customised version of DSpace 1.6 to a minimally customised version of DSpace 3. The local customisations were deemed necessary to achieve the performance required to host the 217,000 items and the 4M metadata records in the Digital Repository. DSpace 3, which was released at the end of November 2012, showed promise in removing the need for many of the local patches. I am happy to report that this has proved to be the case and we are now able to cover all of our local use cases using a stock DSpace release, with local customisations and optimisations isolated into an overlay. One problem, however, remains: performance.

[Image: list of commandments from Lev. 9–12; numbered Halakhot 8–18; includes a list of where they appear in Maimonides' Book of Commandments and the Mishneh Torah, and possibly references to another work.]

The current Digital Repository contains detailed metadata and is focused on preservation of artifacts. Unlike the more popular Digital Library, which has generated significant media interest in recent weeks with items like "A 2,000-year-old copy of the 10 Commandments", the Digital Repository does not yet have significant traffic. That may change in the next few months as the UK government is taking a lead in the Open Access agenda, which may prompt the rest of the world to follow. Cambridge, with its leading position in global research, will be depositing its output into its Digital Repository. Hence, a primary concern of the upgrade process was to ensure that the Digital Repository could handle the expected increase in traffic driven by Open Access.

Some basics

DSpace is a Java web application running in Tomcat. Testing Tomcat with a trivial application reveals that it will deliver content at a peak rate of anything up to 6K pages per second. If that rate were sustained for 24h, 518M pages would have been served. Unfortunately traffic is never evenly distributed and applications always add overhead, but this gives an idea of the basics. At 1K pages/s, 86M pages would be served in 24h. Many real Java webapps are capable of jogging along happily at that rate. Unfortunately DSpace is not. It's an old code base that has focused on the preservation use case. Many page requests perform heavy database access and the flexible Cocoon based XMLUI is resource intensive. The modified 1.6 instance using a JSP UI delivers pages at 8/s on a moderate 8 core box, and the unmodified DSpace 3 using the XMLUI at 15/s on a moderate 4 core box. Surprisingly, because the application does not have any web 2.0 functionality to speak of, even at that low level it feels quite nippy, as each page is a single request once the page assets (css/js/png etc) are distributed and cached. With the Cambridge Digital Library regularly serving 1M pages per hour, Open Access on the Digital Repository at Cambridge will change that. Overloaded, DSpace remains solid and reliable, but slow.

Apache Httpd mod_cache to the rescue

Fortunately this application is a publishing platform. For anonymous users the data changes very slowly and the number of users that log into the application is low. The DSpace URLs are well structured and predictable with no major abuse of the HTTP standard. Even the search operations backed by Solr are well structured. The current data set of 217K items published as html pages represents about 3.9GB of uncompressed data, less if the responses are stored and delivered gzipped. Consequently, configuring Apache Httpd with mod_cache to cache page responses for anonymous users has a dramatic impact on throughput. A trivial test with Apache Benchmark over 100 concurrent connections indicates a peak throughput of around 19K pages per second. I will leave you to do the rest of the maths; I think the network will be the limiting factor. A sketch of the sort of configuration involved is below.
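For illustration only, this is roughly the shape an Apache 2.2 mod_disk_cache configuration for the anonymous pages could take. The paths, lifetimes and the login URL shown are assumptions rather than the production configuration, and authenticated traffic is kept on a separate https virtual host as described under Gotchas below.

# Sketch of an Apache 2.2 disk cache for anonymous DSpace pages (values are assumptions).
LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so

<IfModule mod_disk_cache.c>
    CacheEnable disk /
    CacheRoot /var/cache/apache2/dspace
    CacheDirLevels 2
    CacheDirLength 1
    # DSpace pages carry few caching hints, so give anonymous pages a default
    # lifetime and don't insist on a Last-Modified header.
    CacheDefaultExpire 3600
    CacheIgnoreNoLastMod On
    # Never cache the URL space used for authentication (path is an assumption).
    CacheDisable /password-login
</IfModule>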

Losing statistics

There are some disadvantages to this approach. Deep within DSpace, statistics are recorded. Since the cache will serve most of the content for anonymous users, these statistics no longer make sense. I have misgivings about the way in which the statistics are being collected anyway: if the request is serviced by Cocoon, the access is recorded in a Solr core by performing an update operation on the core. This is one of the reasons why the throughput is slow, but I also have my doubts that this is a good way of recording statistics. Lucene indexes are often bounded by the cardinality of the index, and I worry that over time the Lucene indexes behind the Solr instance recording statistics will overflow available memory. I would have thought, but have no evidence, that recording stats in a Big Data way would be more scalable, and in some ways just as easy for small institutions (i.e. append only log files, periodically processed with Map Reduce if required). Alternatively, Google Analytics.

Gotchas

Before you rush off and mod_cache all your slow applications, there is one fly in the ointment. To get this to work you have to separate anonymous responses from authenticated responses. You also have to perform that separation based on the request and nothing else, and you have to ensure that your cache never gets polluted, otherwise anonymous users, including the Google spider, will see authenticated responses. There is precious little in an http request that a server can influence: it can set cookies, and it can change the URL. Applications could segment URL space based on the role of the user, but that is ugly from a URI point of view; suddenly there are 2 URIs pointing to the same resource. Setting a cookie doesn't work, since the response that would have set the cookie is cached, hopefully without the cookie. The solution that worked for us was to segment authenticated requests onto https and leave anonymous requests on http, then configure the URL space used to perform authentication so that it is not cached, and ensure an anonymous user never accesses https content and an authenticated user never accesses http content. The latter restriction ensures no authenticated content ever gets cached, and the former ensures that the expected tsunami of anonymous users doesn't impact the management of the repository. Much though I would have liked to serve everything over a single protocol on one virtual host, the approach is generally applicable to all webapps.

I think the key message is: if you can host using Apache Httpd with mod_mem_cache, or even the disk version, then there is no need to jump through hoops to use exotic application stacks. My testing of DSpace 3 was done with Apache HTTPD 2.2 and all the other components running on a single 4 core box probably well past its sell by date.