Referendums are binary, so they should be advisory

28 06 2016

If you ask for the answer to a multi-faceted question with a binary question, you will get the wrong answer with a probability of 50%. Like a quantum bit, the general population can be in any state based on the last input or observation, and so a referendum, like the EU Referendum just held in the UK, should only ever be advisory. In that Referendum there were several themes: immigration, the economy and UK Sovereignty. The inputs the general population were given, by various politicians on both sides of the argument, were exaggerated or untrue. It was no real surprise to hear some promises retracted once the winning side had to commit to delivering on them. No £350m per week for the NHS. No free trade deal with the EU without the same rights for EU workers as before. Migration unchanged. The economy was hit; we don’t know how hard it will be hit over the coming years, and we are all, globally, hoping that in spite of a shock more severe than Lehman Brothers in 2008, the central banks have quietly taken their own experts’ advice and put in place sufficient plans to deal with the situation. Had the Bank of England not intervened on Friday morning, the FTSE 100 would have continued falling off a sheer cliff towards zero. When it did intervene, the index did an impression of a BASE jumper, parachute open, drifting gently upwards.

[Chart: FTSE 100, June 2016]

The remaining theme is UK Sovereignty. Geoffrey Robertson QC makes an interesting argument in the Guardian newspaper that, in order to exit the EU, the UK must, under its unwritten constitution, vote in parliament to enact Article 50 of the Lisbon Treaty. He argues that the Referendum was always advisory. It will be interesting, given that many of those who voted now regret their decision, to see whether they try and abandon the last theme that caused so many to want to leave: the one remaining thing so close to their hearts that they were prepared to ignore all the experts and believe the most charismatic individuals willing to tell them what they wanted to hear. UK Sovereignty, enacted by parliament by grant of the Sovereign. I watched with interest, not least because the characters involved have many of the characteristics of one of the US presidential candidates.

If you live in the UK, and have time to read the opinion, please make up your own mind about how you will ask your MP to vote on your behalf. That is democracy and sovereignty in action. Something we all hold dear.

AI in FM

3 02 2016

Limited experience in either of these fields (AI or financial markets) does not stop thought or research. At the risk of being corrected, from which I will learn, I’ll share those thoughts.

Early AI in FM was broadly expert systems, used to advise on hedging to minimise overnight risk, or to identify certain trends based on historical information. Like the early symbolic maths programs of the 1980s that revolutionised the way theoretical problems could be solved (transformed) without error in a fraction of the time, early AI in FM put an expert, with a probability of correctness, on every desk. This is not the AI I am interested in. It is only artificial in the sense that it artificially encapsulates the knowledge of an expert. The intelligence is not artificially generated or acquired.

Machine learning covers many techniques. Supervised learning takes a set of inputs and allows the system to perform actions based on a set of policies to produce an output. Reinforcement learning favours the more successful policies by reinforcing the action. Good machine, bad machine. The assumption is that the environment is stochastic, or unpredictable due to the influence of randomness.

Inputs and outputs are simple. They are a replay of historical prices. There is no guarantee that future prices will behave in the same way as historical prices, but that is in the nature of a stochastic system. Reward is simple: profit or loss. What is not simple is the machine learning policies. AFAICT, machine learning, for a stochastic system with a large amount of randomness, can’t magic the policies out of thin air. Speech has rules, image processing too, and although there is randomness, policies can be defined. At the purest level, excluding wrappers, financial markets are driven by the millions of human brains attempting to make a profit out of buying and selling the same thing without adding any value to that same thing. They are driven by emotion, fear and every aspect of human nature, rationalised by economics, risk, a desire to exploit every new opportunity, and a desire to be a part of the crowd. Dominating means trading on infinitesimal margins, exploiting perfect arbitrage as the randomness exposes differences. That doesn’t mean the smaller trader can’t make money, as the smaller trader does not need to dominate, but it does mean that the larger the trader becomes, the more extreme the trades have to become to maintain the level of expected profits. I said excluding wrappers because they do add value: they adjust the risk, for which the buyer pays a premium over the core assets. That premium allows the inventor of the wrapper to make a service profit in the belief that they can mitigate the risk. It is, when carefully chosen, a fair trade.
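
To make the inputs, reward and policies concrete, here is a minimal, purely illustrative sketch of a Q-learning style loop over a price replay in Python. The state discretisation, the action set and the parameters are my own assumptions for the sake of the example; it is a toy, not a claim that such a policy makes money.

import random
from collections import defaultdict

def train(prices, episodes=50, alpha=0.1, gamma=0.95, epsilon=0.1):
    actions = (-1, 0, 1)            # short, flat, long: one unit of the asset
    q = defaultdict(float)          # Q[(state, action)] -> estimated value

    def state(t):
        # crude state: did the last price move go up or down?
        return 1 if prices[t] > prices[t - 1] else -1

    for _ in range(episodes):
        for t in range(1, len(prices) - 1):
            s = state(t)
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(s, x)])
            reward = a * (prices[t + 1] - prices[t])   # P&L of holding position a for one step
            s_next = state(t + 1)
            best_next = max(q[(s_next, x)] for x in actions)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
    return dict(q)

print(train([100, 101, 100.5, 102, 101.5, 103, 104, 102]))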

The key to machine learning is to find a successful set of policies: a model for success, or a model for the game. The game of Go has a simple model, the rules of the game, so it’s possible to have a policy of “try everything”. Go is a very large but ultimately bounded Markov Decision Process (MDP). Try every move. By trying every move, every theoretical policy can be tested. With feedback and iteration, input patterns can be recognised and successful outcomes can be found. The number of combinations is very large but finite: so large that classical methods are not feasible, but not infinite, so reinforcement learning becomes viable.

The MDP governing financial markets may be near infinite in size. Attempts to formalise it will appear to be successful, but the events of 2007 showed us that whenever we believe we have found the finite boundaries of an MDP representing trade, the market produces a state one step beyond them. Just as a boundary plus one is no longer the boundary we started with, what we thought was the whole problem turns out not to be. The nasty surprise is always just over the horizon.

What to do when your ISP blocks VPN IKE packets on port 500

12 11 2015

VPN IKE packets are the first phase of establishing a VPN. UDP versions of this packet go out on port 500. Some ISPs (PlusNet, for example) block packets to routers on port 500, probably because they don’t want you to run a VPN endpoint on your home router. However, this also breaks a normal 500<->500 UDP IKE conversation. Some routers rewrite the source port of the IKE packet so that they can support more than one VPN; the feature is often called an IPSec application gateway. The router keeps a list of the UDP port mappings using the MAC address of the internal machine, so the first machine to send a VPN IKE packet will get 500<->500, the second 1500<->500, the third 2500<->500 and so on. If your ISP filters packets inbound to your router on UDP 500, the VPN on the first machine will always fail to work. You can trick your router into thinking your machine is the second or later machine by changing the MAC address before you send the first packet. On OS X:

To see the current MAC address, use ifconfig and take a note of it.

Then, on the interface you are using to connect to your network, run:

sudo ifconfig en1 ether 00:23:22:23:87:75

Then try to establish a VPN. This will fail, as your ISP will block the response to your port 500. Then reset your MAC address to its original value:

sudo ifconfig en1 ether 00:23:22:23:87:74

Now when you try to establish a VPN it will send an IKE packet out on 500<->500. The router will rewrite that to 1500<->500, the VPN server will respond 500<->1500, and that will get rewritten to 500<->500 with your machine’s IP address.
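
If you want to convince yourself that inbound UDP 500 really is being filtered before fiddling with MAC addresses, a rough check is to echo a UDP datagram between port 500 at home and port 500 on a host you control outside the ISP. The sketch below is only an outline: the external host is a placeholder, both ends need root to bind port 500, and your router’s IPSec application gateway may itself rewrite the source port, so interpret the result with the port-mapping behaviour above in mind.

import socket
import sys

def listen(port=500):
    # run this on a host outside your ISP; needs root to bind port 500
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", port))
    while True:
        data, addr = s.recvfrom(2048)
        print("echoing", len(data), "bytes back to", addr)
        s.sendto(data, addr)

def send(host, port=500):
    # run this at home with sudo, so the probe leaves from source port 500
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", port))
    s.settimeout(5.0)
    s.sendto(b"probe", (host, port))
    try:
        print("reply received:", s.recvfrom(2048))
    except socket.timeout:
        print("no reply: inbound UDP 500 may be filtered somewhere on the path")

if __name__ == "__main__":
    listen() if sys.argv[1] == "listen" else send(sys.argv[2])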

How to debug

If you still have problems establishing a VPN, then tcpdump will show you what is happening. You need to run tcpdump on the local machine and ideally on a network tap between the router and the modem. If you’re on fibre or cable, then a hub can be used to establish a tap. If you’re on ADSL, you will need something more specialised.

On your machine.

sudo tcpdump -i en1 port 500

On the network tap, assuming eth0 is unconfigured and tapping into the hub. This assumes that your connection to the ISP is using PPPoE; tcpdump will decode PPPoE session packets if you tell it to.

sudo tcpdump -i eth0 -n pppoes and port 500

If your router won’t support more than 1 IPSec session, and uses port 500 externally, then you won’t be able to use UDP 500 IKE unless you can persuade your ISP to change their filtering config.

Scaling streaming from a threaded app server

3 01 2013

One of the criticisms often levelled against threaded servers, where a thread or process is bound to a request for the lifetime of that request, is that they don’t scale when presented with a classical web scalability problem. In many applications the criticism is justified, not because the architecture is at fault, but because some fundamental rules of implementation have been broken. Threaded servers are good at serving requests where the application thread has to be bound to the request for the shortest possible time and, while it is bound, no IO waits are encountered. If that rule is adhered to, then some necessities of reliable web applications are relatively trivial to achieve and the server will be capable of delivering throughput that saturates all the resources of the hardware. Unfortunately, all too often application developers break that rule and think the only solution has to be a much more complex environment that requires event-based programming to interleave the IO wait states of thousands of in-progress requests. In the process they dispose of transactions, since the storage system they were using (an RDBMS) can’t possibly manage several hundred thousand in-progress transactions even if there was sufficient memory on the app server to manage the resources associated with each request-to-transaction mapping… unless they have an infinite hardware budget and there is no such thing as physics.

A typical situation where this happens is where large files are streamed to users over slow connections. The typical web application implementation spins up a thread that performs some queries to validate ACLs on the item, perhaps via SQL or via some in-memory structure. Once the request is validated, that thread, with all its baggage and resources, laboriously copies blocks of bytes out to the client while staying associated with the request. The request-to-thread association is essentially long lived. If the connector managing the http connection knows about keep-alives, it might release the thread-to-connection association at the end of the response, but it can’t do that until the response is complete. So a typical application serving large files to users will rapidly run out of spare threads, giving threaded servers a bad name. That’s bad in so many ways. Trickled responses can’t be cached, so they have to be regenerated every time. The application runs like a dog, because a tiny part of its behaviour is always a resource hog. Anyone deploying in production will find simple DoS attacks are easy to execute by just holding down the refresh button on a browser.

It doesn’t have to be like that. The time taken for the application to process the request and send the very first byte should be no greater than for any other request processed by the application. Most Java based applications can get that response time below 10ms, and responses below 1ms are not too hard on modern hardware with a well structured application. To do this with a streamed body is relatively simple. Validate the request, then generate a response header in the threaded application server that instructs the connector handling the front end http connection to deliver content from an internal location. Commit the response with no body, and detach the thread servicing the request from the request, freeing it to service the next request. Since, if implemented efficiently, there are hardly any IO waits involved in that operation, there is little to be gained from having a thread or CPU core do other processing while waiting for IO.

If the bitstream to be sent is stored as a file, then you can use X-Sendfile, which originated in Lighttpd, with close implementations in Apache Httpd (mod_xsendfile) and nginx (X-Accel-Redirect). If the file is stored at a remote http location then some other delivery mechanism can be used. Obviously the http connector (any of the above) should be configured to handle a long lived connection delivering bytes slowly.

In the blog post prior to this I mentioned that DSpace 3 could be made to serve public content via a cache, exposing literally thousands of assets to slow download. I am using this approach to ensure that the back end DSpace server does not get involved with streaming content, which might be small PDFs but could just as well be multi-GB video files or research datasets. The assets in DSpace have been stored on a mountable file system, allowing a front end http server to deliver the content without reference to the application server. I have used the following snippets to set and commit the response headers after ACLs have been processed. I also deliver such content via an HMAC-secured redirect, to ensure that content submitted into the Digital Repository by users can’t be used to maliciously steal administrative sessions. Generation of the HMAC-secured redirect takes in the region of 50ms, during which time resources are dedicated. If the target is public, the redirect pointer may be cached. Conversion of the HMAC-secured redirect into an X-Sendfile header takes in the region of 1ms with no requirement for database access. Serving the bitstream itself introduces IO waits, but that work can be handed to simple evented httpd servers in a farm. If all the app server is doing is processing the HMAC-secured redirects, then a few hundred threads at 1ms per request can handle significant traffic in the app server layer. I’ll leave you to do the math.
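
The HMAC-secured redirect is, in outline, just the asset path and an expiry signed with a shared secret, so that a later request can be verified without any database access. The actual implementation here is Java inside DSpace; the sketch below is only an illustration of the idea in Python, and the secret, parameter names and URL layout are made up.

import hashlib, hmac, time

SECRET = b"change-me"   # shared between whatever signs and whatever verifies the redirect

def sign_redirect(path, ttl=300):
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify_redirect(path, expires, sig):
    if int(expires) < time.time():
        return False
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

url = sign_redirect("/assets/12345/bitstream.pdf")
print(url)
path, query = url.split("?")
params = dict(p.split("=") for p in query.split("&"))
print(verify_redirect(path, params["expires"], params["sig"]))   # True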

The same technique could be used for any long lived http request, eliminating the need to use an evented application server stack and abandon transactions. Obviously, if your application server code has become so complex that the non-streaming requests are taking so long that they limit throughput, then this isn’t going to help.

For Apache mod_xsendfile:

protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
  response.setHeader("X-Sendfile", assetStoreBase + path);
  response.setHeader("Content-Type", (String) meta.get("content-type"));
  if (meta.has("filename")) {
    response.setHeader("Content-Disposition", "attachment; filename=" + meta.get("filename"));
  }
  // that's it, the response can be committed
}

For nginx:


protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
  response.setHeader("X-Accel-Redirect", assetStoreBase + path);
  response.setHeader("Content-Type", (String) meta.get("content-type"));
  if (meta.has("filename")) {
    response.setHeader("Content-Disposition", "attachment; filename=" + meta.get("filename"));
  }
}

For Lighttpd:


protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
  response.setHeader("X-LIGHTTPD-send-file", assetStoreBase + path);
  response.setHeader("Content-Type", (String) meta.get("content-type"));
  if (meta.has("filename")) {
    response.setHeader("Content-Disposition", "attachment; filename=" + meta.get("filename"));
  }
}

Making the Digital Repository at Cambridge Fast(er)

18 12 2012

For the past month or so I have been working on upgrading the Digital Repository at the University of Cambridge Library from a heavily customised version of DSpace 1.6 to a minimally customised version of DSpace 3. The local customisations were deemed necessary to achieve the performance required to host the 217,000 items and the 4M metadata records in the Digital Repository. DSpace 3, which was released at the end of November 2012, showed promise in removing the need for many of the local patches. I am happy to report that this has proved to be the case, and we are now able to cover all of our local use cases using a stock DSpace release, with local customisations and optimisations isolated into an overlay. One problem, however, remains: performance.

[Image: a list of commandments from Lev. 9–12; numbered Halakhot 8–18; includes a list of where they appear in Maimonides’ Book of Commandments and the Mishneh Torah, and possibly references to another work.]

The current Digital Repository contains detailed metadata and is focused on preservation of artifacts. Unlike the more popular Digital Library, which has generated significant media interest in recent weeks with items like “A 2,000-year-old copy of the 10 Commandments”, the Digital Repository does not yet have significant traffic. That may change in the next few months, as the UK government is taking a lead on the Open Access agenda, which may prompt the rest of the world to follow. Cambridge, with its leading position in global research, will be depositing its output into its Digital Repository. Hence, a primary concern of the upgrade process was to ensure that the Digital Repository could handle the expected increase in traffic driven by Open Access.

Some basics

DSpace is a Java web application running in Tomcat. Testing Tomcat with a trivial application reveals that it will deliver content at a peak rate of anything up to 6K pages per second. If that rate were sustained for 24h, 518M pages would be served. Unfortunately traffic is never evenly distributed and applications always add overhead, but this gives an idea of the basics. At 1K pages/s, 86M pages would be served in 24h. Many real Java webapps are capable of jogging along happily at that rate. Unfortunately DSpace is not. It’s an old code base that has focused on the preservation use case. Many page requests perform heavy database access, and the flexible Cocoon-based XMLUI is resource intensive. The modified 1.6 instance using a JSP UI delivers pages at 8/s on a moderate 8 core box, and the unmodified DSpace 3, using the XMLUI, at 15/s on a moderate 4 core box. Surprisingly, because the application does not have any web 2.0 functionality to speak of, even at that low level it feels quite nippy, as each page is a single request once the page assets (css/js/png etc) are distributed and cached. With the Cambridge Digital Library regularly serving 1M pages per hour, Open Access will change that for the Digital Repository at Cambridge. Overloaded, DSpace remains solid and reliable, but slow.

Apache Httpd mod_cache to the rescue

Fortunately this application is a publishing platform. For anonymous users the data changes very slowly, and the number of users that log into the application is low. The DSpace URLs are well structured and predictable, with no major abuse of the HTTP standard. Even the search operations backed by Solr are well structured. The current data set of 217K items published as html pages represents about 3.9GB of uncompressed data, less if the responses are stored and delivered gzipped. Consequently, configuring Apache HTTPD with mod_cache to cache page responses for anonymous users has a dramatic impact on throughput. A trivial test with Apache Benchmark over 100 concurrent connections indicates a peak throughput of around 19K pages per second. I will leave you to do the rest of the maths. I think the network will be the limiting factor.

Losing statistics

There are some disadvantages to this approach. Deep within DSpace, statistics are recorded. Since the cache will serve most of the content for anonymous users, these statistics no longer make sense. I have misgivings about the way in which the statistics are being collected, since if the request is serviced by Cocoon, the access is recorded in a Solr core by performing an update operation on that core. This is one of the reasons why the throughput is slow, but I also have my doubts that this is a good way of recording statistics. Lucene indexes are often bounded by the cardinality of the index, and I worry that over time the Lucene indexes behind the Solr instance recording statistics will overflow available memory. I would have thought, but have no evidence, that recording stats in a Big Data way would be more scalable, and in some ways just as easy for small institutions (i.e. append-only log files, periodically processed with MapReduce if required). Alternatively, Google Analytics.
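
As a sketch of what the append-only approach could look like (the file layout and field names are my own assumptions, not anything DSpace does):

import json, time
from collections import Counter

LOG = "access-log.jsonl"

def record_access(item_id, user_agent=""):
    # one JSON line per access; appending is cheap and never rewrites the index
    with open(LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "item": item_id, "ua": user_agent}) + "\n")

def aggregate():
    # periodic batch job: count hits per item from the raw log
    counts = Counter()
    with open(LOG) as f:
        for line in f:
            counts[json.loads(line)["item"]] += 1
    return counts

record_access("1810/12345")
print(aggregate().most_common(10))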


Before you rush off and mod_cache all your slow applications, there is one fly in the ointment. To get this to work you have to separate anonymous responses from authenticated responses. You also have to perform that separation based on the request and nothing else, and you have to ensure that your cache never gets polluted, otherwise anonymous users, including a Google spider, will see authenticated responses. There is precious little in an http request that a server can influence. It can set cookies, and it can change the URL. Applications could segment URL space based on the role of the user, but that is ugly from a URI point of view: suddenly there are two URIs pointing to the same resource. Setting a cookie doesn’t work, since the response that would have set the cookie is cached, hopefully without the cookie. The solution that worked for us was to segment authenticated requests onto https and leave anonymous requests on http, then configure the URL space used to perform authentication so that it would not be cached, and ensure that an anonymous user never accesses https content and an authenticated user never accesses http content. The latter restriction ensures no authenticated content ever gets cached, and the former ensures that the expected tsunami of anonymous users doesn’t impact the management of the repository. Much though I would have liked to serve everything over a single protocol on one virtual host, the approach is generally applicable to all webapps.

I think the key message is: if you can host using Apache Httpd with mod_mem_cache, or even the disk version, then there is no need to jump through hoops to use exotic application stacks. My testing of DSpace 3 was done with Apache HTTPD 2.2 and all the other components running on a single 4 core box, probably well past its sell-by date.

AIS NMEA and Google Maps API

13 11 2012

Those who know me will know I like nothing better than to get well offshore, away from any hope of network connectivity. It’s like stepping back 20 years to before the internet, and it’s blissfully quiet. The only problem is that 20 years ago it was too quiet. Crossing the English Channel in thick fog with no radar, and a Decca unit that could only be relied on to give you a fix to within a mile some of the time, made you glad to be alive when you stood on solid ground again. Rocks and strong currents round the Alderney Race were not nearly as frightening as the St Malo ferry looming out of the fog, horns blaring, as if there was anything a 12m yacht could do in reply. After a few white-knuckle trips I bought a 16nm radar, which turned a passage among the unseen steel monsters of the Channel into something like being a tortoise crossing a freeway while reading an iPod. I don’t know which was better: trying to guess if the container ship making 25kn, 10nm away, was going to pass in front of or behind you, or placing your trust in the unseen ship’s crew who had spent the past 4 days rolling north through Biscay with no sleep.

Those going to sea today will not experience any of this excitement. They will probably have at least 3 active GPS receivers on board, which will be able to tell them whether they are at the bow, the stern, or sitting on or standing in the heads (the W/C, for landlubbers). The second bit of kit that they will probably have is an AIS receiver. All ships now carry AIS transmitters, as do some yachts; vanity domains for boat owners. AIS transmits on two marine VHF channels, 161.975 MHz and 162.025 MHz, using variants of TDMA sharing. The data that is transmitted is in a standard form, NMEA0183, which is the same standard used in many older marine systems. In the case of AIS, the payload is 8-bit text encoding 6-bit data in a 168-bit payload, with a checksum. The information that’s broadcast is mostly the position, speed, course and identification of the sender, which, although it’s intended mainly as an instant communication of intentions between large ships, is also invaluable to any smaller craft in fog. It’s like being on the bridge of every ship in VHF range at the same time, and it’s relatively simple to calculate the closest point of approach (CPA) for all targets. Before affordable radar, we used to guess CPA using our ears, and sometimes smell (you can smell a supertanker in a strong wind). With radar we used to try and guess the path of an approaching target from the screen. Easy on a stable platform, not so easy when your radome is doing the samba. Today we have speed and real course, often to three significant figures.
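
For the curious, a minimal sketch of a CPA calculation from relative position and velocity, using a flat-earth approximation in metres and metres per second; the numbers are made up:

import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """pos = (x, y) in metres, vel = (vx, vy) in m/s; returns (t_cpa in s, d_cpa in m)."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]   # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]   # relative velocity
    v2 = vx * vx + vy * vy
    t = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
    return t, math.hypot(rx + vx * t, ry + vy * t)

# Target 5nm dead ahead, both vessels closing at about 10kn each (5.1 m/s):
print(cpa((0, 0), (0, 5.1), (0, 9260), (0, -5.1)))   # roughly (908s, 0m): collision course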

I now live about 20km from the sea, north of Sydney Harbour. VHF is line of sight, so I would have thought it was not going to be possible to receive VHF from that distance, taking into account buildings and trees. Normal marine whip aerials probably would not work, but a strip of 300 Ohm ribbon cable cut precisely to length and tuned to resonate with 1/4 wavelengths at 162 MHz is receiving and decoding signals from Newcastle to the north and Wollongong to the south, around 80km in each direction. Not bad for $5 worth of cable. The receiver is a cheap headless unit from ACR that sends the NMEA0183 sentences down a USB/serial port. A simple Python script receives and decodes the NMEA0183 stream, converting it (using code from the GPSD project) into JSON containing the current position, speed, MMSI number and a host of other information. All very interesting, but not very visual. I could just use one of the many free apps to display the NMEA0183 information over TCP/UDP, but they are limited.
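
The framing side of that decoding is simple enough to sketch: each AIS sentence is a normal NMEA0183 sentence with an XOR checksum, and only the 6-bit payload decode (which I borrow from GPSD) is the hard part. The sentence body below is illustrative, not a real vessel.

from functools import reduce

def nmea_checksum(body):
    # XOR of every character between the leading '!' (or '$') and the '*'
    return reduce(lambda acc, ch: acc ^ ord(ch), body, 0)

def checksum_ok(sentence):
    body, _, given = sentence.strip().rpartition("*")
    return format(nmea_checksum(body[1:]), "02X") == given.upper()

body = "AIVDM,1,1,,A,13u?etPv2;0n:dDPwUM1U1Cb069D,0"
sentence = "!%s*%02X" % (body, nmea_checksum(body))
print(sentence, checksum_ok(sentence))   # prints the sentence and True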

Google Maps v3 API allows Javascript to create an overlay of markers. So a few hundred lines of Javascript load the JSON file into a browser every 15s and display the results as an overlay on Google Maps. Ships are red with a vector; the wake is green. Sydney is a good place to test this as nearly all the ferries in the harbour transmit AIS messages all the time. It’s a busy place. Obviously using Google Maps 100nm offshore isn’t going to work. The next step is to load the Python onto a Raspberry Pi board, plug in a USB Wifi dongle and create my own mobile wifi hotspot which an iPad loaded with marine charts can connect to, all for significantly less than 1 amp. If you’re on 12V you care about juice. Having IP offshore does defeat the purpose of being there, so I may have to turn it off from time to time to remind myself I am alive.
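
The NMEA-to-TCP part of that hotspot can be a very small server: read sentences from the serial receiver and repeat them to any connected client, since most chart apps accept raw NMEA over TCP. The sketch below is only an outline and assumes pyserial; the device path, baud rate and port number are guesses you would need to adjust.

import socket, threading

import serial  # pyserial

clients = []

def serve(port=10110):
    # accept chart-app connections and remember them
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        clients.append(conn)

threading.Thread(target=serve, daemon=True).start()

# repeat every NMEA line from the receiver to every connected client
with serial.Serial("/dev/ttyUSB0", 38400) as rx:
    for line in rx:
        for c in clients[:]:
            try:
                c.sendall(line)
            except OSError:
                clients.remove(c)
                c.close()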

This interface was just an exercise to validate that the NMEA to TCP over Wifi server works. If you want to know when your ship will come in, visit, but don’t try and use it at sea.

HowTo: Quickly resolve what a Sling/OSGi bundle needs

30 10 2012

Resolving dependencies for an OSGi bundle can be hard at times, especially when working with legacy code. The sure-fire way of finding all the dependencies is to spin the bundle up in an OSGi container, but that requires building the bundle and deploying it. Here is a quick way of doing it with Maven that may at first sound odd.

If you’re building your bundle with Maven, you will be using the BND tool via the maven-bundle-plugin. This analyses all the byte code that is going into the bundle to work out what will cross over the class-loader boundary. BND, via the maven-bundle-plugin, has a default import rule of ‘*’, i.e. import everything. If you are trying to control which dependencies are embedded, which are ignored and which are imported, this can be a hindrance. Strange though it sounds, if you remove it, life will be easier. BND will immediately report everything that it needs to import but has not been told how to resolve, and it will refuse to build, which is a lot faster than generating a build that won’t deploy. The way BND reports is also useful: it tells you exactly what it can’t find, and this gives you a list of packages to import, ignore or embed. Once you think you have your list of package imports down to a set that you expect to come from other bundles in your container, turn the ‘*’ import back on and away you go.

In Maven that means editing the maven-bundle-plugin instructions in the pom.xml, e.g.:

         <Import-Package>
           <!-- add ignore packages before the * as required, e.g. !org.testng.annotations, -->
           * <!-- comment the * out to make BND report everything it has not been told to import -->
         </Import-Package>
         <Private-Package>
           <!-- add packages that you want to appear as raw classes in the jar as private packages. Note, they don't have to be source code in the project, they can be anywhere on the classpath for the project, but be careful about resources -->
         </Private-Package>
         <Embed-Dependency>
           <!-- embed dependencies (by artifact ID, including transitives if Embed-Transitive is true) that you don't want exposed to OSGi -->
         </Embed-Dependency>

The OSGi purists will tell us that it’s heresy to embed anything, but sometimes with legacy systems it’s just too painful to deal with the classloader issues.

There is probably a better way of doing this, if so, do tell.