iMac Fans spinning fast for no reason?

3 10 2013

Sometimes even a search on Google provides no answers, because of the sheer number of people asking the same question. The collective knowledge migrates away from possible answers and is replaced by no knowledge, just myth. Fans spinning fast on an iMac feels like a laptop problem from the 1990s, but here is a low-tech cause for a seemingly high-tech problem: someone took the machine apart and put it back together, driving a case screw through a sensor cable. It's not mains voltage, so nothing as exciting as a spark flies out. All that happens is the sensor reads 1C or 150C, depending on which wires got cut or shorted.

The SMC (System Management Controller), which is in charge of temperature regulation and has no reliance on the CPU or OS, switches to a safe mode. It doesn't matter how many times you reset it; it will jump straight back into that mode and spin one or more of the fans at a speed that ensures nothing could ever burst into flames.

So if you find your fans spinning and an SMC reset fails, check whether any of the temperature readings are ridiculously low or high, and check whether any of the sensor wires are damaged. If you can get your hands on the Apple Diagnostic Software or the Apple Hardware Test, it may tell you which sensor is busted. In the case of many iMacs from 2007 onwards, it could be the LCD thermal sensor cable, which passes directly over one of the holes for a long case screw.

Hopefully there have been enough words in this post to help anyone else at a loss to figure out why the family iMac sounds like a 747.





HowTo: Make your MacBook Pro feel like new again.

1 03 2013

Most computers have inbuilt obsolescence that's fundamental to the way they were created. When the chips were made, their semiconductor behaviour was created by doping areas of the silicon with atoms to change the electrical properties, often by diffusing those atoms in at a higher-than-normal operating temperature. Once in service, the atoms continue to diffuse over time, and eventually they diffuse enough to cause the semiconductor to fail. The CPU stops functioning, or the memory chip becomes… oh, what's the word… forgetful.

Under normal conditions this takes a long time, and the older the chip, the longer it takes. Older chips from the 1980s were built on a huge feature scale relative to today's silicon, giving atoms extended journeys to complete before diffusion did its damage. Two years ago I replaced a marine autopilot, installed in 1984, whose failed 16MHz microcontroller had features the relative size of a football field. It had survived almost 20 years in a black box in the sun before the doping atoms completed their journey.

Sitting on the train with my legs being slowly roasted by a hot MacBook Pro, I realised something wasn't right. I was only reading a PDF, with both CPUs at 1% and the fans whirling like dervishes trying in vain to keep the CPU temperature below 85C. My legs would recover, but I like my MBP, and even though it's getting old, its slow(er) processor and lack of RAM make me write faster code. I don't really want those atoms to finish the hop, skip and jump of a journey that ends the life of the CPU. At 85C I'll bet they are hopping all over the place.
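To put a hedged number on that hopping: dopant diffusion in silicon follows an Arrhenius law, and assuming an activation energy of roughly 3.5 eV (a typical textbook value for common dopants in silicon; my assumption, not a measurement on this laptop) the difference between 85C and 50C is dramatic:

D = D_0 \, e^{-E_a / k_B T}, \qquad
\frac{D(358\,\mathrm{K})}{D(323\,\mathrm{K})}
= \exp\!\left[ \frac{E_a}{k_B}\left( \frac{1}{323\,\mathrm{K}} - \frac{1}{358\,\mathrm{K}} \right) \right]
\approx e^{12.3} \approx 2 \times 10^{5}

On those back-of-envelope numbers, silicon at 85C ages orders of magnitude faster than at 50C, so every degree shaved off the running temperature buys a disproportionate amount of chip lifetime.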

[Photos: fan exhausts after and before cleaning]

On opening the back I discovered the fan exhausts were clogged with fluff. After cleaning, the fans are hardly spinning and the CPU temperatures are well below 50C most of the time. Unintentionally, Apple have added inbuilt obsolescence to their laptops. As you use your MBP it will get hot. The fans will pull in dust, even in the most sterile office and home environments, and they will eventually block. The silicon components will run hotter than they should, the dopant atoms will hop faster and finish their diffusion journey, and your digital life will be in the bin sooner than it should be.

Having cleaned the fans, my MBP feels like it did when it was new and cool… or am I just getting old and forgetful? Where is my fan?





Java, Vulnerabilities and FUD

13 01 2013

There are plenty of people in the IT industry who would like nothing better than for Java never to have existed. The current vulnerability is being swooped on by anyone with an agenda to feed the media with FUD. The media, not knowing any better, is dutifully reporting the information it is given. What are the facts?

1. The vulnerability is only really significant for Java running inside a web browser using the unsigned Java Applet mechanism, which accounts for 0.1% of the usage of Java.

2. No one (since about 2001) should have support for Java Applets turned on in a browser, and no web developer since then should have deployed a Java Applet.

Statements from “security experts” like “Java security is a mess” are true only in the context of running Java in a web browser, and should be qualified: running any native code downloaded from the internet on an operating system that has no real intrinsic security is suicidal. It's not really Java that's the problem; the problem is operating systems that allow a web browser to run in an environment where it can make fundamental changes to core aspects of the operating system. Just as you would never download and run an untrusted native executable as the root user, or even as an “Administrator”, you only have yourself to blame if you click “OK” when asked “Do you mind if I run this untrusted code… that will steal your identity, empty your bank account, sell your house and destroy your life?”. Don't blame a language (any language); blame the browser, or the OS, or yourself.

Applets should have been deprecated in 2001. They were relevant when browsers were incapable, but they have been superseded since. This vulnerability has nothing to do with 99.9% of Java usage. It's a pity, but not surprising, that this message will never reach the mainstream media. Long live FUD.






Scaling streaming from a threaded app server

3 01 2013

One of the criticisms often levelled against threaded servers, where a thread or process is bound to a request for the lifetime of that request, is that they don't scale when presented with a classical web scalability problem. In many applications the criticism is justified, not because the architecture is at fault, but because some fundamental rules of implementation have been broken. Threaded servers are good at serving requests where the application thread is bound to the request for the shortest possible time and, while it is bound, encounters no IO waits. If that rule is adhered to, then some necessities of reliable web applications are relatively trivial to achieve, and the server will be capable of delivering throughput that saturates all the resources of the hardware. Unfortunately, all too often application developers break that rule and think the only solution is a much more complex environment that requires event-based programming to interleave the IO wait states of thousands of in-progress requests. In the process they dispose of transactions, since the storage system they were using (an RDBMS) can't possibly manage several hundred thousand in-progress transactions, even if there were sufficient memory on the app server to manage the resources associated with each request-to-transaction mapping… unless they have an infinite hardware budget and there is no such thing as physics.

A typical situation where this happens is where large files are streamed to users over slow connections. The typical web application implementation spins up a thread that performs some queries to validate ACLs on the item, perhaps via SQL or via some in-memory structure. Once the request is validated, that thread, with all its baggage and resources, laboriously copies blocks of bytes out to the client while remaining associated with the request. The request-to-thread association is essentially long-lived. If the connector managing the http connection knows about keep-alives, it might release the thread-to-connection association at the end of the response, but it can't do that until the response is complete. So a typical application serving large files to users will rapidly run out of spare threads, giving threaded servers a bad name. That's bad in so many ways. Trickled responses can't be cached, so they have to be regenerated every time. The application runs like a dog, because a tiny part of its behaviour is always a resource hog. Anyone deploying in production will find simple DoS attacks are easy to execute by just holding down the refresh button on a browser.

It doesn't have to be like that. The time taken for the application to process the request and send the very first byte should be no greater than for any other request processed by the application. Most Java-based applications can get that response time below 10ms, and responses below 1ms are not too hard on modern hardware with a well-structured application. To do this with a streamed body is relatively simple. Validate the request, then generate a response header in the threaded application server that instructs the connector handling the front-end http connection to deliver content from an internal location. Commit the response with no body, and detach the thread servicing the request from the request, freeing it to service the next one. Implemented efficiently, the operation involves hardly any IO waits, so there is little idle time during which a thread or CPU core could usefully be doing other work, which is precisely why an evented stack buys you nothing here.

If the bitstream to be sent is stored as a file, you can use X-Sendfile, which originated with Lighttpd, with close equivalents in Apache Httpd (mod_xsendfile) and nginx (X-Accel-Redirect). If the file is stored at a remote http location, some other delivery mechanism can be used. Obviously the http connector (any of the above) should be configured to handle a long-lived connection delivering bytes slowly.

In the previous blog post I mentioned that DSpace 3 could be made to serve public content via a cache, exposing literally thousands of assets to slow download. I am using this approach to ensure that the back-end DSpace server does not get involved with streaming content, which might be small PDFs but could just as well be multi-GB video files or research datasets. The assets in DSpace have been stored on a mountable file system, allowing a front-end http server to deliver the content without reference to the application server. I have used the snippets below to set and commit the response headers after ACLs have been processed. I also deliver such content via an HMAC-secured redirect, to ensure that user-submitted content in the Digital Repository can't maliciously steal administrative sessions. Generation of the HMAC-secured redirect takes in the region of 50ms, during which time resources are dedicated. If the target is public, the redirect pointer may be cached. Conversion of the HMAC-secured redirect into an X-Sendfile header takes in the region of 1ms, with no requirement for database access. Serving the bitstream itself introduces IO waits, but the redirects can be sent to simple evented httpd servers in a farm. If all the app server is doing is processing the HMAC-secured redirects, then a few hundred threads at 1ms per request can handle significant traffic in the app server layer. I'll leave you to do the math.
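The post doesn't show the HMAC code itself, so here is a minimal sketch of what signing and verifying such a redirect might look like. The class name, the URL layout and the choice of HmacSHA256 are my assumptions for illustration, not the actual repository implementation:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical sketch: sign an asset path plus an expiry time with a shared
// secret, then verify it cheaply on the download endpoint before emitting
// the X-Sendfile header.
public class SignedRedirect {

  private static String hmacHex(String path, long expires, byte[] secret) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(secret, "HmacSHA256"));
    byte[] sig = mac.doFinal((path + ":" + expires).getBytes(StandardCharsets.UTF_8));
    StringBuilder hex = new StringBuilder();
    for (byte b : sig) hex.append(String.format("%02x", b));
    return hex.toString();
  }

  // The ~50ms step: build the signed URL once ACLs have been checked.
  public static String signedUrl(String path, long ttlMillis, byte[] secret) throws Exception {
    long expires = System.currentTimeMillis() + ttlMillis;
    return path + "?expires=" + expires + "&sig=" + hmacHex(path, expires, secret);
  }

  // The ~1ms step: verify signature and expiry, no database access needed.
  public static boolean verify(String path, long expires, String sig, byte[] secret) throws Exception {
    if (System.currentTimeMillis() > expires) return false;
    // Constant-time comparison avoids leaking the signature via timing.
    return MessageDigest.isEqual(
        hmacHex(path, expires, secret).getBytes(StandardCharsets.UTF_8),
        sig.getBytes(StandardCharsets.UTF_8));
  }
}

A request that passes verify falls straight through to one of the doSendFile variants below.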

The same technique could be used for any long-lived http request, eliminating the need to adopt an evented application server stack and abandon transactions. Obviously, if your application server code has become so complex that even the non-streaming requests take long enough to limit throughput, this isn't going to help.

For Apache mod_xsendfile:

protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
  // assetStoreBase is a field holding the root of the asset store;
  // mod_xsendfile resolves this path itself, so httpd must be able to read it.
  response.setHeader("X-Sendfile", assetStoreBase + path);
  response.setHeader("Content-Type", (String) meta.get("content-type"));
  if (meta.has("filename")) {
    // quote the filename in case it contains spaces
    response.setHeader("Content-Disposition", "attachment; filename=\"" + meta.get("filename") + "\"");
  }
  // that's it, the response can be committed with no body.
}


For nginx:


protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
    // assetStoreBase + path must map to an internal location in the nginx config.
    response.setHeader("X-Accel-Redirect", assetStoreBase + path);
    // buffering, rateLimit and cacheExpires are fields of the enclosing class,
    // e.g. "no", a bytes-per-second limit, and a cache lifetime in seconds.
    response.setHeader("X-Accel-Buffering", buffering);
    response.setHeader("X-Accel-Limit-Rate", rateLimit);
    response.setHeader("X-Accel-Expires", cacheExpires);
    response.setHeader("Content-Type", (String) meta.get("content-type"));
    if (meta.has("filename")) {
        response.setHeader("Content-Disposition", "attachment; filename=\"" + meta.get("filename") + "\"");
    }
}


For Lighttpd:


protected void doSendFile(String path, Meta meta, HttpServletResponse response) {
   // Lighttpd needs x-sendfile support enabled on the backend handler.
   response.setHeader("X-LIGHTTPD-send-file", assetStoreBase + path);
   response.setHeader("Content-Type", (String) meta.get("content-type"));
   if (meta.has("filename")) {
       response.setHeader("Content-Disposition", "attachment; filename=\"" + meta.get("filename") + "\"");
   }
}




Making the Digital Repository at Cambridge Fast(er)

18 12 2012

For the past month or so I have been working on upgrading the Digital Repository at the University of Cambridge Library from a heavily customised version of DSpace 1.6 to a minimally customised version of DSpace 3. The local customizations were deemed necessary to achieve the performance required to host the 217,000 items and 4M metadata records in the Digital Repository. DSpace 3, released at the end of November 2012, showed promise in removing the need for many of the local patches. I am happy to report that this has proved to be the case, and we are now able to cover all of our local use cases using a stock DSpace release, with local customizations and optimizations isolated into an overlay. One problem, however, remained: performance.

[Image: List of commandments from Lev. 9–12; numbered Halakhot 8–18; includes a list of where they appear in Maimonides’ Book of Commandments and the Mishneh Torah, and possibly references to another work.]

The current Digital Repository contains detailed metadata and is focused on the preservation of artifacts. Unlike the more popular Digital Library, which has generated significant media interest in recent weeks with items like “A 2,000-year-old copy of the 10 Commandments”, the Digital Repository does not yet have significant traffic. That may change in the next few months, as the UK government is taking a lead in the Open Access agenda, which may prompt the rest of the world to follow. Cambridge, with its leading position in global research, will be depositing its output into its Digital Repository. Hence a primary concern of the upgrade process was to ensure that the Digital Repository could handle the expected increase in traffic driven by Open Access.

Some basics

DSpace is a Java web application running in Tomcat. Testing Tomcat with a trivial application reveals that it will deliver content at a peak rate of anything up to 6K pages per second; if that rate were sustained for 24h, 518M pages would have been served. Traffic is never evenly distributed and applications always add overhead, but this gives an idea of the baseline. At 1K pages/s, 86M pages would be served in 24h, and many real Java webapps are capable of jogging along happily at that rate. Unfortunately DSpace is not. It's an old code base that has focused on the preservation use case: many page requests perform heavy database access, and the flexible Cocoon-based XMLUI is resource intensive. The modified 1.6 instance using a JSP UI delivers pages at 8/s on a moderate 8-core box, and the unmodified DSpace 3 XMLUI instance at 15/s on a moderate 4-core box. Surprisingly, because the application does not have any web 2.0 functionality to speak of, even at that low rate it feels quite nippy, as each page is a single request once the page assets (css/js/png etc.) are distributed and cached. But with the Cambridge Digital Library regularly serving 1M pages per hour, Open Access on the Digital Repository at Cambridge will change that. Overloaded, DSpace remains solid and reliable, but slow.

Apache Httpd mod_cache to the rescue

Fortunately this application is a publishing platform. For anonymous users the data changes very slowly, and the number of users that log into the application is low. The DSpace URLs are well structured and predictable, with no major abuse of the HTTP standard. Even the search operations backed by Solr are well structured. The current data set of 217K items published as html pages represents about 3.9GB of uncompressed data, less if the responses are stored and delivered gzipped. Consequently, configuring Apache Httpd with mod_cache to cache page responses for anonymous users has a dramatic impact on throughput. A trivial test with Apache Benchmark over 100 concurrent connections indicates a peak throughput of around 19K pages per second. I will leave you to do the rest of the maths; I think the network will be the limiting factor.
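For a flavour of the setup, here is a minimal mod_mem_cache sketch for Apache 2.2; the sizes are illustrative guesses, not our production values:

# Illustrative Apache 2.2 config: cache anonymous page responses in memory.
LoadModule cache_module modules/mod_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so

CacheEnable mem /
CacheDefaultExpire 3600        # seconds to keep a page with no explicit expiry
CacheIgnoreHeaders Set-Cookie  # never store cookies in cached responses
MCacheSize 4194304             # total cache size in KB (here ~4GB)
MCacheMaxObjectSize 2097152    # largest cacheable object in bytes

The CacheIgnoreHeaders Set-Cookie line matters: it stops a session cookie being baked into a cached page, which is one half of the cache-pollution problem described under Gotchas below.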

Losing statistics

There are some disadvantages to this approach. Deep within DSpace, statistics are recorded. Since the cache will serve most of the content for anonymous users, these statistics no longer make sense. I also have misgivings about the way the statistics are collected: if the request is serviced by Cocoon, the access is recorded in a Solr core by performing an update operation on that core. This is one of the reasons why throughput is slow, and I doubt it is a good way of recording statistics, since Lucene indexes are often bounded by the cardinality of what they index. I worry that over time the Lucene indexes behind the Solr instance recording statistics will overflow available memory. I would have thought, though I have no evidence, that recording stats in a Big Data way would be more scalable and in some ways just as easy for small institutions (i.e. append-only log files, periodically processed with Map Reduce if required). Alternatively, Google Analytics.

Gotchas

Before you rush off and mod_cache all your slow applications, there is one fly in the ointment. To get this to work you have to separate anonymous responses from authenticated responses. You also have to perform that separation based on the request and nothing else, and you have to ensure that your cache never gets polluted; otherwise anonymous users, including the Google spider, will see authenticated responses. There is precious little in an http request that a server can influence: it can set cookies, and it can change the URL. Applications could segment the URL space based on the role of the user, but that is ugly from a URI point of view, since suddenly there are two URIs pointing to the same resource. Setting a cookie doesn't work, since the response that would have set the cookie is cached, hopefully without the cookie. The solution that worked for us was to segment authenticated requests onto https and leave anonymous requests on http, then configure the URL space used to perform authentication so that it is never cached, and ensure that an anonymous user never accesses https content and an authenticated user never accesses http content. The latter restriction ensures no authenticated content ever gets cached, and the former ensures that the expected tsunami of anonymous users doesn't impact the management of the repository. Much as I would have liked to serve everything over a single protocol on one virtual host, the approach is generally applicable to all webapps.
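As a sketch of that segmentation (the login path here is hypothetical; check what your application actually uses):

# Illustrative vhost split: cache only anonymous http traffic.
<VirtualHost *:80>
    CacheEnable mem /
    CacheDisable /password-login    # hypothetical auth URL space, never cached
</VirtualHost>

<VirtualHost *:443>
    # SSL configuration goes here. No CacheEnable on this vhost,
    # so authenticated responses are never stored in the cache.
</VirtualHost>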

I think the key message is: if you can host using Apache Httpd with mod_mem_cache, or even the disk version, then there is no need to jump through hoops onto exotic application stacks. My testing of DSpace 3 was done with Apache HTTPD 2.2 and all the other components running on a single 4-core box, probably well past its sell-by date.





AIS NMEA and Google Maps API

13 11 2012

Those who know me will know I like nothing better than to get well offshore, away from any hope of network connectivity. It's like stepping back 20 years to before the internet, and it's blissfully quiet. The only problem is that 20 years ago it was too quiet. Crossing the English Channel in thick fog with no radar and a Decca unit that could only be relied on to give a fix to within a mile, some of the time, made you glad to be alive when you stood on solid ground again. Rocks and strong currents round the Alderney race were not nearly as frightening as the St Malo ferry looming out of the fog, horns blaring, as if there was anything a 12m yacht could do in reply. After a few white-knuckle trips I bought a 16nm radar, and crossing the shipping lanes no longer felt like being a tortoise crossing a freeway while reading an iPod, oblivious to the unseen steel monsters of the Channel. I don't know which was better: trying to guess whether the container ship making 25kn, 10nm away, was going to pass in front of or behind you, or placing your trust in the unseen ship's crew who had spent the past 4 days rolling north through Biscay with no sleep.

Those going to sea today will not experience any of this excitement. They will probably have at least 3 active GPS receivers on board, able to tell them when they are at the bow, at the stern, or sitting on or standing in the heads (the W/C, for landlubbers). The second bit of kit they will probably have is an AIS receiver. All ships now carry AIS transmitters, as do some yachts, whose owners treat them a bit like vanity domains. AIS transmits on the marine VHF channels 161.975 MHz and 162.025 MHz using variants of TDMA. The data is transmitted in NMEA0183 sentences, the same standard used by many older marine systems; in the case of AIS, the sentence carries a 168-bit binary message armored as 6-bit ASCII text, plus a checksum. The information that's broadcast is mostly position, speed, course and the identity of the sender. Although it's intended mainly as an instant communication of intentions between large ships, it is also invaluable to any smaller craft in fog. It's like being on the bridge of all ships in VHF range at the same time, and it's relatively simple to calculate the closest point of approach (CPA) for all targets, as the sketch below shows. Pre affordable radar, we used to guess CPA using our ears, and sometimes smell (you can smell a super tanker in a strong wind). With radar we used to try to guess the path of an approaching target from the screen: easy on a stable platform, not so easy when your radome is doing the samba. Today we have speed and real course, often to three significant figures.
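Here is a minimal sketch of that CPA calculation (in Java, for consistency with other snippets on this blog; it assumes positions have already been projected onto a local flat plane in metres):

// Closest point of approach between own ship and a target, assuming both
// hold course and speed. Inputs are the target's position and velocity
// RELATIVE to own ship; output is the CPA distance and the time to it.
public class Cpa {
    public static double[] cpa(double rx, double ry,   // relative position, metres
                               double vx, double vy) { // relative velocity, m/s
        double v2 = vx * vx + vy * vy;
        // If the relative speed is ~0 the range never changes; CPA is "now".
        double t = v2 < 1e-9 ? 0 : -(rx * vx + ry * vy) / v2;
        if (t < 0) t = 0; // CPA is in the past; the closest future point is now
        double dx = rx + vx * t, dy = ry + vy * t;
        return new double[] { Math.hypot(dx, dy), t }; // { metres, seconds }
    }
}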

I now live about 20km from the sea, north of Sydney harbor. VHF is line of sight, so I would have thought it was not going to be possible to receive it from that distance, taking buildings and trees into account. Normal marine whip aerials probably would not work, but a strip of 300 Ohm ribbon cable, cut precisely to length and tuned to resonate at 1/4 wavelengths of 162 MHz, is receiving and decoding signals from Newcastle to the north and Wollongong to the south, around 80km in each direction. Not bad for $5 worth of cable. The receiver is a cheap headless unit from ACR that sends the NMEA0183 sentences down a USB/serial port. A simple Python script receives and decodes the NMEA0183 stream, converting it (using ais.py from the GPSD project) into JSON containing current position, speed, MMSI number and a host of other information. All very interesting, but not very visual. I could just use one of the many free apps to display the NMEA0183 information over TCP/UDP, but they are limited.
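The heavy lifting is done by ais.py, but the 6-bit armoring mentioned above is simple enough to show. A sketch of the per-character step (again in Java rather than the Python actually used here):

// Standard AIS payload de-armoring: each ASCII character carries 6 bits.
// Subtract 48; if the result is greater than 40, subtract a further 8.
static int sixBits(char c) {
    int v = c - 48;
    if (v > 40) v -= 8;
    return v & 0x3f; // the low 6 bits are the payload bits
}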

The Google Maps v3 API allows Javascript to create an overlay of markers, so a few hundred lines of Javascript load the JSON file into a browser every 15s and display the results as an overlay on Google Maps. Ships are red with a vector; the wake is green. Sydney is a good place to test this, as nearly all the ferries in the harbor transmit AIS messages all the time. It's a busy place. Obviously, using Google Maps 100nm offshore isn't going to work. The next step is to load the Python onto a Raspberry Pi board, plug in a USB Wifi dongle and create my own mobile wifi hotspot, which an iPad loaded with marine charts can connect to, all for significantly less than 1 amp. If you're on 12V, you care about juice. Having IP offshore does defeat the purpose of being there; I may have to turn it off from time to time to remind myself I am alive.

This interface was just an exercise to validate that the NMEA-to-TCP-over-Wifi server works. If you want to know when your ship will come in, visit http://www.marinetraffic.com/ais/, but don't try to use it at sea.





HowTo: Quickly resolve what a Sling/OSGi bundle needs.

30 10 2012

Resolving dependencies for an OSGi bundle can be hard at times, especially when working with legacy code. The sure-fire way of finding all the dependencies is to spin the bundle up in an OSGi container, but that requires building the bundle and deploying it. Here is a quick way of doing it with maven that may at first sound odd.

If you're building your bundle with maven, you will be using the BND tool via the maven-bundle-plugin. This analyses all the byte code going into the bundle to work out what will cross the class-loader boundary. BND, via the maven-bundle-plugin, has a default import rule of ‘*’, i.e. import everything. If you are trying to control which dependencies are embedded, which are ignored and which are imported, this can be a hindrance. Strange though it sounds, if you remove it life will be easier. BND will immediately report everything it needs to import that can't be imported, and it will refuse to build, which is a lot faster than generating a build that won't deploy. The way BND reports is also useful: it tells you exactly what it can't find, and this gives you a list of packages to import, ignore or embed. Once you think you have your list of package imports down to the set you expect to come from other bundles in your container, turn the ‘*’ import back on and away you go.

In maven that means editing the pom.xml, e.g.:

...
 <plugin>
   <groupId>org.apache.felix</groupId>
   <artifactId>maven-bundle-plugin</artifactId>
   <version>2.3.6</version>
   <extensions>true</extensions>
   <configuration>
     <instructions>
       <Import-Package>
         <!-- add ignore packages before the * as required, e.g. !org.testng.annotations, -->
         * <!-- comment the * out to make BND report everything it's not been told to import -->
       </Import-Package>
       <Private-Package>
         <!-- add packages that you want to appear as raw classes in the jar as private packages. Note: they don't have to be source code in the project, they can be anywhere on the classpath for the project, but be careful about resources. e.g. org.apache.sling.commons.cache.infinispan.* -->
       </Private-Package>
       <DynamicImport-Package>sun.misc.*</DynamicImport-Package>
       <Embed-Transitive>true</Embed-Transitive>
       <Embed-Dependency>
         <!-- embed dependencies (by artifact ID, including transitives if Embed-Transitive is true) that you don't want exposed to OSGi -->
       </Embed-Dependency>
     </instructions>
   </configuration>
 </plugin>

The OSGi purists will tell us that it's heresy to embed anything, but sometimes with legacy systems it's just too painful to deal with the classloader issues.

There is probably a better way of doing this, if so, do tell.








