Fixing a Windows 10 upgrade blue screen error, wpprecorder.sys

A quick post, for a change. I've been finishing my PhD and working a full-time research job, so I rarely get the chance to post.

I, like a large number of people around the world (14 million on the first day), am upgrading all my machines to Windows 10.

This was a mostly seamless process, apart from on my main gaming machine. That one got as far as installing features and drivers, where it threw a blue screen referencing wpprecorder.sys.

This bug is related to compressed drivers in the Windows\System32\drivers folder.

Below is a picture of the point where the bug usually occurred.

Just past the normal point of failure

The workaround is quite simple:
1) Get the Windows installer and make a bootable USB drive; Microsoft offers a tool to do this here

2) Boot from this drive on your machine. When it loads, press Shift+F10 to get a command line.

3) Identify your system drive by cycling through C: to Z: (use DIR and CD to make sure it is the correct drive).

4) Navigate to the System32 folder on your drive; in my case the drive was F:, so I issued the following command:

cd f:\windows\system32

5) Unpack your current drivers by issuing the following command:

compact.exe /u f:\windows\system32\drivers\*.sys

6) After this runs, reboot into your current Windows install and open a command prompt as an administrator. Then issue the following command:
fsutil behavior set DisableCompression 1

7) Run the upgrade from the USB drive

8) Relax

A successful outcome.



How to use both FileVault 2 and BitLocker simultaneously on a dual-boot Mac running Boot Camp

I am the proud owner of a beautifully noisy drive.

A noisy drive?!?!?!!!! Get your data off that before it dies!!!

Thanks for the concern, but I’m not talking about that kind of noise.

Oh, did you get a HDD stepper to play music??

No, but I think the Imperial March and Still Alive are my favourite step-step songs.

That’s fine, if you don’t want to tell me I’ll go to another blog.

OK, OK. This post is about something I have been wanting for ages and didn’t think was possible: how to use both FileVault 2 and BitLocker simultaneously on a multi-boot Mac (running Boot Camp). The noise I’m referring to is the pseudorandom noise of an encrypted drive (FDE) :D

What’s so special about that? Is this not an easy thing to do?

No, generally it is not possible to do this, due to some good design choices that both Apple and Microsoft (and other encryption providers) have employed, mixed with a silly one that Apple has made.

Both the Microsoft and Apple boot processes require a boot loader to remain on an unencrypted volume, which then provides a mechanism to access the protected, encrypted partition. So to use FDE on either of these operating systems you need two primary partitions. Legacy partition tables use the MBR to describe the layout of the partition scheme, and the MBR has a limit of four primary partitions.

Can we not just use those four primary partitions?

Nope. The Mac has a recovery partition that you need to keep intact, so only three primary partitions are available. This means that typically only one of these operating systems can enjoy FDE (and I’m not using containers inside an OS, that’s too leaky…).

Are we going to time travel and change some specs???

Nope, luckily for us the Mac uses 2 partition tables. One is GPT and the other is MBR.

The MBR partition is used to boot the Mac encrypted boot loader which in turn provides access to the encrypted Mac partition from the GPT table.

When loading the Windows/Boot Camp partition, the MBR is used to determine the availability of the Windows encrypted boot loader, which in turn uses the MBR to access the encrypted Windows partition.

This means that for a dual-boot, fully encrypted system we only need 3 partitions listed in the MBR. Normally the MBR table is just a clone of the GPT one, which is where the problem lies.

Happily we can use some ancient tools to edit these manually and allow dual boot of fully encrypted operating systems, while keeping our beloved restore partition.

Ok, how?

The process is as follows:

<disclaimer: this guide is provided as-is and has no warranty. If you suffer data loss, damage, etc., then this is your responsibility, so be a mature person and accept that>

  • Partition the disk using the Disk utility in OS X
  • Enable FileVault
  • Reboot
  • Get a list of partition parameters from the GPT partition table
  • Erase and recreate the MBR table, including only the Windows partitions and the Mac encrypted loader partition
  • Run the Windows installer
  • Edit the Windows recovery/encrypted loader partition
  • Install Windows
  • Allow TPM-free BitLocker use
  • Enable BitLocker
  • Smile

    Partition the disk using the Disk utility in OS X

    Reduce the size of the OS X partition to make room for Windows; here I am giving OS X 98 GB.


    Then create 2 new partitions:

    • One for the windows partition
    • One for the BitLocker boot loader (around 200 MB–1 GB)


    Enable FileVault and Reboot

    Go to System Preferences > Security & Privacy > [FileVault] and turn it on. Remember to store your recovery key responsibly; mine is on an encrypted backup… not stored with Apple.


    Then reboot to enable/verify the encryption.

    Get a list of partition parameters from the GPT partition table

    Open the terminal and run the following command to get the details of the GPT table.

       1:  sudo gpt -rv show -l disk0


    Take note of the start point, size and type of partition.

    Erase and recreate the MBR table, including only the windows partitions and the Mac encrypted loader partition

    The Windows partitions and the Mac encrypted loader are the items at indexes 1, 4 and 5. On your system this may differ, but index 1 should remain the same.

    Go to a terminal and open fdisk in edit mode

       1:  sudo fdisk -e /dev/disk0

    Erase the MBR table

       1:  erase

    Add a new table and then a new entry for the mac bootloader

       1:  add 1

    Edit this partition to match the OS X encrypted loader partition. The details come from index 1 above, but should be identical to this (as of OS X 10.9 in a standard scenario):

       1:  edit 1
       2:  Partition id : EE
       3:  CHS mode : no
       4:  Partition offset: 1
       5:  Partition size:  409600

    Then add the first Windows partition. I got the offset and size from index 4 above – this will be different on your system, so pay attention to YOUR start and size parameters and ensure it’s the correct Windows partition.

       1:  edit 2
       2:  Partition id : 07 
       3:  CHS mode : no
       4:  Partition offset: 193085424
       5:  Partition size: 296059568

    Now add the second Windows partition. I got the offset and size from index 5 above – this will be different on your system, so pay attention to YOUR start and size parameters and ensure it’s the correct Windows partition.

       1:  edit 3
       2:  Partition id : 07 
       3:  CHS mode : no
       4:  Partition offset: 489407136
       5:  Partition size: 827576
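    Before writing the table it is worth sanity-checking that the three entries you typed in don't overlap, since a slip here can clobber a partition. The little sketch below does that arithmetic; the class name and the hard-coded values (taken from my layout above) are only illustrative, so substitute your own numbers.

```java
import java.util.Arrays;

public class MbrSanityCheck {

    // Returns true if none of the [offset, offset + size) sector ranges overlap.
    static boolean disjoint(long[][] entries) {
        long[][] sorted = entries.clone();
        Arrays.sort(sorted, (a, b) -> Long.compare(a[0], b[0]));
        for (int i = 1; i < sorted.length; i++) {
            if (sorted[i][0] < sorted[i - 1][0] + sorted[i - 1][1]) {
                return false; // this entry starts inside the previous one
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // {offset, size} pairs - the example values from this post;
        // substitute the numbers from YOUR `gpt show` output.
        long[][] entries = {
            {1, 409600},            // EE: OS X encrypted loader
            {193085424, 296059568}, // 07: main Windows partition
            {489407136, 827576},    // 07: Windows boot loader partition
        };
        System.out.println(disjoint(entries) ? "OK: no overlap" : "ERROR: entries overlap");
    }
}
```

    Each entry is just {offset, size} in sectors; the check is simply that each entry starts at or after the previous entry's end.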

    Then write the MBR table using the command

       1:  write

    When you issue the print command a table similar to the following will be displayed (the exact details will be different as appropriate for your disk layout)


    Now that the tough stuff is done we only need to install Windows. Pop in a Windows install disk and do the following.

  • Reboot the machine and run the Windows installer
  • Edit the Windows recovery/encrypted loader partition – format it
  • Install Windows
  • Allow TPM-free BitLocker use

    Open the group policy editor by running gpedit.msc

    In the group policy editor open:

    Computer Configuration > Administrative Templates > Windows Components > BitLocker Drive Encryption > Operating System Drives.

    Then open the entry for Require additional authentication at startup

    This will bring up an editor for this policy, where you can enable the option by clicking the Enabled radio button and then, in the options panel, ticking the Allow BitLocker without a compatible TPM checkbox.

  • Enable bitlocker
  • Smile

    Enjoy your dual-boot, encrypted work-capable machine :)

  • Building a web service endpoint in Java

    Ok, so what is a web service?

    I provide an overview of what Web Services (WS) are in this post, An introduction to Web Services. Give it a read and pop back here :)

    Seriously, Java? The most insecure software platform ever?

    Yes, Java. The platform-agnostic software environment that is taught in a lot of university CS courses and has had a consistently high spot in the TIOBE programming language popularity chart (currently #2, with only 0.2% less market share than number one, C).

    It’s also worth bearing in mind that the security issues that have plagued Java in recent months usually relate to the use of a web plug-in. The plug-in causes issues because Java is a powerful, fully featured platform which, when combined with a browser plug-in, introduces a large number of attack vectors. From a desktop/server application perspective Java is quite safe, and in many ways safer than native code, thanks to its sandboxed, bytecode-interpreting process virtual machine.

    Which Web Service technology will be used here?

    In this example, SOAP-based web services will be used. For large production systems you may prefer to look at JSON (message format) and REST (communication strategy) based WS. This example will be built using NetBeans.

    Plug-in and communicate

    What??? I’m quiet, I read a lot so I’m plugged in, but I’m a stereotypical Socially Awkward Penguin, I don’t like the talky talk. Not fair!!!

    Not that type of plugged in and communicate; to develop WS in Java we need to install a development plugin. To do this go to the Tools –> Plugins menu.


    In the plugin interface go to Available Plugins and install the Java Web and EE option. After installation, restart NetBeans.


    Once WS support is enabled a hosting server is needed; this can be specified or installed from the Tools –> Servers menu. Click Add Server.


    Next choose a server; I have chosen the GlassFish server below.


    If you have the server installed already you may specify its path, or download it. To download, tick the agreement and select Download Now, then go through the installation process. Follow the screenshots below.





    Now that your environment is ready you can create a project.

    Time to make a project.

    Create a New ‘Web Application’ Project as shown below.



    Yes, I am aware I forgot the ‘l’, I have another Example Application in the same folder and am a little lazy.



    Do not click any of these frameworks as they are not needed for this tutorial.


    Codey Time!

    Under web services click New –> Web Service and then specify customization.




    You should now be presented with a source file which contains some boilerplate code. To this code a small math operation should be added.

    The code in question will go into a class called MathFun and will calculate the Pearson product-moment coefficient. A walkthrough of calculating this is shown here. This code accepts 2 arrays containing Doubles and returns a double as a reply.


    Once the code has been added to a class it can then be called by the service endpoint code by specifying a @WebMethod.

    This code follows this form.

       1: private MathFun mathFun = new MathFun();
       2:
       3: @WebMethod(operationName = "CalculatePearson")
       4: public Double CalculatePearson(Double a1[], Double a2[]) {
       5:     return mathFun.CalcPearsonCorrelation(a1, a2);
       6: }
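    The MathFun class itself isn't reproduced in this post, so below is a minimal sketch of what it could contain. The class and method names match the snippet above; the body is a standard Pearson product-moment implementation and is my own illustration rather than the original code.

```java
public class MathFun {

    // Pearson product-moment correlation coefficient of two equal-length samples.
    public double CalcPearsonCorrelation(Double[] a1, Double[] a2) {
        if (a1 == null || a2 == null || a1.length != a2.length || a1.length == 0) {
            throw new IllegalArgumentException("arrays must be non-empty and of equal length");
        }
        int n = a1.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0, sumY2 = 0;
        for (int i = 0; i < n; i++) {
            double x = a1[i], y = a2[i];
            sumX += x;
            sumY += y;
            sumXY += x * y;
            sumX2 += x * x;
            sumY2 += y * y;
        }
        // r = (n*Sxy - Sx*Sy) / (sqrt(n*Sxx - Sx^2) * sqrt(n*Syy - Sy^2))
        double numerator = n * sumXY - sumX * sumY;
        double denominator = Math.sqrt(n * sumX2 - sumX * sumX)
                           * Math.sqrt(n * sumY2 - sumY * sumY);
        return numerator / denominator;
    }
}
```

    Perfectly correlated inputs give 1.0 and perfectly anti-correlated inputs give -1.0, which makes a handy quick check when wiring it up behind the @WebMethod.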


    Once the code is complete it is then possible to test the web service with a useful open-source tool called SoapUI.


    And we are done!!

    Yes, it is that easy to set up a WS. The next step involves consuming it from a device/web browser. This is simple enough as most development environments support SOAP.

    Clearing a SAN lockdown state

    Hah, are you really bragging that you know how to clear a lockdown at one of California’s finest hotels?

    No… Just no. I think that would involve blackmailing the Governator or some mild kidnapping, both of which are way beyond my ability to pull off.

    Is this something to do with restoring operation of a faulty biological SAN/pacemaker?

    As much as I enjoyed biology at A level, that would also be beyond my knowledge, and this is a computer-oriented blog.

    I am talking about restoring operation of a Storage Area Network controller that has reached a software-enforced lockdown state. Specifically, the LSI/NetApp-built, IBM-branded DS3524 SAN incorporates lockdown counters which track the number of power cycles that one or both controllers experience; if a threshold is reached, a lockdown is enabled which prevents the unit in question from booting.

    Is that a nasty tactic to sell service contracts?

    I don’t think so. This behaviour likely exists to protect your precious data from a flaky controller, which is advantageous, unless your SAN controllers are operating correctly but experience a series of temporary power fluctuations. In that case, once reliable power has been restored, the lockdown needs to be cleared to resume operations.

    This lockdown is indicated by “LU” appearing on the twin 7-segment LED status displays mounted on the controllers at the rear of the SAN.

    A word of warning: please ensure that the SAN controller is only in this state due to external factors and not a fault. As usual, I take no responsibility for your actions; if you kill your data, it was your choice to try to clear this lockdown.

    To perform this lockdown reset you will need an IBM serial-to-console/DIN cable and a computer or remote console host equipped with an RS232/serial/DB9 port.

    The IBM serial-to-console cable is pictured below and should have been included with the SAN; if not, IBM may send you one.


    This cable plugs into a female DIN port hidden under a black plastic cover at the rear of the SAN controllers.


    Install PuTTY as a terminal emulator on the computer connected to the SAN. Once the cable is connected, set the connection type to Serial and set the parameters as follows.



    Serial Line: the COM port that your RS232 port is listed as; this can be determined from your system’s hardware manager, but in my case it is COM1.

    Speed (Baud): any speed will do, as the SAN controller uses adaptive baud rates, but I use 19200.

    Data bits: 8

    Stop bits: 1

    Parity: None

    Flow Control: XON/XOFF

    When you connect there may be no message, so press <CONTROL> and <BREAK> until the correct speed is detected; this will be indicated by intelligible on-screen text.
    When prompted to press <S> for the Service Menu, press <ESC> instead. This opens a shell prompt.

    At this shell prompt you *may* be able to use the following credentials, as they seem to be generic across firmware images (verified by these working on a replacement controller). If they don’t work, contact your IBM/LSI representative.

    User ID:    shellUsr
    Password:    wy3oo&w4

    The following commands need to be issued into this terminal in a specific order depending on which controller is in lockdown.

    If controller A is in lockdown:

    Run the following commands on both controllers –

    Then unplug ctrl-A and run the following commands on ctrl-B:

    Then Hot-plug ctrl-A

    This should sync settings across the controllers and clear the lockdown, and you should be able to use your SAN again :)

    An introduction to Web Services

    No matter what you say, I won’t appreciate spiders

    While spiders are useful housing companions for people who would like to see a reduction in insect numbers in their residence this isn’t the type of web services that I am referring to.

    So is this a post about forcing genetic abnormalities on people to create a team of spider-men types to clean the windows of skyscrapers\help pick up litter with their extraordinary abilities?

    Although those types of dangerous mutations may initially seem to offer advantages, I doubt inducing them would be a reasonable solution to the age-old dirty skyscraper windows problem.

    Web services offer a method of presenting functionality offered by a service to software\consumers in a platform agnostic manner using web based messaging protocols.

    What exactly do you mean by “web”, should they not be called internet services?

    No. Internet-based communication is an undefined approach to inter-system interoperability; in general, systems do not have to offer a shared, standard way to communicate. For example, UDP-based video game client A may not have any facility to communicate in a meaningful manner with TCP-based game server B.

    I thought the web and the internet were the same thing, what’s so special about “web” technologies?

    The internet is the channel that web communication takes place over, and it carries many diverse and potentially (indeed likely) mutually incompatible communication mechanisms.

    Web communications, by contrast, take place over a much more defined\rigid\compatible set of web-derived specifications, which basically include:

    • Use of a standard internet protocol – e.g. TCP
    • Application level communication using a common protocol – e.g. HTTP\REST
    • Messages and responses formatted in a standard manner  – SOAP\JSON etc

    Put simply there are many approaches to networked communication and using web based technology provides a lingua franca for offering and consuming services.

    So that explains the web part, what do you mean by services?

    A service in this context offers some facility that may be consumed by clients, which may be entities like web browsers, mobile apps, server programs and desktop software.

    One such service could be getting real-time data from an information source (e.g. stock prices from an exchange) or providing some data-processing facility, e.g. using a heavy-iron server that can factor large numbers more quickly than a home machine.

    Essentially, web services allow hosting of some modular function(s) in a self-contained system while providing a mechanism for interaction; a lot of mobile apps rely on this sort of service.


    Overview of web service interaction



    The above diagram shows a web service consumer interacting with a web service.

    The server’s HTTP component listens for and responds to requests sent by the HTTP client. These requests are (typically) formatted in a mutually understood message format, generated and parsed by the Message Parser/Packer, which extracts parameters and responses and passes them to the consumer/function logic.

    In simple terms, the whole purpose of using web service technologies is to pass parameters to functions and receive responses over a network using HTTP communications.
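    That round trip can be sketched with nothing but the JDK's built-in HTTP server (com.sun.net.httpserver). This is a bare HTTP endpoint rather than a full SOAP stack, and the /square path and its squaring behaviour are invented for the illustration: the client POSTs a parameter, the server runs a function on it and replies over HTTP.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TinyService {

    // Start a local server whose /square endpoint reads a number from the
    // request body and replies with its square. Port 0 = pick a free port.
    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
        server.createContext("/square", exchange -> {
            long n = Long.parseLong(
                new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8).trim());
            byte[] reply = String.valueOf(n * n).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(reply); }
        });
        server.start();
        return server;
    }

    // A matching client: POST the parameter, read the response body back.
    static String call(int port, String param) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://127.0.0.1:" + port + "/square").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(param.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        System.out.println(call(server.getAddress().getPort(), "7")); // prints 49
        server.stop(0);
    }
}
```

    A SOAP or REST stack adds message formatting and parsing on top of exactly this loop, nothing more mysterious.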


    A SOAP Example

    Let’s imagine there is a web service which communicates using the XML-based SOAP message format; this service offers a function called GetStockPrice, which takes one parameter, StockName.

    POST /Stock HTTP/1.1

    Content-Type: application/soap+xml; charset=utf-8
    Content-Length: 223

    <?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
      <soap:Body xmlns:m="">
        <m:GetStockPrice>
          <m:StockName>IBM</m:StockName>
        </m:GetStockPrice>
      </soap:Body>
    </soap:Envelope>

    This message is sent via an HTTP 1.1 POST to the /Stock path, with a content type of soap+xml in UTF-8.

    The above message is colour-coded to indicate the parts that are germane to each stage of processing; this coding is keyed as follows.

    HTTP Message – The entire body sent and received by the Client/Server
    Message Body – The XML + SOAP which is generated and processed by the Parser/Generator
    Desired Information – Consumed and generated by the Service/Consumer logic

    When this message is sent to an appropriate service, it is processed and passed to a specified function (<m:GetStockPrice>) with the parameters specified (<m:StockName>IBM</m:StockName>).

    The following is an example of such a function called GetStockPrice where the StockName variable would contain IBM.

    public double GetStockPrice(String StockName){
        // hashMap here is assumed to be a Map<String, Double> of stock prices
        return hashMap.get(StockName);
    }

    This service would then reply with the following message, assuming there was no error (remember to provide error feedback and handling).

    HTTP/1.1 200 OK
    Content-Type: application/soap+xml; charset=utf-8
    Content-Length: nnn
    <?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
      <soap:Body xmlns:m="">
        <m:GetStockPriceResponse>
          <m:Price>34.5</m:Price>
        </m:GetStockPriceResponse>
      </soap:Body>
    </soap:Envelope>


    As you can see from this response, the price for IBM stock is 34.5. This message is received and parsed by the client, and the useful information is extracted and used by the consumer.
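    On the client side, that parsing step is ordinary XML handling. Below is a minimal sketch using the JDK's DOM parser; the envelope shape and the urn:example namespace are assumptions for the demo, since the real service address isn't shown in this post.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class StockResponseParser {

    // Pulls the text of the first element with the given local name
    // out of a SOAP-style XML reply, whatever namespace prefix it uses.
    static String extract(String xml, String localName) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagNameNS("*", localName).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // Envelope and the urn:example namespace are made up for the demo.
        String reply =
              "<?xml version=\"1.0\"?>"
            + "<soap:Envelope xmlns:soap=\"http://www.w3.org/2003/05/soap-envelope\">"
            + "<soap:Body><m:GetStockPriceResponse xmlns:m=\"urn:example\">"
            + "<m:Price>34.5</m:Price>"
            + "</m:GetStockPriceResponse></soap:Body></soap:Envelope>";
        System.out.println(extract(reply, "Price")); // prints 34.5
    }
}
```

    Real SOAP toolkits generate this plumbing for you, but underneath they are doing exactly this kind of extraction.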

    In summary, web services offer a relatively firewall-friendly communication mechanism (most L7 firewalls allow HTTP traffic) with a fairly platform-agnostic approach to messaging (most platforms have support for HTTP).

    50 Shades of Grey – An important balancing act

    I get it, you are talking about balancing the instant entertainment that we are exposed to with sitting down with a good book!

    Although I do agree that thought-free modern pastimes must be avoided, with less time being spent on passive entertainment, this is not what I’m talking about, as my Kindle only has 16 shades of grey and I don’t think the book of that title would be of interest to me.

    Getting it all right – The Blue Zone

    My current library/study computer is an Acer 1830t, a relatively small and capable 11.6″  lappy which was perfect for me… until I powered it on.

    There is a single issue with this laptop (for me): the screen has a natural blue bias… not as extreme as some Sony Vaio models I have played with, but strong enough to make using this laptop an uncomfortable experience.

    Time for rose tinted glasses?

    As fun as rose-tinted glasses are, I don’t think they would help with this issue. Luckily, Windows has a screen calibration tool which allows a user to properly calibrate gamma, contrast, brightness and colour mixing ratios.

    OK, Captain Exaggeration, how bad was this problem?

    Below is a screen-dump of the Windows display calibration tool; all of the colour bars are supposed to represent shades of grey with no perceivable colour tint.

    The photo below shows the display before calibration, with a noticeable blue tint. In reality this is far more pronounced, as the camera employed some photographic wizardry which changed the representation of the actual display (yes, even my camera thought there was something wrong with the image it was being presented with).

    Below is the same tool after I had dialled down the blue in the colour mix. My camera (a Lumix LX3, so not too poor-quality a device) didn’t employ as much colour correction, but still didn’t show a true image.



    I’m not convinced; those pictures both look different, but bad.

    Yes that is due to in-camera colour correction and the difficult nature of taking a photo of an emissive glossy display.

    I cheated in my calibration: I hooked an external Xerox display, which gives a very good colour representation, up to my laptop, ran two instances of the colour calibration tool simultaneously (one on it, one on the laptop display), and ensured the laptop display matched the accurate external screen.


    This calibration worked and I’m now very comfortable using this laptop display. The final colour calibration mix that I needed to employ is below.

    OK, I’m sold, where is this tool?

    It’s built into Windows, and the easiest way to start it is to press the Start button (on Windows Vista and later) and type “calibrate”; the colour calibration tool will then be listed. Open it and follow the straightforward instructions that Microsoft provides.


    Update: a good friend of mine owns a professional screen calibration tool and was kind enough to let me use it on this laptop. The new calibration is amazing and closer to my reference displays than anything I had seen before, but it did reveal a shocking characteristic of this display… it can only cover 58% of the sRGB gamut :( Perhaps it’s time to go screen shopping for something like this legendary device.

    I have attached the colour-calibrated profile here; this may not give the same results on your Timeline 1830t as it did on mine, but it should hopefully give a better experience (assuming Acer has some form of control over the panels they release). (Standard monitor types) 1830t-1

    Software-Encrypted SSD Performance – A Surprising Outcome

    Seriously? Are you surprised at the speed increase provided by an SSD?

    No, I’m not; actually I was a little disappointed in my SSD’s performance.


    Did you buy a no-brand SSD from some shady eBay seller?

    Well, kind of… I bought a Dell OEM-branded edition of a Samsung PM830 from a reputable eBay seller… in person, to save him the eBay fees. This drive is reported to be a high-performance part by trusted sources. I then used DiskCryptor to protect my files against unauthorised access from Kon-Boot, ntpasswd, Linux live disks or any number of other NTFS-access-based attacks.

    Ahhh! I know what happened…. your SSD performance was limited by a less capable CPU that could only encrypt at low rate!

    Actually, no. The laptop hosting this drive has a recent-model Intel Core chip that in benchmarks can easily encrypt Twofish at a rate that would saturate the drive’s reported 550 MB/s maximum speed.


    After some investigation and what seemed like an endless series of setting tweaks, the issue seemed to stem from a problem that plagued the first generations of SSDs… wastage due to deleted flash memory blocks not being released cleanly back to the drive controller for reuse. This performance issue was overcome with the introduction of the TRIM command, which “recycles” deleted data blocks (explained here and here).


    How did the software-based Full Disk Encryption (FDE) interfere with TRIM?

    At a low level, FDE intercepts file system operations between the operating system and the disk and turns the data into what looks like random gibberish. So instead of a disk populated with a nice, sensibly structured file system, the stored data looks like nothing comprehensible until the appropriate encryption key is applied (keys are usually derived from a password using something like PBKDF2).


    The encryption provider that intercepts file system commands is where the performance degradation problem lies (at least in the case of DiskCryptor), as it appears to interfere with the operation of the TRIM command.

    This could be for many reasons but I would guess that the most likely culprit is that:

    The TRIM command issued by the operating system (OS) provides a set of LBAs from which files were previously deleted. These blocks do not exist as a structure inside the FDE container, and mapping from the OS-specified blocks to the FDE blocks cannot safely happen, for reasons related to the abstraction of the encrypted data on the SSD into a virtual HDD. For example, because there is no discrete block-level representation of files, honouring a TRIM command could wipe out a block representing a segment of the encrypted container and so corrupt subsequent data in that container. The encryption provider therefore most likely strips the TRIM command out to ensure integrity.
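    One quick way to see the mismatch: a file system expects a trimmed sector to read back as zeros, but under FDE every raw sector the drive returns is still run through the decryptor, so a zeroed sector would come back as garbage. The toy sketch below shows this with the JDK's AES support; it uses plain AES/ECB for brevity, which is not DiskCryptor's actual cipher configuration.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class TrimVsFde {

    // Decrypt one 16-byte "sector" the way an FDE layer does on every read.
    static byte[] decryptSector(byte[] raw, byte[] key) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding"); // toy mode for the demo
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"));
        return c.doFinal(raw);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];      // fixed demo key
        byte[] trimmed = new byte[16];  // a trimmed sector reads back as zeros
        byte[] seenByFilesystem = decryptSector(trimmed, key);
        // The file system does NOT get zeros back; it gets decrypted garbage.
        System.out.println(Arrays.equals(seenByFilesystem, new byte[16])); // prints false
    }
}
```

    So the FDE layer either has to special-case trimmed regions or suppress TRIM entirely; suppressing it is the safe choice, at the cost of the SSD slowdown described above.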


    Should you not have known this?

    I thought this would be the case, BUT there was so much anecdotal evidence on online forums stating that recent editions of solutions such as TrueCrypt and DiskCryptor would not degrade performance on SSDs that I thought it was worth a check.

    On initial encryption the performance was on par with its unencrypted throughput so I thought I had proven the online observations correct in this case.

    My blind trust in the then-“proven” software solution is also why I spent a lot of time looking at other factors on my beta operating system before removing encryption, especially as I had installed an Intel chipset driver on this Windows 8 edition around the time of the performance degradation and assumed a bug had shown its face.


    So is it a case of speed or security?

    Happily, no. Most modern SSDs support some form of strong hardware encryption (e.g. the PM830 has AES-256, the Intel 320 has AES-128) that can be “enabled” by adding an HDD password in the BIOS (the encryption is likely always on, as there is no long initial encryption process).

    This has one major pro and one huge con:

    Pro – The encryption is performed by the SSD controller, so there is no host-machine performance degradation from an encryption overhead.

    Con – The HDD password on the Samsung PM830 is 8 characters MAX, much weaker than my previous 37-character DiskCryptor password (which is the reason I wanted to use a software approach in the first place).


    So what’s the outcome?

    Major laziness on my part came back to bite me. I should have checked the performance of the software-encrypted SSD after filling it and then deleting data, not just shortly after I encrypted it, and I should not have assumed that the online anecdotes and my initial benchmark were correct.

    Lesson learned; I am peeling the egg off my face but enjoying my once-again-speedy SSD.


    Java – Serial Killer

    It’s about time a public mourning took place for people who are allergic to coffee…

    Although I do feel pity for people who are allergic to coffee (what a horrible existence… I would hate not being able to have the best addiction), this is not the topic of this post.

    Ahhh, so are you talking about the mass deprecation that takes place in Java?

    Although the deprecation of components in Java does annoy me (Java 7 Deprecations, J6 Deps), it is a necessity in most cases, but it is not the topic either.

    I get it, you are taking about all the nuclear facilities that had faulty Java control software in place which caused a meltdown or 2!!

    Eh, no. The Java agreement requires that “You acknowledge that Licensed Software is not designed or intended for use in the design, construction, operation or maintenance of any nuclear facility”.

    I’m talking about the state of communicating with RS232 devices from Java.

    That’s simple, just use javax.comm

    Exactly what I thought too, but it appears that the write-once-run-anywhere promise of Java has been broken further. Back in the day it was possible to write Windows Java apps that used javax.comm, but it appears it is no longer possible to use this functionality outside of Linux, Mac OS X and Solaris.

    I recently discovered this when I developed a web service component that uses an RS232 sensor. On Linux it worked fine and I was a happy coder, but when I brought it over to Windows I was left astonished and searching for a solution.

    RXTX to the rescue

    After doing some searching around I found a beautiful LGPL product called RXTX, which provides platform-agnostic RS232 support through a combination of a native library and a JAR class file.

    The native library goes into the /bin subfolder of both the Java Runtime and Development Kit folders, and the .jar file goes into the companion /lib subfolders of those directories.

    The API is clear and easy to use, with a fairly logical structure. For example, to detect the serial ports on a system, the following code fragment is used:

     1: List<String> list = new ArrayList<String>();
     2: Enumeration portList = CommPortIdentifier.getPortIdentifiers();
     3: while (portList.hasMoreElements()) {
     4:     CommPortIdentifier portId = (CommPortIdentifier) portList.nextElement();
     5:     if (portId.getPortType() == CommPortIdentifier.PORT_SERIAL) {
     6:         list.add(portId.getName());
     7:     }
     8: }

    The RXTX project homepage offers a variety of downloads and a wealth of documentation.
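    Once a port has been detected it can be opened and configured. The following is a minimal sketch only, assuming the RXTX jar and native library are installed as described above; the port name "COM3" and the 9600-8-N-1 settings are placeholders that must be replaced with values for your own device:

    ```java
    import gnu.io.CommPortIdentifier;
    import gnu.io.SerialPort;

    import java.io.InputStream;

    public class SensorReader {
        public static void main(String[] args) throws Exception {
            // "COM3" is a placeholder -- use a name reported by
            // CommPortIdentifier.getPortIdentifiers() on your machine.
            CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier("COM3");

            // Open with an owner name and a 2 second timeout waiting for the port
            SerialPort port = (SerialPort) portId.open("SensorReader", 2000);

            // Typical 9600-8-N-1 settings; adjust for your sensor
            port.setSerialPortParams(9600,
                    SerialPort.DATABITS_8,
                    SerialPort.STOPBITS_1,
                    SerialPort.PARITY_NONE);

            // Echo whatever the device sends until the stream closes
            InputStream in = port.getInputStream();
            int b;
            while ((b = in.read()) != -1) {
                System.out.print((char) b);
            }
            port.close();
        }
    }
    ```
    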

    The SANDISK SD Card Slowdown

    Are you talking about how capacities aren’t growing as quickly as they did in the past?

    No, but the rate of capacity doubling that existed until we reached 16GB or so needed to stop, if only because most people with media players don’t use up that amount of space (especially since the advent of cloud-based players such as Spotify and Google Music), and photographers shouldn’t carry all of their photos on a single high-capacity storage card that could easily be swallowed, lost or fail.

    I am talking about the apparent slowdown in read/write rates of new high-capacity micro SD cards; in reality this is just an evaluation of the read/write rates of SD cards over the past 5 years.

    That is a big claim… is there proof?

    Yes! I recently bought a HTC HD2 and promptly installed both WP7 and Android on it. These are two operating systems which shine on the HD2 when given fast storage cards, so I had to evaluate the suitability of cards that I have purchased throughout the years (these aren’t cherry-picked ‘review samples’ provided by manufacturers), including a new 16GB ‘Class 6’ SanDisk card.

    What cards were tested?

    These were all Micro SD cards, the majority of which were made by SanDisk

    1. 16GB SanDisk Mobile Ultra, Class 6, 2012
    2. 2GB SanDisk, Class 2, 2010 (pack in with new phone)
    3. 6GB SanDisk, Class 4, Mid 2008 (Killed in Action)
    4. 2GB Verbatim,  Mid 2008
    5. 4GB SanDisk, Class 2,  Nov  2007


    Not Pictured: The Verbatim 2GB card that was used to take this image

    All of these cards are genuine and were bought from respected retail chains.

    How were these tested?

    The cards were formatted as FAT32 with default cluster settings using the Windows format tool, then tested using CrystalDiskMark (sequential) and H2Testw (normally used to detect counterfeit flash memory products), both with a 500MB file size.

    These tests were run on 2 different computers: one desktop using a Belkin USB card reader (2011) and one laptop with an inbuilt card reader. Both benchmarks were run twice on both computers and averaged (the raw numbers are in the attached spreadsheet).
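    A sequential benchmark like CrystalDiskMark’s essentially boils down to timing large block transfers. As a rough illustration only (not a replacement for the tools above), the sketch below times sequential 1MB block writes to a file; `SeqWriteBench` and its temp-file target are my own invention, and for a real test the target path would need to point at a file on the card under test:

    ```java
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class SeqWriteBench {
        /** Writes sizeMb of data to target in 1MB blocks and returns the rate in MB/s. */
        public static double sequentialWriteMbPerSec(Path target, int sizeMb) throws IOException {
            ByteBuffer block = ByteBuffer.allocate(1024 * 1024); // 1MB block
            long start = System.nanoTime();
            try (FileChannel ch = FileChannel.open(target,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                for (int i = 0; i < sizeMb; i++) {
                    block.rewind();
                    ch.write(block);
                }
                ch.force(true); // flush to the device, not just the OS cache
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            return sizeMb / seconds;
        }

        public static void main(String[] args) throws IOException {
            Path tmp = Files.createTempFile("bench", ".bin");
            System.out.printf("%.1f MB/s%n", sequentialWriteMbPerSec(tmp, 16));
            Files.delete(tmp);
        }
    }
    ```
    
    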



    Testing Outcome



    The results show the following ranking by performance (fastest first):

    1. 2008 SanDisk 6GB
    2. 2007 SanDisk 4GB
    3. 2008 Verbatim 2GB
    4. 2012 SanDisk 16GB
    5. 2010 SanDisk 2GB

    So it seems that these cards ARE getting slower. This could be due to many reasons, but I think either the newer classification system (Class 6 etc.) means that card manufacturers can develop cards which support only the bare minimum speeds required to satisfy a speed rating, or that this performance reduction was made to provide more reliable operation of the cards.
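    The classification system makes the “bare minimum” target explicit: the SD speed class number is itself the guaranteed minimum sequential write rate in MB/s (Class 2 = 2MB/s, Class 6 = 6MB/s and so on), so a ‘Class 6’ card only has to sustain 6MB/s to earn its label. A tiny sketch (my own helper, not part of any SD tooling) makes the mapping concrete:

    ```java
    public class SpeedClass {
        /** Minimum guaranteed sequential write speed (MB/s) for an SD speed class. */
        public static int minWriteMbPerSec(int speedClass) {
            switch (speedClass) {
                case 2:  return 2;   // Class 2
                case 4:  return 4;   // Class 4
                case 6:  return 6;   // Class 6
                case 10: return 10;  // Class 10
                default: throw new IllegalArgumentException("Unknown class: " + speedClass);
            }
        }

        public static void main(String[] args) {
            // A 'Class 6' card need only sustain this rate to satisfy its rating,
            // even if older unclassified cards happened to be faster.
            System.out.println("Class 6 minimum: " + minWriteMbPerSec(6) + " MB/s");
        }
    }
    ```
    
    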

    On a side note, the 6GB SD card died during this testing; it is no longer recognised in ANY device, not even in some E-series Nokia phones that seem to be able to resurrect some locked cards from the grave. Was this due to a high-performance interface that was unreliable, or simple fatigue? I will likely never know.

    In the end I used the 16GB card in my HD2, as holding more than 3 albums held too much appeal.

    The raw data can be accessed in a Google spreadsheet at:

    UPDATE: It appears to be a SanDisk problem; other people have noted and recorded similar speed drops.

    Tasty Treats–reflowing a laptop motherboard

    Reflowing, what on earth are you babbling about?? Spilling coffee on a laptop?

    No, that would be an article about how coffee is not to blame for my stupidity. I am talking about repairing a defective computer component.

    Various computer components have a flaw related to ‘cold’/poor solder joints that may crack or degrade after many heat/cool cycles. These joints provide electrical connections between electronic components, and if these connections degrade there will likely be a failure or partial failure of the component containing the troublesome joint.

    Contemporary computer systems generate a lot of heat while in operation, and so cause cycles in operational temperature (in 10 years’ time, when the low-energy/heat ARM architecture has taken over as the dominant CPU architecture and GPUs are built on 8nm processes, this may not be the case).

    After a few years of use these degraded joints may show themselves, as they did with my beloved Dell Latitude D820, a fantastic magnesium-alloy dockable laptop with a beautiful keyboard and 2 hard disks. In this case the graphics card stopped outputting to the inbuilt LCD and produced scrambled graphics when connected to an external VGA display (no, I’m not a Dell salesman, I am just trying to justify why I spent the time fixing this thing… apart from curiosity).

    The technique detailed in this post can also be used to repair a XBOX 360 that has a bad case of red eye/Red ring of death

    DISCLAIMER: this is a recount of my experience. If you read this, get inspired to perform a similar operation and mess up, then you may need to grow up and admit it’s your fault, as I take no responsibility for your actions and decisions.

    Ok what is reflowing?

    In short, reflowing is bringing the temperature of the degraded solder connecting components to a point where it becomes liquid/flows, and then removing the source of heat so that the solder can cool slowly and hopefully re-establish the joints.

    So will I need a soldering iron?

    No, most components are BGA/surface-mounted and, although they technically may be soldered using an iron, you would waste too much time and need to be a master of soldering… so I am going to suggest a much better approach: using an ambient heat source.

    Ambient? What do you mean? Mood heating?

    I am suggesting putting your components in the oven or using a heatgun/hairdryer to melt the solder.

    I tried the hairdryer method, and although it worked well with a faulty XBOX 360 it was not so successful with my D820, which has an NVidia chip with known heat problems; I supposed this meant that a higher temperature than a hairdryer can provide would be required to reliably reflow the D820.

    Easy as Preheat, Prepare and Cook

    The steps are as follows:

    1. Preheat the Oven to 220C
    2. Remove all extraneous components, e.g. fans, plastics, keyboards and displays, and disassemble the problem device
    3. Wrap all non-relevant portions of the component in aluminium foil
    4. Cook the component for 12 minutes or so
    5. Turn off the oven
    6. Turn on air extraction systems and open all windows/vents
    7. Open the oven door
    8. Leave the component to cool to room temperature before removing from the oven
    9. Reassemble the device (in my case the D820)
    10. Turn on and hope for the best
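    The 220C figure isn’t arbitrary: it sits just above the approximate melting points of common solder alloys (roughly 183C for eutectic leaded Sn63/Pb37 and around 217C for common lead-free SAC305), which also goes some way to explaining why the hairdryer wasn’t enough. A small sketch (my own helper, using those approximate figures) illustrates the check:

    ```java
    public class ReflowCheck {
        /** True if the ambient temperature reaches the solder's melting point. */
        public static boolean canReflow(int ovenC, int solderMeltC) {
            return ovenC >= solderMeltC;
        }

        public static void main(String[] args) {
            int ovenC = 220; // the preheat temperature used above

            // Approximate melting points: eutectic leaded solder ~183C,
            // common lead-free SAC305 ~217C
            System.out.println("Leaded (183C): " + canReflow(ovenC, 183));
            System.out.println("Lead-free SAC305 (217C): " + canReflow(ovenC, 217));
        }
    }
    ```
    
    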


    Preheating the Oven


    A Powered-on faulty laptop


    Disassembling the Laptop


    Preparing the laptop by exposing only the troublesome components


    Powering a now fully working laptop

    A month later my D820 is still working, even after burning it in for 2 weeks using a 3D stress test. And no, this is not my only computer, it’s one of a bajillion, but this is the one I wrote my undergrad and postgrad dissertations on, so I’m nostalgic about it and want it to live forever :)