Software-Encrypted SSD Performance – A Surprising Outcome

Seriously? Are you surprised at the speed increase provided by an SSD?

No, I’m not. Actually, I was a little disappointed in my SSD’s performance.


Did you buy a no-brand SSD from some shady eBay seller?

Well, kind of… I bought a Dell OEM-branded edition of a Samsung PM830 from a reputable eBay seller (in person, to save him the eBay charges). This drive is reported to be a high-performance part by trusted sources. I then used DiskCryptor to protect my files against unauthorised access via Kon-Boot, ntpasswd, Linux live disks, or any number of other NTFS-access-based attacks.

Ahhh! I know what happened… your SSD performance was limited by a less capable CPU that could only encrypt at a low rate!

Actually, no. The laptop hosting this drive has a recent-model Intel Core chip that, in benchmarks, can easily encrypt Twofish at a rate that would saturate this drive’s reported 550 MB/s maximum speed.


After some investigation and what seemed like an endless series of setting tweaks, the issue seemed to stem from a problem that plagued the first generations of SSDs: wastage due to deleted flash memory blocks not being released cleanly back to the drive controller for reuse. This performance issue was overcome with the introduction of the TRIM command, which ‘recycles’ deleted data blocks (explained here and here).


How did software-based Full Disk Encryption (FDE) interfere with TRIM?

At a low level, FDE intercepts file system operations from the operating system to the disk and turns them into what looks like random gibberish. So instead of a disk populated with a nice, sensibly structured file system, the stored data looks like nothing comprehensible until the appropriate encryption key is applied (keys are usually derived from a password using something like PBKDF2).
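As a quick illustration of that last point, here is a minimal sketch of PBKDF2 key derivation using Python’s standard library. The iteration count, salt size, and key length here are illustrative assumptions, not DiskCryptor’s actual parameters:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2-HMAC-SHA256: repeatedly hashes the password so that
    # brute-forcing each guess is deliberately expensive
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)  # a random per-volume salt, stored in the volume header
key = derive_key("correct horse battery staple", salt)

assert key == derive_key("correct horse battery staple", salt)  # same inputs, same key
assert key != derive_key("wrong password", salt)                # wrong password, different key
```

The same password always yields the same key for a given salt, which is how the volume can be reopened later without storing the key itself.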


This encryption provider that intercepts file system commands is where the performance degradation problem lies (at least in the case of DiskCryptor), as it appears to interfere with the operation of the TRIM command.

This could be for many reasons, but I would guess that the most likely culprit is this:

The TRIM command issued by the operating system (OS) provides a set of LBAs from which files were previously deleted. These blocks do not exist as a structure in the FDE container, and mapping from the OS-specified blocks to the FDE blocks cannot happen, for various reasons related to the abstraction of the encrypted data on the SSD into a virtual HDD. For example, without a discrete block-level representation of files, a TRIM command could wipe out a block of data representing a segment of the encrypted container, corrupting subsequent data in that container. So the encryption provider likely strips the TRIM command out to ensure integrity.
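To make the corruption risk concrete, here is a toy model (nothing like DiskCryptor’s real internals, and using a deliberately trivial XOR “cipher”) of why naively passing TRIM through an encryption layer is dangerous: a trimmed sector reads back as zeros, and zeros decrypt to garbage rather than to deleted-and-ignorable data:

```python
SECTOR = 16  # toy sector size in bytes

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Trivial XOR stand-in for a real block cipher (NOT secure, illustration only)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"sixteen byte key"
plaintext = b"hello, sector 7!"              # exactly one sector of file data
disk = {7: xor_crypt(plaintext, key)}        # ciphertext as stored on the SSD

assert xor_crypt(disk[7], key) == plaintext  # a normal read decrypts fine

disk[7] = bytes(SECTOR)                      # TRIM: the sector now reads as zeros
assert xor_crypt(disk[7], key) != plaintext  # decryption now yields garbage, not the file
```

The encryption layer cannot tell the difference between “this sector was trimmed” and “this sector holds live ciphertext”, so silently dropping TRIM is the safe (but slow) choice.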


Should you not have known this?

I thought this would be the case, BUT there was so much anecdotal evidence on online forums stating that late editions of solutions such as TrueCrypt and DiskCryptor would not degrade performance on SSDs that I thought it was worth a check.

On initial encryption, performance was on par with the unencrypted throughput, so I thought I had proven the online observations correct in this case.

My blind trust in the then ‘proven’ software solution is also why I spent a lot of time looking at other factors on my beta operating system before removing encryption, especially as I had installed an Intel chipset driver on this Windows 8 edition around the time of the performance degradation and assumed a bug had shown its face.


So is it a case of speed or security?

Happily, no. Most modern SSDs support some form of strong encryption (e.g. the PM830 has AES-256, the Intel 320 has AES-128) that can be ‘enabled’ by adding an HDD password in the BIOS (the encryption is likely always on, as there is no long initial encryption process).

This has one major pro and one huge con:

Pro – The encryption is performed by the SSD controller, so there is no host-machine performance degradation from an encryption overhead.

Con – The HDD password on the Samsung PM830 is 8 characters MAX, much weaker than my previous 37-character DiskCryptor password (which is the reason I wanted to use a software approach in the first place).
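The gap between those two password lengths is easy to quantify. Assuming a 95-character printable-ASCII alphabet (an assumption; the BIOS may accept fewer characters), the upper bound on password entropy works out like this:

```python
import math

CHARSET = 95  # printable ASCII characters (assumed alphabet size)

def entropy_bits(length: int) -> float:
    # Upper bound on entropy of a random password: log2(charset ** length)
    return length * math.log2(CHARSET)

print(f"8-char BIOS password:     ~{entropy_bits(8):.0f} bits")
print(f"37-char DiskCryptor pass: ~{entropy_bits(37):.0f} bits")
```

Roughly 53 bits versus roughly 243 bits: every extra character multiplies the attacker’s work by the alphabet size, so the 8-character cap is a genuine downgrade even if the drive’s AES itself is strong.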


So what’s the outcome?

Major laziness on my part came back to bite me. I should have checked the performance of the software-encrypted SSD after filling it with data and then deleting it, not just shortly after I encrypted it, and I should not have assumed that the online anecdotes and my initial benchmark were correct.

Lesson learned. I am peeling the egg off my face but enjoying my once-again speedy SSD.


The SanDisk SD Card Slowdown

Are you talking about how capacities aren’t growing as quickly as they did in the past?

No, but the rate of capacity doubling (from 2009) that existed until we reached 16 GB or so needed to stop, if only because most people with media players don’t use up that amount of space (especially since the advent of cloud-based players such as Spotify and Google Music), and photographers shouldn’t carry all of their photos on a single high-capacity storage card that could easily be swallowed, lost, or fail.

I am talking about the apparent slowdown in read/write rates of new high-capacity microSD cards. In reality, this is just an evaluation of the r/w rates of SD cards over the past 5 years.

That is a big claim… is there proof?

Yes! I recently bought an HTC HD2 and promptly installed both WP7 and Android on it. These two operating systems shine on the HD2 when given fast storage cards, so I had to evaluate the suitability of cards I had purchased over the years (these aren’t cherry-picked ‘review samples’ provided by manufacturers), including a new 16GB ‘Class 6’ SanDisk card.

What cards were tested?

These were all microSD cards, the majority of which were made by SanDisk:

  1. 16GB SanDisk Mobile Ultra, Class 6, 2012
  2. 2GB SanDisk, Class 2, 2010 (pack in with new phone)
  3. 6GB SanDisk, Class 4, Mid 2008 (Killed in Action)
  4. 2GB Verbatim,  Mid 2008
  5. 4GB SanDisk, Class 2,  Nov  2007


Not Pictured: The Verbatim 2GB card that was used to take this image

All of these cards are genuine and were bought from respected retail chains.

How were these tested?

The cards were formatted as FAT32 with default cluster settings using the Windows format tool. They were then tested using CrystalDiskMark (sequential) and H2testw (a tool used to detect counterfeit flash memory products), both with a 500 MB file size.

These tests were run on two different computers: a desktop using a Belkin USB card reader (2011) and a laptop with an inbuilt card reader. Both benchmarks were run twice on each computer and averaged (the raw numbers are in the attached spreadsheet).
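For readers who want a rough idea of what a sequential benchmark measures, here is a minimal sketch in Python. It is nowhere near as careful as CrystalDiskMark (no cache eviction, no queue-depth control), so treat it as an illustration of the method rather than a replacement for it:

```python
import os
import time

def sequential_write_mb_s(path: str, size_mb: int = 500) -> float:
    # Write size_mb of incompressible data sequentially and time it.
    # fsync ensures the data actually reaches the card, not just the OS cache.
    chunk = os.urandom(1024 * 1024)  # 1 MB chunk
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(size_mb):
            f.write(chunk)
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# Run twice and average, mirroring the methodology above
# (the drive letter is a placeholder for wherever the card is mounted):
# runs = [sequential_write_mb_s("E:/bench.tmp") for _ in range(2)]
# print(f"average: {sum(runs) / len(runs):.1f} MB/s")
```

Averaging repeated runs, as done above, smooths out one-off spikes from OS background activity.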



Testing Outcome



The results show the following ranking by performance (fastest first):

  1. 2008 SanDisk 6GB
  2. 2007 SanDisk 4GB
  3. 2008 Verbatim 2GB
  4. 2012 SanDisk 16GB
  5. 2010 SanDisk 2GB

So it seems that these cards ARE getting slower. This could be for many reasons, but I think either the newer classification system (Class 6, etc.) means that card manufacturers can develop cards that support only the bare minimum speeds required to satisfy a speed rating, or this performance reduction was made to provide more reliable operation of the cards.

On a side note, the 6GB SD card died during this testing. It is no longer recognised in ANY device, not even in some E-series Nokia phones that seem to be able to resurrect some locked cards from the grave. Was this due to a high-performance interface that was unreliable, or simple fatigue? I will likely never know.

In the end I used the 16GB card in my HD2, as holding more than 3 albums held too much appeal.

The raw data can be accessed in a Google spreadsheet at:

UPDATE: It appears to be a SanDisk problem; other people have noted and recorded similar speed drops.