Last week I received an email via the blog that I thought would be good to publish. Graham, a Diskeeper user from the UK, asked: "I have been advised that it is wrong to defrag an SSD hard drive. So is it safe to run Diskeeper now that I have a 128GB SSD in my computer?"
The popular theory that “there are no moving parts” does not lead to the conclusion that fragmentation is a non-issue. There is more behind the negative impact of fragmentation than the seek time and rotational latency of electro-mechanical drives. Most SSDs suffer from free space fragmentation due to inherent NAND flash limitations, and in more severe cases (more likely on servers) the OS overhead caused by fragmentation becomes a factor as well.
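For readers who want the intuition behind that NAND limitation, here is a minimal Python sketch. The accounting is purely illustrative (block size, reclaim order, and the free-space layouts are all assumptions, not how any particular SSD controller actually works): NAND is programmed in pages but erased only in whole blocks, so when free pages are scattered across partially used blocks, the controller has to relocate live pages before it can erase — the essence of write amplification.

```python
# Illustrative sketch only: NAND is programmed in pages but erased in whole
# blocks. Scattered free space forces the controller to copy surviving pages
# out of partially used blocks before erasing them (write amplification).

PAGES_PER_BLOCK = 64

def erases_needed(blocks, pages_to_write):
    """Count block erases needed to service a write, given per-block
    used-page counts. Hypothetical accounting; relocated pages are
    counted but their own writes are ignored for simplicity."""
    erases = 0
    relocated = 0                        # live pages copied out before erase
    for used in sorted(blocks):          # reclaim the emptiest blocks first
        if pages_to_write <= 0:
            break
        erases += 1
        relocated += used
        pages_to_write -= (PAGES_PER_BLOCK - used)
    return erases, relocated

# Two layouts with identical total free space (256 free pages across 8 blocks):
contiguous = [64, 64, 64, 64, 0, 0, 0, 0]   # free space consolidated
scattered  = [32] * 8                        # same free space, fragmented

for name, layout in [("contiguous", contiguous), ("scattered", scattered)]:
    e, r = erases_needed(layout, 256)
    print(f"{name}: {e} erases, {r} pages relocated")
# contiguous: 4 erases, 0 pages relocated
# scattered: 8 erases, 256 pages relocated
```

Same amount of free space in both cases, but the scattered layout costs twice the erases plus 256 relocated pages. Consolidating free space, as described below, pushes the drive toward the contiguous case.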
As always, the “proof is in the pudding”. Tests conclusively show you can regain lost performance by optimizing the file system (in Windows). We have run and published numerous tests (including one done with Microsoft), but so have many others in various tech forums, if you would prefer independent reviews.
In short, it is advisable to run an occasional consolidation of free space. How frequently you would want to run it depends on how active the system using the SSD is (how often files are written and deleted). It also depends on the SSD itself: a latest-generation 128GB SSD from a reputable vendor is going to be all-around better than a 16-32GB SSD from 2-3 years ago.
The HyperFast product (a $10 add-on to Diskeeper) is designed to consolidate free space when it is needed, without overdoing it. HyperFast is unique in that you never need to manually analyze it, manually run it, or even schedule it; it is smart enough to automatically know what to do and when. A common concern is that defragmentation can wear out an SSD. While that is unlikely unless the defragmenter is poorly written, the general premise is correct, and it is also something HyperFast takes into consideration by design.
Above pic: You can always add HyperFast at any time after your purchase of Diskeeper.
More reading:
Here are a few blogs we have done on SSDs.
While a bit dated, here is one product review.
Greetings,
I recently installed Diskeeper Enterprise 2011 on my Win7 64-bit laptop, and after optimizing I ran a few benchmarks. You can see the incredible results in the picture at the link below. How is that possible?
https://img16.imageshack.us/f/vertexlebench.jpg/
Yes, it is compatible. Previous testing of BitLocker and Diskeeper on HDDs showed compatibility and performance gains similar to systems without encryption. I’d expect that BitLocker would not change the performance value of HyperFast (for better or worse), though we have not tested it specifically.
Tests on encrypted drives:
downloads.diskeeper.com/…/…Defragmentation.pdf
Is HyperFast compatible with BitLocker and/or other software-based full disk encryption schemes? If so, what performance value has been realized by HyperFast in this environment?
Hi Mike,
HyperFast automatically detects SSDs and runs the appropriate optimization routines, so there is no need to change any settings.
What is the proper setting for Automatic Defragmentation with HyperFast enabled?
OS overhead is not something you’re likely to notice on desktops/laptops. It may be noticeable if you are doing A/V rendering, compiling code, etc. On servers/VMs it can be an issue, especially if the hardware is shared. On really high-IOPS flash storage systems (e.g. Violin, FusionIO) it can be a major bottleneck. The thing with fragmentation is that it always depends on how much of it you have, and what hardware you have to handle it. More and faster hardware (i.e. CPU, RAM, storage) can mitigate the symptoms, but at a hefty price. If you could cache/buffer every bit of data in the CPU (or high-speed RAM), and only write data to non-volatile storage to prevent loss in the event of a power outage, you would not need to worry about file system/OS overhead. (For a simple illustration of where that overhead comes from, see the sketch after this reply.)
Yes, HyperFast defragments files to some degree (if they are in bad shape).
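To illustrate the OS overhead point above, here is a simplified Python sketch. The one-request-per-extent model is an assumption for illustration (real NTFS request mapping is more involved): a read that spans many small extents is split into many separate I/O requests, each paying a fixed setup cost in the OS stack regardless of how fast the underlying media is.

```python
# Simplified model: the file system issues (at least) one I/O request per
# file extent a read crosses, so the same logical read costs far more
# requests on a badly fragmented file.

def io_requests(extents, offset, length):
    """extents: extent sizes in bytes, in logical file order.
    Returns how many separate requests a read of `length` bytes
    starting at `offset` is split into."""
    requests, pos = 0, 0
    for size in extents:
        start, end = pos, pos + size
        if end > offset and start < offset + length:
            requests += 1          # this extent overlaps the read range
        pos = end
    return requests

MB = 1024 * 1024
contiguous = [100 * MB]            # one 100 MB extent
fragmented = [64 * 1024] * 1600    # the same 100 MB in 64 KB extents

print(io_requests(contiguous, 0, 100 * MB))  # 1 request
print(io_requests(fragmented, 0, 100 * MB))  # 1600 requests
```

Faster CPUs and storage shrink the per-request cost, which is why the overhead mostly shows up on busy servers and shared hardware rather than desktops.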
Michael, how significant is the OS overhead from fragmentation that you mentioned? Does HyperFast defragment files (in addition to consolidating free space) in order to reduce such overhead?