Low Free Space Impact on Storage Performance

In the last few months, I was approached by three different companies that were frantic because they were experiencing an EXTREME fragmentation issue. Not just files that were in hundreds or thousands of fragments, but in millions! Of course, this was causing performance issues, but even worse, the fragmentation levels on one or two files were reaching a Windows file system limitation that could prevent any more data from being written to those files. That would have been catastrophic, as it would shut down the applications using that data. A couple of these were medical facilities. Luckily, we were able to assist in getting these resolved with our technology, but I will save that story for another time.

The Common Factor

The common factor found at all three sites is quite interesting. In each case, the problem occurred on volumes with extremely low free space. In fact, one had a 1.4TB volume with less than 1% free space. Now, 1% of 1.4TB is about 14GB of free space, and that may sound like a lot, but as volumes get larger, like this 1.4TB volume, more and larger files get placed on them, so 14GB is small in comparison to the data being written.

With a smaller amount (and percentage) of free space on a volume, the free space itself is more likely to become fragmented, and to fragment at a higher rate. Once the free space consists of many small pieces, files that get created or extended are more likely to fragment as well, and at a higher rate, because each new allocation has to be spread across whatever small pieces of free space remain. This increase in file fragmentation is what leads to the extreme cases described above.
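To make that mechanism concrete, here is a small toy-allocator sketch in Python. It is purely an illustration under simplified assumptions (it is not how NTFS actually allocates clusters, and the extent sizes are made up); it only shows that the same file ends up in far more fragments when free space exists only as small scattered pieces.

# Toy model: how fragmented free space forces a new file into many fragments.
# This is an illustration only, not an NTFS allocator.

def fragments_needed(free_extents_mb, file_size_mb):
    """Greedily place a file into the available free extents and
    return how many fragments (extents) it ends up split across."""
    remaining = file_size_mb
    fragments = 0
    for extent in sorted(free_extents_mb, reverse=True):  # largest extents first
        if remaining <= 0:
            break
        fragments += 1
        remaining -= min(extent, remaining)
    if remaining > 0:
        raise ValueError("Not enough free space for the file")
    return fragments

# Plenty of contiguous free space: a 2GB file fits in one piece.
print(fragments_needed([50_000], file_size_mb=2_000))      # -> 1

# Same total free space, but fragmented into 4MB pieces:
# the same 2GB file is forced into hundreds of fragments.
print(fragments_needed([4] * 12_500, file_size_mb=2_000))  # -> 500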

Recommended Free Space for Storage Performance

For storage performance, Condusiv recommends maintaining at least 20% free space to help ensure there are contiguous pieces of free space. This is not just to prevent the extreme fragmentation described above, but to help maintain the performance of your valuable storage infrastructure. A while back, we did some testing and found that at the 50% free space mark, results started to show some throughput degradation. By the time free space dropped to the 20% mark, user complaints started coming in. Hence our 20% recommendation.
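If you want a quick way to keep an eye on this, here is a minimal Python sketch that flags volumes running low on free space. The drive letters are just example placeholders, and the thresholds simply mirror the 20% and 50% figures above; adjust both for your environment.

# Minimal free-space check using only the standard library.
# VOLUMES, WARN_PCT, and CRITICAL_PCT are example values; adjust as needed.
import shutil

VOLUMES = ["C:\\", "D:\\"]   # example drive letters
WARN_PCT = 50                # below this, keep an eye on the volume
CRITICAL_PCT = 20            # below this, performance problems are likely

for vol in VOLUMES:
    usage = shutil.disk_usage(vol)
    free_pct = usage.free / usage.total * 100
    if free_pct < CRITICAL_PCT:
        status = "CRITICAL"
    elif free_pct < WARN_PCT:
        status = "warning"
    else:
        status = "ok"
    print(f"{vol} {free_pct:5.1f}% free ({usage.free / 1e9:.1f} GB) - {status}")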

This applies to both HDDs and SSDs. With high free space fragmentation, we have found that writes are more likely to occur as many small random I/Os rather than larger, more efficient sequential I/Os; for example, 10 small random I/Os instead of a single sequential I/O to write out the same amount of data. That hurts write performance on both HDDs and SSDs.
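To get a feel for the difference between those two write patterns, here is a rough Python sketch that writes the same payload once as a single sequential write and once as many small writes at shuffled offsets. It only illustrates the I/O pattern; it is not a rigorous benchmark, and the gap you see will vary widely with the device, file system, and caching.

# Illustrative sketch only: same payload written as one sequential write
# versus many small writes at scattered offsets. Not a rigorous benchmark.
import os
import random
import time

TOTAL_BYTES = 64 * 1024 * 1024      # 64 MB payload
CHUNK = 64 * 1024                   # 64 KB per small write
payload = os.urandom(TOTAL_BYTES)

def sequential_write(path):
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)            # one large, contiguous write
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

def scattered_writes(path):
    offsets = list(range(0, TOTAL_BYTES, CHUNK))
    random.shuffle(offsets)         # write the same data at random offsets
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.truncate(TOTAL_BYTES)
        for off in offsets:
            f.seek(off)
            f.write(payload[off:off + CHUNK])
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

print(f"sequential: {sequential_write('seq.bin'):.2f}s")
print(f"scattered : {scattered_writes('rand.bin'):.2f}s")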

Maintain at least 20% free space on your volumes – better yet, up to 50% – to help maintain the performance you expect from your storage.

 

Learn more about optimizing your storage performance:

Do SSDs Degrade Over Time?
Why Faster Storage May NOT Fix It
How to make NVMe storage even faster
How To Find Out If Your Servers Have an I/O Performance Problem
Myriad of Windows Performance Problems Traced to a Single Source
I/Os Are Not Created Equal – Random I/O versus Sequential I/O