Diskeeper (data performance for physical systems) and V-locity (optimization for virtual systems) are designed to deliver performance, reliability, longer life, and energy savings. The performance gains and energy savings from our software are relatively easy to test and validate empirically. Longer life comes from minimizing wear and tear on hard drives (improving MTTF) and providing an all-around better experience for users, so they can stay productive on aging equipment rather than requiring frequent hardware refreshes.
Reliability is far more difficult to pinpoint, as the variables involved are difficult, if not impossible, to isolate in test cases. We have overwhelming anecdotal evidence from customer surveys, studies, and success stories that application hangs, freezes, crashes, and the like are remedied or reduced with Diskeeper and/or V-locity.
However, there is a reliability "hard ceiling" in the NTFS file system: a point at which a file's fragments, and the attributes needed to describe them, become so numerous that reliability is jeopardized. In NTFS, files that hit the proverbial "fan" and spray out into hundreds of thousands or even millions of fragments leave a mess that is, well… stinky.
In short, fragmentation can become so severe that it can ultimately result in data loss or corruption. A Microsoft Knowledge Base article describes this phenomenon; I've posted it below for reference:
A heavily fragmented file in an NTFS file system volume may not grow beyond a certain size because of an implementation limit in the structures that are used to describe the allocations.
In this scenario, you may experience one of the following issues:
When you try to copy a file to a new location, you receive the following error message:
In Windows Vista or in later versions of Windows:
The requested operation could not be completed due to a file system limitation
In versions of Windows that are earlier than Windows Vista:
Insufficient system resources exist to complete the requested service
When you try to write to a sparse file, Microsoft SQL Server may log an event in the Application log that resembles the following:
In Windows Vista or in later versions of Windows:
Event Type: Information
Event Source: MSSQLSERVER
Description: …
665 (The requested operation could not be completed due to a file system limitation.) to SQL Server during write at 0x000024c8190000, in filename…
In versions of Windows that are earlier than Windows Vista:
Event Type: Information
Event Source: MSSQLSERVER
Description: …
1450 (Insufficient system resources exist to complete the requested service.) to SQL Server during write at 0x000024c8190000, in file with handle 0000000000000FE8 …
When a file is very fragmented, NTFS uses more space to save the description of the allocations that is associated with the fragments. The allocation information is stored in one or more file records. When the allocation information is stored in multiple file records, another structure, known as the ATTRIBUTE_LIST, stores information about those file records. The number of ATTRIBUTE_LIST_ENTRY structures that the file can have is limited.

We cannot give an exact file size limit for a compressed or a highly fragmented file. An estimate would depend on using certain average sizes to describe the structures. These, in turn, determine how many structures fit in other structures. If the level of fragmentation is high, the limit is reached earlier. When this limit is reached, you receive the following error message:
Windows Vista or later versions of Windows:
STATUS_FILE_SYSTEM_LIMITATION: The requested operation could not be completed due to a file system limitation.

Versions of Windows that are earlier than Windows Vista:
STATUS_INSUFFICIENT_RESOURCES: Insufficient system resources exist to complete the requested service.

Compressed files are more likely to reach the limit because of the way the files are stored on disk. Compressed files require more extents to describe their layout. Also, decompressing and compressing a file increases fragmentation significantly. The limit can be reached when write operations occur to an already compressed chunk location. The limit can also be reached by a sparse file. This size limit is usually between 40 gigabytes (GB) and 90 GB for a very fragmented file.
WORKAROUND
For files that are not compressed or sparse, the problem can be lessened by running Disk Defragmenter. Running Disk Defragmenter will not resolve this problem for compressed or sparse files.
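The arithmetic behind that 40–90 GB figure can be sketched roughly. The constants below (1 KB MFT file records, an assumed per-record overhead, ~8 bytes per extent run, ~32 bytes per ATTRIBUTE_LIST_ENTRY, an assumed 256 KB cap on the attribute list, and 64 KB compression units) are illustrative assumptions, not documented NTFS limits; the KB article itself says no exact limit can be given. This is a minimal sketch of how the structures nest, under those assumptions:

```python
# Back-of-envelope estimate of the NTFS fragment-count ceiling.
# All constants are illustrative assumptions; the KB article states
# that no exact limit can be given.

FILE_RECORD_SIZE = 1024        # bytes per MFT file record (typical)
RECORD_OVERHEAD = 300          # header + fixed attribute overhead (assumed)
BYTES_PER_RUN = 8              # avg size of one mapping pair / extent run (assumed)
ATTR_LIST_ENTRY_SIZE = 32      # avg ATTRIBUTE_LIST_ENTRY size (assumed)
ATTR_LIST_MAX = 256 * 1024     # assumed cap on the attribute list's size
COMPRESSION_UNIT = 64 * 1024   # 16 clusters x 4 KB; worst case, a compressed
                               # file gets one fragment per compression unit

# Extent runs that fit in a single child file record:
runs_per_record = (FILE_RECORD_SIZE - RECORD_OVERHEAD) // BYTES_PER_RUN

# Child file records the attribute list can reference:
max_records = ATTR_LIST_MAX // ATTR_LIST_ENTRY_SIZE

# Total fragments describable before the structural ceiling is hit:
max_fragments = runs_per_record * max_records

# If every compression unit becomes its own fragment, the largest
# file that can still be described:
max_file_bytes = max_fragments * COMPRESSION_UNIT

print(f"fragments per record: {runs_per_record}")
print(f"max child records:    {max_records}")
print(f"max fragments:        {max_fragments:,}")
print(f"ceiling for a fully fragmented compressed file: "
      f"{max_file_bytes / 2**30:.0f} GB")
```

With these assumed sizes the estimate lands around 45 GB, which sits inside the 40–90 GB range quoted above; nudging the assumed overheads and entry sizes moves the result across that band, which is exactly why the KB article declines to state an exact limit.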