One of the topics a reader requested be covered in the Diskeeper blog was "what makes I-FAAST different from other file placement/sequencing strategies available on the market?" Don't let the title of this entry mislead you: this blog entry is not intended to be a head-to-head, which-is-better comparison. I think that would be unprofessional, and any comparison I made would obviously be biased (hmmm… I wonder who I would pick?), so it would amount to nothing more than self-serving benchmarketing. I will also steer clear of making "assumptions" (for those familiar with the expression).

What I will do is present the facts as I understand them, and let you make the decisions. I'm admittedly not an expert on the file ordering strategies other products use, so I suggest you confirm behavior/design with their vendors. Diskeeper's number one competitor is the product we gave to Microsoft for inclusion in Windows back in the late 90's. The Diskeeper product has enjoyed market dominance akin to that of Microsoft's Windows OS in the desktop space. The biggest challenge we face is educating the public on what fragmentation is and what it does to file system performance (much as many of you probably do regularly for co-workers, friends, relatives, etc. about computers in general). Therefore, most of my knowledge centers on Windows and Diskeeper and how they relate to performance (not on what some other file system tool is doing). On that note, there are many other Windows file system experts, but very few are experts on I-FAAST or Diskeeper, so if you ever have a question, just ask me!

For the record, I'm a very careful buyer, so I need proof that product claims are legitimate. For example, I never believe a car manufacturer's reported miles-per-gallon estimates. Maybe I drive like a maniac, but my dear sweet old grandma apparently had more of a lead foot than the drivers hired to gather those MPG numbers!

As I mentioned, while I do know I-FAAST, I’m also fairly well versed in NTFS. I’ll present added depth on these topics and how they relate to the topic at hand.

As a prerequisite, I strongly suggest reading the following brief and relatively easy-to-read paper to better understand the file write behavior of NTFS:

You can watch a flash video of this document in the FAQ section of the Diskeeper Multimedia Tour in the chapter titled “How does fragmentation occur?”.

The suggested reading/viewing will provide you the necessary background to better gauge the value of given file arrangement strategies.

It is also very important to note that NTFS makes an effort to write new iterations of a given file near its previous allocation. That's not directly covered in the documents above, but it is key when discussing file placement for the express purpose of reducing future fragmentation.

Placing files into certain logical regions of the volume by modification date or usage is done by several vendors, including Diskeeper (based on frequency of use, not on a file attribute). Some vendors claim that this minimizes future defragmentation time and effort, since resources can be focused elsewhere, and those claims are probably valid. However, this raises another point: defrag algorithms need not rely on placing files in particular locations in order to ignore unchanged data and concentrate on new fragmentation. That one vendor accomplishes this by moving files around doesn't mean another cannot do so without having to move files "out of the way". And if the speed of non-manual defragmentation is deemed important, does that suggest that whatever form of automation is offered must complete quickly to "get out of the way" because it interferes with system use? Ah, but I have digressed…
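To make the frequency-of-use idea concrete, here is a minimal sketch of how files might be split into "frequently used" and "rarely used" groups from observed access counts. The file names, log format, and threshold are all hypothetical illustrations, not Diskeeper's actual implementation:

```python
from collections import Counter

# Hypothetical access log: one entry per file open event.
access_log = [
    "pagefile.sys", "outlook.pst", "outlook.pst", "report.docx",
    "outlook.pst", "report.docx", "setup.log",
]

def rank_by_frequency(log, hot_threshold=2):
    """Split files into 'hot' (frequently used) and 'cold' groups by
    simple access counts -- a stand-in for the frequency-of-use
    tracking described in the post."""
    counts = Counter(log)
    hot = [f for f, n in counts.items() if n >= hot_threshold]
    cold = [f for f, n in counts.items() if n < hot_threshold]
    return hot, cold

hot, cold = rank_by_frequency(access_log)
# hot files would be candidates for sequencing; cold files left alone
```

A real implementation would weight recency and I/O volume as well as raw counts, but the principle is the same: group by observed behavior, not by a static file attribute.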

As for positioning free space into some geographic location on the volume: perhaps the craziest thing I've read about defragmentation strategy is that every file on a disk is moved around to defragment one file with an extent (fragment) located at the front of the disk, tightly packed in with other files, and another extent somewhere else in a pool of free space. A really, really, REALLY bad algorithm might do that, but I've yet to see one that ridiculous.

Now, with the understanding that new iterations of existing files are likely to be written near the original version of that same file, if all the files that change frequently are grouped together, that region of the disk would incur dramatic and constant file and free space fragmentation. I could then argue that because all the files that regularly change are intermingled, all the "small" free spaces left behind by other changed files would be deemed "nearby" and therefore be even more likely to be considered best-fit free space candidates for a newly modified file (i.e., a file the defrag program deems frequently used). That is hypothetical, in the same way that placing a large chunk of free space near this region to reduce re-fragmentation is. The point is that I can make a reasonable argument for why it might not work. I may well be wrong, but we do know that a defragmenter cannot control where the OS decides to write files. The proof is in the pudding, so ask the vendor for a test case, independent verification/analysis, etc.
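The best-fit argument above can be illustrated with a toy allocator. The free-space map and cluster numbers below are invented for illustration; the point is only that a best-fit policy prefers a small nearby gap over a large distant pool, so a churn region can keep fragmenting its own free space:

```python
def best_fit(free_extents, size):
    """Pick the smallest free extent that satisfies the request --
    the classic best-fit policy the paragraph speculates about.
    Each extent is (start_cluster, length)."""
    candidates = [e for e in free_extents if e[1] >= size]
    if not candidates:
        return None
    return min(candidates, key=lambda e: e[1])

# Hypothetical free-space map: several small gaps left by churned
# files near the front, one large region toward the back.
free_extents = [(100, 8), (220, 6), (310, 9), (5000, 4000)]

# A 6-cluster rewrite lands in a small gap among the churned files,
# not in the big pool at cluster 5000.
chosen = best_fit(free_extents, 6)  # -> (220, 6)
```

Whether NTFS behaves this way in a given scenario is exactly the kind of claim the post says should be verified with a test case, since the allocator is the OS's, not the defragmenter's.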

Remember that files are written to disk via a write cache using a lazy write method. That cache can be flushed to disk either by filling up or by a forced command from the application writing the file. The lazy writer will routinely, about once per second, queue an amount of data to be written to disk, throttling the write operation when it determines it may negatively affect performance. The write-back cache with lazy write allows for relatively consolidated and unobtrusive file writes, but can consequently still create fragmentation. It is, and always has been, a trade-off.
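The flush-on-fill and forced-flush behaviors described above can be sketched as a toy model. The class, capacity, and block names are illustrative only; the real Windows cache manager is far more sophisticated (throttling, per-file mapping, etc.):

```python
class WriteBackCache:
    """Toy model of a write-back cache with lazy flushing: writes
    accumulate in memory and reach 'disk' either when the cache
    fills or when a flush is forced (e.g., the periodic lazy writer
    or an application-issued flush)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.pending = []  # dirty buffers not yet on disk
        self.disk = []     # what has actually been written out

    def write(self, block):
        self.pending.append(block)
        if len(self.pending) >= self.capacity:
            self.flush()   # cache filled up -> write back

    def flush(self):
        # Forced command or lazy-writer pass: push dirty data to disk.
        self.disk.extend(self.pending)
        self.pending.clear()

cache = WriteBackCache(capacity=3)
for block in ["A1", "A2", "B1", "B2"]:
    cache.write(block)
# "A1".."B1" hit disk when the cache filled; "B2" waits for the
# next lazy-writer pass or a forced flush.
cache.flush()
```

Notice that blocks from different files ("A" and "B") go to disk in one batch, which is exactly how consolidated lazy writes can still interleave files on disk and create fragmentation.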

Other blog entries on I-FAAST have described what I-FAAST is and how it works, so I won't duplicate that info here, but I will clear up a few other misconceptions I've heard about the technology. I-FAAST maintains consistency of the XP boot optimization zone (a technology Diskeeper co-developed with Microsoft). It also optimizes a large chunk of free space near the front of the volume, adjacent to where the frequently accessed files reside. That free space chunk is placed specifically so that new file writes can be accelerated. Note that I use the word "can", as no defrag vendor can control or wholly predict the NTFS algorithms for new file writes. However, if the file being written is a modification of an existing file that I-FAAST deems frequently used, then there is an increased likelihood that it will be written in that free space segment and not toward the back of the volume. And not to beat a dead horse, but a claim that fragmentation of new file writes will be reduced, while possible to make (after all, it might happen), is impossible to guarantee.

Arranging files alphabetically is another strategy. Read for a good overview of how NTFS accesses a file. If you want a good idea of what files Windows accesses in the boot up process, and in what relative order, open the layout.ini file on any XP system with any text editor. That should provide better insight.
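If you want to inspect that boot order programmatically rather than in a text editor, a small sketch like the following works. The section name and sample paths reflect the typical layout.ini format on XP, but treat them as assumptions and check your own system's file:

```python
def boot_file_order(text):
    """Return boot-time file paths, in order, from layout.ini-style
    text, skipping the section header and blank lines."""
    return [ln.strip() for ln in text.splitlines()
            if ln.strip() and not ln.strip().startswith("[")]

# Sample in the format layout.ini typically uses: a section header
# followed by one full path per line, in relative access order.
sample = """[OptimalLayoutFile]
C:\\WINDOWS\\SYSTEM32\\NTOSKRNL.EXE
C:\\WINDOWS\\SYSTEM32\\HAL.DLL
"""
order = boot_file_order(sample)
```

On a live XP machine you would read the real file (commonly `C:\Windows\Prefetch\Layout.ini`, often UTF-16 encoded) and pass its contents to the same function; the resulting order shows you what Windows touches during boot, independent of any alphabetical arrangement.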

The “last modified” file attribute is another best-guess approach. Given that the Indianapolis Colts of the NFL are undefeated 8 weeks into the season, I could make a reasonable assertion that they will win the Super Bowl. You could certainly raise valid arguments against that assumption, and be very right. It's a guess based on some data, but it is imperfect. A single data point – that a file was changed, created, or accessed recently – does not provide any reasonable indication that it will change or be accessed again (e.g., many of the files associated with a Windows service pack). Perhaps the only benefit here is to move files that have not recently changed or been accessed elsewhere. But again, to what end? So that a future defrag can run quicker?

What about file strategy patents, you may ask? Simply because a product employs patented technology does not guarantee it is valuable; it just means it does something unique. I could invent octagonal tires, but I don't think anyone would want to drive to work on them :). Keep in mind that patents are published and available to read online – you should investigate. You can find out when a patent was written (e.g., in the 80's), and learn whether it was filed against the Windows NT platform or designed for the current version of NTFS (NTFS has changed dramatically since NT4). You may also want to verify that what is defined in the patent is still what is done in the product today. Is it still relevant? If the patent relied on the rote attribute of last accessed time, is that still the current design? All are good points to investigate.

Cool-sounding theory captures buyer interest, but it still has to prove itself in practice, or it's relegated to an undelivered promise or some idea that should have stayed on a napkin. If a vendor makes a claim, ask them to provide tangible proof! It's your money, so I think that's a fair request.

And now for the differences:

Now let's give the benefit of the doubt and assume (OK, I am making an assumption after all) that strategies to minimize future defragmentation run times work, and that future fragmentation is somehow mitigated. Now what? Well, I-FAAST increases file access speed. That is very different from the expressed purpose(s) of the other technologies, so you can't really compare them anyway. And remember that I-FAAST and Diskeeper's standard defragmentation run in real time, so they address fragmentation near-immediately (right after/soon after a file leaves memory). The standard real-time defragmentation is aware of I-FAAST file sequencing and will not undo its efforts.

You may hear that defrag is an I/O-intensive process. While it is true that I/O activity must occur (in order to prevent excess I/O to file fragments in the future), that operation need not be intrusive. That is what InvisiTasking solves. While it addresses all major computer resources, it also eliminates the interference of defrag overhead with respect to disk I/O. Yes, I-FAAST will move some files, but it does not shuffle them around regularly – it is an intelligent technology. Sequenced files are moved only if their usage frequency changes relative to other data on the volume.

I-FAAST is one of the technologies that allows Diskeeper to call itself more than a defragmenter and to raise the bar to a file system performance application. As I regularly mention, I-FAAST delivers what is promised – admittedly, sometimes it's only a few percent faster file access, but it is genuine. There are no best-guess efforts with this technology; it either works or it doesn't, and it tells you exactly what it will provide.

I'll end this blog with the comment, for the third time [I think I've made my point :)], that you should always go to the manufacturer/developer to learn more about how a technology works. The manufacturers are there to help you, and they are the best resource to answer your questions. Hopefully I've provided some data that will help you make informed decisions about Diskeeper, and some things to look for when evaluating other technologies. I believe, as I stated in a previous blog, that there are many good options on the market these days. Most any defragmenter is going to improve your computer's performance. Choosing a third-party solution is likely to offer additional benefits/performance and reduced overhead (especially on a business network). It's up to you to determine which strategy is more valuable. It's great in our mostly free-trade world economy to have a choice. And while I'm partial to Diskeeper, ultimately the decision rests with you – the customer. And, as I firmly believe, the customer is always right. You made, or will make, a decision for your own very valid reasons – whatever they may be.