While most users are familiar with the main Diskeeper®/V-locity®/SSDkeeper™ Dashboard view which focuses on the number of I/Os eliminated and Storage I/O Time Saved, the I/O Performance Dashboard tab takes a deeper look into the performance characteristics of I/O activity. The data shown here is similar in nature to other Windows performance monitoring utilities and provides a wealth of data on I/O traffic streams.
By default, the information displayed is from the time the product was installed. You can easily filter this down to a different time frame by clicking on the “Since Installation” picklist and choosing a different time frame such as Last 24 Hours, Last 7 Days, Last 30 Days, Last 60 Days, Last 90 Days, or Last 180 Days. The data displayed will automatically be updated to reflect the time frame selected.
The first section of the display above is labeled as “I/O Performance Metrics” and you will see values that represent Average, Minimum, and Maximum values for I/Os Per Second (IOPS), throughput measured in Megabytes per Second (MB/Sec) and application I/O Latency measured in milliseconds (msecs). Diskeeper, V-locity and SSDkeeper use the Windows high performance system counters to gather this data and it is measured down to the microsecond (1/1,000,000 second).
While most people are familiar with IOPS and throughput expressed in MB/Sec, I will give a short description of each just to make sure we are on the same page.
IOPS is the number of I/Os completed in one second of time, counting both read and write operations. MB/Sec reflects the amount of data being worked on and passed through the system. Taken together, they represent speed and throughput efficiency.

One thing I want to point out is that the Latency value shown in the report above is not measured at the storage device; it is a much more accurate reflection of I/O response time at the application level. This is where the rubber meets the road. Each I/O that passes through the Windows storage driver has a start and a completion time stamp. The difference between these two values is the real-world elapsed time for an I/O to complete and be handed back to the application for further processing. Measurements at the storage device do not account for network, host, and hypervisor congestion, so our Latency value is far more meaningful than typical hardware counters for I/O response time or latency.

In this display, we also provide meaningful data on the percentage of I/O traffic that is reads versus writes. This helps to better gauge which of our technologies (IntelliMemory® or IntelliWrite®) is likely to provide the greatest benefit.
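To make these metrics concrete, here is a minimal Python sketch that derives IOPS, MB/Sec, average latency, and read percentage from per-I/O start and completion timestamps. This is an illustration only, not the product's actual implementation; the record format and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class IORecord:
    start_us: int      # start timestamp, in microseconds
    complete_us: int   # completion timestamp, in microseconds
    bytes_moved: int   # bytes transferred by this I/O
    is_read: bool      # True for a read, False for a write

def summarize(records, window_seconds):
    """Compute IOPS, MB/Sec, average latency (ms), and read % over a window."""
    iops = len(records) / window_seconds
    mb_per_sec = sum(r.bytes_moved for r in records) / window_seconds / (1024 * 1024)
    # Latency is completion minus start, i.e. real elapsed time at the
    # application level, converted from microseconds to milliseconds.
    latencies_ms = [(r.complete_us - r.start_us) / 1000.0 for r in records]
    avg_latency_ms = sum(latencies_ms) / len(latencies_ms)
    read_pct = 100.0 * sum(r.is_read for r in records) / len(records)
    return iops, mb_per_sec, avg_latency_ms, read_pct
```

Note that the latency here comes purely from the two timestamps around each I/O, so it automatically includes any network, host, or hypervisor delay between them.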
The next section of the display measures the “Total Workload” in terms of the amount of data accessed for both reads and writes as well as any data satisfied from cache.
Systems with higher workloads than other systems in your environment are the ones that likely have higher I/O traffic, tend to contribute more to the I/O blender effect when connected to shared SAN storage or a virtualized environment, and are prime candidates for the extra I/O capacity relief that Diskeeper, V-locity and SSDkeeper provide.
Now moving into the third section of the display, labeled “Memory Usage”, we see measurements of the Total Memory in the system and the total amount of I/O data that has been satisfied from the IntelliMemory cache. The purpose of our patented read caching technology is twofold: to satisfy frequently repeated read requests from cache, and to identify the small read operations that tend to cause excessive “noise” in the I/O stream to storage and satisfy those from cache as well. So it’s not uncommon for the ratio of “Data Satisfied from Cache” to “Total Workload” to be a bit lower than with other types of caching algorithms. Storage arrays tend to do quite well when handed large sequential I/O traffic but choke when small random reads and writes are part of the mix. Eliminating I/O traffic from going to storage is what it’s all about: the fewer I/Os that reach storage, the faster and more data your applications will be able to access.
In addition, we show the average, minimum, and maximum values for the free memory used by the cache. For each of these values, the corresponding Total Free Memory in Cache for the system is shown (Total Free Memory is the memory used by the cache plus the memory reported by the system as free). The memory values are displayed in yellow if the size of the cache is being severely restricted by the current memory demands of other applications, preventing our product from providing maximum I/O benefit. The memory values are displayed in red if the Total Memory is less than 3GB.
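The color rules just described can be sketched as a simple decision function. This is a hypothetical reconstruction for illustration; the `restricted_fraction` threshold for deciding when the cache counts as “severely restricted” is an assumption, not the product’s actual rule.

```python
def memory_status_color(total_memory_gb: float, cache_gb: float,
                        total_free_gb: float,
                        restricted_fraction: float = 0.1) -> str:
    """Return the display color for the Memory Usage section (illustrative)."""
    # Red: less than 3GB of Total Memory, per the dashboard rule above.
    if total_memory_gb < 3:
        return "red"
    # Yellow: the cache holds only a small share of Total Free Memory,
    # suggesting other applications are squeezing it out.
    if total_free_gb > 0 and cache_gb / total_free_gb < restricted_fraction:
        return "yellow"
    return "normal"
```

For example, a 2GB machine would always show red, while a 16GB machine whose cache has been squeezed down to a sliver of free memory would show yellow.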
Read I/O traffic, which is potentially cacheable, can receive an additional benefit by adding more DRAM for the cache and allowing the IntelliMemory caching technology to satisfy a greater amount of that read I/O traffic at the speed of DRAM (10-15 times faster than SSD), offloading it away from the slower back-end storage. This would have the effect of further reducing average storage I/O latency and saving even more storage I/O time.
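As a back-of-the-envelope illustration of why a higher cache hit rate lowers average read latency, consider a simple blended-latency formula. The latency figures below are assumptions chosen only for illustration, not measured values for any particular hardware.

```python
def effective_read_latency_us(hit_rate: float,
                              cache_latency_us: float = 10.0,
                              storage_latency_us: float = 120.0) -> float:
    """Blend cache and storage latency by cache hit rate (illustrative).

    hit_rate is the fraction of reads satisfied from the DRAM cache;
    the remainder go to the slower back-end storage.
    """
    return hit_rate * cache_latency_us + (1.0 - hit_rate) * storage_latency_us
```

With these assumed numbers, raising the hit rate from 0% to 50% roughly halves the average read latency, which is the effect adding DRAM to the cache is meant to amplify.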
Additional Note: For machines running SQL Server or Microsoft Exchange, you will likely need to cap the amount of memory that those applications can use (if you haven’t done so already), to prevent them from ‘stealing’ any additional memory that you add to those machines.
It should be noted that the IntelliMemory read cache is dynamic and self-learning. This means you do not need to pre-allocate a fixed amount of memory to the cache or run a pre-assessment tool or discovery utility to determine what should be loaded into cache. IntelliMemory will only use memory that is otherwise free and unused for its cache, and it will always leave plenty of memory untouched (1.5GB – 4GB, depending on total system memory) and available for Windows and other applications to use. As demand for memory rises, IntelliMemory releases memory from its cache back to Windows so there will not be a memory shortage. There is further intelligence in the IntelliMemory caching technology that knows, in real time, precisely what data should be in cache at any moment and the relative importance of the entries already in the cache. The goal is to ensure that the data maintained in the cache delivers the maximum possible reduction in read I/O traffic.
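A dynamic, self-sizing cache of this kind can be sketched roughly as follows. The sizing rule and the 10% scaling factor are hypothetical, chosen only so the reserve lands in the 1.5GB – 4GB headroom range mentioned above; the real product’s logic is not public.

```python
def target_cache_size_gb(total_gb: float, free_gb: float) -> float:
    """Illustrative sizing rule: use otherwise-free memory for the cache,
    but always leave a headroom reserve untouched for Windows and apps."""
    # Scale the reserve with total system memory, clamped to 1.5GB - 4GB.
    reserve_gb = min(4.0, max(1.5, total_gb * 0.1))
    # If free memory shrinks (other apps demand it), the target shrinks too,
    # which is what forces the cache to release memory back to Windows.
    return max(0.0, free_gb - reserve_gb)
```

Called periodically, a function like this makes the cache grow when memory is plentiful and give memory back as soon as other applications need it, so the system never starves.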
So, there you have it. I hope this deeper dive provides better clarity into the benefits and internal workings of Diskeeper, V-locity and SSDkeeper as they relate to I/O performance and memory management.
You can download a free 30-day, fully functioning trial of our software and see the new dashboard here: www.condusiv.com/try