You might be responsible for a busy SQL server, for example, or a Web Server; perhaps a busy file and print server, the Finance Department’s systems, documentation management, CRM, BI, or something else entirely.
Now, think about WHY these are the workloads that you care about the most.
Were YOU responsible for installing the application running the workload for your company? Is the workload business-critical, or considered TOO BIG TO FAIL?
Or is it simply because users, or even worse, customers, complain about performance?
If the last question made you wince because you know that YOU are responsible for some of the workloads running in your organisation that would benefit from additional performance, please read on. This article is just for you, even if you don’t consider yourself a “Techie”.
Before we get started, you should know that there are many variables that can affect the performance of the applications that you care about the most. The slowest, most restrictive of these is referred to as the “Bottleneck”. Think of water being poured from a bottle. The water can only flow as fast as the neck of the bottle, the ‘slowest’ part of the bottle.
Don’t worry though; in a computer, the bottleneck will pretty much always fit into one of the following categories: CPU, Memory (RAM), Disk (storage), or Network.
The good news is that if you’re running Windows, it is usually very easy to find out which category your bottleneck falls into, and here is how to do it (like an IT Engineer):
• Open Resource Monitor by clicking the Start menu, typing “resource monitor”, and pressing Enter. Microsoft includes this as part of the Windows operating system and it is already installed.
• Do you see the graphs in the right-hand pane? When your computer is running at peak load, or users are complaining about performance, which of the graphs are ‘maxing out’?
This is a great indicator of where your workload’s bottleneck is to be found.
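If you prefer to reason about it in code rather than by eyeballing graphs, the logic is simply “whichever resource is closest to 100% of its capacity is your likely bottleneck”. Here is a minimal sketch of that idea; the utilisation figures below are made-up placeholders, and you would substitute the peak-load readings you observed in Resource Monitor’s graphs:

```python
# Hypothetical peak-load utilisation readings (percent of capacity),
# as read off Resource Monitor's graphs -- substitute your own figures.
readings = {
    "CPU": 45.0,
    "Memory": 62.0,
    "Disk": 97.0,
    "Network": 18.0,
}

def likely_bottleneck(utilisation: dict) -> str:
    """Return the resource category closest to 'maxing out'."""
    return max(utilisation, key=utilisation.get)

print(likely_bottleneck(readings))  # -> Disk
```

With the sample figures above, Disk is the obvious candidate, which is exactly the scenario the rest of this article walks through.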
SO, now that you have identified the slowest part of your ‘compute environment’ (continue reading for more details), what can you do to improve it?
The traditional approach to solving computer performance issues has been to throw hardware at the problem. This could mean treating yourself to a new laptop, putting more RAM into your workstation, or, at the more extreme end, buying new servers or expensive storage solutions.
BUT, how do you know when it is appropriate to spend money on new or additional hardware, and when it isn’t? Well, the answer is: it isn’t appropriate when you can already get the performance that you need from the hardware infrastructure you have bought and paid for. You wouldn’t replace your car just because it needed a service, would you?
Let’s take disk speed as an example, and look at the Response Time column in Resource Monitor. Make sure the window is maximised, or at least large enough to see the data, then expand the Disk Activity section so you can see the Response Time column. Do it now on the computer you’re using to read this. (You didn’t close Resource Monitor yet, did you?) This column shows the Disk Response Time or, put another way, how long the storage is taking to read and write data. Of course, slower disk speed means slower performance, but what counts as a good response time, and what counts as a bad one?
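Under the hood, that figure is just the average time the disk takes to service each I/O request: total time spent servicing reads and writes, divided by the number of requests completed. A rough sketch of the arithmetic, using made-up counter values (the kind of raw data a monitoring tool aggregates for you):

```python
# Hypothetical cumulative counters sampled over a short interval.
total_io_time_ms = 1800.0   # time spent servicing reads and writes, in ms
io_operations = 150         # read/write requests completed in that interval

# Average disk response time: time per I/O request.
avg_response_time_ms = total_io_time_ms / io_operations
print(avg_response_time_ms)  # -> 12.0 (milliseconds per request)
```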
To answer that question, I will refer to a great blog post by Scott Lowe, that you can read here…
In it, the author perfectly describes what to expect from faster and slower Disk Response Times:
“Response Time (ms). Disk response time in milliseconds. For this metric, a lower number is definitely better; in general, anything less than 10 ms is considered good performance. If you occasionally go beyond 10 ms, you should be okay, but if the system is consistently waiting more than 20 ms for response from the storage, then you may have a problem that needs attention, and it’s likely that users will notice performance degradation. At 50 ms and greater, the problem is serious.”
Hopefully, when you checked on your computer, the Disk Response Time was below 20 milliseconds. BUT, what about those other workloads that you were thinking about earlier? What are the Disk Response Times on that busy SQL server, the CRM or BI platform, or those Windows servers that the users complain about?
If the Disk Response Times are often higher than 20 milliseconds, and you need to improve the performance, then it’s choice time and there are basically two options:
• In my opinion as an IT Engineer, the most sensible option is to use storage workload reduction software like Diskeeper for physical Windows computers, or V-locity for virtualised Windows computers. These will reduce Disk Response Times by allowing a good percentage of the data that your applications need to read to come from a RAM cache, rather than from slower disk storage. This works because RAM is much faster than the media in your disk storage. Best of all, the only thing you need to do to try it is download a free copy of the 30-day trial. You don’t even have to reboot the computer; just check whether it is able to bring the Disk Response Times down for the workloads that you care about the most.
• If you have tried the Diskeeper or V-locity software and you STILL need faster disk access, then, I’m afraid, it’s time to start getting quotations for new hardware. It does make sense, though, to take a couple of minutes to install Diskeeper or V-locity first, to see if this step can be avoided. Removing storage inefficiencies in software is typically a much more cost-effective solution than having to buy hardware!
Visit www.condusiv.com/try to download Diskeeper and V-locity now, for your free trial.