The Two I/O Taxes
The effect of virtualization on throughput
“As great as virtualization has been for server efficiency and cost savings, the downside is that it really does add complexity to the data path and adds latency as well,” said Jennifer Joyce, VP of Sales, North America and LATAM.
In this short technical briefing on the two I/O taxes in a virtual environment and their effects on throughput, we will cover unhealthy and healthy I/O profiles.
An unhealthy I/O profile is split, small, and random. It is detrimental to performance in Windows environments.
You want a healthy, optimal I/O profile: contiguous, large, sequential I/O, because that is how you can achieve a 30-40% improvement in throughput.
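To make the contrast concrete, here is a minimal sketch (not from the briefing; the 4 KB and 1 MB block sizes are hypothetical examples) showing how many more operations the same payload costs when it is issued as small, split I/Os rather than large, sequential ones:

```python
# Illustrative sketch: moving the same 64 MB payload with small, split
# I/Os versus large, sequential I/Os. Block sizes are assumed examples.

PAYLOAD = 64 * 1024 * 1024  # 64 MB of data to move


def ops_required(payload_bytes: int, io_size: int) -> int:
    """Number of I/O operations needed to transfer the payload."""
    return payload_bytes // io_size


unhealthy_ops = ops_required(PAYLOAD, 4 * 1024)     # split, small: 4 KB I/Os
healthy_ops = ops_required(PAYLOAD, 1024 * 1024)    # contiguous, large: 1 MB I/Os

print(f"4 KB I/Os needed:    {unhealthy_ops}")      # 16384 operations
print(f"1 MB I/Os needed:    {healthy_ops}")        # 64 operations
print(f"Operation overhead:  {unhealthy_ops // healthy_ops}x")  # 256x
```

Every one of those extra operations carries its own per-I/O overhead down the data path, which is why the same body of data can take dramatically longer to move when the profile is small and fractured.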
Watch the short technical briefing on the two I/O taxes.
[Or read the transcript below]
“Really interesting stuff coming up here. So, this is really a profile of what we’re looking at for the unhealthy I/O: it’s split, it’s small, it’s random, and this is really not what we want. As great as virtualization has been for server efficiency, and for bringing in the VDI space, one of the biggest downsides to virtualization is that it really has added a lot of complexity to the data path, and this is what the I/O stream ends up looking like.
“So, there are two severe I/O inefficiencies contributing to this.
“The first one is kind of what we’ve been pounding on for the first half of the webinar, what we refer to as the “Windows I/O tax”. It really is Windows creating an I/O characteristic that’s much smaller, more fractured, more random than it needs to be. And it’s kind of the perfect trifecta for bad storage performance. It’s this death-by-1,000-cuts scenario kind of eating away at the environment. Again, as we covered, what can you do about it? Throw more hardware at it, or optimize the image, which we all know really can’t be done. You’re not going to go rewrite code, and you’ve already got all the applications in there because you think you need them. So, the only other thing in the image left to optimize is the Windows OS. And, you know, we do a lot of the over-provisioning, all the excess IOPS we talked about earlier, focusing on I/O response time, which are two of the fallacies. We can actually get a lot more out of the Maserati that we put in there.
“Now, the other thing that we want to look at is called the “I/O blender effect”. This is that second I/O penalty also, aka, I/O contention. We all know what this is. It means that the workload from VDI Client #1 or Server #1 is being impacted by the workload from Client or Server #37, whether they’re related or not, they are sharing the same hardware. And, even if they’re not on the same Host, they’re sharing the same backend storage. That can be really tough.
“One of our most common use cases is SQL. I’ll give you a real quick example of SQL. We did a proof of concept very recently on an environment that was 6 Hosts with 120 SQL Servers. They deployed to 1 Server, and it looked just like this: they were missing an SLA with that client every month, and it was costing them a lot of money every month in penalties credited back to that client’s billing.
“And we told them it’s not going to work, we’re 99% sure it won’t work because of the I/O blender effect, so they did it anyway. The next month, it didn’t work. The next month, we begged them, “please give us as many of your heaviest I/O Servers in this cluster as you can.” They gave us 10. They made the SLA by three minutes for the first time in over a year. The next month, the software was removed, and they missed the SLA by five minutes. Then, in the fourth month, they put us on 119 of 120 Servers and they made the SLA by 17 minutes. It was phenomenal.
“So we also were able to verify in their change control process that V-locity was the only change being made.
“The reason I bring this example up is because it is a textbook, classic case of the I/O blender effect getting in the way. So like I said, as great as virtualization has been for efficiency and cost savings, it really does add a lot of complexity to the data path and it adds latency as well.
“This is what healthy I/O should look like. We get this optimum I/O profile, we get contiguous, large, sequential I/O, and this is what’s going to give us that 30% to 40% improvement in throughput of the overall body of data.
“So let’s take a look at how V-locity does this and what it looks like.”
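The I/O blender effect described in the transcript can be sketched with a hypothetical simulation: each VM issues a perfectly sequential stream within its own region of the disk, but once the hypervisor interleaves those streams, the pattern the shared storage sees is no longer sequential at all. The three-VM setup and block ranges below are illustrative assumptions, not measurements from the webinar.

```python
# Hypothetical "I/O blender" simulation: sequential per-VM streams
# become a random-looking pattern once interleaved at shared storage.

def sequential_fraction(lbas):
    """Fraction of requests whose block address follows the previous one."""
    return sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1) / (len(lbas) - 1)


# Three VMs, each reading 100 consecutive blocks in its own disk region.
vm_streams = [list(range(base, base + 100)) for base in (0, 10_000, 20_000)]

# Per-VM view: the stream is perfectly sequential.
print(sequential_fraction(vm_streams[0]))  # 1.0

# Storage view: a round-robin interleave of all three streams.
blended = [lba for trio in zip(*vm_streams) for lba in trio]
print(sequential_fraction(blended))  # 0.0 — every request jumps regions
```

Each workload is well-behaved on its own; it is the sharing of the same backend storage that destroys sequentiality, which is the contention the transcript attributes to Client #1 being impacted by Client #37.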
The above video is a short excerpt from the full webinar available here.
More resources to resolve the two I/O taxes in virtual environments
Article: I/Os Are Not Created Equal – Random I/O versus Sequential I/O
Case Study: Telestream solves timeouts and slows on I/O intensive applications
Article: Myriad of Windows Performance Problems Traced to a Single Source
Article: How To Get The Most Out Of Your Flash Storage Or Move To Cloud
Video: The Two I/O Fallacies Surrounding IOPS and I/O Response Time
Webinar: Two I/O Myths Killing 40% of Your Throughput
Do Your Windows Servers Have an I/O Performance Problem?
Find out by running the free Condusiv I/O Assessment Tool. It gathers 11 key performance metrics and performs statistical analyses looking for potential problems. It then displays these metrics to help you understand where potential bottlenecks might be.
Download it here