I/O Acceleration Whiteboard Session

How Can I/O Acceleration Software Guarantee to Solve the Toughest Performance Problems?

“Throwing a new all-flash array or new hybrid array at a performance problem ends up being the most expensive and disruptive way to solve it, when all you have to do is the same thing thousands of our customers have done: simply try our I/O acceleration software on any Windows server and watch the application run at least 50% faster, and in many cases 2X-10X faster,” Brian Morin, technology expert

Description:

We have received many requests from customers for a whiteboard video that succinctly explains the two silent killers of VM performance and how our I/O acceleration software guarantees to solve performance problems so applications run perfectly on every Windows server. So, here it is!

Expensive backend storage upgrades should ONLY take place when you need more capacity – not more performance. Anytime we tell someone our I/O acceleration software guarantees to solve their toughest performance problems, the very first response is invariably the same: HOW? Not only have we answered this question hundreds of times, but our own customers also find themselves answering it repeatedly for other team members and new hires.

To make this easier, we’ve answered it all in this 10-minute whiteboard video (link above).

[Or read the transcript below]

“Hi, my name is Brian Morin and I’m here to give you a whiteboard overview of our I/O acceleration software that guarantees to solve the toughest performance problems in your environment. Let’s start with the data center view. In every data center you have the same basic hardware layers: a compute layer, a network layer, and a storage layer. And the point of having this hardware infrastructure is not just to house your data, but to create this magical performance threshold that runs your applications smoothly so your business can run smoothly.

“As long as your hardware doesn’t crash, and God forbid you lose any data, and as long as all your applications fit nice and neatly inside this performance boundary, well then, life is good and you don’t really need to be watching this video. But the reason we have thousands of organizations, some of them the largest organizations in the world, who have deployed our I/O acceleration software to their Windows virtual or physical servers is that in every organization there are always, and I mean always, one or two applications that are the ugliest babies in the business, that are pushing the performance boundaries and thresholds your architecture is capable of delivering, that simply need more performance.

“We oftentimes see these as applications running on SQL or Oracle; it could be Exchange or SharePoint. We see a lot of file servers, web servers, imaging applications, backup. It could be one of the acronyms: VDI, BI, CRM, ERP, you name it. As soon as you have an application that is testing the I/O ceilings in your environment, what happens under peak load? Applications get sluggish, users start complaining, back-office batch jobs take far too long, the same for backups. Users run certain reports and get frustrated, and now you’re getting all this pressure from outside of IT to jump in and solve this problem.

“Typically, what do most IT professionals at least think they have to do? They think they have to throw more hardware at the problem to fix it. This means adding more servers, and that means throwing more storage at it: an expensive all-flash array to support this particular application, maybe a hybrid array over here to support that application. Ultimately, this ends up being a very, very expensive, not to mention disruptive, way to solve performance problems when all you have to do is install our I/O acceleration software right here, which guarantees to solve your toughest application performance challenge on your existing hardware infrastructure.

“Now, as I mentioned, we have thousands of organizations using this platform, some of them the largest organizations in the world, and most of the time they’re seeing at least a 50% boost in application performance; many of them see far more than that. In fact, if you were to visit our website condusiv.com, it’s littered with case studies, all of them citing at least a doubling in performance. It’s the reason Gartner named us a Cool Vendor when we brought this technology to market four years ago. Now you may look at this and wonder how a 100% software approach can have this kind of impact on performance. To understand that, we have to take a magnifying glass, zoom in, and get under the hood of this technology stack to see the severe I/O inefficiencies that are robbing you of the performance you paid for, which brings us to over here.

“So as great as virtualization has been for server efficiency, the one downside is how it adds complexity to the data path. Voila: the I/O blender effect that mixes and randomizes the I/O streams from all of the disparate VMs sitting on the same host hypervisor. And if this performance penalty weren’t bad enough, up here you have an even worse performance penalty with Windows on a virtual machine. Windows doesn’t play well in a virtual environment; it doesn’t play well in any environment where it’s abstracted from the physical layer. So, what this means is you end up with I/O characteristics that are far smaller, more fractured, and more random than they need to be. It’s the perfect trifecta for bad storage performance.

“Now, what you want is an I/O profile that looks something like this, where you’re getting nice, clean, contiguous writes and reads. You’re getting a nice healthy relationship between I/O and data. You’re getting maximum payload with every I/O operation, and look at the sequential manner of your traffic. In a virtual environment running a Windows machine, this is not what you get; instead, what you get is something that looks like this. You get many small, tiny reads and writes. All of this means you generate far more I/O than is needed to process any given workload, and it creates a death-by-a-thousand-cuts scenario. It’s like pouring molasses on your systems. Your hardware infrastructure is processing workloads about 50% slower than it should. In fact, for some of our customers it’s far worse than that. For some of our customers, their applications barely run; their users can barely use the application because they’re timing out so quickly from the I/O demand.
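
To make the payload difference concrete, here is a minimal Python sketch of our own (not part of the product) that writes the same 64 MiB payload twice: once as thousands of tiny 4 KiB writes and once as a handful of 1 MiB writes. The file names and block sizes are arbitrary assumptions for illustration; on most systems the small-block run takes noticeably longer, which is exactly the “far more I/O than needed” penalty described above.

```python
import os
import time

PAYLOAD = 64 * 1024 * 1024      # total payload: 64 MiB
SMALL_IO = 4 * 1024             # 4 KiB per write (fractured I/O)
LARGE_IO = 1024 * 1024          # 1 MiB per write (contiguous I/O)

def timed_write(path, block_size):
    """Write PAYLOAD bytes in block_size chunks with no Python-side buffering."""
    block = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(PAYLOAD // block_size):
            f.write(block)
        os.fsync(f.fileno())    # force the data all the way to storage
    return time.perf_counter() - start

small = timed_write("io_demo_small.bin", SMALL_IO)
large = timed_write("io_demo_large.bin", LARGE_IO)
print(f"{PAYLOAD // SMALL_IO:>5} x 4 KiB writes: {small:.2f}s")
print(f"{PAYLOAD // LARGE_IO:>5} x 1 MiB writes: {large:.2f}s")

for p in ("io_demo_small.bin", "io_demo_large.bin"):
    os.remove(p)                # clean up the demo files
```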

“So, our patented I/O acceleration software solves this problem, and we solve it in two ways. The first is right up here within the Windows file system: we add a layer of intelligence into the Windows OS in the form of a very thin file system driver with near-zero overhead. It would be difficult for you to even see the CPU footprint. What we’re doing is eliminating all these really small, tiny writes and reads that are chewing up your performance and displacing them with nice, clean, contiguous writes and reads. So now you’re getting back to having a very healthy relationship between your I/O and data. Now you’re getting that maximum payload with every I/O operation. Look at what’s happening with the sequential nature of your traffic, particularly down here where it matters the most. So, this engine all by itself has a huge impact for our customers, but it’s not the only thing we do.
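
The actual engine is a kernel-mode file system driver, so the following user-space Python sketch is only a conceptual illustration of the idea, not the product’s implementation: it buffers many tiny writes and issues them to the file as one large contiguous write. The class name and the 1 MiB flush threshold are assumptions made up for this demo.

```python
class CoalescingWriter:
    """Illustrative only: merges many small writes into one contiguous write."""

    def __init__(self, f, flush_threshold=1024 * 1024):
        self.f = f                       # underlying file object
        self.flush_threshold = flush_threshold
        self.chunks = []                 # pending small writes
        self.pending = 0                 # bytes buffered so far

    def write(self, data: bytes):
        self.chunks.append(data)
        self.pending += len(data)
        if self.pending >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.chunks:
            self.f.write(b"".join(self.chunks))  # one contiguous write
            self.chunks.clear()
            self.pending = 0

with open("coalesced.bin", "wb") as f:
    w = CoalescingWriter(f)
    for _ in range(100_000):
        w.write(b"x" * 64)               # many tiny 64-byte writes
    w.flush()                            # flush whatever is left buffered
```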

“The second thing we do to improve overall performance is establish a Tier-0 caching strategy using our DRAM caching engine, the same caching engine we’ve OEM’d to some of the biggest OEM players in the marketplace. All we’re doing is taking the idle, available DRAM already committed to these VMs, sitting there not being used, and putting it to good use. Now, the real genius behind this engine is that it’s completely automatic. Nothing has to be allocated for cache. The software is aware moment by moment of how much memory is unused and only uses that portion to serve reads. That means you never get any kind of memory contention or resource starvation. If you have a system that’s under-provisioned and memory constrained, the engine will back off completely.
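
Here is a rough Python sketch of that memory-aware idea, assuming the third-party psutil library and a made-up 4 GiB safety floor; it is not the product’s engine, just an illustration of a read cache that only grows while the host has idle RAM and evicts when memory gets tight.

```python
from collections import OrderedDict
import psutil  # third-party: pip install psutil

MEMORY_FLOOR = 4 * 1024**3      # assumption: leave at least 4 GiB untouched

class MemoryAwareReadCache:
    """Caches reads only while the host has idle RAM above a safety floor."""

    def __init__(self):
        self.cache = OrderedDict()          # LRU order: oldest entry first

    def _room_to_cache(self):
        return psutil.virtual_memory().available > MEMORY_FLOOR

    def get(self, key, read_from_storage):
        if key in self.cache:               # cache hit: no storage I/O at all
            self.cache.move_to_end(key)
            return self.cache[key]
        data = read_from_storage(key)       # cache miss: go down to storage
        if self._room_to_cache():           # only grow while RAM is truly idle
            self.cache[key] = data
        else:                               # memory pressure: back off, evict
            while self.cache and not self._room_to_cache():
                self.cache.popitem(last=False)
        return data

# Hypothetical usage: the lambda stands in for a real storage read.
cache = MemoryAwareReadCache()
block = cache.get("block-42", lambda key: b"bytes read from disk")
```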

“But when you look at this, both engines, one optimizing writes, the other optimizing reads, you may wonder what this all really means, and I’m going to share typical results in a minute. But honestly, the best way is to simply install the software. Try it for yourself on a virtual or physical server and see what happens. Let it run for a few days, and then pull up our built-in time saved dashboard, where you can see the amount of I/O we’re offloading from your underlying storage device and, more importantly, how much time that’s actually saving that individual system. Now, some of our customers might do a before-and-after stopwatch test, or use their storage UI to get a baseline of workloads beforehand so they can see what happens. But really, all you have to do is install the software, experience the performance, and then pull up that time saved dashboard, which communicates the benefit that means the most to your business: time saved.

“Now, as far as typical results, we’re going to show you a screenshot that represents a median of the typical results you can expect. On this screenshot, you see the number of I/Os we’re removing, but take a look at the percentages right there in the top middle. You’re seeing 50% of reads being served out of DRAM, meaning they’re offloaded from going down to storage, and on the right side, you’re seeing 30% of write I/O being eliminated. In this typical median system, that saves 5.5 hours of I/O time over two weeks of testing. Obviously, the I/O time saved is going to be relative to the intensity of the workload. There are some systems with our software that are saving 5.5 hours in a single day. ASL Marketing had a SQL import batch job that was taking 27 hours. We cut it down to 12 hours; talk about huge single-day time savings.
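
To show how such a time-saved figure can be estimated from offload counters, here is a back-of-the-envelope Python sketch; the I/O counts and per-I/O latencies below are invented for illustration and are not figures from the product dashboard.

```python
# Invented example numbers: reads answered from DRAM and writes coalesced away
reads_served_from_dram = 40_000_000
writes_eliminated = 10_000_000

storage_read_latency_s = 0.000300    # assumed 300 µs per read I/O
storage_write_latency_s = 0.000500   # assumed 500 µs per write I/O

saved_s = (reads_served_from_dram * storage_read_latency_s
           + writes_eliminated * storage_write_latency_s)
print(f"Estimated I/O time saved: {saved_s / 3600:.1f} hours")  # ~4.7 hours
```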

“Now, you may wonder what is really the sweet spot where you can get optimum performance. Well, we have found that if a customer can maintain at least 4 GB of available DRAM that our software can leverage for cache, then, give or take, on average you’re going to see 50% of reads being served. What does that mean? It means essentially this: you have just eliminated 50% of the read traffic going down to your storage device, you’ve opened up all of this precious throughput from the very expensive architecture you paid for, and you’re serving a large part of your traffic from storage media that’s faster and more expensive than anything else, roughly 15 times faster than an SSD and sitting closer to the processor than anything else.
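
A quick Python sketch shows why a 50% DRAM hit rate matters, taking the transcript’s “15 times faster than an SSD” figure at face value; the 400 µs SSD read latency is an assumption chosen only for illustration.

```python
ssd_latency_us = 400.0                   # assumed SSD read latency
dram_latency_us = ssd_latency_us / 15    # "15 times faster than an SSD"
hit_rate = 0.50                          # 50% of reads served from DRAM

effective = hit_rate * dram_latency_us + (1 - hit_rate) * ssd_latency_us
print(f"Effective read latency: {effective:.0f} µs vs {ssd_latency_us:.0f} µs")
# -> ~213 µs, nearly half, before counting the storage throughput freed up
```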

“Now, if you can crank up those 4 GB to something even larger, clearly you’re going to get a higher cache hit rate. A good example is the University of Illinois. Their hardest-hitting application sat on an Oracle database supported by a very expensive all-flash array. It still wasn’t getting enough performance because so many users were accessing that system. They installed our software and saw 10X performance gains. That’s because they were able to leverage a good amount of that DRAM, and we were able to eliminate all these small, tiny reads and writes that were chewing up their performance.

“Now, if you have a physical server: physical servers are typically over-provisioned from a memory standpoint, so you have more available memory to work with, and you’ll see huge gains on a physical server. All of this happens automatically. It all runs transparently in the background. The software is literally set-and-forget, running at near-zero overhead. You may wonder what some typical use cases or customer examples are. You can go to our website right here for all the bedtime reading material you need: examples of customers where we saved them millions of dollars in new hardware upgrades, helped them extend the life and longevity of their existing hardware infrastructure, doubled performance, tripled SQL queries, cut backup times in half, you name it.

“One final thing I should mention is that the software can be self-served and evaluated on your own on one virtual server and one physical server. However, in a virtual environment, you will see far better performance gains if you evaluate the software on all the VMs sitting on the same host hypervisor. This has to do with the I/O blender effect and noisy neighbor issues. If that’s the case, you may want to contact us about our centralized management console, which makes deploying to many servers at once easy. It’s going to take less than 30 minutes. It’s that simple. We look forward to chatting with you soon. Thank you.”

Additional resources for I/O acceleration:

Article: I/Os Are Not Created Equal – Random I/O versus Sequential I/O

Case Study: Telestream solves timeouts and slows on I/O intensive applications

Article: Myriad of Windows Performance Problems Traced to a Single Source

Article: How To Get The Most Out Of Your Flash Storage Or Move To Cloud

Video: The Two I/O Fallacies Surrounding IOPS and I/O Response Time

Webinar: Two I/O Myths Killing 40% of Your Throughput

Do Your Windows Servers Have an I/O Performance Problem?

Find out by running the free Condusiv I/O Assessment Tool. It gathers 11 key performance metrics and performs statistical analyses to look for potential problems. It then displays these metrics to help you understand where potential bottlenecks might be.

Download it here