40% More Windows Throughput

Two I/O Secrets for 40% More Throughput on Your Windows Systems (Cloud/Physical/Virtual)

“It’s not the hardware’s fault. The hardware is working just fine. It’s Windows. And V-locity (or DymaxIO) fixes the problem right at the source, INSIDE of Windows. Most people don’t think about that because we are all geared to the idea of ‘you want to improve performance, well, you change the hardware.’ Well, as Jennifer says, I’m a race car driver and I dig into the weeds and I try to figure out how I can literally out-perform my competitor by taking advantage of certain things under the hood. So, that’s a way of looking at where V-locity (or DymaxIO) is providing benefit,” Howard Butler, Senior Director Field Engineering.

Featuring:

Howard Butler, Senior Director Field Engineering
Jennifer Joyce, VP Sales, North America and LATAM

November 19, 2020

Description:

Hear best practices to fix Windows performance problems, such as application slows and freezes, at the source.

The webinar is divided into 2 parts:

Part One, the Executive Briefing: Real-world use cases showing how Windows is Windows, no matter the underlying storage, and how it’s killing your throughput. 3rd-party benchmarks show I/O transformation regains 30-40% of your “stolen” throughput.

Part Two, the Technical Briefing: A deep dive into the two severe I/O inefficiencies and how to tune Windows for 40% faster data transfer rates.

Watch Now!

[Transcript]

00:05 Jennifer Joyce: Hey, everyone, and thank you for joining us today. My name is Jennifer Joyce, and I am the Vice President of North American sales for Condusiv. And here with me today, I’ve got my partner-in-crime, Howard Butler. He is the Senior Director of Systems Engineering. A little bit about Howard, he is a 30+ year veteran of our company, and he is also an expert in the inner workings of the Windows Operating System. One other thing about Howard, too, is that he’s also a race car driver and instructor, so he specializes not only in helping computers go as fast as possible, but also cars. So, if you ever get a chance to chat him up on that, definitely do that. Hey, Howard, how are you?

00:43 Howard Butler: Hey, Jennifer, thanks very much. Really glad to be here, although you stole my thunder. I was going to ask the audience to see if they could figure out which of us is really the true race car driver. But anyway, glad to be here, Jennifer. One quick housekeeping note is that, guys, we do want to make this an interactive webcast, so feel free to drop in your questions into the Q&A box. And as we go along, we’ll answer them on the fly or towards the end, during our Q&A session. Jennifer, back over to you.

01:21 JJ: Okay, awesome. Thanks, Howard. One other housekeeping item real quick is that we’re gonna divide today’s webinar into two parts. The first part we’re gonna treat as an executive briefing. We’re gonna cover real world use cases benchmarked with third-party tools, and you can… I’ll just go ahead and advance us here, you can see some of the use cases we’re gonna be touching on. Some of the environments may overlap some of the tech you guys have there. And in the second part, we’re actually gonna get into the technical briefing, which is really gonna be the deep dive under the hood of exactly how we’re doing what we do. We’re gonna hop into really focusing on two facets, two secrets that are gonna help you gain 40% faster throughput from your Windows Operating System and the whole environment sitting on the hardware that you already have.

02:04 JJ: Just a little bit about us; it’s always good to know who you’re talking to. I’m not gonna spend much time on this, but it is good to know: we are a 39-year-old software company. We are actually the 12th oldest software company in the world. And I also wanna just call out our partnerships: Microsoft Gold Partner, VMware TAP Partner. We also have a SQL Server I/O Reliability certification, which is really hard to get, usually reserved only for the really big hardware vendors. We are the only software vendor with that certification. In fact, Howard, you were on the team that worked with Microsoft to get us that certification. Correct?

02:41 HB: Yeah, that’s exactly correct. And I think it’s a pretty big feather in our cap to be amongst the awesome group of hardware vendors that were selected to participate, and what we were bringing to the table is something that we’ll talk about during the presentation today.

03:00 JJ: Yeah. And we’ll also talk a little bit about one of the use cases here with our Citrix Ready certification. Now you can see on the bottom of the slide, our software publications. We’re here today talking about V-locity, and the next generation of both our V-locity and Diskeeper platforms is going to be migrating into a new release called DymaxIO, which is on the cusp of coming out. We’re just gonna be focusing on V-locity today. And some of you may be familiar with our Undelete product, software data recovery. We will hop right into what we’re talking about today, and really what we are looking at is the fact that… Windows is what we’re talking about. Windows is everywhere. You’ve had this advent of contexts that Windows can run in, and we’re obviously accelerating very much right now into the cloud and hyperconverged. We’ve had a lot of on-prem private cloud going for quite a while. VDI is really coming into its own as well.

03:54 JJ: The main thing that all of this has in common is the Windows Operating System. Where it’s being run, where it’s being hosted is almost immaterial. What we wanna get though is the fastest performance that we possibly can out of the hardware or the platform that we’ve chosen to host our Windows in. And that’s what we’re here to do. Because one thing that we can do is we can look at any kind of an environment and we can look at every piece of it and try to make every single piece go faster.

04:22 JJ: Now, one of the things that a lot of people lose sight of is the fact that the Windows Operating System is in the image. This story came from one of our customers I was just speaking to recently. They have us deployed on 400 servers throughout their enterprise, and we were working with them on their VDI environment. It’s a hospital, and they’ve got 4,000 clients in their hospital network. And they were getting all of these metrics back saying that their performance could be a lot better. And so they went to the software vendor that publishes the application this was running on, and they said, “How can we handle this without adding more hardware?” Ironically, the answer back was, “Well, you could add more hardware.” And then they also got the answer, “Assess everything in the image.” And as I was talking with the client about this, he said, “Well, the Windows OS is in the image.” And we both started laughing, ’cause he knows what we were doing on their servers. And our product accelerates throughput by a minimum of 30% to 40%, just by directly optimizing the Windows Operating System. And most people don’t optimize the Windows Operating System simply because they don’t know it can be done. So, that’s what we’re gonna talk about.

05:26 JJ: I’m gonna hop into the first use case here real quick. And when we’re looking at this first use case, this is actually a piece of the testing that we did for our Citrix Ready certification. And you can see here that the data transaction rate increased by 90% and we had 60% more workload processed in the same amount of time, and we were using a third-party tool to do this benchmarking. This was done with Intel’s Iometer tool, and so these metrics are directly from that third-party tool. So, that’s pretty impressive when we look at that in isolation. But what we wanna talk about are two other concepts that I think are really important, and I like to call these the two I/O myths or the two I/O fallacies.

06:07 JJ: The first one is the IOPS Fallacy. This is the idea that, “Hey, I’ve got plenty of IOPS to handle the workload.” In fact, where this came to me one day was I was working with a client. They had come to us, and they literally had an environment with an all-flash Pure SAN and 11 physical servers attached to it. They didn’t even have any VMs. And this was so unusual that I remember the customer had to correct me 12 times when I kept referring to them as VMs. It’s like, “No, no, they’re physical. There’s no hypervisor.” So, they have these 11 physical servers attached to this all-flash Pure SAN that can push 600,000 IOPS, and they’re barely scratching the surface of that; they’re using maybe 3% of that IOPS capacity, and they’re still missing their SLAs. And these SLAs were very time-sensitive. They’re in healthcare; things had to run at midnight and be done by 5:00 AM. They were running to 7:00, 8:00 AM.

07:00 JJ: They threw our software on, and it handled it. And they couldn’t understand why, with all that hardware horsepower, it was our software that got them the throughput to make their SLA. That was a really interesting situation, and they asked me about that. And so one of the things I pointed out to them is that we really do have this workload drag, this performance drag, from the Windows Operating System itself that is completely independent of how the hardware behaves. And the reason for this is because of split, small, random I/O patterns generated by, guess what? The Windows OS. And everything else is always having to compensate and over-work because of the design of the pattern of data that Windows is sending out.

07:44 JJ: Keep in mind that sequential I/O always outperforms random I/O, and we’re only using a small percentage of that I/O capacity at any one time. What we wanna focus on is not that part, the headroom, which gives us a false sense of security that we’ve got all the room in the world to process as much and as fast as we can. What we really need to focus on is the work being done. A great example for this is, if I walk into a room, I’m 5’3″, I walk into a room, and it’s really crowded, it’s got 20-foot ceilings, and someone says, “Your job is to get from one side of the room to the other, and don’t touch anybody.” And I can’t do that, there’s elbows everywhere. But I look up and there’s 15 feet of space above me. It’s not Mission Impossible, I don’t have harnesses and wires, so I can’t use that headroom to get across that space. I’m gonna have to navigate through the five to six feet of space that’s actually being used. So, one solution is, how do we optimize that space that’s actually being used? That’s what we come in and do. That’s the first I/O fallacy.
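
To put illustrative numbers on that fallacy in Python: the 600,000 IOPS ceiling and ~3% utilization come from the story above, while the 4 KB and 64 KB I/O sizes are assumptions picked just for the example.

```python
# Illustrative only: why "plenty of IOPS headroom" doesn't guarantee
# throughput. The array's ceiling is huge, but tiny split I/Os cap how
# much data the workload actually moves per second.
array_iops_ceiling = 600_000                    # all-flash SAN spec from the story
workload_iops = int(array_iops_ceiling * 0.03)  # ~3% utilization = 18,000 IOPS

small_io_kb = 4    # split, random I/O (assumed size)
large_io_kb = 64   # contiguous, sequential I/O (assumed size)

print(workload_iops * small_io_kb / 1024)   # ~70 MB/s moved with 4 KB I/Os
print(workload_iops * large_io_kb / 1024)   # ~1,125 MB/s at the very same IOPS
                                            # with 64 KB I/Os
```

Same IOPS, sixteen times the data moved: the headroom was never the constraint; the shape of the I/O was.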

08:43 JJ: The second I/O fallacy is what I like to call the I/O Response Time Fallacy. I/O response time can be really misleading. The myth on this is that the faster the I/O response time is, the better. Now, that’s a true statement, to a very large degree. Let me just… Work with me here, and let me just drop that into the context of what I’m talking about. The reality is that each individual I/O transfers at different speeds based on how big it is. If you’ve got less KB per I/O, that individual I/O will go faster. If you’ve got more KB in an I/O, it will go slower. Just bigger payload, a little bit longer to transfer. So, when we keep that in mind, we come back to how is the I/O structured and how is it being issued from the source, from the Windows Operating System, which is right next to the application.

09:29 JJ: We’re still talking all logical here; we haven’t gotten down to hardware yet. What we’re looking at is split versus contiguous I/O, and random versus sequential I/O. Contiguous and sequential outperform every time. So, if instead of sending out split and random I/O we can get Windows to send contiguous I/O that will be more sequential than random, that’s gonna be the place that we wanna live. Really, what we’re looking at in this truth section is that individual I/O response time has perhaps been over-prioritized. It’s important, and sub-millisecond I/O response times will get you there all day long, but we can lose sight of the nature of the data that is being transferred. You’re gonna get faster performance if you can get rid of the small split random I/O and move to that contiguous, larger, sequential I/O.
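
To make the response-time fallacy concrete with a toy calculation (the latencies here are invented for illustration, not benchmark data), a larger I/O can take several times longer to complete per I/O and still deliver several times the throughput:

```python
# Per-I/O response time vs. actual data throughput (illustrative numbers).
def throughput_mb_per_s(io_size_kb: float, response_time_ms: float) -> float:
    """Data moved per second if I/Os of this size complete back to back."""
    return (io_size_kb / 1024) / (response_time_ms / 1000)

print(throughput_mb_per_s(4, 0.2))    # ~19.5 MB/s: small I/O with a "fast" 0.2 ms
print(throughput_mb_per_s(64, 0.8))   # ~78.1 MB/s: 4x the latency per I/O,
                                      # yet 4x the throughput
```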

10:18 JJ: That’s the conversation we’re having today. And what I’d like to do right now is give you a break from how fast I talk, Howard talks a little slower than I do, and have him take you through this first use case from one of our hospitals. And this environment is actually a VMware Horizon 7 environment running VDI; you can see the specs here. Howard, let me go ahead and turn this over to you. And I find this really interesting, especially since this is tested with VMware’s vROps, another third-party tool.

10:51 HB: Sure, Jennifer, thanks very much. Guys, let’s take a quick look at what I/O transformation with V-locity can do. We’re just gonna go through these slides rather quickly, but feel free to toss some questions in there, in case anybody has a few things to comment about. The nice thing, as Jennifer mentioned, is that we can validate this with third-party tools, tools you probably already have in your environment, like vROps, which is what we used at this particular hospital. The orange lines represent the data measurements with V-locity. And we can see a pretty nice trend of improvement in write requests per second.

11:36 HB: Jennifer, let’s go on to the next slide. But, wait, take a look at this now. What’s up with the write latency? And this is exactly what Jennifer was talking about here, is that if this was the only metric that you looked at, you’d probably walk away thinking, “Gee, that V-locity product seems to be slowing everything down.” Like I said, this ties back to what Jennifer was talking about in terms of I/O response time myth. Now, let’s move forward and see what happens in the next metric. Whoa, whoa, whoa, whoa, tap the brakes and get set up for a late apex turn here, because the write rate just shoots through the roof once V-locity is enabled. Jennifer, do me the favor, if you can go back to the previous slide.

12:32 JJ: Sure.

12:35 HB: And again, take a look at the write latency. Sometimes the latency is perhaps up to three times longer in some cases. Now, let’s go back to the write rate, Jennifer. And here we can see that the rate of data transfers is improving anywhere between two to six times. Okay? This goes to show the power of focusing on the overall throughput of data and not over-focusing on individual I/O response time. Now, let’s go ahead and see what happens on the next slide.

13:17 HB: Okay. We can see something pretty similar here on read latency. It looks like it got a bit longer. Now, let’s take a look at the next slide. As I said, the read rate just kills it. It’s off the charts, in terms of the amount of data throughput that is occurring in the environment. And guys, that’s what we really tend to focus on, is how much data can you push or pump through the system?

13:48 HB: Let’s take a look at the next slide. And I just wanna throw in one more metric here, just in case it makes a difference to anybody there, and that is the disk usage. And I think this picture pretty much speaks for itself. And let me just mention here, it isn’t always about end user experience. Now naturally, of course, it does come down to that, but why do you have your existing virtual server or VDI density count set the way it is today? That is because of user experience. So, when you do install V-locity, it really wouldn’t be valid to go back and ask users, “Hey, did you notice anything going faster?” You’ve already managed that end user experience by scaling down your density today. Think of this: With V-locity installed, you can scale back up and still keep the end user experience at its optimum level, not to mention the fact that you’re able to keep existing hardware for an extra year or even longer, sweating that hardware asset before having to do any type of major upgrade.

15:09 HB: If we move on to the next slide, thanks very much, Jennifer. I did wanna touch on one more point here about how important this concept of I/O transformation is, and I will go into greater detail during our technical briefing portion of the event today. But just looking at it from a high level, and to sum it up, this is what we want to have happen to the write I/Os. We take those split, small, random I/Os and transform them into contiguous, larger, sequential I/O. That is the key in getting back 30% to 40% of your throughput. Think of this: your hardware should be able to perform faster than it is today, but with the way Windows is handling the data logically, it’s like Windows has its foot on the brake. You’re driving this hot rod Maserati; think of V-locity as pressing down on the accelerator.

16:13 HB: Now, let’s take a look at our third case. We’re not gonna spend a lot of time on this one, but it is another Horizon environment, running on an all-flash Pure Storage array, again tested with vROps. And these numbers are from a Fortune 500 company. In fact, it’s one of those household-name cable companies, and this was a sampling from their call center. Disk usage rate is definitely significantly better, more than double in most cases. And when we take a look at the commands per second, these also improved significantly, again more than double in most cases. When we take a look at the average number of write requests, we can see that, across the board, every system realized a significant increase in activity, in its data throughput. And the write rate in kilobytes per second, which shows the average number of kilobytes written to disk each second, shows the same thing across the board. This ties back to V-locity transforming those small split random I/Os and helping Windows generate larger, more contiguous and sequential data streams being sent to storage. And here in this last slide, we have the read rate in kilobytes per second, showing the average number of kilobytes read from each disk each second. And again we see a significant increase in activity and throughput.

18:11 HB: Okay. I’ve got two more cases to show you, and then we’ll get into the technical briefing portion. In this one we just wrapped up, one of our existing customers had us deployed on a tier one electronic medical records application that was isolated on its own cluster. And they just recently expanded us to all 200 servers in their enterprise cluster all at once. And I’ll just mention something there. That helps illustrate just how easy it is to deploy our software, either through our built-in management console or however you wanna package it up for use with SCCM or other types of deployment tools. Really, it’s just an MSI installer file that requires no reboot to install, so you can literally push it out on the fly.

19:08 HB: Back to this particular case. In this case, they deployed us to all 200 enterprise servers in one shot. It literally took us less than 30 minutes. And a global deployment like this is truly the only time you can measure at the storage hardware level. ’Cause if you just do one or two systems here and expect a miracle, you’re not really going to see that show up on the storage side of things, because all the other systems that are tied into that infrastructure are not being optimized. Again, when you do this universal type of deployment, hitting all Windows VMs at once, this is where you can really see the benefit of a universal I/O transformation. And we were pretty methodical about the testing. These are perfmon-style counters that we’ve listed here, and it really shows the measure that matters for performance, which is the throughput. And these were collected, again, using the HPE InfoSight tools, pointing directly at their Nimble storage array.

20:24 HB: So, let’s dig in under the hood a little bit and take a look at the numbers. And here we can see both the max and average values of one individual server, as benchmarked from the Nimble storage perspective. Nimble is a fantastic platform. Love them, worked with them for years, and they provide an incredible performance boost all on their own. Every customer we’ve talked to who uses Nimble simply loves their product and sees the added value that we bring on top of their hardware environment. And these numbers, you could be thinking, are just astronomical, a bit shocking. The reason we get this is because it’s not the hardware’s fault. The hardware is working just fine. It’s Windows. And V-locity fixes the problem right at the source, inside of Windows. Most people don’t think about that because we’re all geared and used to the idea of, “You wanna improve performance? Well, you change the hardware.” As Jennifer says, I’m a race car driver; I dig into the weeds and I try to figure out how I can literally outperform my competitor by taking advantage of certain things under the hood. That’s a way of looking at where V-locity is providing benefit.

21:51 HB: Now, if we look at this next slide here, we can take a quick look at a second server from the same test group. Again, these numbers speak for themselves. Okay, Jennifer? Now, let’s look at this last server case. And I think this is our fifth and final use case. This, again, was tested in a load-balanced Citrix terminal server kind of environment. The customer picked 10 systems to install V-locity on, and then a different 10 systems to be used as a control group. And the customer tested using vSphere and sent the data back to us so we could analyze it, compare their numbers, and see what was happening there, and we both concluded that there was a 32% improvement in reads and an 18% improvement in writes.

22:56 HB: This wraps up those use cases. I hope this information, guys, was really helpful for you. Jennifer, I’m gonna turn it back over to you now.

23:07 JJ: Great. Thank you, Howard. I really appreciate you taking us through all of that information.

23:10 HB: Sure.

23:13 JJ: Yeah, it was really good stuff. You guys are probably hearing a common theme here. A lot of the stuff that we hear from people is just better application performance and increasing density. The throughput, it’s all about the throughput, that’s really what it comes down to. And in SQL, we hear a lot about reduced timeouts and crashes. And Howard touched on extending the hardware life cycle as well, so there’s really a lot to be done here. And one of the really interesting ones, just at the top of the slide here, is CHRISTUS Health. In fact, I was just in touch with them this morning. They are fully deployed on almost 2,000 VMs throughout their data center, and they’re about to start deploying to their entire VDI cluster supporting their EMR, their medical operations. Really incredible. They had originally come in with performance issues and were looking at a $2 million forklift upgrade on some hardware. They threw us in first when they were looking for solutions; it fixed it within a week, and they were able to keep the existing hardware on the floor for another 18 months.

24:21 JJ: Now, they weren’t my account when that happened, and so when I finally started working with them directly, I met with them at VMworld, at dinner, a couple of years back, and I asked them, “Was that true?” And they said, “Every word of it.” So, that’s a published case study on our website. And we actually have about 20 case studies on our website, so if you wanna check that out, please do. But really, we hear it time and time again, just different results from people.

24:41 JJ: Now, how can you explore this in your own environment? We do offer a free proof of concept, where you can actually get a full-functioning 30-day trial, and we will do a pre-POC consultation with you, about a 15-minute call, to see if it’s a good fit or not. We also have a pre-assessment tool that takes about 10 minutes to set up, that you can run against your servers to see if they’re even good candidates. And then we’ll actually conduct the POC if it looks like your environment is a good candidate for the software, and then we’ll do a review with you after the fact. And there on the slide are the steps I just went through verbally.

25:15 JJ: We’ll hop right into the technical briefing now. What we wanna talk about now is, now that you know what we can do and the benefits we can provide, how do we do it? This is a really rudimentary abstraction of a virtual environment and… I love virtualization, as I’m sure we all do. It’s been incredible for cost savings and efficiencies. But there is a really big downside to virtualization, and that is that it does add a lot of complexity to the data path. This is what your I/O stream ends up looking like. Now, this is where the two really severe inefficiencies causing this come in. One is just the Windows I/O Tax, that behavior we’ve been referring to of Windows breaking files down into much smaller pieces than they need to be, giving you that small, split, randomized data pattern.

26:00 JJ: The second problem is the I/O Blender Effect, a term we coined with Gartner a while back, and you’ve probably heard it by now. That’s the randomization of data in this shared environment and the shared footprint. Now, the good news is that we can come in and actually prevent Windows from behaving like that going forward, and we’ll talk about that engine in just a moment. There are two key engines we’re gonna talk about. But one of the things that we get asked a lot is, “Hey, when you’re doing that, can I just do it on this one server?” Yes, you can, and that one server will be optimized and it may solve the issue. But it may not, and the reason is because of that I/O blender effect. Universal deployment would be your most ideal usage of V-locity. In fact, we make that really easy to do. You can license it by VM, you can license it by host, but we also offer a site license, which gives you universal deployment across the entire site, for any Windows asset, whether that’s physical or virtual, desktop or server, or VDI; it really doesn’t matter. You can really accomplish this with those licensing offers that we have.

26:55 JJ: One really interesting story that illustrates this perfectly: we have a customer in the financial services industry, and I’m just gonna back this up a couple of slides. They had this going on, and they had 120 SQL servers on a six-host cluster, where each server was for a different client. One of the SQL servers was supporting a particular customer, and they were missing this monthly SLA, and every time they missed this SLA, it cost them a $10,000 bill-back off of that customer’s invoice. It was a very expensive SLA to be missing.

27:30 JJ: So, they wanted to try that. They deployed just to the one server. We told them it wasn’t gonna work because of the I/O blender effect, “but go for it and then let’s expand later.” That’s exactly what happened; it did not make the SLA that month. The next month, they agreed to deploy to their 10 busiest servers. They actually made the SLA by three minutes that month. Then they uninstalled the software, and they missed the SLA again by five minutes. Expensive, got their attention, and so they agreed to go to all of the servers. And when they did that, they got onto 119 of the 120 servers. I don’t know what happened with the last one, but we didn’t get to it. They made the SLA by 17 minutes. Yes, that was a four-month POC, but they had crazy change control, and it was a perfect on-and-off experiment to really show the power of getting rid of that I/O blender effect. So, it’s really powerful.

28:16 JJ: One of the things that we wanna really consider here as we’re looking at this, is exactly where V-locity is sitting, and this is really important. We are sitting, that orange bar, right inside the Windows Operating System and we’re fenced by Windows. This is why we can be universal. This is why, at the very beginning, I was talking about whether it’s physical, private cloud, public cloud, wherever that Windows instance is sitting, it does not matter. This behavior is happening inside of Windows, so we install inside of Windows to fix it. So you can port us anywhere you want. That’s the first thing.

28:48 JJ: The second thing is that this is really the top of the stack. This is where the I/O originates before cascading downstream, and everything downstream has to overwork, burning those valuable resources. You’re probably not gonna use more than 10% of the IOPS on that big SAN, but how you’re using that 10% is really important. And coming back to that I/O transformation that Howard was talking about earlier, you wanna transform that I/O before everyone else downstream has to touch it. That’s where we’re sitting.

29:14 JJ: And then the last point on this slide is compatibility. Because we are fenced by Windows and we’re not trying to interact with any other level in the environment, we’re compatible with everything. Think about it this way. There isn’t a special flavor of Windows for Nutanix, or Windows for Dell, or Windows for Hyper-V, or Windows for VMware. It’s just Windows. And so everything has to be compatible with Windows, as are we, and that makes this compatible with the full environment, which is really nice.

29:41 JJ: We’re gonna talk a little bit about the write optimization and the read optimization, and then our reporting. The first engine of how we do this is called IntelliWrite. Now, I’m gonna cover this at a really high level, and please start dropping questions into the question box if you have any, but I don’t think we need to dive too deeply into this. It’s a pretty simple concept. Basically, Windows is breaking files down, and we don’t let it anymore. Instead of a break-fix of having to go and clean all that mess up after the fact, we’re actually able to just get Windows to write files the correct way the very first time. I like to use a childhood story: the egg on the wall breaks, falls, and all the king’s men come back and try to put that egg together. That’s really the break-fix model, and we’re not doing that. We just don’t let that egg, AKA the file, break and fall in the first place. We get Windows to issue it clean the first time. How it writes is how it reads, so if you get a clean write, you get a clean read back.

30:38 JJ: This is where you’re getting that 40%-plus faster throughput, because the data has been transformed, and it’s organized, and it’s transferring how it should. And what this means, if you wanna just look at the number of I/Os it takes to transfer data, is on average a reduction of 30% of all I/O. This is how you could look at it: if it takes an average of 100,000 I/Os to transfer 1 gig of data, reduce that down to 70,000 I/Os. Get rid of 30,000 I/Os, skim them off of every gig of data that has to go through, and multiply that across all of the workload you’re doing every single day. It is a significant reduction in I/O, and the I/O that’s left has been transformed. It’s larger and it’s sequential, and your throughput is gonna go through the roof.
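
As a quick sanity check on that arithmetic (using the talk’s round numbers, not measured values):

```python
# The same gigabyte, before and after a ~30% I/O reduction
# (round numbers from the talk, for illustration).
GiB = 1024 ** 3
ios_before = 100_000   # I/Os to move 1 GiB as small, split I/O
ios_after = 70_000     # I/Os to move the same GiB after transformation

print(GiB / ios_before / 1024)   # ~10.5 KiB average per I/O before
print(GiB / ios_after / 1024)    # ~15.0 KiB average per I/O after
```

Fewer, larger I/Os carrying the same data is exactly the contiguous, sequential shape the earlier slides showed paying off in write rate and read rate.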

31:25 JJ: The second engine that layers on top of that is what we call IntelliMemory. This is our DRAM read-caching engine, and this is really interesting technology as well. Now, this is where we’re gonna start leveraging some memory. We do not use any memory on the write engine, that’s all pass-through. We’re only gonna be caching reads. And, Howard, let me tap you on the shoulder to explain a little bit about how we’re using that memory, ’cause I know a lot of people instantly kind of go, “Ooh, I’m not sure about that. I know SQL hogs a lot of memory, are you… How much do you need and are you using it? And am I gonna get into memory starvation situations?” Howard, maybe I could ask you to speak to those common questions that we get real quick.

32:09 HB: Sure. Thanks very much, Jennifer. Guys, this is an animated visual that I’ve put together to help describe or show you how V-locity dynamically adjusts its memory usage, only using free available memory that would otherwise sit idle and unused anyway. It’s also intelligent enough to know when there is a demand for memory by other applications, including Windows, SQL, what have you, and to release memory from its cache, so there’s never going to be a shortage of free memory, and this will keep Windows and other applications happy.

32:54 HB: One of the things we discovered a long time ago is that most systems are not utilizing 100% of their resources all the time, and that many systems have… I don’t wanna necessarily say an over-abundance of free memory, but they do have memory that they’re not currently making use of. What a great free available resource for us to dynamically put to good use, allowing us to satisfy those frequently requested, repetitive reads directly from memory so the storage system isn’t having to waste its time processing that type of data. And then if that memory is needed, we back out of the picture and give that memory back to Windows. So, it’s no harm, no foul. Thanks, Jennifer.
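
Howard’s description maps onto a familiar caching pattern: a least-recently-used read cache that only grows into idle RAM and evicts whenever the rest of the system needs memory back. Below is a toy Python sketch of that general pattern, using the psutil package for the free-memory check. It is an illustration of the concept only, not IntelliMemory’s actual implementation, and the 2 GB free-memory floor is an arbitrary value chosen for the example.

```python
import collections

import psutil


class PressureAwareReadCache:
    """Toy LRU read cache that only grows into otherwise-idle RAM and
    shrinks when other applications need memory. A concept sketch, not
    IntelliMemory's implementation."""

    def __init__(self, min_free_bytes=2 * 1024 ** 3):
        self.min_free = min_free_bytes          # free-RAM floor to preserve
        self.cache = collections.OrderedDict()  # LRU order: block id -> data

    def _release_under_pressure(self):
        # Evict least-recently-used blocks whenever free memory dips below
        # the floor, so Windows, SQL, etc. always win the contest for RAM.
        while self.cache and psutil.virtual_memory().available < self.min_free:
            self.cache.popitem(last=False)

    def read(self, block_id, read_from_storage):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # hit: served from memory,
            return self.cache[block_id]         # no storage I/O at all
        data = read_from_storage(block_id)      # miss: go to storage
        self._release_under_pressure()
        if psutil.virtual_memory().available > self.min_free:
            self.cache[block_id] = data         # only cache into idle RAM
        return data
```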

33:47 JJ: Yeah, thanks, Howard. I think you could consider this like a tier zero cache strategy. And I mentioned that the other engine is gonna, just off the top, take 30% of the I/O out of the picture. This engine is gonna take another 20% of the read I/O and serve it directly from memory. And memory-to-memory data transfers are that much faster, even compared to going down to a flash layer on your SAN. So, really a lot of benefit here, especially when you combine the two engines.

34:14 JJ: And we’re just gonna wrap up here with the last couple of slides. I like this slide a lot. This is directly from VMware’s documentation. They talk about disk I/O performance enhancements, and they do two things here. The print is a little bit tiny, so I’ll just summarize. The first one says, “Hey, increase the virtual machine memory. This should allow the operating system to cache more.” We all know that’s not gonna happen. It’s already got plenty of memory; it’s not actually caching more. The point here is that they’re saying, “Offload as much read traffic as you can. Get something caching it directly from memory on that VM, and you’re gonna get that I/O out of the ecosystem, serve it directly from memory, and that I/O doesn’t have to consume resources making the round trip.” That’s what our DRAM read-caching IntelliMemory engine does.

34:56 JJ: Number two, “Defragment the file systems on all guests.” No one’s gonna defragment today; that’s just not gonna happen. But they are acknowledging the fact that the fragmentation that exists in the Windows Operating System is still a problem. That’s where the IntelliWrite engine comes in, and that’s where… Don’t let the egg break in the first place; just get a clean write out the gate. So, we’re really in line with VMware’s recommendations, as an enterprise-class solution that’s highly compatible with these environments.

35:24 JJ: And just a quick thing that I’ll mention as well is, when you deploy our software, there is no reboot required. As Howard mentioned earlier, it’s an MSI file. You can just deploy it. If you’ve got a non-persistent VDI environment, you just throw it right into the master image, and you can deploy it with our console or other tools, so it’s extremely easy to deploy. This is what one of the console UI screenshots looks like. You can see the I/O reduction you’re getting, and you can see your storage I/O time saved. We have centralized reporting from our console. And that’s about it, guys.

35:58 JJ: We wanted to make sure that we were able to cover… And we covered an awful lot today. We did go about six minutes over, and we really appreciate all of you sticking in with us. I’m gonna just check and see if we had any questions. Let me see if there’s any questions in there. It looks like there are, and I’d have to expand my window so I can read them. Give me one second. Howard, I’m actually not able to get into the question box, but I know that some of the common type of stuff that we’re asked, we’ve already answered a lot of that in the presentation. “How do you install it?” “Where does it install?” “No reboot required.” So, Howard, unless there’s anything else that you would like to add for our audience today, I think we can go ahead and wrap it up.

36:43 HB: No, I didn’t see any additional questions there. Maybe you can just elaborate a little bit about how to get in touch with us and how they can get their hands on a proof of concept.

36:57 JJ: Yeah, absolutely. Our team members will be reaching out to you via email and also a phone call to go ahead and schedule you. If you would like to have a proof of concept, the software trial is free. And again, like I said, we will just hop on a very quick 15-minute call with you, and we can give you that pre-assessment tool, which is called the Condusiv I/O Assessment Tool. There may be some questions around that, so I’ll just explain it briefly. That tool does not require you to install anything on your targets. It uses remote WMI calls to get existing perfmon counters off your target servers, and then it puts them into a nice interface that you can see, and you can also send it to us so we can see. It will tell us if your servers are candidates for this or not, whether your Windows Operating System, in combination with the applications you’re running on it, is gonna be a good candidate. So, that’s a perfect place to start. It takes about 10 minutes to set up. You let it run for a couple of days, and then we can come back and let you know whether they’re good candidates or not. From there, we can go for a proof of concept, if it makes sense. Howard, did I leave anything off?
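
For the curious, the general technique Jennifer describes, reading perfmon counters over remote WMI with nothing installed on the target, looks roughly like the Python sketch below. This is not the Condusiv I/O Assessment Tool itself; the server name is a placeholder, and it relies on the third-party wmi package. Note that the LogicalDisk counter set even includes Split IO/sec, the very split-I/O behavior discussed earlier.

```python
# A rough sketch of agentless perf-counter collection over remote WMI,
# in the spirit of the assessment tool described above (NOT the actual
# Condusiv tool). Requires: pip install wmi pywin32 (Windows only).
import wmi

TARGET = "SERVER01"   # placeholder target server name

# Connect to the remote machine's WMI provider with the current
# credentials (wmi.WMI also accepts user=/password= for explicit ones).
conn = wmi.WMI(computer=TARGET)

# Win32_PerfFormattedData_PerfDisk_LogicalDisk mirrors PerfMon's
# LogicalDisk counters, including Split IO/sec.
for disk in conn.Win32_PerfFormattedData_PerfDisk_LogicalDisk():
    if disk.Name == "_Total":
        continue
    print(f"{TARGET} {disk.Name}: "
          f"{disk.DiskTransfersPerSec} transfers/s, "
          f"{disk.AvgDiskBytesPerTransfer} avg bytes/transfer, "
          f"{disk.SplitIOPerSec} split IO/s")
```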

37:58 HB: No. I think you covered it really well. Thank you very much, Jennifer.

38:02 JJ: Okay. All right. Well, I wanna thank everyone again for attending, and our team will be in touch with you, and thank you very much.

38:09 HB: Alrighty. Thanks, everyone. Bye-bye.

Additional I/O Transformation resources

Article: I/Os Are Not Created Equal – Random I/O versus Sequential I/O

Case Study: Telestream solves timeouts and slows on I/O intensive applications

Article: Myriad of Windows Performance Problems Traced to a Single Source

Article: How To Get The Most Out Of Your Flash Storage Or Move To Cloud

Video: The Two I/O Fallacies Surrounding IOPS and I/O Response Time

Webinar: Two I/O Myths Killing 40% of Your Throughput

Do Your Windows Servers Have an I/O Performance Problem?

Find out by running the free Condusiv I/O Assessment Tool. It gathers 11 key performance metrics and performs statistical analyses looking for potential problems. It then displays these metrics to help you understand where potential bottlenecks might be.

Download it here