Guess what… you can now officially stop blaming storage as the most expensive part of a VDI deployment. 🙂
Gunnar Berger (@gunnarwb) with Gartner sent out a provocative tweet that grabbed my attention:
He followed that tweet up with a blog post that gave some detail: The Real Cost of VDI Storage. In it he noted that Citrix had recently created a Citrix Ready VDI Capacity Validation Program for Storage Partners and invited some storage vendors to participate in a 750 seat, Citrix PVS test.
The crux of the exercise: Citrix created a test environment and invited each storage vendor to show up with their storage array, plug it in and run the test. In my opinion, the results went about the way I would have expected. 750 desktops is a rounding error for some arrays and the sweet spot for others. In fact, as I read through the various documents I found myself laughing in some cases and, in others, asking “what the hell were you thinking submitting that configuration for this 750 seat test?” My gut feeling is some vendors were shooting for “hero performance numbers” and didn’t care what the cost was, while others knew exactly what the goal of this validation was and ran everything as cheap as they could possibly get it and still pass. Either way, do yourself a favor and don’t simply buy a storage array based on these results. In fact, as an EMC’er, if you asked me to design a 750 seat VDI solution based on your requirements, I probably wouldn’t have gone with an All Flash Array!
I can honestly say that the way I architected storage solutions just a few short years ago (read: Quick VDI Sizing HowTo) is very different from how I would do it today. In fact, back then it was a little easier because it was math (drive IOPS and desktop IOPS), and it was pretty clear which array to use. The answer to everything back then was to “throw more spindles at it until it stops complaining”. As you can imagine, that wasn’t a cheap way of sizing or fixing things.
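For anyone who never lived through the spindle-math era, here’s a rough sketch of what that sizing exercise looked like. The IOPS figures and RAID penalty below are illustrative placeholders, not sizing guidance, and the function is my own shorthand rather than any vendor’s formula:

```python
import math

def spindles_needed(desktops, iops_per_desktop, drive_iops,
                    write_ratio=0.8, raid_write_penalty=2):
    """Back-of-the-napkin spindle count for a VDI workload."""
    # Front-end IOPS the desktops generate in aggregate
    front_end = desktops * iops_per_desktop
    # Back-end IOPS after applying the RAID write penalty to the write portion
    back_end = (front_end * (1 - write_ratio)
                + front_end * write_ratio * raid_write_penalty)
    # Round up: you can't buy a fraction of a drive
    return math.ceil(back_end / drive_iops)

# Illustrative only: 750 desktops at ~10 steady-state IOPS each,
# 15K SAS drives rated ~180 IOPS, RAID 10 (write penalty of 2)
print(spindles_needed(750, 10, 180))  # 75
```

Seventy-five spindles just to absorb the IOPS, regardless of how much capacity you actually need — which is exactly why “throw more spindles at it” was never the cheap option.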
Today, if it’s less than 1,000 desktops you can look at things like VMware vSAN, EMC ScaleIO or any other software-based storage solution. If you need to scale between 500 and 2,000 desktops, a Hybrid approach (SSD/SAS/NL-SAS) makes more sense. If you need to scale beyond 2,000 desktops, All Flash Arrays make a ton of sense. Keep in mind, there is probably a solution to match just about any budget and design criteria you may have. As you scale up your requirements, economies of scale kick in and “cost per desktop” drops like a hammer. In fact, if you look at the Citrix designs, some of the storage vendors call out how far their configuration can scale based on the tested system.
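The rule of thumb above could be written down as a simple lookup. The thresholds come straight from the paragraph (note they deliberately overlap, because real designs depend on budget and workload); the function name is mine:

```python
def storage_approach(desktops: int) -> str:
    """Rough mapping from desktop count to a storage approach.

    Thresholds follow the post's rule of thumb; the 500-1,000 range
    genuinely overlaps, so treat this as a starting point, not an answer.
    """
    if desktops < 1000:
        return "software-based storage (e.g. VMware vSAN, EMC ScaleIO)"
    if desktops <= 2000:
        return "hybrid array (SSD/SAS/NL-SAS tiers)"
    return "all-flash array"

print(storage_approach(750))
print(storage_approach(1500))
print(storage_approach(5000))
```

A 750-seat deployment lands in the software-based bucket here — which is the point made earlier about not reaching for an All Flash Array at that size.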
The net result is that we can FINALLY put to bed this notion that storage purchases are the #1 reason VDI budgets get blown up. Four years ago, storage companies wore that “honor” like a scarlet letter, but with the advent of software-based storage, Hybrid Arrays and All Flash Arrays this is no longer the case. Now the biggest expense is typically the virtual desktop license or the server infrastructure needed to drive the number of desktops.
Either way, 2014 is the year for VDI.
One thought on “Stop Blaming Storage For Busting Your VDI Budget”
Hi vTexan –
I am with Tegile Systems, and there are several problems with stacking these reference architectures up as a comparison. First, a reference architecture is not a baseline for a bakeoff. A reference architecture is there to validate that your stuff works. Our solution is on the low side of the range – we easily could have used a system with a lower $/desktop or one that would have been much higher. Citrix never positioned this as an economic bakeoff, so treating it as one will only send more confusion into the market. Second, and even more confusing, is the IOPS discussion. Using different block sizes will yield massively different IOPS numbers – it is just math. I have read all of the vendors’ papers, and as a group, some used 4K blocks and some used 32K blocks. Nowhere does the comparison normalize block size to rationalize IOPS into a consistent metric. Those of us that used 32K block sizes could see 4X the IOPS if normalized with our 4K brethren.
Now to Gunnar’s credit, I have chatted with him about this and he has asked me for a data sheet or other document that will show a higher 4K block size specification for our product, so his table will show a better number for Tegile. While most marketeers would jump at that, I am pressing for a wholesale normalization, so customers will have consistent data to make their own judgements from. I hope he decides to do so.
Is 2014 the year of VDI? Perhaps, but not if customers are still confused.
Thanks for posting!