From time to time I’m asked to do quick-and-dirty IO sizing for customers looking to deploy View virtual desktops. Normally it’s a request from the EMC Technical Consultant to help size the array. Now, I usually throw a bunch of caveats into the mix like “they really should be working with either EMC Consulting Services or one of our Virtual Desktop Specialist partners,” but we can at the very least get them into the right ballpark using standard and accepted numbers. I thought I would take you through this process so that you can get a better understanding of sizing for virtual desktops on your array.
Let’s assume you are doing a Proof of Concept for 100 desktops. The first thing we do is a simple frontend IO calculation. For 100 concurrent Windows 7 desktops (notice my assumptions) I would figure 20 IOPS per desktop and multiply that by the 100 desktops to get 2000 frontend IOPS. Now, to take it to the next level, we need to take those numbers and see what our “backend” IOPS needs to be. Essentially, we need to figure out how many spindles we would need to support that number of desktops. So here is more math fun 🙂
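The frontend math above is just a multiplication, but it helps to see it laid out. Here's a minimal sketch in Python — the function name and signature are my own, and the 20 IOPS per Windows 7 desktop is the assumption from above:

```python
def frontend_iops(desktops: int, iops_per_desktop: int) -> int:
    """Total frontend IOPS the array must serve for concurrent desktops."""
    return desktops * iops_per_desktop

# 100 concurrent Windows 7 desktops at an assumed 20 IOPS each
print(frontend_iops(100, 20))  # 2000 frontend IOPS
```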
In talking with Andre Leibovici (Blog: http://myvirtualcloud.net/) on the vSpecialist team, all of his deep-dive work on Read/Write ratios for VDI shows that the real-world ratio is about 80/20 Writes/Reads for Windows 7. Yup, I’m not kidding. Now, if you spend some time tuning, you might be able to nudge this closer to 50/50, but for this sizing exercise let’s assume you will learn to improve this after the initial 100 desktop POC. So, an 80/20 split of 20 IOPS per desktop is 16/4 writes/reads. That gives us 16(w) X 100 desktops = 1600(w) IOPS and 4(r) X 100 desktops = 400(r) IOPS.
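The 80/20 split above can be sketched the same way. The names here are mine, and integer math keeps the per-desktop numbers exact under the assumed ratio:

```python
def split_iops(desktops: int, iops_per_desktop: int, write_pct: int):
    """Split total frontend IOPS into (write, read) IOPS using a write percentage."""
    writes_per_desktop = iops_per_desktop * write_pct // 100  # 20 * 80% = 16
    writes = writes_per_desktop * desktops                    # 16 * 100 = 1600
    reads = iops_per_desktop * desktops - writes              # 2000 - 1600 = 400
    return writes, reads

print(split_iops(100, 20, 80))  # (1600, 400)
```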
Now the fun part: we need to pick the RAID level. Let’s assume you want as much capacity as you can get, and you want to look at a worst-case write-penalty scenario, so for conversation purposes let’s go with RAID 5 🙂. RAID 5 (4+1) carries a write penalty of 4, so you take 1600 write IOPS X 4 and you get 6400 write IOPS. Now, the good news is reads don’t incur a penalty, so you simply add the 400 read IOPS to that 6400 and you end up with a backend IOPS pool size of 6800 IOPS.
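The write-penalty step can be sketched as follows, assuming the commonly cited penalties of 4 for RAID 5 and 2 for RAID 10 (the dictionary and function names are mine, not vendor terminology):

```python
# Backend writes generated per frontend write for each RAID level (assumption)
RAID_WRITE_PENALTY = {"raid5": 4, "raid10": 2}

def backend_iops(write_iops: int, read_iops: int, raid: str) -> int:
    """Backend IOPS pool: penalized writes plus unpenalized reads."""
    return write_iops * RAID_WRITE_PENALTY[raid] + read_iops

print(backend_iops(1600, 400, "raid5"))  # 1600*4 + 400 = 6800 backend IOPS
```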
The next step is to figure out how many disk spindles it would take to meet this requirement. I’ve seen all sorts of per-disk numbers from various array manufacturers, from 180 all the way up to 300+, and they all have various caveats attached. For this simple exercise I’m going to use 180 IOPS per spindle, which is typical for a 15K RPM drive at about 80% full. So, 6800 backend IOPS divided by 180 IOPS = 38(ea) 15K RPM spindles to meet the backend IOPS requirement for this given config. By the way, if you aren’t concerned about capacity, you could change this from RAID 5 to RAID 10. The write penalty for RAID 10 is only 2, so essentially you could cut your backend IOPS and the number of disk spindles nearly in half. You do give up some usable capacity, but you make up for it by not having to buy twice as many spindles.
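Putting the spindle math in code, rounding up since you can't buy a fraction of a disk — the 180 IOPS/spindle figure is the assumption from above, and the function name is mine:

```python
import math

def spindles_needed(backend_iops: int, iops_per_spindle: int = 180) -> int:
    """Round up: a partial spindle's worth of IOPS still needs a whole spindle."""
    return math.ceil(backend_iops / iops_per_spindle)

print(spindles_needed(6800))  # 38 spindles for the RAID 5 case
print(spindles_needed(3600))  # 20 spindles for the RAID 10 case
```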
So, the net-net is if you want to roll out 100 concurrent VMware View (or Citrix Machine Creation Services (MCS)) Windows 7 desktops, you need to add (or have) about 38(ea) 15K spindles’ worth of RAID 5 performance in your system. Now, there are some EMC features like Fully Automated Storage Tiering (FAST) as well as FAST Cache that can help drive those spindle counts even lower, as well as some really cool “quality of life” features, but this is just a simple vendor-neutral sizing exercise so let’s leave it at this for now.
At the end of the day my recommendation is to look for your storage vendor’s VDI Reference Architecture to help validate your numbers. Finally, don’t be afraid to ask for help!! There are some ROCKSTAR VDI consulting groups that can help you get this project moving in the right direction!!
Hopefully all of this made sense and also keep in mind, your mileage will vary 🙂
6 thoughts on “Quick VDI Sizing HowTo”
Great write up. I use the same methods and it has worked well in the field. Of course, I also try and add in FAST Cache to get the lower spindle count and “quality of life” features whenever possible…it makes a big difference.
Why not just use true virtualized storage and save all this hassle?
Thanks for the comment or suggestion in this case 🙂
Compellent/EMC/3PAR/EveryStorageCompany today does block-based storage virtualization. EMC has been shipping it for years, Compellent a few years more than that, and people like HP EVA and Xiotech have been doing it for over a decade. Now, from a religious perspective, I’m sure everyone will say theirs is better because of X, but at the end of the day no one is managing RAID sets – it’s all just one big pool of storage, and from a management point of view that’s all that matters. That still doesn’t mean you can ignore properly sizing for VDI. At the end of the day, it’s the backend IO pool that you have to watch. In other words, you can’t simply put 1000 desktops on 5 spindles just because you wide-stripe (storage virtualization) on the backend. This is why companies like EMC, Compellent, 3PAR, and NetApp all have reference architectures with spindle counts in them. When a boot storm rolls through your VDI environment, if you don’t have enough Tier 1 spindles your boot times go from seconds to VERY long tens of minutes, and then your phone blows up with upset/angry end users!! That’s not a good day!!
Again, this blog post was simply a way for customers/engineers to get some idea of how big an array you would need to meet your desktop requirements. You should ALWAYS ask your storage vendors for their reference architectures so that you can minimize the chances of a VDI deployment failure. If I’ve said it once, I’ve said it a thousand times: your VDI environment will live or die based on your backend storage sizing.
By the way, as I mentioned in the blog, SSDs can help a TON, and sometimes the most strategic way of using them is to set up SSDs in their own pool and dedicate them to specific portions of the desktop – specifically the “read-only” portions like replicas. Is that what you are referencing in your comment? If so, I’d be happy to address that as well.
Thanks again for your comment.
“So, the net-net is if you want to roll out 100 concurrent VMware View Windows 7 desktops you need to add (or have) about 38(ea) 15k spindles worth of RAID 5 performance into your system.”
Do the 38 drives in your example include just the data drives (sans the parity drives)? Thank you for the informative article.
Hey Ken – great question, and you made me realize I made a mistake in my post!! To quickly answer your question: no, that doesn’t include hot spares. That spindle count is strictly IO-producing spindles.
As far as the mistake: Citrix MCS and View would have the same general IO performance/spindle-count requirement for Windows 7. I know I specifically called out View, but that was an oversight that I corrected. I really wanted the blog post to be neutral to Citrix/VMware. Now, an argument could be made that Citrix PVS could reduce the IOPS/# of spindles, but that would only really affect the “Read” part of the IOPS calculation. At least that is my understanding of some of the differences in PVS vs MCS from Citrix.
Also, remember, these are just rules of thumb; most of the time the spindle counts will be lower, especially with SSDs and other tweaks that can be made to the environment. This was simply a way to do a quick sanity check on the spindles needed for VDI.
But thank you for the comment, and I’ve edited the blog to reflect the change (MCS/View).