Continuing the “How to install, configure and deploy VMware View 5 on vSphere 5” series, this one is near and dear to every storage person's heart!! One of the really cool features in VMware View is the ability to separate some of the VDI storage performance profiles onto separate datastores. This is really useful when trying to maximize the performance of your storage array. From a home office or even remote office/branch office (ROBO) point of view, this is even more important. As I’ve discussed in prior blog posts, anything you can do to help put the right IOPS on the right type of disk pool is a good thing, and a good thing in a View environment is making sure the end users have a fantastic experience.
Now – I will tell you that Andre Leibovici (fellow vSpecialist and VDI guru) has done some testing, and in larger environments it might make more sense to put the linked clones on the SSDs and the replicas on spinning drives. I’ll let you decide which way to go on that one – here is the link to his article: Use Flash Drives (SSD) for Linked Clones, not Replicas.
By the way, I was inspired to do this blog a while back when my buddy and fellow vSpecialist Matt Cowger (VCDX #52) did an awesome blog post called “Pushing Limits – Running View 5 on iomega PX6” in which he benchmarked 50 virtual desktops running on a 6-drive Iomega PX6-300d array with two SSDs and four SATA drives!!! He has a bunch of performance info on his site if you want to explore this a little further. I figured I would take you through the process of setting that up in VMware View.
If we go back to the “How to setup your first desktop pool in VMware View 5” blog, I only used one datastore for both replicas and linked clones (Step 20 on that page), which is fine for a small View deployment, but when you want to roll this out to end users you may want to tweak this a bit for all the reasons I mentioned above. Below is the step-by-step process for doing this in your own lab/environment. Let’s get started.
First thing we need to do is verify the SSDs are set up in the Iomega PX6-300d.
1. Let me point out that I’m using some “unsupported” SSD drives in this Iomega PX6. Please note, I’m a trained professional, do NOT try this at home without strict supervision 🙂
2. You will notice I have two tiers of storage: a two-drive SSD set and a RAID 10 set consisting of four SATA drives. My goal is to cut the back-end write IOPS required roughly in half by using RAID 10 instead of RAID 5. I give up capacity, but I’m not capacity bound for the stuff I’m working on. I’ll go back and change it later.
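To put rough numbers on that capacity trade-off, here’s a quick back-of-the-envelope sketch. The 1 TB drive size is just an assumption for the sake of the math – use your own drive sizes:

```python
# Usable capacity for the two RAID layouts discussed above,
# assuming four SATA drives of a hypothetical 1 TB each.

def usable_gb(drives, drive_gb, raid):
    """Usable capacity of a simple RAID set, in GB."""
    if raid == "raid10":
        return drives // 2 * drive_gb   # half the spindles hold mirror copies
    if raid == "raid5":
        return (drives - 1) * drive_gb  # one spindle's worth of parity
    raise ValueError(raid)

print(usable_gb(4, 1000, "raid10"))  # → 2000 GB usable out of 4000 GB raw
print(usable_gb(4, 1000, "raid5"))   # → 3000 GB usable out of the same 4 drives
```

So on this little 4-drive SATA set, RAID 10 costs me about a third of the usable space I’d get from RAID 5 – that’s the capacity price I mentioned.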
3. One of the other things I did was bond the two 1GbE network ports on the Iomega PX6.
Now you need to go add these new volumes into your vSphere environment. Here is a blog post I did a while back on adding Iomega iSCSI storage to your vSphere hosts.
Once all that is squared away, we need to open up the View Admin Console and edit the pools. Please note, at the end of this process there will be an outage to the desktops in this pool. You should do this during off-peak hours to reduce headaches.
1. Log into the View Admin Console, click on Pools and then click on Edit.
2. Across the top you will see vCenter Settings (not very intuitive that this is the place to make these changes) – click on Browse.
3. Notice we are just using one datastore. Now click on “Use different datastore for View Composer replica disks”.
4. The first thing that happens is you get a warning that you are taking control over the placement of the parent image that linked clones use as their base image. It then goes on to recommend that you use a high-performance datastore for this feature. Good thing we are using SSDs 🙂
5. Notice the new column called “Use For”. This is where we can make changes to the types of datastores we want to use for different functions.
6. If you set up your SSD datastores correctly, you will see them in the list of datastores on that page. Scroll to that datastore, check the box to the left of it, and then from the “Use For” list select “Replica disks”.
7. You should now see some changes at the bottom of that window. It should show the linked clones and the replicas on separate volumes. Click OK to get back to the main page.
So, if you remember how linked clones work: View takes the snapshot we created earlier and creates a “replica” (or “master image”) from it, and the linked clones are then created from that replica. In this case, the replica is still sitting on the old datastore and we need it to be on the new SSD datastore. The best/easiest way to do this is with a “Recompose” operation.
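If you like to think in code, here’s a toy sketch of the replica/linked-clone relationship described above: each desktop reads from its own delta disk first and falls through to the shared, read-only replica, while writes only ever land in the delta. The block addresses and contents here are made up purely for illustration:

```python
# Toy model of a View linked clone: a shared read-only replica plus a
# per-desktop copy-on-write delta. Not real disk code, just the concept.

class Replica:
    """Read-only base image shared by every clone in the pool."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)

class LinkedClone:
    def __init__(self, replica):
        self.replica = replica
        self.delta = {}               # per-desktop copy-on-write blocks

    def write(self, addr, data):
        self.delta[addr] = data       # writes never touch the shared replica

    def read(self, addr):
        # Delta disk wins; otherwise fall through to the replica.
        return self.delta.get(addr, self.replica.blocks.get(addr))

base = Replica({0: "os", 1: "apps"})
desktop = LinkedClone(base)
desktop.write(1, "user-patched-apps")

print(desktop.read(0))   # "os" — served from the shared replica
print(desktop.read(1))   # "user-patched-apps" — served from this desktop's delta
```

This is also why the replica placement matters so much: every clone’s unwritten blocks are read from that one shared image, so it sees a big chunk of the pool’s read IOPS.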
1. From the main View Admin Console, click on Pools, select the pool you will be editing and click on Edit.
2. Once the Edit screen pops up, select Recompose from the “View Composer” dropdown menu. (You can actually do this from the “Desktop” area as well.)
3. The first screen that pops up gives you the ability to change the parent VM location. Since we didn’t make any changes to that, just verify that we will be changing the default image for the new desktops (it should already be checked) and click Next.
4. Again, this is an offline process, and this screen gives you the ability to schedule it. In my home office case, let’s just plow through it!!
5. Review the changes, click “Next”, and then run over to vCenter and watch all the action take place.
Essentially what happens is the old replica is deleted and a new one is created. This is a lot like “deploy from template”, so it might take a few minutes to finish. At the end, the best way to verify that your new replica is on the correct datastore is to go into vCenter and check the replica VM’s datastore.
So to summarize: when we originally started off, we just took the default of creating the replicas and linked clones on the same datastore. After the fact, we decided we wanted to better match the IO profile with different tiers of storage, so we created an SSD datastore on our Iomega PX6-300d and added it into vCenter as a new iSCSI datastore. We then went into the VMware View Console, edited the desktop pool under the vCenter Settings area, and chose to separate the replicas from the linked clones. Finally, we ran a recompose so that the old replica (sitting on the old datastore) would get destroyed and a new replica would be created on our new SSD datastore.
That’s it for now!! Easy peasy !!
3 thoughts on “Maximizing View 5 Storage Performance in your home lab”
Great post. Is RAID 10 recommended as a best practice for View?
Hey Mike – great question – I looked through a couple of best practice guides and I didn’t see where they called out a specific RAID level. I chose RAID 10 to help drive down the number of back-end write IOs I would need, since RAID 10 carries a write penalty of 2 whereas RAID 5 (4+1) carries a write penalty of 4. The problem is you pay a capacity price for RAID 10. Then again, capacity is very rarely a metric I size towards, since performance plays such a large part in storage design.
Your mileage will vary