Register: VMUG Webinar and Pure Storage September 22

Register here: http://tinyurl.com/pq5fd9k

On September 22 at 1:00pm Eastern time, Pure Storage and VMware will be highlighting the results of an ESG Lab Validation paper. The study on consolidating workloads with VMware and Pure Storage used a single FlashArray //m50 and deployed five virtualized mission-critical workloads: VMware Horizon View, Microsoft Exchange Server, Microsoft SQL Server (OLTP), Microsoft SQL Server (data warehouse), and Oracle (OLTP). While I won’t steal all the thunder, it is good to note that all of this was run with zero tuning on the applications. Want out of the business of tweaking and tuning everything just to squeeze a little more performance from your application? Problem solved. Plus, check out how the FlashArray delivers consistent performance even during failures.

Tier 1 workloads in 3u of Awesomeness

[Screenshot: ESG Lab results showing tier-1 workload performance]

You can see in the screenshot the results of running tier-one applications on an array made to withstand the real-world ups and downs of the datacenter. Things happen to hardware and even software, but it is good to see the applications still doing great. We always tell customers: it is not how fast the array is in a pristine benchmark, but how it responds when things are not going well, when a controller loses power or a drive (or two) fails. That is what sets Pure Storage apart (that and data reduction and real Evergreen Storage).

Small note: this is another proven environment with block sizes near 32k. This one hung out between 20k and 32k, so don’t fall for 4k or 8k nonsense benchmarks. When the blocks hit the array from VMware, this is just what we see.

Register for the Webinar
http://tinyurl.com/pq5fd9k
You can win a GoPro too.

VDI Calculator for VNX

The biggest question around sizing your VDI usually comes down to sizing the storage.

Some of the solutions team created a pretty cool sizing whitepaper a few months back, which inspired me to create this web-based calculator. It is not meant to do everything in the whole world.

It just gives you a quick and easy VNX setup.

http://vdi.2vcps.com

The source is on GitHub so please have fun with that.

Sample Output:
[Screenshot: sample calculator output]

VMware View Stretched Cluster

The last few days I have been considering the best way to stretch a cluster of VMware View resources. After digging and talking to people smarter than me, I figured out there are a lot of things to consider, and that means lots of ways to solve this. In this first post I want to highlight the first overall solution, which was inspired by an actual customer. This design came from one of the fine EMC SEs, and it inspired me to share further. I stole his picture. It is very storage-centric (imagine that), so most of what I share will add some detail on the VDI and VMware portion.

VMware View and VNX and Isilon

[Diagram: VMware View with VNX, VPLEX, and Isilon across two sites]

Probably more detail than you need. Important things to remember: the VPLEX will keep the volume in sync across distance between the sites, and all the benefits of FAST Cache will still be in place at each site.
In this solution each location will have file data redirected to the Isilon for SMB shares. I will use the VMware View pools and entitlements to force users to each side. Group Policy (GPO) or third-party persona management will direct the users to their data. We are active/active in the sense that workloads are live at each site, and active/passive for the file portion, since we will only move users to site B in the event of a planned or unplanned outage.
In another post I will discuss what I learned to make it a completely non-persistent, site-to-site, active/active-everything design. There is some cool stuff coming here.
First I used resource pools to map to the VMware View pools I created. In the picture below, the “Dell-Blades” cluster hosts 1-3 are in site A and hosts 4-6 are in site B. One problem: how do you make sure each pool is pinned to its location?

[Screenshot: the Dell-Blades cluster and View resource pools]

Create the VM Group and Host Groups first!

[Screenshot: DRS Groups Manager in the cluster settings]

Create the VM Site A and Site B groups first, then create the Host Groups. It is as simple as editing the settings in your cluster and clicking DRS Groups Manager. One gotcha is that you have to have hosts and VM’s first before making the groups. This may be an issue if you have not provisioned your View desktops yet (I would wait). Just use some dummy VM’s at first to get the rules created.

With the Groups Created, Create the Rule

[Screenshot: DRS rule settings]

Remember these rules should say “Should run on hosts in group” (big thanks to @VirtualChappy). If you don’t have the rules right, failover won’t work if a site goes away for whatever reason.
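
For what it’s worth, newer PowerCLI releases (6.5 and later) added DRS group and rule cmdlets, so the groups and the “should run on” rule above could also be scripted. This is just a sketch; the cluster, resource pool, host, and group names are the placeholders from my example:

$cluster = Get-Cluster "Dell-Blades"

# VM groups built from the per-site View resource pools
$vmSiteA = New-DrsClusterGroup -Cluster $cluster -Name "VM Site A" -VM (Get-ResourcePool "DesktopsA" -Location $cluster | Get-VM)
$vmSiteB = New-DrsClusterGroup -Cluster $cluster -Name "VM Site B" -VM (Get-ResourcePool "DesktopsB" -Location $cluster | Get-VM)

# Host groups for each site (host names are made up)
$hostSiteA = New-DrsClusterGroup -Cluster $cluster -Name "Hosts Site A" -VMHost (Get-VMHost "esx01*","esx02*","esx03*")
$hostSiteB = New-DrsClusterGroup -Cluster $cluster -Name "Hosts Site B" -VMHost (Get-VMHost "esx04*","esx05*","esx06*")

# "Should run on" pins each pool to its site but still lets HA restart VMs on the other side
New-DrsVMHostRule -Cluster $cluster -Name "Pin Site A" -VMGroup $vmSiteA -VMHostGroup $hostSiteA -Type ShouldRunOn
New-DrsVMHostRule -Cluster $cluster -Name "Pin Site B" -VMGroup $vmSiteB -VMHostGroup $hostSiteB -Type ShouldRunOn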

Useful Script for setting DRS Host Affinity for all VM’s in a Resource Pool

[Screenshot: the DRS host affinity script]

I located this script on the community forums from the amazing LucD, with a fix from another community user, "GotMoo". I love the VMware Community.

What is so cool is that I can run this after provisioning all of my desktops to get them into the right DRS VM Group, and since you might create and destroy desktop VM’s regularly in VMware View environments, this helps a ton.

$vCenterServer = "vcenter.domain.lab"

# Load PowerCLI if it is not already loaded (needed when running from a plain PowerShell prompt)
Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

# Authenticating and connecting to the vCenter Server

$CurrentUserName = [Security.Principal.WindowsIdentity]::getcurrent().name
$cred = Get-Credential $CurrentUserName
Write-Output "Connecting to vCenter. Please stand by..."
Connect-VIServer -Server $vCenterServer -Credential $Cred

 

#Function for updating the Resource VM Groups
function updateDrsVmGroup ($clusterName,$resourcePoolName,$groupVMName){
$cluster = Get-Cluster -Name $clusterName
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$groupVM = New-Object VMware.Vim.ClusterGroupSpec
#Operation edit will replace the contents of the GroupVMName with the new contents selected below.
$groupVM.operation = "edit"

$groupVM.Info = New-Object VMware.Vim.ClusterVmGroup
$groupVM.Info.Name = $groupVMName

# Perform your VM selection here. I use resource pools per cluster to identify group members,
# but you could use any method to select your VM's.
get-cluster $clusterName | Get-ResourcePool $resourcePoolName | get-vm | %{
$groupVM.Info.VM += $_.Extensiondata.MoRef
}
$spec.GroupSpec += $groupVM

#Apply the settings to the cluster
$cluster.ExtensionData.ReconfigureComputeResource($spec,$true)
}

# Calling the function. I've found the group names to be case sensitive, so watch for that.
#updateDrsVmGroup ("ClusterName") ("ResourcePool Name") ("DRS VM Groupname")
updateDrsVmGroup ("UCS") ("DesktopsA") ("VM Site A")
updateDrsVmGroup ("UCS") ("DesktopsB") ("VM Site B")
# updateDrsVmGroup ("Cluster_STAGE") ("Group A") ("Group A VMs (Odd)")
# updateDrsVmGroup ("Cluster_STAGE") ("Group B") ("Group B VMs (Even)")
Disconnect-VIServer -Confirm:$False

More to come…

Finally, this is a quick look at setting up View to be cross-location. Of course other considerations, like web load balancers, networking, and the number of View Connection Managers, all need to be decided for your environment. The next post will include some of the stuff I found about keeping the users’ data live in both sites: things like Windows DFS (Isilon can be a member), Atmos, VNX replication, and something called Panzura.

View Client on iPad with Bluetooth Keyboard

I was very excited to try out my View desktop using my new ZaggFolio keyboard case. I did not have a chance to try the View Client with the keyboard until today, and I was sad to find out the keyboard does not work very smoothly. So I would like to point this out:

[Screenshot: View Client on the iPad with the keyboard icon in the top menu]

First you have to tap the keyboard icon in the top menu. I am not sure why this step exists; it would be great if the external keyboard just worked, because the on-screen keyboard takes up half of the screen.

Anyone else think this is kind of weird?

An Idea for vCloud Director and View

Sometimes I am sitting up late at night and I have a thought of something I think would be cool, like if x and y worked together to get z. This time I thought this was good enough to blog about. Now I want to stress that I do not have any special insight into what is coming. This is just how I wish things would be.

Today there are two end-user portals from VMware: the vCloud Director self-service cloud interface and the View Manager access point for end users to access virtual desktops. Each interface interacts with one or more vCenter instances to deploy, manage, and destroy virtual machines. Below is a way-oversimplified representation of how View and vCloud Director (plus Request Manager) relate to the user experience. I think maybe there is a divide where there does not need to be (someday).

 

 

My idea

What if vCloud Director could be used in the future as the one-stop user interface portal? Leveraging vCloud Request Manager, vCD could deploy cloud resources: desktops, servers, or both. vCloud Director would be the orchestration piece for VMware View. Once the request for a desktop is approved, the entitlement to the correct pool is automatically granted; if extra desktops are needed, the cloning begins. vCloud Director would learn to speak View Composer’s language, providing the ever-elusive ability to use linked clones with vCD. vCloud Director with this feature could be great for lab and test/dev environments. The best part is that operationally there is one place to request, deploy, and manage all virtual resources from the end-user perspective. This could eliminate the ambiguity for a user (and service providers) about how to consume (and deliver) resources. This has implications for how IaaS and DaaS would be architected.

 

Now some drawbacks

You might say, “Hey, Jon, are you going to make me buy and run vCD just to get VDI?” No. The beauty of the APIs is that each product could stand alone or work together (in my vision of how they should work). Maybe even leverage Composer with vCD without View, or Request Manager with View without vCD.

One Cloud Portal to rule them all.

Using Network Load Balancing with View

If you have a smaller View deployment but still want redundant connection servers, look no further than Microsoft NLB. It solves this problem without the need for an expensive hardware load balancer. Will it have all of the bells and whistles? No. But if you have fewer than 1,000 users you probably would not see the benefit of the advanced features in a hardware load balancer. Make sure to read the whitepaper from VMware about NLB in virtual machines.

I am making the assumption you are like me and want everything to be as virtual as possible, so the View Connection Manager servers will be VM’s.

Setup the primary and replica View Servers

I won’t go over installing View. Just be sure to set up the initial manager server first, then go ahead and set up the replica VM.

Configure NLB

[Screenshot: Network Load Balancing Manager, New Cluster]

Go to the Administrative Tools and open the Network Load Balancing Manager. Right-click the top node in the tree menu on the left and select New Cluster.
Set the IP and other information you will use for the load-balanced cluster. This is a new IP, not one used by your View Manager servers.
In the VMware document referenced above, VMware recommends setting the cluster operation mode to Multicast.
Click Next, then Next again. When asked to configure port rules, I leave the defaults and click Next. You can choose to limit this to certain ports.

[Screenshot: NLB port rules]

Click Next again and enter localhost in the wizard to configure the local interfaces for NLB. Click Next and make sure to note the priority; when setting up the replica server this number needs to be different. Finally, click Finish and wait for the configuration to complete. You should now be able to ping your new cluster IP address.
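
If you would rather script this part, the NetworkLoadBalancingClusters module that ships with Windows Server 2008 R2 and later can do roughly what the wizard does, including adding the replica server covered in the next step. The interface name, cluster name, and IPs here are placeholders, so treat it as a sketch rather than a tested recipe:

# Create the NLB cluster on the first View Connection Server (run locally on that server)
Import-Module NetworkLoadBalancingClusters
New-NlbCluster -InterfaceName "Local Area Connection" -ClusterName "view-nlb" -ClusterPrimaryIP "192.168.1.50" -SubnetMask "255.255.255.0" -OperationMode Multicast

# Join the replica server to the cluster (node name and interface are placeholders)
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "view-replica01" -NewNodeInterface "Local Area Connection"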

Setup the Replica Server in the Load Balancer

[Screenshot: Add Host To Cluster]

Right-click the node in the tree menu for the NLB cluster you just created and select Add Host To Cluster. Enter the IP for the replica server and click Connect. Select the interface that will be used for load balancing and click Next. Make sure the priority is unique from the first server. If it gives you any grief after this point, close and re-open the Network Load Balancing Manager. The working cluster should look like this:

[Screenshot: NLB cluster with both View Connection Servers]

Test the Failover

[Screenshot: continuous ping to the cluster IP during failover]

Start a continual ping to the cluster IP. Now use the vSphere Client to disconnect the network from one of the servers. Watch the pings continue to come back.

Finally, create a DNS A record (something like desktop.yourdomain.com) and point it to the cluster IP. You now have some decent failover in case of a VM failure and even a host failure (my suggestion would be to use separate hosts for the VM’s).
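
If your DNS runs on Windows, that record is a one-liner with dnscmd; the DNS server name, zone, and IP below are placeholders (on Server 2012 and later the DnsServer module’s Add-DnsServerResourceRecordA would do the same thing):

# Point desktop.yourdomain.com at the NLB cluster IP
dnscmd dns01.yourdomain.com /recordadd yourdomain.com desktop A 192.168.1.50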

Note: you may need to add static ARP entries on your switches depending on your network topology. Be sure to test this fully and consult your network manufacturer’s documentation for help with static ARP.

Ask Good Questions

This happened a long time ago. I arrived at a customer site to install View Desktop Manager (it may have been version 2). This was before any cool VDI sizing tools like Liquidware Labs. While installing ESX and VDM I casually asked, “What apps will you be running on this install?” The answer was, “Oh, web apps like YouTube, Flash and some Shockwave stuff.” I thought “ah dang” in my best Mater voice. This was a case of two different organizations each thinking someone else had gathered the proper information. Important details sometimes fall through the cracks. Since that day, I try to at least uncover most of this stuff before I show up on site.

Even though we have great assessment tools now, remember to ask some questions and get to know your customer’s end goal.

Things I learned that day, as related to VDI:

1. Know what your client is doing, “What apps are you going to use?”

2. Know where your client wants to do that thing from, “So, what kind of connection do you have to that remote office with 100+ users?”

This is not the full list of questions I would ask, just some I learned along the way.

VMware View and Xsigo

*Disclaimer – I work for a Xsigo and VMware partner.

I was in the VMware View Design and Best practices class a couple weeks ago. Much of the class is built on the VMware View Reference Architecture. The picture below is from that PDF.

It really struck me how many IO connections (network or storage) it would take to run this POD. The minimum (in my opinion) would be 6 cables per host; with ten 8-host clusters, that is 480 cables! Let’s say that 160 of those are 4 Gb Fibre Channel and the other 320 are 1 Gb Ethernet. That is 640 Gb of bandwidth for storage and 320 Gb for network.

Xsigo currently uses 20 Gb InfiniBand, and best practice would be to use 2 cards per server. The same 80 servers in the above cluster would have 3,200 Gb of bandwidth available. Add in the flexibility and ease of management you get using virtual IO, and I would think the cost savings from the director-class fibre switches and datacenter switches you no longer need would pay for the Xsigo Directors. I don’t deal with pricing, though, so that is pure contemplation and I will stick with the technical benefits. Being in the datacenter, I like any solution that makes provisioning servers easier, takes less cabling, and gives me unbelievable bandwidth.
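
Just to make the back-of-the-napkin math explicit, here it is as a quick sketch; the cable counts and speeds are my assumptions above, not numbers from the reference architecture:

# Legacy cabling for the 80-host POD (assuming 6 cables per host)
$hostCount       = 10 * 8            # ten 8-host clusters
$legacyCables    = $hostCount * 6    # 480 cables
$legacyStorageGb = 160 * 4           # 640 Gb of 4 Gb Fibre Channel
$legacyNetworkGb = 320 * 1           # 320 Gb of 1 Gb Ethernet

# Xsigo virtual IO with two 20 Gb InfiniBand cards per host
$xsigoCables      = $hostCount * 2       # 160 cables
$xsigoBandwidthGb = $xsigoCables * 20    # 3,200 Gb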

Just as VMware changed the way we think about the datacenter, virtual IO will once again change how we deal with our deployments.

Storage Design and VDI

Recently I have spent time re-thinking certain configuration scenarios and asking myself, “Why?” If there is something I do day to day during installs, is it still true when it comes to vSphere, and will it still be true in future versions?
Lately I have questioned how I deploy LUNs/volumes/datastores. I usually deploy multiple moderately sized datastores. In my opinion this was always the best way to fit MOST situations. I also create datastores based on need afterward: I will create some general-use datastores, then add a bigger or smaller store based on performance/storage needs. After all the research I have done and asking questions on Twitter*, I still think this is a good plan in most situations.
I went over VMworld.com session TA3220 – VMware vStorage VMFS-3 Architectural Advances since ESX 3.0 and read this paper:
http://www.vmware.com/resources/techresources/1059
I also went over some blog posts at Yellow-Bricks.com and Virtualgeek.

An idea occurred to me when it comes to using extents in VMFS, SCSI reservations/locks, and VDI “boot storms.” First, some things I picked up:
1. Extents are not “spill and fill”; VMFS places VM files across all the LUNs. It is not quite what I would call load balancing, since it does not take IO load into account when placing files, but in situations where all the VM’s have similar loads this won’t be a problem.
2. Only the first LUN in a VMFS span gets locked by “storage and VMFS administrative tasks” (Scalable Storage Performance, pg. 9). I am not sure if this implies all locks.

Booting hundreds of VM’s for VMware View will cause locking, and even though vSphere is much better about how quickly this process completes, there is still an impact. So I am beginning to think of a disk layout to ease administration for VDI, and possibly lay the groundwork for improved performance. Here is my theory:

Create four LUNs of 200 GB each and use VMFS extents to group them together, resulting in an 800 GB datastore with 4 disk queues and only 1 LUN that locks during administrative tasks.
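
As a rough PowerCLI sketch of that layout: the host name and LUN canonical names below are made up, and since PowerCLI has no dedicated cmdlet for adding extents, the extend step drops down to the HostDatastoreSystem API. Treat it as an untested outline:

# Four hypothetical 200 GB LUNs presented to the host
$esx  = Get-VMHost "esx01.domain.lab"
$luns = "naa.60060160aaaa0001","naa.60060160aaaa0002","naa.60060160aaaa0003","naa.60060160aaaa0004"

# Create the datastore on the first LUN (the head extent that takes the SCSI locks)
$ds = New-Datastore -VMHost $esx -Name "View-Desktops" -Path $luns[0] -Vmfs

# Add the remaining LUNs as extents through the HostDatastoreSystem API
$dsSystem = Get-View $esx.ExtensionData.ConfigManager.DatastoreSystem
foreach ($lun in $luns[1..3]) {
    $options = $dsSystem.QueryVmfsDatastoreExtendOptions($ds.ExtensionData.MoRef, "/vmfs/devices/disks/$lun", $null)
    $dsSystem.ExtendVmfsDatastore($ds.ExtensionData.MoRef, $options[0].Spec)
}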

Give this datastore to VMware View and let it have at it. Since the IO load for each VM is mostly the same, and really at its highest during boot, other tasks performed on the LUN after the initial boot storm will have even less impact. So we can let desktops get destroyed and rebuilt/cloned all day while only locking that first LUN. This part I still need to confirm in the lab.

What I have seen in the lab is that with same-sized clones, the data on disk was spread pretty evenly across the LUNs.

Any other ideas? Please leave a comment. Maybe I am way off base.

*(thanks to @lamw @jasonboche and @sakacc for discussing or answering my tweets)

VMware View – Repurpose your Existing PC’s as Thin Clients

I was looking for the last couple of weeks for a good way to re-purpose PC’s as thin clients to ease the investment in VDI. I stumbled across this PDF from VMware and I thought it was great. I would tend towards using Group Policy to deploy the new shell described on pages 3 and 4. It can always be undone if the PC is needed as a PC again.

Check it out.

You pretty much replace the default shell (explorer.exe) with the VMware View Client. I would suggest using some Group Policy to keep people from using Task Manager to start new processes. This should be a temporary solution until you have the budget to buy some real thin clients, or even netbooks.
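
The core of the trick from that PDF is just the Winlogon Shell value. As a sketch of the idea (the client path and -serverURL value below are placeholders from my memory of the older View Client, so verify against the PDF before rolling it out via GPO):

# Replace the default shell with the View Client for every user of this PC (placeholder path and URL)
$winlogon = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty -Path $winlogon -Name Shell -Value '"C:\Program Files\VMware\VMware View\Client\bin\wswc.exe" -serverURL https://view.domain.lab'

# To turn the machine back into a normal PC, set the shell back:
# Set-ItemProperty -Path $winlogon -Name Shell -Value "explorer.exe"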

There are of course lots of options out there for thin clients, and software to provision a “thin OS” to machines. This is free and easy though. I thought it was cool so I decided to share.