VMware View and Xsigo

*Disclaimer – I work for a Xsigo and VMware partner.

I was in the VMware View Design and Best Practices class a couple of weeks ago. Much of the class is built on the VMware View Reference Architecture. The picture below is from that PDF.

It really struck me how many I/O connections (network or storage) it would take to run this pod. The minimum (in my opinion) would be six cables per host; with ten 8-host clusters, that is 480 cables! Let's say that 160 of those are 4 Gb Fibre Channel and the other 320 are 1 Gb Ethernet. That is 640 Gb of bandwidth for storage and 320 Gb for the network.

Xsigo currently uses 20 Gb InfiniBand, and best practice would be to use two cards per server. The same 80 servers in the pod above would have 3,200 Gb of bandwidth available. Add in the flexibility and ease of management you get from virtual I/O. The savings from the director-class Fibre Channel switches and datacenter switches you no longer need would, I think, pay for the Xsigo Directors. I don't deal with pricing, though, so that is pure contemplation, and I will stick with the technical benefits. Being in the datacenter, I like any solution that makes provisioning servers easier, takes less cabling, and gives me unbelievable bandwidth.
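To make the napkin math above explicit, here is a minimal sketch; the host count, cable split, and link speeds are just the assumptions from the paragraphs above, not measured figures:

```python
# Rough cabling and bandwidth comparison for the 80-host View pod
# (ten 8-host clusters), using the assumptions stated above.

HOSTS = 10 * 8  # ten clusters of eight hosts each

# Traditional build: six cables per host, split into FC and 1 GbE
fc_cables, fc_gb = 160, 4        # 4 Gb Fibre Channel links
eth_cables, eth_gb = 320, 1      # 1 Gb Ethernet links
legacy_cables = fc_cables + eth_cables          # 480 cables
storage_bw = fc_cables * fc_gb                  # 640 Gb for storage
network_bw = eth_cables * eth_gb                # 320 Gb for network

# Xsigo build: two 20 Gb InfiniBand cards per server
ib_cables = HOSTS * 2                           # 160 cables
ib_bw = ib_cables * 20                          # 3,200 Gb total

print(f"Traditional: {legacy_cables} cables, {storage_bw + network_bw} Gb aggregate")
print(f"Xsigo:       {ib_cables} cables, {ib_bw} Gb aggregate")
```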

Just as VMware changed the way we think about the datacenter, virtual I/O will once again change how we deal with our deployments.

iSCSI Connections on EqualLogic PS Series

EqualLogic PS Series Design Considerations

VMware vSphere introduces multipathing support for iSCSI, and EqualLogic released a recommended configuration for using MPIO with iSCSI. I have a few observations after working with MPIO and iSCSI. The main lesson: know the capabilities of the storage before you go trying to see how many paths you can have with active I/O.

  1. EqualLogic defines a host connection as 1 iSCSI path to a volume. At VMware Partner Exchange 2010 I was told by a Dell guy, “Yeah, gotta read those release notes!”
  2. EqualLogic limits connections to 128 per pool / 256 per group on the 4000 series (see the table below for a full breakdown) and to 512 per pool / 2048 per group on the 6000 series arrays.
  3. The EqualLogic MPIO recommendation mentioned above can consume many connections with just a few vSphere hosts.

I was under the false impression that "hosts" meant physical connections to the array, especially since the datasheet says "Hosts Accessing PS Series Group." It actually means iSCSI connections to a volume. Therefore, if you have one host with 128 volumes, each connected via a single iSCSI path, you are already at the limit (on the PS4000).

An example of how fast vSphere iSCSI MPIO (round robin) can consume available connections can be seen in this scenario: five vSphere hosts, each with two network cards on the iSCSI network. If we follow the whitepaper above, we will create four VMkernel ports per host, and each VMkernel port creates an additional connection per volume. Therefore, if we have ten 300 GB volumes for datastores, we already have 200 iSCSI connections to our EqualLogic array. That is really no problem for the 6000 series, but the 4000 will start to drop connections. And I have not even added the connections created by the vStorage API/VCB-capable backup server. So here is a formula*:

N – number of hosts

V – number of VMkernel ports per host

T – number of targeted volumes

B – number of connections from the backup server

C – total number of connections

(N * V * T) + B = C
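Plugging the scenario above into the formula (five hosts, four VMkernel ports per host, ten volumes, no backup server yet) reproduces the 200-connection figure. A quick sketch:

```python
def iscsi_connections(hosts, vmkernel_ports, volumes, backup_connections=0):
    """(N * V * T) + B = C, per the formula above."""
    return hosts * vmkernel_ports * volumes + backup_connections

# Five hosts, four VMkernel ports each, ten datastore volumes
print(iscsi_connections(5, 4, 10))                         # 200
print(iscsi_connections(5, 4, 10, backup_connections=10))  # 210, with a backup server added
```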

EqualLogic PS Series Array     Connections (pool/group)
4000E                          128/256
4000X                          128/256
4000XV                         128/256
6000E                          512/2048
6000S                          512/2048
6000X                          512/2048
6000XV                         512/2048
6010, 6500, 6510 Series        512/2048

Use multiple pools within the group to avoid dropped iSCSI connections and provide scalability; keep in mind this reduces the number of spindles you are hitting with your I/O. Taking care to know the capacity of the array will help avoid big problems down the road.
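As a rough planning aid, the connection estimate from the formula can be checked against the per-pool limits in the table above. A minimal sketch (the limits are copied from that table, and the helper is only a sanity check, not a sizing guarantee):

```python
# Per-pool / per-group connection limits from the table above
LIMITS = {
    "PS4000": (128, 256),
    "PS6000": (512, 2048),
}

def fits_in_pool(estimated_connections, array="PS4000"):
    """True if the estimated iSCSI connection count stays within the per-pool limit."""
    per_pool, _per_group = LIMITS[array]
    return estimated_connections <= per_pool

print(fits_in_pool(200, "PS4000"))  # False - the 4000 series will start dropping connections
print(fits_in_pool(200, "PS6000"))  # True
```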

*I have seen the actual connection count be higher, and I can only figure this is because of the way EqualLogic does iSCSI redirection.

New VMware KB – zeroedthick or eagerzeroedthick

Due to the performance hit while zeroing mentioned in the Thin Provisioning Performance white paper, this article in the VMware knowledge base could be of good use.

I would suggest using eagerzeroedthick for any high-I/O, tier 1 type of virtual machine. This can be done when creating the VMDK from the GUI by selecting the "Support clustering features such as Fault Tolerance" check box.

So go out and check your VMDKs.

Thin Disk on vSphere – My First Glance

So today I got around to putting ESXi 4 on my spare box at home. I first deployed a new virtual server and decided to use the thin provisioning built into the new version. After getting everything set up, I was surprised to still see this.

I was like, DANG! That is some awesome thin provisioning. But I was thinking something had to be wrong. A 42 GB drive with Windows 2008 using only 2.28 KB? That is sweet! Since I had not seen this screen in the VM's information before, I thought for sure it had already refreshed. It was too good to be true, though: I clicked Refresh Storage and it ended up like this, which made a lot more sense for a fresh and patched Windows install. So far this leads to my first question: why the manual refresh? Should this refresh automatically when the screen redraws?

ESX Commands – esxcfg-hwiscsi

Next in the order of commands is esxcfg-hwiscsi. According to the iSCSI SAN Configuration Guide, this command lets you change certain settings on your hardware iSCSI HBA as required by your SAN.

esxcfg-hwiscsi -h – displays the help. Not a ton there, but enough.

esxcfg-hwiscsi -l – lists the current settings.

esxcfg-hwiscsi -a – allows ARP redirection on the hardware iSCSI HBA. This is used by some SANs to move traffic between ports.

esxcfg-hwiscsi -j – enables jumbo frames (9000-byte MTU); when disabled, the frame size is 1500 bytes.

I would bet that if these settings are required, you will be directed to use them by the SAN vendor or HBA vendor. If something bizarre is happening on your iSCSI SAN with hardware HBAs, one of these settings might not match the SAN.

The Forging of the new Network/VMware/Storage Professional

When I first started out in college, I needed a work-study job. Since I liked helping people with their computer problems, I applied and was hired for a position doing phone and in-person support for the university. One of the best things about starting out at a school is that they don't mind teaching. Our trainer said that in previous years new employees would be slotted into Windows, Mac, or UNIX support. He said we would be Wunder-Cons (our title was consultant instead of help desk dude): we had the privilege of having to support all of it. This thrust me into the world of IT, no matter what the piece of paper from USC said I was a Bachelor of.

I believe a new kind of Wunder-Consultant/Engineer is being made. With the announcement of the Nexus 1000V last fall, the line between network engineer and datacenter/server engineer is getting blurred. SAN and server engineers have had this tension for a while now. Virtualization is a fun technology to learn, but who gets the responsibility? I have seen the SAN team own the ESX hosts while the server team operates the VMs like they are physical, and the network team not trusting or understanding why they want a bunch of 1 GigE trunk ports. Across larger organizations it would look different, but the struggle may be just the same. Who is in control of the VMs? Are they secure? Who gets called at 1 a.m. when something dies? And this is internal to the IT department; it does not even consider that Sales doesn't want to share memory with Accounting.

I can see these technologies pushing engineers into being jacks of all trades. To be truly architect-level in VMware today you must be awesome with storage and servers. You have to be able to SSH into an ESX host, choose the right storage for an application, and set up Windows 2003 templates; that is an easy day. You will already have to troubleshoot I/O (because all problems get blamed on the virtualization first).

With the Nexus 1000V, I picture virtualization admins learning the skills to configure and troubleshoot route/switch inside and outside the virtual infrastructure. Add to that Cisco's push this year with 10 GigE, FCoE, and their own embedded virtualization products, and the lines between job duties are getting blown away.

Who is poised to become the expert in this realm: the network, server, or storage admins? In this economy it may be good to know how to do all three jobs. I am sure corporations would love to pay just one salary for these tasks.

Randomly, I thought: how would this relate to SOX? Could it pose any problems with compliance? I will save that for next time.

From Professional VMware – Virtual Machine Disk Sizing Tool

A cool sizing spreadsheet I found at
Professional VMware

“This is a tool that I created a while back to assist in sizing needed disk space in a deployment. Straight forward to use, the totals are calculated as follows: VMDK Size + Ram Size * 1.1 + 12Gb = Total Needed. While the VMDK may be obvious, the others are just as important. Ram Size is included, as ESX will create a swap file on the disk where the VM’s configuration resides (unless you specify otherwise) and needs to be included. The * 1.1 is to add 10% to the overall solution, to allow for snapshots. This can likely be adjusted up or down depending on your specific requirements, but I’ve found that at least 10% works best. The last number, 12GB. This one may seem like a mystery, and likely it is.”

I love good tools and tips like this. It comes from someone who has to plan and design disk space usage well.
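A minimal sketch of the quoted calculation; my reading is that the 10% snapshot padding applies to the VMDK plus swap total, and the 12 GB constant is the author's fixed overhead:

```python
def total_disk_needed_gb(vmdk_gb, ram_gb):
    """Sketch of the quoted sizing formula: (VMDK + RAM) * 1.1 + 12 GB.

    RAM is included because ESX creates a VM swap file alongside the
    configuration; the 10% covers snapshots; the 12 GB is the fixed
    overhead mentioned in the quote.
    """
    return (vmdk_gb + ram_gb) * 1.1 + 12

# Example: a 42 GB VMDK with 4 GB of RAM
print(total_disk_needed_gb(42, 4))  # ~62.6 GB
```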

Don’t Delete anything. Ever!

OK, so after a stressful morning I am writing mainly to tell myself: never delete anything, ever again.

For anyone else: if you don't know VMware very well, don't try to manipulate your VMDK files. You probably should not perform this combination of actions:

1. Snapshot
2. Snapshot
3. Revert to here
4. Extend disk
5. Extend disk
6. Extend disk
7. Call a consultant and say you don't know what happened, it just isn't working.

Extending a VMDK is not instant and requires additional steps in Windows to actually see it work. Please, please, please start using VCB to back up your VMDKs. Also, Backup Exec needs to do a SQL backup if you want your database to work again.