You down with VDP? Yeah you know me!

I had to be the first one to make a really bad joke.

Everyone will admit that how to efficiently back up your VMs is a hot topic. Remember, VDP is VMware's product, but a lot of EMC technical people should be able to tell you right away how it works, since it is built on EMC Avamar technology. VDP will be an excellent fit for a lot of customers whose environments can't justify spending extra on "virtual" backups.

Here are some of my favorite things in the new VDP.

  1. It is built right into the new vSphere Web Client.
  2. A simple wizard guides you through creating the backup jobs.
  3. VDP uses Changed Block Tracking to accelerate full restores.
  4. Integrated self-service file-level restore. What is better than file-level restore? No one opening a ticket to ask you to do it!

The other stuff

Someone will eventually ask: what is the difference between VDP and Avamar?

VDP

  • Max # VMs: 100
  • Storage Pool: up to 2TB
  • Replication (DR): None
  • Image-Level backup only

Avamar

  • Max # VMs: Unlimited
  • Storage Pool: up to 124TB *
  • Replication (DR): Included
  • Image-Level backup
  • Guest-Level backup
  • Extensive application support
  • Physical & Virtual backup
  • NAS/NDMP support
  • Desktop/Laptop backup
  • Extended retention to VTL/tape
  • Enterprise management
  • Purpose-Built Backup Appliances
    • Avamar Data Store
    • Data Domain integration *

Book Review: Automating vSphere with VMware vCenter Orchestrator

So, to be 100% honest, I have had this book on my desk for several months. Just staring at me. Calling my name. VMware Press provided this copy to me along with Mike Laverick's SRM book, so I am finally going to review the first one.

Cody Bunch does an amazing job of breaking down one of the most mystifying yet powerful products hidden in the VMware portfolio. VMware vCenter Orchestrator is almost mythical in its promise of automating the typical tasks of a vSphere administrator. While you could bang your head against the wall for weeks trying to figure out how to properly set up the Orchestrator server and client, I was able to use Cody's guidance to have it operational and running test workflows in just a few hours (I am a slow reader).

I can’t stress enough the need for automation and orchestration in today’s virtual machine environments. The business is demanding more and more from the virtualization team, and vCenter Orchestrator is a good place to start delivering since you probably already OWN it.

Hopefully soon there will be an update with information on the vApp version of Orchestrator. Check it out here on Amazon or your favorite book reseller.

Thanks again

Extents vs Storage DRS

I was meeting with a customer today and had to stop for a second when they said they were using 10 TB datastores in vSphere 4.1.

At first I was running through my head: maybe NFS? No, they are an all-block shop. Oh wait, yeah, extents. They were using 2 TB minus 512 byte LUNs to create a giant datastore. I asked why. The answer was simple: “so we only manage one datastore.”

I responded with, “Well, check out Storage DRS in vSphere 5!” It gives you that one point to manage plus automatic placement across multiple datastores. Additionally, you can actually find which VM lives where, and use Storage Maintenance Mode to do storage-related maintenance. Right now they are locked into using extents. If they change their datastores into a datastore cluster they gain flexibility while not losing the ease of management.

I wanted to use the opportunity to list some things I think about extents with VMware.

  1. Extents do not equal bad. Just have the right reason to use them, and running out of space is not one.
  2. If you lose one extent you don’t lose everything, unless that one is the first extent.
  3. VMware places blocks on extents in a roughly even fashion. It is not spill and fill. While not really load balancing, it means you don’t kill just one LUN at a time.

A datastore built on extents is like a stack of LUNs. Don’t knock out the bottom block!

 

Some points about Storage DRS.

  1. Storage DRS places VMDKs based on I/O and space metrics.
  2. Storage DRS and SRM 5 don’t play nice, last time I checked (2/13/12).
  3. Combine Storage DRS with storage policies and you have a really easy way to place and manage VMs on the storage. Just set the policy and check whether it is compliant.

A Storage DRS cluster is multiple datastores appearing as one.

Some links on the topics:

Some more information from VMware on Extents
More on Storage DRS (SDRS)

In conclusion, SDRS may be removing one of the last reasons to use an extent (getting multiple-LUN performance with a single point of management). Add to that being able to have up to 64 TB datastores with VMFS-5 (without extents), and using extents will become even rarer than before. Unless you have another reason? Post it in the comments!

vSphere Metro Stretched Clusters – Some Info/Links

A lot of questions lately about vSphere clusters across distance. I really needed to learn this for myself, so I collected some good links.

Make sure you understand what “Only Non-uniform host access configuration is supported” means. Someone correct me if I have this wrong, but the device that enables the distributed virtual storage needs to make sure that hosts in site A are writing to their preferred volumes in site A, and vice versa in site B. I am probably way oversimplifying it.


LINKS

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007545

http://virtualgeek.typepad.com/virtual_geek/2011/10/new-vmware-hcl-category-vsphere-metro-stretched-cluster.html

http://www.yellow-bricks.com/2011/10/07/vsphere-metro-storage-cluster-solutions-what-is-supported-and-what-not/

http://www.yellow-bricks.com/2011/10/05/vsphere-5-0-ha-and-metro-stretched-cluster-solutions/

Big thanks to Scott Lowe for clearing the details on this topic.

iSCSI Connections on EqualLogic PS Series

EqualLogic PS Series Design Considerations

VMware vSphere introduces support for multipathing for iSCSI, and EqualLogic released a recommended configuration for using MPIO with iSCSI. I have a few observations after working with MPIO and iSCSI. The main lesson is to know the capabilities of the storage before you go trying to see how many paths you can have with active IO.

  1. EqualLogic defines a host connection as 1 iSCSI path to a volume. At VMware Partner Exchange 2010 I was told by a Dell guy, “Yeah, gotta read those release notes!”
  2. EqualLogic limits the number of connections to 128 per pool / 256 per group on the 4000 series (see the table below for the full breakdown) and to 512 per pool / 2048 per group on the 6000 series arrays.
  3. The EqualLogic MPIO recommendation mentioned above can consume many connections with just a few vSphere hosts.

I was under the false impression that by “hosts” we were talking about physical connections to the array, especially since the datasheet says “Hosts Accessing PS series Group”. It actually means iSCSI connections to a volume. Therefore if you have 1 host with 128 volumes, each connected via a single iSCSI path, you are already at the limit (on the PS4000).

An example of how fast vSphere iSCSI MPIO (Round Robin) can consume the available connections can be seen in this scenario: five vSphere hosts with 2 network cards each on the iSCSI network. If we follow the whitepaper above we will create 4 VMkernel ports per host. Each VMkernel port creates an additional connection per volume. Therefore if we have ten 300 GB volumes for datastores, we already have 200 iSCSI connections to our EqualLogic array. Really no problem for the 6000 series, but the 4000 will start to drop connections. And I have not even added the connections created by the vStorage API/VCB capable backup server. So here is a formula*:

N – number of hosts
V – number of VMkernel ports per host
T – number of targeted volumes
B – number of connections from the backup server
C – total number of iSCSI connections

(N * V * T) + B = C
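To make the math concrete, here is the scenario above worked out as simple shell arithmetic (the numbers are the ones from the example: five hosts, four VMkernel ports per host, ten volumes, and no backup server counted yet):

N=5; V=4; T=10; B=0          # hosts, VMkernel ports per host, volumes, backup connections
echo $(( N * V * T + B ))    # prints 200, matching the 200 connections above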

EqualLogic PS Series Array – Connections (pool/group)
4000E – 128/256
4000X – 128/256
4000XV – 128/256
6000E – 512/2048
6000S – 512/2048
6000X – 512/2048
6000XV – 512/2048
6010, 6500, 6510 Series – 512/2048

Use multiple pools within the group in order to avoid dropped iSCSI connections and to provide scalability. Keep in mind this reduces the number of spindles you are hitting with your IO. Taking care to know the limits of the array will help avoid big problems down the road.

*I have seen the connection count actually be higher, and I can only figure this is because of the way EqualLogic does iSCSI redirection.

Get iSCSI iqn from the ESX Command Line

I was in my personal ESX host, about to upgrade to Update 1, when I got distracted trying to set up iSCSI from the command line. Right before I looked to the vSphere Client to get the IQN I said, “There is surely a way for me to find this from the command line.”

Searching around, I found the command vmkiscsi-tool. Really good stuff; I can complete the rest of my setup without the GUI. One thing though: to list out the IQN for iSCSI after you enable it, you must know the device name (i.e. vmhba??).
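(If the software initiator is not enabled yet, that first step, assuming the classic esxcfg tools on the service console, looks something like this:)

# enable the software iSCSI initiator and confirm it is on
esxcfg-swiscsi -e
esxcfg-swiscsi -q

With that out of the way, back to finding the IQN.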

Using this command:
vmkiscsi-tool -I -l

I usually guess the iSCSI HBA is vmhba33 or vmhba32, but how do I know for sure?
Try:
esxcfg-scsidevs -a
Ok great, now we know it is vmhba33

[root@esxhost01 sbin]# vmkiscsi-tool -I -l vmhba33
iSCSI Node Name: iqn.1998-01.com.vmware:esxhost01-35151883
[root@esxhost01 sbin]#

Now with a few more vmkiscsi-tool commands I can finish configuring my iSCSI.
Add the IP of the SAN:
[root@esxhost01 sbin]# vmkiscsi-tool -D -a 172.16.23.251 vmhba33
Now rescan:
[root@esxhost01 sbin]# esxcfg-rescan -a vmhba33

Upgrade to vSphere already

OK, SRM and View 4 are out. Go ahead and start planning those upgrades from 3.x to 4. I mean really, vSphere has been out for almost 6 months now. Get Enterprise Plus or the Acceleration Kit, just get to vSphere. Here are a few of my reasons why.

1. Round Robin storage IO. Those without a giant Fibre Channel SAN infrastructure can now stack software iSCSI ports and get performance above and beyond what was possible before with iSCSI (a quick console sketch follows this list). EqualLogic, LeftHand and other iSCSI SAN manufacturers have to be throwing huge parties about this. While we are talking about iSCSI: you no longer need a Service Console port alongside every iSCSI VMkernel port. This always seemed like extra setup in 3.x.

2. Thin provisioning. I am not technical enough with storage to know if SAN-based thin provisioning is better for some reason, but it is great to be able to save space with templates and other large-footprint VMs.

3. dvSwitch, VMsafe and vShield Zones. New hooks for security will eventually give us insight into areas of the virtual infrastructure we could not see before. VMsafe will let vendors tie into the kernel (at least that is how I understand it). Additionally, the new dvSwitch (vNetwork Distributed Switch, though it goes by a few names) will give control and visibility into the network stack in ways that were impossible before.
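For what it is worth, here is a minimal sketch of flipping a LUN to Round Robin from the vSphere 4 console; the naa device ID below is made up, so grab a real one from the list command first.

# show devices and their current path selection policy
esxcli nmp device list
# switch one device to Round Robin (substitute a real naa ID from the output above)
esxcli nmp device setpolicy --device naa.60060160a1b2c3d4e5f60708 --psp VMW_PSP_RR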

This is stuff many of you may have read on the release date in May, but now that I have seen vSphere in action and some of the biggest hurdles (SRM and View) have been cleared, it is time to upgrade, already.

VMworld Day 2 – All your vCloud are belong to us

Note: I rarely do posts like this. I would rather explain an admin problem and solution. I hope this doesn’t scare too many away. I am in a rant mood.

Day 2 was a great day at VMworld. The keynote today, combined with the announcement that followed it, sparked a couple of thoughts, and I would bet I am not the only one who noticed: VMware basically put Microsoft, Google and Amazon on notice. VMware now has the tools to challenge these previously unchecked organizations.

Microsoft and Google
First, the giant world-domination-bent company based in Redmond and its information-hoarding rival from NorCal. The significance of the SpringSource acquisition and the vCloud APIs is that VMware is telling everyone: run your existing apps in the private/public cloud, and your custom-developed applications will soon fly into the cloud too. Nothing we didn’t already know, but now the pieces are all supplied by VMware. Virtualization in general can make Microsoft mortal, and who would use Google Apps if the apps they actually know and like could be highly available in a per-month charge model?

Amazon
vCloud Express being made available means VMware can provide virtual services on demand to anyone with a credit card, all while letting the hosting companies front the major expenses of datacenter buildout. The software gets sold no matter how successful any one provider is. How will Amazon be able to turn a profit when competing against the most proven enterprise platform for providing virtual servers? Pretty hard to do, in my opinion.

Now, before I get called a VMware fanboy: I think VMware is starting a game they had better win. There may be a glimmer of hope that competition will benefit the consumer. The bullying of customers by software vendors should turn into innovation as vendors try to set themselves apart. The first company to become complacent will lose. At this point, who knows what is going to happen.

Using iSCSI to get some big ole disk in a Virtual Machine

First, I have lived in the South too long, because I said “Big ole disk” and couldn’t think of a more appropriate phrase. Now someone rescue me if I start to tell you to “mash” the power button on your server or SAN. I kid.

I am sure everyone out there has used this before, but I like to document these things just in case someone else needs help.

A coworker and I were installing a vSphere environment last week to support some new software for a customer. The software vendor required approximately 30 x 146 GB drives in a RAID 5 to store images. You would never guess that the software vendor happens to sell SANs too! I exaggerate; it actually called for 3 TB of usable space.

So my thought was that to get over the 2 TB limit of VMFS we would need to use the MS iSCSI initiator inside the VM. Then my coworker thought we could enable MPIO using two virtual NICs with vmxnet3. We tied each vmxnet3 NIC to a separate port group and assigned one of the 2 physical NICs to each port group. Additionally, vmxnet3 lets you enable jumbo frames, and the physical NICs were already set to MTU 9000 because they were on the software iSCSI vSwitch. So we were able to get multiple paths from the VM to the network and have jumbo frames all the way through.
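For anyone repeating this, here is a rough sketch of the vSwitch side from the ESX console. I am assuming vSwitch2 is the software iSCSI vSwitch and the port group names are made up; the per-port-group override that pins each port group to a single physical NIC still has to be set in the vSphere Client.

# set jumbo frames on the vSwitch that carries the iSCSI traffic
esxcfg-vswitch -m 9000 vSwitch2
# add one VM port group per in-guest iSCSI NIC
esxcfg-vswitch -A iSCSI-Guest-A vSwitch2
esxcfg-vswitch -A iSCSI-Guest-B vSwitch2
# verify the vSwitch, uplinks and port groups
esxcfg-vswitch -l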

Next we presented the 3 TB iSCSI volume to the Windows machine. Of course, at first it sees it as a couple of smaller volumes. Convert the disk to GPT, align to 64k, then format with NTFS. Just like that, a 3 TB disk inside a virtual machine.

iSCSI MPIO

In testing we saw IOmeter push better sequential IO than an RDM set up for Round Robin, but not quite as good as the RDM in the random IO department.

The main gain here is getting a disk bigger than the 2 TB minus 512 byte limit. Useful for scan/image servers that store tons of files for a long time.

To sum up and make it clear:

1. Use the Microsoft iSCSI initiator and MPIO inside the VM when you need more than 2 TB on a disk.

2. Use 2 port groups and bind each to a separate physical NIC so the MPIO actually works over 2 NICs.

3. With vSphere, use the VMXNET3 network driver to get jumbo frames; the E1000 driver does not support this.

A Brick Hit Me

During the VMware Communities Roundtable today, I learned that Lab Manager 4 only works with vCenter Server 4. Ah! So be sure to plan ahead, because even though ESX 3.5 and 4 hosts are supported, your vCenter 2.5 is not. Once you upgrade to vCenter 4 you will need to upgrade to Lab Manager 4 as well.

This is no fun because it means after-hours work, done while the users of Lab Manager 3 are at home and happy. It also started me wondering what kind of pitfalls there will be in the upgrade. What do you back up to be sure your linked clones stay linked? I guess it is time to read some docs.

Good Luck.