No matter what you do to accelerate, optimize, and transform your desktop environment (physical or virtual), if the presentation is sub-par, no one cares. The common message from any vSpecialist when it comes to EUC (End User Computing; VDI is so 2011) is to focus on the end user experience. Make it easy to access my data and applications from anywhere, at any time, and I am a happy user.
This is something I really believe in, having delivered VDI (or TS) solutions in the past, starting as a Citrix MetaFrame XP administrator. So when I noticed this webcast I wanted to be sure to share it with everyone. EMC is a huge place and there is ALWAYS something going on, but I took special notice when Cisco, EMC, VCE, and VMware teamed up with a focus on getting the end user experience done right.
Save the date and sign up! August 22, 2012 11:00 AM EDT / 8:00 AM PDT.
So sign up now here: http://bit.ly/vdia22
What to expect?
When it comes to EUC there are so many “best” practices out there that many times you just need someone to tell you what works. I will take a few seconds to detail the high-level bullets I always share with customers when speaking about EUC.
- From the EMC perspective it often comes down to putting the right data in the right place. When using Flash drives to lower cost and footprint, knowing how VDI I/O works is very important.
- Also from the EMC realm is the amazing impact FAST Cache can have on these deployments vs. trying to account for all unexpected I/O with spinning media. This further lowers your cost and spindle count. That is right, someone at EMC saying buy fewer drives.
- Use the money you save to put more RAM in your Cisco UCS B-series blades. Memory is the second bottleneck after storage when it comes to your VDI rollout.
- Speaking of memory, make sure you use the best hypervisor for consolidation and memory management. vSphere 5 is still years ahead of even the promised products from the other guys. The TCO picture for hardware is ONLY part of the story, so make sure you get every last drop out of those Cisco UCS blades.
- Lastly, if you want to deliver this in a tested and proven manner AND you realize your time to market is critical, EMC VSPEX and VCE Vblock take the world’s best components and software and make them work for you. No more testing for 9 months before pushing the go button.
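To make the memory and storage bullets concrete, here is a rough consolidation sketch. Every number in it (blade RAM, per-desktop allocation, hypervisor overhead, sharing factor, IOPS per desktop) is a made-up assumption for illustration, not vendor sizing guidance:

```python
# Rough VDI host-sizing sketch. Every number here is an assumption
# for illustration only; size against your own measured workload.

def desktops_per_blade(blade_ram_gb, per_desktop_gb,
                       hypervisor_overhead_gb=8, sharing_factor=1.25):
    """Estimate desktops per blade when memory is the bottleneck.

    sharing_factor loosely models hypervisor memory reclamation
    (page sharing, ballooning) as extra effective capacity.
    """
    usable_gb = (blade_ram_gb - hypervisor_overhead_gb) * sharing_factor
    return int(usable_gb // per_desktop_gb)

def storage_iops(desktops, steady_iops_per_desktop=10, boot_storm_multiplier=5):
    """Steady-state vs. boot-storm IOPS the storage must absorb."""
    steady = desktops * steady_iops_per_desktop
    return steady, steady * boot_storm_multiplier

# Example: a 192 GB blade running 2 GB desktops.
desktops = desktops_per_blade(192, per_desktop_gb=2)   # 115 desktops
steady, boot_storm = storage_iops(desktops)            # 1150 vs. 5750 IOPS
```

The boot-storm line is the whole reason the Flash/FAST Cache bullets come before the RAM bullet: steady state is easy, the spikes are not.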
Get to the WEBCAST Already
Once again, if you are exploring, testing, POC’ing, or running production VDI in any way, shape, or form, join the webcast on August 22 and see what EMC and Cisco have in store.
Save the date and sign up! August 22, 2012.
So sign up now here: http://bit.ly/vdia22
More on VSPEX
More on VCE and End User Compute and FASTPATH
EMC Reference Architecture – one of many…
From the Cisco site – Cisco UCS / EMC VNX RA
I was meeting with a customer today and had to stop for a second when they said they were using 10 TB datastores in vSphere 4.1.
At first I was running through possibilities in my head: maybe NFS? No, they are an all-block shop. Oh wait, yeah, extents. They were using 2 TB minus 512 byte LUNs to create a giant datastore. I asked, why? The answer was simple: “so we only manage one datastore.”
I responded with, well, check out Storage DRS in vSphere 5! It gives you that one point to manage plus automatic placement across multiple datastores. Additionally, you can actually find which VM lives where, and use Storage Maintenance Mode to do storage-related maintenance. Right now they are locked into using extents. If they change their datastores into a cluster, they gain flexibility without losing the ease of management.
I wanted to use the opportunity to list some things I think about when it comes to extents with VMware.
- Extents do not equal bad. Just have the right reason to use them, and running out of space is not one.
- If you lose one extent you don’t lose everything, unless that one is the first extent.
- VMware places blocks on extents in a roughly even fashion; it is not spill-and-fill. While not really load balancing, it means you don’t hammer just one LUN at a time.
An extent-based datastore is like a stack of LUNs. Don’t knock out the bottom block!
Some points about Storage DRS.
- Storage DRS places VMDKs based on I/O and space metrics.
- Storage DRS and SRM 5 don’t play nice, last time I checked (2/13/12).
- Combine Storage DRS with Storage Policy and you have a really easy way to place and manage VMs on the storage. Just set the policy and check if it is compliant.
A Storage DRS cluster is multiple datastores appearing as one.
Some links on the topics:
Some more information from VMware on Extents
More on Storage DRS (SDRS)
In conclusion, SDRS may be removing some of the last reasons to use an extent (getting multiple-LUN performance with a single point of management). Add to that the ability to have up to 64 TB datastores with VMFS, and using extents will become even rarer than before. Unless you have another reason? Post it in the comments!
A lot of questions lately about vSphere Clusters across distance. I really need to learn for myself so I collected some good links.
Make sure you understand what “Only non-uniform host access configuration is supported” means. Someone correct me if I have this wrong, but your device that enables the distributed virtual storage needs to make sure that hosts in site A are writing to their preferred volumes in site A, and vice versa in site B. I am probably way oversimplifying it.
Big thanks to Scott Lowe for clearing up the details on this topic.
Previously I posted on how using bigger VMFS volumes helps Equallogic reduce their scalability issues when it comes to total iSCSI connections. There was a comment asking whether this means we can have a new best practice for VMFS size. I quickly said, “Yeah, make ’em big or go home.” I didn’t really say that, but something like it. Then the commenter followed up with a long response from Equallogic saying VAAI only fixes SCSI locks, and all the other issues with bigger datastores still remain. ALL the other issues being “queue depth.”
Here is my order of potential I/O problems with VMware on Equallogic:
- Being spindle bound. You have an awesome virtualized array that will send I/O to every disk in the pool or group. Unlike some others, you can take advantage of a lot of spindles. Even then, depending on the types of disks, some I/O workloads are going to use up all your potential I/O.
Solution(s): More spindles is always a good solution if you have an unlimited budget, but that is not always practical. Put some planning into your deployment. Don’t just buy 17 TB of SATA. Get some faster disk, break your group into pools, and separate the workloads into something better suited to their I/O needs.
- Connection limits. The next problem you will run into, if you are not having I/O problems, is the total iSCSI connection count. In an attempt to get all the I/O you can from your array, you have multiple vmk ports using MPIO. This multiplies the connections very quickly. When you reach the limit, connections drop and bad things happen.
Solution: The new 5.0.2 firmware increases the maximum total connections. Additionally, bigger datastores mean fewer connections. Do the math.
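Here is a sketch of that math. The per-host session count is a simplifying assumption for illustration; the exact formula depends on your MPIO plugin settings and volume layout:

```python
# Back-of-the-envelope iSCSI session count for an array group.
# Simplifying assumption: each host opens one session per vmk port
# per volume; your MPIO plugin's settings will change the exact math.

def total_iscsi_sessions(hosts, vmk_ports_per_host, volumes):
    """Total sessions the group must support for this cluster."""
    return hosts * vmk_ports_per_host * volumes

# 8 hosts with 4 vmk ports each: 20 small datastores vs. 5 big ones.
many_small = total_iscsi_sessions(8, 4, 20)  # 640 sessions
fewer_big = total_iscsi_sessions(8, 4, 5)    # 160 sessions
```

Same cluster, same MPIO setup; consolidating the volumes cuts the session count to a quarter, which is exactly why bigger datastores buy you headroom under the connection limit.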
- Queue depth. There are queues everywhere: the SAN ports have queues, each LUN has a queue, the HBA has a queue. I will defer to this article by Frank Denneman (a much smarter guy than me): a balanced storage design is the best course of action.
Solution(s): Refer to problem 1. Properly designed storage is going to give you the best protection against any potential (even if unlikely) queue problems. In your great storage design, make room for monitoring. Equallogic gives you SANHQ: USE IT!!! See how your front-end queues are doing on all your ports. Use esxtop or resxtop to see how the queues look on the ESX host. Most of us will find that queues are not a problem when problem one is properly taken care of. If you still have a queuing problem, then go ahead and make a new datastore. I would also ask Equallogic (and others) to release a Path Selection Policy plugin that uses a Least Queue Depth algorithm (or something smarter). That would help a lot.
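One way to see when a datastore actually needs splitting is to divide the LUN queue among the VMs on it. A toy model (a device queue depth of 32 is a common ESX default, but every number here is illustrative, not a recommendation):

```python
# Toy model of LUN queue sharing. A device queue depth of 32 is a
# common ESX default; treat all of these numbers as illustrative.

def queue_slots_per_vm(device_queue_depth, vms_on_datastore):
    """Average share of the LUN queue each VM gets at saturation."""
    return device_queue_depth / vms_on_datastore

# 16 busy VMs on one datastore leave ~2 queue slots each; splitting
# them across two datastores doubles the per-VM share.
one_store = queue_slots_per_vm(32, 16)  # 2.0 slots per VM
two_stores = queue_slots_per_vm(32, 8)  # 4.0 slots per VM
```

If esxtop shows the queue actually filling (not just theoretically shared), that is the point where the extra datastore earns its keep.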
So I will repeat my earlier statement that VAAI allows you to make bigger datastores and house more VMs per store. I will add a caveat: if you have a particular application that needs a high-I/O workload, give it its own datastore.
So I often have epiphany teasers while driving long distances or stuck in traffic. I call them teasers because they are never fully developed ideas and often disappear into thoughts about passing cars, or yelling at the person on their cell phone going 15 MPH taking up 2 lanes.
Here are some I was able to save today (VMware related):
1. What if I DID want an HA cluster to be split in two different locations, Why?
2. Why must we over-subscribe iSCSI vmkernel ports to make the best use of the 1 GbE physical NICs? Is it just the software iSCSI initiator in vSphere? Is it just something that happens with IP storage? I should test that sometime…
3. If I had 10 GbE NICs I wouldn’t use them for the Service Console or vMotion; that would be a waste. No wait, vMotion could use them to speed up your vMotions.
4. Why do people use VLAN 1 for their production servers? Didn’t their Momma teach ’em?
5. People shouldn’t fear using extents; they are not that bad. No, maybe they are. Nah, I bet they are fine. How often does just one LUN go down? What are the chances of it being the first LUN in your extent? OK, maybe it happens a bunch. I am too scared to try it today.