Finally Getting my vSphere 6 Lab running on Ravello

Using the Autolab 2.6 Config

Head on over to Labguides.com and check out AutoLab. I wanted a quick start, but didn’t want all the fun to be automated out of my hands, so I will give a quick tour of how I got my basic lab up and going. In Part 2 I will add a VSAN cluster so I can catch up there too.

The automated Windows builds worked great. The domain controller set itself up with DHCP, Active Directory, and the fun bits to get PXE working for the ESXi install. This is stuff I didn’t want to waste time on.

I had to re-kick off the vCenter build to get PowerShell and PowerCLI up and going. I also had to manually install vCenter 6, since the vCenter Appliance doesn’t play nice with AutoLab. That was OK for me because I actually wanted to run through the install to check the options and see if things like the SSO setup got any better.

Letting the hosts PXE boot for vSphere 6 install.

ESXi Install finished

Installing vCenter 6.0

Deploying vCenter was actually pretty smooth. Small lab so I am using the Embedded Deployment.

Adding Hosts

Troubleshooting HA Issues

Just like old times the HA agents didn’t install correctly the first time. The more things change…

 

Great stuff from Ravello

I am very thankful for the vExpert lab space Ravello provided. If you are considering a home lab but don’t want to buy servers, switches, and even storage, this can be a good way to play with vSphere. I also spun up Photon and OpenStack, although I still want to walk through the OpenStack install from start to finish.

One of my hosts did this on boot, but a quick restart and it was fine. The next step is to add some VSAN hosts, which I will show next time.

(Hey, it’s emulated Intel VT on top of AWS, so it’s not PROD.)

Links:

http://www.ravellosystems.com

http://www.labguides.com

Use the Ravello Repo to get the AutoLab config, OpenStack (which I am also playing with), and some other blueprints for labs.

https://www.ravellosystems.com/repo/

I also got some help from this post by William Lam:

http://www.virtuallyghetto.com/2015/04/running-nested-esxi-vsan-home-lab-on-ravello.html

 

 

VMware vCenter Operations Manager and Pure Storage Rest API

I was playing with the REST API and PowerShell in order to provision vSphere datastores, and I started to think: what else could we do with all the cool information we get from the Pure Storage REST API?
I remembered some really cool people here and here had used the open HTTP Post adapter, so I started to work on how to pull data out of the FlashArray and into vCOPS.

Pure Dashboard

media_1407352996822.png

We already get some pretty awesome stats in the Pure web GUI. What we don’t get is trending and analysis: I can’t see how my data reduction increases and decreases over time, and I can’t get stats from multiple arrays in one place.

First Dashboard with Array Stats, Heat Map, and Health based in vCops Baseline

media_1407353219054.png
media_1407360492500.png

Array Level Stats

First, each of these scripts requires PowerShell 4.0.
1. Enter the FlashArray names in the $FlashArrayName variable. You can see I have 4 arrays in the Pure SE Lab.
2. Create a file with the credential for vCOPS. Since we are going to schedule this script to run every few minutes, you need to create this file. More information on creating that credential is here: http://blogs.technet.com/b/robcost/archive/2008/05/01/powershell-tip-storing-and-using-password-credentials.aspx

You MUST read and do that to create the cred.txt file in c:\temp that I reference in the script (see the sketch just after this list).

3. Change the $url variable to the IP or name of your vCOPS UI server.
4. Don’t forget to modify the Pure FlashArray name and password in each script.
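
If you have not done that before, here is a minimal sketch of the idea; the prompt text and the c:\temp\cred.txt path are the only assumptions, and the file must be created by the same account that will later run the scheduled script:

[code]
# run this once, interactively, as the account that will run the scheduled script;
# it prompts for the vCOPS admin password and stores the encrypted string in c:\temp\cred.txt
Read-Host "Enter the vCOPS admin password" -AsSecureString |
ConvertFrom-SecureString |
Out-File C:\temp\cred.txt
[/code]

The script then rebuilds the credential from that file with ConvertTo-SecureString and New-Object PSCredential, as you can see below.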

Find it on GitHub https://github.com/2vcps/purevcops-array

[code]
cls
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = @('pure1','pure2','pure3','pure4')

$AuthAction = @{
password = "pass"
username = "user"
}

# will ignore SSL or TLS warnings when connecting to the site
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$pass = cat C:\temp\cred.txt | ConvertTo-SecureString
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist "admin",$pass

# function to perform the HTTP Post web request
function post-vcops ($custval,$custval2,$custval3)
{
# url for the vCOps UI VM. Should be the IP, NETBIOS name or FQDN
$url = "<vcops ip>"
#write-host "Enter in the admin account for vCenter Operations"

# prompts for admin credentials for vCOps. If running as scheduled task replace with static credentials
$cred = $mycred

# sets resource name
$resname = $custval3

# sets adapter kind
$adaptkind = "Http Post"
$reskind = "Pure FlashArray"

# sets resource description
$resdesc = "<flasharraydesc>"

# sets the metric name
$metname = $custval2

# sets the alarm level
$alrmlev = "0"

# sets the alarm message
$alrmmsg = "alarm message"

# sets the time in epoch and in milliseconds
# using UTC for the end time keeps the timestamp from landing hours behind
$epoch = [decimal]::Round((New-TimeSpan -Start (Get-Date -Date "01/01/1970") -End ((Get-Date).ToUniversalTime())).TotalMilliseconds)

# takes the above values and combines them to set the body for the Http Post request
# these are comma separated and because they are positional, extra commas exist as place holders for
# parameters we didn’t specify
$body = "$resname,$adaptkind,$reskind,,$resdesc`n$metname,$alrmlev,$alrmmsg,$epoch,$custval"

# executes the Http Post Request
Invoke-WebRequest -Uri "https://$url/HttpPostAdapter/OpenAPIServlet" -Credential $cred -Method Post -Body $body
#write-host $resname
#write-host $custval2 "=" $custval "on" $custval3
}
ForEach($element in $FlashArrayName)
{
$faName = $element.ToString()
$ApiToken = Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/apitoken" -Body $AuthAction

$SessionAction = @{
api_token = $ApiToken.api_token
}
Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

$PureStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?action=monitor" -WebSession $Session
$PureArray = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?space=true" -WebSession $Session
ForEach($FlashArray in $PureStats) {

$wIOs = $FlashArray.writes_per_sec
$rIOs = $FlashArray.reads_per_sec
$rLatency = $FlashArray.usec_per_read_op
$wLatency = $FlashArray.usec_per_write_op
$queueDepth = $FlashArray.queue_depth
$bwInbound = $FlashArray.input_per_sec
$bwOutbound = $FlashArray.output_per_sec
}
ForEach($FlashArray in $PureArray) {

$arrayCap =($FlashArray.capacity)
$arrayDR =($FlashArray.data_reduction)
$arraySS =($FlashArray.shared_space)
$arraySnap =($FlashArray.snapshots)
$arraySys =($FlashArray.system)
$arrayTP =($FlashArray.thin_provisioning)
$arrayTot =($FlashArray.total)
$arrayTR =($FlashArray.total_reduction)
$arrayVol =($FlashArray.volumes)
}

# post the performance stats for this array
post-vcops $wIOs "Write IO" $faName
post-vcops $rIOs "Read IO" $faName
post-vcops $rLatency "Read Latency" $faName
post-vcops $wLatency "Write Latency" $faName
post-vcops $queueDepth "Queue Depth" $faName
post-vcops $bwInbound "Input per Sec" $faName
post-vcops $bwOutbound "Output per Sec" $faName

# post the capacity stats for this array
post-vcops $arrayCap "Capacity" $faName
post-vcops $arrayDR "Real Data Reduction" $faName
post-vcops $arraySS "Shared Space" $faName
post-vcops $arraySnap "Snapshot Space" $faName
post-vcops $arraySys "System Space" $faName
post-vcops $arrayTP "TP Space" $faName
post-vcops $arrayTot "Total Space" $faName
post-vcops $arrayTR "Faker Total Reduction" $faName
post-vcops $arrayVol "Volumes" $faName

}
[/code]

 

For Volumes

Find it on GitHub https://github.com/2vcps/purevcops-volumes

[code]
cls
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = @('pure1','pure2','pure3','pure4')

$AuthAction = @{
password = "pass"
username = "user"
}

# will ignore SSL or TLS warnings when connecting to the site
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
$pass = cat C:\temp\cred.txt | ConvertTo-SecureString
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -argumentlist "admin",$pass

# function to perform the HTTP Post web request
function post-vcops ($custval,$custval2,$custval3,$custval4)
{
# url for the vCOps UI VM. Should be the IP, NETBIOS name or FQDN
$url = "<vcops ip or name>"
#write-host "Enter in the admin account for vCenter Operations"

# prompts for admin credentials for vCOps. If running as scheduled task replace with static credentials
$cred = $mycred

# sets resource name
$resname = $custval

# sets adapter kind
$adaptkind = "Http Post"
$reskind = "Flash Volumes"

# sets resource description
$resdesc = $custval4

# sets the metric name
$metname = $custval2

# sets the alarm level
$alrmlev = "0"

# sets the alarm message
$alrmmsg = "alarm message"

# sets the time in epoch and in milliseconds
# using UTC for the end time keeps the timestamp from landing hours behind
$epoch = [decimal]::Round((New-TimeSpan -Start (Get-Date -Date "01/01/1970") -End ((Get-Date).ToUniversalTime())).TotalMilliseconds)

# takes the above values and combines them to set the body for the Http Post request
# these are comma separated and because they are positional, extra commas exist as place holders for
# parameters we didn’t specify
$body = "$resname,$adaptkind,$reskind,,$resdesc`n$metname,$alrmlev,$alrmmsg,$epoch,$custval3"

# executes the Http Post Request
Invoke-WebRequest -Uri "https://$url/HttpPostAdapter/OpenAPIServlet" -Credential $cred -Method Post -Body $body

write-host $custval,$custval2,$custval3
}
ForEach($element in $FlashArrayName)
{
$faName = $element.ToString()
$ApiToken = Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/apitoken" -Body $AuthAction

$SessionAction = @{
api_token = $ApiToken.api_token
}
Invoke-RestMethod -Method Post -Uri "https://${faName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

$PureStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/array?action=monitor" -WebSession $Session
$PureVolStats = Invoke-RestMethod -Method Get -Uri "https://${faName}/api/1.1/volume?space=true" -WebSession $Session
ForEach($Volume in $PureVolStats) {
#$Volume.data_reduction
#$Volume.name
#$Volume.volumes
#$Volume.shared_space
#$Volume.system
#$Volume.total
#$Volume.total_reduction
#$Volume.snapshots
# convert the volume size from bytes to GB
$adjVolumeSize = ($Volume.Size /1024)/1024/1024
#$Volume.thin_provisioning

post-vcops $Volume.Name "Volume Size" $adjVolumeSize $faName
post-vcops $Volume.Name "Volume Data Reduction" $Volume.data_reduction $faName
post-vcops $Volume.Name "Volumes" $Volume.volumes $faName
post-vcops $Volume.Name "Shared Space" $Volume.shared_space $faName
post-vcops $Volume.Name "System" $Volume.system $faName
post-vcops $Volume.Name "Total" $Volume.total $faName
post-vcops $Volume.Name "Total Reduction" $Volume.total_reduction $faName
post-vcops $Volume.Name "Thin Provisioning" $Volume.thin_provisioning $faName
post-vcops $Volume.Name "Snapshots" $Volume.snapshots $faName
}
}
[/code]

Once each of the scripts is working, schedule them as a task on a Windows server. I run one task for volumes and one for arrays, every 5 minutes, indefinitely. This will start to dump the data into vCOPS.
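
If you have not scheduled a repeating PowerShell task before, here is a minimal sketch using the ScheduledTasks cmdlets on Server 2012 or newer; the script path and task name are assumptions, and the task should run as the same account that created cred.txt:

[code]
# run the array script every 5 minutes, forever
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\scripts\purevcops-array.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 5) -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName "Pure vCOPS Array Stats" -Action $action -Trigger $trigger
[/code]

Repeat the same thing for the volumes script, or just use the Task Scheduler GUI if you prefer.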

Now you can make Dashboards.

Creating Dashboards

media_1408382785427.png

Log in to the UI for vCOPS. You must be in the custom UI; the standard UI hides all of the cool non-vSphere customization you can do.

 

Go to Environment –> Environment Overview

media_1408383055768.png

Expand Resource Kinds

media_1408383114822.png

This lets you know that data is being accepted for the array resources. Other than the PowerShell script bombing out and failing, this is the only way you know it is working. Now for a new dashboard.

Click Dashboards -> Add

media_1408383203181.png

Drag Resources, Metric Selector, Metric Graph and Heat Map to the Right

media_1408383262000.png

Name it and Click OK

Adjust the Layout

media_1408383477679.png

I like a nice Column for information and a bigger display area for graphs and heat maps. Adjust to your preference.

Edit the Resources Widget

media_1408383579549.png

Edit the Name and filters to tag

media_1408383667269.png

Now we just see the Flash Arrays

media_1408383734620.png
media_1408383840440.png

Select your Resources widget (I named mine Lab Flash Arrays) as the Providing Widget for the Metric Selector. Also select Lab Flash Arrays and the Metric Selector as the Providing Widgets for the Metric Graph.

Edit the Metric Graph Widget by clicking the gear icon

media_1408384372245.png

I change the Res. Interaction Mode to SampleCustomViews.xml. This way, when I select a FlashArray, the graph doesn’t show up until I double-click the metric in the Metric Selector. You are of course free to do it as you like.

The Heat Map

media_1408384493307.png

Edit the heat map and you will find tons of options.

media_1408384631976.png

Create a Configuration

media_1408384728117.png

Name the New Configuration

media_1408384811714.png

Group by and Resource Kinds

media_1408384843862.png

Group by the Resource Kind and then select Pure Flash Array in the drop down.

Select the Metric to Size the Heatmap and Color the Heatmap

media_1408384873077.png

Adjust the colors if you think Red and Green are boring

media_1408384896168.png

Save the Config!

media_1408384924548.png

Look! A cool new heatmap

media_1408384959172.png

Do this for all the metrics you want to have as a drop-down in the dashboard.

Obviously there are a lot more things you can do with the Dashboards and widgets. Hopefully this is enough to get you kicked off.

A Brand New Dashboard

media_1408385301227.png

Provision vSphere Datastores on Pure Storage Volumes with Powershell

A week or so ago, our Pure Storage PowerShell guru Barks (@themsftdude) sent out some examples of using PowerShell to get information via the Pure Storage REST API. My brain immediately started thinking about how we could combine this with PowerCLI to get a script that creates the LUN on Pure and then the datastore in vSphere. So now provision away with PowerShell! You know, if that is what you like to do. We also have a vCenter plugin if you like that better.

So now you can take this code and put it into a file New-PSDataStore.ps1

What we are doing:

1. Login to vCenter and the REST API for the Array.
2. Create the Volume on the Flash Array.
3. Place the new volume in the Hostgroup with your ESX cluster.
4. Rescan the host.
5. Create the new Datastore.

Required parameters:

-FlashArray The name of your array
-vCenter Name of your vCenter host
-vCluster Name of the cluster your hosts are in. If you don’t have clusters (what?) you will need to modify the script slightly.
-HostGroup The name of the hostgroup in the Pure Flash Array.
-VolumeName Name of the volume and datastore
-VolumeSize Size of the volume. This requires denoting the G for Gigabytes or T for Terabytes
-pureUser The Pure FlashArray username
-purePass The Pure FlashArray password

[powershell]
# example usage
#.\new-PSdatastore.ps1 -FlashArray "Array" -vCenter "vcenter" -vCluster "clustername" -HostGroup "HostGroup" -VolumeName "NewVol" -VolumeSize 500G -pureUser pureuser -purePass purepass
#On the Volume Size parameter you must include the letter after the number I have tested <number>G for Gigabytes and <number>T for Terabytes
#Special thanks to Barkz www.themicrosoftdude.com @themsftdude for the kickstart on the API calls.
#Find me @jon_2vcps on the twitters. Please make this script better.
# If you do not have a stored PowerCLI credential you will be prompted for the vCenter credentials.
#Not an official supported Pure Storage product, use as you wish at your own risk.
#

Param(
[Parameter(Mandatory=$true)]
[ValidateNotNullOrEmpty()]
[string] $FlashArray,
[string] $VCenter,
[string] $vCluster,
[string] $HostGroup,
[string] $VolumeName,
[string] $VolumeSize,
[string] $pureUser,
[string] $purePass

)

Add-PSSnapin VMware.VimAutomation.Core

#cls
$vname=$VolumeName
$vSize=$VolumeSize
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
$FlashArrayName = $FlashArray
$vCenterServer = $VCenter
$esxHostGroup = $HostGroup
Connect-viserver -Server $vCenterServer

$workHost = get-vmhost -Location $vCluster | select-object -First 1

$AuthAction = @{
password = $purePass
username = $pureUser
}
$ApiToken = Invoke-RestMethod -Method Post -Uri "https://${FlashArrayName}/api/1.1/auth/apitoken" -Body $AuthAction

$SessionAction = @{
api_token = $ApiToken.api_token
}
Invoke-RestMethod -Method Post -Uri "https://${FlashArrayName}/api/1.1/auth/session" -Body $SessionAction -SessionVariable Session

# create the volume on the FlashArray and add it to the ESXi host group
Invoke-RestMethod -Method POST -Uri "https://${FlashArrayName}/api/1.1/volume/${vname}?size=${vSize}" -WebSession $Session
Invoke-RestMethod -Method POST -Uri "https://${FlashArrayName}/api/1.1/hgroup/${esxHostGroup}/volume/${vname}" -WebSession $Session
$volDetails = Invoke-RestMethod -Method GET -Uri "https://${FlashArrayName}/api/1.1/volume/${vname}" -WebSession $Session

# rescan, find the new device by the volume serial number, then create the VMFS datastore
$rescanHost = $workHost | Get-VMHostStorage -RescanAllHba
$volNAA = $volDetails.serial
$volNAA = $volNAA.substring(15)
$afterLUN = $workHost | Get-ScsiLun -CanonicalName "naa.624*${volNAA}"
New-Datastore -VMhost $workHost -Name $vname -Path $afterLUN.CanonicalName -VMFS
[/powershell]

 

Changing the vCenter Graphs to Microsecond

So if you are moving your data center to the next generation of Flash Storage you may have noticed your performance charts in VMware vCenter or other tools look something like this.

wpid1734-media_1400074458056.png

You start to think, what good is the millisecond scale in a microsecond world? (I know that screenshot is from vCOPS.)

Luckily VMware provided an answer (sorta kinda).

Using microsecond for Virtual Disk Metrics

wpid1735-media_1400074662471.png

Go ahead and select your VM, go to Monitor –> Performance, and select Advanced.
First change the View from CPU to Virtual Disk (1).
Then select Chart Options (2).

wpid1736-media_1400074748513.png

Deselect the legacy counters and move on to the microsecond ones.

wpid1737-media_1400090502397.png

Then you can select Save Options to use these settings easily next time. The new settings will be saved in the drop down list in the top right corner.

wpid1738-media_1400090559293.png

Finally, you have a scale that can let you see what the Virtual Disks are doing for read and write latency.

wpid1739-media_1400090652937.png
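
If you prefer PowerCLI over clicking through chart options, a minimal sketch like this will list which Virtual Disk counters a VM exposes and pull the last hour of one of them. The VM name "SQL01" and the counter I picked are just examples; the Unit column tells you which scale each counter reports in.

[powershell]
$vm = Get-VM "SQL01"
# list the Virtual Disk latency counters available for this VM
Get-StatType -Entity $vm | Where-Object { $_ -like "virtualdisk*latency*" } | Sort-Object
# pull the last hour of realtime samples for one of them
Get-Stat -Entity $vm -Stat "virtualDisk.totalReadLatency.average" -Realtime |
Select-Object Timestamp, Instance, Value, Unit
[/powershell]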

Disk vs Virtual Disk Metrics

In the vSphere Online documentation the Disk Metric group is described as:
Disk utilization per host, virtual machine, or datastore. Disk metrics include I/O performance (such as latency and read/write speeds), and utilization metrics for storage as a finite resource.

While Virtual Disk is defined:
Disk utilization and disk performance metrics for virtual machines.

Someone can correct me if I am wrong, but the difference I see is this: even though both are choices when a VM is selected, only the Disk metric group gives stats for the datastore device the VM lives on (and can show them side by side with that VM’s stats), but it does NOT give the option to change the scale to microseconds. Virtual Disk allows only VM-level statistics, but it lets you view them in microseconds, at least for read and write latency.
Hope this helps.

What happened while getting 100% Virtualized

I often think about how many people have stalled around getting to 100% virtual. I know you are thinking I need to find some fun things to do. You are probably right.

The first thing I thought when I deployed my very first virtual infrastructure project back in the day was, “Man, I want to see if I can virtualize EVERYTHING.” This was before I knew much about storage, cloud, and management. I may be naive, but I think there is real potential out there to achieve this goal. There is still low-hanging fruit out there, depending on how you deploy your infrastructure. Having attended VMware Partner Exchange (PEX), I know how the ecosystem is built around your journey to virtualization. The biggest slide for resellers and other partners is the one VMware shows off that says, “For every $1 a customer spends on VMware, they buy $9-11 in infrastructure.” I fully believe this is the reason many customers never saw the FULL cost savings they could have when going virtual.

Roadblocks

media_1395085571475.png

I believe we all ran into a couple of different kinds of roadblocks on our path. First were organizational. Line of business owners, groups within IT and other political entities made traveling the road very difficult. Certain groups didn’t want to share. Others started to think VM’s were free and went crazy with requests. Finally the very important people who own the very important application didn’t want to be virtual because somehow virtualization was a downgrade from dedicated hardware.

Then if we were able to dodge the roadside problems organizationally, there were technical problems. Remember that $11 of drag? The big vendors made an art of refreshing and updating you with new technology. I know, I helped do it. So performance was a problem? Probably buy more disk or servers. Then every 3-5 years they were back, with something new to fix what the previous generation did not deliver on. This “spinning drag” in the case of storage slowed you from getting to your goal. 100%.

Disillusionment

media_1395085999383.png

At some point you lose the drive to be 100% virtual. The ideal has been beaten out of you. Well, at least my vendor takes me out for a steak dinner and I get to go to VMworld and pretend I am a big shot every year. This is where you settle. You resign yourself to the fact that everything is so complicated and hard it will never get done. The big vendors make a huge living on keeping you there, changing the name from VI to Private Cloud to Hybrid Super Happy Land or whatever some marketing guys who have never opened the vCenter client think of next.

Distractions

media_1395143968665.png

So you are trying to rebuild Amazon in your data center? There are probably lots of other things to fix first. Using more complicated abstraction layers may help in the long run toward building a cloud, but I see more customers continue to refresh wasteful infrastructure with new infrastructure while they are still trying to figure this out. What we need is a quick and easy win: make things better and save money right away. Then maybe we can keep working on building the utopian cloud.

The low hanging fruit

media_1395163000352.png

When we first started to virtualize we looked for the easy wins. To get you rolling again down the path, we need to identify the lowest-hanging fruit in the data center. We found all the web servers running at 1% CPU and 300MB of RAM (if that) and virtualized those so quickly the app owner didn’t even know it happened. Just like a room of 1000 servers all running at 2% CPU usage, there are giant tracts of heat-generating, spinning waste covering the data center. You had to get so many of them and stripe so wide just to make performance serviceable. You wasted weeks of your life in training classes learning how to tweak and tune these boat anchors, because it was always YOUR fault it didn’t do what the vendor said it would.

Take that legacy disk technology and consolidate to a system made to make sure it is not the roadblock on the way to being 100% virtual. I remember taking pictures of the stacks of servers getting picked up by the recycling people and now is the time to send off tons of refrigerator sized boxes of spinning dead weight. I am not in marketing so I don’t want to sound like a sales pitch. I am seeing customers realize their goal of virtualization with simple and affordable flash storage. No more data migrations or End of Life forklift upgrades. No more having to decide if the maintenance is so high I should just buy a new box. Just storage that performs well all the time and is fine running virtual Oracle and VDI on the same box.

How we do it

media_1395163736250.png

How is Pure Storage able to replace disk with flash (SSD)? Mainly, we created a system from the ground up just for flash. We created a company that believes the old way of doing business needs to disappear. Customers say, “You actually do what you said, and more.” (Biggest reason I am here.) Also, we do it all at the price of traditional 15K disk. Not there on SATA, yet.

  1. Make it ultra simple. No more tweaking, moving, migrating or refreshing. If you can give a volume a name and a size you can manage Pure Storage.
  2. Make it efficient. No more wasted space due to having to short stroke drives, no more wasted space because you created a RAID 10 pool and now have nowhere to move things so you can destroy and recreate it.
  3. Make it Available. Support that is awesome, because things do happen. Most likely, though, most of your downtime is planned, whether for migrations or code upgrades. Pure Storage allows zero performance hit and zero outage to reboot a controller to upgrade the firmware/code (whatever you want to call it). Pretty nice for an environment that needs ultimate uptime.
  4. Make sure it always performs. Imagine going to the DBAs and saying, “Everything is under 1ms latency. How about you stop blaming storage and double-check your SQL code?” Now that is something I, as an administrator, wanted to say for a long, long, long time.

Once you remove complicated storage from the list of things preventing you from getting to 100% virtual, you can focus on getting the applications working right, automating to make life easier, and maybe making it to your kid’s soccer games on Saturday.

What do we really need? Cloud? or Change?

Going through the VCAP-DCD material, I had a question, since the material comes with the assumption that everyone is working toward building a private cloud. So I started asking: do I need to build a “cloud,” and why? Now don’t think I have completely gone bonkers. I still think the benefits of cloud could help many IT departments. But more than “how do I build a cloud,” the question should be “what do we need to change to provide better service to the business?”

We are infrastructure people

wpid1684-wpid-image1.png

As VMware/storage/networking professionals we tend to think about what equipment we need to do this or that, or how, if I could just get 40Gb Ethernet, problems X, Y, and Z would go away. Often we have to build on top of a legacy environment. If we ever do get a greenfield opportunity, it usually needs to be done so quickly that we never quite get to investigate all the technology we wish we could. There is stuff like all-flash, hyper-converged things, accelerator appliances, and software-defined everything, all aiming at replacing legacy compute/network/storage.

My last post was about knowing the applications, and this is not a repeat of that, but it is very important for us to look at how our infrastructure choices will impact the business. Beyond business metrics like “my FlashArray allows business unit X to do so many more transactions in a day, which means more money for the business,” what else do the internal customers require from the blinking lights in the loud room with really cold AC?

Ask better questions

wpid1685-media_1390582947194.png
  • How does faster storage change the application?
  • What will change if we automate networking?
  • Could workers be more productive if the User experience was better?
  • What are things we do just because we always do them that way?
  • What legacy server, storage and network thought processes can we turn upside down?

This type of foundation enables you to focus on the important things like getting better at Halo. Just kidding. My goal is one day Infrastructure Administrators will get to sleep well at night, their kids will know their names and weekends will once again be for fun things and not Storage, Server or Network cutovers. That is the value of Private Cloud, not that I can now let internal customers self-service provision a VM or application (which is still cool). We gain confidence that our infrastructure is manageable. We have time to work on automating the boring repetitive stuff. You get your life back. Awesome.

No Spindles Bro

I was assisting one of my local team members the other day with sizing a VM for Microsoft SQL Server. I usually fall back to this guide from VMware. So I started out with the basic separation of Data, Logs, and TempDB.

Make it look like this:

VM Disk Layout

LSI SCSI Adapter
C: – Windows

Paravirtual SCSI Adapter
D: – Logs
E: – Data
F: – TempDB
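
If you want to script that layout rather than click through the wizard, here is a rough PowerCLI sketch. The VM name and disk sizes are made up, and moving the new disks onto a new controller is easiest with the VM powered off.

[powershell]
$vm = Get-VM "SQLSERVER10"
# add the Logs, Data and TempDB disks, then hang all three off a new Paravirtual SCSI controller
$disks = 50, 200, 100 | ForEach-Object { New-HardDisk -VM $vm -CapacityGB $_ }
New-ScsiController -HardDisk $disks -Type ParaVirtual
[/powershell]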

That layout is pretty standard. Then someone said, “Why do we need to do that?” I thought for a second or five. Why DO we need to do that? I knew the answer in the old school days: certain RAID types were awesomer at the types of data written by the different parts of the SQL database. But we are in a total post-spindle-count world. No spindles, bro! So what are some reasons to still do it this way on an all-flash array?

1. Disk Queues
I think of these like torpedo tubes. The more tubes, the fewer people waiting in line to load torpedoes. You can fire more, so to speak. Just make sure the array on the other end is able to keep up; having 30 queues all going to one 2Gbps Fibre Channel port would be no good. See number 3 for paths.

2. Logical Separation and OCD Compliance (if using RDMs)
Don’t argue with the DBA. Just do it. If something horrifically bad happens, the logs and data will be in different logical containers, so maybe that bad thing happens to one or the other, not both. I am not a proponent of RDMs; so much more to manage. But if you can’t win, or don’t want to fight that fight, at least with RDMs you will be able to label the LUN on the array “SQLSERVER10 Logs D” so you know the LUN matches something in Windows. This also makes writing snapshot scripts much easier.

3. Paths
Each datastore or RDM has its own paths. If you are using Round Robin (recommended for the Pure FlashArray), more IO on more paths means better usage of the iSCSI or FC interconnects; a quick PowerCLI sketch for that follows below. If you put it all on one LUN, you only get those queues (see #1) and those paths. Remember, do what you can to limit waiting.
Am I going down the right path? How does this make it easier? Are there other reasons to separate the logs and data for a database, other than making sure the RAID 10 flux capacitor is set correctly for 8K sequential writes? I don’t want to worry about that anymore, and I am pretty sure plenty of other VM admins and DBAs don’t either.
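
Here is the minimal PowerCLI sketch of the Round Robin piece, assuming a host named "esx01" and Pure’s "PURE" vendor string; it flips any Pure LUN that is not already set to Round Robin:

[powershell]
Get-VMHost "esx01" | Get-ScsiLun -LunType disk |
Where-Object { $_.Vendor -eq "PURE" -and $_.MultipathPolicy -ne "RoundRobin" } |
Set-ScsiLun -MultipathPolicy RoundRobin
[/powershell]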

For me this was a good exercise in questioning why I did things one way and whether I should still do them that way now.

VMware vCenter Appliance 5.5 – Tour

So you have ESXi up and running. What is next? Get the vCenter appliance running. I downloaded the OVA and imported it in just a few minutes.

media_1379943251696.png
media_1379943393581.png

After getting the appliance all booted, go to https://<your-ip>:5480

Setup vCenter Options

media_1379943489580.png

I selected custom so I could go through all the options.

media_1379943516515.png

Oracle is also an option.

media_1379943559316.png

Fill this in if external. For embedded, you just need to choose a password for the Administrator.

media_1379943617246.png

Setup your Active Directory authentication. You can do this later if you don’t have the right information now. One thing I learned is the hostname of the appliance MUST be set to a FQDN for this to work.

media_1379943660938.png

NTP rocks!

media_1379943724939.png
media_1379943789886.png
media_1379944072168.png

Sign in. The default username and password for the appliance are root and vmware.

media_1379944172479.png
media_1379944189917.png
media_1379944218620.png

Now you have an ESXi host all ready and added. Start being virtually awesome.

Installing VMware vSphere 5.5 – Quick Tour

So if you haven’t gone through it in your lab, what is better than getting an idea of how to install vSphere 5.5 with a few screenshots? For the beginners out there, I just wanted to walk through the process really quick like.

media_1379941998728.png

Boot from the media!

media_1379942059558.png

Still looks very familiar if you have done this before. Of course, if you are so awesome, why are you still reading?

media_1379942207911.png
media_1379942228953.png

Look! It’s vSAN

media_1379942266170.png

Is it VSAN, vSAN or Vsan?

media_1379942320221.png

I always use password123 – just so it is easy. Just kidding. SRSLY!

media_1379942420991.png

By the way, a note to VMware: hitting F11 is not awesome on a Mac. For those who have always been Mac people and thought F11 was some kind of Air Force project, just hold every key on the bottom left side of the keyboard and hit the volume down key. Actually, just Fn+F11.

media_1379942840755.png

Woot! Now you are a pro. Go take the VCP. Oh and study a bunch first.

Now it is time to add it to your vCenter.

Virtual Storage Integrator 5.6 – What’s New

The Virtual Storage Integrator, or VSI, has been around for a while. It seems every release adds something new and exciting that customers have asked for. The VSI 5.6 plugin for EMC is the latest version (9/13/2013) of the plugin to help streamline and simplify interactions between the vSphere client and the EMC storage used to support your Virtual Data Center/Private Cloud/Software Defined Data Center.

The VSI plugin can be downloaded for no extra charge if you have a current support.emc.com account (BTW so glad it is not powerlink anymore).

VSI Support and Downloads Page

You may just want to post a question on the EMC Community about the VSI. You can do that here.

Yeah community!

Enough background already what is new in the new version 5.6?

XtremIO Support

wpid1361-media_1378997478605.png

Awesome provisioning and visibility for the new all flash array from EMC. Ready now for the people with XtremIO and for the many waiting to get one. Coming soon!

Here is a quick demo of the XtremIO functionality. Select 720p for better viewing.

VPLEX Support

wpid1362-media_1378997599802.png

Our data mobility team is super excited about now supporting VPLEX provisioning in the VSI plugin. So now you are able to create the VPLEX datastores straight from the vSphere client. Very cool.
Update 9/23/13 Demo of VPLEX Provisioning with VSI

VMAX Provisioning with Striped Meta

We were all very excited when VMAX provisioning was added to the VSI plugin and now it is able to use the striped meta volume, which is a big deal for some VMAX users. This is an option now and you can select either method when provisioning to the VMAX.

Update 9/19/13 -> a demo from @drewtonnesen

Did you hear there is a new VNX?

wpid1364-media_1378997948493.png

The newest versions of the VNX are supported in VSI 5.6, and as you see in the slide, some of the coolest new features of the VNX will be available for use with the new VSI 5.6.

I hope you are as excited as I am about the newest release of the plugin. Remember that it supports vSphere 5.5 too!

If you have any questions, please leave a comment or, better yet, start a thread on the community.