Use PKS Enterprise on VMware SDDC and Pure Storage

Pivotal Container Service (PKS) provides a deeply integrated Kubernetes (k8s) architecture for the VMware SDDC. It is a joint engineering project from VMware and Pivotal. In my conversations with Pure Storage customers and potential customers about Kubernetes, I am often asked how Pure Storage can help in a PKS Enterprise environment. The good news is that there is a very easy path to using k8s with Pure + VMware + PKS.

The Architecture

Using Pure with PKS is actually very straightforward. Since the Pure FlashArray is already a leading choice for VMware environments, supporting PKS is nothing out of the ordinary.

Once you understand the underlying technology that integrates PKS with VMware, you quickly realize that highly reliable, stateless, shared storage is the best choice when deploying PKS.

The choice between drivers (shown in the graphic above) to deliver the storage is up to you. The vSphere Cloud Provider provides automated creation and management of the virtual disks presented to containers in PKS; it supports the use of vVols and enables great possibilities for your PKS environment. Pure Service Orchestrator uses a direct connection to Pure Storage FlashArrays, FlashBlades, and Cloud Block Store, and is installed with a single Helm command or a Kubernetes Operator. It includes Smart Provisioning to place volumes on the most optimal storage device in your fleet.
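To make that choice a bit more concrete, here is a minimal sketch of a StorageClass for each path. The vSphere provisioner name and diskformat parameter come from the upstream vSphere Cloud Provider docs; the PSO class is an assumption based on the defaults the Helm chart installs for you (it ships classes like pure-block), so treat both as illustrative and check the current docs for your versions.

# vSphere Cloud Provider: virtual disks provisioned through vCenter
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
---
# Pure Service Orchestrator: volumes provisioned directly on FlashArray
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: pure-block
provisioner: pure-provisioner
parameters:
  backend: block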

The choice of tool will be dictated by your workload, and it is not an exclusive choice either; it is easy to do both. After VMworld I hope to publish the details on how to install PSO on PKS. If you have really good GitHub search fu you may be able to find the BOSH deployment.

Highly Reliable

Pure Storage has measured six nines (99.9999%) of uptime across its customer base. Many storage solutions for container environments require hours of planning and weeks of careful implementation to provide high availability. Do not spend time re-architecting your storage infrastructure for PKS; spend your time delivering k8s to your customers so they can deliver innovation for your business. Use the Pure Storage devices you already have. You may not even need a whole new dedicated array (don't tell sales I said that).

Stateless Arrays for Stateful Data

Migrating data should be eliminated from your daily tasks. FlashArray is built so that data always stays in place, and the ability to keep data in place across multiple hardware generations is a proven benefit of Pure. Migrating persistent storage in k8s, even on VMware, is a non-trivial task; depending on your scale, it could take weeks of planning and careful, flawless execution to accomplish non-disruptively. The underlying hardware should not be a concern for delivering applications. Pure Storage has made this a reality since the FlashArray debuted seven years ago.

Shared Storage

Delivering highly reliable data across multiple PKS and vSphere clusters, allowing applications to fail over if the compute in an availability zone becomes unavailable, is key to delivering a cloud experience for your k8s rollout. While the Pure sales teams would gladly help you acquire a FlashArray per vSphere cluster hosting PKS, this is simply unneeded in nearly all situations, especially as you start your Kubernetes journey.

But Why PURE?

Simple: vVols on the FlashArray, combined with the PKS integration with vSphere, enable mobility of data and a freedom unavailable on a legacy datastore. Have a group that rolled their own k8s? FlashArray can clone their persistent data instantly into PKS using vVols. Need to copy data from a bare-metal (non-VM) k8s cluster to PKS? Pure vVols make this possible. Have multiple k8s clusters within PKS today that require the same data for test/dev/prod? Pure Storage enables this nearly instantly. Pure Storage FlashArray snapshots and clones move at the speed of an API call from any of our SDKs, from Python to PowerShell to Ansible to Terraform and more, giving you an easy way to fit Pure Storage into your infrastructure-as-code tools.

You could probably spend the next five hours reading blogs and papers on all the other benefits of Pure Storage, and they all apply to your PKS on vSphere environment, but I wanted to provide a few examples directly related to operating PKS on Pure.

VMworld 2019 Session

In my session at VMworld in San Francisco I will demonstrate how Pure Storage is able to instantly migrate persistent volumes from "other" k8s clusters to PKS. Make sure you make it to this session if you are considering PKS.

PSO and “Failed to Log in to Any iSCSI Targets.”

So I create and destroy Kubernetes clusters on vSphere on a pretty regular basis. Some I create with Terraform and Ansible; for others I use PKS. I have a plumbing test for Pure Service Orchestrator that mounts a single volume to a pod on each node.

Every once in a while I get an error like this, on just one node:

Failed to log in to any iSCSI targets! Will not be able to attach volume

To make sure the error isn't coming from PSO (and it shouldn't be, since the other nodes are working), run this command:

iscsiadm -m discovery -t st -p 192.168.230.24
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.24,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.25,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.26,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.27,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
192.168.230.24:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.25:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.26:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.27:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479

Now that isn't what the result should be. At first I thought to restart iSCSI, and that didn't help. Then I thought, well, this is a lab, so let's just…

# as root, clear the cached iSCSI node records (lab only!)
cd /etc/iscsi
rm -r nodes

Do not try this if you have other iSCSI targets for other storage; I am not sure you will be happy with the results. At first I thought I should stop iSCSI before doing this, but it doesn't seem to have any effect. Now every node is able to mount and start the pod. Pure Service Orchestrator keeps trying to mount that volume, so it didn't take long to see everything showing up the way I wanted.

NAME                                        READY   STATUS    RESTARTS   AGE
pure-flex-4zlcq                             1/1     Running   0          12m
pure-flex-7stfb                             1/1     Running   0          12m
pure-flex-g2kt2                             1/1     Running   0          12m
pure-flex-jg5cz                             1/1     Running   0          12m
pure-flex-n8wkw                             1/1     Running   0          6m34s
pure-flex-rtsv7                             1/1     Running   0          12m
pure-flex-vtph2                             1/1     Running   0          12m
pure-flex-w8x22                             1/1     Running   0          12m
pure-flex-wqr9k                             1/1     Running   0          12m
pure-flex-xwbww                             1/1     Running   0          12m
pure-provisioner-9c8dc9f79-xrq6d            1/1     Running   1          12m
redis-master-demolocal-1-779f74876c-9k24t   1/1     Running   0          12m
redis-master-demolocal-10-6695b56f47-zgqc7  1/1     Running   0          12m
redis-master-demolocal-2-778666b57-5xdh8    1/1     Running   0          6m3s
redis-master-demolocal-3-84848dfb87-fhj6n   1/1     Running   0          12m
redis-master-demolocal-4-7c9dfdffb9-6cjv5   1/1     Running   0          12m
redis-master-demolocal-5-65b555fc79-jjdkl   1/1     Running   0          12m
redis-master-demolocal-6-6d495bfdf-cb5r2    1/1     Running   0          12m
redis-master-demolocal-7-5c5db655-fx2qd     1/1     Running   0          12m
redis-master-demolocal-8-74bc65b8d9-2bt8h   1/1     Running   0          12m
redis-master-demolocal-9-65dd54c587-zb9p2   1/1     Running   0          12m

Get going with MicroK8s

Last week I was getting stickers from the Ubuntu booth during the Open Infrastructure Conference in Denver. I asked a sort of dumb question, since this was all so new to me. It was my very first Open Infra Conference (formerly OpenStack Summit), and I was asking a lot of questions.

I saw a sticker for MicroK8s (Micro-KATES).

Me: What is that?

Person in Booth: Do you know what MiniKube is?

Me: Yes.

Person in Booth: It is like that, but the Ubuntu opinionated version.

Me: Ok, cool, my whole lab is Ubuntu, except when it isn’t. So I’ll try it out.

Ten minutes later? Kubernetes is running on my Ubuntu 16.04 VM.

Go over to https://microk8s.io/ to get the full docs.

Want a quick lab?

snap install microk8s --classic
microk8s.kubectl get nodes
microk8s.kubectl get services

Done. What? What!

Typing microk8s.blah for everything was slightly annoying to me, so alias it if you don't already have kubectl installed. I didn't; this was a fresh VM.

snap alias microk8s.kubectl kubectl

You can run this command to push the config into a file to be used elsewhere.

microk8s.kubectl config view --raw > $HOME/.kube/config
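One small note from my fresh VM: if $HOME/.kube does not exist yet, the redirect above will fail. Assuming you want the default location, just create it first:

mkdir -p $HOME/.kube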

Want the Dashboard? Run this:

microk8s.enable dns dashboard

It took me 5 minutes to get to this point. Now I am like, OK, let's connect to some Pure FlashArrays.

First we need to enable privileged containers in MicroK8s. Add this line to the following two config files:

--allow-privileged=true

# kubelet config
sudo vim /var/snap/microk8s/current/args/kubelet
# kube-apiserver config
sudo vim /var/snap/microk8s/current/args/kube-apiserver

Restart services to pick up the new config:

sudo systemctl restart snap.microk8s.daemon-kubelet.service
sudo systemctl restart snap.microk8s.daemon-apiserver.service
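If you want a quick sanity check that the flag actually landed before moving on (not required, just a habit of mine), grep for it in both files:

grep allow-privileged /var/snap/microk8s/current/args/kubelet
grep allow-privileged /var/snap/microk8s/current/args/kube-apiserver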

Now you can install Helm and run the Pure Service Orchestrator Helm chart.

More info on that here:

https://github.com/purestorage/helm-charts
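For a rough idea of what that looks like, here is a minimal sketch. The repo URL, chart name, release name, and namespace are my assumptions from memory of the helm-charts README (and --name is Helm 2 syntax), so verify against the repo before running; yourvalues.yaml is the file holding your array details.

helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
# Helm 2 syntax shown; drop --name for Helm 3
helm install --name pure-storage-driver pure/pure-k8s-plugin --namespace pso -f yourvalues.yaml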

The sticker joined my laptop.

It is “NFSEndPoint”

If you are using Pure Service Orchestrator with FlashBlade, note that the original YAML for the arrays when installing PSO used "NfsEndPoint". At some point it was fixed to expect "NFSEndPoint", matching the proper capitalization of NFS. I never updated my blog post and PSO guide until now; they should now reflect this change.

Sample values.yaml

arrays:
  FlashArrays:
    - MgmtEndPoint: "1.2.3.4"
      APIToken: "a526a4c6-18b0-a8c9-1afa-3499293574bb"
      Labels:
        rack: "22"
        env: "prod"
    - MgmtEndPoint: "1.2.3.5"
      APIToken: "b526a4c6-18b0-a8c9-1afa-3499293574bb"
  FlashBlades:
    - MgmtEndPoint: "1.2.3.6"
      APIToken: "T-c4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.7"
      Labels:
        rack: "7b"
        env: "dev"
    - MgmtEndPoint: "1.2.3.8"
      APIToken: "T-d4925090-c9bf-4033-8537-d24ee5669135"
      NFSEndPoint: "1.2.3.9"
      Labels:
        rack: "6a"

New Pure Service Orchestrator Demo

You may want to make this full screen to see all the CLI glory.

What you will see in this demo is the initial install of Pure Service Orchestrator on an upstream version of Kubernetes. Then, by running the 'helm upgrade' command, I add a FlashArray to scale the environment and take advantage of Smart Provisioning. At first we see the new m50 is not used over the original m70, so the final upgrade adds labels for the failure domain, or availability zone, in Kubernetes. I also add my FlashBlade to enable block and file if needed for my workload. We use the sample application with node and storage selectors to request that the app use compute and storage in a particular AZ. Kubernetes will only schedule the compute on matching nodes, and PSO will provision storage on matching storage arrays.
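For reference, the scaling step in the demo is just an edit to the values file (adding the new FlashArray and its labels) followed by an upgrade of the same release. A minimal sketch, where the release and chart names are my own assumptions rather than something taken from the demo itself:

# after adding the new FlashArray (and its labels) to yourvalues.yaml
helm upgrade pure-storage-driver pure/pure-k8s-plugin -f yourvalues.yaml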

I would love to hear what you think of this and any other ways I can show this off to enable cloud native applications. I am always looking for good examples of containerized apps that need persistent storage. Hit me up on the twitters @jon_2vcps or submit a comment below.

Kubecon 2018 Seattle Pure Storage – also We are hiring

I will be at the Pure Storage booth at Kubecon next week, December 11-13, Booth G7. Come see us to learn about Pure Service Orchestrator and Cloud Block Store for AWS, and find out how our customers are leveraging K8s to transform their applications and Pure Storage for their persistent storage needs.

It has been a fun nearly two years at Pure working with our customers that already love Pure Storage for things like Oracle, SQL, and VMware as they move into the world of K8s and containers, and helping customers that never used Pure before move from complicated or underperforming persistent storage solutions to FlashArray or FlashBlade. With Cloud Block Store entering beta, and GA coming later next year, even more customers will want to see how to automate storage persistence on premises, in the public cloud, or in a hybrid model. All of that to say: if you are an architect looking to grow on our team, please find me at Kubecon. I want to meet you and learn why you love cloud, containers, Kubernetes, and automating all the things in between.

  • Send me a message on twitter @jon_2vcps
  • Find me at the Pure Booth
  • Stop me in the hall between sessions.

I look just like one of the following people:

Pure Service Orchestrator Guide

Over the last few months I have been compiling information that I have used to help customers when it comes to PSO. Using Helm and PSO is very simple, but with so many different ways to set up K8s right now, it can require a broad knowledge of how the volume plugins work. I will add new samples and workarounds to this GitHub repo as I come across them. For now, enjoy. I have the volume plugin paths for the Kubespray, kubeadm, OpenShift, and Rancher versions of Kubernetes, plus some quota samples and even some PSO FlashArray snapshot and clone examples.

https://github.com/2vcps/PSO-Guide
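As one example of why those per-distribution notes matter: the FlexVolume plugin directory is not the same on every distribution, and the PSO chart lets you point at the right one through values.yaml. The key name and path below are illustrative assumptions; use the path documented for your distribution in the guide.

# values.yaml snippet (illustrative): point PSO at your distro's FlexVolume directory
flexPath: /var/lib/kubelet/volume-plugins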

A nice picture of some containers, because it annoys some people, which makes me think it is funny.

Kubernetes on VMware vSphere Demo and more

This post is a recap of my session at VMworld last week in Las Vegas. First, due to the lighting, the demo was not very easily viewable. I am really disappointed this happened, so I posted the full demo here on YouTube:

All of the scripts and instructions are available here on my github repo.

https://github.com/2vcps/vmworld2018_vin3762bus

Coming up next is some work around Kubespray and Terraform.

 

Storage Quotas in Kubernetes

One thing I have been asked since we released Pure Service Orchestrator is, "How do we control how much a developer or user can deploy?"

I played around with some of the settings from the K8s documentation for quotas and limits. I uploaded these into my gists on GitHub.

git clone git@gist.github.com:d0fba9495975c29896b98531b04badfd.git
#create the namespace as a cluster-admin
kubectl create -f dev-ns.yaml
#create the quota in that namespace
kubectl -n development create -f storage-quota.yaml
#or if you want to create CPU and Memory and other quotas too
kubectl -n development create -f quota.yaml

This limits users in that namespace to a certain number of Persistent Volume Claims (PVCs) and/or a total amount of requested storage. Both can be useful in scenarios where you don't want someone to create 10,000 1Gi volumes on an array or one giant 100Ti volume.
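For reference, a storage-focused ResourceQuota looks roughly like the sketch below. The object name and limits are just illustrative (my actual samples are in the gist above), but persistentvolumeclaims and requests.storage are the standard Kubernetes quota resources for exactly this.

# storage-quota.yaml (illustrative): cap PVC count and total requested capacity
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: development
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: 500Gi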

Credit to dilbert.com. When I searched for quotas on the internet this made me laugh. I work with salespeople a lot.

 

VMworld 2018 in Las Vegas

I was going to write my own post, but Cody Hosterman already did a great one.

Cody’s VMworld 2018 and Pure Storage Blog

The sessions are filling up so it will be a good idea to register and get there early. I am very excited about talking about Kubernetes on vSphere. It will follow my journey of learning containers and Kubernetes over the last 2 years or so. Hope everyone learns something.

Last year, here I am talking about containers in front of a container. Boom!