Pure Service Orchestrator is Validated for Enterprise PKS

The Pure Service Orchestrator Team is excited to announce that PSO is now validated with PKS Enterprise 1.4. PSO is PKS Partner Ready.

Note: Most of this was written while the in-tree vSphere Cloud Provider in K8s was being transitioned to the Cloud Native Storage (CNS) CSI driver. Use CNS and CSI when you can, keeping in mind that the more stable versions of PKS do not support CSI by default because their K8s version is older. Use PSO for ReadWriteMany on Pure FlashBlade, and use CNS for block on FlashArray with VMFS datastores (vVols coming).


You can find more information on the VMware Marketplace: https://marketplace.vmware.com/vsx/solutions/pure-service-orchestrator-2-5-2?ref=search

Learn more about PKS and Pure Storage with these posts:

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

Why PSO and PKS?

I think it is crucial to understand the options for storage in Kubernetes first. If you have seen me present in the last 12 months you may have already seen this graphic. When I talk about hypervisor options I am referring to the vSphere Cloud Provider (and now Cloud Native Storage). When you deploy the cluster you provide credentials to contact vCenter, and the plugin creates, manages, and destroys persistent volumes on a vSphere datastore. This option is built into PKS Enterprise. It allows you to create a custom StorageClass per datastore, or even use SPBM for provisioning. Yes, that means VMFS and vVols are supported today. There is no validation or certification needed to use this plugin (VCP or CNS) with Pure Storage; customers of PKS are already doing this today. VCP works very well with vVols. I advocated in my VMworld session that this unlocks data mobility options between DIY K8s clusters and PKS (or even between PKS clusters).
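
For example, a VCP StorageClass might look like this. It is only a sketch; the class name, datastore, and storage policy name are examples you would replace with your own.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vvol-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  # Either pin the class to a specific datastore...
  # datastore: VVolDatastore01
  # ...or let SPBM place the disk using a vCenter storage policy
  storagePolicyName: "vVol Gold"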

How does PSO fit in with PKS?
If you require “ReadWriteMany”, aka RWX, you want many scalable containers to all attach to and use a single Persistent Volume Claim. The Pure Storage FlashBlade handles this use case. If you need simplicity in storage management across multiple devices, both file and block, PSO consolidates them into a single orchestration layer deployed to any cluster with a single command. PSO will also scale to new devices with a single command, simplifying what was traditionally a very complex portion of a K8s environment.

Use vSphere Cloud Provider and Pure Service Orchestrator Side-by-Side

CNS + Pure Service Orchestrator

Because the StorageClasses created by PSO and the ones you manually create for VCP/CNS can coexist, you can give the end users of PKS a choice with only an initial install effort for the Ops team and nearly zero effort from Day 2 onward. For more information, please take a look at my “Getting Started with PKS” and “How to install PSO in a PKS Cluster” posts.

The thing that happened was…

So if you follow k8s development at all, you know that CSI became GA at the beginning of 2019. The in-tree vSphere Cloud Provider “driver” is now deprecated, and VMware has released Cloud Native Storage, the CSI driver for Kubernetes clusters running on vSphere. This does not change the need for Pure Service Orchestrator for RWX volumes, and in-guest iSCSI through PSO is still possible too. Officially, for RWO (ReadWriteOnce) volumes you should be using CNS.
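
A CNS StorageClass looks similar but uses the CSI provisioner. Again, this is just a sketch; the class name and storage policy name are examples.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cns-vvol
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vVol Gold"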


Installing PSO in a PKS Cluster using the Operator


Remember to have the K8s cluster created within PKS first, and think about how those PKS VMs will communicate with the FlashArray and FlashBlade.

More information and detail:
https://github.com/purestorage/helm-charts/tree/master/operator-k8s-plugin
First, clone the Git repo that contains the installer for the Operator.

$ git clone --branch <version> https://github.com/purestorage/helm-charts.git
$ cd helm-charts/operator-k8s-plugin
$ ./install.sh --namespace=pso --orchestrator=k8s -f values.yaml
$ kubectl get all -n pso
NAME                                    READY   STATUS    RESTARTS   AGE
pod/pso-operator-b96cfcfbb-zbwwd        1/1     Running   0          27s
pod/pure-flex-dzpwm                     1/1     Running   0          17s
pod/pure-flex-ln6fh                     1/1     Running   0          17s
pod/pure-flex-qgb46                     1/1     Running   0          17s
pod/pure-flex-s947c                     1/1     Running   0          17s
pod/pure-flex-tzfn7                     1/1     Running   0          17s
pod/pure-provisioner-6c9f69dcdc-829zq   1/1     Running   0          17s

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/pure-flex   5         5         5       5            5           <none>          17s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pso-operator       1/1     1            1           27s
deployment.apps/pure-provisioner   1/1     1            1           17s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/pso-operator-b96cfcfbb        1         1         1       27s
replicaset.apps/pure-provisioner-6c9f69dcdc   1         1         1       17s
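
The install.sh command above takes a values.yaml file with your array credentials. A minimal sketch of that file, following the arrays section used by the purestorage/helm-charts repo, looks like this; the endpoints and API tokens are placeholders.

arrays:
  FlashArrays:
    - MgmtEndPoint: "10.0.0.10"          # FlashArray management VIP (placeholder)
      APIToken: "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
  FlashBlades:
    - MgmtEndPoint: "10.0.0.20"          # FlashBlade management address (placeholder)
      APIToken: "T-bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
      NfsEndPoint: "10.0.0.21"           # FlashBlade data VIP used for NFS mounts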
 

Here is a sample deployment. You can copy all of this into a file called deployment.yaml.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio

Now apply the file to the cluster

# kubectl apply -f deployment.yaml

Check the pod status

$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h

Check the PVC status

$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim-rwx   Bound    pvc-04817b75-f98b-11e9-8402-005056a975c2   101Gi      RWX            pure-file      4h19m

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

Installing PSO in PKS with Helm


To get started installing PSO in your PKS cluster using Helm, follow these instructions.
Before installing PSO, the Plan in Enterprise PKS must have the “Allow Privileged” box checked. This setting grants the privileged access needed to mount storage.

Scroll way down…

Apply the settings in the Installation Dashboard and wait for them to finish applying.

Create a cluster. Go get a Chick-fil-a Biscuit. 

# pks create-cluster testcluster -e test.domain.local -p small

Quick install for FlashBlade and NFS

Install Helm. More info on Helm and its RBAC setup: https://helm.sh/docs/using_helm/#role-based-access-control

  1. Set up the RBAC role for Tiller. Save the following to a file (for example rbac-tiller.yaml):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
  2. Apply the RBAC config and initialize Tiller:
# kubectl apply -f rbac-tiller.yaml
# helm init --service-account tiller

Install PSO
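
If you have not already added the Pure Storage Helm repo, add it first (this is the repo URL documented in the purestorage/helm-charts README):

# helm repo add pure https://purestorage.github.io/helm-charts
# helm repo update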

  1. # helm install -n pso pure/pure-k8s-plugin -f values.yaml

This is the quickest method to get PSO up and running. We are not adding any packages to the PKS stemcell; NFS support is built in, so FlashBlade is supported out of the box by PKS.
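
For this FlashBlade-only quick install, the values.yaml passed to helm install above can be as small as the following sketch; the addresses and API token are placeholders.

arrays:
  FlashBlades:
    - MgmtEndPoint: "10.0.0.20"    # FlashBlade management address (placeholder)
      APIToken: "T-bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"
      NfsEndPoint: "10.0.0.21"     # FlashBlade data VIP used for NFS mounts (placeholder)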

Installing PSO for FlashArray

Before deploying the PKS cluster you must tell the BOSH Director to install a few things at runtime.

Details and the packages are on my github page:

https://github.com/2vcps/pso_prereqs

This is the same method used by other vendors to add agents and drivers to PKS or Cloud Foundry.
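
Under the covers this is a BOSH runtime config add-on. As a rough sketch (the environment alias, config name, and file name below are placeholders; use the actual files from the pso_prereqs repo), the upload looks like this:

$ bosh -e pks update-runtime-config --name=pso-prereqs runtime-config.yml

PKS clusters created or redeployed after the runtime config is set will pick up the add-on when BOSH builds the worker VMs.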

Once you finish with the instructions, PSO will be able to mount both FlashArray and FlashBlade volumes using their respective StorageClasses, pure-block and pure-file.

Please pay attention to networking

PKS does not allow the deployment to add another NIC to the VMs it creates, and with PKS and NSX-T everything is also kept behind logical routers. Please be sure the VMs have access to the storage. I would prefer no firewall and no routing between a VM and the storage, but this may not be possible. You may be able to use VLANs to reduce the routing to a minimum. Just be sure to document the full network path from VM to storage for future reference.
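
A quick sanity check from a PKS worker VM (or a pod using host networking), assuming netcat is available; the IP addresses below are placeholders for your management and data endpoints.

$ nc -zv 10.0.0.10 443     # FlashArray/FlashBlade management (REST API)
$ nc -zv 10.0.0.10 3260    # FlashArray iSCSI data port
$ nc -zv 10.0.0.21 2049    # FlashBlade NFS data VIP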

Using PSO

Here is a sample deployment. You can copy all of this into a file called deployment.yaml.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio

Now apply the file to the cluster

# kubectl apply -f deployment.yaml

Check the pod status

$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h

Check the PVC status

$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim-rwx   Bound    pvc-04817b75-f98b-11e9-8402-005056a975c2   101Gi      RWX            pure-file      4h19m

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS

Some NSX-T Things I learned

Over the last few months I have done a lot of work with NSX-T. I have not done this much networking since my CCNA days. I wanted to share a couple of things I found on the web that were really helpful.

I was using NSX-T 2.4.2, and some troubleshooting guides were not very helpful because they were written for other versions.

This was great for removing some things I broke.
Deleting NSX-T Objects

Since the end goal with NSX was PKS, this is where I spent time over and over:
https://docs.pivotal.io/pks/1-5/nsxt-prepare-env.html

After everything worked for installing PKS, I hit an error when deploying a PKS cluster. The deployment kept failing, and I kept seeing references to pks-nsx-t-osb-proxy:
https://community.pivotal.io/s/article/pks-nsx-t-osb-proxy-job-fails-while-installing-PKS
https://cormachogan.com/2019/02/05/reviewing-pks-logs-and-status/
https://community.pivotal.io/s/question/0D50P00003xMeUBSA0/installing-pivotal-container-service-failed-with-pksnsxtosbproxy

There is some helpful information in those links. The main thing: when you create certificates for NSX-T Manager, you should apply them too.

Also, make sure the NICs on your ESXi hosts are all set up the same way. I had 4 NICs and 4 different VLAN/trunk configs; no bueno. Also, VXLAN wants frames of at least 1600 MTU, so I set everything to 9000 just for fun. That worked much better.
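
One quick check that helped: verify jumbo frames end to end from an ESXi host with vmkping. The vmkernel interface and target IP below are just examples; 8972 bytes of payload plus the ICMP and IP headers makes a full 9000-byte packet, and -d sets do-not-fragment.

$ vmkping -I vmk10 -d -s 8972 192.168.130.12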

See you in Barcelona next week.

Migrate Persistent Data into PKS with Pure vVols

In my VMworld session this week I discussed some of the architectural decisions to be made while deploying PKS on vSphere; my demo revolved around how to move existing data into PKS once it is up and running.

Using the Pure FlashArray and vVols, we are able to automate that process and quickly move data from another k8s cluster into PKS. It is not limited to that, but this is the use case I started with.

Part 1 of the demo shows taking the persistent data from a deployment and cloning it over the vVol that is created by the vSphere Cloud Provider with PKS. vVols are particularly important because they keep the data in a native format and make copying, replication, and snapshotting much easier.

Part 2 is the same process just scripted using Python and Ansible.

Demo Part 1 – Manual process of migrating data into PKS

Demo Part 2 – Using Python and Ansible to migrate data into PKS

How to automate the Migration with some Python and Ansible

The code I used is available from code.purestorage.com, which also links to the GitHub repo: https://github.com/PureStorage-OpenConnect/k8s4vvols

They let me on a stage. Again. 🙂

Use PKS Enterprise on VMware SDDC and Pure Storage


Pivotal Container Service (PKS) provides a deeply integrated Kubernetes (k8s) architecture for the VMware SDDC. It is a joint engineering project from VMware and Pivotal. In my conversations with Pure Storage customers and potential customers about Kubernetes, I often get asked how Pure Storage can help a PKS Enterprise environment. The good news is that there is a very easy path to using k8s with Pure + VMware + PKS.

The Architecture

Using Pure with PKS is actually very straightforward. Since the Pure FlashArray is already a leading choice for VMware environments, supporting PKS is nothing out of the ordinary.

Once you understand the underlying technology that integrates PKS into VMware, you may soon realize that highly reliable, stateless, shared storage is the best choice when deploying PKS.

The choice between drivers (shown in the graphic above) to deliver the storage is up to you. The vSphere Cloud Provider provides automated creation and management of the virtual disks presented to containers in PKS; it supports the use of vVols and enables great possibilities for your PKS environment. Pure Service Orchestrator uses a direct connection to Pure Storage FlashArrays, FlashBlades, and Cloud Block Stores. It is installed with a single Helm command or a Kubernetes Operator, and it includes Smart Provisioning to place volumes on the most optimal storage device in your fleet.

The choice of tool will be dictated by your workload, and it is not an exclusive choice; it is easy to do both. After VMworld I hope to publish the details on how to install PSO on PKS. If you have really good GitHub search fu you may be able to find the BOSH deployment.

Highly Reliable

Pure Storage has measured six nines of uptime across its customer base. Many storage solutions for container environments require hours of planning and weeks of careful implementation to provide high availability. Do not spend time re-architecting your storage infrastructure for PKS; spend your time delivering k8s to your customers so they can deliver innovation for your business. Use the Pure Storage devices you already have. You may not even need a whole new dedicated array (don’t tell sales I said that).

Stateless Arrays for Stateful Data

Migrating data should be eliminated from your daily tasks. FlashArray keeps moving toward a future where data always stays in place, and the ability to keep data in place across multiple hardware generations is a proven benefit of Pure. Migrating persistent storage in k8s, even on VMware, is a non-trivial task; depending on your scale it could take weeks of planning and careful, flawless execution to accomplish non-disruptively. The underlying hardware should not be a concern for delivering applications. Pure Storage has made this a reality since the FlashArray debuted 7 years ago.

Shared Storage

Delivering highly reliable data across multiple PKS and vSphere clusters, and allowing applications to fail over if the compute in an availability zone becomes unavailable, is key to delivering a cloud experience for your k8s rollout. While the Pure sales teams would gladly help you acquire a FlashArray per vSphere cluster hosting PKS, this is simply not needed in nearly all situations, especially as you start your Kubernetes journey.

But Why PURE?

Simple: vVols on the FlashArray, combined with the PKS integration with vSphere, enable mobility of data and a freedom unavailable on a legacy datastore. Have a group that rolled their own k8s? FlashArray can clone their persistent data instantly into PKS using vVols. Need to copy data from a bare-metal (non-VM) k8s cluster to PKS? Pure vVols makes this possible. Have multiple k8s clusters within PKS today that require the same data for test/dev/prod? Pure Storage enables this nearly instantly. Pure Storage FlashArray snapshots and clones move at the speed of an API call from any of our SDKs, from Python to PowerShell to Ansible to Terraform and more, giving you an easy way to fit Pure Storage into your infrastructure-as-code tools.

You could probably spend the next 5 hours reading blogs and papers on all the other benefits of Pure Storage, and they all apply to your PKS-on-vSphere environment, but I wanted to provide a few examples directly related to operating PKS on Pure.

VMworld 2019 Session

In my session at VMworld in San Francisco I will demonstrate how Pure Storage is able to instantly migrate persistent volumes from “other” k8s clusters to PKS. Make sure you make it to this session if you are considering PKS.