Installing PSO in PKS with Helm

To get started installing PSO on your PKS cluster using Helm, follow these instructions.
Before installing PSO, the Plan in Enterprise PKS must have the “Allow Privileged” box checked. This setting gives pods the privileged access required to mount storage.

Scroll way down in the Plan settings to find it…

Apply the settings in the Installation Dashboard and wait for them to finish applying.

Create a cluster. Go get a Chick-fil-A biscuit.

# pks create-cluster testcluster -e test.domain.local -p small
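
While the cluster builds you can check on its progress, and once it is ready, pull the kubeconfig credentials with the PKS CLI (cluster name matching the example above):

# pks cluster testcluster
# pks get-credentials testcluster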

Quick install for FlashBlade and NFS

Install Helm. For more info, see the Helm docs on role-based access control: https://helm.sh/docs/using_helm/#role-based-access-control

  1. Set up the RBAC role for Tiller.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
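
Save the manifest above to a file and apply it before initializing Helm (rbac-tiller.yaml is just an example filename):

# kubectl apply -f rbac-tiller.yaml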
  2. # helm init --service-account tiller
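
Once the tiller-deploy pod is up, helm version should report both a client and a server version:

# helm version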

Install PSO
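
If the Pure Storage chart repo is not already configured on your Helm client, add it first (the repo location comes from the Pure Storage helm-charts project):

# helm repo add pure https://purestorage.github.io/helm-charts
# helm repo update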

  1. # helm install -n pso pure/pure-k8s-plugin -f values.yaml
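
The values.yaml referenced above carries your array credentials. Here is a minimal sketch for a single FlashBlade, with placeholder endpoints and API token (the structure follows the arrays section of the pure-k8s-plugin chart):

arrays:
  FlashBlades:
    - MgmtEndPoint: "10.0.0.10"    # management VIP (placeholder)
      APIToken: "T-xxxxxxxx"       # API token from the FlashBlade (placeholder)
      NFSEndPoint: "10.0.0.11"     # data VIP used for NFS mounts (placeholder)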

This is the quickest method for getting PSO up and running. We are not adding any packages to the PKS stemcell; NFS is built in and therefore supported out of the box by PKS.

Installing PSO for FlashArray

Before deploying the PKS cluster, you must tell the BOSH Director to install a few things at runtime.

Details and the packages are on my GitHub page:

https://github.com/2vcps/pso_prereqs

This is the same method used by other vendors to add agents and drivers to PKS or Cloud Foundry.
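
At a high level, the prerequisites are packaged as a BOSH release referenced from a runtime config, so BOSH installs them on every worker VM it creates. A rough sketch of the workflow, with placeholder file and config names (the linked repo has the real ones):

# bosh upload-release pso-prereqs-release.tgz
# bosh update-runtime-config --name=pso-prereqs runtime-config.yml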

Once you finish with the instructions, PSO will be able to mount both FlashArray and FlashBlade using their respective StorageClasses, pure-block and pure-file.
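
For example, a claim against the FlashArray just swaps in the pure-block StorageClass. A minimal sketch with a hypothetical name and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim              # hypothetical name
spec:
  storageClassName: pure-block   # FlashArray-backed block storage
  accessModes:
    - ReadWriteOnce              # block volumes attach to a single node
  resources:
    requests:
      storage: 10Gi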

Please pay attention to networking

PKS does not allow the deployment to add another NIC to the VMs it deploys. With PKS and NSX-T, everything is also kept behind logical routers. Be sure the VMs have access to the storage. I would prefer no firewall and no routing between a VM and the storage, though this may not be possible. You may be able to use VLANs to keep the routing to a minimum. Just be sure to document the full network path from VM to storage for future reference.
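
A quick way to sanity-check the path is to ping the storage data endpoint from a throwaway pod; the IP below is a placeholder for your own array's data VIP:

$ kubectl run nettest --rm -it --image=busybox --restart=Never -- ping -c 3 10.0.0.11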

Using PSO

Here is a sample deployment. You can copy all of this to a file called deployment.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio

Now apply the file to the cluster:

# kubectl apply -f deployment.yaml

Check the pod status. Alongside the Minio pod you will also see the PSO components running: the pure-flex DaemonSet pods (one per worker node) and the pure-provisioner pod.

$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h

Check the PVC status

$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim-rwx   Bound    pvc-04817b75-f98b-11e9-8402-005056a975c2   101Gi      RWX            pure-file      4h19m
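
Since minio-service is of type LoadBalancer, grab its external IP to reach Minio on port 9000:

$ kubectl get service minio-service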

Learn more about PKS and Pure Storage with these posts:
Getting started with Persistent Storage and PKS

Installing PSO in PKS with Helm
Installing PSO in PKS with the Operator
Use PKS + VMware SDDC + Pure Storage
Migrating PSO Volumes into vVols and PKS
