Sometimes I have to look up information that I think is too simple to blog about. Then I think I should share it anyway, so that if anyone else goes searching, this post might be helpful. Today the second impulse wins.
I just want to note that the alarm fires at around 180 days, which is super nice, but the renewed cert is only good for 364 more days. This cannot be changed right now. I suggest, for ease of use, renewing the certificate before it expires to avoid extra work.
$ git clone --branch <version> https://github.com/purestorage/helm-charts.git
$ cd helm-charts/operator-k8s-plugin
$ ./install.sh --namespace=pso --orchestrator=k8s -f values.yaml
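The -f values.yaml flag points at the array connection info the installer needs, so create that file first. A minimal sketch, assuming one FlashArray and one FlashBlade (the IPs and API tokens are placeholders; the chart's README documents the full schema):

arrays:
  FlashArrays:
    - MgmtEndPoint: "192.168.230.20"   # placeholder: FlashArray management IP
      APIToken: "<flasharray-api-token>"
  FlashBlades:
    - MgmtEndPoint: "192.168.230.21"   # placeholder: FlashBlade management IP
      APIToken: "<flashblade-api-token>"
      NfsEndPoint: "192.168.230.22"    # placeholder: FlashBlade data VIP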
$ kubectl get all -n pso
NAME                                    READY   STATUS    RESTARTS   AGE
pod/pso-operator-b96cfcfbb-zbwwd        1/1     Running   0          27s
pod/pure-flex-dzpwm                     1/1     Running   0          17s
pod/pure-flex-ln6fh                     1/1     Running   0          17s
pod/pure-flex-qgb46                     1/1     Running   0          17s
pod/pure-flex-s947c                     1/1     Running   0          17s
pod/pure-flex-tzfn7                     1/1     Running   0          17s
pod/pure-provisioner-6c9f69dcdc-829zq   1/1     Running   0          17s

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/pure-flex   5         5         5       5            5           <none>          17s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pso-operator       1/1     1            1           27s
deployment.apps/pure-provisioner   1/1     1            1           17s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/pso-operator-b96cfcfbb        1         1         1       27s
replicaset.apps/pure-provisioner-6c9f69dcdc   1         1         1       17s
Sample deployment: you can copy all of this into a file called deployment.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as a selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
Now apply the file to the cluster
# kubectl apply -f deployment.yaml
Check the pod status
$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h
Check the PVC status
$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim-rwx   Bound    pvc-04817b75-f98b-11e9-8402-005056a975c2   101Gi      RWX            pure-file      4h19m
Learn more about PKS and Pure Storage with these posts: Getting started with Persistent Storage and PKS
To get started installing PSO on your PKS cluster using Helm, follow these instructions. Before installing PSO, the Plan in Enterprise PKS must have the "allow privileged" box checked. This setting allows the access required to mount storage.
Scroll way down…
Apply the settings in the Installation Dashboard and wait for them to finish applying.
Create a cluster. Go get a Chick-fil-A biscuit.
# pks create-cluster testcluster -e test.domain.local -p small
This is the quickest method to get PSO up and running. We are not adding any packages to the PKS stemcell. NFS is built in, and therefore supported out of the box by PKS.
Installing PSO for FlashArray
Before deploying the PKS cluster you must tell the BOSH Director to install a few things at runtime.
This is the same method other vendors use to add agents and drivers to PKS or Cloud Foundry.
Once you finish with the instructions, you will have PSO able to mount both FlashArray and FlashBlade using their respective StorageClasses, pure-block and pure-file.
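As a rough sketch of what that runtime addition looks like (illustrative only; the release version, addon name, and package list are my assumptions, so follow the linked instructions for the real manifest), a BOSH runtime config can use the os-conf release to run a pre-start script on every VM:

# runtime-config.yml -- illustrative sketch only
releases:
- name: os-conf
  version: "20"                # assumed version
addons:
- name: install-iscsi          # illustrative addon name
  jobs:
  - name: pre-start-script
    release: os-conf
    properties:
      script: |
        #!/bin/bash
        # Install the iSCSI initiator and multipath tools the
        # FlashArray driver needs on each worker VM.
        apt-get update
        apt-get install -y open-iscsi multipath-tools

Something like bosh update-runtime-config runtime-config.yml applies it, and the cluster picks it up at the next deploy.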
Please pay attention to networking
PKS does not allow you to add another NIC to the VMs it deploys. With PKS and NSX-T, everything is also kept behind logical routers. Please be sure the VMs have access to the storage network. I would prefer no firewall and no routing between a VM and its storage; this may not always be possible. You may be able to use VLANs to keep that routing to a minimum. Just be sure to document your full network path from VM to storage for future reference.
Using PSO
Sample deployment: you can copy all of this into a file called deployment.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim-rwx
  labels:
    app: minio
spec:
  storageClassName: pure-file
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 101Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as a selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim-rwx
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage
          mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
Now apply the file to the cluster
# kubectl apply -f deployment.yaml
Check the pod status
$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
minio-deployment-95b9d8474-xmtk2   1/1     Running   0          4h19m
pure-flex-9hbfj                    1/1     Running   2          3d4h
pure-flex-w4fvq                    1/1     Running   1          3d23h
pure-flex-zbqvz                    1/1     Running   1          3d23h
pure-provisioner-dd4c4ccb7-dp76c   1/1     Running   7          3d23h
Check the PVC status
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
minio-pv-claim-rwx Bound pvc-04817b75-f98b-11e9-8402-005056a975c2 101Gi RWX pure-file 4h19m
Learn more about PKS and Pure Storage with these posts: Getting started with Persistent Storage and PKS
Over the last few months I have done a lot of work with NSX-T. I have not done this much networking since my CCNA days. I wanted to share a couple of things I found on the web that were really helpful.
I was using NSX-T 2.4.2, and some troubleshooting guides were not very helpful because they were specific to other versions.
There is some helpful information in those few links. The main thing is that when you create certificates for NSX-T Manager, you also have to apply them.
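Applying the certificate is an API call to the manager rather than a UI action. A sketch (the manager address and certificate ID are placeholders; double-check the API guide for your exact version):

# Tell NSX-T Manager to start serving the imported certificate.
curl -k -u admin -X POST \
  "https://<nsx-mgr>/api/v1/node/services/http?action=apply_certificate&certificate_id=<certificate-id>"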
Also, make sure the NICs on your ESXi hosts are all set up the same way. I had four NICs and four different VLAN/trunk configs: no bueno. And the overlay encapsulation (Geneve in NSX-T) wants frames of at least 1600 MTU. I set everything to 9000 just for fun, and that worked much better.
This all started because I needed a side project. I had purchased a Raspberry Pi 4 in July but was looking for a great way to use it. Then in August I received a Pi 3 from the vExpert community at VMworld.
I set up the Pi 3 to be an AirPlay speaker for my old basement stereo. What does this have to do with K8s? Nothing.
I took the Pi 4 and purchased three more to complete a mini-rack cluster using k3s (https://k3s.io/). It is a crazy-easy way to get Kubernetes up and running when you really don't want to mess with the internals of everything. Perfect for the Raspberry Pi.
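For reference, the install really is about one command per node (the server hostname and token below are placeholders):

# On the server (master) Pi:
curl -sfL https://get.k3s.io | sh -
# Grab the join token from /var/lib/rancher/k3s/server/node-token on the
# server, then run this on each worker Pi:
curl -sfL https://get.k3s.io | K3S_URL=https://<server-pi>:6443 K3S_TOKEN=<node-token> sh -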
So I now have a single-master cluster with three worker nodes. Although the master can run workloads too… so really, a four-node cluster is the best way to describe it.
First up was a multi-node deployment of Minio to front-end my ancient Iomega NAS. I wrote some Python to take timelapse photos from my Pi Zero camera and push them into Minio. Pretty cool, and it should work with any S3 interface (hint hint).
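The upload side is just S3 calls pointed at the Minio endpoint instead of AWS. A sketch using the AWS CLI (the endpoint, bucket name, and credentials here are illustrative):

# Any S3 client works; just override the endpoint.
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
aws --endpoint-url http://minio-service:9000 \
    s3 cp ./timelapse/frame-0001.jpg s3://timelapse/frame-0001.jpg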
Next, I wanted to make something that would help me do a little more with Python. So I took a look at Tweepy and created a Twitter developer account. @Jonbot17 was born.
Take a look at my github page for the code so far.
UPDATE: My bot wasn't just shadow banned but banned banned. It would retweet any tweet with #PureAccelerate; then the conference started, and the account generated a little too much activity for Twitter. I guess 1,000 tweets in a few hours is too much for the platform.
Does anyone have other ideas for what I should run on my k3s Pi 4 cluster?
There was a question on Twitter, and I thought I would write down my process for others to learn from. First, a little background. Kubernetes is managed mostly using a tool called kubectl (kube-control, kube-cuddle, kube-C-T-L, whatever). This tool looks for the configuration it needs to talk to the Kubernetes management API. A sanitized sample can be seen by running:
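$ kubectl config view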
You can see there are Clusters, Contexts, and Users. The commands kubectl config get-context and kubectl config use-context let you see and switch contexts. In my use case I have a single context per cluster.
kubectl config get-context
CURRENT   NAME                          CLUSTER      AUTHINFO          NAMESPACE
*         I-AM-GROOT@k8s-ubt18          k8s-ubt18    I-AM-GROOT
          k8s-dev-1-admin@k8s-dev-1     k8s-dev-1    k8s-dev-1-admin
          k8s-lab-1-admin@k8s-lab-1     k8s-lab-1    k8s-lab-1-admin
          k8s-prod-1-admin@k8s-prod-1   k8s-prod-1   k8s-prod-1-admin
kubectl config use-context k8s-dev-1-admin@k8s-dev-1
Switched to context "k8s-dev-1-admin@k8s-dev-1".
Switching this way became cumbersome, so I now use a tool called kubectx, and with it kubens: https://github.com/ahmetb/kubectx. As you can see below, my prompt now shows my cluster plus the namespace. Pretty sweet, and it has saved me from removing deployments from the wrong cluster. "k8s-dev-1-admin@k8s-dev-1:default"
Now, the kubectl tool looks in your environment for the variable KUBECONFIG. Many times this will be set to KUBECONFIG=~/.kube/config. If you modify your .bash_profile on macOS or .bashrc on Ubuntu (and others), you can point that variable anywhere. I formerly had it pointed at a separate file for each cluster. For example:
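# One file per cluster, colon-separated (file names are illustrative,
# borrowing the cluster names from above):
export KUBECONFIG=~/.kube/k8s-dev-1.config:~/.kube/k8s-lab-1.config:~/.kube/k8s-prod-1.config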
This worked great, but a few third-party management tools had issues switching between multiple files. For me, the big one was the Kubernetes module for Python. So I moved to a single combined config file at ~/.kube/config.
So what do I do now?
Here is my basic workflow. I don't automate it yet, as I don't want to overwrite something carelessly.
1. Run an Ansible playbook that grabs the admin.conf file from /etc/kubernetes on the masters of the cluster.
2. Manually modify the KUBECONFIG environment variable to be KUBECONFIG=~/.kube/config:~/latestconfig/new.config
3. Run kubectl config view --raw to make sure it is all there; the --raw flag unhides the keys and such.
4. Copy ~/.kube/config to ~/.kube/config.something as a backup.
5. Run kubectl config view --raw > ~/.kube/config
6. Open a new terminal, which uses my original KUBECONFIG variable, and make sure all the clusters show up.
7. Clean up the old configs if I am feeling extra clean.
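Condensed into shell, steps 2 through 5 look roughly like this. One caution I will add: write the merged view to a temp file, because redirecting straight into ~/.kube/config truncates that file before kubectl reads it:

# Back up the current config (step 4).
cp ~/.kube/config ~/.kube/config.backup
# Merge the existing config with the newly grabbed one (steps 2-3) and
# write the combined view, keys included, to a temp file.
KUBECONFIG=~/.kube/config:~/latestconfig/new.config \
  kubectl config view --raw > /tmp/merged.config
# Replace the main config with the merged version (step 5).
mv /tmp/merged.config ~/.kube/config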
Not really hard or too complicated. I destroy clusters pretty often so sometimes I will blow away the config and then remerge my current clusters into a new config file.
It only took a slight nudge from @CodyHosterman to put this post together.
Kubernetes deployed into AWS is how many organizations get started with K8s. Whether you deploy K8s with kubeadm, Kops, Kubespray, Rancher, WeaveWorks, OpenShift, or something else, the next big question is: how do I do persistent volumes? While EBS has StorageClass integrations, you may be interested in better efficiency and reliability than traditional block storage in the cloud. That is one of the great uses of Cloud Block Store: highly efficient and highly reliable storage built for AWS, with the same experience as the on-prem FlashArray. By utilizing Pure Service Orchestrator's Helm chart or operator, you can now take advantage of Container Storage as a Service in the cloud. Are you using Kubernetes on EC2 in AWS and have questions about how to take advantage of Cloud Block Store? Please ask me here in the comments or @jon_2vcps on Twitter.
1. Persistent Volume Claims will not always be 100% full. Cloud Block Store is deduped, compressed, and thin. Don't pay for 100% of a TB if it is only 1% full. I do not want to be in the business of keeping developers from getting the resources they need, but I also do not want to be paying for it when they over-estimate.
2. Migrate data from on-prem volumes such as K8s PVCs, VMware vVols, and native physical volumes into the cloud, and attach them to your Kubernetes environment. See the YouTube demo below for an example. What we are seeing in the demo is creating an app in Kubernetes on-prem, loading it with some data (photos), replicating that application to the AWS cloud, and using Pure Service Orchestrator to attach the data to the K8s-orchestrated application using Cloud Block Store. This is my re-working of Simon's tech preview demo from the original launch of Cloud Block Store last November.
3. Simple. Make storage simple. One common tweet I see from the Kubernetes detractors is how complicated Kubernetes can be. Pure Service Orchestrator makes the storage layer amazingly simple: a single command line to install or upgrade, with pooling across multiple devices.
Get started today: below I include some links on the different installs of PSO. Don't let the choices scare you. Container Storage Interface, or CSI, is the newest API for common interaction with all storage providers. While FlexVolume was the original storage solution, it makes sense to move forward with CSI, especially for newer versions of Kubernetes that include CSI by default. So whether you are starting to use K8s for the first time today or your cluster is still on K8s 1.11, we have you covered. Use the links below to see the install process and prerequisites for PSO.
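For a flavor of the Helm route (Helm 2 syntax from this era; the chart names come from the purestorage/helm-charts repo, but check its README for your version before copying anything):

# Add the Pure Storage chart repo.
helm repo add pure https://purestorage.github.io/helm-charts
helm repo update
# CSI driver for newer clusters:
helm install --name pso pure/pure-csi --namespace pso -f values.yaml
# FlexVolume driver for older clusters:
# helm install --name pso pure/pure-k8s-plugin --namespace pso -f values.yaml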
While my VMworld session this week discussed some of the architectural decisions to be made when deploying PKS on vSphere, my demo revolved around how, once it is up and running, to move existing data into PKS.
First, using the Pure FlashArray and vVols, we are able to automate that process and quickly move data from another k8s cluster into PKS. It is not limited to that, but this is the use case I started with.
Part 1 of the demo shows taking the persistent data from an existing deployment and cloning it over the vVol that is created by using the vSphere Cloud Provider with PKS. vVols are particularly important because they keep the data in a native format and make copying, replication, and snapshotting much easier.
Part 2 is the same process just scripted using Python and Ansible.
Demo Part 1 – Manual process of migrating data into PKS
Demo Part 2 – Using Python and Ansible to migrate data into PKS
Use PKS Enterprise on VMware SDDC and Pure Storage
Pivotal Container Service (PKS) provides a deeply integrated Kubernetes (k8s) architecture for the VMware SDDC. It is a joint engineering project from VMware and Pivotal. In my conversations with Pure Storage customers, or potential customers, around Kubernetes, I often get asked how Pure Storage can help a PKS Enterprise environment. The good news is there is a very easy path to utilizing k8s with Pure + VMware + PKS.
The Architecture
Using Pure with PKS is actually very straightforward. Since the Pure FlashArray is already a leading choice for VMware environments, supporting PKS is nothing out of the ordinary.
Once you understand the underlying technology that integrates PKS into VMware, you may soon realize that highly reliable, stateless, shared storage is the best choice when deploying PKS.
The choice between the drivers (shown in the graphic above) that deliver the storage is up to you. The vSphere Cloud Provider provides automated creation and management of the virtual disks presented to containers in PKS. It supports the use of vVols and enables great possibilities for your PKS environment. Pure Service Orchestrator utilizes a direct connection to Pure Storage FlashArrays, FlashBlades, and Cloud Block Stores. It is installed with a single Helm command or a Kubernetes Operator, and it includes Smart Provisioning to place volumes on the most optimal storage device in your fleet.
Which tool you choose will be dictated by your workload, but it is not an exclusive choice; it is easy to do both. After VMworld I hope to publish the details on how to install PSO on PKS. If you have really good GitHub search foo, you may be able to find the BOSH deployment.
Highly Reliable
Pure Storage has measured six nines of uptime across its customer base. Many storage solutions for container environments require hours of planning and weeks of careful implementation to provide high availability. Do not spend time re-architecting your storage infrastructure for PKS. Spend your time delivering k8s to your customers so they can deliver innovation for your business. Use the Pure Storage devices you already have; you may not even need a whole new dedicated array (don't tell sales I said that).
Stateless Arrays for Stateful Data
Migrating data should be eliminated from your daily tasks. FlashArrays keep moving into the future while the data stays in place, and the ability to keep data in place across multiple hardware generations is a proven benefit of Pure. Migrating persistent storage in k8s, even on VMware, is a non-trivial task. Depending on your scale, it could take weeks of planning and careful, flawless execution to accomplish non-disruptively. The underlying hardware should not be a concern when delivering applications, and Pure Storage has made this a reality since the FlashArray debuted seven years ago.
Shared Storage
Delivering highly reliable data across multiple PKS and vSphere clusters, allowing applications to fail over if the compute in an availability zone becomes unavailable, is key to delivering a cloud experience for your k8s rollout. While the Pure sales teams would gladly help you acquire a FlashArray per vSphere cluster hosting PKS, this is simply unneeded for nearly all situations, especially as you start your Kubernetes journey.
But Why PURE?
Simple: vVols on the FlashArray, combined with the PKS integration with vSphere, enable a mobility of data and a freedom unavailable on a legacy datastore. Have a group that rolled their own k8s? FlashArray can clone their persistent data instantly into PKS using vVols. Need to copy data from a bare-metal (non-VM) k8s cluster to PKS? Pure vVols make this possible. Have multiple k8s clusters within PKS today that require the same data for test/dev/prod? Pure Storage enables this nearly instantly. Pure Storage FlashArray snapshots and clones move at the speed of an API call from any of our SDKs, from Python to PowerShell to Ansible to Terraform and more, giving you an easy way to fit Pure Storage into your infrastructure-as-code tools.
You could probably spend the next five hours reading blogs and papers on all the other benefits of Pure Storage, and they all apply to your PKS-on-vSphere environment, but I wanted to provide a few examples directly related to operating PKS on Pure.
VMworld 2019 Session
In my session at VMworld in San Francisco I will demonstrate how Pure Storage is able to instantly migrate persistent volumes from "other" k8s clusters to PKS. Make sure you make it to this session if you are considering PKS.
So, I create and destroy Kubernetes clusters on vSphere on a pretty regular basis. Some I create with Terraform and Ansible; for some I use PKS. I have a plumbing test for Pure Service Orchestrator that mounts a single volume to a pod on each node.
Every once in a while I get an error like this, on just one node:
Failed to log in to any iSCSI targets! Will not be able to attach volume
To make sure the error isn't coming from PSO (and it shouldn't be, since the other nodes are working), run a manual iSCSI discovery against the array:
iscsiadm -m discovery -t st -p 192.168.230.24
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.24,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.25,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.26,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
iscsiadm: Could not stat /etc/iscsi/nodes//,3260,-1/default to delete node: No such file or directory
iscsiadm: Could not add/update [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.230.27,3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479]
192.168.230.24:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.25:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.26:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
192.168.230.27:3260,1 iqn.2010-06.com.purestorage:flasharray.4ca976f28eb0d479
Now I that isn’t what should be the result. So I thought at first to restart iscsi and that didn’t help. Then I thought, well this is a lab so lets just…
# cd /etc/iscsi
# rm -r nodes
Do not try this if you have other iSCSI targets for other storage; I am not sure you will be happy. At first I thought I should stop iSCSI before doing this, but it doesn't seem to have any effect. Now every node is able to mount the volume and start the pod. Pure Service Orchestrator keeps retrying the mount, so it didn't take long to see everything showing up the way I wanted.
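A slightly tidier way to do the same cleanup is to let iscsiadm delete the records itself, with the same caveat about wiping ALL recorded targets:

# Delete every recorded iSCSI node entry (same warning as above applies),
# then rediscover the targets you actually want.
iscsiadm -m node -o delete
iscsiadm -m discovery -t st -p 192.168.230.24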