OpenStack Cinder Replication: Using Multiple ActiveCluster Pods to Increase Scale

Since the Mitaka release of OpenStack, the Pure Storage Cinder driver has supported Cinder replication, although this first iteration only supported asynchronous replication.

The Rocky release of OpenStack saw Pure’s Cinder driver support synchronous replication by integrating our ActiveCluster feature from the FlashArray.

This synchronous replication automatically created an ActiveCluster pod on the paired FlashArrays called cinder-pod. A pretty obvious name I would say.

While this provided a seamless integration for OpenStack users to create a synchronously replicated volume using a correctly configured volume type, there was one small limitation. ActiveCluster pods were limited to 3000 volumes.

Now you might think that is more than enough volumes for any single ActiveCluster environment. I certainly did until I received a request to be able to support 6000 volumes synchronously replicated.

After some scratching of my head, I remembered that from the OpenStack Stein release of the Pure Cinder driver there is an undocumented (well, not very well documented) parameter that allows the name of the ActiveCluster pod to be customizable and that gave me an idea….

Can you configure Cinder to use the same backend array in separate stanzas of the Cinder config file, each with different parameters?

It turns out the answer is Yes.

So, here’s how to enable your Pure FlashArray Cinder driver to use a single ActiveCluster pair of FlashArrays to allow for 6000 synchronously replicated volumes.

First, we need to edit the cinder.conf file and create two different stanzas for the same array that is configured in an ActiveCluster pair and ensure we have enabled both of these backends:

enabled_backends = pure-1-1, pure-1-2
…
[pure-1-1]
volume_backend_name = pure
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = 10.21.209.210
replication_device = backend_id:pure-2,san_ip:10.21.209.8,api_token:9c0b56bc-f941-f7a6-9f85-dcc3e9a8f6d6,type:sync
pure_api_token = bee464cc-24a9-f44c-615a-ae566082a1ae
pure_replication_pod_name = cinder-pod1
use_multipath_for_image_xfer = True
pure_eradicate_on_delete = true
image_volume_cache_enabled = True
volume_clear = none

[pure-1-2]
volume_backend_name = pure
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
replication_device = backend_id:pure-2,san_ip:10.21.209.8,api_token:9c0b56bc-f941-f7a6-9f85-dcc3e9a8f6d6,type:sync
pure_replication_pod_name = cinder-pod2
san_ip = 10.21.209.210
pure_api_token = bee464cc-24a9-f44c-615a-ae566082a1ae
use_multipath_for_image_xfer = True
pure_eradicate_on_delete = true
image_volume_cache_enabled = True
volume_clear = none

If we look at the two stanzas, the only difference between them is the pure_replication_pod_name. I have also set the volume_backend_name to be the same for both configurations. There is a reason for this that I will cover later.

After altering the configuration file, make sure to restart your Cinder Volume service to implement the changes.
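For example, on a systemd-based Ubuntu deployment that is usually:

systemctl restart cinder-volume

On Red Hat based distributions the unit is typically named openstack-cinder-volume instead – check how your deployment tool runs the service.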

After restarting the cinder-volume service, you will see on the FlashArray that two ActiveCluster pods now exist with the names defined in the configuration file.

This is the first step.

Now we need to enable volume types to use these pods and also to load-balance across the two pods. Why load-balance? It just seems to make more sense to have volumes evenly utilize the pods, but there is no specific reason for doing this. If you wanted to use each pod separately, then you would need to set a different volume_backend_name in the Cinder configuration file for each array stanza.

When creating a volume type to use synchronous replication you need to set some specific extra_specs in the type definition. These are the commands to use:

openstack volume type create pure-repl
openstack volume type set --property replication_type='<in> sync' pure-repl
openstack volume type set --property replication_enabled='<is> True' pure-repl
openstack volume type set --property volume_backend_name='pure' pure-repl

The final configuration of the volume type would now look something like this:

 openstack volume type show pure-repl
 +--------------------+-------------------------------------------------------------------------------------------+
 | Field              | Value                                                                                     |
 +--------------------+-------------------------------------------------------------------------------------------+
 | access_project_ids | None                                                                                      |
 | description        | None                                                                                      |
 | id                 | 2b6fe658-5bbf-405c-a0b6-c9ac23801617                                                      |
 | is_public          | True                                                                                      |
 | name               | pure-repl                                                                                 |
 | properties         | replication_enabled='<is> True', replication_type='<in> sync', volume_backend_name='pure' |
 | qos_specs_id       | None                                                                                      |
 +--------------------+-------------------------------------------------------------------------------------------+ 

Now, all we need to do is use the volume type when creating our Cinder volumes.

Let’s create two volumes and see how they appear on the FlashArray:

 openstack volume create --type pure-repl --size 25 test_volume
 +---------------------+--------------------------------------+
 | Field               | Value                                |
 +---------------------+--------------------------------------+
 | attachments         | []                                   |
 | availability_zone   | nova                                 |
 | bootable            | false                                |
 | consistencygroup_id | None                                 |
 | created_at          | 2020-04-03T14:48:13.000000           |
 | description         | None                                 |
 | encrypted           | False                                |
 | id                  | 64ef0e40-ce89-4f4d-8c89-42e3208a96c2 |
 | migration_status    | None                                 |
 | multiattach         | False                                |
 | name                | test_volume                          |
 | properties          |                                      |
 | replication_status  | None                                 |
 | size                | 25                                   |
 | snapshot_id         | None                                 |
 | source_volid        | None                                 |
 | status              | creating                             |
 | type                | pure-repl                            |
 | updated_at          | None                                 |
 | user_id             | eca55bb4cd8c490197d8b9d2cdce29f2     |
 +---------------------+--------------------------------------+
 
 openstack volume create --type pure-repl --size 25 test_volume2
 +---------------------+--------------------------------------+
 | Field               | Value                                |
 +---------------------+--------------------------------------+
 | attachments         | []                                   |
 | availability_zone   | nova                                 |
 | bootable            | false                                |
 | consistencygroup_id | None                                 |
 | created_at          | 2020-04-03T14:48:22.000000           |
 | description         | None                                 |
 | encrypted           | False                                |
 | id                  | e494e233-b38a-4fb6-8f3d-0aab5c7c68ec |
 | migration_status    | None                                 |
 | multiattach         | False                                |
 | name                | test_volume2                         |
 | properties          |                                      |
 | replication_status  | None                                 |
 | size                | 25                                   |
 | snapshot_id         | None                                 |
 | source_volid        | None                                 |
 | status              | creating                             |
 | type                | pure-repl                            |
 | updated_at          | None                                 |
 | user_id             | eca55bb4cd8c490197d8b9d2cdce29f2     |
 +---------------------+--------------------------------------+ 

Looking at the FlashArray, we can see the two volumes we just created (I am filtering the volume name on cinder so you only see the OpenStack-related volumes on this array).

The volume naming convention we use at Pure shows that these volumes are in a pod, indicated by the double colon (::) in the name; the pod names for the two volumes are cinder-pod1 and cinder-pod2 respectively.

The view of each pod also shows only one volume in each.

If you didn’t want to load-balance across the pods and needed the flexibility to specify which pod a volume lives in, all you need to do is set a different volume_backend_name in each array stanza of the configuration file and then create two volume types, each pointing to a different volume_backend_name.
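As a rough sketch of that alternative (the backend and type names here are purely illustrative), the two stanzas would differ in both settings:

[pure-1-1]
volume_backend_name = pure-pod1
pure_replication_pod_name = cinder-pod1
…

[pure-1-2]
volume_backend_name = pure-pod2
pure_replication_pod_name = cinder-pod2
…

and each volume type would then target one backend explicitly:

openstack volume type create pure-repl-pod1
openstack volume type set --property replication_type='<in> sync' pure-repl-pod1
openstack volume type set --property replication_enabled='<is> True' pure-repl-pod1
openstack volume type set --property volume_backend_name='pure-pod1' pure-repl-pod1

Volumes created with pure-repl-pod1 would always land in cinder-pod1, with a matching pure-repl-pod2 type for the second pod.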

PSO wrt DKS & UCP

Please welcome Simon making a guest appearance to go through whatever it is this is about. 🙂 – Jon

Got to love those TLAs!!

To demystify the title of this blog, this will be about installing Pure Service Orchestrator (PSO) with Docker Kubernetes Service (DKS).

Specifically, I’ll be talking about PSO CSI driver v5.0.8, running with Docker EE 3.0 and the Universal Control Plane (UCP) 3.2.6, managing Kubernetes 1.14.8.

Let’s assume you have Docker Enterprise 3.0 installed on three Linux nodes; in my case they are running Ubuntu 18.04. You decide you want them all to run the Docker Kubernetes Service (DKS) and to have any persistent storage provided by your Pure Storage FlashArray or FlashBlade – how do you go about installing and configuring all of this?

Pre-Requisites

As we are going to be using PSO with a Pure Storage array for the persistent storage, ensure that all nodes that will be part of DKS have the following software installed:

  • nfs-common
  • multipath-tools
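On Ubuntu 18.04 nodes, for example, that would be something like:

# apt-get update
# apt-get install -y nfs-common multipath-tools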

Install UCP

The first step to getting your DKS environment up is to install the Docker Universal Control Plane (UCP) from the node you will be using as your master.

As PSO supports CSI snapshots, you will want to ensure that when installing UCP, you tell it to open the Kubernetes feature gates, thereby enabling persistent volumes snapshots through PSO.

The command to install UCP is:

# docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:latest install --host-address <host IP> \
  --interactive --storage-expt-enabled

If you don’t want to open the feature gates, don’t use the --storage-expt-enabled switch in the install command.

Answer the questions the install asks, wait a few minutes, and voila you have Docker UCP installed and can access it through its GUI at http://<host IP>. Note that you may be prompted to enter your Docker EE license key on the first login.

When complete you will have a basic, single-node environment consisting of Docker EE 3.0, UCP 3.2.6 and Kubernetes 1.14.8.

Add Nodes to Cluster

Once you have your master node up and running, you can add your two worker nodes to the cluster.

The first step is to ensure your default scheduler is Kubernetes, not Swarm. If you don’t set this, pods will not run on the worker nodes due to taints that are applied.

Navigate to your username in the left pane, select Admin Settings and then Scheduler. Set the default Orchestrator type to Kubernetes and save your change.

Now to add nodes navigate to Shared Resources and select Nodes and then Add Nodes. You will see something like this:

Use the command on each worker node to get them to join the Kubernetes cluster. When complete, your nodes should be correctly joined and look like this in your Nodes display.

You now have a fully functioning Kubernetes cluster managed by Docker UCP.

Get your client ready

Before you can install PSO you need to install a Docker Client Bundle onto your local node that will be used to communicate with your cluster. I use a Windows 10 laptop, but run the Ubuntu shell provided by Windows to do this.

To get the bundle, navigate to your user profile, select Client Bundles and then Generate Client Bundle from the dropdown menu. Unzip the tar file you get into your working directory.

Next, you need to get the correct kubectl version, which with UCP 3.2.6 is 1.14.8, by running the following commands:

# curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.8/bin/linux/amd64/kubectl
# chmod +x ./kubectl
# mv ./kubectl /usr/local/bin/kubectl

Check your installation by running the following commands:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-docker-1", GitCommit:"8100f4dfe656d4a4e5573fe86375a5324771ec6b", GitTreeState:"clean", BuildDate:"2019-10-18T17:13:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
# kubectl get nodes
NAME      STATUS   ROLES   AGE   VERSION
docker1   Ready    master  24h   v1.14.8-docker-1
docker2   Ready    <none>  24h   v1.14.8-docker-1
docker3   Ready    <none>  24h   v1.14.8-docker-1

Now we are nearly ready to install PSO, but PSO requires Helm, so next we install Helm 3 (I’m using v3.1.2 here, but check for newer versions) and validate:

# wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
# tar -zxvf helm-v3.1.2-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/bin/helm
# helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}

And finally…

We are ready to install PSO. Here we are just going to follow the instructions in the PSO GitHub repo, so check there for updates if you are reading this in the future…

# helm repo add pure https://purestorage.github.io/helm-charts
# helm repo update

The latest version at the time of writing is 5.0.8, so we should get the values.yaml configuration file that matches this version:

# wget https://raw.githubusercontent.com/purestorage/helm-charts/5.0.8/pure-csi/values.yaml

Edit this file to add your site-specific information, especially the details of your backend arrays.
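The backend section of values.yaml looks roughly like this (the endpoints and API tokens below are placeholders, and FlashBlades additionally need an NFS data endpoint – check the comments in the file itself for the exact layout):

arrays:
  FlashArrays:
    - MgmtEndPoint: "10.0.0.10"
      APIToken: "<flasharray-api-token>"
  FlashBlades:
    - MgmtEndPoint: "10.0.0.20"
      APIToken: "<flashblade-api-token>"
      NfsEndPoint: "10.0.0.21"

With values.yaml populated, create a namespace and install PSO: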

# kubectl create namespace <name>
# helm install pure-storage-driver pure/pure-csi --namespace <name> -f ./values.yaml

Done!!

What does this look like in Docker UCP, you ask? This is what you will see in the various screens:

Now you can start using PSO to provide persistent storage to your containerized applications, and if you enabled the feature gates as suggested at the start of this blog, you can also take snapshots of your PVs and restore them to new volumes. For details on exactly how to do this, read https://github.com/purestorage/helm-charts/blob/5.0.8/docs/csi-snapshot-clones.md, but make sure you install the VolumeSnapshotClass first with this command:

# kubectl apply -f https://raw.githubusercontent.com/purestorage/helm-charts/master/pure-csi/snapshotclass.yaml
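Once the class is installed, taking a snapshot of an existing PVC looks roughly like this – a sketch only: the snapshot class and PVC names are assumptions, and the v1alpha1 API matches Kubernetes 1.14, so check the linked docs for the exact definitions:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: my-pvc-snapshot
spec:
  snapshotClassName: pure-snapshotclass
  source:
    name: my-pvc
    kind: PersistentVolumeClaim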


The version of Kubernetes provided in Docker UCP 3.2.6 does not support volume cloning, but future releases may enable this functionality – check with Docker UCP and Docker EE release notes.

Migrating K8s Stateful Apps with Pure Storage

I have to move my harbor instance to a new cluster.

  1. On the old cluster, find all the PVCs.
kubectl -n harbor get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-harbor-harbor-redis-0               Bound    pvc-aebe5589-f484-4664-9326-03ff1ffb2fdf   5Gi        RWO            pure-block     24m
database-data-harbor-harbor-database-0   Bound    pvc-b506a2d4-8a65-4f17-96e3-f3ed1c25c56e   5Gi        RWO            pure-block     24m
harbor-harbor-chartmuseum                Bound    pvc-e50b2487-2a88-4032-903d-80df15483c37   100Gi      RWO            pure-block     24m
harbor-harbor-registry                   Bound    pvc-923fa069-21c8-4920-a959-13f7220f5d90   200Gi      RWO            pure-block     24m
  2. Clone in the FlashArray. For each PVC volume listed by the command above, you may create either a snapshot or a full clone.
  3. Bring up the new app with the same sized PVCs on your new cluster.
  4. Scale the app to 0 replicas on the new k8s cluster.
kubectl -n harbor scale deployment --replicas 0 -l app=harbor
  5. Clone and overwrite each volume on the FlashArray, using the PVC volume names from the new cluster (see the sketch at the end of this section).
kubectl -n harbor get pvc
  6. Scale the app back to the required replicas. Verify it works.
kubectl -n harbor scale deployment --replicas 1 -l app=harbor
  7. Point DNS to the new loadbalancer/ingress.
kubectl -n harbor get ingresses
NAME                    HOSTS                                         ADDRESS         PORTS     AGE
harbor-harbor-ingress   harbor.newstack.local,notary.newstack.local   10.xx.xx.xx  80, 443   32m

Change DNS to the new cluster.

All my data is now migrated.
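For reference, the clone-and-overwrite in step 5 happens on the FlashArray itself. A minimal sketch using the FlashArray CLI, assuming you have already mapped each PVC on the old and new clusters to its underlying array volume name (the names below are placeholders):

purevol copy --overwrite <old-cluster-array-volume> <new-cluster-array-volume>

Run one copy per PVC before scaling the app back up.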

Kubespray and vSphere VMs

I build and destroy Kubernetes clusters nearly weekly. Doing it on VMs makes this super easy. I also need to demo Pure Service Orchestrator, so having in-guest iSCSI is a must. Following this repo should give any vSphere admin an easy way to learn kubectl, helm and PSO (of course PSO works with Pure FlashArray and FlashBlade). It uses Terraform to create the VMs and Kubespray to install k8s. Ansible can also be used to automate a few package installs and updates.

I am going to try something new here: rather than recreate the GitHub README, I will just share the repo link.

https://github.com/2vcps/tf4vsphere

Pure Storage and Weaveworks Webinar – March 17

I am pretty excited to be doing a webinar with Weaveworks on Weave Kubernetes Platform and Pure Storage. I met Damani at Kubecon and Re:Invent and we have been talking about doing this for months. I am excited to integrate Pure Service Orchestrator and Pure Storage into a platform providing a full collection of what you need to run k8s. Some things we will cover:

  • How the Weave Kubernetes Platform and its GitOps workflows unify deployment, management, and monitoring for clusters and apps
  • How Pure Service Orchestrator accelerates application build and delivery with 6 9’s storage uptime. PSO works for ON PREM and Public Cloud
  • Live Demo – I am going to show some CSI goodness. Promise.
  • How does Pure make Stateful Apps a no-brainer?

Use this link to register now!

Some other important questions you might have from this pic:

When did JO’s beard explode into this?

JO from Pure Storage’s North Georgia foothills HQ

Building the Python Twitter Bot with Jenkins and Kubernetes – part 3

This is the third part of the series of blogs I have been writing to document building a Python-based Twitter bot, running it in a container, and deploying it to Kubernetes. The first post was about writing the Python code, the second was all about building the Docker container and using a deployment in Kubernetes. This last part pulls it all together and lets GitHub, Jenkins and Kubernetes do the work for you.

Getting Started

Pre-requisites:

  1. Working Kubernetes Cluster
  2. Working Container Registry and your Kubernetes cluster is able to pull images from it.
  3. Jenkins and a service account Jenkins can use to do things in K8s.

Jenkinsfile

Go ahead and fork my repo https://github.com/2vcps/python-twitter-bot to your own GitHub account.
Now let’s look at the Jenkinsfile below, from inside the repo. There are some things for you to modify for your environment.

  1. Create a serviceAccount to match the serviceAccountName field in the yaml. This is the permissions the pod building and deploying the bot will use during the process. If you get this wrong, there will be errors.
  2. Make sure the images in the file all exist in your private registry. The first image tag you see is used to run kubectl and kustomize. I suggest building this image from the cloud-builders public repo. The Dockerfile is here:
    https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/kustomize
    The second image used is the public kaniko image. Using that specific build is the only way it will function inside of a container. Kaniko is a standalone tool to build container images; it does not require root access to the docker engine like a ‘docker build’ command does. Also notice there is a harbor-config volume that allows kaniko to push to my harbor registry. Please create the secret necessary for your container registry.
    Also notice the kubectl portion is commented out and is only left behind for reference. The kustomize image contains both the kubectl and kustomize commands.
  3. The last thing to take note of is the commands kustomize uses to create a new deployment.yaml called builddeploy.yaml. This way we can build and tag the container image each time and the deployments will be updated with the new tag. We avoid using “latest” as that can cause issues and is not best practice.

podTemplate(yaml: """
kind: Pod
spec:
  serviceAccountName: jenkins-k8s
  containers:
  - name: kustomize
    image: yourregistry/you/kustomize:3.4
    command:
    - cat
    tty: true
    env:
    - name: IMAGE_TAG
      value: ${BUILD_NUMBER}
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    command:
    - cat
    tty: true
    env:
    - name: IMAGE_TAG
      value: ${BUILD_NUMBER}
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug-539ddefcae3fd6b411a95982a830d987f4214251
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    env:
    - name: DOCKER_CONFIG
      value: /root/.docker/
    - name: IMAGE_TAG
      value: ${BUILD_NUMBER}
    volumeMounts:
      - name: harbor-config
        mountPath: /root/.docker
  volumes:
    - name: harbor-config
      configMap:
        name: harbor-config
"""
  ) {

  node(POD_LABEL) {
    def myRepo = checkout scm
    def gitCommit = myRepo.GIT_COMMIT
    def gitBranch = myRepo.GIT_BRANCH
    stage('Build with Kaniko') {
      container('kaniko') {
        sh '/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --skip-tls-verify --destination=yourregistry/you/py-bot:latest --destination=yourregistry/you/py-bot:v$BUILD_NUMBER'
      }
    }
    stage('Deploy and Kustomize') {
      container('kustomize') {
        sh "kubectl -n ${JOB_NAME} get pod"
        sh "kustomize edit set image yourregistry/you/py-bot:v${BUILD_NUMBER}"
        sh "kustomize build > builddeploy.yaml"
        sh "kubectl get ns ${JOB_NAME} || kubectl create ns ${JOB_NAME}"
        sh "kubectl -n ${JOB_NAME} apply -f builddeploy.yaml"
        sh "kubectl -n ${JOB_NAME} get pod"
      }
    }
    // stage('Deploy with kubectl') {
    //   container('kubectl') {
    //     // sh "kubectl -n ${JOB_NAME} get pod"
    //     // sh "kustomize version"
    //     sh "kubectl get ns ${JOB_NAME} || kubectl create ns ${JOB_NAME}"
    //     sh "kubectl -n ${JOB_NAME} apply -f deployment.yaml"
    //     sh "kubectl -n ${JOB_NAME} get pod"
    //   }
    // }
  }   
}
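The kustomize commands in the pipeline assume a kustomization.yaml sits in the repo alongside the deployment manifest. A rough sketch of what that file might contain (the resource file name and image name are assumptions – check the repo for the real thing):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: yourregistry/you/py-bot
    newTag: v1

The kustomize edit set image step then just rewrites the images entry with the new build tag before kustomize build renders builddeploy.yaml.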

Create a Jenkins pipeline and name it however you like; the important part is to set the Pipeline section to “Pipeline script from SCM”. This way Jenkins knows to use the Jenkinsfile in the git repository.

Webhooks and Build Now

Webhooks are what Github uses to push a new build to Jenkins. Due to the constraints of my environment I am not able to do this. My Jenkins instance cannot be contacted by the public API of Github. For now I have to click “Build Now” manually. I do suggest in a fully automated scenario investigating how to configure webhooks so that on every commit you can trigger a new pipeline build.
When the build is successful, you should see some lovely green stages like below. In this example there are only 2 stages: Build with Kaniko, which builds the container image and pushes it to my internal registry (Harbor), and Deploy and Kustomize, which takes the new image and updates the 3 deployments in my Kubernetes cluster.

Output from Kubectl:

Py-bot in a Container

So during Pure kickoff last week I did several sessions on Pure Storage and Kubernetes for our yearly Tech Summit. It was very fun to prepare for. I wanted to do something different, so I decided to take the py-bot I was running on my Raspberry Pi and up-level it with integration into K8s and the FlashBlade with PVCs. This is the second post and covers how to build the Docker container and deploy it to k8s.


Check out the repo on github: https://github.com/2vcps/python-twitter-bot

Take a look at the code in ./bots

  • autoreply.py – code to reply to mentions
  • config.py – sets the API connection
  • followFollowers_data.py – Follows anyone that follows you, then writes some of their recent tweets to a CSV on a pure-file FlashBlade filesystem
  • followFollowers.py – All the followback with no data collection
  • tweetgamescore.py – future
  • tweetgamesetup.py – future

Py-bot In Kubernetes

Prereqs

Step 1

Build the docker image and push to your own repo. Make sure you are authenticated to your internal repo.

$ docker build -t yourrepo/you/py-bot:v1 .
$ docker push yourrepo/you/py-bot:v1

Step 2

Create a secret in your k8s environment with the keys as variables. Side note: this is the only method I found that does not break the keys when storing them in K8s. If you have a better way to do it, let me know.

Edit env-secret.yaml with your keys from Twitter and the search terms.
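For reference, env-secret.yaml is shaped roughly like this – a sketch only: the key names below mirror the environment variables from the first post, so check the file in the repo for the exact names it uses:

apiVersion: v1
kind: Secret
metadata:
  name: twitter-api-secret
type: Opaque
stringData:
  CONSUMER_KEY: "your consumer key"
  CONSUMER_SECRET: "your consumer secret"
  ACCESS_TOKEN: "your access token"
  ACCESS_TOKEN_SECRET: "your access token secret"
  searchkey1: "first search term"
  searchkey2: "second search term"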

kubectl apply -f env-secret.yaml

Verify the keys are in your cluster.

kubectl describe secret twitter-api-secret

Step 3

Edit deployment.yaml and deploy the app. In my example I have 3 different deployments and one PVC. If you plan to not capture data, make sure to change the followback deployment to launch followFollowers.py and not followFollowers_data.py. Additionally, remove the PVC information if you are not using it.

Be sure to change the image for each deployment to your local repository path.
Notice that the autoreply deployment uses the env variable searchkey2 and the favretweet deployment uses searchkey1. This allows each app to search on different terms.

Be careful, if you are testing the favretweet.py program and use a common word for search you will see many many likes and retweets.

Now deploy

kubectl apply -f deployment.yaml

kubectl get pod

NAME                          READY   STATUS    RESTARTS   AGE
autoreply-df85944d5-b9gs9     1/1     Running   0          47h
favretweet-7758fb86c7-56b9q   1/1     Running   0          47h
followback-75bd88dbd8-hqmlr   1/1     Running   0          47h

kubectl logs favretweet-7758fb86c7-56b9q

INFO:root:API created
INFO:root:Processing tweet id 1229439090803847168
INFO:root:Favoriting and RT tweet Day off. No pure service orchestrator today. Close slack Jon, do it now.
INFO:root:Processing tweet id 1229439112966311936
INFO:root:Processing tweet id 1229855750702424066
INFO:root:Favoriting and RT tweet In Pittsburgh. Taking about... Pure Service Orchestrator. No surprise there.  #PSO #PureStorage
INFO:root:Processing tweet id 1229855772789460992
INFO:root:Processing tweet id 1230121679881371648
INFO:root:Favoriting and RT tweet I nearly never repost press releases, but until I can blog on it.  @PureStorage and Pure Service Orchestrator join… https://t.co/A6wxvFUUY7
INFO:root:Processing tweet id 1230121702509531137

kubectl logs followback-75bd88dbd8-hqmlr

INFO:root:Waiting... 300s
INFO:root:Retrieving and following followers
INFO:root:purelyDB
INFO:root:PreetamZare
INFO:root:josephbreynolds
INFO:root:PureBob
INFO:root:MercerRowe
INFO:root:will_weeams
INFO:root:JeanCarlos237
INFO:root:dataemilyw
INFO:root:8arkz

More info

My Blog 2vcps.io

Follow me @jon_2vcps

Python Twitter Bot

So during Pure kickoff last week I did several sessions on Pure Storage and Kubernetes for our yearly Tech Summit. It was very fun to prepare for. I wanted to do something different, so I decided to take the py-bot I was running on my Raspberry Pi and up-level it with integration into K8s and the FlashBlade with PVCs. This first post just goes over the Python code, how it works, and what you need to do to get it working for yourself.

Check out the repo on github: https://github.com/2vcps/python-twitter-bot

Py-Bot

This is a Twitterbot. Built to run on Kubernetes and also uses Pure Service Orchestrator for persistent data.
Take a look at the code in ./bots

  • autoreply.py – code to reply to mentions
  • config.py – sets the API connection
  • followFollowers_data.py – Follows anyone that follows you, then writes some of their recent tweets to a CSV on a pure-file FlashBlade filesystem
  • followFollowers.py – All the followback with no data collection
  • tweet_game_score.py – future
  • tweet_game_setup.py – future

Testing the code on your machine

Prereqs

  • python3
  • twitter account with API keys
  • Pure Service Orchestrator and working Kubernetes

Step 1
$ pip install -r requirements.txt

Step 2
Create env variables for each key. The config.py will pull from the local OS. In this case your local machine.

export CONSUMER_KEY='some key'
export CONSUMER_SECRET='some secret'
export ACCESS_TOKEN='some token'
export ACCESS_TOKEN_SECRET='some token secret'

For the autoreply.py and favretweet.py you need a search key too.

export SEARCH_KEY='the thing I search for'

Be careful, if you are testing the favretweet.py program and use a common word for search you will see many many likes and retweets.

Step 3
Run the code. If all is working you will see logs and action on Twitter.

$ python ./bots/autoreply.py 
Example output:

INFO:root:API created
INFO:root:Retrieving mentions
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:1222564855921758209
INFO:root:Searching for purestorage kubernetes
INFO:root:Retrieving mentions
INFO:root:1222564855921758209
INFO:root:Waiting...


It will continue to run, so hit Control-C to exit.

Now it is time to impress your boss and use big words like Kubernetes. Read on in the next post below about how to run this bot as a deployment in Kubernetes.


Go to the next level:
Run py-bot in Kubernetes

Quickly Install Cloud Native Storage CSI Driver for vSphere 6.7

First, you really, truly should understand the docs on VMware’s CSI driver.
Cloud Native Storage Getting Started

More information can be found at my GitHub.
https://github.com/2vcps/cns-installer

Next, if you meet all the pre-requisites mentioned in the CNS documentation, clone my repo:

git clone https://github.com/2vcps/cns-installer.git

Then edit the install.sh and add your credentials and vCenter information.

VCENTER="<vcenter name or IP>" 
VC_ADMIN="<vc admin>" 
VC_PASS="<vc password>" 
VC_DATACENTER="<vc datacentername>" 
VC_NETWORK="<vc vm network name>"

VMware requires all the master nodes to be tainted this way.

MASTERS=$(kubectl get node --selector='node-role.kubernetes.io/master' -o name)
for n in $MASTERS
do
    kubectl taint nodes $n node-role.kubernetes.io/master=:NoSchedule
done
kubectl describe nodes | egrep "Taints:|Name:"

Run the installer shell script (sorry Windows users, install WSL or something)

# ./install.sh

To Remove

Remove all PVC’s created with the Storage Class.

kubectl delete pvc <pvc-name>

Then run the cleanup script.

./uninstall.sh

You can run kubectl get all --all-namespaces to verify it is removed.

Note

If the CSI driver for vSphere does not start, the Cloud Controller may not have untainted the nodes when it initialized. I have seen it work automatically (as designed by VMware), and I have also had to run this to make it work:

NODES=$(kubectl get nodes -o name)
for n in $NODES
do
    kubectl taint nodes $n node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule-
done
kubectl describe nodes | egrep "Taints:|Name:"

vVols Soon?

Pure Storage + CNS + SPBM will be awesome.

Create StorageClass for CNS

Copy and paste the YAML below – any datastore URL works:
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
   name: cns-vvols
   annotations:
     storageclass.kubernetes.io/is-default-class: "false"
 provisioner: csi.vsphere.vmware.com
 parameters:
   # storagepolicyname: "pure-vvols"
   DatastoreURL: "ds:///vmfs/volumes/vvol:373bb977d8ca3de8-a41c2e2c4d1f43e6/"
   fstype: ext4

Create a new file called cns-vvols.yaml and paste the above yaml. Now you will have the replace the **DatastoreURL** with a datastore that matches your environment. vVols is not currently “supported” but it can work with SPBM policies that point to FlashArrays and have no other policies enabled. Try it out if you like just remember it is not supported and that is why it is commented out.