Thank You – Play with Docker!

At KubeCon in Austin I was talking with my good friend Jonas Rosland about how I was getting to train a group of new Pure SEs on Cloud Native Apps and planned on doing labs installing Docker and running apps in containers. I was going to get every student a VM and let them spin up some containers. He reminded me of Play with Docker. What a great idea, I thought.

Not only that, he went ahead and introduced me to Marcos, the developer of PWD. On the expo floor I was able to sit and chat with someone who created quite a cool way for people to experience Docker. It is just a cool thing about people who do something for the community: a nice guy who was willing to answer questions and was not too busy to help someone out.

Since I had a good number of students logging in at once, it seemed like a good idea to set it up on our own. In just a day I spun up the environment and let the students get to work. Everyone had their own playground that ran Docker in Docker, and everyone got to play with something they normally would not get a chance to. Once I clean out some of the Pure-specific stuff I will post the class and slides to GitHub.
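If you are curious what that looks like under the hood, here is a rough sketch of the general Docker-in-Docker idea the playgrounds are built on (not the exact PWD setup, just the official docker:dind image):

# run a privileged Docker-in-Docker "playground" container
docker run --privileged --name playground -d docker:dind

# exec in and use the inner Docker engine as if it were a fresh host
docker exec -it playground docker version
docker exec -it playground docker run hello-world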

I will create a post about what it took to get it up and running on my own in the next day or so. This is more of a thank you for all the work Marcos did to create this cool project for everyone to enjoy.

So if you are looking to learn a little more about Docker head to:

https://labs.play-with-docker.com/

ALSO if you want to learn about Kubernetes, there is a Play with K8s version too!

https://labs.play-with-k8s.com/

My SSH Issue with Docker Swarm Hosts

That one time you all of a sudden could not SSH into your Docker Swarm hosts?

I am writing this so I will remember to be smarter next time.

Ever Get this?

minas-tirith:~ jowings$ ssh scarif
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

I started to flip out and wondered why this all of a sudden happened on all four hosts in my swarm cluster. Was something actually nasty happening? Probably not, but you never know. At first I thought I broke the public key on my Mac, so I went into .ssh/known_hosts and removed the entries for my hosts, since I see this warning pretty often because I rebuild VMs and hosts all the time. Then I got something different, and the same exact error from my Windows 10 machine.

Permission denied (publickey).

Pretty sure I didn't break two different SSH clients on two different computers.
What did I do?

$docker stack deploy -c gitlab.yml gitlab

I keep local git copies and thought I would be smart and run GitLab as a service in my home lab.

Problem: in my zeal to have git use the standard SSH TCP port 22 to push my repos up to the server, I did this:

version: '3'
services:
  web:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'gitlab1'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.2vcps.local'
    ports:
      - '80:80'
      - '443:443'
      - '22:22'

So basically my GitLab service was now available on tcp/22 across my entire cluster. Even though the container only runs on one host, the way Docker overlay networking (the routing mesh) works, any host in the cluster will forward a request on tcp/22 to that container. The container without my public key; the container that, no matter the hostname, does not have the same SSH host key as my actual hosts.
Bad move JO.
So don’t do that and stuff.
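If you want to see what the routing mesh is publishing before you lock yourself out, something like this will show the published ports (assuming the stack was deployed as gitlab, so the service name is gitlab_web):

docker service ls
docker service inspect --format '{{json .Endpoint.Ports}}' gitlab_web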

To fix:

version: '3'
services:
  web:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'gitlab1'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.2vcps.local'
    ports:
      - '80:80'
      - '443:443'
      - '12022:22'

I changed the port mapping for now. I can use HAProxy later to use the virtual hostname and point traffic to the container.

$docker stack deploy -c gitlab.yml gitlab

and it updates the service with the new port mapping.

Deploying Persistent Storage in Docker Swarm using Pure Storage Whitepaper

Spreading the word about a new paper published by Simon Dodsley on Deploying Persistent Storage in Docker Swarm.

In this paper, Simon uses the Pure Docker Volume Plugin to create persistent storage for CockroachDB. That is all well and good if you want to play with CockroachDB, but it also lays the foundation for using the plugin to create persistent data for your own app.

What applications are you using with containers that require persistent (and reliable) data storage? I would be very interested in seeing how this works for everyone else with their own apps.

CockroachDB with Persistent Data

There IS an Official Whitepaper!

While I was writing this post the awesome Simon Dodsley was writing a great whitepaper on persistent storage with Pure. As you can see there are some very different ways to deploy CockroachDB, but the main goal is to keep your important data persistent no matter what happens to the containers as they scale, live, and die.

I know most everyone loved seeing the demo of the most mission-critical app in my house. I also want to show a few quick ways to leverage the Pure plugin to provide persistent data to a database. I am posting the files I used to create the demo here: https://github.com/2vcps/crdb-demo-pure

First note
I started with the instructions provided here by Cockroach Labs.
This is an insecure installation for demo purposes. They do provide instructions for a more production-ready version, but this is good enough for now.

Second note
The load balancer I used was created for my environment using the instructions for generating the HAProxy config file, found here on the Cockroach Labs website:
https://www.cockroachlabs.com/docs/stable/generate-cockroachdb-resources.html

My YAML file refers to a Docker image I built for the HAProxy load balancer. If it works for you, cool! If not, please follow the instructions above to create your own. If you really need to know more I can write another post showing how to take the Dockerfile and copy the config generated by CRDB into a new image just for you.
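In the meantime, a minimal sketch of what that image could look like, assuming you have already generated an haproxy.cfg by following the Cockroach Labs instructions linked above (the path below is where the official haproxy image expects its config):

# Dockerfile (sketch)
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

# then build and tag it for your own registry
docker build -t yourrepo/crdb-proxy:v1 .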

 

My nice little docker swarm


I have three VMware VMs running Ubuntu 16.04, with Docker CE and the Pure plugin already installed. Read more here if you want to install the plugin.


Run the deploy

https://github.com/2vcps/crdb-demo-pure/blob/master/3node-cockroachdb-pure.yml

version: '3.1'
services:
    db1:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
            mode: replicated
            replicas: 1
      ports:
            - 8888:8080
      command: start --advertise-host=cockroach_db1 --logtostderr --insecure
      networks:
            - cockroachdb
      volumes:
            - cockroachdb-1:/cockroach/cockroach-data
    db2:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db2 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-2:/cockroach/cockroach-data
    db3:
      image: cockroachdb/cockroach:v1.0.2
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db3 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-3:/cockroach/cockroach-data
    crdb-proxy:
      image: jowings/crdb-proxy:v1
      deploy:
         mode: replicated
         replicas: 1
      ports:
         - 26257:26257
      networks: 
         - cockroachdb

networks:
    cockroachdb:
        external: true

volumes:
    cockroachdb-1:
      driver: pure
    cockroachdb-2:
      driver: pure
    cockroachdb-3:
      driver: pure

 

#docker stack deploy -c 3node-cockroachdb-pure.yml cockroach

As the compose file shows, this command deploys four services: three database nodes and one HAProxy. Each database node gets a brand new volume attached directly to the data path by the Pure Docker Volume Plugin.
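A quick way to confirm everything came up (service names are prefixed with the stack name, cockroach in this case):

docker stack services cockroach
docker service ls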

New Volumes


Each new volume is created, attached to the host via iSCSI, and mounted into the container.

Cool Dashboard


Other than there being no data, do you notice something else?
First, let's generate some data.
I run this from a client machine, but you can attach to one of the DB containers and run this command to generate some sample data.

cockroach gen example-data | cockroach sql --insecure --host [any host ip of your docker swarm]


I am also going to create a “bank” database and use a few containers to start inserting data over and over.

cockroach sql --insecure --host 10.21.84.7
# Welcome to the cockroach SQL interface.
# All statements must be terminated by a semicolon.
# To exit: CTRL + D.
root@10.21.84.7:26257/> CREATE database bank;
CREATE DATABASE
root@10.21.84.7:26257/> set database = bank;
SET
root@10.21.84.7:26257/bank> create table accounts (
-> id INT PRIMARY KEY,
-> balance DECIMAL
-> );
CREATE TABLE
root@10.21.84.7:26257/bank> ^D

I created a program in Go to insert some data into the database just to make the charts interesting. The container starts, inserts a few thousand rows, then exits. I run it as a service with 12 replicas so it is constantly going. I call it gogogo because I am funny.
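For reference, running a throwaway load generator like that as a swarm service looks roughly like this (the image name is just a placeholder for my gogogo build):

docker service create \
  --name gogogo \
  --network cockroachdb \
  --replicas 12 \
  --restart-condition any \
  <your-registry>/gogogo:latest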


gogogo


You can see the data slowly going into the volumes.


Each node remains (roughly) balanced as CockroachDB stores the data.

What happens if a container dies?


Let's make this one go away.


We kill it.
Swarm starts a new one. The Docker engine uses the Pure plugin and remounts the volume. The CRDB cluster keeps on going.
New container ID but the data is the same.
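If you want to run the same failure test, it looks roughly like this (db2 is just the node I picked; run the first two commands on whichever host actually has that task):

# find the db2 task's container on the host it landed on, then kill it
docker ps --filter name=cockroach_db2
docker rm -f <container-id>

# watch swarm schedule a replacement and the volume get remounted
docker service ps cockroach_db2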


Alright what do I do now?


So you want to update the image to the latest version of Cockroach? Did you notice this in our first screenshot?

Also, our database is getting a lot of hits (not really, but let's pretend), so we need to scale it out. What do we do now?

https://github.com/2vcps/crdb-demo-pure/blob/master/6node-cockroachdb-pure.yml

version: '3.1'
services:
    db1:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
            mode: replicated
            replicas: 1
      ports:
            - 8888:8080
      command: start --advertise-host=cockroach_db1 --logtostderr --insecure
      networks:
            - cockroachdb
      volumes:
            - cockroachdb-1:/cockroach/cockroach-data
    db2:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db2 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-2:/cockroach/cockroach-data
    db3:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db3 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-3:/cockroach/cockroach-data
    crdb-proxy:
      image: jowings/crdb-haproxy:v2
      deploy:
         mode: replicated
         replicas: 1
      ports:
         - 26257:26257
      networks: 
         - cockroachdb
    db4:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db4 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-4:/cockroach/cockroach-data
    db5:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db5 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-5:/cockroach/cockroach-data
    db6:
      image: cockroachdb/cockroach:v1.0.3
      deploy:
         mode: replicated
         replicas: 1
      command: start --advertise-host=cockroach_db6 --join=cockroach_db1:26257 --logtostderr --insecure
      networks:
         - cockroachdb
      volumes:
         - cockroachdb-6:/cockroach/cockroach-data
networks:
    cockroachdb:
        external: true

volumes:
    cockroachdb-1:
      driver: pure
    cockroachdb-2:
      driver: pure
    cockroachdb-3:
      driver: pure
    cockroachdb-4:
      driver: pure
    cockroachdb-5:
      driver: pure
    cockroachdb-6:
      driver: pure
$docker stack deploy -c 6node-cockroachdb-pure.yml cockroach

(it is important to provide the name of the stack you already used, or else you get errors)


We are going to update the services with the new images.

  1. This will replace the container with the new version — v1.0.3
  2. This will attach the existing volumes for nodes db1,db2,db3 to the already created FlashArray volumes.
  3. Also create new empty volumes for the new scaled out nodes db4,db5,db6
  4. CockroachDB will begin replicating the data to the new nodes.
  5. My gogogo client "barrage" is still running

This is kind of the shotgun approach in this non-prod demo environment. If you want zero-downtime upgrades to containers, I suggest reading more on blue-green deployments. I will show how to do an application upgrade with no downtime using blue-green in another post.

CockroachDB begins to rebalance the data.


6 nodes


If you notice the gap in the queries, it is because I updated every node all at once. A better way would be to do one at a time and make sure each node is back up as they "roll" through the upgrade to the new image. Not prod, remember?
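A gentler (still non-prod) sketch of that approach: update one service at a time with docker service update and confirm it is healthy before moving to the next, something like this for the three existing nodes:

for svc in cockroach_db1 cockroach_db2 cockroach_db3; do
  docker service update --image cockroachdb/cockroach:v1.0.3 $svc
  # confirm the new task is running before moving on
  docker service ps $svc
done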


The application says you are using 771 MiB of your 192 GB, while the FlashArray is using maybe 105 MB across these volumes.

A little while later…


Now we are mostly balanced with replicas in each db node.

Conclusion
This is just scratching the surface of running highly scalable data applications in containers with persistent data on a FlashArray. Are you a Pure customer or potential Pure customer about to run stateful/persistent apps on Docker/Kubernetes/DCOS? I want to hear from you. Leave a comment or send me a message on Twitter @jon_2vcps.

If you are a developer and have no clue what your infrastructure team does or is doing, I am here to help make everyone's life better. No more weekend-long deployments or upgrades. Get out of doing storage performance troubleshooting.

Go to more of your kids' soccer games.

Using the Docker Volume Plugin with Docker Swarm

Remember the prerequisites. Check the official README for the latest information. Official README

Platform and Software Dependencies

Operating Systems Supported:

  • CentOS Linux 7.3
  • CoreOS (Ladybug 1298.6.0 and above)
  • Ubuntu (Trusty 14.04 LTS, Xenial 16.04.2 LTS)

Environments Supported :

  • Docker (v1.13 and above)
  • Swarm
  • Mesos 1.8 and above

Other software dependencies:

  • Latest iscsi initiator software for your operating system
  • Latest linux multipath software package for your operating system

Review: To install the plugin –


docker plugin install store/purestorage/docker-plugin:1.0 --alias pure

OR if you are annoyed by having to hit Y for the permissions the plugin requests.


docker plugin install store/purestorage/docker-plugin:1.0 --alias pure --grant-all-permissions

The installation process is the same as on a standalone Docker host, except you must specify your clusterid. This is a unique string you assign to your swarm nodes.


docker plugin disable pure
docker plugin set pure PURE_DOCKER_NAMESPACE=<clusterid>
docker plugin enable pure

When you first install the Pure Volume Plugin, it is enabled. Docker will not allow you to modify the namespace while the plugin is in use, so we need to disable the plugin before making changes. This also means it is best to do this before creating and using any volumes.

Remember to put your API token and array management IP in the pure.json file under /etc/pure-docker-plugin/ on each host.
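A sketch of what pure.json might look like (the exact key names are from memory, so double-check them against the official README):

{
  "FlashArrays": [
    {
      "MgmtEndPoint": "[array management IP]",
      "APIToken": "[api token for your user]"
    }
  ]
}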

More information Here

Demo for setting up Swarm and testing container failover

Previous post about installing the Plugin

Pure Storage Docker Plugin

This is a quick guide on how to install the Pure plugin for Docker 1.13 and above. For full details check out the Pure Volume Plugin on store.docker.com.

Requirements

 

Operating Systems Supported

  • CentOS Linux 7.3
  • CoreOS (Ladybug 1298.6.0 and above)
  • Ubuntu (Trusty 14.04 LTS, Xenial 16.04.2 LTS)

Environments Supported

  • Docker 1.13+ (I am on 17.03-ce)
  • Swarm
  • Mesos 1.8 and above

Other dependencies

  • Latest iSCSI initiator software
  • Latest multipath package (this made a difference for me on Ubuntu, remember to update!)

Hosts Before


Here I am just listing the Pure hosts on my array before I install the plugin.

Volumes Before


Also listing out my volumes; these are all pre-existing.

Pull and Install the plugin (Docker 1.13 and above)

Create /etc/pure-docker-plugin/pure.json


Edit the file pure.json in /etc/pure-docker-plugin and add your array management IP and API token. To get a token from the Pure CLI, use the commands below (or go to the GUI of the array and copy the API token for your user).

 

pureadmin create --api-token [user]
pureadmin list --api-token [user] --expose

Pull the plugin and Install


docker plugin install store/purestorage/docker-plugin:1.0 --alias pure

Grant the plugin the permissions it requests.

Done. Easy.

For Docker Swarm

Setting the PURE_DOCKER_NAMESPACE variable can be done with the command:

docker plugin set pure PURE_DOCKER_NAMESPACE=<clusterid>

My next blog post will dive more into setting up the plugin with Docker Swarm. The clusterid is just a unique string. Keep it simple.

Test it


$docker volume create -d pure -o size=200GiB Demo

Remember, if you want to create the volume with other units, the information is in the README, but here it is for now: units can be specified as xB, xiB, or x. If no units are specified, MiB is assumed.
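For example (assuming the plugin accepts the same -o size option in each case):

$docker volume create -d pure -o size=1TiB bigvol
$docker volume create -d pure -o size=512 smallvol     # bare number = 512 MiB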

My host created by the plugin


Now that I have created a volume on the array, the host docker01 has been added to the list of hosts. The plugin automates adding the iSCSI IQN and creating the host.

My new volume all ready to go


You can also see that docker01-Demo is listed and sized to my requested 200 GiB. The PURE_DOCKER_NAMESPACE is prepended to the volume name you create. The default uses the Docker hostname. In a Mesos or Swarm environment the namespace setting mentioned above is used instead. The volume is only identified this way on the array.

Now the volume can be mounted to a container using

 

#docker run --volume Demo:/data [image] [command]
You could also create a new volume and mount it to a container all in the same line with:

 

#docker run --volume-driver pure --volume myvolume:/data [image] [command]
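As a concrete (made-up) example, putting a Postgres data directory on a brand new Pure volume would look something like:

docker run -d --name pgdemo --volume-driver pure --volume pgdata:/var/lib/postgresql/data postgres:9.6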

My First DockerCon

Wrapping up my very first DockerCon. I learned great new things, was introduced to new tech and reconnected with some old friends.

This was my first convention in a very long time where I actually just attended the show and went to sessions. It was really nice. People usually read my blog looking for tips and tricks on how to do technical things, not my philosophic rambling, so I won't try to be a pundit on announcements and competition and all that. Some cool things I learned:

  1. Share everything on GitHub. People use GitHub as the de facto standard for sharing information. Usually it is code, but lots more is out there, including presentations and demos for a lot of what happened at DockerCon. As someone who has always loved sharing what I learn via this blog, it is exciting that this is expected. I will post some of my notes and other things about specific sessions once the info is all posted.
  2. Having been a "storage guy" for the past 6 years or so between Pure Storage and EMC, it was good to see how many companies in the ecosystem have solutions built for CI/CD and container security. Very different from other shows, where the storage vendors dominate the mindshare.
  3. Over the years friends and co-workers have gone their own way and ended up all over the industry. Some of my favorite people, the ones who always put a very high value on community and sharing, seem to be the same people who gravitate to DockerCon. It was great to see all of you and meet some new people.

More to follow as I pull my notes together and find links to the sessions.

 

Kubernetes Anywhere and PhotonOS Template

Experimenting with Kubernetes to orchestrate and manage containers? If you are like me and already have a lot invested in vSphere (time, infra, knowledge), you might be excited to use Kubernetes Anywhere to deploy it quickly. I won't re-write the instructions found here:

https://github.com/kubernetes/kubernetes-anywhere

It works with

  • Google Compute Engine
  • Azure
  • vSphere

The vSphere option uses the Photon OS OVA to spin up the container hosts and managers, so you can try it out easily with very little background in containers. That is dangerous, as you will find yourself neck deep in new things to learn.

Don’t turn on the template!


If you are like me and *skim* instructions, you could be in for hours of "Why do all my nodes have the same IP?" When you power on the Photon OS template, the startup sequence generates a machine ID (and MAC address). So even though I powered it back off, the cloning process was producing identical VMs for my Kubernetes cluster. For those not hip to networking, this is bad for communication.

Also, don't try to be a good VMware admin and convert that VM to a VM Template. The Kubernetes Anywhere script won't find it.

If you do like me and skip a few lines while reading (it happens, right?), make sure to check out this documentation on Photon OS. It will help get you on the right track.

https://github.com/vmware/photon/blob/master/docs/photon-admin-guide.md#clearing-the-machine-id-of-a-cloned-instance-for-dhcp
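The short version of that fix, as I understand the Photon OS guide (check the link above for the authoritative steps), is to clear the machine ID inside the template before you clone it so each clone regenerates its own on first boot:

# inside the Photon OS template, before shutting down and cloning
echo -n > /etc/machine-id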

This is clearly marked in the documentation now.

Setting Docker_gwbridge Subnet

I had an issue with the Docker Swarm subnet automatically generated when I do:

$docker swarm init

Basically it was choosing the subnet my VPN connection was using to assign an IP to my machine on the internal network. Obviously this wreaked havoc on my ability to connect to the Docker hosts I was working with in our lab.

I decided it would be worth it to create the docker_gwbridge network myself and assign a CIDR subnet that would not overlap with the VPN.

$docker network create --subnet 192.168.249.0/24 docker_gwbridge

I did this before I created the swarm cluster. So far everything is working fine in the lab and I am able to SSH to the Docker Host and connect to the services I am testing on those machines. There may be other issues and I will report back as I find them.
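To double-check which subnet the bridge actually got, run this on each host:

docker network inspect --format '{{json .IPAM.Config}}' docker_gwbridge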


Come see @CodyHosterman at VMworld, and if he is too busy you can see me

Look for a post about going to In-n-Out some time soon; it is my tradition.

Be sure to check out what we will be doing at VMworld at the end of the month. Click the banner below once you are done being mesmerized by Chappy. Sign up for a 1:1 demo or meeting; I'll be there and would love to meet with you. See how focused a demo I give.

 


Sessions to be sure to see featuring the Amazing Cody Hosterman

SDDC9456-SPO: Implementing Self-Service Storage Provisioning with vRealize Automation Xaas

VMware vCenter is no longer meant to be the end-user interface for requesting and managing virtual machines and related resources. Storage is no exception. Join Cody Hosterman as he discusses how vRealize Automation Anything-as-a-Service (Xaas) provides the ability to easily import vRealize Orchestrator workflows to control, manage and provision storage via the self-service catalog offering vRealize Automation.

Wednesday, Aug 31, 2:00 PM – 3:00 PM

NF9455-SPO: Best Practices for All-Flash Data Reduction Arrays with VMware vSphere

As All-Flash Data Reduction arrays are becoming commonplace in VMware environments due to their performance, flexibility, and ease of use, it is important to understand how to best implement and manage them with ESXi. Data reduction and flash change how an administrator should think about various configuration options within VMware, and those will be discussed in detail. VAAI, Space Reclamation, virtual disks, SIOC, SDRS, queue depths, multipathing, and other points will be highlighted.

Monday, Aug 29, 2:30 PM – 3:30 PM