Test all Kubeconfig Contexts

One day I woke up and had like 14 clusters in my Kubeconfig. I didn’t remember which ones did what or if the clusters even still existed.

kubectl config get-contexts -o name | xargs -I {} kubectl --context={} get nodes -o wide

So I cooked up this command to loop through them all and make sure they actually respond. This works for me; if you have an alternative way, please share in the comments.
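
If you prefer a plain loop, here is an equivalent sketch (same idea, just with a request timeout so dead clusters fail fast instead of hanging):

for ctx in $(kubectl config get-contexts -o name); do
  echo "=== ${ctx} ==="
  # --request-timeout keeps an unreachable cluster from stalling the whole run
  kubectl --context="${ctx}" --request-timeout=10s get nodes -o wide || echo "context ${ctx} is unreachable"
done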

Connecting your Application to Cassandra on PDS

Next up is a way to test Cassandra when deployed with PDS. I saved my Python application to GitHub here: https://github.com/2vcps/py-cassandra

The key here is to deploy Cassandra via PDS, then get the server connection names from PDS. Each step is explained in the repo, so go over there and fork or clone it, or just use my settings. A quick summary, though (it really is this easy):

  1. Deploy Cassandra to your Target in PDS.
  2. Edit the env-secret.yaml file to match your deployment.
  3. Apply the secret. kubectl -n namespace apply -f env-secret.yaml
  4. Apply the deployment. kubectl -n namespace apply -f worker.yaml
  5. Check the database in the Cassandra pod. kubectl -n namespace exec -it cas-pod -- bash
  6. Use cqlsh to check the table the app creates (see the sketch just below this list).
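
Here is roughly what steps 5 and 6 look like in one pass. The namespace, pod name, and credentials are placeholders; the real values come from your PDS deployment and the env-secret.yaml in the repo.

# exec into the Cassandra pod and list keyspaces with cqlsh
# (substitute your namespace, pod name, and the PDS-provided credentials)
kubectl -n namespace exec -it cas-pod -- cqlsh -u <pds-username> -p <pds-password> -e "DESCRIBE KEYSPACES"
# then check the keyspace/table the app writes to, for example:
# kubectl -n namespace exec -it cas-pod -- cqlsh -u <pds-username> -p <pds-password> -e "SELECT * FROM <keyspace>.<table> LIMIT 10"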

That is it, pretty easy, and it creates a lot of records in the database. You could also scale up the deployment to test connections from many sources. I hope this helps you get going quickly with PDS, and if you have any updates or changes to my repo, please submit a PR.

Testing Apache Kafka in Portworx Data Services

With the GA of Portworx Data Services I needed a way to connect some test applications to Apache Kafka. Kafka is one of the most requested Data Services in PDS. Deploying Kafka is very easy with PDS, but I wanted to show how easy it is for a data team to connect their application to Kafka in PDS. I found the kafka-python library, so I started working on a few things:

  1. Write a Python script to create some kind of load on Kafka.
  2. Containerize it, so it is easy and repeatable.
  3. Create the Kubernetes deployments so it is quick and easy to run.

The following GitHub repo is the result of that project:
https://github.com/2vcps/py-kafka

See the repo for the steps to set up the secret and deployments in K8s for use with your PDS Kafka. Honestly, it should work with any Kafka deployment where you have the connection service, username, and password.
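
For reference, the rough shape of it looks like this. The file and resource names below are placeholders; the real ones are in the py-kafka repo.

# create a namespace, apply the secret carrying the PDS connection service,
# username, and password, then apply the deployment from the repo
kubectl create ns kafka-test
kubectl -n kafka-test apply -f <kafka-secret-from-repo>.yaml
kubectl -n kafka-test apply -f <deployment-from-repo>.yaml
# watch the logs to confirm messages are flowing
kubectl -n kafka-test logs deploy/<deployment-name> -f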

A quick demo of it all in action

Check out the YouTube demo I did above to see it all in action.

Portworx Data Services GA: An Admin's View, so easy you won't believe it…

Portworx Data Services (PDS), the DBaaS platform built on Portworx Enterprise, is one platform for all databases. This SaaS platform works with your Kubernetes clusters in the cloud or in your datacenter. Check out this demo of some of the admin tasks available.

The good part: other than adding your DB consumers and your target clusters to run workloads, the rest is configured for you. The power of the platform is that you can change many settings, but the best practices are already put in place for you. Now you can have all your databases with just one API and one UI.

You no longer need to learn an Operator or custom resources for every different data service your developers, data architects, and DBAs ask for. And your data teams don't have to learn K8s; they work with databases without ever having to become platform experts. Just one API, one UI.
One Platform. All Databases.

Collection of PDS Links (as of May 18, 2022)

Blog from Umair Mufti on whether you should use DBaaS or DIY
https://blog.purestorage.com/perspectives/dbaas-or-diy-build-versus-buy-comparison/
How to deploy Postgres, by Bhavin Shah, with a video demo
https://portworx.com/blog/accelerate-data-services-deployment-on-kubernetes-using-portworx-data-services/
Deploying Postgres via PDS by Ron Ekins, with step-by-step details
https://ronekins.com/2022/05/18/portworx-data-services-pds-and-postgresql/
Portworx Data Services page on Purestorage.com
https://www.purestorage.com/enable/portworx/data-services.html
PDS is GA Announcement from Pure Storage.
https://www.purestorage.com/company/newsroom/press-releases/pure-boosts-developer-productivity-expanding-portworx-portfolio.html
Migrating existing PostgreSQL into Managed PostgreSQL in PDS
https://portworx.com/blog/migrating-postgresql-to-portworx-data-services/




Be Awesome with Neo4j Graph Database In Kubernetes

The graph database Neo4j is popular with data scientists and data architects trying to map the connections between nodes and relationships. Neo4j handles its own memory management and other in-memory operations for efficiency and performance, but all of that data eventually needs to persist to a data management platform. This is why I was first asked about Neo4j.

Some cool nodes

Like most software solutions these days there are Community and Enterprise Editions, and the robust enterprise-grade features, things like RBAC and much higher scale, land in the Enterprise Edition.

Neo4j provides a repo of Helm charts and some helpful documentation on running the graph database in Kubernetes. Unfortunately, the instructions stop short of many of the K8s flavors supported in the cloud and on premises. Pre-provisioning cloud disks might be great for a point solution of a single app, but most of my interactions with people running K8s in production are with teams building Platform as a Service (PaaS) or Database as a Service (DBaaS) offerings. Nearly all of them at least want the option of building these solutions to be hybrid or multi-cloud capable. Additionally, DR and backup are requirements in any environment that values its data and staying in business.

Portworx in 20 seconds

Portworx is a data platform that allows stateful applications such as Neo4j to run in any cloud or on on-premises hardware, on any K8s distribution. It was built from the very beginning to run as a container, for containers. </end commercial>

In this blog post I want to enhance and clarify the documentation for the Neo4j helm chart so that you can easily run the community or Enterprise Editions in your K8s deployment.

As with any database, Neo4j benefits greatly from running its persistence on flash. All of my testing was done with a Pure Storage FlashArray.

Step 1

Have K8s and Portworx already installed. I used Portworx 2.9.1.1 and vanilla K8s 1.22. Also have Helm already installed.
Note: The Helm chart was giving me trouble until I updated Helm to version 3.8.x.
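
A quick check that your Helm client is new enough before you start:

helm version --short
# anything below v3.8.x gave me trouble with this chart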

Step 2

Add the Neo4j helm repo

helm repo add neo4j https://helm.neo4j.com/neo4j
helm repo update

Step 3

Create a neo4j storage class

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: neo4j
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  io_profile: db_remote

Save the above as neo4j-storageclass.yaml and apply it:

kubectl apply -f neo4j-storageclass.yaml

Verify the Storage Class is available

kubectl get sc

Step 4

Create your values.yaml for your install.

values-standalone.yaml
neo4j:
  resources:
    cpu: "0.5"
    memory: "2Gi"

  # Uncomment to set the initial password
  #password: "my-initial-password"

  # Uncomment to use enterprise edition
  #edition: "enterprise"
  #acceptLicenseAgreement: "yes"

volumes:
  data:
    mode: "dynamic"
    # Only used if mode is set to "dynamic"
    # Dynamic provisioning using the provided storageClass
    dynamic:
      storageClassName: "neo4j"
      accessModes:
        - ReadWriteOnce
      requests:
        storage: 100Gi

Cluster values.yaml

node[0-x]values-cluster.yaml

Why do I say 0-x? Well, Neo4j requires a Helm release for each core cluster node (read more detail in the Neo4j Helm docs). Each file, for now, is the same. Note: Neo4j is an in-memory database; 2Gi of RAM is fine for the lab, but for real analytics use I would want much more memory.

neo4j:
  name: "my-cluster"
  resources:
    cpu: "0.5"
    memory: "2Gi"
  password: "my-password"
  acceptLicenseAgreement: "yes"

volumes:
  data:
    mode: "dynamic"
    # Only used if mode is set to "dynamic"
    # Dynamic provisioning using the provided storageClass
    dynamic:
      storageClassName: "neo4j"
      accessModes:
        - ReadWriteOnce
      requests:
        storage: 100Gi

Read Replica values

Create another Helm values file; here is rr1-values-cluster.yaml:

neo4j:
  name: "my-cluster"
  resources:
    cpu: "0.5"
    memory: "2Gi"
  password: "my-password"
  acceptLicenseAgreement: "yes"

volumes:
  data:
    mode: "dynamic"
    # Only used if mode is set to "dynamic"
    # Dynamic provisioning using the provided storageClass
    dynamic:
      storageClassName: "neo4j"
      accessModes:
        - ReadWriteOnce
      requests:
        storage: 100Gi

Download all the values.yaml from GitHub

Step 5

Install Neo4j (stand alone)

Standalone

The docs say to run this:

helm install my-neo4j-release neo4j/neo4j-standalone -f my-neo4j.values-standalone.yaml

I do the following instead, and I'll explain why below.

helm install -n neo4j-1 neo4j-1 neo4j/neo4j-standalone -f ./values-standalone.yaml --create-namespace
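
Once the release installs, a quick sanity check (plain kubectl, nothing chart-specific) that the pod and its dynamically provisioned volume came up in that namespace:

kubectl -n neo4j-1 get pods,pvc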

Cluster Install

For each cluster node values.yaml, run:

helm install -n neo4j-cluster neo4j-cluster-3 neo4j/neo4j-cluster-core -f ./node1-values-cluster.yaml --create-namespace

You need a minimum of 3 core nodes to create a cluster. So you must run the helm install command 3 times for the neo4j-cluster-core helm chart.
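
Since each core node is its own Helm release, a small loop keeps it repeatable. This assumes you named the values files node1- through node3-values-cluster.yaml as described above:

# one Helm release per core node; a minimum of 3 are required for a cluster
for i in 1 2 3; do
  helm install -n neo4j-cluster "neo4j-cluster-${i}" neo4j/neo4j-cluster-core \
    -f "./node${i}-values-cluster.yaml" --create-namespace
done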

Successful Cluster Creation
 kubectl -n neo4j-cluster exec neo4j-cluster-0 -- tail /logs/neo4j.log
2022-03-29 19:14:55.139+0000 INFO  Bolt enabled on [0:0:0:0:0:0:0:0%0]:7687.
2022-03-29 19:14:55.141+0000 INFO  Bolt (Routing) enabled on [0:0:0:0:0:0:0:0%0]:7688.
2022-03-29 19:15:09.324+0000 INFO  Remote interface available at http://localhost:7474/
2022-03-29 19:15:09.337+0000 INFO  id: E2E827273BD3E291C8DF4D4162323C77935396BB4FFB14A278EAA08A989EB0D2
2022-03-29 19:15:09.337+0000 INFO  name: system
2022-03-29 19:15:09.337+0000 INFO  creationDate: 2022-03-29T19:13:44.464Z
2022-03-29 19:15:09.337+0000 INFO  Started.
2022-03-29 19:15:35.595+0000 INFO  Connected to neo4j-cluster-3-internals.neo4j-cluster.svc.cluster.local/10.233.125.2:7000 [RAFT version:5.0]
2022-03-29 19:15:35.739+0000 INFO  Connected to neo4j-cluster-2-internals.neo4j-cluster.svc.cluster.local/10.233.127.2:7000 [RAFT version:5.0]
2022-03-29 19:15:35.876+0000 INFO  Connected to neo4j-cluster-3-internals.neo4j-cluster.svc.cluster.local/10.233.125.2:7000 [RAFT version:5.0]

Install Read Replica

The cluster must be up and functioning to install the read replica.

helm install -n neo4j-cluster neo4j-cluster-rr1 neo4j/neo4j-cluster-read-replica -f ./rr1-values-cluster.yaml

Install the Loadbalancer

To access Neo4j from an external source, you should install the load balancer service. Run the following command for our example.

 helm install -n neo4j-cluster lb neo4j/neo4j-cluster-loadbalancer --set neo4j.name=my-cluster

Why the -n tag?

I provide -n with a namespace and the --create-namespace flag because it allows me to install my Helm release, in this case neo4j-1, into its own namespace. That helps with operations like DR, backup, and even lifecycle cleanup down the road. When installing a cluster, all the Helm releases must be in the same namespace.
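
A quick way to see the releases grouped by namespace once everything is installed:

helm list -n neo4j-1          # the standalone release
helm list -n neo4j-cluster    # the core nodes, read replica, and loadbalancer releases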

Start Graph Databasing!

As you can see, there are plenty of tutorials showing how you can use Neo4j.

Some other tips:
https://neo4j.com/docs/operations-manual/current/performance/disks-ram-and-other-tips/

See below for details of the PX cluster.

Status: PX is operational
Telemetry: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: ade858a2-30d4-41ba-a2ce-7ee1f9b7c4c0
	IP: 10.21.244.207
 	Local Storage Pool: 1 pool
	POOL	IO_PRIORITY	RAID_LEVEL	USABLE	USED	STATUS	ZONE	REGION
	0	HIGH		raid0		297 GiB	12 GiB	Online	default	default
	Local Storage Devices: 2 devices
	Device	Path							Media Type		Size		Last-Scan
	0:1	/dev/mapper/3624a937081f096d1c1642a6900d954aa-part2	STORAGE_MEDIUM_SSD	147 GiB		29 Mar 22 16:21 UTC
	0:2	/dev/mapper/3624a937081f096d1c1642a6900d954ab		STORAGE_MEDIUM_SSD	150 GiB		29 Mar 22 16:21 UTC
	total								-			297 GiB
	Cache Devices:
	 * No cache devices
	Journal Device:
	1	/dev/mapper/3624a937081f096d1c1642a6900d954aa-part1	STORAGE_MEDIUM_SSD
Cluster Summary
	Cluster ID: px-fa-demo1
	Cluster UUID: 76eaa789-384d-4af2-b476-b9b3fd7fdcab
	Scheduler: kubernetes
	Nodes: 8 node(s) with storage (8 online)
	IP		ID					SchedulerNodeName	Auth		StorageNode	Used	Capacity	Status	StorageStatus	Version		Kernel			OS
	10.21.244.203	f917d321-3857-4ee1-bfaf-df6419fdac53	pxfa1-3			Disabled	Yes		12 GiB	297 GiB		Online	Up	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
	10.21.244.209	c3bea2dc-10cd-4485-8d02-e65a23bf10aa	pxfa1-9			Disabled	Yes		12 GiB	297 GiB		Online	Up	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
	10.21.244.204	b27e351a-fed3-40b9-a6d3-7de2e7e88ac3	pxfa1-4			Disabled	Yes		12 GiB	297 GiB		Online	Up	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
	10.21.244.207	ade858a2-30d4-41ba-a2ce-7ee1f9b7c4c0	pxfa1-7			Disabled	Yes		12 GiB	297 GiB		Online	Up (This node)	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
	10.21.244.205	a413f877-3199-4664-961a-0296faa3589d	pxfa1-5			Disabled	Yes		12 GiB	297 GiB		Online	Up	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
	10.21.244.202	37ae23c7-1205-4a03-bbe1-a16a7e58849a	pxfa1-2			Disabled	Yes		12 GiB	297 GiB		Online	Up	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
	10.21.244.206	369bfc54-9f75-45ef-9442-b6494c7f0572	pxfa1-6			Disabled	Yes		12 GiB	297 GiB		Online	Up	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
	10.21.244.208	00683edb-96fc-4861-815d-a0430f8fc84b	pxfa1-8			Disabled	Yes		12 GiB	297 GiB		Online	Up	2.9.1.3-7769924	5.4.0-105-generic	Ubuntu 20.04.4 LTS
Global Storage Pool
	Total Used    	:  96 GiB
	Total Capacity	:  2.3 TiB

PX Essentials for FA (2.9.0)! How to install it?

This week Portworx Enterprise 2.9.0 released, and with it comes support for K8s 1.22 and the new Essentials for FlashArray license. You can now have more nodes, more capacity, and as many clusters as you like; previously you only got 5 nodes and a single cluster with PX Essentials. More info in the release notes:

https://docs.portworx.com/reference/release-notes/portworx/#2-9-0

How to install your PX License

  1. Install the prerequisites for connecting your worker nodes with either iSCSI or Fibre Channel. https://docs.portworx.com/cloud-references/auto-disk-provisioning/pure-flash-array/
  2. Go to https://central.portworx.com and start generating the spec.
  3. While creating the spec you will get the commands to run with kubectl to create the px-pure-secret. This secret takes the pure.json file and puts it in a place where the Portworx installer will use it to provision its block devices from FlashArray. How do you create the pure.json file? More info here: https://docs.portworx.com/cloud-references/auto-disk-provisioning/pure-flash-array/#deploy-portworx
  4. Apply the Operator and the Storage Cluster. Enjoy your new data platform for Kubernetes.
  5. When Portworx detects the drives are coming from a FlashArray, the license is automatically set! Nothing special to do (you can verify it as shown below).
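
To verify the license once the cluster is up, you can reuse the pxctl pattern from the TKG section later in this post (standard Portworx pod label and binary path):

# grab a Portworx pod and check the license line in pxctl status
PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status | grep -i license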

Watch the walkthrough

PX + FA = ❤️

First, the ability to use Portworx with FlashArray is not new in 2.9.0; what is new is the automatic Essentials license. Of course, you get the benefits of upgrading Essentials to Enterprise, such as DR, Autopilot, and migration. Why would you want to run Portworx on the FlashArray? The central storage platform of the FlashArray gives all the benefits you are used to with Pure, but for your stateful workloads: all-flash performance, non-disruptive upgrades of everything, Evergreen, dedupe and compression with no performance impact, and thin provisioning. Put it this way: the same platform you use to run bare-metal and VM-based business-critical apps like Oracle and MS SQL, shouldn't it also run your modernized, cloud-native versions of those applications? (It can still be containerized Oracle or SQL, so don't @ me.) Together with Portworx, it just makes sense; better together.

PX + FB = ❤️

Don't get confused: while the PX store layer needs block storage to run, FlashBlade has been supported by Portworx as a Direct Access target for a while now. You may even notice in the demo above that my pure.json includes my FlashBlade. So go ahead, use them all.

Portworx 2.8 with FlashArray and FlashBlade Getting Started

Last week Portworx 2.8 went GA, and with it comes new support for Tanzu TKGs and TKGm (we have supported PKS/TKGi for a long time), as well as support for Cloud Drives on FlashArray and Direct Access for FlashBlade. It also simplifies the installation of Portworx with Tanzu compared to the earlier version.

FlashArray Volumes for Portworx

Portworx automates creating a storage pool from volumes provisioned on the FlashArray. This is done for you during the install; you can specify the size of the volumes in the spec generator at https://central.portworx.com

Process to install

NOTE as of 8/4/21: This feature is in Tech Preview (contact me if you want to run it in production).

  1. Create the px-pure-secret from the pure.json file
  2. Generate your Portworx Cluster spec from https://central.portworx.com
  3. Install the PX-Operator (command at the end of the spec generator).
  4. Install the Portworx Storage Cluster

Also look at the pure.json reference (https://docs.portworx.com/reference/pure-json-reference/) in order to get the API key and token into a secret for you to use with Portworx. The installation of Portworx detects this Kubernetes secret and uses that information to provision drives from the array.

Check out the YouTube demo:
FlashArray and FlashBlade from Portworx

Sample pure.json

{
    "FlashArrays": [
        {
            "MgmtEndPoint": "<first-fa-management-endpoint>",
            "APIToken": "<first-fa-api-token>"
        },
        {
            "MgmtEndPoint": "<second-fa-management-endpoint>",
            "APIToken": "<second-fa-api-token>"
        }
    ],
    "FlashBlades": [
        {
            "MgmtEndPoint": "<fb-management-endpoint>",
            "APIToken": "<fb-api-token>",
            "NFSEndPoint": "<fb-nfs-endpoint>"
        },
        {
            "MgmtEndPoint": "<fb-management-endpoint>",
            "APIToken": "<fb-api-token>",
            "NFSEndPoint": "<fb-nfs-endpoint>"
        }
    ]
}

kubectl create secret generic px-pure-secret --namespace kube-system --from-file=pure.json

Remember, the secret must be called px-pure-secret and be in the namespace where you install Portworx.

Select Pure FlashArray

2. Generate the spec for the Portworx Cluster.

3. Install the PX-Operator. I suggest using what you get from the spec generator online, or save it down to a local file.

kubectl apply -f pxoperator.yaml

4. Install the Storage Cluster

kubectl apply -f px-spec.yaml

For FlashBlade!

Also included in the pure.json are the API token and IP information for my FlashBlade. Since the FlashBlade serves NFS, the K8s node mounts it directly. We call this Direct Access, and it allows you to leverage your FlashBlade for data that may exist outside of the PX cluster. Watch the demo to see it in action. Create a StorageClass for FlashBlade and a PVC using that class, and Portworx automates the rest.
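As a rough sketch of that flow, the StorageClass and PVC look something like this. Treat the parameter names as my assumptions; the Portworx FlashBlade doc linked just below is the authoritative reference for the exact provisioner parameters.

kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-flashblade-direct         # arbitrary name
provisioner: pxd.portworx.com
parameters:
  backend: "pure_file"               # assumption: directs Portworx to provision on FlashBlade; confirm in the linked docs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fb-direct-claim
spec:
  storageClassName: px-flashblade-direct
  accessModes:
    - ReadWriteMany                  # NFS-backed, so shared access is the typical choice
  resources:
    requests:
      storage: 100Gi
EOF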
More info:
https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/pure-flashblade/

Links

https://docs.portworx.com/cloud-references/auto-disk-provisioning/pure-flash-array/
https://docs.portworx.com/reference/pure-json-reference/
https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/pure-flashblade/

Setting up Portworx on a Tanzu Kubernetes Grid aka TKG Cluster

First, this process works today on clusters made with the TKG tool that do not use the embedded management cluster. For clarity, I call those clusters TKC or TKC Guest Clusters. They run as VMs. You just can't add block devices outside of Cloud Native Storage (VMware's CSI driver). At least I couldn't.

Now, TKG deploys using a Photon 3.0 template. When I wrote this blog and recorded the demo, the latest version was TKG 1.2.1 and the K8s template was 1.19.3-vmware.

Check the release notes here: https://docs.portworx.com/reference/release-notes/portworx/#improvements-4

First, generate base64-encoded versions of your vCenter user and password.

# Update the following items in the Secret template below to match your environment:

VSPHERE_USER: Use output of printf <vcenter-server-user> | base64
VSPHERE_PASSWORD: Use output of printf <vcenter-server-password> | base64
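
For example (hypothetical credentials, substitute your own):

# generate the base64 strings to paste into the Secret below
printf 'administrator@vsphere.local' | base64
printf 'MySuperSecretPassword' | base64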

Save this as vsphere-secret.yaml, with your own base64-encoded vCenter user and password (from above):


apiVersion: v1
kind: Secret
metadata:
  name: px-vsphere-secret
  namespace: kube-system
type: Opaque
data:
  VSPHERE_USER: YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
  VSPHERE_PASSWORD: cHgxLjMuMEZUVw==


Run kubectl apply on the above spec after you update the template with your user and password.

Follow these steps:

# create a new TKG cluster
tkg create cluster tkg-portworx-cluster -p dev -w 3 --vsphere-controlplane-endpoint-ip 10.21.x.x 

# Get the credentials for your config
tkg get credentials tkg-portworx-cluster

# Apply the secret and the operator for Portworx
kubectl apply -f vsphere-secret.yaml
kubectl apply -f 'https://install.portworx.com/2.6?comp=pxoperator'

#generate your spec first, you get this from generating a spec at https://central.portworx.com
kubectl apply -f tkg-px.yaml 

# Wait till it all comes up.
watch kubectl get pod -n kube-system

# Check pxctl status
PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status

You can now create your own StorageClass or use one of the premade ones:

kubectl get sc
NAME                             PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
default (default)                csi.vsphere.vmware.com          Delete          Immediate           false                  7h50m
px-db                            kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
px-db-cloud-snapshot             kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
px-db-cloud-snapshot-encrypted   kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
px-db-encrypted                  kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
px-db-local-snapshot             kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
px-db-local-snapshot-encrypted   kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
px-replicated                    kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
px-replicated-encrypted          kubernetes.io/portworx-volume   Delete          Immediate           false                  7h44m
stork-snapshot-sc                stork-snapshot                  Delete          Immediate           false                  7h44m

Now Deploy Kube-Quake

The example.yaml is from my fork of the kube-quake repo on GitHub, where I redirected the data to a persistent volume.

kubectl apply -f https://raw.githubusercontent.com/2vcps/quake-kube/master/example.yaml
deployment.apps/quakejs created
service/quakejs created
configmap/quake3-server-config created
persistentvolumeclaim/quake3-content created

k get pod
NAME                      READY   STATUS              RESTARTS   AGE
quakejs-668cd866d-6b5sd   0/2     ContainerCreating   0          7s
k get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
quake3-content   Bound    pvc-6c27c329-7562-44ce-8361-08222f9c7dc1   10Gi       RWO            px-db          2m

k get pod
NAME                      READY   STATUS    RESTARTS   AGE
quakejs-668cd866d-6b5sd   2/2     Running   0          2m27s

k get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                         AGE
kubernetes   ClusterIP      100.64.0.1     <none>        443/TCP                                         20h
quakejs      LoadBalancer   100.68.210.0   <pending>     8080:32527/TCP,27960:31138/TCP,9090:30313/TCP   2m47s

Now point your browser to http://<some node ip>:32527, or if you have the LoadBalancer up and running, go to http://<LoadBalancer IP>:8080.

Finishing some setup for my Unifi Dream Machine Pro

I wanted a better home router. During the learning-from-home phase of the 2020 pandemic, I learned I could not have the advanced security features of the USG (Unifi Security Gateway) turned on and still get sufficient bandwidth for 3 kids and myself to stream and Zoom. So I wanted an upgrade, and I went with the Unifi Dream Machine Pro.
https://store.ui.com/collections/unifi-network-routing-switching/products/udm-pro
For reals though, file this post under: I need to remember what I changed in case I have to do it again.

OpenDNS

The first thing I did on my older routers was configure OpenDNS as the external DNS for my networks. In order for OpenDNS to apply my content filtering settings, it must know the source IP for my home. This can change because most ISPs use DHCP to assign IPs. Although it seems my ISP likes to reassign the same IP, I can't trust that will always be true.

So first, make sure you sign up for OpenDNS and DNS-O-Matic accounts.

Log into the UDM UI

Click on the Settings Gear…

Click on Advanced Features -> Advanced Gateway Settings

Click Create new Dynamic DNS

For DNS-o-matic the settings look like:
Hostname: all.dnsomatic.com
Username: [Your DNS-o-matic user]
Password: [Your DNS-o-matic password]
Server: updates.dnsomatic.com/\/nic/update?hostname=%h&myip=%i

The links below were very helpful

https://community.ui.com/questions/OpenDNS-not-working-with-UDM-Pro/c9d5589b-c14e-4c86-8470-4c228b0b5282

Very helpful link for getting the server URL. Also contains a few for some other services.
https://community.ui.com/questions/UDM-DynDNS-Google-Domains/fe9ba35d-66c3-437d-8323-debe2af55879#answer/2181146e-79b8-485c-8042-eb975c291242

https://community.ui.com/questions/Any-way-to-get-DNS-O-Matic-to-work-with-UDM-Pro-to-enable-OpenDNS-Home-with-dynamic-IP/ede30618-663c-43e0-b198-0f2cf2805e1d

DDClient

Another thing I want to do is set a DNS A record. I could probably use some form of the settings above to have Google's name service update the record with the dynamic IP. But why be boring? Let's run the ddclient Perl program in a container on my K3s cluster.

First, read the Google Domains documentation for dynamic records. I created a dynamic record, and it generates the host record along with a username and password that can be used via the API to update the IP associated with the domain name.

Next, why create the container if I don’t need to?

https://hub.docker.com/r/linuxserver/ddclient/tags?page=1&ordering=last_updated
My K3s cluster is on some Raspberry Pis, so I chose the ARM image.

Then another nice person built the deployment; check out that blog for full detail. Without getting too distracted by KubeSail and setting up K8s, I skipped to the YAML:
https://kubesail.com/template/loopDelicious/ddclient

Save this as ddclient-secret.yaml, changing the info as necessary for your Google account.

apiVersion: v1
kind: Secret
metadata:
  name: ddclient-secret
  labels:
    app: ddclient
stringData:
  ddclient.conf: |
    daemon=300
    syslog=yes
    protocol=dyndns2
    use=web
    server=domains.google.com
    ssl=yes
    login=<google generated login>
    password=<google generated password> 
    your.domain.record.com

Now save this as ddclient.yaml; remember to modify the image for the architecture your Kubernetes nodes run on.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ddclient-deployment
  labels:
    app: ddclient
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  replicas: 1
  selector:
    matchLabels:
      app: ddclient
  template:
    metadata:
      labels:
        app: ddclient
    spec:
      volumes:
        - name: ddclient-config-file
          secret:
            secretName: ddclient-secret
      containers:
        - name: ddclient
          image: linuxserver/ddclient:arm64v8-version-v3.9.1
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /config
              name: ddclient-config-file
          resources:
            requests:
              cpu: 10m
              memory: 64Mi
            limits:
              cpu: 50m
              memory: 128Mi

This deployment will use the secret for the settings and deploy the small container to update the Google Domain record with the new IP from the host.

kubectl create ns ddclient
kubectl -n ddclient apply -f ddclient-secret.yaml
kubectl -n ddclient apply -f ddclient.yaml

Some DNS stuff I might try later

This is an interesting repo that updates DNS with static hostnames; unfortunately, the UDM does not have this built in. I would suggest Ubiquiti build Pi-hole into the UDM Pro to integrate with its DHCP server and also provide the ability to block bad DNS names for ads/phishing/malware.

Ephemeral or Persistent? The Storage Choices for Containers (Part 3)

In this, the final part of a 3-part series, I cover the latest developments in ephemeral storage. Part 1 covered traditional ephemeral storage and Part 2 covered persistent storage.

CSI Ephemeral Storage

With the release of Kubernetes 1.15 came the ability, for CSI drivers that support it, to create ephemeral storage for pods using storage provisioned from external storage platforms. In 1.15 a feature gate needed to be enabled to allow this functionality, but with 1.16 and the beta release of the feature, the feature gate defaults to true.

Conceptually, CSI ephemeral volumes are the same as the emptyDir volumes discussed in Part 1, in that the storage is managed locally on each node and is created together with other local resources after a Pod has been scheduled onto a node. Volume creation has to be unlikely to fail; otherwise, the pod gets stuck at startup.

These types of ephemeral volumes are currently not covered by the storage resource usage limits of a Pod, because that is something that kubelet can only enforce for storage that it manages itself and not something provisioned by a CSI provisioner. Additionally, they do not support any of the advanced features that the CSI driver might provide for persistent volumes, such as snapshots or clones.

To identify if an installed CSI driver supports ephemeral volumes, just run the following command and check the supported modes:

# kubectl get csidriver
NAME       ATTACHREQUIRED   PODINFOONMOUNT   MODES                  AGE
pure-csi   true             true             Persistent,Ephemeral   28h

With the release of Pure Service Orchestrator v6.0.4, CSI ephemeral volumes are now supported by both FlashBlade and FlashArray storage. 

The following example shows how to create an ephemeral volume that would be included in a pod specification:

 volumes:
  - name: pure-vol
    csi:
      driver: pure-csi
      fsType: xfs
      volumeAttributes:
        backend: block
        size: "2Gi"

This volume will be 2GiB in size, formatted as xfs, and provided from a FlashArray managed by Pure Service Orchestrator.

Even though these CSI ephemeral volumes are created as real volumes on storage platforms, they are not visible to Kubernetes other than in the description of the pod using them. They have no associated Kubernetes objects; they are not persistent volumes and have no associated claims, so they are not visible through the kubectl get pv or kubectl get pvc commands.

When implemented by Pure Service Orchestrator, the name of the actual volume created on either a FlashArray or FlashBlade does not match the PSO naming convention for persistent volumes.

A persistent volume has the naming convention of:

<clusterID>-pvc-<persistent volume uid>

Whereas a CSI ephemeral volumes naming convention is:

<clusterID>-<namespace>-<podname>-<unique numeric identifier>

Generic Ephemeral Storage

For completeness, I thought I would add the next iteration of ephemeral storage that will become available.

With Kubernetes 1.19 the alpha release of Generic Ephemeral Volumes became available, but you do need to enable a feature gate for this functionality.

This next generation of ephemeral volumes will again be similar to emptyDir volumes, but with more flexibility.

It is expected that the typical operations on volumes that are implemented by the driver will be supported, including snapshotting, cloning, resizing, and storage capacity tracking.
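
As a sketch of what that looks like (assuming a 1.19+ cluster with the feature gate enabled and a storage class named pure-block, which is my own placeholder), a pod requests a generic ephemeral volume like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: pure-block    # placeholder: any CSI-backed storage class
          resources:
            requests:
              storage: 2Gi
EOF

Unlike a CSI ephemeral volume, this creates a real PVC (named after the pod and volume) that shows up in kubectl get pvc and is deleted along with the pod.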

Conclusion

I hope this series of posts has been useful and informative.

Storage for Kubernetes has been through many changes over the last few years and this process shows no sign of stopping. More features and functionality are already being discussed in the Storage SIGs and I am excited to see what the future brings to both ephemeral and persistent storage for the containerized world.