Kubernetes has swiftly become the de facto standard platform for running containerized workloads. That's because Kubernetes gets a lot of things right straight out of the box, and deployment manifests for application releases are a good example.
There are two components to typical containerized apps on Kubernetes, if you’re following current recommended practices:
- The application side—known as the deployment: Users outline a desired state through a deployment object and the deployment controller adjusts the actual state to meet the new configuration.
- The access endpoint definition side—known as a service: Because pods (and their assigned IP addresses) are ephemeral by nature, created and destroyed by ReplicaSets, they require the abstraction of a service to define a logical set of pods as well as an access policy for communicating with them.
Put simply, deployments define which image you want to use, how many containers to spin up and run, and what configuration is passed to them when they start. Deployments can be created using the kubectl run, kubectl apply, or kubectl create commands. In contrast, services define the load balancer you want sitting in front of those containers and, accordingly, which containers will receive traffic. Like other Kubernetes objects, a service is created by POSTing its definition to the API server.
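To make the contrast concrete, here is a minimal sketch of a service manifest that load-balances traffic across pods labelled `app: nginx` (the service name and port numbers are illustrative assumptions, not taken from a specific release):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        # illustrative name for this sketch
spec:
  selector:
    app: nginx           # routes traffic to any pod carrying this label
  ports:
  - protocol: TCP
    port: 80             # port the service exposes inside the cluster
    targetPort: 80       # port the selected containers listen on
```

The `selector` is what ties a service to its pods: any pod whose labels match receives traffic, regardless of which ReplicaSet created it.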
More often than not, devs use plain Kubernetes manifests to roll out an application and its resources. A deployment manifest is specific to Kubernetes and refers to the file that holds the definition of a deployment object. Here's an example deployment manifest in YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
- We create an `nginx` container deployment, as shown by the `metadata: name` field.
- The deployment launches three replicated pods, as shown by the `replicas` field.
- Each pod is labelled `app: nginx`, which is how the deployment's `selector` identifies the pods it manages.
- Each pod runs version `1.7.9` of the nginx image.
- Each container sends and receives traffic through port `80`, as shown by the `containerPort` field.
Deployment manifests make releases repeatable. Combined with the right tooling for process iteration, they let teams deploy containers at scale through a consistent method without starting from scratch every time.
Manifests are also highly useful in cases where an application release would otherwise need heavy refactoring as it moves from the development environment on a laptop to production in the cloud. With some planning, you can make choices that bring these environments closer together and reduce the amount of refactoring the manifest requires.
To write deployment manifests that work in both development and production, use persistent volume claims rather than hard-coding host-path volumes in your dev environment manifests. You can then swap the storage class behind the persistent volume claim depending on whether you're in a dev environment or a real cloud one, and the same deployment manifest works in either place. It's just a case of provisioning one storage class manifest for dev laptops and another for the cloud. The same approach applies anywhere this kind of environment-specific choice appears, such as secrets and the environment variables passed to containers.
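As a sketch of this pattern, you might maintain two storage class manifests that share a name, applying one per environment. The provisioner choices below are assumptions for illustration; the dev-side provisioner in particular varies by local cluster tooling:

```yaml
# dev-storageclass.yaml -- applied only on dev laptops
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: app-storage               # same name in both environments
provisioner: docker.io/hostpath   # illustrative local provisioner; varies by cluster
---
# cloud-storageclass.yaml -- applied only in the cloud cluster
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: app-storage
provisioner: kubernetes.io/gce-pd # GCE persistent disk provisioner
parameters:
  type: pd-standard
```

Because a PVC can reference the class by name (`storageClassName: app-storage`), the application manifests themselves stay identical across environments.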
Consider an example of what we mean:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: gce-1
    fsType: ext4
```
Do the two look familiar? They should; the outlines are similar by design. The two manifests are essentially matching templates (the kind of thing Helm charts formalize) defining the same object for replication across different environments. As you can see, both specify `apiVersion: v1` and the same Persistent Volume (PV) shape, but each targets a different provider. This set of manifests works both locally on your dev machine and in staging/production through a GCE provider, scaling as necessary.
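For instance, a Helm-style template could parameterize just the environment-specific portion of the PV spec. The chart layout and value names here are assumptions for illustration, not taken from the article:

```yaml
# templates/pv.yaml (hypothetical Helm chart layout)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.pv.name }}
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
{{- if eq .Values.environment "dev" }}
  hostPath:                # local dev laptops
    path: /tmp/data/pv-1
{{- else }}
  gcePersistentDisk:       # staging/production on GCE
    pdName: gce-1
    fsType: ext4
{{- end }}
```

Rendering with `--set environment=dev` or `--set environment=prod` would then produce the two manifests shown above from a single source.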
The next step would be to bind those Persistent Volumes with a PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```
This PVC will bind to one of the PVs we defined previously, because the requested size and access modes match. Working locally, it binds to the `hostPath` PV; on GCE, it binds to the `gcePersistentDisk` PV. The part that matters here is `name: mysql-pv-claim`, as we need to reference it later in the pod spec.
The last step now would be to mount it:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
```
The `volumes` section at the bottom of the deployment manifest defines a volume named `mysql-persistent-storage`, backed by our PVC via `claimName: mysql-pv-claim`, and the `volumeMounts` entry in the container spec mounts it at `/var/lib/mysql`.
By writing deployment manifest files in this manner, you can version-control them and reuse them in a repeatable, scalable way across both development and production.
Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.