It’s easy to overlook how young Kubernetes is in the world of containerization. Given its explosion in popularity, you’d be forgiven for forgetting that the software is not even four years old yet. Those using it are working at the frontier of a technology that is leaving other platforms in its wake.
Containers simplify software development by abstracting away the details of application execution. Getting these operations right has become critical for competing platforms: running modular containerized deployments allows IT teams to drastically reduce overhead and operational complexity compared to virtual machines.
Last year, Docker, the original frontrunner of container technology since its release in 2013, ceded the orchestration floor by announcing that it too would offer support for Kubernetes (also known as Kube or by the numeronym K8s). CTO Solomon Hykes made the announcement in October 2017, and engineers can already sign up for the beta version.
Introduction to Kubernetes
Developed by Google, Kubernetes is a powerful platform for managing containerized applications in a clustered environment. In this article, we’ll shine a spotlight on the architecture, examine the problems it can solve, and take a look at the components the Kube model uses to handle containerized deployments and scaling.
So, What Is Kube?
Kube is an open source container orchestration platform for managing distributed containerized applications at massive scale. It helps address the logistical challenges organizations face in managing and orchestrating containers across production, development, and test environments through declarative configuration, which eliminates a large class of errors. Without the container orchestration capabilities of Kube or Docker Swarm, teams would need to manually update hundreds of containers every time new features are released, making deployments slow and error-prone. With Kube, teams can automate application configuration, manage application life cycles, and maintain and track resource allocation within server clusters. Containerized applications typically run on a container runtime such as Docker or rkt.
As well as automating release and deployment processes, Kube provides a “self-healing” environment for application infrastructure should anything crash. In the event of an incident, the platform reconciles the observed cluster state with the user’s desired state. If a worker node crashes, for example, its pods are rescheduled onto available nodes.
With Kubernetes, teams can:
- Automatically, and immediately, scale clusters on demand (scaling back when capacity isn’t needed to save resources and money).
- Run anywhere: on-premises, in a public or private cloud (e.g., AWS or Google Cloud), or in a hybrid configuration.
- Deploy consistently across bare metal, local development, and cloud environments, thanks to its portability.
- Spend less time debugging and more on delivering business value.
- Separate and automate operations and development.
- Rapidly iterate deployment cycles and improve system resilience.
The learning curve for Kube is considered somewhat steeper than Docker’s, as concepts from vanilla Docker Swarm don’t translate directly across. To work productively with Kube, therefore, it’s worth understanding its components and how they function within the architecture.
Pods
“Pods are the smallest deployable units of computing that can be created and managed in Kubernetes,” state the official Kubernetes docs. In Docker, the equivalent unit is a single container. A Kubernetes pod, however, is not limited to one container and can include as many as an application needs to run together.
All containers within a pod run as though on a single host, sharing a set of Linux namespaces, an IP address, and a port space. As a group of one or more containers, they communicate over the standard inter-process communication (IPC) namespace and access the same shared volumes. By itself, a pod is ephemeral and will not be rescheduled to a new node if it dies. This can be overcome, though, by keeping one or more instances of a pod alive with replica sets. More on these later.
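To make this concrete, here is a minimal sketch of a two-container pod manifest. The names, images, and paths are purely illustrative; the point is that both containers share the pod’s network identity and can mount the same volume:

```yaml
# pod.yaml — hypothetical pod with an app container and a logging sidecar
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # illustrative name
  labels:
    app: web
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}         # scratch volume visible to both containers
  containers:
    - name: app
      image: nginx:1.13    # both containers share the pod's IP and port space
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper    # sidecar reading the same volume
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
```

A manifest like this would be submitted with `kubectl create -f pod.yaml`; because it defines a bare pod, it would not be rescheduled if its node failed.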
Labels & Selectors
Labels are key/value attributes that can be assigned to objects such as pods or nodes. They should capture the distinguishing characteristics of an object that are significant and appropriate to the user, and they can be assigned at object creation time or attached and modified later.
Use labels to identify, organize, and select subsets of objects, and to create order within the many dimensions of a development pipeline. Information such as release track (beta, stable), environment type (dev, prod), and/or architectural tier (frontend/backend) can all be encoded in labels.
Labels and selectors work in tandem as the core means of managing objects and groups. There are two types of Kubernetes selectors: equality-based and set-based. Equality-based selectors match objects by exact key/value equality (or inequality), while set-based selectors match keys according to sets of values.
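As an illustrative sketch (the label keys and values here are invented), the two selector styles might appear in an object spec like this:

```yaml
# Equality-based: match pods whose labels equal these key/value pairs exactly.
selector:
  matchLabels:
    environment: prod
    tier: frontend
---
# Set-based: match pods whose label values fall inside (or outside) a given set.
selector:
  matchExpressions:
    - {key: environment, operator: In, values: [dev, staging]}
    - {key: tier, operator: NotIn, values: [backend]}
```

The same styles work on the command line, e.g. `kubectl get pods -l environment=prod,tier=frontend` for equality-based filtering, or `kubectl get pods -l 'environment in (dev,staging)'` for set-based filtering.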
Replica Sets
As mentioned above, a pod won’t be rescheduled if the node it runs on goes down. Replica sets overcome this issue in Kube by ensuring that a specified number of pod instances (replicas) are running at any given time. Therefore, to keep your pod alive, make sure it is managed by a replica set.
As well as managing single pods, replica sets can manage, and scale to large numbers, groups of pods categorized by a common label. Because much of this is automated within a deployment, you will rarely need to manage this scaling capability directly, but it’s worth understanding how the system functions to better manage your applications.
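A minimal sketch of a replica set manifest, using the same hypothetical `app: web` label as above, might look like this:

```yaml
# replicaset.yaml — hypothetical ReplicaSet keeping three labeled pods alive
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3               # desired pod count; Kube recreates pods to match it
  selector:
    matchLabels:
      app: web              # manages any pod carrying this label
  template:                 # pod template used to stamp out replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.13
```

If a node hosting one of these pods fails, the replica set notices the shortfall and schedules a replacement pod elsewhere, which is the “self-healing” behavior described earlier.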
Networking
Within Kube, networking is all about connecting the network endpoints: pods. Because containers in different pods have distinct IP addresses, they must communicate by means other than IPC. Kubernetes networking solves cross-node pod-to-pod connectivity as well as service discovery and pod-to-pod load balancing.
Pods are secured by limiting access through network segmentation. Network policies define how subsets of pods are allowed to interact with each other and other network endpoints. Configuration is on a per-namespace basis.
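As a sketch of such segmentation (the tier labels and port are assumptions for illustration), a network policy restricting which pods may talk to a backend could look like this:

```yaml
# networkpolicy.yaml — hypothetical policy: only frontend pods may reach backend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default        # policies are configured per namespace
spec:
  podSelector:
    matchLabels:
      tier: backend         # the subset of pods this policy protects
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend   # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080           # and only on this port
```

Note that enforcement requires a network plugin that supports network policies; without one, a policy like this has no effect.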
Services
A Kubernetes service is an abstraction that defines a logical subset of pods according to labels (see above). A service uses label selectors to identify the group of pods it routes to, so this ease of endpoint management comes down to labels. As well as service discovery, the abstraction provides internal load balancing across the pods within a cluster.
Kubernetes provides two primary methods of finding a service. As a pod is run on a node, the kubelet (node agent) adds environment variables for each active service according to predefined conventions. The other method is to use the built-in DNS service (a cluster add-on). The DNS server watches the Kubernetes API for new services and assigns a set of DNS records to each. When DNS is enabled throughout the cluster, all pods should be able to resolve service names automatically.
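Tying the pieces together, here is a sketch of a service fronting the hypothetical `app: web` pods from the earlier examples (names and ports are illustrative):

```yaml
# service.yaml — hypothetical Service load-balancing across pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web-service          # the DNS add-on publishes this name inside the cluster
spec:
  selector:
    app: web                 # the service's endpoints are all pods with this label
  ports:
    - protocol: TCP
      port: 80               # port the service exposes
      targetPort: 8080       # port the pod containers listen on
```

Other pods in the same namespace could then reach the group simply as `web-service`, while the kubelet would also inject environment variables such as `WEB_SERVICE_SERVICE_HOST` into newly started pods.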
Looking for help running containers in the more immediate future? Caylent has you covered. Check out our new DevOps-as-a-Service offering; it’s like having a full-time DevOps Engineer on staff for a fraction of the cost.