Kubernetes is now widely regarded as the way forward for deployment in many organizations. The growing number of tools designed to simplify deployment and maintenance, along with the community of developers and admins supporting clustering and containerization tools like Kube and Docker, has solidified the use of containers (grouped into pods, in the case of Kubernetes) for building a DevOps-complementary infrastructure.
To benefit more from the use of K8s, many developers set up multiple Kubernetes-based disposable development environments to optimize workflow and minimize faults impacting other code. The idea is to simplify and automate, as much as possible, the transition from development to production, a concept referred to as continuous deployment. In reality, however, that transition isn’t always straightforward.
Due to the different ways development and production environments are set up, using K8s for these purposes raises a new set of challenges. There is a gap that needs to be filled between Kubernetes for development and Kubernetes for production. Fortunately, there are several things you can do to bridge that gap.
Paying Attention to the Operational Needs
Operations and security are typically the two most neglected aspects of Kubernetes in development. After all, these aspects are the least of your worries when you are running closed-loop instances. In most cases, the Kubernetes environment is set up for compatibility rather than for the specific needs of the production environment.
There is nothing wrong with this approach. However, simply migrating the development pod to a production environment isn’t going to cut it. You have to acknowledge that development pods are neither properly secured nor designed to work at scale.
The key to solving this issue is understanding the operational needs better. It’s important to maintain the reliability, integrity, and performance of the production environment at all costs. That means tuning each pod, the Services supporting your pods, and the ReplicationController behind them. Optimization typically focuses on production, since that environment carries the most load; development tends to run on smaller machines at a smaller scale. At the pod level, you can set CPU requests and limits as necessary. The idea is to keep refining: scale the number of pods until each one performs well, then check overall utilization and resource consumption. Automation can help simplify the deployment.
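As a sketch, CPU and memory requests and limits on a container might look like the following (the pod name, image, and numbers are illustrative, not taken from any real deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app              # hypothetical pod name
spec:
  containers:
    - name: web
      image: example/web:1.0 # illustrative image
      resources:
        requests:
          cpu: "250m"        # baseline the scheduler guarantees
          memory: "256Mi"
        limits:
          cpu: "500m"        # hard ceiling; CPU is throttled beyond this
          memory: "512Mi"    # exceeding this gets the container OOM-killed
```

Tuning these values per environment is what lets the same manifest run lean in development and at full scale in production.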
Making the Necessary Adjustments
Speaking of automation in deployment, the process of deploying new code to the production environment involves a number of repetitive tasks that can—and should—be fully automated, starting with the process of closing unnecessary network connections and securing the pod itself.
In a development environment, you don’t need to implement and work around the restrictive security settings a production environment demands. That looser setup makes development simpler, giving you the extra agility you need to make quick changes whenever necessary. Once the pod is migrated to production, however, only the ports and connections that are absolutely necessary should remain open.
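One way to enforce this in production is a NetworkPolicy that denies all other ingress and allows only the ports you explicitly need. A minimal sketch (the names, labels, and port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-only       # hypothetical name
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web               # applies to pods carrying this label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080         # the single port left open
```

Note that NetworkPolicy objects only take effect if the cluster runs a network plugin that enforces them.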
Using a trusted base image is also a recommended practice to get into. Yes, custom images tailored to your specific needs (often from a fellow developer in the Kube community) are more practical, but trusted images provide an additional level of security that you need in production. It is also worth going a step further and limiting the attack surface of your production environment.
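Limiting the attack surface can start in the pod spec itself. A hedged sketch of a restrictive securityContext (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                     # illustrative
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # trusted registry, illustrative
      securityContext:
        runAsNonRoot: true               # refuse to start as root
        allowPrivilegeEscalation: false  # block setuid-style escalation
        readOnlyRootFilesystem: true     # container cannot modify its own filesystem
        capabilities:
          drop: ["ALL"]                  # drop every Linux capability
```

Settings like these cost agility in development, which is exactly why they tend to be added only at the production boundary.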
Scaling and Balancing
In addition, remember that not all local K8s environments have everything a production environment requires. The two run on different foundations, and some services required in production are often left out of development.
Kube in production needs additional services to reach the required operational standards. For starters, you need solid monitoring and logging services running properly. The need for better certificate management is also undeniable. And then there are aspects such as firewalls and load balancers that require special attention. Once again, there are solutions that help bridge the gap. Service mesh technologies now provide answers to public-private deployments and the challenges that come with them. Istio, for example, simplifies the process of securing, controlling, and monitoring Kubernetes environments by providing a uniform foundation regardless of the cloud service you use. Twistlock is another great tool for optimizing cloud-native security on Kubernetes.
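Monitoring starts with the pods themselves reporting health so the cluster can act on it. A sketch of liveness and readiness probes (the endpoint paths, port, and timings are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monitored-app        # illustrative
spec:
  containers:
    - name: app
      image: example/app:1.0 # illustrative image
      livenessProbe:         # restart the container if this check fails
        httpGet:
          path: /healthz     # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:        # remove the pod from Service endpoints while failing
        httpGet:
          path: /ready       # hypothetical readiness endpoint
          port: 8080
        periodSeconds: 5
```

Probes feed the same signals your external monitoring stack consumes, so wiring them up early in development pays off in production.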
Bridging the Gap
Kubernetes is designed to allow developers access to a consistent environment, and that objective has been met to a degree. You always have a consistent environment to work with once you are inside a Kube pod.
The same cannot be said for the environment itself. The needs of Kubernetes in development and Kubernetes in production are different. In production, high availability solutions like NGINX and HAProxy provide load balancing, while extra security measures such as better credentials management become necessities.
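As a sketch of what that looks like in practice, an Ingress resource can put NGINX in front of a Service for load balancing and TLS termination (the hostname, Service name, and Secret are hypothetical, and the manifest assumes an NGINX ingress controller is installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # illustrative
spec:
  ingressClassName: nginx        # assumes the NGINX ingress controller
  tls:
    - hosts:
        - app.example.com        # hypothetical hostname
      secretName: web-tls        # TLS certificate stored as a Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # hypothetical backend Service
                port:
                  number: 80
```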
For maximum scalability, reliability, and performance, a disposable testing environment, designed to cost-effectively replicate your production environment at a smaller scale, is recommended. DevOps best practices and automation can help further close that gap, while built-in recovery features in K8s allow for problem-free deployment testing in most cases. You can even start early through proper planning and by adding resource constraints to the development environment. As more tools become available and are used in conjunction with clearly defined best practices and controls, the gap we see today between Kubernetes in development and Kubernetes in production is quickly disappearing.
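Adding resource constraints to the development environment can be as simple as a ResourceQuota on the namespace; a sketch (the namespace and numbers are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota            # illustrative
  namespace: development     # hypothetical dev namespace
spec:
  hard:
    pods: "10"               # cap the number of pods in dev
    requests.cpu: "2"        # total CPU all dev pods may request
    requests.memory: 4Gi
    limits.cpu: "4"          # total CPU limit across the namespace
    limits.memory: 8Gi
```

A quota like this forces development workloads to declare requests and limits up front, so resource behavior surfaces before the move to production.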
For more on Kubernetes, don’t miss our article on Why Kubernetes Is Ideal for Your CI/CD Pipeline.
Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with microservices, containers, cloud infrastructure, and CI/CD deployments. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and profit from our DevOps-as-a-Service offering too.