One of the big challenges of going cloud-native and using containers is enforcing security and usage policies. This is easy enough when the cloud infrastructure is relatively simple and serves only a limited number of users. Once the environment grows complex or more end-users start consuming cloud resources, clear governance becomes a must.
In Kubernetes, policy management and governance are made easier by the Open Policy Agent (OPA) Gatekeeper project, or Gatekeeper for short. It enables compliance checks and more thorough management of policies without sacrificing agility or ease of use. Gatekeeper acts as the agent that validates CRD-based policies enforced by OPA.
A Deep Dive into Gatekeeper
Before we talk about what Gatekeeper can do now, it is interesting to see how the project has evolved. Gatekeeper 1.0 used OPA itself as the admission controller, relying on the kube-mgmt sidecar to load policies into OPA and handle their validation.
Gatekeeper 2.0 used the same approach but added functionality. It gained audit capabilities, which are very handy when you have a complex architecture and want to ensure compliance at every node and Pod.
Gatekeeper 3.0 is the latest iteration and, by far, the most capable. It switches to declarative, CRD-based policies and integrates directly with the OPA Constraint Framework. As a result, Gatekeeper can now validate and audit resources more reliably, with mutation on the way (more on that below).
Gatekeeper also supports templates for policies. A ConstraintTemplate declares both the Rego logic that enforces a policy and the schema of the parameters it accepts; applying one generates a new Custom Resource Definition, and new Constraints are then created as instances of that CRD with concrete parameters.
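As a rough sketch, a template that makes certain labels mandatory might look like the following, modeled on the Gatekeeper project's well-known required-labels demo (the name k8srequiredlabels and the label logic are illustrative):

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # schema for the parameters a Constraint can pass in
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }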
Upon closer inspection, Gatekeeper has some very interesting features indeed, including:
- Admission control, which sits at the core of Gatekeeper as a tool. Gatekeeper's admission webhook gets triggered every time a resource is created, updated, or deleted within the monitored cluster. There is no need to go through a complex setup process either; you just need to install the necessary components.
- Constraints management, including support for Rego policies, CRDs, and parameters. A Constraint is no more than a set of declarations that defines how resources may be provisioned inside the cluster. You can, for instance, make certain labels mandatory using the standard metadata/spec/parameters structure (see the Constraint sketch in the matching discussion below).
- Audit, a handy feature for maintaining compliance across your cloud environment. As the name suggests, Audit functionalities are meant to be used for periodic evaluations of resources based on the Constraints recognized by Gatekeeper. All violations are recorded in the audit results.
- Data replication, which automates replication of Kubernetes resources into OPA. It does so through a sync Config resource that tells Gatekeeper which resources, such as namespaces and Pods, to replicate into the agent (see the sketch just after this list). Future audits and the overall consistency of cluster policy management depend on this replication.
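A minimal sync Config, assuming the default gatekeeper-system namespace and replication of namespaces and Pods, might look like this:

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: gatekeeper-system
spec:
  sync:
    syncOnly:
      # replicate these kinds into OPA so policies and audits can reference them
      - group: ""
        version: "v1"
        kind: "Namespace"
      - group: ""
        version: "v1"
        kind: "Pod"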
What’s interesting is the set of features that will soon come to Gatekeeper, particularly those related to mutating admission control. Context-aware policies and support for external data are also among the features planned for the near future, making Gatekeeper an even more valuable tool to use.
Deploying Gatekeeper
Deploying Gatekeeper is a straightforward process. You only need to run the prebuilt image using Helm, or go the HEAD route and build your own Gatekeeper. The latter requires you to have Kubebuilder and Kustomize installed.
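For the prebuilt route, installation boils down to two commands. At the time of writing the official chart lives at the URL below, but check the project's documentation for current details:

# add the official Gatekeeper chart repository
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
# install into its own namespace
helm install gatekeeper gatekeeper/gatekeeper --namespace gatekeeper-system --create-namespace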
The real challenge is defining the Constraint template, or several templates, for your cluster. The ConstraintTemplate must include the Rego that implements the policy and a schema for the parameters that define how the Constraint governs behavior within the cluster, as in the sketch above.
A default Constraint template can be found in the project's GitHub repository, and you only need to run the usual kubectl apply command to install it. Targeting and matching are two mechanisms you can use to make sure that policies are implemented correctly and on the right resources.
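Assuming you have saved a template like the earlier sketch to a local file (the file name here is illustrative), installing it looks like this:

# install the ConstraintTemplate; Gatekeeper then generates the matching CRD
kubectl apply -f k8srequiredlabels_template.yaml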
Speaking of matching, there are several fields you can set in a Constraint's match block to define the scope of the policy. Those fields are:
- namespaces
- kinds
- excludedNamespaces, for excluding namespaces and governing the rest
- labelSelector
- namespaceSelector
Matchers combine rather than compete: when you specify several of them, a resource must satisfy all of the criteria to fall within the Constraint's scope. This is handy for when you need to target specific parts of your cluster, such as particular kinds within particular namespaces.
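To make this concrete, here is a minimal sketch of a Constraint built on the K8sRequiredLabels template from earlier; the name, excluded namespace, and label value are illustrative:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
    excludedNamespaces: ["kube-system"]
  parameters:
    labels: ["owner"]

Once the audit runs, any violations show up in the Constraint's status field, which you can inspect with kubectl get k8srequiredlabels ns-must-have-owner -o yaml.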
Using Gatekeeper
The possibilities are endless with Gatekeeper. The agent is designed to be flexible without making policy management and enforcement complicated. A good way to use Gatekeeper is to stop new workloads from spinning up when untrusted registries are used. You can define the policy to check image registries and deny unapproved ones:
deny[reason] {
  some container
  input_containers[container]
  not startswith(container.image, "yourregistry.com/")
  reason := "container image refers to an unapproved registry"
}

# helper: gather the Pod's containers (initContainers can be collected the same way)
input_containers[container] {
  container := input.request.object.spec.containers[_]
}
Any new workload whose container images don't come from yourregistry.com will automatically be denied admission. You can better maintain the integrity of your entire cluster with this simple policy in force. On top of that, you gain better control over how automation tools manage the provisioning of server resources.
Another good way to use Gatekeeper is to do ingress validation. After configuring the TLS certificate and key, you can set a policy that checks whether each Ingress uses an approved host, one actually covered by that certificate:
deny[msg] {
  input.request.kind.kind == "Ingress"
  operations[input.request.operation]
  host := input.request.object.spec.rules[_].host
  not fqdn_matches_any(host, valid_ingress_hosts)
  msg := sprintf("invalid ingress host %q", [host])
}

operations := {"CREATE", "UPDATE"}

# approved hosts, supplied as a comma-separated whitelist (sample values)
valid_ingress_hosts := {host | host := split("acmecorp.com,qa.acmecorp.com", ",")[_]}

# exact-match check; wildcard handling is omitted for brevity
fqdn_matches_any(str, patterns) { patterns[str] }
You can even add extra parameters to whitelist additional hosts. Extending valid_ingress_hosts is as easy as appending entries to the comma-separated list. Of course, you can also prevent this policy from affecting certain namespaces or kinds by configuring the Constraint's match block accordingly.
As an added bonus, there are ways to test your policies without affecting the entire cluster. With the ingress validation example above, you can create Ingress objects using kubectl create and check whether the policy is enforced as expected:
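For instance, using a hypothetical manifest whose host is not on the whitelist:

# ingress-bad.yaml is an illustrative manifest with an unapproved host
kubectl create -f ingress-bad.yaml
# expected: the admission webhook rejects it with an "invalid ingress host" message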
It is also clear that policy enforcement doesn't require you to recompile or rebuild your cluster. There is no need to recompile Kubernetes components just to get Gatekeeper working effectively. With the Gatekeeper API up and running, you can manage policies as code and use declarative means to further secure your Kubernetes cluster.
Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.