Running an application as a series of microservices is not without its challenges. While this type of deployment is flexible and offers good scalability, issues remain around per-request observability, detailed statistics, logging, and distributed tracing. If you run a CI/CD cycle, you know how tricky these can be.
To mitigate these challenges, a service mesh is deployed as a dedicated infrastructure layer. The control plane handles configuration and policy through APIs and automation, while the data plane, made up of proxies sitting alongside each service, carries the actual service-to-service traffic; this is what allows each service to be configured to work with every other.
What Can a Service Mesh Do?
Before we get to the various tools you can use to manage a service mesh in a Kubernetes environment, it is important to understand what the layer can and cannot do. A service mesh brings a number of features and benefits to the table, starting with improved resilience: the mesh can automatically retry failed requests and manage timeouts and other communication parameters.
In addition, the mesh helps eliminate single points of failure. In the event of a microservice failure, the service mesh will find, within its configured parameters, a way to keep the entire app running. This includes balancing load across microservice instances and the communications between them, allowing redundancy to be built into the layer.
The main function of a service mesh, however, is communication control. It governs how requests are routed between service instances and communication endpoints. In a CI/CD cycle, a service mesh eliminates the need for manual reconfiguration of containers on every deployment. The better you understand how a service mesh can benefit your use case, the better you can evaluate the tools available at this layer.
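To make this concrete, here is a rough sketch of what declarative communication control looks like in practice. This example uses Istio's VirtualService resource (one of the tools discussed below); the service name `reviews` and the retry values are hypothetical:

```yaml
# Hypothetical Istio VirtualService: route requests for the "reviews"
# service and retry failed calls automatically, with no changes to
# application code.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # in-mesh service this rule applies to
  http:
    - route:
        - destination:
            host: reviews
      retries:
        attempts: 3      # retry a failed request up to three times
        perTryTimeout: 2s
```

Because rules like this live in the mesh rather than in the application, routing and retry behavior can change with each deployment without rebuilding any container.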
Service Mesh Tools
There are numerous service mesh tools to choose from, but the four we are going to focus on in this article are Linkerd, Consul, Istio, and Linkerd2, potentially the most well known of the available tools out there. The general principles of these tools are all fundamentally similar, as we covered earlier, but the way those principles are implemented differs depending on the tool and environment you use.
So, which of these service mesh tools is the best for you? To answer that question, it’s important to take a deeper look into the features and advantages offered by each tool.
Linkerd is recognized as the tool that coined the term "service mesh," which makes it popular in the cloud landscape. It is built on Twitter's Finagle library and written in Scala. The combination offers something unique: performance that is nearly unrivaled. Linkerd can handle a large volume of requests per second, and you can scale the mesh across multiple nodes.
Linkerd is also unique among the tools discussed in this article in how it is deployed. Linkerd uses a per-node agent architecture rather than the usual sidecar pattern. It is also compatible with multiple container platforms, including Docker and Kubernetes.
If you are running a small app on a single node, Linkerd may be the most efficient option to go for. It doesn't support TCP requests or websockets, but you do get superb traffic control and the ability to connect to external services in another cluster.
These advantages make Linkerd perfect for managing multiple clusters running different containers. In fact, Linkerd may be the only option if that is the environment you are trying to manage.
Linkerd2.x is not simply a refined version of Linkerd; it was rewritten from the ground up for Kubernetes. Instead of Scala on the JVM like Linkerd, Linkerd2 is written in Go (the control plane) and Rust (the data plane proxy). It also uses the sidecar pattern instead of a per-node agent, giving it more flexibility in terms of scalability and performance, and the rewrite brings a substantial improvement in memory usage.
Linkerd2's main advantage is the tight integration of its data and control planes. With an efficient configuration and the two running in close proximity, Linkerd2 can minimize the latency between them to a surprising degree. That said, Linkerd2 is still at an early stage, so you may not get the same advanced features as other, more mature service mesh tools.
For example, Linkerd2 does not yet offer stable encryption. There is a built-in encryption feature, but it is still experimental and may not provide the same level of security as a mature implementation. Linkerd2 also lacks traffic control features and support for external clusters.
Linkerd2 is catching up. One thing worth noting is the huge community of developers working on Linkerd2 as the service mesh tool of their choice. It may take time before the tool can offer an extensive set of features, but the future is bright for Linkerd2.
Consul is another popular service mesh tool for Kubernetes. It is incredibly stable and offers the right set of features for managing service-to-service communication, which is why it quickly became a favorite among administrators and developers alike. It too uses a sidecar pattern, and it supports a range of container environments, including Kubernetes.
Consul is positioned as a single integrated tool. It can help you manage both the data plane and the control plane in a seamless way. When you have two or more Kubernetes clusters with multiple services, all of them running Consul, you can have those services communicate without having to jump through hoops to configure them.
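As a sketch of how little configuration this takes, here is what registering a service with Consul Connect (Consul's service mesh feature) can look like. The `web` and `billing` service names and the ports are hypothetical:

```hcl
# Hypothetical Consul service registration (HCL). The connect block asks
# Consul to run a sidecar proxy for "web", and the upstream entry lets
# "web" reach the "billing" service on localhost:9191 over the mesh.
service {
  name = "web"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "billing"
            local_bind_port  = 9191
          }
        ]
      }
    }
  }
}
```

The application itself only ever talks to its local sidecar; Consul handles discovery and routing to `billing` wherever it happens to be running.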
From the start, Consul was developed as a full mesh without a separate, centralized control plane, an approach that helps lower latency and remove performance bottlenecks. At the same time, you maintain control over the separation of your data plane rather than sacrificing flexibility for a more seamless service mesh.
Consul also supports Envoy as its default data plane proxy, and it is designed with a user-friendly configuration interface, making it a service mesh tool that will suit many developers without extensive cloud administration experience. Since Consul's service mesh features are relatively new, expect to see many more improvements added to this tool.
There is no doubt that Istio is among the most stable of all service mesh tools on the market, though maturity may not be the biggest factor in this case. Istio is fully backed by top names in the Kubernetes ecosystem, including IBM, Google, and Lyft; it is difficult to imagine Google backing a service mesh tool that isn't reliable for large-scale use.
Istio is built using the same sidecar pattern, but it has an extra trick up its sleeve: automatic sidecar injection. You can add the mesh to any Helm chart without manually configuring the deployment every time. Istio also uses Envoy as its default data plane proxy.
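Automatic injection is typically switched on per namespace with a single label. A minimal sketch, assuming a hypothetical namespace named `my-app`:

```yaml
# Hypothetical namespace with Istio automatic sidecar injection enabled.
# Istio's mutating admission webhook adds an Envoy sidecar container to
# every pod created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    istio-injection: enabled
```

With the label in place, workloads deployed from an unmodified Helm chart come up with their sidecars already attached.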
A big downside is the lack of support for secure out-cluster connections. It is also not the easiest service mesh tool to use; you have to be willing to learn about how to optimize Istio and use its features before you can fully benefit from this service mesh.
That said, Istio's default configuration is good enough for most installations, which makes it more accessible than it first appears. You can bring up a Kubernetes instance with minikube, Helm, and Istio in one seamless go; that's how well integrated Istio is with Kubernetes.
Which Service Mesh Tool Should I Use?
Unfortunately, there is no definitive answer to this question. No service mesh tool is the outright winner in this comparison. They offer different feature sets and are great for different use cases. The service mesh tools we discussed in this article are also developed differently and use different languages, which is something you need to take into account as well.
Choosing the right service mesh tool is a matter of deep-diving into your environment and understanding your requirements to find one that meets those needs. What I can tell you is this: between Linkerd, Linkerd2, Consul, and Istio, there is a service mesh tool that suits you.
Keen to read more about working with Kubernetes? Check out our other posts on Kubernetes here.
Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.