AWS Serverless Kubernetes Infrastructure with Amazon EKS on AWS Fargate


The focus of most cloud services and infrastructure is not just making cloud resources available but also making sure that your applications run smoothly and efficiently. The latter matters because cost-efficiency has always been a challenge for developers and administrators alike. Everything from provisioning more resources than you need to forgetting to destroy nodes that are no longer in use can cause your cloud expenses to balloon without you realizing it.

Cloud service providers are aware of this demand for better cost-efficiency, which is why they have introduced features like elasticity and serverless services over the past few years. In this article, however, we are going to focus on a specific service, AWS Fargate, and how it can be used to create a serverless Kubernetes infrastructure that supports your application. Let’s take a closer look, shall we?

EKS Pods on Fargate

AWS Fargate for EKS was first announced in 2019 and has since become the go-to service for developers and organizations who want to save money one pod at a time. As the name suggests, this flavor of Fargate uses Amazon EKS⁠—there is also AWS Fargate for ECS⁠—and therefore Kubernetes as its foundation.

What Fargate does is abstract the cluster’s infrastructure away from pod operations. You don’t have to establish your own control plane. You don’t even need to manage a data plane of worker nodes. You can go straight to creating a cluster and provisioning pods to run microservices or entire applications.

Since there is no need to allocate resources for underlying worker nodes, AWS Fargate for EKS is highly cost-efficient: you pay only for the vCPU and memory your pods request, for as long as they run. The fact that it is a managed service further lowers your overhead, so a small team of developers can manage a complex cloud infrastructure with ease.
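To make the pay-per-pod model concrete, the sketch below estimates one pod’s monthly bill as (vCPU × vCPU rate + memory GB × GB rate) × hours. The rates are illustrative us-east-1 figures only; always check current Fargate pricing for your region.

```shell
# Example rates only (roughly us-east-1); actual pricing varies by region and over time.
VCPU_RATE=0.04048      # USD per vCPU per hour (example rate)
MEM_RATE=0.004445      # USD per GB of memory per hour (example rate)

# One pod sized at 0.5 vCPU and 1 GB, running for 730 hours (about a month):
awk -v v="$VCPU_RATE" -v m="$MEM_RATE" \
    'BEGIN { printf "%.2f\n", (0.5 * v + 1 * m) * 730 }'
# Prints the estimated monthly cost in USD for that single pod.
```

Because the bill tracks pod requests rather than node capacity, right-sizing resource requests translates directly into savings.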

Fargate offers added flexibility too. For starters, you can run pods entirely on Fargate⁠—without managing any nodes of your own⁠—or use Fargate pods in a mixed, hybrid way: keep a regular EKS cluster with node groups and run additional pods on Fargate. This hybrid mode is especially useful for on-demand workloads, or when pods need more capacity than your existing nodes can provide.
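Once a Fargate profile matches a namespace, deploying to Fargate is ordinary kubectl work. The sketch below assumes a cluster whose Fargate profile matches a hypothetical "apps" namespace; all names and the image are examples.

```shell
# Assumes an EKS cluster with a Fargate profile matching the "apps" namespace.
kubectl create namespace apps

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: apps
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:          # Fargate sizes (and bills) the pod from these requests
            cpu: 250m
            memory: 512Mi
EOF

# Each pod is scheduled onto its own Fargate-managed capacity; no nodes to manage.
kubectl get pods -n apps -o wide
```

Note that the resource requests matter more here than on EC2 nodes: Fargate rounds them up to the nearest supported vCPU/memory combination and bills accordingly.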

Deployment with Fargate EKS

That brings us to the serverless nature of EKS on Fargate. The hybrid mode is interesting, but the real power of Fargate lies in how it handles deployment in a completely serverless way. You have several ways to deploy and manage pods, but the one thing that ties the entire orchestration together is the namespace you use.

You can access Fargate functionality in several ways. Kubernetes admission webhooks, for instance, intercept and process pod creation requests, and a dedicated Fargate scheduler handles the actual provisioning of pods. Even better, you can configure a Fargate profile in advance as an administrator and have all future deployments follow that rule.

A request starts with you deploying a pod into a specific namespace (e.g. a production namespace with a particular label). A Kubernetes webhook then validates the request and checks whether the namespace (and labels) match a Fargate profile in your EKS cluster. If they do, the pod is handed to the Fargate scheduler for deployment; otherwise it falls through to the default scheduler and any regular nodes you have.
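The matching rule lives in the Fargate profile itself. A minimal sketch with eksctl, using hypothetical cluster, profile, namespace, and label names:

```shell
# Any pod created in the "production" namespace with the label env=prod
# will match this profile and be scheduled onto Fargate.
# Cluster and profile names here are examples.
eksctl create fargateprofile \
  --cluster demo \
  --name prod-profile \
  --namespace production \
  --labels env=prod
```

Pods that miss the namespace or the label simply do not match the profile, which is what makes the namespace the linchpin of the whole orchestration.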

Here’s another interesting note about using Fargate to run serverless Kubernetes operations: you can also leverage AWS Lambda to run code without provisioning servers. Some of the work that would otherwise fall to Fargate can be offloaded to Lambda. Lambda functions can be strategically placed at the end of a pod’s processing and attached to automatic triggers. The reverse is also possible.

For example, you can configure Lambda to run pre-processing when a new set of data is uploaded to S3. Once that function completes, it can automatically trigger another Lambda function that spins up a pod on EKS Fargate with specific parameters (e.g. data selection parameters). The pod is created and the appropriate workload is executed.

Processed data can be stored in another S3 bucket, and that write can in turn trigger yet another Lambda function⁠—to notify administrators, say, or to write a log entry. The entire process requires no servers and does not force you to create your own cluster for pods.
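The first link in that chain⁠—an S3 upload invoking a pre-processing Lambda⁠—can be wired up with two CLI calls. The bucket, function name, account ID, and region below are all placeholders:

```shell
# 1. Allow S3 to invoke the (hypothetical) "preprocess" function.
aws lambda add-permission \
  --function-name preprocess \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::incoming-data

# 2. Tell the bucket to fire the function on every object upload.
aws s3api put-bucket-notification-configuration \
  --bucket incoming-data \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:preprocess",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```

From there, the function itself can call the Kubernetes API (or EKS APIs) to launch Fargate pods, and writes to the output bucket can be wired up the same way to drive the notification step.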

More importantly, the cost of the entire process is kept to a minimum, since you only pay for the computing power you actually use, and only when you use it.

Getting Started with Amazon EKS on AWS Fargate

Getting started with Fargate is very easy. To begin, choose whether you want to create a new EKS cluster or add Fargate to an existing one. For a new cluster, simply add --fargate to your eksctl create cluster command and eksctl will automatically create a pod execution role, add a Fargate profile for the default namespaces, and even adjust CoreDNS to work with Fargate.
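The one-command version of that setup looks like this (cluster name and region are examples):

```shell
# Creates an EKS cluster where workloads run on Fargate: eksctl sets up the
# pod execution role, a Fargate profile covering the default and kube-system
# namespaces, and patches CoreDNS to run on Fargate.
eksctl create cluster \
  --name demo \
  --region us-east-1 \
  --fargate
```

If you skip --fargate here, the remaining steps below walk through the same setup by hand.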

Next, you need to create a pod execution role. This is done from the AWS Management Console: add a new role for the EKS service with the EKS - Fargate use case, define the permissions, give the role a name, and you are all set. You can now create a Fargate profile that references this IAM role. Make sure you define the cluster, profile name, and namespace correctly.
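For those who prefer the CLI, a sketch of the same console steps (the role name is an example):

```shell
# Trust policy letting EKS Fargate assume the role on behalf of your pods.
cat > pod-execution-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
  --role-name EKSFargatePodExecutionRole \
  --assume-role-policy-document file://pod-execution-trust.json

# Grants the permissions Fargate pods need (e.g. pulling images from ECR).
aws iam attach-role-policy \
  --role-name EKSFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```

The resulting role ARN is what you supply when creating the Fargate profile.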

If you haven’t already, patch CoreDNS so that it can run on Fargate. This is a mandatory step if you are running pods without any EC2 nodes of your own: by default, CoreDNS is annotated to run on EC2 compute, so on a Fargate-only cluster you need to remove that annotation and make sure a Fargate profile covers the kube-system namespace.
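Assuming kube-system is already matched by a Fargate profile, the patch itself is a single kubectl command:

```shell
# Remove the annotation that pins CoreDNS to EC2 compute so its pods
# can be scheduled onto Fargate instead.
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

# Recycle the CoreDNS pods so the replacements land on Fargate.
kubectl rollout restart deployment coredns -n kube-system
```

Once the CoreDNS pods come back up on Fargate, cluster DNS works without any EC2 nodes at all.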

With EKS on Fargate, there are several benefits you get to enjoy outright, with better cost-efficiency being the biggest one of them all. You deploy pods faster, can stop worrying about node scheduling and capacity allocation, and each pod runs in its own isolated VM. You can even do without the Kubernetes Cluster Autoscaler, since Fargate provisions right-sized capacity for each pod on demand.

So, are you ready to bring your application to a serverless environment?   

Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.
