AWS Fargate for Amazon Elastic Kubernetes Service

On-demand cloud computing brings new ways to ensure scalability and efficiency. Rather than pre-allocating and managing certain server resources or having to go through the usual process of setting up a cloud cluster, apps and microservices can now rely on on-demand serverless computing blocks designed to be efficient and highly optimized.

Amazon Elastic Kubernetes Service (EKS) already makes running Kubernetes on AWS very easy. Support for AWS Fargate, which introduces the on-demand serverless computing element to the environment, makes deploying Kubernetes pods even easier and more efficient. AWS Fargate offers a wide range of features that make managing clusters and pods intuitive.

Utilizing Fargate

As with many other AWS services, using Fargate to run Kubernetes pods is very easy to do. To integrate Fargate and run a cluster on top of it, you only need to append the --fargate flag to your eksctl command.
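For example, a minimal cluster creation command might look like the following sketch, where the cluster name and region are placeholders to substitute with your own:

```shell
# Create an EKS cluster whose pods are scheduled onto Fargate.
# "demo-cluster" and "us-east-1" are placeholders.
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --fargate
```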

EKS automatically configures the cluster to run on Fargate. It creates a pod execution role so that pod creation and management can be automated in an on-demand environment, and it patches CoreDNS so the cluster can run smoothly on Fargate.

A Fargate profile is automatically created by the command. You can choose to customize the profile later or configure namespaces yourself, but the default profile is suitable for a wide range of applications already, requiring no human input other than a namespace for the cluster.
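If you prefer to declare the profile up front rather than customize it later, eksctl also accepts a ClusterConfig file. A minimal sketch, where the cluster name, region, profile name, and namespaces are all illustrative, might look like:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
fargateProfiles:
  - name: fp-default
    selectors:
      # Pods created in these namespaces are scheduled onto Fargate.
      - namespace: default
      - namespace: kube-system
```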

There are some prerequisites to keep in mind, though. For starters, Fargate requires eksctl version 0.20.0 or later. Fargate also comes with some limitations, starting with availability in only a handful of regions. Beyond that, Fargate doesn't support stateful apps, DaemonSets, or privileged containers at the moment. Check the AWS documentation for the full list of Fargate limitations.

Support for conventional load balancing is also limited, which is why the ALB Ingress Controller is recommended. At the time of this writing, Classic Load Balancers and Network Load Balancers are not yet supported.

However, you can still be very meticulous in how you manage your clusters, including using different clusters to separate trusted and untrusted workloads.

Everything else is straightforward. Once the cluster is created, you can begin specifying pod execution roles for Fargate. You can use the IAM console to create a role and assign it to a Fargate cluster, or you can create IAM roles and Fargate profiles via Terraform.
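As a rough sketch of the Terraform route, where the resource names, cluster name, and subnet ID are placeholders, the pod execution role, its managed policy, and the Fargate profile can be declared together:

```hcl
# Pod execution role that Fargate assumes to pull images and ship logs.
resource "aws_iam_role" "fargate_pod_execution" {
  name = "eks-fargate-pod-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_pod_execution.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}

# Fargate profile tying the role to a namespace selector.
resource "aws_eks_fargate_profile" "default" {
  cluster_name           = "demo-cluster"
  fargate_profile_name   = "fp-default"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = ["subnet-0123456789abcdef0"]

  selector {
    namespace = "default"
  }
}
```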

Profiles and Scheduling

Once the cluster is created, you no longer have to use eksctl to run commands; Fargate can also be managed from the AWS Management Console. To create a Fargate profile, select a cluster and click Add Fargate Profile.

You can define certain elements of the Fargate profile after assigning a unique name to it. You can, for instance, add tags to the profile. The tags are used exclusively for the profile and will not be propagated to pods within the cluster.

The pod execution role field is where you add the role you created earlier. If you don't see that role, make sure it has the eks-fargate-pods.amazonaws.com service principal attached to it. Lastly, define a subnet to complete the profile.
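For reference, the role's trust policy needs to allow the Fargate service principal to assume it, along these lines:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```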

There are other customizations and steps you can take to fine-tune Fargate to your specific needs. At this point, you can choose to update CoreDNS if you want pods to run exclusively on Fargate rather than on EC2. This is necessary because, by default, CoreDNS is configured to run on Amazon EC2 workers in Amazon EKS clusters.

Apply the modified coredns.json and run the kubectl patch deployment coredns command to commit the changes you have made. Complete the setup by migrating your application to Fargate and setting up the ALB Ingress Controller.
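The patch itself boils down to removing the compute-type annotation that pins CoreDNS to EC2 and then restarting the deployment so its pods are rescheduled onto Fargate, roughly:

```shell
# Remove the annotation that pins CoreDNS to EC2 workers.
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

# Restart CoreDNS so the new pods land on Fargate.
kubectl rollout restart -n kube-system deployment coredns
```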

Additional Features

You can configure Fargate pods in different ways depending on the specific needs of your application. Fargate offers a wide range of customization options, allowing you to define things such as vCPU and memory resources from the start.
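Resources are declared in the pod spec as usual; Fargate then rounds the requests up to the nearest supported vCPU/memory combination. A minimal sketch, with illustrative names, image, and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  namespace: default   # must match a Fargate profile selector
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        # Fargate rounds these requests up to the nearest supported
        # vCPU/memory combination, which is what you are billed for.
        requests:
          cpu: "0.5"
          memory: 1Gi
        limits:
          cpu: "1"
          memory: 2Gi
```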

You pay for Fargate based on the requested vCPU and memory resources. There is also a spot pricing option if you want to further lower the cost of running your application in the cloud. More importantly, Fargate can automate the management of long-running computing resources.

Let’s not forget that you can use the Vertical Pod Autoscaler (VPA) and/or Horizontal Pod Autoscaler (HPA). To do this, you need to install the metrics server so the VPA or HPA can consume resource metrics (CPU or memory), or Prometheus if you want to scale on custom or external metrics, depending on what makes more sense from the application's point of view. Everything else works just as it does when your Kubernetes cluster runs on EC2.
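For the HPA path, a typical sequence might look like the following, where the deployment name and scaling thresholds are placeholders:

```shell
# Install the Kubernetes metrics server so the HPA can read CPU/memory.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Scale the deployment between 2 and 10 replicas at 50% CPU utilization.
kubectl autoscale deployment demo-app \
  --cpu-percent=50 --min=2 --max=10
```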

Even better, each Fargate pod receives 20GB of ephemeral storage when it is spun up. The allocated storage is deleted when the pod is deleted, and it is encrypted using AES-256 in the latest iteration of AWS Fargate.

So, Why Fargate?

Making the decision to migrate to AWS Fargate is easier than you think. If you find yourself underutilizing your cluster resources most of the time, migrating to AWS Fargate is a great way to take cloud efficiency to a whole new level. The main reason for choosing this service, though, is that it allows you to focus on building your app rather than deploying and maintaining an EKS cluster. However, the limitations outlined above should be taken into consideration before making the decision.

Fargate maintains that simplicity we know and love from Amazon EKS, all while reducing costs and improving security. As long as you separate the clusters used to run apps with different security requirements, protecting your cloud environment becomes easier.

Besides, scaling up (and down) becomes completely hassle-free when your application runs on AWS Fargate. That, at the end of the day, is the biggest advantage of them all.  

Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.
