Optimizing Azure functions on Kubernetes with KEDA

By Lucia Sampayo

After the events of 2020, more and more organizations are transforming their applications to become cloud native and taking containerization further. Gartner predicts that by 2023, 70 percent of global organizations will be running more than two containerized applications in production. As of 2019, more than half of cloud native projects in production were born cloud native (purpose-built rather than reworked apps), and adoption of the approach is only increasing. For more on becoming cloud native, don’t miss our most recent blog on the subject here.

The typical requirement for becoming cloud native is to run an application at scale and speed without risk of downtime. Using Kubernetes, you can containerize such an app and deploy it on a cluster to realize this aim. Kubernetes then spins up containers inside pods to run the app and ensures that the resources it needs are distributed cleanly across the available infrastructure.

But scaling containers in and out on demand, in step with an application’s usage, is a tough job. This is where Azure Functions can help support your business-critical workload. By running Azure Functions on Kubernetes with KEDA, you can scale your application resources in or out dynamically as demand requires.

Azure Functions Running in Kubernetes

Azure Functions is a fully managed serverless service that runs any code written against the Azure Functions runtime and programming model. You publish code to the cloud, and the service runs, scales, and manages it all for you.

Scaling in Azure Functions is fully event-driven. For example, the code you write is triggered by an assigned event, such as somebody clicking the checkout button on your website. Once you publish that code to Azure Functions, the service starts listening for your checkout events. When a checkout event pops up, it spins up enough instances to run the code you wrote, then scales back down to zero. With the service, you only pay while your code is running. This is the behavior when you use Azure Functions outside the cluster.

With KEDA, the function consumes no resources in the cluster while it is idle, but you are still paying for the cluster, so the cost does not vary.

Leveraging KEDA

Out of the box, Kubernetes is not well suited to event-driven scaling, because by default it scales on resource metrics such as CPU and memory. KEDA is a Cloud Native Computing Foundation (CNCF) sandbox project that provides an event-driven scale controller that can run inside any Kubernetes cluster alongside the Horizontal Pod Autoscaler (HPA). KEDA monitors the rate of events so it can proactively scale a container even before there is any impact on CPU. The tool allows containers to scale to and from zero in the same way an Azure Functions or AWS Lambda service in the cloud can. KEDA is completely open source and you can install it on any cluster, which makes the tool very non-intrusive: you can add KEDA to a Kubernetes cluster that already has deployments, or map it only to the things you want to scale.

As well as acting as an agent that activates and deactivates deployments in the Kubernetes cluster to scale them, KEDA acts as a Kubernetes metrics server, exposing event data such as queue length to the Horizontal Pod Autoscaler (HPA) to drive scaling. There are several ways to deploy KEDA on a Kubernetes cluster: Helm charts, Operator Hub, or YAML declarations. In this article, we use the YAML declaration of the latest KEDA version, currently 2.2.0, to deploy it on the Kubernetes cluster.

osboxes@osboxes:~$ kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.2.0/keda-2.2.0.yaml

Check that the deployments are ready in the keda namespace.

osboxes@osboxes:~$ kubectl get deployments -n keda
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
keda-metrics-apiserver   1/1     1            1           74s
keda-operator            1/1     1            1           73s

Check the available CRDs.

osboxes@osboxes:~$ kubectl get customresourcedefinition
NAME                                        CREATED AT
scaledjobs.keda.sh                          2021-05-14T21:18:34Z
scaledobjects.keda.sh                       2021-05-14T21:18:34Z

You have now successfully installed KEDA.
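Those two CRDs are how you tell KEDA what to scale. A ScaledObject binds a deployment to an event source; the sketch below is illustrative only (the deployment name and replica counts are placeholder values), using KEDA v2’s azure-queue scaler:

```yaml
# Illustrative ScaledObject for KEDA v2 — names and limits are placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    name: queue-consumer        # the Deployment KEDA should scale
  minReplicaCount: 0            # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
  - type: azure-queue
    metadata:
      queueName: js-queue-items
      queueLength: "5"          # target messages per replica
      connectionFromEnv: AzureWebJobsStorage
```

Later in this walkthrough, the Azure Functions tooling generates an equivalent object for us, so you rarely need to write one by hand.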

Azure Functions and KEDA in Practice

To implement Azure Functions on Kubernetes with KEDA, the prerequisites are:

  • Azure Functions Core Tools v3
  • Azure free account
  • A Docker account
  • A Kubernetes cluster

Run the below command to install the Azure Functions Core Tools V3 on an Ubuntu machine.

osboxes@osboxes:~$ sudo npm i -g azure-functions-core-tools@3 --unsafe-perm true

Create a directory to run this project.

osboxes@osboxes:~$ mkdir keda-demo
osboxes@osboxes:~$ cd keda-demo/

Using the func command, initialize the directory for functions. Select “node” for the worker runtime and “javascript” as the language.

osboxes@osboxes:~/keda-demo$ func init . --docker

Run the command below to add a new queue-triggered function and select “Azure Queue Storage trigger”. The function name will be “QueueTrigger” by default.

osboxes@osboxes:~/keda-demo$ func new

The function “QueueTrigger” will be created successfully from the “Azure Queue Storage trigger” template.

Next, we need to create a few Azure resources, so we need the Azure CLI. Run the command below to install it.

osboxes@osboxes:~$ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

Log in to Azure.

osboxes@osboxes:~$ az login

The commands below create a resource group keda-demo, a storage account kedastorage123, and a queue js-queue-items.

osboxes@osboxes:~$ az group create -l westus -n keda-demo

osboxes@osboxes:~$ az storage account create --sku Standard_LRS --location westus -g keda-demo -n kedastorage123

osboxes@osboxes:~$ CONNECTION_STRING=$(az storage account show-connection-string --name kedastorage123 --query connectionString)

osboxes@osboxes:~$ az storage queue create -n js-queue-items --connection-string $CONNECTION_STRING
{
  "created": true
}

The Azure command below gives you the connection string; copy the output.

osboxes@osboxes:~/keda-demo$ az storage account show-connection-string --name kedastorage123 --query connectionString


Type “code .” in the terminal to open the keda-demo directory in Visual Studio Code. Now copy the output of the previous command into the AzureWebJobsStorage value in the local.settings.json file.

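For reference, local.settings.json should end up looking roughly like this; the connection string shown is a placeholder, and yours will contain the real account key from the previous command:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=kedastorage123;AccountKey=<your-key>;EndpointSuffix=core.windows.net"
  }
}
```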

In the function.json file inside the QueueTrigger folder, set the connection property to AzureWebJobsStorage.
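Assuming the default template values, function.json should then look roughly like this:

```json
{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "js-queue-items",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

Note that the template’s default queue name, js-queue-items, matches the queue we created earlier, so no change is needed there.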


Replace the content of host.json with the lines below.

{
    "version": "2.0",
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[1.*, 2.0.0)"
    }
}

Now, start the function locally.

osboxes@osboxes:~/keda-demo$ func start

Go to the Azure storage account kedastorage123 -> Storage Explorer -> js-queue-items. Click Add message and enter some text to send it to the Azure function.


Once you add the message, this is what the output will look like: the function fires and processes your message.

[2021-05-15T20:24:33.198Z] JavaScript queue trigger function processed work item This is a Keda Demo.
[2021-05-15T20:24:33.860Z] Executed 'Functions.QueueTrigger' (Succeeded, Id=d7beeaf9-0f3b-4245-ac37-b33654d79da7, Duration=3838ms)

Now, the Azure function is ready. We will now deploy the function to KEDA.

Log in to Docker.

osboxes@osboxes:~/keda-demo$ docker login
Login Succeeded

Deploy the function; this command builds the Docker image, pushes it to Docker Hub, and deploys it to the cluster in one step (here demodocker is the Docker Hub user).

osboxes@osboxes:~/keda-demo$ func kubernetes deploy --name keda-demo --registry demodocker

Alternatively, use --dry-run to generate a deployment file you can inspect before applying it to the cluster.

osboxes@osboxes:~/keda-demo$ func kubernetes deploy --name keda-demo --registry <docker-user-id> --javascript --dry-run > deploy.yaml
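The generated deploy.yaml bundles everything the function needs on the cluster. In outline (heavily trimmed, with placeholder values) it contains roughly a Secret holding the app settings, a Deployment, and a KEDA ScaledObject wired to the queue:

```yaml
# Trimmed, illustrative outline of what the dry-run emits — not the full file.
apiVersion: v1
kind: Secret
metadata:
  name: keda-demo
data:
  AzureWebJobsStorage: <base64-encoded connection string>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keda-demo
spec:
  template:
    spec:
      containers:
      - name: keda-demo
        image: demodocker/keda-demo
        envFrom:
        - secretRef:
            name: keda-demo      # settings from local.settings.json
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: keda-demo
spec:
  scaleTargetRef:
    name: keda-demo              # the Deployment above
  triggers:
  - type: azure-queue
    metadata:
      queueName: js-queue-items
      connectionFromEnv: AzureWebJobsStorage
```

This is why no manual KEDA configuration is needed: the tooling derives the scaler from the function’s queue trigger binding.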

Build and push the container image.

osboxes@osboxes:~/keda-demo$ docker build -t demodocker/keda-demo .
osboxes@osboxes:~/keda-demo$ docker push demodocker/keda-demo

Finally, apply the deployment to your cluster.

osboxes@osboxes:~/keda-demo$ kubectl apply -f deploy.yaml

Now, if you check the deployments on the cluster, you will see the Azure function running with KEDA.

osboxes@osboxes:~/keda-demo$ kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
keda-demo   1/1     1            1           20m

Repeat the Add message step from before: go to the Azure storage account kedastorage123 -> Storage Explorer -> js-queue-items and keep adding messages to the queue for a few seconds.

Now, recheck the deployment status. You will see that, to process the backlog of messages, KEDA has automatically scaled the deployment out as required. Once all the messages have been processed, the deployment scales back down to zero.

osboxes@osboxes:~/keda-demo$ kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
keda-demo   4/4     4            4           22m

You have successfully run Azure Functions on Kubernetes with KEDA! With these two services working hand in hand, your cluster can now scale dynamically with demand.

Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.