This post on leveraging serverless architecture was originally published here on DZone.com.
Serverless computing, also known as serverless architecture, or sometimes just serverless, is a hot topic in computing right now. Amazon, Google, and Microsoft, the big three in cloud computing, are all investing heavily in serverless offerings for businesses of all types. Guidebooks on the subject are now an increasingly common sight on bookshelves, alongside the usual programming and networking titles.
But even if you have come across serverless architecture before, it can still take some time to wrap your head around the concept properly. Many people find themselves wondering not so much what serverless architecture is, but how it is currently best employed.
What is Serverless Architecture?
As with many revolutionary technology trends, serverless architecture is hard to pin down and summarize in a simple, catchy soundbite. It is a technology that is becoming increasingly important in the here and now, but which could also play a crucial role in the future of cloud computing—especially for enterprise systems.
Serverless architecture is an approach to cloud computing in which the cloud service provider dynamically manages the allocation of server resources. Processes run in isolation and are fired when certain triggers or events occur; the resources required to run each process are typically managed by the cloud provider. By combining third-party cloud services, client-side logic, and the ability to call cloud-based services from a variety of triggers, serverless architecture delivers what we often refer to as Functions-as-a-Service (FaaS).
Why Do Developers Use Serverless Architecture?
There are a number of reasons that developers choose to utilize serverless architecture, but there are also some situations where a traditional server-based approach will yield better results.
No Need to Worry About Maintenance
The traditional server-driven approach to cloud computing has always presented a number of inherent problems and challenges for businesses. For example, when applications and runtime environments are stored on remote servers, those servers need to be maintained. This maintenance includes ensuring that remote patches and security updates are applied, and that any issues with availability are rectified as soon as possible.
With serverless architecture, your cloud provider handles server maintenance for you. The major providers now have a multitude of redundancies and other measures in place to keep any downtime to a bare minimum.
Cost Savings
For many businesses, the biggest selling point of serverless architecture is cost. Compared to renting physical servers and maintaining them around the clock, serverless architecture offers impressive savings. In fact, serverless computing uses a completely different pricing model from traditional server architecture.
When renting a physical server, you pay according to the specifications of that server and how its available resources are allocated to you. In contrast, a serverless provider charges you on a pay-per-resource scheme: the bill is based on the number of executions and the memory and time each one consumes. The cloud provider allocates you a certain execution time frame, which varies according to the package you choose, and the amount of memory available during that time can be adjusted if you need to perform more demanding executions. The more memory per millisecond you require, the more expensive your package will be.
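The pay-per-resource pricing described above can be sketched as a quick calculation. The rates below are hypothetical (loosely modeled on the GB-second billing that major providers use) and are not any provider's actual prices:

```python
# Sketch of FaaS-style pay-per-execution pricing. The default rates are
# illustrative assumptions, not a real provider's price list.

def invocation_cost(executions, duration_ms, memory_mb,
                    price_per_gb_second=0.0000166667,
                    price_per_request=0.0000002):
    """Estimate the cost of running a serverless function.

    Billing is driven by how much memory is held for how long
    (GB-seconds), plus a small flat fee per invocation.
    """
    gb_seconds = executions * (duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * price_per_gb_second + executions * price_per_request

# One million 200 ms invocations at 512 MB:
print(round(invocation_cost(1_000_000, 200, 512), 2))  # -> 1.87
```

The key point the sketch illustrates: with zero executions the cost is zero, which is exactly what renting a fixed-specification server cannot offer.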
With a serverless cloud setup, configuring multiple environments is just as easy as setting up a single one. Because serverless computing runs on a per-execution basis, it is easy to have different executions call on different configurations. This also means you don't have to worry about tracking the status and configuration of a whole collection of environments.
What Are the Drawbacks?
No technology is perfect. There are certainly some circumstances and setups for which traditional server architecture would provide better results than a serverless solution.
When it comes to networking applications, serverless architecture suffers from an unfortunate downside: serverless functions can typically only be accessed as private APIs, meaning that an API gateway needs to be configured to allow access. With a serverless setup, you won't be able to reach these services through a standard IP address. For configurations that require constant networking capabilities, traditional server architecture is unarguably the way to go.
The timeout limit in serverless architecture is the maximum time a single function invocation is allowed to run. Most serverless providers impose a maximum timeout of around 300 seconds, so anything that needs to run for longer than this is unsuitable for a serverless setup.
Serverless Architecture in Action
Imagine an e-commerce app linked to an online retailer, a clothing store for example. Using traditional server architecture, the customer connects from their internet-connected device to the clothing store's server and accesses the database contained on it. This database holds all the information about the business, its products, and so on. The application, stored on the server, provides the user with an interface through which to access the server and place orders.
With a serverless architecture approach, the structure becomes very different. Rather than the customer going through the clothing store’s physical server, the purchase and search functions can be separated by an API gateway to access the appropriate isolated functions.
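To make that split concrete, here is a minimal sketch of the clothing-store functions as isolated, Lambda-style handlers behind an API gateway. The event shape, catalog data, and function names are all illustrative assumptions, not any specific provider's API (real gateways, for instance, usually deliver the request body as a JSON string rather than a dict):

```python
# Hypothetical handlers for the clothing-store example. Each is an
# isolated, single-purpose function reached through its own API
# gateway route, so each scales and fails independently.

def search_handler(event, context=None):
    """Handle GET /search?q=... as one isolated function."""
    query = event.get("queryStringParameters", {}).get("q", "")
    # A real deployment would query a managed database service here;
    # a hard-coded catalog keeps the sketch self-contained.
    catalog = ["t-shirt", "jeans", "jacket"]
    results = [item for item in catalog if query.lower() in item]
    return {"statusCode": 200, "body": results}

def purchase_handler(event, context=None):
    """Handle POST /purchase as a separate, independent function."""
    item = event.get("body", {}).get("item")
    if not item:
        return {"statusCode": 400, "body": "missing item"}
    return {"statusCode": 200, "body": f"order placed for {item}"}
```

Because search and purchase are separate functions, a flood of product searches never consumes the resources that order placement depends on.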
Deploying REST APIs
A simple and direct application of serverless computing is setting up REST APIs that deliver data for use by a single-page application or another service.
REST APIs are not typically difficult to create. Often, you just need a basic web framework, a library for translating data into the format you require (usually JSON), and whatever glue code is required to talk to the back end you're pulling data from. With serverless architecture, the developer can concentrate on writing and deploying the code that serves the API, and not have to worry about much else.
Typical features that need manual configuration in a REST API, such as autoscaling to meet demand, can be handled automatically by serverless frameworks. Plus, the pay-per-resource model that now characterizes cloud pricing means that a lightweight, minimally viable API costs next to nothing to deploy.
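A serverless REST endpoint can be as small as the sketch below: one function that translates back-end data to JSON and returns it. The response shape and sample data are illustrative, provider-agnostic assumptions:

```python
import json

# Minimal sketch of a serverless REST endpoint. The dict returned
# mimics the status/headers/body shape common to FaaS HTTP responses.

PRODUCTS = [{"id": 1, "name": "t-shirt"}, {"id": 2, "name": "jeans"}]

def list_products(event, context=None):
    """GET /products: serialize back-end data to JSON and return it."""
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(PRODUCTS),
    }
```

That is the entire deployable unit; scaling, TLS, and routing all fall to the platform rather than to code you maintain.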
Serverless Best Practices
As with any other technology, you will get the most out of serverless architecture by adhering to a particular set of best practices. Keep these tips at the front of your mind when you are considering how to design and execute your serverless implementation.
- Use a compute service to deploy code on demand: In order for your setup to become truly serverless, you will need to ensure that there are compute services available through your chosen cloud service provider suitable for achieving your goals. As soon as you have to rely on a physical server, a virtual machine, or any containers of your own creation, you have strayed away from serverless architecture.
- Write functions that are stateless and single-purpose: When you are writing functions for your serverless architecture, ensure that each function is serving a single purpose and is serving it properly. Many serverless setups utilize different functions in order to form microservices.
- Build push-based, event-driven pipelines: When you need to carry out more complex computations and tasks, create pipelines that are event-driven and push-based, allowing different services to communicate with one another easily.
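The last two practices fit together: stateless, single-purpose functions chained by pushed events. The sketch below uses an in-memory dispatcher to stand in for a managed event bus, and every name in it is illustrative:

```python
# Sketch of a push-based, event-driven pipeline: each stage is a
# stateless, single-purpose function, and a simple dispatcher stands
# in for the cloud provider's event bus.

HANDLERS = {}

def on(event_type):
    """Register a function as the handler for one event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

def emit(event_type, payload):
    """Push an event to its handler, as a managed event bus would."""
    return HANDLERS[event_type](payload)

@on("image.uploaded")
def make_thumbnail(payload):
    # Single purpose: process the upload, then push the next event.
    return emit("thumbnail.created", {"source": payload["key"]})

@on("thumbnail.created")
def notify(payload):
    return f"thumbnail ready for {payload['source']}"

print(emit("image.uploaded", {"key": "photo.jpg"}))
# -> thumbnail ready for photo.jpg
```

Because each stage holds no state and does one job, any stage can be redeployed, scaled, or replaced without touching the rest of the pipeline.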
Serverless architecture gives us a glimpse at what the future of cloud computing will look like. With so many services now available in the cloud, and with the amount of cloud storage and power available, the possibilities for serverless computing going forward are awe-inspiring.
Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with microservices, containers, cloud infrastructure, and CI/CD deployments. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and profit from our DevOps-as-a-Service offering too.