Automation continues to be a major trend in today's cloud infrastructure landscape. Service providers like Amazon Web Services keep adding more capable automation tools to make administrators' lives easier. Automation makes workflows more efficient, with more tasks executed without the need for human input.
Until very recently, however, some fundamental tasks couldn't be automated. The creation and maintenance of operating system images is a good example. Admins take different approaches to creating and maintaining OS images for their development teams, but all of those approaches require a lot of manual work.
The introduction of EC2 Image Builder solves most—if not all—image building problems. The tool allows for the creation of image build pipelines with specific parameters and runtimes. The resulting images can also be integrated with other automated workflows, including server deployment and the larger CI/CD pipeline itself.
Creating an operating system image build pipeline is a straightforward process. You can access the Create Image Pipeline option from the console homepage. You will be taken through a series of steps to define your recipe and configure your pipeline.
All of the images you own, as well as images shared with you, can be used as the source image. You can also enter a custom AMI ID to use an image you have been granted access to. Don't forget to check the Initiate a New Image Build option so that a new build is triggered whenever the source receives an update.
The next part involves configuring the components you want to add to the image. This is where you can customize your recipe to meet specific needs. Click the Create Build Component button to configure software, settings, and other details about the recipe.
For example, you can add the python-3-linux component to your build. You can also add other tools and components, as long as they come from the supported repositories. Updates to the components will also trigger updates to your build.
Make sure you define tests for the image build. You want to be certain that the functionality and security of the build are sufficiently reviewed. AWS provides several pre-built tests that you can run on your OS images, but you can also configure custom tests if needed.
The last part of the process is creating the pipeline itself. At this point, you already have a complete recipe to work with. Creating a pipeline allows for that recipe to be used in an automated and streamlined way.
Defining Your Image Build Pipeline
From the same wizard, the first thing to do when creating a pipeline is to assign an IAM role for the build. You want a role that is associated with the EC2 instance used to build your OS images. Next, you can configure the pipeline to run on a schedule or manually.
The Build Schedule tool also supports cron-triggered builds. If you are familiar with cron jobs, you can use the same expression syntax in the advanced configuration. Once you have set a schedule for the pipeline, everything from integrating components to testing the image and completing the build is fully automated.
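The schedule the wizard configures can also be set up programmatically through boto3's `imagebuilder` client. The sketch below, with placeholder ARNs and a hypothetical pipeline name, shows roughly what that request looks like; the start condition shown makes the build fire only when the cron matches and an upstream update is actually available:

```python
# Sketch: configuring a pipeline schedule programmatically with boto3.
# The ARNs and pipeline name below are placeholders, not real resources.

def build_schedule(cron_expression):
    """Return the schedule block for imagebuilder.create_image_pipeline.

    EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE means a build only
    runs when the cron fires AND a component or source image has been
    updated, i.e., the fully automated behavior described above.
    """
    return {
        "scheduleExpression": cron_expression,
        "pipelineExecutionStartCondition": (
            "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE"
        ),
    }

pipeline_request = {
    "name": "nightly-base-image",                                  # hypothetical
    "imageRecipeArn": "arn:aws:imagebuilder:...",                  # placeholder
    "infrastructureConfigurationArn": "arn:aws:imagebuilder:...",  # placeholder
    "schedule": build_schedule("cron(0 2 * * ? *)"),  # daily at 02:00 UTC
}

# With credentials in place, this request would be passed to:
#   boto3.client("imagebuilder").create_image_pipeline(**pipeline_request)
```

Note that Image Builder's cron expressions use the six-field form (with `?` for day-of-month or day-of-week), as in other AWS schedulers, rather than classic five-field crontab syntax.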
You have the option to define other parameters to be automated, with infrastructure settings being the most prominent one. You can, for example, choose a default EC2 instance type for the build, or configure advanced parameters such as VPC and subnet for better security.
You can even create an output AMI with tags. Since AWS now supports the creation of custom keys and values, tags are incredibly handy for managing your images. It may not be a useful feature if you have a simple infrastructure, but it is invaluable when you manage hundreds of images.
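Tagging output AMIs is handled through a distribution configuration, which also controls which regions receive the finished image. The following sketch, using example tag keys and placeholder names and regions, shows the shape of one entry in the `distributions` list accepted by `imagebuilder.create_distribution_configuration`:

```python
# Sketch: tagging output AMIs via a distribution configuration.
# Tag keys/values, names, and regions here are illustrative examples.

def ami_distribution(region, ami_name_prefix, tags):
    """Build one entry of the 'distributions' list for
    imagebuilder.create_distribution_configuration."""
    return {
        "region": region,
        "amiDistributionConfiguration": {
            # {{imagebuilder:buildDate}} is a variable Image Builder
            # expands at build time, keeping AMI names unique per build.
            "name": f"{ami_name_prefix}-{{{{imagebuilder:buildDate}}}}",
            "amiTags": dict(tags),
        },
    }

distributions = [
    ami_distribution("us-east-1", "base-image",
                     {"team": "platform", "environment": "production"}),
    ami_distribution("eu-west-1", "base-image",
                     {"team": "platform", "environment": "production"}),
]
```

Because each entry names its own region, the same structure covers multi-region output: adding a region to the list distributes the tagged AMI there as well.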
That’s it! Complete the steps and your first pipeline is ready to go. You can hit Run Pipeline to trigger the newly created pipeline for the first time. From the management console, all images built by the pipeline are displayed underneath the pipeline details.
Digging Deeper into Advanced Configs
There are several advanced steps you can take to get the most out of EC2 Image Builder. For starters, you can create your own custom components to use with your recipes. Custom components can add features or services, or automate more detailed testing.
Similar to other AWS tools, components are defined using a standard YAML document. A basic YAML definition is generated when you create a custom component from the console, but you are free to customize your components further.
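As a minimal sketch of that document format (the component name, package, and paths here are illustrative examples, not an official AWS component), a custom component with both a build and a validate phase looks something like this:

```yaml
# Illustrative custom component document; names and commands are examples.
name: install-monitoring-agent
description: Installs and verifies the CloudWatch agent on the build instance.
schemaVersion: 1.0

phases:
  - name: build
    steps:
      - name: InstallAgent
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum install -y amazon-cloudwatch-agent
  - name: validate
    steps:
      - name: CheckAgentBinary
        action: ExecuteBash
        inputs:
          commands:
            - test -x /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl
```

The `validate` phase is where the testing described earlier plugs in: if any of its commands fails, the build is marked as failed instead of being distributed.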
Image recipes are just as flexible. As mentioned before, you can trigger the creation of new builds when components of the recipe receive updates. Gone are the days of manually taking snapshots of deployed images just to maintain an up-to-date repository.
Let’s not forget that EC2 Image Builder also integrates with other AWS security features, including AWS Organizations. That means you can use Image Builder as an added security measure and a way to enforce information security policies: only approved AMIs can be launched by certain accounts.
EC2 Image Builder automatically reads your source image when the pipeline is triggered. It then checks for custom components and installs them, and performs additional cleanup and maintenance tasks while the process runs. This is where your image builds get tailored to your specific needs.
The image builds are then secured using AWS’s templates or your own custom configuration before being pushed for testing. The automation of testing alone saves a lot of time and money. You also don’t need to deploy images on pre-allocated resources just to run tests.
As an added bonus, you have the option to output the finished builds to any supported AWS region. Through careful planning, you can further optimize your cloud infrastructure for rapid deployment and maximum scalability.
These benefits are yours from the moment you create your first image build pipeline. With the process being so simple—as discussed in this article—there is no reason why you should not start automating your OS image build pipelines today.
Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.