Application Modernization - Refactor Spaces

Application Modernization
Video

Learn about some of the application modernization patterns Caylent's experts are seeing among customers and how services such as AWS Migration Hub Refactor Spaces are helping customers reduce infrastructure management efforts.

We're going to talk a little bit about some of the application modernization patterns we're seeing, starting with one of the things that AWS came out with at AWS re:Invent 2021 - AWS Migration Hub Refactor Spaces - that makes it easy to manage the refactoring process. One of the things that Refactor Spaces is designed to do is to facilitate the strangler pattern, which is often used when you have a monolith and you want to take this thing apart, but you can't do it all at once.

When working with this pattern, you introduce a facade. Once this clean facade is built around the old monolith functionality, you can begin carving that functionality out, and you might move it into containers or take a serverless approach. Either way, it can now be treated as a microservice and managed in an agile way. So that's one approach to application modernization.
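
To make the facade concrete, here is a purely illustrative sketch of the routing decision at its core (the service URLs and route names are hypothetical, not from the conversation): each request goes either to functionality that has already been carved out or to the legacy monolith.

```python
# Purely illustrative strangler-pattern facade: requests for functionality that
# has already been carved out go to the new service; everything else still goes
# to the legacy monolith. All URLs and route names here are hypothetical.
MONOLITH_URL = "http://legacy-monolith.internal"

EXTRACTED_ROUTES = {
    "/orders": "http://orders-service.internal",     # now a containerized service
    "/billing": "http://billing-function.internal",  # now a serverless function
}

def route_request(path: str) -> str:
    """Return the backend that should serve this request path."""
    for prefix, service_url in EXTRACTED_ROUTES.items():
        if path.startswith(prefix):
            return service_url
    return MONOLITH_URL  # not extracted yet, keep sending to the monolith

print(route_request("/orders/42"))   # -> http://orders-service.internal
print(route_request("/reports/q1"))  # -> http://legacy-monolith.internal
```

Refactor Spaces manages this kind of routing layer for you, so traffic can shift to new services route by route without callers having to change.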

Sometimes a customer will have what they think are microservices, or they'll have evolved beyond a single monolith, but once we dig a little deeper we find they're actually running what we refer to as a distributed monolith, where they have separate endpoints for various services.

Each one of those endpoints might even have graduated to the point where it's horizontally scalable and independent, so that no single service is a linchpin for the entire system, but they're all talking to a single data source on the back end. So what ends up happening is they think they're running microservices, but a true microservice has its own data source. With one database behind an infinitely horizontally scalable application layer, all of those services hammer a data store that can't scale with them, and you can inevitably end up DDoSing yourself.

And so I'm curious, you know, how does that kind of relate back to this idea of the strangler pattern and how do you avoid that?

With the strangler pattern, you are primarily concerned with the application layer and the interface. You are, however, still perfectly capable of shooting yourself in the foot by pointing everything back to one data store. So it really is about due diligence: as you're decomposing the monolith or taking apart a legacy application, make sure you're not just separating out the functionality but also decoupling all of the dependencies. Going even further, sometimes we'll see synchronous calls chained across dozens of services, and effectively you've got a distributed monolith again. Better practices there include queueing or leveraging a bulkhead pattern, where you make sure you're able to short-circuit some of these calls.
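
To make that "short circuit" idea concrete, here is a minimal, illustrative circuit-breaker wrapper (the thresholds and names are arbitrary, not from the conversation): after a few consecutive failures it stops calling the downstream dependency for a cool-down period instead of letting synchronous calls pile up.

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after max_failures consecutive errors it stops
    calling the downstream service until reset_after seconds have passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # set while the breaker is open

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of waiting on a struggling dependency.
                raise RuntimeError("circuit open: call short-circuited")
            self.failures = 0
            self.opened_at = None  # cool-down over, try the dependency again
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# Usage: breaker = CircuitBreaker(); breaker.call(some_downstream_call)
```

A bulkhead is the complementary idea: cap the resources (threads, connections, queue slots) any one dependency can consume, so a slow service can't starve everything else.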

What other patterns are you seeing that allow for a graceful degradation of services? 

It's really about embracing an architecture where everything can be completely distributed. Queuing can be a major part of that because you want messages or events to go somewhere and not necessarily be handled at the same exact time by everything. Amazon SQS is a backbone for a lot of these services.

But there are also other message bus options like Apache Kafka or Amazon Managed Streaming for Apache Kafka (Amazon MSK) that you can leverage. They create streams of data so that other clients can pull that data out of the stream and process it asynchronously. These kinds of moves are really critical so that you don't end up shooting yourself in the foot.
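
As a small boto3 sketch of that decoupling with Amazon SQS (the queue URL and event shape are placeholders for illustration), a producer drops an event on a queue and moves on, while a consumer works through the backlog at its own pace.

```python
import json
import boto3

# Placeholder queue URL; substitute your own queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-events"

sqs = boto3.client("sqs")

def publish_event(event: dict) -> None:
    """Producer side: enqueue the event and return immediately."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))

def handle(event: dict) -> None:
    print("processing", event)

def drain_queue() -> None:
    """Consumer side: process messages asynchronously, at the consumer's pace."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        handle(json.loads(message["Body"]))
        # Delete only after successful processing so failures are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```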

One of the challenges a customer may run into when you break these pieces apart is monitoring. Instead of one big service that maybe scales vertically, you've now got a dozen completely decoupled services that scale horizontally, so does that mean you have to monitor 12 different pieces instead of one?

The biggest challenge is trying to figure out where the monitoring & logging data is going. If there's already a centralized logging solution in place, we like to leverage that because it's comfortable & familiar, but the goal then shifts to not capturing that data from a central location, but capturing it from multiple endpoints.

You can integrate your log shipping and metric collection at the application level so that it's actually ingrained in the container or the serverless function you're deploying, and it ships that data directly to your centralized logging and metric collector, allowing you to bypass any middleman infrastructure you would otherwise have to create. There are also alternatives, particularly in AWS, where a lot of these services naturally ship metrics and log data to services like Amazon CloudWatch.

So there are solutions where you can deploy an extra service that collects all of the information from Amazon CloudWatch Metrics & Amazon CloudWatch Logs and then ships it to whatever endpoint you need. But realistically, bypassing that and eliminating the need for additional infrastructure that you then have to deploy and manage is going to be a much cleaner approach.
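
As a hedged sketch of that cleaner, direct approach with boto3 (the namespace and metric name are made up for illustration), a container or function can push its own metrics straight to Amazon CloudWatch with no intermediate collector to run.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_order_latency(service_name: str, latency_ms: float) -> None:
    """Ship a custom metric directly from application code to CloudWatch."""
    cloudwatch.put_metric_data(
        Namespace="MyApp/Orders",  # made-up namespace for illustration
        MetricData=[
            {
                "MetricName": "OrderProcessingLatency",
                "Dimensions": [{"Name": "Service", "Value": service_name}],
                "Value": latency_ms,
                "Unit": "Milliseconds",
            }
        ],
    )
```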

A lot of the services, especially AWS Lambda, are going to expose some metrics to Amazon CloudWatch natively. So the effort is limited to "turning on" a setting versus configuring and integrating a whole new tool.
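
For example, since Lambda already publishes per-function metrics such as Errors and Invocations under the AWS/Lambda namespace, "turning it on" can be as small as pointing an alarm at a metric that is already there. A boto3 sketch, using hypothetical function and alarm names:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the Errors metric Lambda already emits for every function;
# no agent or extra collection infrastructure to deploy.
cloudwatch.put_metric_alarm(
    AlarmName="orders-function-errors",  # hypothetical alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "orders-function"}],
    Statistic="Sum",
    Period=300,               # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)
```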

Assuming I'm a customer who has worked with Caylent, and we've migrated and modernized my workloads, what comes next and what have I gained from that?

A number of things, the number one being velocity. Typically, when we migrate & modernize a workload, a lot of automation comes along with it. Rather than being a one-time activity, like a lift & shift approach might be, the automation we've built into the migration is the same automation that's used in Day 2 operations as well.

So now we don't have a separate process that was only used for a one-time migration event; we have a migration that serves as the first iteration of a series of successive releases. The value that brings is that product owners are able to deliver change faster.

In our engagement with the customer, we will have potentially decoupled and decomposed the monolith so that microservices can evolve independently and as quickly or slowly as they need.

We may have broken this up into a Kubernetes-based deployment, but it's automated, so code changes now flow through in a predictable fashion. Maybe they have test automation built in, and those kinds of things speed up business objectives.

The other piece is that on Day 2 there's sometimes DevOps or operations and monitoring work that the customer doesn't want to do themselves. Just like AWS takes on the undifferentiated heavy lifting in a managed service, Caylent can engage in what we call a pods model, which involves ongoing service delivery working from a shared backlog with the customer. So Caylent is equipped to help with that too. And there are a lot of ways we can set customers up for a successful Day 2 if they want to handle it on their own.

In pre-sales, you talk to a lot of customers. What are some of the patterns that you're picking up on?

Now, every customer is unique, and they're going to have unique business objectives, but we do see some commonality across the industry. Let's consider Kubernetes. AWS has a number of ways you can host containers, and Kubernetes is a recurring pattern we've observed for customers coming from a virtual machine based workflow or Amazon EC2 bare metal instances; they're more comfortable thinking in terms of little machines. For them, we've built a Kubernetes Caylent Catalyst that gets those customers started on the right foot.

The goal of the Caylent Catalyst is to let the customers focus on the containers themselves, rather than the infrastructure and the complexity that's involved in caring for the containers, the registry, the monitoring and the other services that accompany it. 

On the other hand, sometimes we'll have customers that come to us with what we might call a greenfield approach where they're building a new application and don’t have anything in place. 

For that, what we like to do is to steer towards our Cloud Native Application Foundation Caylent Catalyst, which sets them up to use any of AWS’s native services like AWS Lambda or managed services like Amazon RDS or Amazon DynamoDB, so that they're using modern managed services that save them from having to worry about the underlying infrastructure. They're just worrying about the code and we're helping them develop pipelines that deploy that code. So there's a lot of acceleration due to not having to worry about the legacy technical debt.

These Caylent Catalysts basically help customers get on the right path, and we offer a plethora of Caylent Catalysts, helping our customers choose the best path for their needs.

Caylent also offers custom professional services where we can develop bespoke solutions to address our customers' unique needs. Caylent Catalysts, however, were designed as starting points for use cases we see repeating across companies. They may not be the right starting point for every customer, but they can be a great one for many. Customers may pick up from those engagements and run on their own, or they may continue with Caylent and build out something that goes well beyond that starting point.

If you’re interested in modernizing your legacy applications on the cloud or would like to build new cloud native applications from the ground up, Caylent’s experts are well equipped to support you. Get in touch with our team to find out how we can help!

Mark Olson

As Caylent's VP of Customer Solutions, Mark leads a team that's entrusted with envisioning and proposing solutions to an infinite variety of client needs. He's passionate about helping clients transform and leverage AWS services to accelerate their objectives. He applies curiosity and a systems thinking mindset to find the optimal balance among technical and business requirements and constraints. His 20+ years of experience spans team leadership, technical sales, consulting, product development, cloud adoption, cloud native development, and enterprise-wide as well as line of business solution architecture and software development from Fortune 500s to startups. He recharges outdoors - you might find him and his wife climbing a rock, backpacking, hiking, or riding a bike up a road or down a mountain.

William Kray

Over the last decade William Kray has been everything from SysAdmin, Cloud Engineer, and Solution Architect to Writer-of-documentation-about-how-to-write-documentation, and he is currently Director of Architecture and Engineering at Caylent. He spends his spare time driving around in his 1966 Mini Cooper with his wife and their wiener dog.

