Data Modernization & Analytics
From implementing data lakes and migrating off commercial databases to optimizing data flows between systems, turn your data into insights with AWS cloud native data services.
Reduce database licensing costs with microdatabases on Amazon Aurora. Read how you can modernize your infrastructure and expand your data strategy for savings, speed, scalability, and automation by using Babelfish endpoints for a smooth Microsoft SQL Server migration.
In our experience, customers often focus on modernizing their front-end and back-end compute systems while paying less attention to their back-end data stores. This is despite the ever-growing list of benefits of moving away from commercial database engines.
In the not so distant past, I found myself in this very predicament, one that many of my current clients share. I managed a global software product running on a massive commercial database.
We knew we needed to diversify our data strategy, but we didn’t have the budget. Due to mounting pressure from our competition, our pricing model could no longer support the licensing costs.
We faced a common conundrum. We couldn’t afford to stay on our database platform. But we couldn’t afford the expense of modernizing our data tier.
How do you solve this?
To start, let’s address the actual root cause. The problem isn’t that licensing has become cost-prohibitive. It’s that the system architecture is so inflexible that swapping database technologies has become cost-prohibitive.
Moving from one platform to another only solves for cost. It doesn’t address this inflexibility.
We’ve modernized monolithic platforms by breaking them down with a microservices architecture. We can modernize data stores in the same way. Doing so allows for much greater flexibility in changing technologies for your data store. That’s because you make design decisions within a single service, not the entire platform.
The number of database platforms you use in combination doesn’t matter nearly as much as the way you use them. Rather than having a monolithic schema that all services use, isolate your data via a table, schema, or database per service.
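To make the isolation pattern concrete, here is a minimal sketch in which each service owns its own data store and cross-service lookups go through code rather than SQL joins. In-memory SQLite stands in for a per-service schema or database, and the service names and columns are purely illustrative:

```python
import sqlite3

class ServiceStore:
    """Each service gets its own isolated store (here, an in-memory SQLite DB)."""
    def __init__(self, service_name):
        self.name = service_name
        self.conn = sqlite3.connect(":memory:")

# Two services, each owning only its own tables.
orders = ServiceStore("orders")
orders.conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

customers = ServiceStore("customers")
customers.conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

customers.conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
orders.conn.execute("INSERT INTO orders (id, customer_id) VALUES (100, 1)")

def customer_for_order(order_id):
    # Cross-service access goes through each service's interface,
    # not a SQL join across schemas.
    row = orders.conn.execute(
        "SELECT customer_id FROM orders WHERE id = ?", (order_id,)).fetchone()
    return customers.conn.execute(
        "SELECT name FROM customers WHERE id = ?", (row[0],)).fetchone()[0]

print(customer_for_order(100))  # Acme
```

Because each store is private to its service, swapping the engine behind one service (say, to DynamoDB) later touches only that service’s code.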
This is obviously a serious decision. You can’t implement it overnight. Instead, use a more agile and incremental approach (described below) to reduce licensing costs up front. This unlocks budget you can apply towards continued evolution.
Let’s look at how we’d transition off of a monolithic SQL Server. You can start by migrating your data store into PostgreSQL. Once there, you can apply the savings from decreased licensing costs towards further modernizing your data footprint into microdatabases.
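A real SQL Server-to-PostgreSQL migration would lean on tools like the AWS Schema Conversion Tool and AWS DMS, but a toy type mapper illustrates the kind of translation involved. The mappings below are common, simplified equivalents, not an exhaustive or authoritative table:

```python
# Simplified mapping of common SQL Server column types to PostgreSQL
# equivalents (illustrative only; real migrations use AWS SCT/DMS).
TYPE_MAP = {
    "BIT": "BOOLEAN",
    "DATETIME": "TIMESTAMP(3)",
    "MONEY": "NUMERIC(19,4)",
    "NVARCHAR(MAX)": "TEXT",
    "UNIQUEIDENTIFIER": "UUID",
    "TINYINT": "SMALLINT",
}

def translate_column(name, mssql_type):
    """Return a PostgreSQL column definition for a SQL Server column."""
    pg_type = TYPE_MAP.get(mssql_type.upper(), mssql_type)
    return f"{name} {pg_type}"

print(translate_column("created_at", "DATETIME"))  # created_at TIMESTAMP(3)
print(translate_column("order_total", "MONEY"))    # order_total NUMERIC(19,4)
```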
Moving to managed database platforms can also help re-allocate budget by taking DevOps tasks off of your DBA’s backlog. That frees them up to solve real data problems.
AWS provides an option that can achieve both (no licensing, fully managed): Amazon Aurora PostgreSQL.
Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL.
Architecturally, Aurora inherits the value of RDS and adds the benefit of separated storage and compute. With a highly scalable distributed storage subsystem, Aurora automatically handles some of the more challenging aspects of managing a database platform, such as storage growth, database clustering, and database replication.
From a technical perspective, Aurora is quite impressive. It’s a drop-in replacement for either MySQL or PostgreSQL. It’s not a “MySQL-like database” – it delivers full compatibility.
Since Aurora is a managed service, you can get up and running faster by remaining laser-focused on your data models and system architecture. No need to get bogged down in hardware design, provisioning, and DBMS setup.
This cloud-native database platform includes high-performance storage with auto-scaling that occurs seamlessly in the background.
Storage-wise, you can take an Aurora cluster all the way to 128 tebibytes (TiB) of data. As for clusters, Aurora takes a config-driven approach to clustering and replication. These features are paramount to today’s modern scalable and highly available systems.
Routine database tasks such as provisioning, patching, backup, recovery, failure detection, and repair are config-driven and easy to automate. I can hear the DBA community letting out a sigh of relief.
I saved arguably the best feature for last: Babelfish. Aurora PostgreSQL provides a standard set of PostgreSQL-compatible endpoints for interfacing with the database.
Additionally, you can enable Babelfish endpoints which make the database look like a Microsoft SQL Server database. In use cases where you are moving your platform’s data store from Microsoft SQL Server to PostgreSQL, you can take a one-two punch rather than a one-shot approach.
Babelfish provides native and semantically correct execution of T-SQL using the TDS protocol. This is in addition to native SQL Server data types, stored procedures, cursors, functions, triggers, and catalog views. (There are some differences, however.)
Babelfish is compatible with T-SQL and SQL Server features available in SQL Server 2014 and later. As I said earlier, Babelfish complements the native PostgreSQL endpoints, making the solution truly bilingual. Your application(s) can connect to the same backend data store using T-SQL over TDS as well as PL/pgSQL over native endpoints.
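The bilingual setup can be sketched as two connection strings pointing at the same cluster endpoint, one per protocol. The hostname below is a placeholder, and the ports shown (1433 for TDS, 5432 for PostgreSQL) are Babelfish's defaults:

```python
# Hypothetical cluster endpoint; substitute your own.
CLUSTER_HOST = "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"

def tds_connection_string(host, database="mydb", user="app"):
    # Consumed by a SQL Server driver (e.g. ODBC/JDBC); the legacy
    # application keeps using this unchanged.
    return f"Server={host},1433;Database={database};User Id={user};"

def postgres_dsn(host, database="mydb", user="app"):
    # Consumed by a PostgreSQL driver (e.g. psycopg); modernized
    # services use this instead.
    return f"host={host} port=5432 dbname={database} user={user}"

print(tds_connection_string(CLUSTER_HOST))
print(postgres_dsn(CLUSTER_HOST))
```

Both strings resolve to the same underlying data, which is what lets you migrate applications endpoint by endpoint rather than all at once.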
The benefits of Babelfish
Your app won’t know the difference. Now that the database is on a more scalable and economical solution (the one-punch), you can modernize your application to take advantage of native Aurora PostgreSQL endpoints (the two-punch).
Let’s summarize the benefits of this approach:

- Eliminate SQL Server licensing costs by moving to Aurora PostgreSQL.
- Migrate without rewriting your application, since Babelfish speaks T-SQL over TDS.
- Offload provisioning, patching, backup, recovery, and failure detection to a managed service, freeing your DBAs for real data problems.
- Modernize incrementally: cut over to Aurora first, then adopt native PostgreSQL endpoints at your own pace.
Now that you’ve unlocked your budget from licensing and database management, you can continue your modernization journey. As you update your application to talk to a PostgreSQL database, go ahead and start breaking down your data store into microdatabases. You can support each service with a table or set of tables in PostgreSQL. Where a NoSQL data store would provide more value, move that service’s data store to Amazon DynamoDB instead.
If you’d like to modernize your data infrastructure leveraging Caylent’s deep AWS experience, get in touch with our data engineering experts to see how we can help you design and implement scalable and future-proof solutions.
Kenneth is fueled with a passion for transforming businesses through the power of technology. With over 20 years of industry expertise, he has helped startups thrive, empowered small businesses to scale, and collaborated with Fortune 500 clients to drive innovation. Throughout his career, Kenneth has done everything from bare-metal-to-browser and has gained an affinity for data. He has already helped several customers create value from generative AI and is energized by the wealth of possibilities that this technology ushers in.