re:Invent 2023 Data Session Summaries

Cloud Technology
Data Modernization & Analytics

Get up to speed on all the data-focused 300 and 400 level sessions from re:Invent 2023!

We know that watching all the re:Invent session videos can be a daunting task, but we don't want you to miss out on the gold that is often found in them! In this blog, you can find quick summaries of all the 300 and 400 level sessions, grouped by track. Enjoy!

DAT324 Amazon Aurora HA and DR design patterns for global resilience

The AWS re:Invent 2023 session, "Amazon Aurora HA and DR design patterns for global resilience (DAT324)," delivered by Tim Stokes and Grant McAllister, focused on enhancing database resilience and global availability using Amazon Aurora. They emphasized the importance of Aurora's managed database service, particularly its ability to handle high availability (HA) and disaster recovery (DR) across multiple regions.

The session began with an overview of Aurora's architecture, emphasizing its storage replication across three Availability Zones (AZs) within a region, which ensures high durability. The speakers then discussed Aurora Global Database, which provides cross-region replication for global resilience. By replicating data asynchronously to additional regions, this feature minimizes both data loss (recovery point objective, RPO) and recovery time (RTO), keeping data available even in the event of a regional outage.
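
To make the setup concrete, here's a minimal boto3 sketch, not taken from the session, of promoting an existing cluster into a global database and adding a secondary region; all identifiers, regions, and the engine choice are assumptions.

```python
import boto3

# Identifiers, regions, and engine below are illustrative assumptions.
primary = boto3.client("rds", region_name="us-east-1")
secondary = boto3.client("rds", region_name="us-west-2")

# Wrap an existing regional cluster in a global cluster.
primary.create_global_cluster(
    GlobalClusterIdentifier="orders-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:orders-primary",
)

# Add a read-only secondary cluster in another region; Aurora replicates
# storage-level changes to it asynchronously. Engine and version must match the primary.
secondary.create_db_cluster(
    DBClusterIdentifier="orders-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="orders-global",
)
# A reader DB instance still needs to be created in the secondary cluster
# before it can serve local reads or be promoted during a regional failover.
```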

The presenters also introduced advanced features like Aurora's global database with write forwarding and session consistency models. These features allow for seamless replication and data consistency across regions, enabling businesses to maintain operations even during significant outages. The session concluded by highlighting the importance of testing resilience strategies regularly and considering cross-account backups for additional security and compliance.

Overall, the session provided insights into leveraging Aurora's capabilities for achieving high availability and effective disaster recovery in a global context, underscoring the importance of robust database management strategies in today’s interconnected and data-driven business environment.

AWS re:Invent 2023 - Amazon Aurora HA and DR design patterns for global resilience (DAT324)

DAT325 Deep Dive into Amazon Neptune Analytics & its generative AI capabilities

In the AWS re:Invent 2023 session "Deep Dive into Amazon Neptune Analytics & Its Generative AI Capabilities (DAT325)," Brad Beebe and Dr. Umit Chaur from Amazon Neptune discussed the integration of Neptune Analytics with generative AI. They began by explaining the basic functionalities and architecture of Amazon Neptune, emphasizing its utility in handling large graphs with billions of edges, supporting various graph models and query languages. The session highlighted Neptune's performance and versatility across different use cases like knowledge graphs, identity graphs, fraud detection, and security graphs, citing examples from various industries.

The core of the presentation revolved around the newly launched Amazon Neptune Analytics, a managed service that applies high-performance computing techniques to graph processing. Neptune Analytics is designed to handle large-scale graphs for analytics, with fast data loading and scanning capabilities, and it supports graph algorithms, low-latency queries, and vector search. Beebe and Chaur explained how Neptune Analytics exposes its algorithms through the openCypher query language, making them easily accessible and integrable into various applications.

The speakers presented practical use cases and demonstrations to showcase Neptune Analytics' capabilities. They demonstrated how Neptune Analytics could be used to detect fraud, improve machine learning data preparation, and enhance generative AI applications by providing a rich context through graph databases. They showed how the service could be utilized in various sectors, from financial services to healthcare, highlighting its efficiency in reducing cost and time in data science pipelines. The session concluded with a Q&A segment, addressing specific queries from the audience.

AWS re:Invent 2023-Deep dive into Amazon Neptune Analytics & its generative AI capabilities (DAT325)

DAT326 What’s new with Amazon RDS?

In the session titled "What's new with Amazon RDS?" (DAT326) at AWS re:Invent 2023, the general manager of Amazon Relational Database Service (RDS), Suresh, discussed significant enhancements made to RDS over the year. The session, aimed at users familiar with RDS, highlighted the service's simplicity and various updates, including support for new database engines and improvements in performance, security, and compliance.

Key advancements in RDS included support for IBM's DB2 engine, enhancing customer choice and enabling users to easily migrate their existing DB2 databases to RDS. The introduction of new features like zero ETL (Extract, Transform, Load) integration between MySQL and Redshift was emphasized, simplifying data movement for analytics purposes. Updates to PostgreSQL, MySQL, and MariaDB versions on RDS were also announced, improving performance and stability.

Furthermore, Suresh underscored RDS's continuous focus on security and compliance, with new features like database activity streams for SQL Server and support for self-managed Active Directory. These additions aim to bolster the security posture and enhance administrative simplicity for RDS users. The session concluded with a call for ongoing customer feedback to drive future innovations in Amazon RDS.

AWS re:Invent 2023 - What's new with Amazon RDS? (DAT326)

DAT327 Why AWS is the place to build and grow your MySQL workloads

The AWS re:Invent 2023 session, presented by Eugene Koto, focused on the benefits of using AWS for MySQL workloads, emphasizing managed database services like Amazon Aurora MySQL and RDS MySQL. The session highlighted the evolution of database management at Amazon, noting the introduction of managed database services to alleviate the challenges developers faced in setting up and managing databases. Services like RDS MySQL and Amazon Aurora MySQL, introduced in 2009 and 2015 respectively, were designed to be cloud-native, providing features such as automatic provisioning, monitoring, and recovery. These services cater to various industries, including gaming, media, entertainment, financial services, and healthcare, offering scalability, high availability, and reduced maintenance efforts.

Key features and innovations in AWS's MySQL-related services were discussed. This included Aurora Serverless V2, which offers automatic scaling of Aurora capacity units for efficient capacity management; the Global Database feature for cross-region disaster recovery and fast local reads; and Aurora fast cloning for instant database copies. Additionally, machine learning integration allows customers to enrich relational data using simple SQL queries, and the Aurora I/O-Optimized configuration provides predictable pricing and enhanced performance.

The use of AWS managed MySQL services by Intuit and Freshworks was showcased. Intuit, known for TurboTax, utilized Aurora MySQL to manage high demand during tax season, benefitting from its automatic scaling, high availability, and reduced maintenance. Similarly, Freshworks employed RDS MySQL to handle over a million requests per minute, leveraging multi-AZ deployment for high availability. The session concluded by reaffirming AWS's commitment to being a leading platform for managing and scaling MySQL workloads, with a continuous focus on security, availability, and durability.

AWS re:Invent 2023 - Why AWS is the place to build and grow your MySQL workloads (DAT327)

DAT328 Dive Deep into Different AWS DMS Migration Options

The AWS re:Invent 2023 session "Dive Deep into Different AWS DMS Migration Options" (DAT328) focused on providing an in-depth understanding of AWS's Database Migration Service (DMS) and its various functionalities. The session was led by John, who heads product for AWS DMS, and Ryan, who leads the technical field community for DMS. They aimed to demystify the process of data migration and replication, emphasizing that these two processes are essentially similar in nature. John and Ryan clarified that migration is broadly about moving data, which can occur within various contexts such as moving from on-premises to the cloud, within the cloud, or between different database types in the cloud.

The speakers discussed the latest advancements in DMS, including new features and capabilities added since the previous year. They demonstrated the practical application of DMS through a detailed walkthrough of setting up and executing a data migration, covering aspects such as configuring network and security settings, setting up data providers, and understanding the instance profiles. The session also included a live demo to illustrate these processes in action. Emphasis was placed on the importance of pre-migration assessments and data validation to ensure a smooth and error-free migration process. Additionally, common challenges faced during migration, such as instance sizing, network bandwidth constraints, and operational excellence, were addressed, offering solutions and best practices to overcome these hurdles. The session aimed to equip attendees with the knowledge and tools to successfully apply AWS DMS to their specific migration challenges.
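
For readers who'd rather script the classic API flow than follow the console walkthrough, here's a rough boto3 sketch of a full-load-plus-CDC task; it assumes the replication instance and source/target endpoints already exist, and every ARN and schema name is a placeholder.

```python
import json
import boto3

dms = boto3.client("dms")

# ARNs and the schema name are placeholders (assumptions).
task = dms.create_replication_task(
    ReplicationTaskIdentifier="sales-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # bulk copy first, then ongoing replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales",
            "object-locator": {"schema-name": "sales", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

# In practice, wait for the task to reach the "ready" state before starting it.
dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```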

AWS re:Invent 2023 - Dive deep into different AWS DMS migration options (DAT328)

DAT329 Data modeling core concepts for Amazon DynamoDB

The AWS re:Invent 2023 presentation on "Data modeling core concepts for Amazon DynamoDB (DAT329)" was led by Greg Crum, a senior DynamoDB specialist, and Akshat V., a senior principal engineer. They focused on leveraging DynamoDB for scalable and efficient data modeling. Key highlights included the importance of selecting appropriate partition and sort keys for optimal data distribution, and the benefits of DynamoDB like seamless scaling, support for complex relationships, and schema flexibility. Greg discussed design considerations such as single vs. multi-table designs, workload priorities, and access control, emphasizing high cardinality and efficient access patterns for entities like customers, carts, and products. 
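
As a rough illustration of the key-design advice (not code from the talk), a single-table query like the following serves a "show my cart" access pattern; the table, key, and attribute names are assumptions.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Table, key, and attribute names are illustrative assumptions.
table = boto3.resource("dynamodb").Table("ecommerce")

# One partition per customer; sort-key prefixes separate entity types
# (PROFILE#, CART#, ORDER#), so one query answers "what is in this customer's cart?"
resp = table.query(
    KeyConditionExpression=Key("PK").eq("CUSTOMER#42") & Key("SK").begins_with("CART#")
)
for item in resp["Items"]:
    print(item["SK"], item.get("quantity"))
```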

Akshat V. delved into strategies for optimizing costs in DynamoDB, focusing on item sizes, write/read capacity units, and vertical partitioning of data. He emphasized the trade-offs in database design for cost efficiency, and the use of global secondary indexes for effective data filtering. The presentation also covered the benefits and trade-offs of sparse indexes and the importance of high cardinality in index design to prevent throttling. They concluded with additional resources for DynamoDB data modeling, including tools and literature. The session aimed to equip attendees with insights into designing scalable, efficient, and cost-effective DynamoDB schemas, tailored to specific application needs.

AWS re:Invent 2023 - Data modeling core concepts for Amazon DynamoDB (DAT329)

DAT330 Dive deep into Amazon DynamoDB

The AWS re:Invent 2023 session on Amazon DynamoDB, presented by a senior principal engineer from the DynamoDB team, offered a deep dive into the service’s features and capabilities. The presentation focused on DynamoDB’s scaling and capacity modes, transactions, streams, and global tables. Key points included DynamoDB's horizontal scaling through data distribution across partitions, with each partition typically around 10 gigabytes. The database provides two capacity modes: on-demand for unpredictable workloads and provisioned for predictable ones. DynamoDB's on-demand mode simplifies scaling and operational management, adapting to fluctuating workloads without manual intervention.

The session also covered DynamoDB's support for ACID transactions, ensuring data integrity with a serializable isolation level. Transactions in DynamoDB are designed as single-request transactions, contributing to predictable performance and avoiding complex lock management. DynamoDB Streams, another highlighted feature, captures changes to items in tables in near real-time, providing a time-ordered sequence of item-level modifications with strict ordering and exactly-once processing guarantees.
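
Here's a small illustrative sketch, not from the session, of a single-request transaction that debits one item and credits another atomically; table and attribute names are assumptions.

```python
import boto3

ddb = boto3.client("dynamodb")

# Table, key, and attribute names are illustrative assumptions.
ddb.transact_write_items(
    TransactItems=[
        {   # Debit the source account only if it holds sufficient funds.
            "Update": {
                "TableName": "accounts",
                "Key": {"account_id": {"S": "A-1"}},
                "UpdateExpression": "SET balance = balance - :amt",
                "ConditionExpression": "balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
        {   # Credit the destination account in the same all-or-nothing request.
            "Update": {
                "TableName": "accounts",
                "Key": {"account_id": {"S": "A-2"}},
                "UpdateExpression": "SET balance = balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
    ]
)
```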

Lastly, the presentation discussed DynamoDB Global Tables, which facilitate building globally distributed applications with fully managed, multi-region, and multi-master database replication. This feature ensures data availability close to users and offers automatic conflict resolution. The session emphasized DynamoDB’s internal mechanisms like global admission control, predictive splits, and adaptive moves for load balancing, highlighting DynamoDB's commitment to providing a robust, scalable, and efficient NoSQL database service. The session was structured around audience questions, offering insights into best practices and DynamoDB's architecture, and recommended additional resources for a deeper understanding of the service.

AWS re:Invent 2023 - Dive deep into Amazon DynamoDB (DAT330)

DAT331 Using Aurora Serverless to Simplify Manageability and Improve Costs

In the AWS re:Invent 2023 session titled "Using Aurora Serverless to Simplify Manageability and Improve Costs" (DAT331), Anam Zang, a Product Manager with Aurora, focused on the capabilities of Aurora Serverless V2 to streamline database capacity management and reduce operational costs. She highlighted the auto-scaling features of Aurora Serverless V2, explaining how it can dynamically adjust to workload demands while integrating seamlessly with other Aurora features. The session included an overview of the billing process and a customer use case study featuring Intuit, which showcased the cost-saving benefits of Aurora Serverless V2.

Anam detailed the operational mechanics of Aurora Serverless V2, emphasizing its capacity to perform non-disruptive scaling and its use of Aurora Capacity Units (ACUs) to measure capacity. She explained the in-place scaling process, where databases scale automatically based on CPU utilization, memory utilization, and network throughput, without incurring downtime. The presentation also covered how Aurora Serverless V2 integrates with Aurora's existing features like high availability, read scalability, global database replication, and RDS Proxy for connection management. Additionally, she provided insights into getting started with Aurora Serverless V2, including upgrading from previous versions and setting up new serverless instances through the AWS Management Console.
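
As a quick sketch of what that setup looks like (not shown in the session), the boto3 calls below create a cluster with ACU bounds and a db.serverless instance; identifiers, engine, and capacity values are assumptions.

```python
import boto3

rds = boto3.client("rds")

# Identifiers, engine, and ACU bounds are illustrative assumptions.
rds.create_db_cluster(
    DBClusterIdentifier="app-cluster",
    Engine="aurora-postgresql",
    MasterUsername="app_admin",
    ManageMasterUserPassword=True,  # let RDS manage the credential in Secrets Manager
    # Capacity floats between these Aurora Capacity Unit (ACU) bounds.
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Serverless v2 capacity applies to instances created with the db.serverless class.
rds.create_db_instance(
    DBInstanceIdentifier="app-cluster-writer",
    DBClusterIdentifier="app-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```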

AWS re:Invent 2023 - Using Aurora Serverless to simplify manageability and improve costs (DAT331)

DAT332 Powering high-speed workloads with MemoryDB for Redis

At AWS re:Invent 2023, Itai and Kevin presented on Amazon MemoryDB for Redis, showcasing its capabilities in handling high-speed workloads. The session emphasized MemoryDB's low latency, high scalability, and Redis compatibility, positioning it as suitable for modern applications needing quick data processing and microsecond latency. Security features like encryption and compliance with various standards were also highlighted. The presentation showcased three customer use cases to demonstrate MemoryDB's versatility: Twilio Segment's use for GDPR deletion orchestration, BUD Technologies leveraging it for various gaming platform functionalities, and Media Set Infinity utilizing it for a voting system in a streaming platform.

The architectural discussion focused on MemoryDB's design for high performance and data durability, primarily through its multi-AZ transactional log ensuring zero data loss by replicating data across different availability zones. This architecture facilitates handling millions of requests per second, making it ideal for applications that demand high performance and reliability.

Finally, the scalability of MemoryDB was discussed, including both vertical and horizontal scaling methods. An enhanced I/O library introduced to improve read throughput and reduce latencies was also presented. The session concluded with an encouragement to try MemoryDB through a free trial and an invitation for further inquiries and feedback through provided contact details.

AWS re:Invent 2023 - Powering high-speed workloads with Amazon MemoryDB for Redis (DAT332)

DAT333 Building highly resilient applications with Amazon DynamoDB

The presentation at AWS re:Invent 2023 focused on building highly resilient applications using Amazon DynamoDB. The key speaker, Jeff Duffy, a product manager for Amazon DynamoDB, emphasized the importance of a highly available database to support highly available applications. He explained resilience as the ability to adjust to change, including infrastructure failure, varying demand, and system modifications. The talk highlighted different resilience strategies in AWS's well-architected program, such as backup and restore, pilot light, warm standby, and active-active approaches, and how DynamoDB supports these strategies.

Tom Skinner from Amazon Advertising Measurement shared their experience in migrating a critical workload to DynamoDB to enhance resilience. He described the challenges they faced with their previous system, HBase, and the benefits gained from the migration, including increased availability, reduced developer ramp-up time, and a decrease in ticket load by 40%.

Richard Edwards, the principal engineer on Skinner's team, delved deeper into the technical aspects of the migration. He discussed the complexities of working with large-scale data and the need for an architecture that could handle high throughput and dynamic workloads. Edwards explained their approach to table structure, throughput, and table management, using multiple DynamoDB features such as Global Tables, GSIs (Global Secondary Indexes), and replication techniques to ensure high availability and performance.
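
For a sense of how the Global Tables piece looks in practice, here's a minimal sketch, not from the talk, that adds a replica region to an existing table; the table name and regions are assumptions, and the table must already have DynamoDB Streams enabled with new and old images.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Table name and regions are assumptions; the table needs DynamoDB Streams
# enabled with NEW_AND_OLD_IMAGES before a replica can be added.
ddb.update_table(
    TableName="ad-measurements",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Once the replica is ACTIVE, writes in either region replicate to the other,
# supporting the active-active resilience strategy discussed above.
```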

The presentation concluded with the key takeaways that building a resilient application requires a resilient database, DynamoDB offers a rich set of features for building resilience, and AWS's infrastructure underpins DynamoDB to support application-wide resilience. The audience was encouraged to provide feedback on the session.

AWS re:Invent 2023 - Building highly resilient applications with Amazon DynamoDB (DAT333)

DAT334 Improving user experience at Epic Games using Amazon Timestream

The AWS re:Invent 2023 presentation on "Improving user experience at Epic Games using Amazon Timestream" featured Ian Robinson from AWS and Ken Hawthorne from Epic Games. They discussed how Epic Games enhanced their player experience using Amazon Timestream for time-series data management.

Ian Robinson introduced Amazon Timestream, emphasizing its architecture, scaling characteristics, and new features added over the years. He explained that Timestream, launched in 2020, is a serverless, scalable, and highly available time-series database. It's designed for high-throughput ingestion and real-time querying, making it suitable for various use cases like device monitoring, security analytics, and user sentiment analysis. Robinson highlighted key features such as multi-measure records, which allow storing multiple measurements in a single row, and scheduled queries, which automatically pre-compute and store query results on a recurring basis.

Ken Hawthorne from Epic Games shared their experience implementing Amazon Timestream for the Epic Games Store’s player ratings page. The challenge was to analyze player data, like playtime patterns, to determine user eligibility for polling. The initial setup led to high query loads, prompting Epic Games to optimize their solution. They aggregated data to reduce database size, used scheduled queries for data transformation, and batched requests to lower query count. These changes, along with effective data partitioning, significantly improved performance and reduced costs, allowing for better understanding of user engagement.
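
To illustrate the multi-measure record idea (this is not Epic Games' code), a write might look like the sketch below; database, table, dimension, and measure names are assumptions.

```python
import time
import boto3

tsw = boto3.client("timestream-write")

# Database, table, dimension, and measure names are illustrative assumptions.
tsw.write_records(
    DatabaseName="player_metrics",
    TableName="sessions",
    Records=[{
        "Dimensions": [
            {"Name": "player_id", "Value": "p-42"},
            {"Name": "platform", "Value": "pc"},
        ],
        "MeasureName": "session_stats",
        "MeasureValueType": "MULTI",
        # Multi-measure record: several measurements land in a single row.
        "MeasureValues": [
            {"Name": "playtime_minutes", "Value": "87", "Type": "BIGINT"},
            {"Name": "sessions_started", "Value": "3", "Type": "BIGINT"},
        ],
        "Time": str(int(time.time() * 1000)),  # milliseconds since the epoch
    }],
)
```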

In conclusion, the talk illustrated how Epic Games successfully utilized Amazon Timestream's features to manage time-series data efficiently. This enhanced their ability to analyze player behavior and improve user experience on their platform. The presentation also outlined future plans for further leveraging Timestream capabilities in Epic Games' operations.

AWS re:Invent 2023 - Improving user experience at Epic Games using Amazon Timestream (DAT334)

DAT335 Boost performance & save money using ElastiCache with Aurora & RDS

The AWS re:Invent 2023 session titled "Boost performance & save money using ElastiCache with Aurora & RDS (DAT335)" focused on enhancing database performance and reducing costs through the use of ElastiCache. The presentation, led by Joe Trin and Steven Hans, began with an exploration of traditional database scaling methods, such as vertical scaling (upgrading to more powerful servers) and horizontal scaling (adding read replicas). These methods, while effective in some scenarios, are often limited by increased costs and complexity. As a solution, the speakers introduced the concept of caching, particularly using AWS's ElastiCache, to alleviate these issues. They explained how ElastiCache, when integrated with RDS and Aurora, could improve application performance and reduce database load, leading to potential cost savings.

The session further delved into the operational aspects of ElastiCache. Joe Trin highlighted ElastiCache's features like automated management, high availability, and optimized performance. He shared a case study of Wiz, a cloud security company, to illustrate the real-world benefits of implementing ElastiCache, including significant cost reductions and performance improvements. Steven Hans then guided the audience through strategies for determining which databases are ideal for caching. He discussed different AWS tools and metrics useful for this assessment and shared insights on adapting application architecture to leverage caching effectively. He outlined two primary caching strategies: lazy loading and write-through caching, detailing their implementation and advantages.
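
Here's a minimal lazy-loading (cache-aside) sketch, assuming a Redis-compatible ElastiCache endpoint and a relational connection passed in by the caller; the endpoint, key names, and schema are placeholders.

```python
import json
import redis

# Endpoint, key names, and the SQL schema are illustrative assumptions.
cache = redis.Redis(host="my-cache.xxxxxx.cache.amazonaws.com", port=6379, ssl=True)

def get_product(product_id, db_conn):
    """Lazy loading (cache-aside): check the cache first, fall back to Aurora/RDS."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                # cache hit: no database round trip

    with db_conn.cursor() as cur:                # cache miss: read from the database
        cur.execute("SELECT id, name, price FROM products WHERE id = %s", (product_id,))
        row = cur.fetchone()
    if row is None:
        return None

    product = {"id": row[0], "name": row[1], "price": float(row[2])}
    cache.set(key, json.dumps(product), ex=300)  # TTL bounds how stale the cache can get
    return product
```

A write-through variant simply updates the cache in the same code path that writes to the database, trading extra writes for fresher reads.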

The presentation concluded with a demonstration showing the performance impact of adding ElastiCache to a database-driven application. The demo revealed a substantial increase in transaction speeds and a decrease in response times, underscoring the effectiveness of ElastiCache in enhancing database performance. Overall, the session underscored ElastiCache as a valuable tool in the AWS ecosystem for boosting database efficiency and reducing operational costs, supported by practical examples and a customer case study.

AWS re:Invent 2023 - Boost performance & save money using ElastiCache with Aurora & RDS (DAT335)

DAT337 How United Airlines accelerates innovation with DocumentDB

This session from AWS re:Invent 2023 focused on how United Airlines accelerates innovation with Amazon DocumentDB. Rashi Gupta, the lead for the DocumentDB product management team at AWS, and Paul McLean, the managing director of the passenger service systems team at United Airlines, were the speakers.

Rashi Gupta discussed the features and benefits of Amazon DocumentDB, emphasizing its flexibility, scalability, and integration with AWS services. DocumentDB, a fully managed database service optimized for storing and querying JSON data, is particularly beneficial for workloads requiring a flexible schema and ad-hoc querying. Gupta also highlighted the latest updates to DocumentDB in 2023, including availability in new regions, MongoDB 5.0 API compatibility, JSON schema validation, performance improvements, security enhancements, AWS integrations, and cost-saving features like DocumentDB I/O-Optimized.

Paul McLean shared insights into United Airlines' journey of using DocumentDB. He explained how the airline is transforming its legacy passenger service system into a modern, customer-centric platform. A significant part of this transformation is managing seating information, previously handled by over 30 different applications on a mainframe. The shift to DocumentDB allows United Airlines to have a flexible, scalable, and efficient system for managing seat inventory and customer data. McLean emphasized the role of DocumentDB in enabling rapid innovation, real-time data validation, and facilitating the airline's move towards a new data model centered around customer orders. He also mentioned United's exploration of using generative AI and Amazon Bedrock to enhance customer service and streamline operations.

In summary, the session highlighted the strategic role of Amazon DocumentDB in modernizing United Airlines' operations, focusing on its flexibility, scalability, and ability to facilitate rapid innovation in response to evolving customer needs and industry standards.

AWS re:Invent 2023 - How United Airlines accelerates innovation with Amazon DocumentDB (DAT337)

DAT338 Data patterns for generative AI applications

The presentation "Data Patterns for Generative AI Applications (DAT338)" at AWS re:Invent 2023, led by Sa Rag Pai and his colleague Bla Bla Channel, focused on the use of AWS services in developing generative AI applications. They covered three main data patterns: Retrieval-Augmented Generation (RAG), fine-tuning foundational models with labeled data, and building custom models from scratch.

In RAG, they explained how to enrich large language model prompts with contextual data from various sources for more accurate responses. This approach involves integrating behavioral, situational, and semantic contexts, using AWS services like Lambda, DynamoDB, and Kendra.
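
A stripped-down sketch of the RAG pattern is below; the retriever is a placeholder for the behavioral, situational, and semantic lookups described above, and the Bedrock model ID and request/response shapes are assumptions that vary by model.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def retrieve_context(question: str) -> list[str]:
    """Placeholder retriever: in the pattern above this could combine lookups in
    DynamoDB, application APIs, and a semantic search service such as Amazon Kendra."""
    return [
        "Order #1234 shipped on 2023-11-20 via ground freight.",
        "The standard returns window is 30 days from delivery.",
    ]

def answer_with_rag(question: str) -> str:
    context = "\n".join(retrieve_context(question))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    # Model ID and request/response body shapes are assumptions and differ by model.
    body = json.dumps({"inputText": prompt,
                       "textGenerationConfig": {"maxTokenCount": 256}})
    resp = bedrock.invoke_model(modelId="amazon.titan-text-express-v1", body=body)
    return json.loads(resp["body"].read())["results"][0]["outputText"]
```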

For fine-tuning, they discussed using labeled datasets to adapt foundation models to specific use cases, which can be done easily using Amazon Bedrock. Building custom models, a more resource-intensive approach, was recommended for cases where fine-tuning might not suffice.

The speakers also highlighted the importance of evolving data strategies in line with generative AI needs, covering data storage, processing, governance, and security considerations. They stressed the necessity of managing both structured and unstructured data, ensuring proper data governance, and adapting security controls for new AI use cases.

Overall, the session aimed to provide insights into leveraging AWS services for developing sophisticated, data-driven generative AI applications, with a focus on practical implementation and strategic data handling.

AWS re:Invent 2023 - Data patterns for generative AI applications (DAT338)

DAT339 Advanced integration patterns with Amazon DynamoDB

The AWS re:Invent 2023 presentation by John Handler and Jason Hunter focused on the integration of Amazon DynamoDB and Amazon OpenSearch Service. Hunter, representing DynamoDB, highlighted its low latency, scalability, and robust features like encryption and deletion protection, making it ideal for applications needing high throughput. He acknowledged, however, DynamoDB's limitations in handling complex text searches and queries. Handler then discussed the capabilities of Amazon OpenSearch Service, emphasizing its proficiency in handling rich text searches, complex queries, and analytics. He demonstrated various applications, including log analytics and semantic searches, showcasing OpenSearch's versatility.

The crux of the presentation was the zero-ETL integration of DynamoDB and OpenSearch, a significant enhancement simplifying data transfer between the two services. This integration leverages DynamoDB Streams for real-time data synchronization with OpenSearch, allowing users to utilize DynamoDB's robust data storage and transaction capabilities alongside OpenSearch's advanced search and analytics features. This combination offers a powerful toolset for modern applications, addressing both data storage and complex querying needs.
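
Once documents are flowing into OpenSearch, queries against the synced index are ordinary OpenSearch calls; the sketch below (with authentication omitted and endpoint, index, and field names assumed) shows the kind of full-text search that motivates the integration.

```python
from opensearchpy import OpenSearch

# Endpoint, index, and field names are assumptions about the synced data;
# authentication (SigV4 or basic auth) is omitted for brevity.
client = OpenSearch(
    hosts=[{"host": "search-products-xxxx.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

# Full-text relevance search that would be awkward against DynamoDB alone;
# the documents themselves are kept in sync by the zero-ETL pipeline.
resp = client.search(
    index="products",
    body={"query": {"match": {"description": "waterproof hiking boots"}}, "size": 10},
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("name"))
```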

Throughout the session, the speakers provided practical insights, best practices, and demonstrations to aid users in implementing and maximizing the benefits of this integration. The integration presents a streamlined solution for handling various data types and queries, marking a significant advancement in AWS's database and search capabilities. The presentation concluded with a Q&A session, further clarifying the applications and advantages of this integration for the audience.

AWS re:Invent 2023 - Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service (DAT339)

DAT341 Unlock insights on Amazon RDS data with zero-ETL to Amazon Redshift

This session at AWS re:Invent 2023, presented by Sudipta Das, a senior principal engineer at Amazon Redshift, and Jim Tran, a principal product manager at RDS, focused on the integration of Amazon RDS data with Amazon Redshift through a zero-ETL (Extract, Transform, Load) approach. This new capability, launched during Adam Selipsky's keynote, simplifies the process of data analysis and maximizes the value derived from data.

Sudipta Das emphasized the importance of data as a competitive differentiator, citing statistics about the rapid growth of data and its potential for driving revenue growth. However, only a small percentage of organizations can harness the full value of their data. The integration aims to unlock this value by providing near real-time insights critical to business operations across various scenarios, such as customer experience personalization and supply chain optimization. Sudipta highlighted the challenges of traditional ETL pipelines, which are complex and time-consuming, underlining the benefits of the zero ETL approach. This approach simplifies setup, provides timely access to data, and facilitates the consolidation of data from multiple sources into a single Redshift warehouse.

Jim Tran demonstrated the ease of setting up the zero ETL integration, explaining how it eliminates the complexity of traditional data pipeline setups. He detailed the steps for creating an integration in the RDS console, selecting source databases, and meeting prerequisites like encryption and case sensitivity. Jim showed how data from MySQL is seamlessly replicated to Redshift, with changes in data and schema being automatically updated in near real-time. He also covered edge cases and system changes, highlighting the resilience and self-repairing nature of the integration.

The session concluded with an overview of Redshift's capabilities once data is replicated, such as creating materialized views, performing complex joins, and integrating with other AWS services for advanced analytics. The presenters encouraged attendees to participate in the public preview of this integration and emphasized their openness to feedback for future improvements. They emphasized that this zero ETL approach is just the beginning of a journey to make data analytics more accessible and valuable for businesses.
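
As a rough sketch of the consumer side, and assuming the Redshift syntax for mounting an integration plus placeholder endpoints, credentials, and table names, the replicated data can then be queried like any other Redshift database.

```python
import redshift_connector

# Endpoint, credentials, integration ID, and table names are placeholders (assumptions).
conn = redshift_connector.connect(
    host="analytics.123456789012.us-east-1.redshift-serverless.amazonaws.com",
    database="dev",
    user="awsuser",
    password="...",
)
cur = conn.cursor()

# Surface the replicated data as a local database in the warehouse.
cur.execute("CREATE DATABASE orders_replica FROM INTEGRATION '<integration-id>'")

# Query the near real-time copy with ordinary Redshift SQL.
cur.execute("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders_replica.sales.orders
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 7
""")
print(cur.fetchall())
```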

AWS re:Invent 2023 - Unlock insights on Amazon RDS data with zero-ETL to Amazon Redshift (DAT341)

DAT342 Introducing Amazon ElastiCache Serverless

The AWS re:Invent 2023 presentation introduced Amazon ElastiCache Serverless, a new addition to AWS's caching service portfolio. This launch aims to simplify cache management and operation for high-scale applications with an emphasis on simplicity, scalability, and cost-effectiveness.

The new ElastiCache Serverless eliminates the need for capacity planning. Users can set up a cache without configuring instance types, shards, or replicas, reducing operational overhead. The service scales both vertically and horizontally based on demand, ensuring there's always enough resource allocation for workloads. This scaling is proactive and predictive, based on usage patterns. It supports up to five terabytes per cache, with a pay-per-use pricing model, charging for the amount of data stored (in GB-hours) and the compute and network resources consumed (measured in ElastiCache Processing Units, ECPUs).
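
Creating one of these caches is intentionally small; a hedged boto3 sketch, with the cache name and optional usage limits as assumptions, looks like this:

```python
import boto3

ec = boto3.client("elasticache")

# Cache name and the optional usage limits are illustrative assumptions.
resp = ec.create_serverless_cache(
    ServerlessCacheName="session-store",
    Engine="redis",
    CacheUsageLimits={
        "DataStorage": {"Maximum": 10, "Unit": "GB"},   # cap stored data
        "ECPUPerSecond": {"Maximum": 5000},             # cap compute/network usage
    },
)

# The single endpoint to hand to Redis clients once the cache becomes available.
print(resp["ServerlessCache"].get("Endpoint"))
```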

The ElastiCache Serverless architecture leverages the Caspian platform, purpose-built for dynamic resizing of compute resources. Caspian platforms allow for instant resource scaling without fixed CPU or memory footprints, utilizing over-subscription techniques to allocate more resources than physically present. This is managed through a live aggregated view of all Caspian platforms, with a heat management system to identify and migrate cache nodes to different platforms as needed.

A significant feature of ElastiCache Serverless is its single endpoint connectivity. It uses a high-performance, reliable proxy built in Rust to handle all client requests, simplifying client-side operations. This proxy manages all connections to the cache nodes, including failover and scaling events, and supports a multiplexing solution that enables a single TCP channel to handle multiple client connections. The proxy facilitates read-replica operations, routing client requests to the nearest availability zone for reduced latency.

In summary, Amazon ElastiCache Serverless offers a simplified, scalable, and cost-effective caching solution. It's designed for high-scale applications, removing the complexity of cache operations and capacity planning. The service ensures optimal performance with its predictive scaling, Caspian platform utilization, and efficient proxy handling for seamless connectivity.

AWS re:Invent 2023 - [LAUNCH] Introducing Amazon ElastiCache Serverless (DAT342)

DAT343 Analyze Amazon Aurora PostgreSQL data in Redshift with zero-ETL

The AWS re:Invent 2023 session "Analyze Amazon Aurora PostgreSQL data in Amazon Redshift with zero-ETL (DAT343)" focused on the new zero-ETL integration capability between Amazon Aurora PostgreSQL and Amazon Redshift. The presenters, Niraj Rintala and Adam Levin, highlighted how AWS is working towards a future of zero Extract, Transform, Load (ETL) to enable seamless data analysis across different AWS database services.

The session began with an overview of operational analytics and its increasing importance for businesses to access near real-time analytics for better decision-making. They discussed various use cases for operational analytics, such as personalization, fraud detection, churn prevention, and insights into specific industries like gaming and IoT. The zero-ETL feature, which allows data to be easily analyzed from one system using another without managing complex data pipelines, was introduced as a key innovation. This capability is particularly useful for organizations with multiple databases for different applications, as it allows data from multiple Aurora databases to be consolidated into a single Redshift data warehouse.

The presentation then moved to the specifics of zero-ETL integrations, including the support for PostgreSQL. This integration is based on storage-level replication, supporting Data Manipulation Language (DML) operations (insert, update, delete) and metadata operations. The process is designed to be simple, with users only needing to specify the source and target, and the rest being handled automatically.

Finally, they demonstrated the practical application of this integration. This included creating a database in Aurora, setting up the zero-ETL integration, and then observing the replication of data in near real-time to Redshift. They showcased how once the data is in Redshift, users can leverage its powerful analytics capabilities, such as creating materialized views and AI/ML models for forecasting. The session concluded with a call for feedback on the public preview of the integration, emphasizing its potential to significantly simplify and accelerate the process of operational data analysis.

AWS re:Invent 2023 -Analyze Amazon Aurora PostgreSQL data in Amazon Redshift with zero-ETL (DAT343)

DAT344 Achieving scale with Amazon Aurora Limitless Database

The AWS re:Invent 2023 session on Amazon Aurora Limitless Database (DAT344) introduced the Amazon Aurora Limitless Database, focusing on scaling relational databases beyond the capacity of a single machine. The session, led by Christopher He, a product manager at Amazon Aurora, delved into the challenges of scaling databases, particularly in terms of write throughput. The common technique of sharding, which involves distributing data across multiple instances, was discussed in detail, highlighting its scalability benefits and the complexity it introduces, such as the need for application-level transaction management and consistency handling.

Amazon Aurora Limitless Database, available in limited preview for the PostgreSQL-compatible version of Aurora, aims to offer the scalability of a sharded database with the simplicity of a single database within a managed service. This serverless deployment automatically scales beyond the limits of a single instance, using a distributed architecture while presenting a unified interface to the user. The session explained the concept of shard tables and reference tables in this context, where shard tables distribute data across instances based on a shard key, and reference tables replicate data in full across all shards.

The technical deep dive into the Aurora Limitless Database's architecture revealed the integration of EC2 time sync for microsecond-precision bounded clocks, ensuring transaction consistency across the distributed system. The system supports read committed and repeatable read isolation levels, maintaining standard database semantics in a distributed environment. Queries in the limitless database are routed from a router to shards and back, with optimizations for single-shard operations. The session concluded with an invitation to join the preview of the Aurora Limitless Database, emphasizing its potential to revolutionize the scalability of relational databases in cloud environments.

AWS re:Invent 2023 - [LAUNCH] Achieving scale with Amazon Aurora Limitless Database (DAT344)

DAT345 Deep dive into RDS and RDS Custom for Oracle and SQL Server

The AWS re:Invent 2023 session on Amazon RDS and RDS Custom for Oracle and SQL Server focused on recent updates and best practices. The session started with an overview of Amazon RDS and RDS Custom, emphasizing their ease of use, cost-effectiveness, and high availability features. The speakers highlighted the key advantages of managed services in database management, such as automated backups, patching, and scaling, which free users from routine administrative tasks.

Significant updates to RDS for Oracle and SQL Server were discussed, including the support for Recovery Manager (RMAN) for Oracle, which aids in efficient database migration with physical backups, ensuring data consistency and ease of transfer. Oracle Multi-Tenant was also a focus, enabling users to manage multiple databases more effectively by consolidating them into a single database instance, thereby optimizing resource utilization and simplifying database management.

For RDS Custom, the session covered its applicability for traditional and custom applications requiring elevated database and operating system access. This includes support for bring-your-own-license (BYOL) options for SQL Server, enabling users to use their existing licenses with RDS Custom. The discussion also touched on the point-in-time recovery feature for SQL Server on RDS Custom, supporting up to 1000 databases per instance, beneficial for SaaS providers managing multiple customer databases.

The session concluded with insights into best practices for RDS and RDS Custom, emphasizing the importance of staying within supported configurations and automation modes to maintain high availability and effective management. Attendees were encouraged to engage in a Q&A session for further clarification and practical advice.

AWS re:Invent 2023 - Deep dive into Amazon RDS and RDS Custom for Oracle and SQL Server (DAT345)

DAT346 Ultra-low latency vector search for Amazon MemoryDB for Redis

At the AWS re:Invent 2023, the session "Ultra-low latency vector search for Amazon MemoryDB for Redis (DAT346)" introduced vector search in Amazon MemoryDB for Redis, emphasizing its applicability in AI and ML applications. The discussion began with an overview of Redis's capabilities beyond a key-value store, notably its support for complex data types and efficient server-side operations. The newly added vector search feature in MemoryDB was highlighted, underscoring its ability to offer ultra-low latency and high recall rates, making it ideal for AI applications where speed and accuracy are crucial.

The session delved into MemoryDB's architecture, explaining how it ensures data persistence and high availability across multiple availability zones using a distributed transaction log. This architecture, unique to MemoryDB, allows for impressive performance metrics, essential for demanding AI applications. The presenters also touched upon MemoryDB's robustness in handling failures and maintaining data consistency across its nodes.

Real-world applications of vector search in MemoryDB were demonstrated, including enhancing language models with Retrieval-Augmented Generation (RAG) for more accurate responses, building chatbots with contextual memory, and implementing efficient fraud detection systems. The session concluded with a demonstration, showcasing the significant improvement in response accuracy when using RAG with MemoryDB compared to standard language models. This illustrated MemoryDB's potential in various AI and ML scenarios, highlighting its speed, accuracy, and ease of integration.
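
The sketch below gives a flavor of the vector workflow using the FT.* search commands; treat the endpoint, index, field names, dimensions, and exact command syntax as assumptions rather than a verified MemoryDB recipe.

```python
import numpy as np
from redis.cluster import RedisCluster

# Endpoint, index, field names, and dimensions are illustrative assumptions;
# MemoryDB exposes vector search through the FT.* command family.
r = RedisCluster(host="clustercfg.my-memorydb.xxxxxx.memorydb.us-east-1.amazonaws.com",
                 port=6379, ssl=True)

# Index hash keys prefixed doc: with a 384-dimensional HNSW vector field.
r.execute_command(
    "FT.CREATE", "doc_idx", "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA", "content", "TEXT",
    "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", "384", "DISTANCE_METRIC", "COSINE",
)

# Store a document; the embedding would come from your embedding model of choice.
vec = np.random.rand(384).astype(np.float32)
r.hset("doc:1", mapping={"content": "Returns are accepted within 30 days.",
                         "embedding": vec.tobytes()})

# K-nearest-neighbour search for the 3 most similar documents.
query_vec = np.random.rand(384).astype(np.float32)
print(r.execute_command(
    "FT.SEARCH", "doc_idx", "*=>[KNN 3 @embedding $vec AS score]",
    "PARAMS", "2", "vec", query_vec.tobytes(),
    "RETURN", "2", "content", "score",
))
```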

AWS re:Invent 2023 - Ultra-low latency vector search for Amazon MemoryDB for Redis (DAT346)

DAT406 Amazon Neptune architectures for scale, availability, and insight

In this AWS re:Invent 2023 session, Ian Robinson, a principal graph architect at AWS, discusses advanced strategies for Amazon Neptune, focusing on scaling, availability, and enhancing insights for graph practitioners. He begins by addressing the challenge of scaling for success, where growing user bases and complex queries necessitate scaling resources. Robinson explains how to identify scaling needs, such as monitoring worker thread activity and buffer cache churn, and offers solutions like scaling up with larger instances or scaling out with additional read replicas. He also introduces Neptune Serverless, a dynamic scaling feature, and advises on its appropriate use cases.

Robinson then shifts to improving availability, particularly during Neptune engine updates. He introduces the Neptune blue-green deployment method, which minimizes downtime by cloning the production (blue) cluster to create a green environment, upgrading the green cluster, and then switching over once it is stable. This method is especially useful for major version updates. The session concludes with a focus on tools for graph practitioners, including the Graph Notebook and Graph Explorer for query authoring and visualization. Additionally, Robinson explores the integration of generative AI with Neptune, demonstrating how large language models can assist in query generation, data model design, and retrieval-augmented generation, thus opening new possibilities for deriving insights from graph data.

AWS re:Invent 2023 - Amazon Neptune architectures for scale, availability, and insight (DAT406)

DAT407 Best Practices for querying vector data for gen AI apps in PostgreSQL

In this AWS re:Invent 2023 session, Jonathan Katz presents a deep dive into vector search and retrieval, particularly in the context of PostgreSQL databases and their application in generative AI. He introduces the concept of foundation models in AI, which are large-scale machine learning models trained on vast datasets, often publicly available. Katz emphasizes the importance of these models in enhancing AI applications, particularly through techniques like retrieval-augmented generation (RAG). RAG leverages foundation models alongside private database data to enrich AI responses, a process involving vector transformation and storage.

Katz delves into PostgreSQL (Postgres) as a viable platform for vector storage and retrieval, highlighting its robust enterprise capabilities and extensibility via open-source extensions. He introduces pgvector, an extension that enables vector search in Postgres, and discusses its two index types, IVFFlat and HNSW. These have distinct characteristics: IVFFlat is k-means-based and faster to build, while HNSW is graph-based, offering higher query performance and recall. Katz outlines the trade-offs between storage, performance, relevancy, and cost in vector data management. He further explains how Postgres handles large vector data through TOAST (The Oversized-Attribute Storage Technique) and provides best practices for managing vectors in pgvector, emphasizing the balance between index-building effort and query performance for HNSW, and tuning parameters for IVFFlat. The session underscores the complexities and nuances of working with vector data in database systems, particularly for AI-driven applications.
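
A condensed pgvector sketch, using toy 3-dimensional vectors and placeholder connection details, shows the HNSW knobs Katz discusses (m and ef_construction at build time, ef_search at query time); real embeddings are typically hundreds or thousands of dimensions.

```python
import psycopg2

# Connection details, table name, and the toy 3-dimensional vectors are assumptions.
conn = psycopg2.connect(host="mydb.cluster-xxxx.us-east-1.rds.amazonaws.com",
                        dbname="app", user="app_user", password="...")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(3)
    )
""")

# HNSW: slower, more memory-hungry index build, but high recall and query speed.
cur.execute("""
    CREATE INDEX IF NOT EXISTS documents_embedding_hnsw
    ON documents USING hnsw (embedding vector_cosine_ops)
    WITH (m = 16, ef_construction = 64)
""")

cur.execute("INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
            ("returns policy", "[0.11, 0.02, 0.91]"))

# ef_search trades recall for latency at query time; <=> is cosine distance.
cur.execute("SET hnsw.ef_search = 100")
cur.execute(
    "SELECT id, content FROM documents ORDER BY embedding <=> %s::vector LIMIT 5",
    ("[0.10, 0.05, 0.90]",),
)
print(cur.fetchall())
conn.commit()
```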

AWS re:Invent 2023 - Best practices for querying vector data for gen AI apps in PostgreSQL (DAT407)

DAT408 Deep Dive into Amazon Aurora and its Innovations

In the AWS re:Invent 2023 session "Deep Dive into Amazon Aurora and its Innovations" (DAT408), Graham McAllister, a senior principal engineer at AWS, explores the architecture, features, and recent innovations of Amazon Aurora, a cloud-native database for MySQL and PostgreSQL. He begins by describing Aurora's unique storage system, which is distributed across multiple availability zones for enhanced durability and relies on a write-to-log mechanism, ensuring data integrity even during partial system failures. This architecture also facilitates efficient reads and automatic data replication and repair. McAllister then delves into the latest features of Aurora, including global database configurations for cross-region data replication and disaster recovery, enhancements in Aurora's MySQL and PostgreSQL versions, and tools for simplified database management, like the blue-green deployment model.

McAllister also introduces new capabilities aimed at improving performance and manageability for I/O-heavy workloads. This includes optimized read features like tiered caching, which leverages local NVMe storage for faster data access, and the Aurora I/O-Optimized storage configuration, which provides predictable pricing and better performance for I/O-intensive applications. He also discusses Aurora's innovative approach to sharding with the "Limitless Database" concept, which addresses the complexity of traditional sharding methods by automating re-sharding processes, managing consistency across shards, and scaling effectively. This approach leverages global clocks for transaction consistency and integrates features like distributed transaction routers for managing queries and updates across shards. In conclusion, McAllister highlights these advanced features and their implications for Aurora users, emphasizing the database's scalability, reliability, and efficiency.

AWS re:Invent 2023 - Deep dive into Amazon Aurora and its innovations (DAT408)

DAT409 Hyperscaling databases on Amazon Aurora

The video "AWS re:Invent 2023 - Hyperscaling databases on Amazon Aurora (DAT409)" covers strategies for managing rapid growth and hyperscaling in databases, particularly focusing on Amazon Aurora. The speaker, a lead in the Aurora service team, shares insights from helping hundreds of AWS customers with database scaling. Key points include:

1. **Handling Rapid Growth**: Initially, the talk emphasizes the challenge of hyperscaling, particularly when application traffic and database load increase exponentially. For applications experiencing fast growth, scaling the application tier might be easier than scaling the database. The speaker shares techniques used by AWS customers to successfully handle this growth, including handling the initial surge of traffic, optimizing database workload, scaling with microservices, and using Amazon Aurora's limitless database.

2. **Amazon Aurora's Architecture**: The presentation dives into the specifics of Amazon Aurora, a cloud-built relational database noted for its manageability, security, high availability, and scalability. A crucial aspect of Aurora's architecture is the separation of the compute layer from the storage layer, allowing independent scaling. Aurora's storage layer ensures durability by storing six copies of data across three availability zones, scaling in 10 GB segments to a maximum size of 128 TB.

3. **Scaling and High Availability**: The talk further explores strategies for scaling databases, starting with 'scaling up' by moving to larger Aurora instances. High availability is addressed by adding Aurora read replicas and using a connection pooling tier to manage increased database connections. Tools like RDS Proxy are recommended for efficient scaling and failover times.

4. **Optimization and Cost Management**: The speaker underscores the importance of database tuning, error management, and cost optimization. AWS provides tools like Performance Insights and Amazon DevOps Guru for RDS to aid in this process. Aurora's serverless and reader autoscaling features help manage variable workloads, enhancing cost management.

5. **Microservices and Database Separation**: As applications grow, aligning the database architecture with a microservice approach becomes essential. The session suggests refactoring the data model and database endpoints to align with individual microservices, improving agility and scaling capabilities.

6. **Sharding and Horizontal Scaling**: For even larger scales, the talk shifts to horizontal sharding, explaining how to partition data across multiple Aurora clusters. This includes mapping partitions, managing backups, and ensuring consistency across shards (a minimal application-side routing sketch follows this list).

7. **Aurora Limitless Database**: The final segment introduces the Aurora Limitless Database, a new feature designed for easy sharding with transaction-consistent backups and integrated routing logic. It simplifies sharding by handling data movement and query routing, offering scalable writes and reads.
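
Before Aurora Limitless, the routing described in point 6 usually lives in the application; here is a minimal sketch, assuming four Aurora cluster endpoints and a customer-ID shard key.

```python
import hashlib

# Endpoint names and the shard count are illustrative assumptions.
SHARD_ENDPOINTS = [
    "orders-shard-0.cluster-xxxx.us-east-1.rds.amazonaws.com",
    "orders-shard-1.cluster-xxxx.us-east-1.rds.amazonaws.com",
    "orders-shard-2.cluster-xxxx.us-east-1.rds.amazonaws.com",
    "orders-shard-3.cluster-xxxx.us-east-1.rds.amazonaws.com",
]

def shard_for(customer_id: str) -> str:
    """Stable hash on the shard key keeps all of a customer's rows on one cluster."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARD_ENDPOINTS[int(digest, 16) % len(SHARD_ENDPOINTS)]

print(shard_for("customer-42"))  # the application connects to this endpoint
```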

In summary, the session provided a comprehensive guide on scaling databases with Amazon Aurora, covering aspects from initial growth management to advanced sharding techniques, emphasizing Aurora's architecture and features that facilitate scalability and high availability.

AWS re:Invent 2023 - Hyperscaling databases on Amazon Aurora (DAT409)

DAT410 Advanced data modeling with Amazon DynamoDB

The speaker, Alex, begins with the fundamentals, explaining key concepts like tables, items, primary keys, and attributes, while emphasizing the critical role of primary key selection in data distribution and access strategies. He highlights the unique aspects of DynamoDB, particularly its multi-tenant architecture, where all tables in a region share infrastructure, emphasizing the significance of index usage in every database request. The talk also covers DynamoDB's default mode of eventually consistent reads, beneficial for optimizing cost and performance, and its distinctive operation-based pricing model, which charges based on read/write operations instead of resource allocation.

In exploring advanced data modeling patterns, Alex presents three complex scenarios: searching flight options, booking flights, and performing complex filtering. The first scenario addresses finding direct and connecting flights, suggesting a solution that involves fetching and locally processing flight data. The flight booking example illustrates the use of DynamoDB transactions for handling multiple item operations and AWS Step Functions for managing complex workflows. The final part discusses the challenges of complex filtering in DynamoDB, particularly when dealing with multiple optional attributes. Alex suggests strategies such as over-fetching combined with client-side filtering, utilizing reduced projection in secondary indexes, and employing external systems like ElasticSearch for handling large data sets. The talk concludes by reiterating the importance of understanding DynamoDB's unique features and the necessity of careful data modeling for effective database management.
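
To ground the filtering discussion, here's a small sketch of over-fetching a partition and finishing the filter client-side; table, key, and attribute names are assumptions.

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

# Table, key, and attribute names are illustrative assumptions.
table = boto3.resource("dynamodb").Table("flights")

# Over-fetch the partition for a route and day, letting DynamoDB apply the
# cheap predicate; FilterExpression trims the payload but not the read cost.
resp = table.query(
    KeyConditionExpression=Key("PK").eq("ROUTE#BOS-SEA#2023-11-27"),
    FilterExpression=Attr("seats_available").gt(0),
)

# The remaining, truly optional predicates run client-side.
flights = [
    f for f in resp["Items"]
    if f.get("cabin") == "economy" and f.get("stops", 0) <= 1
]
print(len(flights), "candidate flights")
```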

AWS re:Invent 2023 - Advanced data modeling with Amazon DynamoDB (DAT410)

Conclusion

These are summaries of all the 300 and 400 level DAT sessions. We hope you found these helpful in both getting an overview of the new DAT content as well as deciding which sessions to go watch.


Brian Tarbox

Brian is an AWS Community Hero, Alexa Champion, runs the Boston AWS User Group, has ten US patents and a bunch of certifications. He's also part of the New Voices mentorship program, where Heroes teach traditionally underrepresented engineers how to give presentations. He is a private pilot, a rescue scuba diver, and got his Masters in Cognitive Psychology working with bottlenose dolphins.
