AWS re:Invent 2023 Keynote Highlights

Swami Sivasubramanian delivered a thrilling keynote on the latest innovations in databases, analytics, generative AI, and machine learning, and the ways they are accelerating business productivity and impact.


Swami Sivasubramanian presented the ML keynote this morning and showed us where AWS is headed in what he called the “very early days” of this journey.

Humans + Data + GenAI

Swami started by providing historical context for GenAI. Though it seems like the last six months (or two weeks) have been earth-shattering, Swami took us back to the days of Ada Lovelace and the first “computing machines”. That context let him give us perspective on the journey we’re now on. “It’s very early days,” he said several times. This leads to the observation that “no one model will rule them all”. We already have many foundation models, each with strengths and weaknesses, and their number will only increase over time. This multiplicity almost demands a tool like Bedrock, which provides an abstraction layer on top of the underlying LLMs.
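
To make that abstraction layer concrete, here is a minimal sketch (assuming boto3 credentials and a region where Bedrock is available) of listing the foundation models Bedrock exposes through a single API:

```python
import boto3

# The "bedrock" control-plane client handles model discovery and management;
# the separate "bedrock-runtime" client (used later) handles inference calls.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# One API surfaces models from multiple providers (Amazon, Anthropic, Meta,
# Cohere, and others), each with its own strengths and weaknesses.
response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["providerName"], model["modelId"])
```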

In addition, AWS is addressing customers’ need for help in designing new applications via the GenAI Innovation Center, which pairs customers with AWS experts to find and design new business applications.

A related theme is that both GenAI and the humans who create and use it must evolve together. This involves a three-layer stack of infrastructure, tools, and applications. As the talk continued, he showed how AWS is addressing each layer.

Bedrock and Titan

Bedrock continues to evolve in both its capabilities and the models it supports. Bedrock now supports Anthropic’s Claude 2.1, which provides a 200k-token context window, about 400 pages of a single-spaced book. Claude 2.1 has also cut its hallucination rate in half and improved its performance by 25%.

Bedrock also now supports Meta’s Llama 2 70B, a model with 70 billion parameters trained on 2 trillion tokens.
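
As a hedged sketch of what calling one of these models looks like through the Bedrock runtime: the model ID and request-body shape below follow the provider-specific formats documented at the time, so treat them as assumptions to verify against current docs. Switching models is largely a matter of changing the modelId and the provider-specific body.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude's 2023-era text-completion format: a Human/Assistant prompt plus
# max_tokens_to_sample. Llama 2 (e.g. "meta.llama2-70b-chat-v1") would use
# its own body format with the same invoke_model call.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the re:Invent 2023 ML keynote in two sentences.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = runtime.invoke_model(
    modelId="anthropic.claude-v2:1",  # Claude 2.1 on Bedrock (assumed ID)
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```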

AWS’s Titan is now a family of models, including Text Lite, Text Express, and Image Generator. One notable feature of Image Generator is support for invisible watermarks. This supports the goal of responsible AI by letting consumers of a generated image verify that it was machine-generated, which can help defend against misuses such as deepfakes.

Image generation also supports “outpainting”, which instructs the model to extend or evolve an image, for example by adding a new background. Swami showed the model creating the image of a lizard and then placing that lizard in a rainforest.
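
A sketch of how that outpainting flow might look from code follows; the taskType, parameter names, and model ID are assumptions based on the Titan Image Generator request format, so verify them against the current Bedrock documentation.

```python
import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Load a source image (the lizard) and ask the model to paint a new
# background (the rainforest) around the masked subject.
with open("lizard.png", "rb") as f:
    source_image = base64.b64encode(f.read()).decode("utf-8")

body = json.dumps({
    "taskType": "OUTPAINTING",                  # assumed task type name
    "outPaintingParams": {                      # assumed parameter names
        "image": source_image,
        "maskPrompt": "lizard",                 # keep the subject, repaint the rest
        "text": "a lush rainforest, soft morning light",
    },
    "imageGenerationConfig": {"numberOfImages": 1},
})

response = runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",  # assumed model ID
    body=body,
)
images = json.loads(response["body"].read())["images"]
with open("lizard_in_rainforest.png", "wb") as f:
    f.write(base64.b64decode(images[0]))
```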

AWS also announced the ground-breaking Amazon Titan Multimodal Embeddings, which generates embeddings from images and text. It enables end users to search for images using other images or text, empowering use cases that require contextually relevant images, such as recommendation experiences and enterprise search.
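
A minimal sketch of generating such an embedding for image-plus-text search; the model ID and request-body keys are assumptions to check against the docs.

```python
import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Embed an image (and optional text) into one vector; store these vectors in a
# vector index and query it with the embedding of a text phrase or another
# image to find contextually similar items.
with open("product_photo.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = json.dumps({
    "inputImage": image_b64,                 # assumed key names
    "inputText": "red leather handbag",      # optional text paired with the image
})

response = runtime.invoke_model(
    modelId="amazon.titan-embed-image-v1",   # assumed model ID
    body=body,
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # vector dimension
```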

Acting on their belief that fine-tuning is the vehicle for customers to realize the power of their data and gain market differentiation, AWS announced that Bedrock now supports fine-tuning, which lets you make a copy of an existing model and train it on data for your specific use case. Your application then uses this new model, and your data is never shared with AWS. Many models from AWS, Meta, Cohere, and other Bedrock-hosted third parties are now fully fine-tunable, with fine-tuning for Anthropic’s Claude coming soon. Meanwhile, customers interested in fine-tuning Claude can work with AWS experts via the Custom Model Program to accomplish this goal.

AWS also introduced continued pre-training for dealing with large or highly volatile data sets, keeping your models relevant as your data changes.
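
A hedged sketch of kicking off either kind of customization job with the Bedrock control-plane API: the base-model identifier, role, bucket paths, and hyperparameter names are placeholders or assumptions, to be checked against the current create_model_customization_job documentation.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Fine-tuning copies a base model and trains it on your labeled examples.
# Continued pre-training would instead use
# customizationType="CONTINUED_PRE_TRAINING" with a corpus of unlabeled
# domain text.
bedrock.create_model_customization_job(
    jobName="orders-assistant-finetune",                  # placeholder
    customModelName="orders-assistant-v1",                # placeholder
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",   # assumed base model ID
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},  # placeholder
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},        # placeholder
    hyperParameters={  # assumed hyperparameter names for Titan text models
        "epochCount": "2",
        "learningRate": "0.00001",
        "batchSize": "1",
    },
)
```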

As you use Bedrock models, it is important to evaluate how well they are doing. Bedrock now supports Model Evaluation, which lets designers evaluate and compare models and provide feedback.

Q and New Vector Support

Vector embeddings are key to building applications with LLMs, so vector support is critical to their evolution. Today, AWS expanded vector support to Amazon Aurora, DynamoDB, DocumentDB, and OpenSearch Serverless, enabling customers to store source data and vector data in the same database and eliminating the need to learn new databases and APIs.

They also announced that MemoryDB for Redis now supports vectors. This gives AWS-backed GenAI applications single-digit-millisecond latency with 99% recall accuracy, providing unparalleled performance among vector databases.
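
For Aurora PostgreSQL, that vector support comes through the pgvector extension, which keeps source rows and their embeddings in the same table. A minimal sketch, run from Python with psycopg2; the connection details, table, and column sizes are illustrative.

```python
import psycopg2

# Placeholder connection details for an Aurora PostgreSQL cluster
# where the pgvector extension is available.
conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
    dbname="products", user="app", password="...",
)
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS product_docs (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(1024)  -- the same table holds source text and its vector
    );
""")
conn.commit()

# Nearest-neighbor search: <-> is pgvector's distance operator, so the query
# embedding (computed with, say, Titan embeddings) finds the most similar rows.
query_embedding = [0.01] * 1024  # placeholder vector
vector_literal = "[" + ",".join(map(str, query_embedding)) + "]"
cur.execute(
    "SELECT id, body FROM product_docs ORDER BY embedding <-> %s::vector LIMIT 5;",
    (vector_literal,),
)
print(cur.fetchall())
```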

Q brings GenAI capabilities to a diverse range of products. This includes Redshift, where a user can design a query via a GenAI prompt such as “Show me the division with the highest return rate over the last two months”. In QuickSight, Q can then turn the results of that query into a dashboard, which can be customized with various widgets, such as graphs and text. The generated text can be modified with instructions such as “lengthen”, “shorten”, or even “turn into bullet points”.

Other related enhancements include a 20x performance improvement for vector queries using Aurora Optimized Reads. Also, SageMaker introduced distributed training with HyperPod, which addresses the issue of failure during long-running training jobs. Training jobs can run for days or weeks and involve hundreds of machines, and hardware failures at that scale can seriously affect a training job’s success rate. HyperPod provides automatic checkpointing, which reduces the impact of machine failures.
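
To see why checkpointing matters at that scale, here is a small illustrative sketch (not HyperPod’s actual interface) of a training loop that saves periodic checkpoints, so a resumed job only loses the steps since the last save.

```python
import os
import pickle

CHECKPOINT = "checkpoint.pkl"  # illustrative local path

def train(total_steps=100_000, checkpoint_every=10_000):
    # Resume from the last checkpoint if a previous run was interrupted
    # by a hardware failure; otherwise start from scratch.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "weights": None}

    for step in range(state["step"], total_steps):
        # ... one training step updating state["weights"] would go here ...
        if (step + 1) % checkpoint_every == 0:
            state["step"] = step + 1
            with open(CHECKPOINT, "wb") as f:
                pickle.dump(state, f)  # at most checkpoint_every steps are lost

train()
```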

Summary

These really are very early days, and we are just beginning to see the sorts of new applications that are possible. Toyota showed an application that ingested a car owner’s manual into a model and then allowed a driver to ask questions in natural speech, such as “What does that indicator light mean?”. These are uses that would not have been possible, or even imagined, a few months ago.

In another example, we were shown a system that could synthesize airline travel data, such as lost luggage, flight delays, and bad weather, and make intelligent decisions and predictions using a combination of manual actions and zero-ETL pipelines.

Emphasizing GenAI’s ability to help the human condition, Swami described a program that uses GenAI to allow doctors facing an overwhelming number of cancer patients to treat them more efficiently.

Swami returned to the GenAI + Data + Humans theme at the close, showing a rainforest and how its various birds and animals interact to create a whole ecosystem. He said that humans and AI could and should create a similar system, and that AWS was proud to be one of the companies leading the way.

Swami continued on the human side of things by discussing AWS’s education programs with partners such as Udacity. He said that soft skills would be part of the great skills revolution in the field.

Conclusion

AWS is well-positioned to push the boundaries of what GenAI can do while keeping humans in the loop. Some people fear that GenAI will replace or eliminate jobs, but a brighter possibility is that we evolve, along with the software, to do greater and more impactful things. An example of that is PartyRock, a Bedrock playground. Without even needing an AWS account, you can use a fun and intuitive user interface to explore GenAI and create real applications that can be shared.

Khobaib Zaamout

Dr. Khobaib Zaamout is the Principal Architect for AI Strategy at Caylent, where his main focus lies in AI/ML and generative AI. He brings a solid background with over ten years of experience in software, data, and AI/ML. Khobaib has earned a master’s in machine learning and holds a doctorate in data science. His professional journey also involves extensive consulting, solutioning, and leadership roles. Based in Chestermere, Alberta, Canada, Khobaib enjoys a laid-back life. Outside of work, he likes cooking for his family and friends and finds relaxation in camping trips to the Rocky Mountains.

Brian Tarbox

Brian is an AWS Community Hero and Alexa Champion, runs the Boston AWS User Group, and has ten US patents and a bunch of certifications. He’s also part of the New Voices mentorship program, where Heroes teach traditionally underrepresented engineers how to give presentations. He is a private pilot, a rescue scuba diver, and got his master’s in cognitive psychology working with bottlenose dolphins.
