What is Amazon Personalize?
Amazon Personalize is a fully managed machine learning service provided by Amazon Web Services (AWS). It enables developers to create and deploy personalized recommendations for applications without requiring extensive machine learning expertise.
Amazon Personalize is used to build recommendation systems for various applications, including:
- E-commerce product recommendations
- Content personalization for streaming services
- Personalized marketing campaigns
- Custom news feeds and article recommendations
The service uses machine learning algorithms to analyze user behavior data and generate personalized recommendations in real-time. It can handle various types of data, including user interactions, item metadata, and user attributes, to create tailored experiences for each user.
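As a concrete illustration, here is a minimal sketch of fetching real-time recommendations from a deployed Personalize campaign with boto3; the campaign ARN and user ID are placeholders you would replace with your own:

```python
def get_top_k_recommendations(user_id, campaign_arn, k=10, client=None):
    """Return the top-k recommended item IDs for a user from a deployed campaign."""
    if client is None:
        import boto3  # deferred so the helper can also be exercised with a stub client
        client = boto3.client("personalize-runtime")
    response = client.get_recommendations(
        campaignArn=campaign_arn,
        userId=user_id,
        numResults=k,
    )
    # Each entry in itemList carries the recommended itemId (and, for some recipes, a score).
    return [item["itemId"] for item in response["itemList"]]
```

Injecting the client makes the helper easy to test and lets you reuse a single boto3 client across calls.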
Amazon Personalize's Content Generator
Amazon Personalize now offers the Content Generator, a generative AI feature that transforms traditional item lists into dynamic, themed recommendations. Presenting recommendations as captivating themed experiences can significantly boost user engagement and sales.
These themes can be particularly effective in promotional contexts, such as a ‘Summer Adventures’ merchandise campaign for outdoor gear and accessories, making promotions more specific and more engaging.
What is Amazon Bedrock?
Amazon Bedrock is a fully managed service that provides access to high-performing foundation models (FMs) from leading AI companies through a single API. It offers a comprehensive set of tools and capabilities for building and scaling generative AI applications.
Amazon Bedrock is used for:
- Accessing state-of-the-art foundation models: It provides access to models from AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon's own Titan family.
- Customizing models: Users can fine-tune these models on their own data to create tailored solutions for specific use cases.
- Building generative AI applications: Developers can use Bedrock to create applications for various tasks such as text generation, summarization, and image creation.
- Ensuring privacy and security: Bedrock offers enterprise-grade security features, including private endpoints and data encryption.
By integrating Amazon Bedrock with other AWS services, developers can build powerful, scalable, and secure generative AI applications that leverage the latest advances in language models and AI technology.
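To make this concrete, here is a minimal sketch of invoking one of Amazon's Titan text models through the Bedrock runtime API with boto3; the model ID and generation parameters are illustrative defaults:

```python
import json

def build_titan_request(prompt, max_tokens=256, temperature=0.5):
    """Build the JSON request body for an Amazon Titan text model invocation."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

def generate_text(prompt, client=None, model_id="amazon.titan-text-express-v1"):
    """Invoke a Titan text model and return the generated text."""
    if client is None:
        import boto3  # deferred so the function can be exercised with a stub client
        client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=build_titan_request(prompt),
    )
    # The streaming body is read once and parsed; Titan returns a list of results.
    payload = json.loads(response["body"].read())
    return payload["results"][0]["outputText"]
```

Keeping the request-building logic in a pure function makes it straightforward to unit-test without touching AWS.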
Architectural Overview: Amazon Personalize with Amazon Bedrock
The architecture for integrating Amazon Personalize with generative AI via Amazon Bedrock involves several key components working together:
1. Data Collection and Processing: The first step is gathering user interaction data, item metadata, and other relevant information. This data serves as the foundation for training the recommendation models. Here, you can use services like Amazon S3 for storage, and AWS Glue for data transformation.
2. Model Training with Amazon Personalize: Amazon Personalize allows you to quickly train personalized recommendation models. It automatically optimizes the models based on the provided data, ensuring that the recommendations improve over time.
3. Feature Enhancement with Amazon Bedrock: This is where generative AI comes into play. You can use Amazon Bedrock to enhance item features or generate new content. For instance, you can enrich the dataset for Amazon Personalize by generating synthetic user reviews based on product features, or by creating detailed item descriptions that highlight the aspects users care about.
4. Integration and Deployment: We can then integrate the enriched recommendations into the application. This can be done through APIs, ensuring seamless delivery of personalized recommendations to end-users.
5. Feedback Loop: It is important to establish a continuous feedback mechanism to monitor user interactions with promotions. We can feed this data back into Amazon Personalize to refine and optimize the models, ensuring that the promotions remain effective and user-centric over time.
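As a sketch of how the feature-enhancement step feeds back into data collection, the helpers below (all names hypothetical) build an enrichment prompt from item metadata and merge the generated description back in before the item is re-ingested:

```python
def build_enrichment_prompt(item):
    """Compose a prompt asking an LLM (via Amazon Bedrock) for a richer item description."""
    features = ", ".join(f"{key}: {value}" for key, value in sorted(item["features"].items()))
    return (
        f"Write a concise, benefit-focused product description for '{item['name']}' "
        f"using these attributes: {features}."
    )

def enrich_item(item, generated_description):
    """Attach the generated text to a copy of the item metadata before re-ingestion."""
    enriched = dict(item)  # copy, so the original record is left untouched
    enriched["description"] = generated_description
    return enriched
```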
Case Study: Personalized Promotions (PoC) and Customer Experiences
Let’s look at a project we conducted with a confidential client: a proof of concept (PoC) testing the use of GenAI in recommendation systems. Spoiler alert: the concept turned out to be unproven, and, surprisingly, that was great news.
Understanding the Context
In today's digital marketplace, the demand for personalized customer experiences has never been higher, yet traditional promotional strategies often lack the targeted engagement needed to optimize conversion rates. The client therefore wanted to adopt GenAI as the core of their recommendation platform to find new data patterns, improve recommendation quality, and develop new product offerings. This gap between customer expectations and the actual shopping experience highlights a critical need for innovation in how promotions are crafted and delivered.
Business Objectives
The objective was to leverage AI to enhance the personalization of promotional content. By doing so, the client aimed to:
- Enhance customer engagement and loyalty by providing customers with promotions that match their individual needs.
- Improve conversion rates by making promotion targeting more effective and consequently maximizing marketing ROI.
Strategic Goals for the PoC
To achieve the business objectives given the context above, we started a PoC to personalize promotions for each customer. The PoC goals were to:
- Validate the concept by demonstrating whether GenAI-driven promotions are more appropriate than those created using other approaches.
- Assess the best architecture for a scalable implementation of the recommendation system.
PoC Implementation
We relied on the following datasets:
- Historical transaction data
- Customer demographics
- Mobility data
- Past promotions
Also, we used pre-trained LLMs from Amazon Bedrock, enabling us to jumpstart the personalization process. The development was divided into five phases:
1. Data Collection & Preparation
The initial phase involved filtering data, selecting relevant columns, and merging tables to create a unified dataset that was ready for analysis and model training.
Amazon S3 hosted the compressed CSV files containing the data tables for customers, visits, transactions, and promotions, while a SageMaker Studio notebook accessed the data from S3, performed the necessary preprocessing, and prepared the data for prompt engineering.
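A minimal sketch of this table-merging step, assuming the CSV files have already been downloaded from S3; the column names are hypothetical:

```python
import csv
import io

def load_table(csv_text):
    """Parse a CSV export (e.g. retrieved from S3) into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def inner_join(left, right, key):
    """Merge two tables on a shared key column, keeping only rows present in both."""
    index = {row[key]: row for row in right}
    return [{**index[row[key]], **row} for row in left if row[key] in index]
```

In practice a library such as pandas would handle this at scale; the point here is simply that the tables are filtered and merged into one unified dataset.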
2. Prompt Engineering
We added context using demographics and past transaction data, enriching the model's ability to generate personalized content.
We re-coded demographic fields to ensure the model would interpret them accurately and converted the aggregated data from the previous step into natural language that the Claude v2.1 large language model (LLM) could process. The LLM then generated the top-k recommendations.
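A sketch of what this re-coding and natural-language conversion might look like; the demographic codes, field names, and wording are all hypothetical:

```python
# Hypothetical re-coding of internal demographic codes into labels an LLM can interpret.
AGE_BANDS = {"A": "18-25", "B": "26-40", "C": "41-60", "D": "60+"}

def customer_to_prompt(customer, k=3):
    """Convert aggregated customer data into a natural language prompt for the LLM."""
    age = AGE_BANDS.get(customer["age_band"], "unknown age")
    categories = ", ".join(customer["top_categories"])
    return (
        f"The customer is in the {age} age range and most frequently buys: {categories}. "
        f"They spent ${customer['spend_90d']:.2f} over the last 90 days. "
        f"Suggest the top {k} promotions for this customer, one per line."
    )
```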
3. Set Up the LLM
We set up the Claude v2.1 model in the SageMaker environment via the Bedrock API to run inference using the prompts engineered in the previous step. With Claude v2.1, we ran multi-step queries to dive deep into each customer's individuality.
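A minimal sketch of invoking Claude v2.1 through the Bedrock runtime API, using the Human/Assistant text-completion format that the Claude v2.x models expect; the parameter defaults are illustrative:

```python
import json

def build_claude_body(prompt, max_tokens=512, temperature=0.2):
    """Wrap a prompt in the Human/Assistant turns expected by the Claude v2.x text API."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": temperature,
    })

def invoke_claude(prompt, client=None, model_id="anthropic.claude-v2:1"):
    """Run one inference against Claude v2.1 via the Bedrock runtime and return the text."""
    if client is None:
        import boto3  # deferred so the function can be exercised with a stub client
        client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id, body=build_claude_body(prompt))
    return json.loads(response["body"].read())["completion"]
```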
4. Generating Recommendations
Finally, we had a model inference feeding the formatted customer data into the LLM to generate promotion recommendations.
For example, the model might generate: "For customer 123, offer a promo code giving a 20% discount on electronics, valid for the next two weeks."
Then, we post-processed it, converting the model's natural language output into a structured format for practical use, for example by parsing the text to extract promotion details and mapping them to existing offer IDs in the database.
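A sketch of this post-processing step for outputs shaped like the example above; real model output varies, so production parsing would need to be considerably more defensive:

```python
import re

# Pattern matching outputs of the form
# "For customer 123, ... 20% discount on electronics, ..." (hypothetical format).
PROMO_PATTERN = re.compile(
    r"customer (?P<customer_id>\w+).*?(?P<discount>\d+)% discount on (?P<category>[\w ]+?),"
)

def parse_promotion(text):
    """Extract structured promotion details from the model's natural language output."""
    match = PROMO_PATTERN.search(text)
    if match is None:
        return None  # flag for manual review instead of guessing
    return {
        "customer_id": match.group("customer_id"),
        "discount_pct": int(match.group("discount")),
        "category": match.group("category").strip(),
    }
```

The structured dict can then be matched against existing offer IDs in the database.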
5. Evaluation and Refinement
The final step was setting up metrics to measure the effectiveness of the recommendations (e.g., conversion rate, customer engagement, ROI). We recommended testing and iterating: running tests with a control group alongside the recommended promotions to evaluate performance, then refining the model based on the results.
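A minimal sketch of the conversion-rate comparison, measuring the recommended promotions against a control group; the counts below are purely illustrative:

```python
def conversion_rate(conversions, exposures):
    """Share of customers who redeemed a promotion out of those who received it."""
    return conversions / exposures if exposures else 0.0

def relative_uplift(treatment_rate, control_rate):
    """Relative improvement of the recommended promotions over the control group."""
    if control_rate == 0:
        return float("inf") if treatment_rate > 0 else 0.0
    return (treatment_rate - control_rate) / control_rate
```

A positive uplift supports rolling the recommendations out more widely; a flat or negative one feeds back into model refinement.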