Explore Amazon Bedrock & Amazon SageMaker AI, differences in use case, setup, data protection, and cost.
In the realm of Artificial Intelligence (AI), a fascinating and innovative branch has taken center stage: Generative AI. This state-of-the-art technology enables machines not only to gain insights from data but to create entirely new and original content, blurring the lines between human-generated and machine-generated creations. From art and music to text and images, the potential applications of Generative AI are pushing the boundaries of what can be achieved in both business and creative work.
As GenAI is still a new technology, it is easy to be unsure how it applies to a specific business and which tools and services allow us to build customized solutions on top of it. Fortunately, AWS offers two services that address this point: Amazon SageMaker AI and Amazon Bedrock. In this blog we will discuss these two services and the main differences between them.
Both Bedrock and SageMaker AI are managed ML services with usage-based pricing for developing solutions around ML tasks. They have similarities and differences, and together they cover a full stack of solutions for the ML realm.
SageMaker AI, the older of the two options, offers a full suite of ML services. With it, you can implement solutions for a wide variety of use cases, from classical ML like classification and regression to more complex tasks like Generative AI and computer vision. The main point to keep in mind is that it requires more maintenance and more hands-on skills for certain tasks.
On the other hand, Bedrock, one of the newest services in the AWS ecosystem, provides strong off-the-shelf capabilities that are well-suited for lean teams. Bedrock exposes a serverless API, and its use is limited to Generative AI tasks.
Amazon Bedrock is a serverless API service designed to simplify the development and deployment of Generative AI applications built on foundation models (FMs). These foundation models are large models trained on massive datasets of text and code. They are versatile and can be harnessed for a multitude of applications, such as generating text, translating languages, producing creative content, and providing informative answers to questions.
With Bedrock you get access to a number of notable features, including:
Amazon SageMaker AI is a fully managed machine learning (ML) platform designed to empower developers and data scientists to rapidly create, train, and deploy ML models. SageMaker AI offers an extensive toolkit and features covering the entire ML lifecycle, from data preparation and feature engineering to model training and deployment. It also includes a diverse selection of pre-built models, algorithms and sample solutions suitable for various ML tasks, from more complex tasks like Generative AI to classical problems like computer vision, classification and regression.
The SageMaker AI suite of services spans a wide range of applications. Some of the most notable features include:
Bedrock's use cases are Generative AI tasks, and it is aimed at users who want to move quickly in the AI realm with advanced capabilities, without worrying about infrastructure or writing lots of code for model build and deployment. You can use Bedrock to perform a multitude of GenAI tasks like code generation, chatbots, image generation and text generation. Bedrock also allows fine-tuning of the FMs it provides.
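As a rough sketch of how little code a Bedrock GenAI task needs, the snippet below builds a request for an Anthropic Claude model and invokes it through the Bedrock runtime API. The model ID is only an example; availability varies by account and region, and the call itself requires AWS credentials with Bedrock access.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body for an Anthropic Claude messages request on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str, model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    # Requires AWS credentials and Bedrock model access in the target region.
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id, body=build_claude_request(prompt))
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

Note there is no infrastructure to provision here: the serverless API handles model hosting, and you pay per token.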
With SageMaker AI, on the other hand, you can implement and develop solutions for GenAI and a wide range of other ML use cases, with more flexibility through control over infrastructure and the code for model build, training, deployment and inference. Some examples of ML tasks that SageMaker AI handles are computer vision, anomaly detection (e.g. fraud detection), feature reduction (e.g. risk assessment), forecasting (e.g. market projection), classification, regression and natural language processing.
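To make the contrast concrete, here is a minimal sketch of the request a SageMaker training job takes, built as a plain dict for boto3's `create_training_job`. All names, ARNs and S3 URIs are placeholders; notice how much infrastructure (instance type, volumes, runtime limits) you control compared to a Bedrock call.

```python
def training_job_request(job_name: str, image_uri: str, role_arn: str,
                         train_s3: str, output_s3: str) -> dict:
    """Assemble a SageMaker CreateTrainingJob request with placeholder values."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,        # built-in or custom training container
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                   # IAM role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": train_s3,
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {                    # you pick and pay for the instances
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 30,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# import boto3
# boto3.client("sagemaker").create_training_job(
#     **training_job_request("my-job", "<image-uri>", "<role-arn>",
#                            "s3://my-bucket/train/", "s3://my-bucket/output/"))
```

The extra knobs are the flexibility the blog describes, and they are also the maintenance burden.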
The major difference between Bedrock and SageMaker AI lies in the complexity of the development cycle. We can summarize the complexity of both as:
To sum this major difference up, the following diagram can be a good starting point to decide which service to choose for your use-case.
Beyond complexity and ML use case, there are other differences that may impact your decision. Some of them are related to how cost is calculated, how the environment is set up, how privacy is implemented and the learning curve needed for working with each service.
Regarding setup, in a nutshell, Bedrock is easier to set up than SageMaker AI.
Being fully managed by AWS, Bedrock requires less effort than SageMaker AI: users and developers select one of the provided pre-trained models, apply customizations if applicable and hit the ground running.
SageMaker AI, by contrast, requires a bigger setup effort because of its networking and privacy configuration, environment isolation and larger feature set. To use the service, users also need to be more hands-on in terms of customization and code development, so there is a steeper learning curve compared to Bedrock.
Regarding privacy concerns, the main difference lies again in the customization capabilities of each service, even though both services provide robust security features.
SageMaker AI provides more control over privacy and security. Users can define and create their own VPC with the needed configurations for internet access, encrypt data at rest and in transit, and manage data and service access through IAM roles.
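As an illustration of that control, these are some of the privacy-related fields SageMaker accepts on a training job request. The subnet, security group and KMS key IDs are placeholders; this is a sketch of the options, not a complete job definition.

```python
def privacy_settings(subnet_ids: list, security_group_ids: list,
                     kms_key_id: str) -> dict:
    """Privacy/security fields that can be merged into a CreateTrainingJob request."""
    return {
        "VpcConfig": {                        # run training inside your own VPC
            "Subnets": subnet_ids,
            "SecurityGroupIds": security_group_ids,
        },
        "OutputDataConfig": {
            "S3OutputPath": "s3://my-bucket/output/",   # placeholder bucket
            "KmsKeyId": kms_key_id,           # encrypt model artifacts at rest
        },
        "EnableInterContainerTrafficEncryption": True,  # encrypt in transit
        "EnableNetworkIsolation": True,       # block outbound internet from containers
    }
```

With Bedrock, none of these knobs exist: the equivalent guarantees are made by the service itself rather than configured by you.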
Bedrock, on the other hand, processes data within the AWS environment. Bedrock ensures that user data does not leave the user's VPC and that it is encrypted, and it guarantees that user data is not used to train the underlying foundation models. Another notable feature: for customization jobs (model fine-tuning), Bedrock makes a private copy of the foundation model being fine-tuned, so proprietary data is not shared with model providers.
In the end, the choice between the two services, when it comes to privacy and security, again comes down to how much control your solution requires.
Regarding customization, as we have been discussing throughout this blog, SageMaker AI offers more flexibility when compared to Bedrock.
In Bedrock, users can customize the provided FMs with their own data, creating fine-tuned models and using them through its API.
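A hedged sketch of what that customization looks like programmatically: the dict below mirrors the request shape of boto3's `create_model_customization_job` for Bedrock fine-tuning. All names, ARNs and S3 URIs are placeholders, and the single hyperparameter is just an example.

```python
def customization_job_request(job_name: str, custom_model_name: str,
                              role_arn: str, base_model_id: str,
                              train_s3: str, output_s3: str) -> dict:
    """Assemble a Bedrock model-customization (fine-tuning) request."""
    return {
        "jobName": job_name,
        "customModelName": custom_model_name,
        "roleArn": role_arn,                    # IAM role Bedrock assumes
        "baseModelIdentifier": base_model_id,   # FM that Bedrock privately copies
        "trainingDataConfig": {"s3Uri": train_s3},
        "outputDataConfig": {"s3Uri": output_s3},
        "hyperParameters": {"epochCount": "1"}, # example hyperparameter only
    }

# import boto3
# boto3.client("bedrock").create_model_customization_job(
#     **customization_job_request("my-ft-job", "my-custom-model", "<role-arn>",
#                                 "<base-model-id>", "s3://my-bucket/train.jsonl",
#                                 "s3://my-bucket/ft-output/"))
```

Note that you point the job at training data and an output location; the infrastructure that runs the fine-tuning stays invisible.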
SageMaker AI offers more customization. The main one is the ability to perform ML tasks other than GenAI. Also, in SageMaker AI users can incorporate their own algorithms, third-party algorithms and open-source LLMs. Users have full control over the code used for processing, training, evaluation, deployment and prediction.
Bedrock's pricing depends on the modality, provider, and model. Pricing models are separated into:
One example of On-Demand pricing is the cost of running Anthropic's Claude base models. The table below depicts the cost of both available models based on input and output tokens.
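To illustrate how per-token On-Demand pricing accrues, here is a small cost calculation. The rates used in the example call are hypothetical placeholders, not current AWS prices; check the Bedrock pricing page for real figures.

```python
def on_demand_cost(input_tokens: int, output_tokens: int,
                   price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Cost in USD for a request billed per 1,000 input and output tokens."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# e.g. 2,000 input tokens and 1,000 output tokens at hypothetical rates
# of $0.003 / 1K input and $0.015 / 1K output:
# on_demand_cost(2000, 1000, 0.003, 0.015)  # 0.006 + 0.015 = 0.021 USD
```

Output tokens are typically priced higher than input tokens, so verbose responses dominate the bill.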
Note: for Agents for Bedrock and Knowledge Bases for Bedrock, you are only charged for the models and vector databases you use, when applicable.
SageMaker AI pricing, on the other hand, is also based on usage, but with more complexity. Notebook instances are charged for the time they are up, and jobs like training, processing and inference are charged for the time the selected instance was up and running. Idle notebook instances are also charged, but you can avoid those charges by leveraging a Lifecycle Configuration with a script that shuts down notebook instances after a user-defined idle period. Model endpoints are likewise charged based on the time they are available and their usage.
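As a sketch of the Lifecycle Configuration approach, the snippet below registers an on-start script via boto3's `create_notebook_instance_lifecycle_config` (scripts are passed base64-encoded). The shell script body here is only a placeholder comment; AWS publishes full auto-stop-idle samples you would use in practice.

```python
import base64

# Placeholder on-start script; a real one would schedule an idle check that
# calls `aws sagemaker stop-notebook-instance` when no kernels are active.
ON_START_SCRIPT = """#!/bin/bash
# placeholder: install and cron an idle-shutdown check here
"""

def lifecycle_config_request(name: str) -> dict:
    """Assemble a create_notebook_instance_lifecycle_config request."""
    return {
        "NotebookInstanceLifecycleConfigName": name,
        "OnStart": [{"Content": base64.b64encode(ON_START_SCRIPT.encode()).decode()}],
    }

# import boto3
# boto3.client("sagemaker").create_notebook_instance_lifecycle_config(
#     **lifecycle_config_request("auto-stop-idle"))
```

Attaching this configuration to a notebook instance is what trims the idle-time charges described above.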
Because SageMaker AI is a more robust suite of services with instances and infrastructure attached to its features, the cost breakdown is more complicated. As mentioned, there are costs associated with different features like training jobs, endpoints, notebook instances and more. AWS documents this well on the SageMaker AI pricing page, with examples, instance types and deeper details around the cost breakdown.
To choose the right service for your own use-case, the following questions can serve as guidance:
If you're looking for help strategizing how to operationalize your GenAI initiatives, Caylent can help. Our Generative AI Proof Of Value Caylent Catalyst can help you build an AI roadmap for your organization and demonstrate how generative AI can positively impact you. If you have a specific GenAI vision, we can also tailor an engagement exactly to your requirements. Get in touch with our team to explore opportunities to innovate with AI.
Gustavo Gialluca is a Senior Machine Learning Architect with 6 years of experience delivering end-to-end ML solutions, from Data Science to ML Engineering. He has worked across various industries, including energy, finance, and academia, and holds a degree in Electrical Engineering. Currently completing his Master’s in Electrical Engineering with a focus on Data Science, Gustavo is also AWS ML Specialty certified. Passionate about driving business value through scalable ML, he thrives on helping others, fostering positive work environments, and staying at the forefront of ML and cloud innovations.