
Amazon Bedrock AgentCore: Redefining Agent Infrastructure as Undifferentiated Heavy Lifting

Generative AI & LLMOps

Explore how Amazon Bedrock AgentCore and the Agent Marketplace are industrializing, standardizing, and commoditizing the underlying agent infrastructure, helping organizations eliminate the operational toil and risk that have slowed the adoption of agentic systems.

The AI community has seen a year of innovation, experimentation, and, for many, headaches as teams cobble together their own agentic systems from open-source parts and cloud services. AWS’s recent debut of Amazon Bedrock AgentCore and the Agent Marketplace is not so much a bold leap in agent technology as it is a decisive move to industrialize, standardize, and commoditize the underlying agent infrastructure. This is AWS’s classic strategy: transform the necessary but undifferentiated work (what Werner Vogels famously called “undifferentiated heavy lifting”) into streamlined, reliable building blocks so teams can concentrate on delivering business value.

The Challenge: Agents Are Easy, Production Is Not

Technical teams have long understood that building the core logic of an agent is only the beginning. Real-world deployment (persisting state, ensuring security, managing sessions, wiring APIs, and supporting observability) can absorb the bulk of engineering resources:

  • State management often devolves into quick fixes around databases or object stores.
  • Authentication and secrets management become repetitive, brittle work across projects.
  • Observability and debugging require custom hooks, logs, and tracing that rarely benefit from reuse.

The result is a proliferation of homegrown, fragile stacks in which development cycles are spent re-solving operational hurdles: undifferentiated work that adds risk but rarely delivers competitive advantage.

AgentCore addresses this friction directly: by standardizing and operationalizing the agentic “plumbing,” AWS enables development teams to redirect their focus onto the distinctive challenges and opportunities unique to their business context.

Abstraction Over Invention

This approach echoes previous AWS innovations, such as Amazon SageMaker AI for ML operations or AWS Elastic Beanstalk for web hosting, where the main advances are not in technology, but in how technology is delivered and consumed:



Challenge              | Traditional Approach                     | AWS Abstraction
Web hosting            | Amazon EC2, custom configs               | AWS Elastic Beanstalk, AWS Amplify
ML model deployment    | Amazon S3, Amazon EC2, manual pipelines  | Amazon SageMaker
Agentic infrastructure | DIY frameworks + cloud infrastructure    | Amazon Bedrock AgentCore


Moving away from bespoke infrastructure allows teams to iterate faster and with more confidence, knowing the foundational work is secure, scalable, and observable by default.

What Amazon Bedrock AgentCore Delivers

AgentCore does not offer radically different agent primitives. Instead, it abstracts the recurring requirements into modular, managed services:

  • Runtime: Handles agent execution with built-in session and memory management, supporting complex, long-running agent tasks (up to 8 hours) without bespoke cluster management.
  • Observability: End-to-end tracing and live metrics (via Amazon CloudWatch and/or OpenTelemetry). Developers gain production-grade debugging and insights without the overhead of custom solutions.
  • Memory: Provides short and long-term state for agent context, tightly coupled to session lifecycles, and abstracted from the details of data store integration.
  • Identity and Access Management: Integrates natively with enterprise SSO and OAuth providers, ensuring secure, audited handling of credentials and permissions without redundant engineering effort.
  • API Gateway for Agents: Declaratively exposes APIs and services to agents using protocols like MCP (Model Context Protocol), reducing the friction of creating and managing agent tools.
  • Secure Browser & Code Tooling: Built-in browser automation and code execution, safely sandboxed, so teams avoid the hazards and complexity of DIY sandbox environments.

None of these individual services are, by themselves, new. The power of AgentCore lies in how it unifies and productizes them, shifting effort away from “how do we make this work?” and back to “how do we deliver value?”

AgentCore Details

AgentCore Runtime

The Amazon Bedrock AgentCore SDK provides a practical CLI tool that does most of the heavy lifting of setting up a cloud environment for agents. The CLI tool can automatically set up:

  • Docker: creates a Dockerfile and an Amazon ECR repository.
  • AWS IAM: creates an execution role for the agent with the required permissions.
  • Amazon CloudWatch: full observability with metrics, logs, and traces for your agent and your Bedrock prompt invocations.

You can also override those settings with your preferences, but the defaults are sensible. The settings are saved in a local state file (.bedrock_agentcore.yaml), and AgentCore will automatically update it when any state or configuration changes.

The following command sets up the Runtime for a Python project:

# Set up the Agent Runtime (one-time only)
agentcore configure --entrypoint <main-python-file.py> --region <AWS region>
# Answer the questions to confirm the settings

All the resources will be created on the first deploy:

# Perform the first deploy
agentcore launch

The whole process takes only a few minutes. The agent is now live and meets Amazon's production-quality standards. The same command can be used to redeploy the application. The CLI tool also provides features to run the agent locally for a faster code-build-test turnaround, as sketched below.
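For local iteration, the same CLI can build and run the agent container on your machine; it serves the same HTTP contract that AgentCore invokes in the cloud. A minimal sketch (the --local flag and the localhost port follow the starter toolkit defaults and are assumptions here):

# Build and run the agent container locally (requires Docker)
agentcore launch --local

# The runtime contract exposes POST /invocations on port 8080
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Tell in 10 words what you can do with the provided tools."}'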

Amazon Bedrock AgentCore is compatible with any framework, SDK, and model because it simply forwards the HTTP request to the agent’s code. One example is Strands Agents, AWS’s open-source SDK for building production-ready, autonomous AI agents with a flexible, model-driven approach that integrates multiple language models and tools in just a few lines of code. To use AgentCore with a Strands agent, see below:

from strands import Agent
from strands_tools import calculator
from bedrock_agentcore import BedrockAgentCoreApp

# Initialize the Bedrock AgentCore application
app = BedrockAgentCoreApp()

helpful_agent = Agent(
    system_prompt="You are a helpful...",
    tools=[calculator],
)

# This annotation defines the function that will handle the requests
# Any SDK, framework, and the model can be used
@app.entrypoint
def invoke(payload):
    """Function that processes HTTP requests for agent invocations"""

    prompt = payload.get(
        "prompt", "Tell in 10 words what you can do with the provided tools."
    )
    result = helpful_agent(prompt)
    return {"result": result.message}

# Starting the Bedrock AgentCore application 
if __name__ == "__main__":
    app.run()

Most of the code is standard Strands. Very few changes were necessary to make it run in Amazon Bedrock AgentCore, and most of the code could be reused to host the agent on other platforms. AgentCore also natively supports the MCP (Model Context Protocol) and A2A (Agent2Agent) protocols, in addition to HTTP invocations.

AgentCore Observability

Amazon Bedrock AgentCore is integrated with Amazon CloudWatch’s GenAI Observability feature (also in preview as of July 2025). The default dashboards provide visibility into latency, token counts, throttles, and errors, and they become available as soon as the first deployment is performed.
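For agents deployed through the starter toolkit this instrumentation is wired up automatically; for a hand-rolled container, the usual pattern is to run the agent under the AWS Distro for OpenTelemetry (ADOT) auto-instrumentation. A hedged sketch (package and command names follow the ADOT documentation; treat the exact setup as an assumption):

# requirements.txt: add the ADOT auto-instrumentation package
aws-opentelemetry-distro

# Dockerfile: start the agent under OpenTelemetry auto-instrumentation so
# traces and metrics flow into CloudWatch GenAI Observability
CMD ["opentelemetry-instrument", "python", "main.py"]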

The trace maps can help troubleshoot latency issues and communication errors.

The Amazon Bedrock AgentCore tab on GenAI Observability provides detailed troubleshooting information for each invocation. It’s possible to analyze how much time was spent on each step. In our example, most of the time is spent in three tool invocations. This information can guide optimizations and help investigate errors with specific user prompts.

AgentCore Memory

Most GenAI applications must store information during processing, including chat histories and temporary caches of intermediate steps. Many GenAI applications on AWS use Amazon DynamoDB for those needs, and it is a great fit. But developers must design a data model that scales, handles schema evolution, and is indexed appropriately for performance and cost-effectiveness. And depending on how the data will be used, a vector store may also be required, with the data carefully synchronized between the two.

The Memory feature provides serverless storage of data for agents. It offers short-term memory (with automatic expiration) and long-term storage through a straightforward API. It is essentially a NoSQL database service, but one tuned to the specific needs of agents. The long-term storage natively supports summarization and semantic queries, and AgentCore provides sensible defaults while also supporting customization of the models and prompts for both.

The data in each AgentCore Memory resource is organized hierarchically by user-defined namespaces, agents, and sessions. It can be used for the simplest needs (history), for shared context on multi-agent flows, and even for personalization by storing user preferences and behavior data. The same functionality can be achieved with a combination of other AWS services, but the Memory feature is very straightforward and much faster to set up for production.
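Creating a Memory resource and writing conversation events to it takes only a few calls. The sketch below is a minimal example, assuming the MemoryClient from the AgentCore Python SDK (method and parameter names follow the SDK samples and may differ slightly); it sets up the client and resource used by the create_event call that follows:

from bedrock_agentcore.memory import MemoryClient

# Assumption: MemoryClient is the high-level memory client from the AgentCore SDK
client = MemoryClient(region_name="us-east-1")

# Create a Memory resource for the support agent; raw events expire after 7 days
memory = client.create_memory(
    name="OrderSupportMemory",
    description="Conversation history and long-term facts for the order-support agent",
    event_expiry_days=7,
)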

client.create_event(
    memory_id=memory.get("id"),         # The id returned by create_memory or list_memories
    actor_id="User84",                  # Identifier of the actor; could be an agent or an end user
    session_id="OrderSupportSession1",  # Unique id for a particular request/conversation
    messages=[
        ("Hi, I'm having trouble with my order #12345", "USER"),
        ("I'm sorry to hear that. Let me look up your order.", "ASSISTANT"),
        ("lookup_order(order_id='12345')", "TOOL"),
        ("I see your order was shipped 3 days ago. What specific issue are you experiencing?", "ASSISTANT"),
        ("Actually, before that - I also want to change my email address", "USER"),
        (
            "Of course! I can help with both. Let's start with updating your email. What's your new email?",
            "ASSISTANT",
        ),
        ("newemail@example.com", "USER"),
        ("update_customer_email(old='old@example.com', new='newemail@example.com')", "TOOL"),
        ("Email updated successfully! Now, about your order issue?", "ASSISTANT"),
        ("The package arrived damaged", "USER"),
    ],
)
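Long-term memories extracted by the configured strategies can later be recalled with a semantic query. Again a hedged sketch against the same assumed MemoryClient API; the namespace value is a placeholder that depends on how the memory strategies are configured:

# Recall long-term memories relevant to the current conversation
memories = client.retrieve_memories(
    memory_id=memory.get("id"),
    namespace="/users/User84",   # placeholder; namespaces are defined by the memory strategies
    query="What issues has this customer reported about order #12345?",
)
for item in memories:
    print(item)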

AgentCore Identity

The Identity feature provides a straightforward way to integrate the agent with standard SSO and OAuth2 services. It can be used to authenticate and validate permissions for your agent invocations (inbound), and also to authenticate your agent with other services (outbound). For example, if an agent needs to access the user’s files in services like Google Drive and Dropbox, developers can use the Identity feature to store the credentials and handle the authorization flow securely.

The AgentCore SDK provides annotations to specify the credentials that the functions require. It automatically handles the authorization flows and refresh of expired tokens. When additional authorization is required, it provides an authorization URL for the user. It can automatically cache the authorization tokens to reduce the number of times the user needs to authenticate or enforce authentication on critical tools.
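A hedged sketch of that decorator-based flow, assuming the requires_access_token decorator from the AgentCore SDK’s identity module; the provider name and scope are placeholders for a credential provider registered in AgentCore Identity:

from bedrock_agentcore.identity.auth import requires_access_token

@requires_access_token(
    provider_name="google-drive-provider",   # placeholder: an OAuth2 credential provider in AgentCore Identity
    scopes=["https://www.googleapis.com/auth/drive.readonly"],
    auth_flow="USER_FEDERATION",             # 3-legged flow on behalf of the end user
    on_auth_url=lambda url: print(f"Please authorize: {url}"),  # shown only when consent is still required
)
async def list_drive_files(*, access_token: str):
    # The SDK injects a valid token and handles caching and refresh of expired tokens
    print(f"Calling Google Drive with token {access_token[:8]}...")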

The Identity feature guarantees that user access tokens and API keys are stored and handled safely, and it gives agents that act on behalf of users on external systems and integrations both a good end-user authentication experience and a good developer experience. See the Amazon Bedrock AgentCore Developer Guide for more information.

AgentCore Gateway

The AgentCore Gateway feature provides a straightforward way to connect GenAI agents to REST services and third-party tools. It converts REST APIs, AWS Lambda functions, and existing services into MCP (Model Context Protocol) compatible tools, with strong authentication and authorization with OAuth2 tokens or API Keys. It also provides out-of-the-box integration with multiple services, including Slack, Jira, Zoom, and Salesforce.

It can be used to quickly develop agents that interact with those services on behalf of users, accelerating time-to-market for deeply integrated agents, and guaranteeing that the integrations and communications will be performed securely. See the Amazon Bedrock AgentCore Developer Guide for more information.
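From the agent’s point of view, a Gateway is simply an MCP server reachable over HTTP with an OAuth2 bearer token. A hedged sketch of consuming one from a Strands agent (the gateway URL and token are placeholders, and the MCP client classes come from the Strands and MCP SDKs):

from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

gateway_url = "https://<gateway-id>.gateway.bedrock-agentcore.<region>.amazonaws.com/mcp"  # placeholder
access_token = "<OAuth2 access token issued by your identity provider>"                    # placeholder

# Connect to the Gateway as a streamable-HTTP MCP server
mcp_client = MCPClient(lambda: streamablehttp_client(
    gateway_url,
    headers={"Authorization": f"Bearer {access_token}"},
))

with mcp_client:
    tools = mcp_client.list_tools_sync()   # tools backed by Lambda functions, REST APIs, or SaaS targets
    agent = Agent(tools=tools)
    agent("Create a Jira ticket for the damaged order #12345")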

AgentCore Built-in Tools

AgentCore provides a set of specialized tools for common scenarios, alleviating the need to implement those tools by hand and making sure they run in secure, isolated environments. As of July 2025, two tools are provided: the AgentCore Code Interpreter and the AgentCore Browser.

The AgentCore Code Interpreter can be used to run custom code in isolated environments, without putting the agent’s own environment at risk. It can execute and debug code generated by agents or provided by the user, as well as perform complex calculations, including data science workloads, as sketched below.
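A hedged sketch of running a snippet in the sandbox, assuming the CodeInterpreter client and the executeCode action shown in the AgentCore SDK samples (the class path and response structure are assumptions and may differ slightly):

from bedrock_agentcore.tools.code_interpreter_client import CodeInterpreter

code_client = CodeInterpreter("us-west-2")
code_client.start()   # provisions an isolated sandbox session

# Execute a small, untrusted snippet inside the sandbox instead of the agent's own process
response = code_client.invoke("executeCode", {
    "code": "import statistics; print(statistics.mean([3, 5, 8]))",
    "language": "python",
})
for event in response["stream"]:
    print(event)      # execution output events

code_client.stop()    # releases the sandbox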

The AgentCore Browser lets agents safely interact with websites and web applications. It isolates the browsing environment from the agent execution by running the browser in a safe sandbox environment. It provides advanced features like visual analysis with screenshots, human intervention with live interactive view capabilities, and recording browser interactions for auditing purposes. See the Getting Started with AgentCore Browser Guide and the Amazon Bedrock AgentCore Code Interpreter Guide for more information.

The Agent Marketplace: Accelerating “Buy Over Build”

Alongside AgentCore, AWS unveiled the Agent Marketplace: a repository of over 800 prebuilt agents and tools ready to deploy. Agents can be searched and filtered by use case, such as contract analysis, compliance automation, or developer productivity, and deployed directly into the AgentCore environment.

For enterprises, this shifts the calculus: if a capability already exists, it is increasingly cost-effective to purchase, configure, and extend it rather than reinvent from scratch. For vendors and ISVs, the Marketplace offers streamlined distribution to a broad customer base, benefiting from AWS’s compliance and billing frameworks.

AgentCore Pricing

Amazon Bedrock AgentCore uses a modular, consumption-based pricing model with no upfront commitments or minimum fees. You pay only for the resources and features you actively use, so costs scale directly with agent activity and adoption. This applies across its main subsystems: Runtime, Tools (Browser, Code Interpreter), Gateway, Identity, Memory, and Observability. See the Amazon Bedrock AgentCore pricing page for the per-component details.

Practical Impact and Use Cases

Organizations already piloting AgentCore and the Marketplace have reported significant efficiency gains:

  • AWS’s internal teams have leveraged AgentCore to automate complex cloud migration workflows, reducing manual intervention and cycle time.
  • Partners like Anthropic and Salesforce distribute composite agents—combining generative AI with workflow automation—without requiring end users to assemble the operational stack.
  • Developer tool vendors can now integrate IDE-native agents for coding and testing without owning the execution or observability infrastructure.

What unites these stories is the radical reduction in non-differentiated engineering. Teams can focus on defining agent behaviors and business integrations while AWS assumes responsibility for operational uniformity and security.

Current Limitations and Forward Look

It is important to acknowledge current limitations:

  • As of July 2025, AgentCore remains in preview, is available in a limited set of regions, and has a nascent ecosystem.
  • Broad protocol support (e.g., MCP, A2A) is still evolving across frameworks and available agent listings.
  • The Marketplace is rapidly expanding, but “build-over-buy” may remain necessary for specialized or proprietary use cases.
  • As of July 2025, VPC-only agents are not available.

Nonetheless, AWS’s expanding ecosystem signals that the balance is shifting: more agentic capabilities will become commodities, while differentiation will emerge through domain expertise and integration.

Conclusion: Focus on What Matters

Amazon Bedrock AgentCore and the Agent Marketplace fundamentally reframe the agent infrastructure problem. By turning the hard-won lessons of trailblazing teams into reusable services, AWS eliminates the operational toil and risk that have slowed the adoption of agentic systems in the enterprise.

The upshot is both simple and profound: invest your energy where it matters most—on agent logic, workflows, and user value—not on constructing and maintaining what is now standardized infrastructure.

AgentCore does not make agents “better” as much as it makes them “unremarkable” to operate at scale. And that, for most enterprise teams, is the breakthrough that actually matters.

Brian Tarbox

Brian is an AWS Community Hero and Alexa Champion, runs the Boston AWS User Group, and has ten US patents and a bunch of certifications. He's also part of the New Voices mentorship program, where Heroes teach traditionally underrepresented engineers how to give presentations. He is a private pilot, a rescue scuba diver, and got his Masters in Cognitive Psychology working with bottlenose dolphins.

André Luís Gobbi Sanches
