
"I presented at DevelopersIO 2025 Osaka with the title 'Let's Try Amazon Bedrock AgentCore! ~Explaining the Key Points of Various Features~'! #devio2025"
Hello, I'm Jinno from the Consulting Department!
I presented at DevelopersIO 2025 Osaka held on Wednesday, September 3, 2025, with the title "Let's Try Amazon Bedrock AgentCore! ~Explaining the Key Points of Various Functions~"!
In this presentation, I introduced the attractive features of Amazon Bedrock AgentCore, which was released as a public preview in July 2025, based on my hands-on experience.
Thankfully, quite a few people attended, and despite my nerves, I hope I was able to convey at least a little of AgentCore's appeal!
## Presentation Materials
Due to my enthusiasm, I ended up creating 81 slides for a 20-minute presentation.
## Key Points of the Presentation
### What is Amazon Bedrock AgentCore?
Amazon Bedrock AgentCore is a managed service optimized for deploying and operating AI agents.
It provides a variety of managed functions, which break down as follows:
- Runtime
  - Hosting function
- Identity
  - Authentication function
- Gateway
  - Function to turn external processing into Tools
- Memory
  - Memory function
- Built-in Tools
  - Code Interpreter: Code execution environment
  - Browser: Browser execution environment
That's a lot of features...!! Here's roughly how these functions can be integrated with each other:
From here, I'll go through each service one by one.
### Runtime
Runtime is a managed service for hosting AI agents.
Since it's a hosting environment, you can freely choose the agent framework and LLM as shown below.
Not being vendor-locked is a nice point, isn't it?
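To give a concrete image of what gets hosted, here's a minimal sketch of a Runtime entrypoint. It assumes the `bedrock_agentcore` SDK's `BedrockAgentCoreApp` together with the Strands Agents framework, but any framework or LLM could be plugged in:

```python
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent

app = BedrockAgentCoreApp()  # small HTTP app that Runtime invokes
agent = Agent()              # swap in any framework / LLM you like

@app.entrypoint
def invoke(payload):
    # Runtime passes the JSON payload from `agentcore invoke` straight through
    user_message = payload.get("prompt", "Hello")
    result = agent(user_message)
    return {"result": result.message}

if __name__ == "__main__":
    app.run()
```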
#### Deployment
Running the `agentcore configure` command walks you through the IAM and ECR settings, with an option to create them automatically. It's nice and simple.
After running `agentcore configure`, you can deploy with `agentcore launch`. Easy!
While it's simple, let's also look at the deployment process.
The `agentcore configure` command creates the files used to build a container image; the image is pushed to ECR, and the AgentCore Runtime then pulls it.
Invocation is also simple: you can call the created agent with `agentcore invoke`.
When invoked, the result is returned as shown below. The AI agent's message is in the `response` field!
```
agentcore invoke '{"prompt": "Hello"}'
Payload:
{
  "prompt": "Hello"
}
Invoking BedrockAgentCore agent 'agent' via cloud endpoint
Session ID: c4dbea7b-7c51-471c-b631-330f991d5893
Response:
{
  "ResponseMetadata": {
    "RequestId": "c796b751-caf4-4d44-a450-dbde138546dd",
    "HTTPStatusCode": 200,
    "HTTPHeaders": {
      "date": "Fri, 05 Sep 2025 22:47:24 GMT",
      "content-type": "application/json",
      "transfer-encoding": "chunked",
      "connection": "keep-alive",
      "x-amzn-requestid": "c796b751-caf4-4d44-a450-dbde138546dd",
      "baggage": "Self=1-68bb6874-5e56d29a3edd9d1103a57630,session.id=c4dbea7b-7c51-471c-b631-330f991d5893",
      "x-amzn-bedrock-agentcore-runtime-session-id": "c4dbea7b-7c51-471c-b631-330f991d5893",
      "x-amzn-trace-id": "Root=1-68bb6874-56784cc51d5370ab108fd780;Self=1-68bb6874-5e56d29a3edd9d1103a57630"
    },
    "RetryAttempts": 0
  },
  "runtimeSessionId": "c4dbea7b-7c51-471c-b631-330f991d5893",
  "traceId": "Root=1-68bb6874-56784cc51d5370ab108fd780;Self=1-68bb6874-5e56d29a3edd9d1103a57630",
  "baggage": "Self=1-68bb6874-5e56d29a3edd9d1103a57630,session.id=c4dbea7b-7c51-471c-b631-330f991d5893",
  "contentType": "application/json",
  "statusCode": 200,
  "response": [
    "b'{\"role\": \"assistant\", \"content\": [{\"text\": \"Hi there! How are you doing today? Is there anything I can help you with?\"}]}'"
  ]
}
```

#### Blog
There's a blog post where this was actually tried out, so please refer to it as well!
https://dev.classmethod.jp/articles/bedrock-agentcore-openai-gpt41/
### Identity
This is a managed service that provides authentication functionality for AI agents. There are two types: Inbound Auth and Outbound Auth.

Inbound Auth is an authentication function for the AI agent itself. It can implement authentication by integrating with IdPs such as Cognito.
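As an illustration, the JWT authorizer settings used for Inbound Auth look roughly like this. This is just a sketch assuming a Cognito user pool; the pool ID and client ID are placeholders:

```python
# Hypothetical values: replace with your own Cognito user pool and app client
authorizer_config = {
    "customJWTAuthorizer": {
        "discoveryUrl": "https://cognito-idp.ap-northeast-1.amazonaws.com/<USER_POOL_ID>/.well-known/openid-configuration",
        "allowedClients": ["<APP_CLIENT_ID>"],
    }
}
# This configuration is supplied when creating/configuring the Runtime,
# so that callers must present a valid JWT issued by the IdP.
```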

Outbound Auth is an authentication function for AI agents to call external services. It manages API keys and OAuth authentication information in a managed way, allowing them to be automatically obtained.
For example, an API key can be retrieved through the flow below, and the retrieval logic can be added just by applying a decorator, which is nice.

```python
from bedrock_agentcore.identity.auth import requires_api_key

@requires_api_key(provider_name="azure-openai-key")
async def need_api_key(*, api_key: str):
    # The decorator resolves the key from the credential provider and injects it as api_key
    print("Received an API key")
```
#### Blog
For more details, I have summarized them in the blog below, so please refer to this as well!
https://dev.classmethod.jp/articles/amazon-bedrock-agentcore-identity-cognito-azure-openai/

### Memory
Memory is a managed service for giving AI agents "memory." There are two types of memory: Short-term Memory and Long-term Memory.
Short-term Memory is a mechanism that maintains conversation history during a session. It's nice to be able to keep conversation history in a managed service, isn't it? The logic for storing and retrieving memories is not difficult to implement.
The data structure is organized with built-in attributes such as `actor_id` for each user and `session_id` for each session, so there's no need to think about anything complicated. The conversation history tabs in ChatGPT or Claude are an easy way to picture it.
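To give an idea of how simple the store/retrieve flow is, here's a rough sketch assuming the `MemoryClient` helper from the `bedrock_agentcore` SDK and an already-created Memory resource (the IDs are placeholders and the method names are worth double-checking against the docs):

```python
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-west-2")

# Store one conversation turn as an event (Short-term Memory)
client.create_event(
    memory_id="<MEMORY_ID>",      # placeholder: ID of the created Memory resource
    actor_id="user-123",          # one per user
    session_id="session-456",     # one per session
    messages=[("Hello", "USER"), ("Hi! How can I help?", "ASSISTANT")],
)

# Fetch the most recent turns for the same user and session
recent_turns = client.get_last_k_turns(
    memory_id="<MEMORY_ID>",
    actor_id="user-123",
    session_id="session-456",
    k=5,
)
print(recent_turns)
```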
On the other hand, Long-term Memory is a function that automatically extracts and integrates important information from Short-term Memory.
The extracted data is stored as vectors, enabling semantic search to extract highly relevant memories.
You might wonder how the transition from Short-term Memory to Long-term Memory is configured, but this is done through extraction settings called Strategies. There are three built-in Strategies, and you choose which one to use depending on your use case.
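Once a Strategy has extracted long-term memories, retrieving them is a semantic search over a namespace. Again a sketch assuming the SDK's `MemoryClient`; the namespace here is just an example tied to how the Strategy was configured:

```python
from bedrock_agentcore.memory import MemoryClient

client = MemoryClient(region_name="us-west-2")

# Semantic search over Long-term Memory extracted by a Strategy.
# "/users/user-123" is a hypothetical namespace; it depends on the Strategy settings.
memories = client.retrieve_memories(
    memory_id="<MEMORY_ID>",
    namespace="/users/user-123",
    query="What are this user's preferences?",
    top_k=3,
)
for memory in memories:
    print(memory)
```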
#### Blog
For more details, I've also summarized this in the blog below, so please refer to it!
https://dev.classmethod.jp/articles/amazon-bedrock-agentcore-memory-sample-agent/

### Gateway
Gateway is a service that converts APIs, Lambda functions, and various services into MCP (Model Context Protocol) compatible tools, making them easy to call from AI agents.
This seems useful in cases where AI agents want to treat APIs or Lambda functions as Tools.
In this article, we'll examine how to turn a Lambda function into a Tool.
Gateway prepares Lambda functions to be callable from agents as MCP protocol-compatible tools. Specifically, this involves granting Gateway permission to call Lambda functions and registering the mapping between Lambda functions and Tools using Tool Schema.
```python
# Tool schema definition
tool_schemas = [
    {
        "name": "get_order_tool",
        "description": "Retrieves order information",
        "inputSchema": {
            "type": "object",
            "properties": {
                "orderId": {
                    "type": "string",
                    "description": "Order ID"
                }
            },
            "required": ["orderId"]
        }
    },
    {
        "name": "update_order_tool",
        "description": "Updates order information",
        "inputSchema": {
            "type": "object",
            "properties": {
                "orderId": {
                    "type": "string",
                    "description": "Order ID"
                }
            },
            "required": ["orderId"]
        }
    }
]

# Target configuration
target_config = {
    "mcp": {
        "lambda": {
            "lambdaArn": lambda_arn,
            "toolSchema": {
                "inlinePayload": tool_schemas
            }
        }
    }
}

# Credential provider (Gateway IAM role is used for Lambda invocation)
credential_config = [
    {
        "credentialProviderType": "GATEWAY_IAM_ROLE"
    }
]
```
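These dictionaries are then used to register the Lambda target on the Gateway. Here's a hedged sketch using the boto3 control-plane client; the parameter names follow the preview API as I remember it, so please double-check them against the documentation:

```python
import boto3

# Control-plane client that manages AgentCore resources such as Gateways
control = boto3.client("bedrock-agentcore-control", region_name="us-west-2")

response = control.create_gateway_target(
    gatewayIdentifier="<GATEWAY_ID>",   # placeholder: ID of an existing Gateway
    name="order-lambda-target",
    description="Order management Lambda exposed as MCP tools",
    targetConfiguration=target_config,                    # defined above
    credentialProviderConfigurations=credential_config,   # defined above
)
print(response["targetId"])
```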
After these preparations, Lambda functions can be used as Tools as shown below.
Agents can also use the results from Tool-converted Lambda functions to respond to users.
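To give an idea of what this looks like from the agent side, here's a minimal sketch that connects to the Gateway as an MCP server. It assumes a Strands agent and that the Gateway URL and an access token from Inbound Auth are already at hand (both values below are placeholders):

```python
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

gateway_url = "https://<GATEWAY_ID>.gateway.bedrock-agentcore.us-west-2.amazonaws.com/mcp"  # placeholder
access_token = "<TOKEN_FROM_INBOUND_AUTH>"  # placeholder

# The Gateway speaks MCP over streamable HTTP, so a standard MCP client can connect
mcp_client = MCPClient(lambda: streamablehttp_client(
    gateway_url,
    headers={"Authorization": f"Bearer {access_token}"},
))

with mcp_client:
    tools = mcp_client.list_tools_sync()  # get_order_tool / update_order_tool appear here
    agent = Agent(tools=tools)
    agent("Tell me the status of order ORD-0001")
```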
#### Blog
For more details, please also refer to the blog post I have summarized below!
### Observability
Observability is a managed service that enables visualization of various metrics, traces, and logs for AI agents.
There is some preparatory work. First, Transaction Search, which can visualize the agent's execution flow, becomes available after being enabled once per account. Additionally, `aws-opentelemetry-distro` needs to be included in the dependencies. However, when you deploy with AgentCore's starter toolkit, it is automatically added to the Dockerfile and enabled, which is a nice touch.
#### Integrated Dashboard
CloudWatch provides an integrated dashboard called GenAI Observability, which allows you to see the agent's behavior at a glance.
It visualizes session counts, error rates, token usage, etc., which seems useful for monitoring performance and early problem detection.
I found it valuable that you can check the AI agent's activities on a timeline. Being able to visually understand which tools were used when and where bottlenecks occur is very convenient for debugging.
Terms like traces, spans, and sessions have appeared, and their relationship is as follows:
#### Blog
For more details, please also refer to the blog post I have summarized below!
https://dev.classmethod.jp/articles/amazon-bedrock-agentcore-observability-genai-observability/

### Built-in Tools
Built-in Tools provides two useful features: Code Interpreter and Browser.
Since both are provided in a managed way, it's great that you can use them without worrying about security or environment setup.
#### Code Interpreter
Code Interpreter is a feature that executes code created by generative AI in a secure external environment.
The code is executed in a completely isolated sandbox environment, allowing for safe execution without affecting the agent's main environment. It supports Python, JavaScript, TypeScript, and data science libraries such as pandas, numpy, matplotlib, etc.
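As a rough sketch of what running code in the sandbox looks like (assuming the `code_session` helper from the `bedrock_agentcore` SDK, as in the samples I tried; the argument names are worth double-checking):

```python
from bedrock_agentcore.tools.code_interpreter_client import code_session

# Start a sandboxed Code Interpreter session and execute a snippet inside it
with code_session("us-west-2") as code_client:
    response = code_client.invoke("executeCode", {
        "language": "python",
        "code": "import pandas as pd; print(pd.__version__)",
    })
    # Results come back as a stream of events
    for event in response["stream"]:
        print(event["result"])
```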
#### Blog
For more details, please refer to the blog post linked below!
#### Browser
Browser is a service that provides an execution environment for generative AI to operate a browser.
Utilizing Playwright and Browser-use, it can operate an actual web browser. This seems useful for information gathering and automating screen operations.
However, there's a caution that using search engines might trigger CAPTCHA. The official documentation recommends using MCP tools (like Tavily) rather than browsers for general searches.
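Here's a sketch of driving the managed browser from Playwright. The helper and method names below are how I remember the SDK samples, so treat them as assumptions and check the documentation before relying on them:

```python
from playwright.sync_api import sync_playwright
from bedrock_agentcore.tools.browser_client import browser_session

# Start a managed browser session and connect Playwright to it over CDP
with browser_session("us-west-2") as browser_client:
    # Assumption: helper that returns the CDP websocket endpoint and auth headers
    ws_url, headers = browser_client.generate_ws_headers()
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(ws_url, headers=headers)
        page = browser.contexts[0].pages[0]
        page.goto("https://dev.classmethod.jp/")
        print(page.title())
```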
#### Blog
For more details, please refer to the blog post linked below!
### Summary
AgentCore is a managed service with all the necessary features for AI agent development. Let's build AI agents by combining the functions you need!!
## Conclusion
That was the presentation blog on "Let's try Amazon Bedrock AgentCore! ~Explaining the key points of various features~"!
Since it can be used for free during the preview period, it's definitely a service I'd like everyone to try. I plan to explore it more deeply in the future and share practical usage and tips on my blog!
I'd be happy if this sparked your interest enough to try it out! Let's create useful AI agents together!!
Thank you for reading until the end!