I tried the Amazon Bedrock AgentCore integration supported by Generative AI Use Cases JP (abbreviated as GenU)

2025.09.07

Introduction

Hello, I'm Kamino from the consulting department who loves the La Mu supermarket!

Recently, Amazon Bedrock AgentCore (hereafter AgentCore) integration has been supported in GenU v5.0.0!!

https://github.com/aws-samples/generative-ai-use-cases/releases/tag/v5.0.0

It's convenient to have a place to try things out when you've created an AI agent but are wondering what to do with the frontend. Also, it's useful when you've already implemented GenU and want to check your created AI agent in the GenU environment as well!!

I'd like to give it a try right away!

About GenU

Let me briefly introduce GenU again.
GenU is an application implementation equipped with a collection of business use cases for safely utilizing generative AI in business operations.
It's a convenient tool when you want to quickly use generative AI and consider how to apply it to your business.

https://github.com/aws-samples/generative-ai-use-cases/tree/main

Deployment is simple - just clone the above repository and run the deployment command to use it immediately. If you want to check more detailed information, I think it would be good to refer to blogs like the one below.

https://dev.classmethod.jp/articles/generative-ai-use-cases-jp-genu-installation-guide/

This time, I will clone the repository locally and run the deployment myself.

Prerequisites and Preparation

  • AWS CLI 2.28.8
  • Python 3.12.6
  • AWS account
    • Region to use: us-west-2
    • The models you want to use need to be enabled in advance.
  • GenU version used: v5.1.1
  • Docker version 27.5.1-rd, build 0c97515

The Agent We Will Create

We will write the required dependencies in requirements.txt and install them.

```text:requirements.txt
strands-agents
strands-agents-tools
bedrock-agentcore
bedrock-agentcore-starter-toolkit
```

```bash
pip install -r requirements.txt
```

We'll create a simple agent using Strands Agent.
This is a simple application that just responds with "The weather is sunny" when asked about the weather.
The filename will be agent.py.

```python:agent.py
import os
from strands import Agent, tool
from strands.models import BedrockModel
from bedrock_agentcore.runtime import BedrockAgentCoreApp

model_id = os.getenv("BEDROCK_MODEL_ID", "anthropic.claude-3-5-haiku-20241022-v1:0")
model = BedrockModel(model_id=model_id, params={"max_tokens": 4096, "temperature": 0.7}, region="us-west-2")

app = BedrockAgentCoreApp()

@tool
def get_weather(city: str) -> str:
    """Get the weather for a given city"""
    return f"The weather in {city} is sunny"

@app.entrypoint
async def entrypoint(payload):
    agent = Agent(model=model, tools=[get_weather])
    message = payload.get("prompt", "")
    stream_messages = agent.stream_async(message)
    async for message in stream_messages:
        if "event" in message:
            yield message

if __name__ == "__main__":
    app.run()
```
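Before deploying, the entrypoint can be smoke-tested locally: when run with `python agent.py`, BedrockAgentCoreApp serves HTTP on port 8080 with a POST /invocations endpoint (these runtime requirements also appear in the mcp.json shown later in this article). Here's a minimal sketch; the helper names are my own, not part of any SDK.

```python
# Local smoke test for agent.py. BedrockAgentCoreApp listens on port 8080 and
# exposes POST /invocations. Start the agent with `python agent.py` first.
# build_request_body / invoke_local are my own helpers, not library functions.
import json
from urllib import request

def build_request_body(prompt: str) -> bytes:
    """Encode the payload shape the entrypoint reads via payload.get('prompt')."""
    return json.dumps({"prompt": prompt}).encode("utf-8")

def invoke_local(prompt: str, url: str = "http://localhost:8080/invocations") -> str:
    req = request.Request(
        url,
        data=build_request_body(prompt),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```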

To match the response format expected by the GenU frontend,
we return the stream messages from Strands Agent as follows.
(I didn't realize this at first, and it took me some time to figure out why the responses weren't displaying on the screen...)

```python
async for message in stream_messages:
    if "event" in message:
        yield message
```
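As a plain-Python illustration of that filtering, here is a sketch using mocked dictionaries standing in for the actual Strands stream messages (the exact keys Strands yields besides "event" are an assumption here):

```python
# Minimal sketch of the filtering logic, with plain dicts mocking what
# agent.stream_async() yields. Only messages carrying an "event" key are in
# the shape the GenU frontend expects, so everything else is dropped.
def filter_events(stream_messages):
    """Keep only messages that contain an 'event' key."""
    return [m for m in stream_messages if "event" in m]

mocked_stream = [
    {"init_event_loop": True},  # lifecycle message: dropped
    {"event": {"contentBlockDelta": {"delta": {"text": "The weather"}}}},
    {"data": "The weather"},    # convenience field: dropped
    {"event": {"contentBlockDelta": {"delta": {"text": " is sunny"}}}},
]

events = filter_events(mocked_stream)
```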

After implementation is complete, we deploy it. First, we use the configure command to set up IAM and ECR settings. We'll be asked various questions, but we'll proceed with the default settings and automatic creation.

```bash
agentcore configure --entrypoint agent.py
```

.bedrock_agentcore.yaml and Dockerfile will be automatically generated, then we deploy with the launch command.

```bash
agentcore launch
```

When completed, the Agent ARN will be displayed, so we'll make a note of it for later use.

```
Agent ARN: arn:aws:bedrock-agentcore:us-west-2:xxx:runtime/agent-yyy
```
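For reference, the noted ARN can also be used to invoke the runtime directly, which is a handy sanity check before wiring it into GenU. Below is a sketch using boto3; the client name and `invoke_agent_runtime` parameters reflect my reading of the boto3 docs, so treat them as assumptions and verify before use.

```python
# Sketch: calling the deployed AgentCore runtime directly via boto3.
# Client/operation names are my reading of the boto3 docs; verify before use.
import json

def build_payload(prompt: str) -> str:
    """JSON payload matching what the agent's entrypoint expects."""
    return json.dumps({"prompt": prompt})

def invoke_runtime(agent_arn: str, prompt: str, region: str = "us-west-2"):
    import boto3  # imported here so build_payload stays dependency-free

    client = boto3.client("bedrock-agentcore", region_name=region)
    return client.invoke_agent_runtime(
        agentRuntimeArn=agent_arn,
        # session IDs must be fairly long (33+ characters, if I recall)
        runtimeSessionId="smoke-test-session-0123456789-0123456789",
        payload=build_payload(prompt),
    )
```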

GenU Deployment

Next, we will proceed with the deployment of GenU.
First, let's `clone` the source code from the repository.

```bash
git clone https://github.com/aws-samples/generative-ai-use-cases.git
```

After cloning, run the `npm ci` command to install the packages.

```bash
npm ci
```

After the dependencies are installed,
edit the parameter.ts file as follows.

```ts:parameter.ts
const envs: Record<string, Partial<StackInput>> = {
  // If you want to define an anonymous environment in parameter.ts, uncomment the following and the content of cdk.json will be ignored.
  '': {
    // Parameters for the anonymous environment
    // If you want to override the default settings, add the following
    modelRegion: 'us-west-2',
    imageGenerationModelIds: [],
    videoGenerationModelIds: [],
    speechToSpeechModelIds: [],
    createGenericAgentCoreRuntime: true,
    agentCoreRegion: 'us-west-2',
    agentCoreExternalRuntimes: [
      {
        name: 'SimpleAgentCore',
        arn: 'arn:aws:bedrock-agentcore:us-west-2:xxx:runtime/agent-yyy',
      },
    ],
  },
  dev: {
    // Parameters for development environment
  },
  staging: {
    // Parameters for staging environment
  },
  prod: {
    // Parameters for production environment
  },
  // If you need other environments, customize them as needed
};
```

We will use the region us-west-2 and pass empty arrays to unused parameters such as imageGenerationModelIds.

The important parameters are createGenericAgentCoreRuntime, agentCoreRegion, and agentCoreExternalRuntimes.

  • createGenericAgentCoreRuntime deploys the generic AgentCore AI agent that GenU provides by default. I set it to true to see how it works.
  • agentCoreRegion is the region to deploy AgentCore to.
  • agentCoreExternalRuntimes allows you to specify your own AgentCore to use with GenU.
    Here, we specify the ARN of the AI agent we deployed earlier.

Now we're ready! Let's deploy!
If you've never used CDK before, you'll need to bootstrap first, so run the bootstrap command.

```bash
npx -w packages/cdk cdk bootstrap
```

Once the bootstrap command executes successfully, run the deploy command. Note that to enable the AgentCore use cases, the docker command must be executable.

As mentioned in the documentation, if you are using an OS architecture such as Intel/AMD, please run the following command before deployment. This is to enable building ARM-based container images.

https://github.com/aws-samples/generative-ai-use-cases/blob/main/docs/ja/DEPLOY_OPTION.md#agentcore-ユースケースの有効化

```bash
docker run --privileged --rm tonistiigi/binfmt --install arm64
```

```bash
npm run cdk:deploy
```

It will take some time, but after waiting a while, it will complete and a URL will be displayed.

```
GenerativeAiUseCasesStack.WebUrl = https://xxx.cloudfront.net
```

When you access it, a Cognito login screen will be displayed, and you need to register as a user and log in.

After logging in, you'll see a menu displaying AgentCore!!!
The robot icon is cute!

Let's click and try it out right away.
The default GenericAgentCoreRuntime is displayed.
This AI agent can use the MCP servers defined in the mcp.json below, so I'll ask a question that uses the AWS Documentation MCP Server.

```json:mcp.json
{
  "_comment": "Generic AgentCore Runtime Configuration",
  "_agentcore_requirements": {
    "platform": "linux/arm64",
    "port": 8080,
    "endpoints": {
      "/ping": "GET - Health check endpoint",
      "/invocations": "POST - Main inference endpoint"
    },
    "aws_credentials": "Required for Bedrock model access and S3 operations"
  },
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time"]
    },
    "awslabs.aws-documentation-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"]
    },
    "awslabs.cdk-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.cdk-mcp-server@latest"]
    },
    "awslabs.aws-diagram-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-diagram-mcp-server@latest"]
    },
    "awslabs.nova-canvas-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.nova-canvas-mcp-server@latest"],
      "env": {
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```

![CleanShot 2025-09-06 at 12.41.06@2x](https://devio2024-2-media.developers.io/upload/3c8tDfrw4MzcEOWABbAvnI/2025-09-06/cCxTCPKTxbnO.png)

I got a response!!
The MCP server obtained results, and the LLM interpreted those results to return them as appropriate language!

![CleanShot 2025-09-06 at 12.42.43@2x](https://devio2024-2-media.developers.io/upload/3c8tDfrw4MzcEOWABbAvnI/2025-09-06/1vbCQcD1IedI.png)

Looking at the trace results, `search_documentation` and `read_documentation` are activated, so it seems the MCP Server is being used.

![CleanShot 2025-09-06 at 12.43.14@2x](https://devio2024-2-media.developers.io/upload/3c8tDfrw4MzcEOWABbAvnI/2025-09-06/fxBxb9NQBmne.png)

It's good to see that the default AI agent can utilize various MCP servers to do many things. It's also possible to add MCP servers to `mcp.json` and use them with this AI agent. I'd like to try that sometime.

Now I'll try the agent I created.
I'll switch the runtime in the select box.

![CleanShot 2025-09-06 at 19.58.08@2x](https://devio2024-2-media.developers.io/upload/3c8tDfrw4MzcEOWABbAvnI/2025-09-06/EzqiUzPYj7Cl.png)

After switching, I'll ask "Tell me the weather in Tokyo".

![CleanShot 2025-09-06 at 18.11.10@2x](https://devio2024-2-media.developers.io/upload/3c8tDfrw4MzcEOWABbAvnI/2025-09-06/Rn644ggbihAC.png)

Oh! It worked perfectly!! I can see from the trace that the weather retrieval tool was used.
By the way, while models can be selected from the dropdown, the model ID is currently hard-coded in the agent, so I'll modify it to use the model ID sent in the request.
I checked the developer tools to see what kind of request is being sent.

![CleanShot 2025-09-06 at 18.08.51@2x](https://devio2024-2-media.developers.io/upload/3c8tDfrw4MzcEOWABbAvnI/2025-09-06/SkPCHKxvksEh.png)

I see that the `modelId` is included within the `model` object when sent.
I won't implement this now, but it's good to know that past conversations are stored in `messages` when handling AgentCore in GenU.
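Though I won't implement history handling here, the payload fields can be read defensively along these lines. This is a sketch: the overall shape (prompt, model.modelId, messages) is what the developer tools showed, but treating messages as a list with a default of an empty list is my assumption, not a documented contract.

```python
# Defensive parsing of the request payload GenU sends, based on the shape
# observed in the developer tools: prompt, model.modelId, and messages.
DEFAULT_MODEL_ID = "anthropic.claude-3-5-haiku-20241022-v1:0"

def parse_payload(payload: dict) -> tuple[str, str, list]:
    prompt = payload.get("prompt", "")
    model_id = payload.get("model", {}).get("modelId", DEFAULT_MODEL_ID)
    messages = payload.get("messages", [])  # past conversation history
    return prompt, model_id, messages

sample = {
    "prompt": "Tell me the weather in Tokyo",
    "model": {"modelId": "anthropic.claude-3-7-sonnet-20250219-v1:0"},
}
prompt, model_id, messages = parse_payload(sample)
```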

Based on this, I'll modify the code and deploy it again.

```python:agent.py
import os
from strands import Agent, tool
from strands.models import BedrockModel
from bedrock_agentcore.runtime import BedrockAgentCoreApp
app = BedrockAgentCoreApp()

@tool
def get_weather(city: str) -> str:
    """Get the weather for a given city"""
    return f"The weather in {city} is sunny"

@app.entrypoint
async def entrypoint(payload):
    message = payload.get("prompt", "")
    model_config = payload.get("model", {})
    model_id = model_config.get("modelId", "anthropic.claude-3-5-haiku-20241022-v1:0")
    model = BedrockModel(model_id=model_id, params={"max_tokens": 4096, "temperature": 0.7}, region="us-west-2")
    agent = Agent(model=model, tools=[get_weather])
    stream_messages = agent.stream_async(message)
    async for message in stream_messages:
        if "event" in message:
            yield message

if __name__ == "__main__":
    app.run()
```

Once the corrections are made, deploy again.

```bash
agentcore launch
```

Once deployment is complete, let's try switching the model!
Let's switch to Claude 3.7 Sonnet.

It's working without any issues! Just to make sure, let's check the logs to confirm that the model we specified in the request was properly used.

It successfully switched and is being used!
This is convenient when you want to test agent behavior while switching between models!

Thoughts

I found this quite useful, so here's a summary of the advantages and disadvantages I experienced.

Advantages

  • You can use AI agents created in the GenU environment
    • If your team or company already has GenU deployed, being able to quickly share AI agents you've created is a nice point.
      • Having authentication provided by Cognito is also great. SSO via SAML integration with Microsoft Entra ID is also possible.
    • It's nice that you don't have to create your own frontend.
  • Since it runs through AgentCore, you can use LLMs other than Bedrock.
    • GenU itself is built around a Bedrock-centered architecture. There are ways to use other LLMs, such as custom implementation or calling via MCP server, but these might be somewhat cumbersome. AgentCore, on the other hand, can use LLMs besides Bedrock, such as Azure OpenAI.
    • If you want to use other LLMs in GenU, it might be viable to lightly wrap them in an AI agent framework and deploy them to AgentCore. It's also good that the GenU side only needs to add parameters.

If you're interested in using other LLMs with AgentCore, please refer to the article I wrote previously.

https://dev.classmethod.jp/articles/amazon-bedrock-agentcore-identity-cognito-azure-openai/

Disadvantages

  • Currently, you can't send arbitrary request parameters to AI agents
    • While model ID, prompts, and past messages are sent, user IDs and arbitrary request parameters cannot be sent, so it seems difficult to handle Memory functions and request parameter-based processing. (Sorry if I've overlooked something...)
    • Since conversation history is stored in DynamoDB deployed on the GenU side, Short-term Memory integration features might not be implemented.

As this is still an Experimental feature, I hope it will become more convenient with AgentCore integration in the future! Looking forward to it!
Looking at the PR below, it seems various AgentCore features will be integrated in the future!

https://github.com/aws-samples/generative-ai-use-cases/pull/1191

Conclusion

In this post, I quickly tried the AgentCore integration in GenU and invoked the AI agent I created.
I found it convenient for cases where you want to share AI agents you've created in an environment that already has GenU deployed.

However, as it's still an Experimental feature, I hope it becomes more convenient in the future!
I'll continue to test it if there are further updates!

I hope this article was helpful! Thank you for reading until the end!
