[Update] Amazon Bedrock AgentCore Managed Harness has been released in preview!

2026.04.23

Introduction

Hello, I'm Jinno from the consulting department, and I love supermarkets.

Today, on April 22, 2026, Amazon Bedrock AgentCore's new features including Managed Harness were announced!

https://aws.amazon.com/jp/about-aws/whats-new/2026/04/agentcore-new-features-to-build-agents-faster/

Let's dive right in and explore these features while referring to the documentation!

Updates Overview

Here are the three updates that were announced:

| Feature | Overview |
| --- | --- |
| Managed Harness (Preview) | Just specify the model, system prompt, and tools to run agents without writing orchestration code |
| AgentCore CLI | Covers everything from project creation through local development to deployment in the CLI. CDK support is available; Terraform support is coming soon |
| AgentCore Skills | Pre-built skills for coding assistants (Kiro / Claude Code / Codex / Cursor, etc.) |

AgentCore CLI has been available for a while. In this blog, I'll focus on the Managed Harness feature.

By the way, when I opened the AgentCore console, I noticed a new Harness Preview menu item in the sidebar!

[Screenshot: Harness (Preview) item in the AgentCore console sidebar]

Harness

Before diving in, let me clarify what a "harness" is in this context by reviewing the official blog.

https://aws.amazon.com/blogs/machine-learning/get-to-your-first-working-agent-in-minutes-announcing-new-features-in-amazon-bedrock-agentcore/

In the context of AgentCore, a harness is the orchestration foundation for running agents. Specifically, it handles:

  • Model invocation and inference execution
  • Tool selection and tool invocation
  • Returning tool results to the model (the so-called ReAct loop)
  • Session state management
  • Error recovery and retries
  • Authentication and authorization
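For intuition, the loop described above can be sketched in a few lines. This is a hypothetical, stubbed illustration of the kind of orchestration code a harness spares you from writing, not AgentCore's actual implementation; `call_model` and `TOOLS` are stand-ins:

```python
# Hypothetical sketch of the ReAct-style loop a harness manages for you.
# `call_model` and `TOOLS` are stand-ins, not real AgentCore APIs.

def call_model(messages):
    # Stand-in for real model inference: request a tool until a result arrives.
    last = messages[-1]["content"]
    if messages[-1]["role"] == "tool":
        return {"type": "answer", "text": f"The result is {last}"}
    return {"type": "tool_call", "tool": "calculator", "input": "121 + 23133"}

TOOLS = {"calculator": lambda expr: str(sum(int(t) for t in expr.split(" + ")))}

def run_agent(user_prompt, max_iterations=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_iterations):                       # execution limit
        action = call_model(messages)                     # model invocation
        if action["type"] == "answer":
            return action["text"]                         # final response
        result = TOOLS[action["tool"]](action["input"])   # tool invocation
        messages.append({"role": "tool", "content": result})  # feed result back
    raise RuntimeError("iteration cap reached")
```

With Managed Harness, everything inside `run_agent` (plus retries, session state, and auth) is handled service-side; you only declare the model, prompt, and tools.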

The Starter Toolkit and AgentCore CLI already made it easy to create the infrastructure, but standing up an agent still took time; Managed Harness streamlines this so you can get an agent running quickly.

What Managed Harness Handles

Managed Harness takes over many processes that developers typically implement themselves.
Developers only need to define these three things to get started:

| Declaration | Content |
| --- | --- |
| model | LLM to use (Bedrock / OpenAI / Gemini) |
| systemPrompt | Instructions for the agent |
| tools | Available tools (Gateway / MCP / Browser / Code Interpreter, etc.) |

It's simple and codeless.

Execution Environment

Behind the harness, each session gets its own micro VM:

  • Micro VM provides isolated execution environment for each session
  • Each session has dedicated file system and shell access
  • Session state is stored in a persistent file system, allowing resumption after interruption
  • Shell commands can be executed directly without going through model inference, reducing unnecessary token consumption
  • All actions are automatically traced through AgentCore Observability

This is the same model as the existing AgentCore Runtime, which suggests Runtime is being used behind the scenes.

Rather than explaining further, let's try it out!

First Look in the Console

Let's start by exploring the console.

When I click "Quick create harness," after a brief wait I'm redirected to a playground.
It seems a harness has been created with recommended settings! This is extremely simple.

I can add Skills and tools too.
Let me add the browser tool.

I selected the browser, but Code Interpreter, AgentCore Gateway, and Remote MCP Server can also be added. It's nice being able to do everything quickly in the management console.

Let me add it and ask a question.

The browser is working fine!

Skills seem to require specifying a file path, but I'm not sure how to set them up. It doesn't seem possible to configure the content of the skill itself in this input field. I'd like to explore this in a future blog post.

When selecting "Advanced create harness," you can configure up front the same options that the AgentCore CLI asks for later in this article.

It's great that even those who find the CLI challenging can easily create harnesses through the console!

Now that we've quickly explored the console, let's dive deeper with the AgentCore CLI.

Prerequisites

Here's the environment I used:

| Item | Details |
| --- | --- |
| Region | us-east-1 (Northern Virginia) |
| Node.js | 24 |
| AgentCore CLI | 1.0.0-preview.1 |
| Model | global.anthropic.claude-sonnet-4-6 (harness default) |

Currently, Managed Harness is available in four regions: Oregon (us-west-2), Northern Virginia (us-east-1), Frankfurt (eu-central-1), and Sydney (ap-southeast-2).

https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/harness.html

For basic usage of the AgentCore CLI and the recently added Agent Inspector Web UI (a feature for chatting with agents in a browser UI via agentcore dev), see my previous articles:

https://dev.classmethod.jp/articles/agentcore-cli-deploy/

https://dev.classmethod.jp/articles/agentcore-cli-inspector-web-ui/

Installing AgentCore CLI (preview version)

Let's start with installation:

Installation
npm install -g @aws/agentcore@preview
agentcore --version
# 1.0.0-preview.1

If it shows 1.0.0-preview.1, you can use the subcommands related to managed harness.

Creating a Project

Let's generate a project template. I named it harnessSample.

Project Creation
agentcore create

After entering the project name, you'll be asked what kind of resource to create first. Note that Harness is marked as "recommended":

What would you like to build?

❯ Harness (recommended) - Managed config-based agent loop, no code required
  Agent - Start with a template or bring your own code hosted on AgentCore Runtime
  Skip - I'll add resources later

When selecting Harness, you'll be prompted through these steps:

| Step | Content |
| --- | --- |
| Name | Harness name (here, MyHarness) |
| Model provider | Amazon Bedrock / OpenAI / Google Gemini |
| Custom environment | Default Environment / Container URI / Dockerfile |
| Memory | Memory configuration |
| Advanced settings | Tools / Authentication / Network / Lifecycle / Execution limits / Truncation / Session Storage |

Each model provider comes with a default model. I selected Amazon Bedrock:

Select model provider
Choose where to run your models

❯ Amazon Bedrock - Default: global.anthropic.claude-sonnet-4-6
  OpenAI - Default: gpt-5 (requires API key ARN)
  Google Gemini - Default: gemini-2.5-flash (requires API key ARN)

Next is the Custom environment selection. I proceeded with the standard Default Environment (with Python / Bash / File tools included).

Custom environment
Optionally provide a custom container image for the harness runtime

❯ Default Environment - Includes Python, Bash, File tools
  Container URI - Use a pre-built container image (ECR URI)
  Dockerfile - Bring your own Dockerfile

Then comes the Memory setting, where you choose whether the harness should maintain context across sessions. I selected No persistent memory.

Memory
Persistent memory lets the harness remember context across sessions

❯ No persistent memory - Harness does not retain context across sessions
  Enabled - Create persistent memory for this harness

Next are the Advanced settings, which offers multiple checkbox options for tools, authentication, network, lifecycle, execution limits, truncation, and session storage. I only checked Tools.

Advanced settings (optional)
Configure tools, network, lifecycle, execution limits, truncation, or
session storage

❯ [✓] Tools - Add browser, code interpreter, MCP, or gateway tools
  [ ] Authentication - Inbound auth: AWS_IAM or Custom JWT
  [ ] Network - Deploy inside a VPC with custom subnets and security groups
  [ ] Lifecycle - Set idle timeout and max session lifetime
  [ ] Execution limits - Cap iterations, tokens, and per-turn timeout
  [ ] Truncation - Choose how context is managed when it exceeds limits
  [ ] Session Storage - Mount persistent storage for session data

Since I enabled Tools, I proceeded to the tool selection screen. You can choose from four options: AgentCore Browser / AgentCore Code Interpreter / AgentCore Gateway / Remote MCP Server. I only enabled AgentCore Code Interpreter.

Select tools for your harness
Choose built-in tools, MCP servers, or gateways

❯ [ ] AgentCore Browser - Web browsing and automation
  [✓] AgentCore Code Interpreter - Sandboxed code execution
  [ ] AgentCore Gateway - Connect via gateway
  [ ] Remote MCP Server - Connect to an MCP server

Finally, in the Review Configuration step, you confirm all settings.

Review Configuration

  Name: MyHarness
  Model Provider: bedrock
  Model ID: global.anthropic.claude-sonnet-4-6
  Memory: Disabled
  Tools: AgentCore Code Interpreter

Exploring Generated Files

After completing the wizard, configuration files were created under app/MyHarness/.

Project Structure
harnessSample/
├── agentcore/
│   ├── agentcore.json     # Overall project spec
│   ├── aws-targets.json   # Deployment target region/account
│   └── cdk/               # CDK suite (L3 constructs live here)
└── app/
    └── MyHarness/
        ├── harness.json       # Harness configuration
        └── system-prompt.md   # System prompt (separate file)

Here's what app/MyHarness/harness.json looks like. You can manually modify the model name here and redeploy.

app/MyHarness/harness.json
{
  "name": "MyHarness",
  "model": {
    "provider": "bedrock",
    "modelId": "global.anthropic.claude-sonnet-4-6"
  },
  "tools": [
    {
      "type": "agentcore_code_interpreter",
      "name": "code-interpreter"
    }
  ],
  "skills": []
}
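Because the configuration is plain JSON, swapping the model before a redeploy is easy to script. Here is a minimal sketch, assuming the file layout above (set_harness_model is my own helper, not part of the CLI):

```python
import json
from pathlib import Path

def set_harness_model(config_path, model_id):
    """Rewrite the modelId in a harness.json, then run `agentcore deploy`."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["model"]["modelId"] = model_id
    path.write_text(json.dumps(config, indent=2) + "\n")
    return config

# Example (hypothetical model ID):
# set_harness_model("app/MyHarness/harness.json",
#                   "us.anthropic.claude-haiku-4-5-20251001-v1:0")
```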

The system prompt is stored in a separate system-prompt.md file. This design allows you to modify it and redeploy to update the behavior.

The overall project specification is in agentcore/agentcore.json, where the harness is registered via path reference:

agentcore/agentcore.json (excerpt)
{
  "name": "harnessSample",
  "managedBy": "CDK",
  "harnesses": [
    { "name": "MyHarness", "path": "app/MyHarness" }
  ]
}

The managedBy field is set to CDK, indicating that it will be deployed through CDK.

Deployment Procedure

Let's proceed with deployment.

Execute Deployment

First, run the deploy command:

Deployment
agentcore deploy

This shows step-by-step progress:

Deployment Result
AgentCore Deploy

Project: harnessSample
Target: us-east-1:123456789012

[done]    Validate project
[done]    Check dependencies
[done]    Build CDK project
[done]    Synthesize CloudFormation
[done]    Check stack status
[done]    Publish assets
[done]    Deploy harnesses

Deployed 1 stack(s): AgentCore-harnessSample-default

Notice the Deploy harnesses step is included in the pipeline, indicating harness-specific processing.

The harness details can also be confirmed in the Bedrock console. You can see the ARN and IAM role under Harness details, the model and system prompt under Model and system prompt, and registered tools under Tools.

Scrolling down further reveals sections for Skills, Advanced configurations, Inbound Auth, and Observability (Runtime sessions / Runtime invocations / vCPU and memory consumption metrics).

[Screenshot: Observability section of the harness console page]

I was a bit worried about not configuring Inbound Auth, but it's set to IAM by default. That's good.

I also noticed there's a section for Skills (0) / No skills configured, which allows attaching skills to the harness. The harness.json file also includes an empty "skills": [] array by default. I'll explore this in another blog post. The official documentation also contains information on how to configure Skills, which you can refer to as needed.

https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/harness-environment.html

As a side note, resources deployed through the harness also appear under Runtime, which confirms that Runtime is indeed the backend.

This runtime is managed by a harness.

This runtime resource was created and is managed by a harness. To make changes, go to the harness that manages this runtime.

There's a note stating that it's managed by a harness.

Testing: Calculation with Code Interpreter

Since I included the Code Interpreter tool, let's ask it (in Japanese) to perform an addition. I'll pass --session-id so the conversation can be continued under the same ID.

Invocation
agentcore invoke --harness MyHarness \
  --session-id "$(uuidgen)" \
  "121 + 23133 をCode Interpreterで計算して"

Here's the result:

Execution Result
もちろんです!Code Interpreterで計算します。
🔧 Tool: code_interpreter

 4938 in · 121 out · 2.6s
🔗 Session: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX (use --session-id to continue)

計算結果はこちらです!

**121 + 23133 = 23254**

 5073 in · 27 out · 2.6s
🔗 Session: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX (use --session-id to continue)

The CLI output nicely displays the tool call (🔧 Tool: code_interpreter), token count (input/output), latency, and session ID.
That was quick and easy!

Modify System Prompt and Redeploy

Since it's configuration-based, we should be able to change the behavior just by modifying system-prompt.md and running agentcore deploy again. Let's try a one-line prompt (Japanese for "Reply in a pretentious tone"):

app/MyHarness/system-prompt.md
キザなセリフで返して。

Save and redeploy:

Redeployment
agentcore deploy

Here's the result when I send the same prompt:

Execution Result (Pretentious Version)
ふっ…そんな計算、造作もないことだ。このオレ様の力を見せてやろう。
🔧 Tool: code_interpreter


フハハ…結果は **23254** だ。

たかが足し算ごときに、このオレ様のCode Interpreterを使わせるとは…まったく、君も大した度胸をしている。

It's become absurdly pretentious, which confirms our changes have been applied!

Let's also check Observability.

Tracing works properly with harnesses as well!

Override Model at Invocation Time

You can temporarily override the model on a per-call basis by simply passing the --model-id parameter:

Model Switching
agentcore invoke --harness MyHarness \
  --model-id us.anthropic.claude-haiku-4-5-20251001-v1:0 \
  --session-id "$(uuidgen)" \
  "121 + 23133 をCode Interpreterで計算して"

Here's the result when switching to Haiku:

Haiku Execution Result
ふむ、xxを立てながら申し上げるならば、その計算程度は朝飯前。さっさと片付けてやろう。
🔧 Tool: code_interpreter


ほうほう、期待通りじゃないか。**121 + 23133 = 23254** だ。

The system prompt (pretentious version) still applies, but the model has switched to Haiku. This makes it easy to use Sonnet normally and switch to Haiku when cost is a concern.
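If you toggle between models often, a tiny wrapper that assembles the invoke command can help. This is a hypothetical convenience helper built only from the CLI flags shown above (run the result with subprocess.run):

```python
import uuid

def build_invoke_cmd(harness, prompt, model_id=None, session_id=None):
    """Assemble an `agentcore invoke` command, optionally overriding the model."""
    cmd = ["agentcore", "invoke", "--harness", harness,
           "--session-id", session_id or str(uuid.uuid4())]
    if model_id:  # per-call override; omit to use the model in harness.json
        cmd += ["--model-id", model_id]
    cmd.append(prompt)
    return cmd

# Example: cheap one-off run on Haiku
# subprocess.run(build_invoke_cmd(
#     "MyHarness", "121 + 23133 をCode Interpreterで計算して",
#     model_id="us.anthropic.claude-haiku-4-5-20251001-v1:0"))
```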

Calling from boto3: Generate Template with --with-invoke-script

For calling from applications, adding --with-invoke-script when running agentcore add harness automatically generates a harness-specific invoke.py. Let's try it:

Add Harness with Invoke Script
agentcore add harness --name invokeSample --with-invoke-script
Result
Added harness 'invokeSample'.

After adding, three files were created in app/invokeSample/:

Generated Files
app/invokeSample/
├── harness.json
├── invoke.py
└── system-prompt.md

The harness.json is simple: tools and skills are empty arrays, and memory is automatically linked to <harness-name>Memory.

app/invokeSample/harness.json
{
  "name": "invokeSample",
  "model": {
    "provider": "bedrock",
    "modelId": "global.anthropic.claude-sonnet-4-6"
  },
  "tools": [],
  "skills": [],
  "memory": {
    "name": "invokeSampleMemory"
  }
}

Additionally, invokeSample is automatically added to the harnesses array in agentcore/agentcore.json, making it a deployment target.

The generated invoke.py is a sample that calls boto3's invoke_harness and renders the response stream, displaying tool names, token counts, latency, and session IDs. HARNESS_ARN, AWS_REGION, and SESSION_ID are read from environment variables.

app/invokeSample/invoke.py
import argparse
import json
import os
import sys
import uuid

import boto3

HARNESS_ARN = os.environ.get("HARNESS_ARN", "<your-harness-arn>")
REGION = os.environ.get("AWS_REGION", "<your-region>")
SESSION_ID = os.environ.get("SESSION_ID", str(uuid.uuid4()))

parser = argparse.ArgumentParser(description="Invoke an AgentCore Harness")
parser.add_argument("prompt", nargs="?", default="Hello!")
parser.add_argument("--raw-events", action="store_true")
parser.add_argument("--session-id", default=SESSION_ID)
args = parser.parse_args()

client = boto3.client("bedrock-agentcore", region_name=REGION)

response = client.invoke_harness(
    harnessArn=HARNESS_ARN,
    runtimeSessionId=args.session_id,
    messages=[
        {"role": "user", "content": [{"text": args.prompt}]}
    ],
)

for event in response["stream"]:
    if args.raw_events:
        print(json.dumps(event, default=str))
    else:
        if "contentBlockStart" in event:
            start = event["contentBlockStart"].get("start", {})
            if "toolUse" in start:
                tool = start["toolUse"]
                print(f"\n🔧 Tool: {tool.get('name', 'unknown')}", flush=True)
        elif "contentBlockDelta" in event:
            delta = event["contentBlockDelta"].get("delta", {})
            if "text" in delta:
                print(delta["text"], end="", flush=True)
        elif "messageStop" in event:
            stop_reason = event["messageStop"].get("stopReason", "")
            if stop_reason == "end_turn":
                print()
        elif "metadata" in event:
            usage = event["metadata"].get("usage", {})
            metrics = event["metadata"].get("metrics", {})
            latency = metrics.get("latencyMs", 0) / 1000
            print(
                f"\n{usage.get('inputTokens', 0)} in · "
                f"{usage.get('outputTokens', 0)} out · "
                f"{latency:.1f}s",
                file=sys.stderr,
            )

To call it, simply pass the HARNESS_ARN and AWS_REGION as environment variables. I'll use the ARN of the MyHarness with the pretentious system prompt we set earlier.

Environment Variables + Execution
export HARNESS_ARN="arn:aws:bedrock-agentcore:us-east-1:123456789012:harness/harnessSample_MyHarness-XXXXXXXXXX"
export AWS_REGION="us-east-1"
python3 app/invokeSample/invoke.py "121 + 23133 をCode Interpreterで計算して"

Here's the result:

Execution Result
ふっ…そんな計算、このオレに頼むとはなかなか見る目があるじゃないか。では、その程度の計算、華麗に片付けてやろう。
🔧 Tool: code_interpreter

 4943 in · 161 out · 4.1s
ふん…予想通りだ。結果を見ろ——

**121 + 23133 = 23254**

どうだ?この滑らかさ、まるでオレ自身のようだろう。次に難題があるなら、また頼ってくるがいい。オレはいつでもここにいる。✨
 5141 in · 98 out · 3.6s

🔗 Session: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

The system prompt (pretentious version) is working, the Code Interpreter tool is being used, and the token count and session ID are displayed at the end. The same experience from the CLI is replicated in the Python script.

Adding the --raw-events flag shows the raw streaming events in JSON. When executed with this flag, each event is output on a separate line:

Execute with --raw-events
python3 app/invokeSample/invoke.py --raw-events "121 + 23133 をCode Interpreterで計算して"
Raw Event Excerpts (selected representative samples from 98 lines)
{"messageStart": {"role": "assistant"}}
{"contentBlockDelta": {"contentBlockIndex": 0, "delta": {"text": "\u304b"}}}
{"contentBlockDelta": {"contentBlockIndex": 0, "delta": {"text": "\u3057\u3053"}}}
...
{"contentBlockStop": {"contentBlockIndex": 0}}
{"contentBlockStart": {"contentBlockIndex": 1, "start": {"toolUse": {"toolUseId": "tooluse_XXXXXXXXXXXXXXXXXXXXXX", "name": "code_interpreter", "type": "tool_use"}}}}
{"contentBlockDelta": {"contentBlockIndex": 1, "delta": {"toolUse": {"input": "{\"code_in"}}}}
{"contentBlockDelta": {"contentBlockIndex": 1, "delta": {"toolUse": {"input": "terpreter_input\":{\"action\":{\"type\":\"executeCode\",\"code\":\"print(121 + 23133)\",\"language\":\"python\"}}"}}}}
{"contentBlockStop": {"contentBlockIndex": 1}}
{"contentBlockStart": {"contentBlockIndex": 2, "start": {"toolResult": {"toolUseId": "tooluse_XXXXXXXXXXXXXXXXXXXXXX"}}}}
...
{"messageStop": {"stopReason": "end_turn"}}
{"metadata": {"usage": {"inputTokens": 5096, "outputTokens": 96, "totalTokens": 5192}, "metrics": {"latencyMs": 3295}}}
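As the excerpt shows, the toolUse input arrives as partial JSON strings spread over several contentBlockDelta events, so a client has to concatenate the fragments per content block before parsing. Here is a minimal sketch of that reassembly (my own helper, assuming the event shape above):

```python
import json

def collect_tool_inputs(events):
    """Concatenate streamed toolUse input fragments and parse them per block."""
    buffers = {}  # contentBlockIndex -> accumulated JSON string
    for event in events:
        delta_event = event.get("contentBlockDelta")
        if not delta_event:
            continue
        fragment = delta_event.get("delta", {}).get("toolUse", {}).get("input")
        if fragment is not None:
            idx = delta_event["contentBlockIndex"]
            buffers[idx] = buffers.get(idx, "") + fragment
    return {idx: json.loads(text) for idx, text in buffers.items()}
```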

Exploring CDK Side (L3 Constructs)

Deployment goes through agentcore deploy, which invokes CDK behind the scenes; the actual CDK code lives under the project's agentcore/cdk/ directory and uses L3 constructs from @aws/agentcore-cdk.
So it's L3 Constructs...!

agentcore/cdk/lib/cdk-stack.ts (excerpt)
import {
  AgentCoreApplication,
  AgentCoreMcp,
  type AgentCoreProjectSpec,
  type AgentCoreMcpSpec,
} from '@aws/agentcore-cdk';

export class AgentCoreStack extends Stack {
  public readonly application: AgentCoreApplication;

  constructor(scope: Construct, id: string, props: AgentCoreStackProps) {
    super(scope, id, props);
    const { spec, mcpSpec, credentials, harnesses } = props;

    this.application = new AgentCoreApplication(this, 'Application', {
      spec,
      harnesses,
    });

    if (mcpSpec?.agentCoreGateways && mcpSpec.agentCoreGateways.length > 0) {
      new AgentCoreMcp(this, 'Mcp', {
        projectName: spec.name,
        mcpSpec,
        agentCoreApplication: this.application,
        credentials,
        projectTags: spec.tags,
      });
    }
  }
}

The L3 constructs exported from @aws/agentcore-cdk are as follows:

| L3 Construct | Role |
| --- | --- |
| AgentCoreApplication | Overall project (root for multiple agents and harnesses) |
| AgentEnvironment | Individual agent environment |
| AgentCoreHarnessRole | Execution IAM role for harnesses |
| AgentCoreMemory | Memory configuration |
| AgentCoreRuntime | The traditional AgentCore Runtime side |
| AgentCoreMcp | Gateway / MCP server configuration |
| AgentCorePolicyEngine | Policy-related features |

The contents of agentcore.json and harness.json map directly onto the inputs of these L3 constructs, so what the CLI creates can be carried over to CDK as is. I'd like to dig deeper into this as well.

Conclusion

Now that agents can run without writing any orchestration logic yourself, there's a new lightweight option for trying out simple agents. There's still plenty to verify, such as how to attach Skills to the agent itself, so I'll keep experimenting!

The ability to easily create agents in the console is a particularly nice point. I want to get hands-on experience with it.

This was a quick test focused on speed!

I hope this article has been helpful. Thank you for reading to the end!
