I tried to see how compatible Bedrock Mantle's Responses API is with Azure
In the previous article, we confirmed that connecting Codex CLI v0.124.0 to the Bedrock Mantle Responses API lets you use a coding agent with nothing but AWS authentication.
For basic usage of the Responses API in Bedrock Mantle, please also refer to Takakuni-san's article.
In February 2026, OpenAI and Amazon announced a strategic partnership and the joint development of a Stateful Runtime Environment running on Bedrock. About two months later, on April 28, Amazon CEO Andy Jassy said that OpenAI models would start being offered on Bedrock within a few weeks.
Meanwhile, on Azure AI Foundry, the Responses API has been generally available since 2025, offering full functionality including stateful conversations using previous_response_id and server-side tools (web_search, code_interpreter, file_search).
The real interest lies in the upcoming GPT-5 series models, but in this article we prepare for their arrival by testing the portability of the Responses API and the architectural differences between the two platforms using currently available models.
Test Environment
| | Azure AI Foundry | Bedrock Mantle |
|---|---|---|
| Region | East US 2 | us-east-1 |
| Model | gpt-4.1-mini | openai.gpt-oss-120b |
| Endpoint | /openai/v1/responses | /v1/responses |
| Authentication | API Key | AWS IAM (SigV4) |
Azure and Bedrock support the Responses API with different model lineups, so we used the models available on each platform. Azure supports numerous models including the GPT-5 series, while Bedrock only supports openai.gpt-oss-120b and openai.gpt-oss-20b (verified in Part 1). Additionally, gpt-oss-120b on Azure doesn't support the Responses API (only Chat Completions), so a direct comparison with the same model isn't possible.
Our comparison focuses not on model performance differences but on compatibility of Responses API response structures and differences in tool support.
Stateful Conversations (previous_response_id)
Using previous_response_id in the Responses API allows conversation history to be maintained on the server side, eliminating the need for clients to manage history for context continuity. However, internally, past inputs and outputs are included in the prompt and subject to billing, so the number of billable tokens may increase as the conversation progresses. The approach should be chosen based on the specific workload.
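The per-turn request shapes can be sketched as follows. This is an illustrative Python sketch (the HTTP call itself is omitted; model name and response ID are placeholders), showing that Turn 2 sends only the new input plus `previous_response_id` while the server reattaches, and bills, the prior context:

```python
def build_turn1(model: str, instructions: str, user_input: str) -> dict:
    # Turn 1 carries the full context: instructions plus the user's input.
    return {"model": model, "instructions": instructions, "input": user_input}

def build_turn2(model: str, user_input: str, previous_response_id: str) -> dict:
    # Turn 2 sends only the new input; the server restores the history
    # identified by previous_response_id (those tokens are billed again).
    return {
        "model": model,
        "input": user_input,
        "previous_response_id": previous_response_id,
    }

turn1 = build_turn1("gpt-41-mini",
                    "You are a DevOps engineer assistant. Be concise.",
                    "Our ECS task keeps failing. Here is the error ...")
# In practice the ID comes from the Turn 1 response: response["id"].
turn2 = build_turn2("gpt-41-mini",
                    "What VPC networking changes should I check?",
                    "resp_example_id")
```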
For a practical scenario, we tested ECS task error log analysis.
Azure AI Foundry
In Turn 1, we submit CloudWatch error logs. In the curl command, gpt-41-mini is the deployment name (deployment names can't contain periods, so we can't use gpt-4.1-mini as-is).
curl -s "${ENDPOINT}openai/v1/responses" \
-H "Content-Type: application/json" \
-H "api-key: ${KEY}" \
-d '{
"model": "gpt-41-mini",
"instructions": "You are a DevOps engineer assistant. Be concise.",
"input": "Our ECS task keeps failing. Here is the error from CloudWatch:\n\nResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr.us-east-1.amazonaws.com/: dial tcp 10.0.1.42:443: i/o timeout"
}'
Response (extracted with jq '{id, object, status, output: [.output[] | {type, content}]}'):
{
"id": "resp_087784ee8eba51a60069f0bed6d29c819585e6d3c2530c9844",
"object": "response",
"status": "completed",
"output": [
{
"type": "message",
"content": [
{
"type": "output_text",
"text": "The ECS task is failing because it cannot retrieve ECR registry auth due to a network timeout when trying to reach the ECR endpoint.\n\n**Likely causes:**\n1. The ECS task's subnet does not have proper internet access or NAT gateway configured.\n2. Security group or NACL blocking outbound HTTPS (port 443).\n3. Missing or incorrect VPC endpoint for ECR if running in a private subnet without internet.\n\n**Actions to fix:**\n- Ensure the ECS tasks are in subnets with NAT Gateway or Internet Gateway for outbound HTTPS access.\n- Check and allow outbound port 443 in Security Groups and NACLs.\n- Alternatively, configure VPC Endpoints for ECR (`com.amazonaws.us-east-1.ecr.api` and `com.amazonaws.us-east-1.ecr.dkr`) and for S3 if needed to avoid internet dependency."
}
]
}
]
}
In Turn 2, we follow up with previous_response_id. We don't resend the error logs.
curl -s "${ENDPOINT}openai/v1/responses" \
-H "Content-Type: application/json" \
-H "api-key: ${KEY}" \
-d '{
"model": "gpt-41-mini",
"input": "What VPC networking changes should I check? Give me the AWS CLI commands.",
"previous_response_id": "resp_087784ee8eba51a60069f0bed6d29c819585e6d3c2530c9844"
}'
The response included VPC troubleshooting procedures and AWS CLI commands (aws ec2 describe-route-tables, aws ec2 describe-nat-gateways, aws ec2 describe-security-groups, aws ec2 describe-vpc-endpoints, etc.) based on the context from Turn 1 (connection timeout to ECR endpoint).
Bedrock Mantle
We tried the same scenario on Bedrock.
# Turn 1
curl -s --aws-sigv4 "aws:amz:us-east-1:bedrock-mantle" \
--user "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" \
-H "x-amz-security-token: ${AWS_SESSION_TOKEN}" \
-H "Content-Type: application/json" \
-X POST "https://bedrock-mantle.us-east-1.api.aws/v1/responses" \
-d '{
"model": "openai.gpt-oss-120b",
"instructions": "You are a DevOps engineer assistant. Be concise.",
"input": "Our ECS task keeps failing. Here is the error from CloudWatch:\n\nResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr.us-east-1.amazonaws.com/: dial tcp 10.0.1.42:443: i/o timeout"
}'
Response (extracted with the same jq filter, omitting the reasoning type):
{
"id": "resp_pbknkjxqs4ekzxoujxftabh2msmflyzyb4yfge4rs3zg3gazjtjq",
"object": "response",
"status": "completed",
"output": [
{
"type": "message",
"content": [
{
"type": "output_text",
"text": "## TL;DR\n\nThe error means the task's execution role cannot reach the Amazon ECR public endpoint (or the VPC endpoint you've set up for it). In practice this is almost always a networking problem (security group, subnet routing, NAT/Internet gateway, VPC endpoint, DNS, or a mis-configured task execution role).\n\n..."
}
]
}
]
}
# Turn 2: with previous_response_id
curl -s --aws-sigv4 "aws:amz:us-east-1:bedrock-mantle" \
--user "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" \
-H "x-amz-security-token: ${AWS_SESSION_TOKEN}" \
-H "Content-Type: application/json" \
-X POST "https://bedrock-mantle.us-east-1.api.aws/v1/responses" \
-d '{
"model": "openai.gpt-oss-120b",
"input": "What VPC networking changes should I check? Give me the AWS CLI commands.",
"previous_response_id": "resp_pbknkjxqs4ekzxoujxftabh2msmflyzyb4yfge4rs3zg3gazjtjq"
}'
On Bedrock as well, VPC troubleshooting procedures and AWS CLI commands were returned based on the error log from Turn 1.
Compatibility Check
Comparing the two responses, the top-level JSON structure (id, object, status, output[]) follows the same format, and the hierarchy of the output array (message → content → output_text) is identical. Bedrock responses additionally include an output item of type reasoning, but the message item structure is the same as Azure's.
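Because the message structure matched on both platforms, a single extraction helper can serve both. A minimal sketch that collects the output_text parts and silently skips other item types such as Bedrock's reasoning entries:

```python
def extract_output_text(response: dict) -> str:
    # Gather output_text parts from message items; ignore other item
    # types (e.g. Bedrock's "reasoning" entries) so the same helper
    # works against both Azure and Bedrock responses.
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content.get("text", ""))
    return "\n".join(parts)

sample = {
    "id": "resp_example_id",
    "object": "response",
    "status": "completed",
    "output": [
        {"type": "reasoning", "content": []},  # present on Bedrock only
        {"type": "message",
         "content": [{"type": "output_text", "text": "Check the NAT gateway."}]},
    ],
}
print(extract_output_text(sample))  # -> Check the NAT gateway.
```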
Since different models were used (gpt-4.1-mini vs gpt-oss-120b), we did not compare token consumption or response speed.
Tool Support Comparison
| Tool | Azure | Bedrock | Notes |
|---|---|---|---|
| `function` | ✅ | ✅ | |
| `mcp` | ✅ (server_url) | ⚠️ (connector_id) | Different connection methods* |
| `web_search` | ✅ | ❌ | |
| `code_interpreter` | ✅ | ❌ | Azure executes code and returns results |
| `file_search` | ✅ | ❌ | |
*MCP is accepted by both, but Azure connects directly via server_url while Bedrock requires connector_id for connection through AWS connectors.
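The difference can be illustrated with the two tool payload shapes. This is illustrative only: the distinguishing fields (server_url vs. connector_id) follow the observation above, while the label and all values are made-up placeholders:

```python
# Illustrative payload shapes only; every value here is a placeholder.

# Azure: connects directly to the MCP server over HTTP.
azure_mcp_tool = {
    "type": "mcp",
    "server_label": "my-mcp",                 # hypothetical label
    "server_url": "https://example.com/mcp",  # direct connection
}

# Bedrock: references a connector provisioned on the AWS side.
bedrock_mcp_tool = {
    "type": "mcp",
    "server_label": "my-mcp",
    "connector_id": "placeholder-connector-id",  # AWS connector reference
}
```

Because only these fields differ, an application targeting both platforms needs a small branch when assembling the tools array.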
function tool — works on both
We sent requests with the same function definition to both platforms.
# Azure
curl -s "${ENDPOINT}openai/v1/responses" \
-H "Content-Type: application/json" \
-H "api-key: ${KEY}" \
-d '{
"model": "gpt-41-mini",
"input": "What is the weather in Tokyo?",
"tools": [{"type": "function", "name": "get_weather", "description": "Get weather", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}}]
}'
We sent the same payload to Bedrock (with only the model name, endpoint, and authentication method changed), and both returned a function_call. The output structure was identical.
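On the client side, the returned function_call item has to be executed locally and its result sent back. A sketch of that handler, assuming the standard OpenAI Responses API convention of replying with function_call_output items keyed by call_id (get_weather is a stand-in implementation):

```python
import json

def handle_function_calls(response: dict, functions: dict) -> list:
    # Turn each function_call item in the response into a
    # function_call_output item for the follow-up request.
    outputs = []
    for item in response.get("output", []):
        if item.get("type") != "function_call":
            continue
        fn = functions[item["name"]]
        args = json.loads(item.get("arguments") or "{}")
        outputs.append({
            "type": "function_call_output",
            "call_id": item["call_id"],
            "output": json.dumps(fn(**args)),
        })
    return outputs

def get_weather(city: str) -> dict:
    # Stand-in implementation; a real app would call a weather API.
    return {"city": city, "forecast": "sunny"}

resp = {"output": [{"type": "function_call", "name": "get_weather",
                    "call_id": "call_1", "arguments": "{\"city\": \"Tokyo\"}"}]}
results = handle_function_calls(resp, {"get_weather": get_weather})
```

Since the function_call output structure was identical on both platforms, this handler should work unchanged against either.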
code_interpreter tool — Azure only
On Azure, you can execute Python code on the server side.
curl -s "${ENDPOINT}openai/v1/responses" \
-H "Content-Type: application/json" \
-H "api-key: ${KEY}" \
-d '{
"model": "gpt-41-mini",
"input": "Calculate the first 10 Fibonacci numbers using Python.",
"tools": [{"type": "code_interpreter", "container": {"type": "auto"}}]
}'
The output in the response included a code_interpreter_call type, returning the executed code and results.
{
"output": [
{"type": "message", "content": [{"type": "output_text", "text": "Sure! Here's a Python code snippet..."}]},
{"type": "code_interpreter_call", "code": "def fibonacci(n):\n fib = [0, 1]\n ..."},
{"type": "message", "content": [{"type": "output_text", "text": "The first 10 Fibonacci numbers are:\n\n0, 1, 1, 2, 3, 5, 8, 13, 21, 34"}]}
]
}
Bedrock's unsupported tools
When specifying web_search, code_interpreter, or file_search on Bedrock, each request was rejected with a deserialization error of the same form:
Invalid 'tools': unknown variant `web_search`, expected `function` or `mcp`
Error messages for each tool
web_search:
Failed to deserialize the JSON body into the target type: ?[0]: Invalid 'tools': unknown variant `web_search`, expected `function` or `mcp` at line 4 column 37
code_interpreter:
Failed to deserialize the JSON body into the target type: ?[0]: Invalid 'tools': unknown variant `code_interpreter`, expected `function` or `mcp` at line 4 column 74
file_search:
Failed to deserialize the JSON body into the target type: ?[0]: Invalid 'tools': unknown variant `file_search`, expected `function` or `mcp` at line 4 column 69
Bedrock Mantle only supports two tool types: function and mcp.
Summary
In this article, we compared the Responses API of Azure AI Foundry and Bedrock Mantle using the same tests. As preparation for the upcoming availability of GPT-5 series models on Bedrock, here's what we've learned so far:
- ✅ **Compatible response structure** — The JSON structure with `id`, `object`, `status`, `output[]` is identical. Existing client code can be reused with just endpoint and model name changes
- ✅ **`previous_response_id` works on both** — Server-side conversation history retention functions on both Azure and Bedrock
- ⚠️ **Server-side tools on Azure only** — `web_search`, `code_interpreter`, `file_search` are not yet supported on Bedrock. Tool layer abstraction is needed on the application side
- ⏳ **Waiting for expanded model support** — Bedrock currently only supports two gpt-oss models. Once the GPT-5 series becomes available, comparisons using identical models will be possible
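One simple way to absorb the tool gap at the application layer is to filter the requested tools against each platform's supported set before sending the request. A sketch, with the supported sets taken from the tests above:

```python
# Tool-layer abstraction sketch: drop tools a platform can't handle,
# based on the support matrix observed in this article's tests.
SUPPORTED_TOOLS = {
    "azure":   {"function", "mcp", "web_search", "code_interpreter", "file_search"},
    "bedrock": {"function", "mcp"},
}

def filter_tools(platform: str, tools: list) -> list:
    # Keep only tool types the target platform accepts; anything else
    # would make Bedrock reject the whole request with a 4xx error.
    allowed = SUPPORTED_TOOLS[platform]
    return [t for t in tools if t.get("type") in allowed]

tools = [
    {"type": "function", "name": "get_weather"},
    {"type": "web_search"},
]
print([t["type"] for t in filter_tools("bedrock", tools)])  # -> ['function']
```

A fuller abstraction would also emulate the dropped tools client-side (e.g. running a local search and feeding results back as `function` calls), but silent filtering at least keeps requests portable.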
These gaps (server-side tools and proprietary model support) are expected to be addressed in the Stateful Runtime Environment. We plan to conduct further testing with identical models once they become available.
References
- Testing Codex CLI v0.124.0's Amazon Bedrock Support (Part 1)
- Amazon Bedrock now supports OpenAI's Responses API (by Takakuni-san)
- Bedrock Mantle documentation - Generate responses using OpenAI APIs
- Use the Azure OpenAI Responses API
- The Responses API in Azure AI Foundry is now generally available
- Introducing the Stateful Runtime Environment for Agents in Amazon Bedrock
- OpenAI and Amazon announce strategic partnership