I tried Amazon Bedrock support for Codex CLI v0.124.0

Codex CLI v0.124.0 now natively supports Amazon Bedrock. Through Bedrock Mantle's Responses API, I was able to use the coding agent with AWS authentication only, without requiring an OpenAI API key.
2026.04.25

Next week, on April 28th, AWS and OpenAI are holding the "What's Next with AWS" event.

https://aws.amazon.com/events/whats-next-with-aws/

Since the strategic partnership between OpenAI and Amazon announced in February 2026 (including up to $50 billion in investment and Bedrock integration), collaboration between AWS and OpenAI has been accelerating. When Opus 4.7 was released on April 17th, I also investigated Bedrock Mantle's Anthropic-compatible endpoint.

https://dev.classmethod.jp/articles/bedrock-mantle-anthropic-endpoint-opus-4-7/

While tracking developments around Mantle, I found the following entry in the changelog of OpenAI Codex CLI v0.124.0, released on April 23rd.

https://github.com/openai/codex/releases/tag/rust-v0.124.0

Added first-class Amazon Bedrock support for OpenAI-compatible providers, including AWS SigV4 signing and AWS credential-based auth. (#17820)

Having discovered this release a week before the event, I'll share my results using Codex CLI with Mantle's OpenAI-compatible Responses API.

Bedrock Mantle API Structure

Bedrock Mantle provides inference endpoints on Amazon Bedrock that are compatible with OpenAI and Anthropic API specifications. You can connect to models via Bedrock by simply changing the base URL in existing OpenAI SDK or Anthropic SDK code.

Currently, Mantle provides three APIs:

| API | Format | Purpose |
|---|---|---|
| Responses API (/v1/responses) | OpenAI compatible | Stateful conversations (recommended) |
| Chat Completions API (/v1/chat/completions) | OpenAI compatible | Stateless multi-turn chat |
| Messages API (/anthropic/v1/messages) | Anthropic compatible | Anthropic native interface |

https://docs.aws.amazon.com/bedrock/latest/userguide/apis.html

In my previous article, I tested the Messages API. Since Codex CLI uses the Responses API, I'll be testing this one now.
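Before moving on, here is a minimal sketch of what a Responses API request body looks like. The endpoint URL and model ID come from this article; the payload shape is my assumption based on the OpenAI Responses API schema, shown here without actually sending the request.

```python
import json

# Assumed Mantle endpoint from this article (us-east-1).
MANTLE_RESPONSES_URL = "https://bedrock-mantle.us-east-1.api.aws/v1/responses"

def build_responses_payload(model: str, prompt: str) -> str:
    """Serialize a minimal /v1/responses request body (sketch, not sent)."""
    body = {
        "model": model,
        "input": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

payload = build_responses_payload("openai.gpt-oss-120b", "Say hello in Japanese.")
print(payload)
```

An actual request would additionally need a SigV4 signature over this body, which is what Codex CLI's new Bedrock provider handles for you.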

Installing Codex CLI

You can download Codex CLI binaries from GitHub Releases. Node.js is not required, and it works as a single binary.

curl -sL "https://github.com/openai/codex/releases/download/rust-v0.124.0/codex-aarch64-unknown-linux-musl.tar.gz" -o codex.tar.gz
tar xzf codex.tar.gz
mv codex-aarch64-unknown-linux-musl ~/.local/bin/codex
chmod +x ~/.local/bin/codex
$ codex --version
codex-cli 0.124.0

Installation via npm and Homebrew is also supported. For details, refer to the release page.

Configuring Bedrock Connection

config.toml

Add the following to ~/.codex/config.toml:

model = "openai.gpt-oss-120b"
model_provider = "amazon-bedrock"
web_search = "disabled"

[model_providers.amazon-bedrock]
[model_providers.amazon-bedrock.aws]
region = "us-east-1"

The key points are:

  • Specify the built-in Bedrock provider with model_provider = "amazon-bedrock"
  • Disable web search tools with web_search = "disabled" (explained below)
  • Specify the Mantle region with aws.region (default is us-east-1)

Disabling web_search is required

Without setting web_search = "disabled", the following error occurs:

ERROR: {"error":{"code":"validation_error","message":"Failed to deserialize the JSON body into the target type: ?[4]: Invalid 'tools': unknown variant `web_search`, expected `function` or `mcp` at line 1 column 31513","param":null,"type":"invalid_request_error"}}

By default, Codex CLI includes the web_search tool in requests, but Bedrock Mantle only supports function and mcp types. Adding web_search = "disabled" resolved this issue.

Passing AWS credentials

AWS credentials must be passed via environment variables. Using only the profile in ~/.aws/credentials results in the following error:

ERROR: stream disconnected before completion: failed to load AWS credentials: the credentials provider was not properly configured

This was resolved by exporting credentials to environment variables with aws configure export-credentials:

eval "$(aws configure export-credentials --format env)"
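If you are launching Codex CLI from a wrapper script rather than an interactive shell, the same `export VAR=value` lines that eval consumes can be parsed into a dict and passed as a subprocess environment. This is a sketch; the sample output is illustrative, not real credentials.

```python
import re

# Illustrative output of `aws configure export-credentials --format env`
# (placeholder values, not real credentials).
SAMPLE = """\
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=secretEXAMPLE
export AWS_SESSION_TOKEN=tokenEXAMPLE
"""

def parse_export_env(text: str) -> dict[str, str]:
    """Turn `export NAME=value` lines into a {NAME: value} dict."""
    creds = {}
    for line in text.splitlines():
        m = re.match(r"export\s+([A-Z_]+)=(.*)", line.strip())
        if m:
            creds[m.group(1)] = m.group(2)
    return creds

creds = parse_export_env(SAMPLE)
print(sorted(creds))  # → ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_SESSION_TOKEN']
```

The resulting dict can be merged into `os.environ` before spawning `codex`, which is equivalent to the `eval` one-liner above.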

Testing

With the configuration complete, I tested it using codex exec:

codex exec --skip-git-repo-check "Say hello in Japanese. Reply in one short sentence only."
OpenAI Codex v0.124.0 (research preview)
--------
workdir: /tmp/codex-test
model: openai.gpt-oss-120b
provider: amazon-bedrock
approval: never
sandbox: read-only
reasoning effort: none
reasoning summaries: none
--------
user
Say hello in Japanese. Reply in one short sentence only.
codex
こんにちは。
tokens used
8,841
こんにちは。

It worked! The output shows provider: amazon-bedrock, confirming that the inference is being performed through Bedrock Mantle.

I also measured the response speed for code generation:

codex exec --skip-git-repo-check "Write FizzBuzz in Python. Code only, no explanation."
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)

Simple code generation tasks responded within 2-3 seconds, which is practical for everyday use.

| Model | Task | Response Time |
|---|---|---|
| openai.gpt-oss-120b | FizzBuzz (Python) | ~2.2 seconds |
| openai.gpt-oss-20b | FizzBuzz (Python) | ~3.1 seconds |
| openai.gpt-oss-120b | CSV aggregation→JSON output function | ~3.9 seconds |
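For reference, the timings above can be reproduced by wall-clocking the whole `codex exec` invocation. This is a generic timing helper (my own sketch, not how the article's numbers were necessarily collected), demonstrated with `echo` as a stand-in command.

```python
import subprocess
import time

def time_command(cmd: list[str]) -> float:
    """Run a command to completion and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# Stand-in for: time_command(["codex", "exec", "--skip-git-repo-check", "..."])
elapsed = time_command(["echo", "hello"])
print(f"{elapsed:.2f}s")
```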

Available Models on Mantle and Responses API Support

I retrieved the list of available models from Mantle's /v1/models endpoint:

eval "$(aws configure export-credentials --format env)"
curl -s --aws-sigv4 "aws:amz:us-east-1:bedrock-mantle" \
  --user "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" \
  -H "x-amz-security-token: ${AWS_SESSION_TOKEN}" \
  "https://bedrock-mantle.us-east-1.api.aws/v1/models"

Note that the service name for SigV4 signing is bedrock-mantle instead of the usual bedrock-runtime.
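The service name matters because it feeds into the SigV4 signing-key derivation. The following sketch shows the standard SigV4 key-derivation HMAC chain with "bedrock-mantle" in the service slot, matching the `--aws-sigv4` string in the curl command; the secret key and date are placeholders.

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: HMAC chain over date, region, service."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Placeholder secret and date; the service name is the point here.
key = sigv4_signing_key("secretEXAMPLE", "20260425", "us-east-1", "bedrock-mantle")
print(key.hex())
```

Signing the same request with the usual bedrock-runtime service name yields a different key, so the signature would fail verification against the Mantle endpoint.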

39 models were listed.

Model List (All 39 models)
anthropic.claude-opus-4-7
deepseek.v3.1
deepseek.v3.2
google.gemma-3-12b-it
google.gemma-3-27b-it
google.gemma-3-4b-it
minimax.minimax-m2
minimax.minimax-m2.1
minimax.minimax-m2.5
mistral.devstral-2-123b
mistral.magistral-small-2509
mistral.ministral-3-14b-instruct
mistral.ministral-3-3b-instruct
mistral.ministral-3-8b-instruct
mistral.mistral-large-3-675b-instruct
mistral.voxtral-mini-3b-2507
mistral.voxtral-small-24b-2507
moonshotai.kimi-k2-thinking
moonshotai.kimi-k2.5
nvidia.nemotron-nano-12b-v2
nvidia.nemotron-nano-3-30b
nvidia.nemotron-nano-9b-v2
nvidia.nemotron-super-3-120b
openai.gpt-oss-120b
openai.gpt-oss-20b
openai.gpt-oss-safeguard-120b
openai.gpt-oss-safeguard-20b
qwen.qwen3-235b-a22b-2507
qwen.qwen3-32b
qwen.qwen3-coder-30b-a3b-instruct
qwen.qwen3-coder-480b-a35b-instruct
qwen.qwen3-coder-next
qwen.qwen3-next-80b-a3b-instruct
qwen.qwen3-vl-235b-a22b-instruct
writer.palmyra-vision-7b
zai.glm-4.6
zai.glm-4.7
zai.glm-4.7-flash
zai.glm-5

The list includes models from various providers such as OpenAI, Anthropic, DeepSeek, Mistral, Qwen, NVIDIA, and Google.

Responses API Compatibility Testing

Codex CLI uses the Responses API (/v1/responses). I tested each listed model to see if it's compatible with the Responses API via Codex CLI.

Working models:

| Model | Response |
|---|---|
| openai.gpt-oss-120b | ✅ Works normally |
| openai.gpt-oss-20b | ✅ Works normally |

Models not supporting the Responses API:

All other models returned the following error:

ERROR: {"error":{"code":"validation_error","message":"The model 'deepseek.v3.1' does not support the '/v1/responses' API","param":null,"type":"invalid_request_error"}}

Even anthropic.claude-opus-4-7 doesn't support the Responses API. As mentioned in my previous article, Opus 4.7 is only available through the Messages API (/anthropic/v1/messages).

Although 39 models were listed, only openai.gpt-oss-120b and openai.gpt-oss-20b support the Responses API. Note that Mantle's documentation only mentions openai.gpt-oss-120b, but openai.gpt-oss-20b also works.
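When probing models like this, the supported/unsupported distinction can be made mechanically from the error body. This sketch classifies a response by the exact validation_error message shown above (the substring match is my assumption about how stable that message is).

```python
import json

def supports_responses_api(body: str) -> bool:
    """Classify a Mantle response body: False if the model rejects /v1/responses."""
    data = json.loads(body)
    err = data.get("error")
    if err and "does not support the '/v1/responses' API" in err.get("message", ""):
        return False
    return True

# The error body observed for deepseek.v3.1 in the test above.
error_body = json.dumps({"error": {
    "code": "validation_error",
    "message": "The model 'deepseek.v3.1' does not support the '/v1/responses' API",
    "param": None, "type": "invalid_request_error"}})
print(supports_responses_api(error_body))  # → False
```

Looping this check over the 39 model IDs from /v1/models is how the two-model result above can be reproduced.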

Conclusion

In this article, I connected to Bedrock Mantle's Responses API using Codex CLI v0.124.0, enabling the use of OpenAI's coding agent with just AWS authentication credentials, without needing an OpenAI API key.

From an enterprise perspective, access control via AWS IAM and consolidating usage costs into AWS billing are attractive features. There's potential to use existing AWS governance mechanisms like IAM policies with bedrock:InvokeModel and resource tags for cost allocation.

However, at the time of writing, only 2 out of 39 models support the Responses API, which is quite limited.

Nevertheless, as with the previously covered ability to use Claude Desktop on Bedrock's pay-as-you-go pricing, AI tools built on the Bedrock ecosystem are steadily increasing.

https://dev.classmethod.jp/articles/amazon-bedrock-claude-desktop-cowork-3p-inference/

This Codex CLI support is part of that trend. Future developments like expanded model support will be worth watching, and the official page for next week's event promises "Be the first to see new agentic solutions and platform capabilities," which sounds intriguing.

https://aws.amazon.com/events/whats-next-with-aws/
