# I tried calling Opus 4.7 from LangChain using Bedrock's Anthropic-compatible endpoint (Mantle)
Claude Opus 4.7, released today, was introduced on the AWS official blog together with a client called `AnthropicBedrockMantle`:

```python
from anthropic import AnthropicBedrockMantle

client = AnthropicBedrockMantle(aws_region=REGION)
```
The Mantle documentation mentions only OpenAI-compatible APIs (Chat Completions / Responses), so I was curious where this client actually connects and investigated.
## Identifying the endpoint from debug logs
I enabled debug logging with AnthropicBedrockMantle and made a call.
```shell
pip install "anthropic[bedrock]"
```

```python
import logging

logging.basicConfig(level=logging.DEBUG)

from anthropic import AnthropicBedrockMantle

client = AnthropicBedrockMantle(aws_region="ap-northeast-1")

response = client.messages.create(
    model="anthropic.claude-opus-4-7",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content[0].text)
```
It worked fine, and the logs output the following lines:
```
DEBUG:anthropic._base_client:Sending HTTP Request: POST https://bedrock-mantle.ap-northeast-1.api.aws/anthropic/v1/messages
INFO:httpx:HTTP Request: POST https://bedrock-mantle.ap-northeast-1.api.aws/anthropic/v1/messages "HTTP/1.1 200 OK"
```
The `bedrock-mantle` endpoint exposes a `/anthropic/v1/messages` path. This is an Anthropic Messages API-compatible endpoint, distinct from the OpenAI-compatible paths (`/v1/chat/completions`, `/v1/responses`) mentioned in the Mantle documentation.
## Path compatibility status
Here are the results of testing each path with Opus 4.7:
| API Path | Format | Opus 4.7 |
|---|---|---|
| `/v1/models` | OpenAI compatible | ✅ Listed |
| `/v1/chat/completions` | OpenAI compatible | ❌ "This model is not supported" |
| `/v1/responses` | OpenAI compatible | ❌ "This model is not supported" |
| `/anthropic/v1/messages` | Anthropic compatible | ✅ Responds normally |
Currently, only Opus 4.7 supports this Anthropic-compatible path. Opus 4.6 and Sonnet 4.6 do not work with it.
## Testing directly with curl
Now that I know the endpoint, I confirmed it with curl. The SigV4 signature service name is `bedrock-mantle`.
```shell
curl -s -X POST \
  "https://bedrock-mantle.ap-northeast-1.api.aws/anthropic/v1/messages" \
  --aws-sigv4 "aws:amz:ap-northeast-1:bedrock-mantle" \
  --user "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" \
  -H "x-amz-security-token: ${AWS_SESSION_TOKEN}" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "anthropic.claude-opus-4-7",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
```json
{
  "model": "claude-opus-4-7",
  "content": [
    {
      "type": "text",
      "text": "Hello! How can I help you today?"
    }
  ],
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 24,
    "output_tokens": 15,
    "service_tier": "standard"
  }
}
```
Note that the `anthropic-version` header is required.
## Using the plain Anthropic SDK with short-term Bedrock API keys
Here's the main point: since the endpoint is Anthropic Messages API-compatible, I tried using the plain Anthropic client with the `base_url` replaced.
For authentication, I used a short-term Bedrock API key. These can be issued with `aws-bedrock-token-generator` and are valid for up to 12 hours. You don't need to create an IAM user; keys can be generated from your current credentials.
```shell
pip install aws-bedrock-token-generator
```

```python
import anthropic
from aws_bedrock_token_generator import provide_token

token = provide_token(region="ap-northeast-1")

client = anthropic.Anthropic(
    base_url="https://bedrock-mantle.ap-northeast-1.api.aws/anthropic",
    api_key=token,
)

response = client.messages.create(
    model="anthropic.claude-opus-4-7",
    max_tokens=64,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content[0].text)
print(f"model: {response.model}")
print(f"usage: input={response.usage.input_tokens}, output={response.usage.output_tokens}")
```
```
Hello! How can I help you today?
model: claude-opus-4-7
usage: input=24, output=15
```
It worked! I was able to call Opus 4.7 through Bedrock using the plain Anthropic SDK with the same user experience as the Anthropic API. The key points are:
- Set `base_url` to `https://bedrock-mantle.{region}.api.aws/anthropic`
- Pass the short-term Bedrock API key to `api_key`
- Everything else works the same as the normal Anthropic API
This opens the possibility of using existing code and tools built for the Anthropic SDK within Bedrock's secure environment.
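One practical wrinkle: the short-term keys expire after at most 12 hours, so a long-running process needs to refresh them. A minimal caching sketch (the class name and the refresh margin are my own; `fetch` would wrap something like `lambda: provide_token(region="ap-northeast-1")`):

```python
import time


class CachedBedrockToken:
    """Cache a short-term token and re-fetch it before it expires.

    `fetch` is any zero-argument callable returning a fresh token.
    The 11-hour default TTL is an assumption, chosen to stay safely
    inside the documented 12-hour validity window.
    """

    def __init__(self, fetch, ttl_seconds=11 * 3600):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._token = None
        self._fetched_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._token is None or now - self._fetched_at >= self._ttl:
            # Token missing or stale: fetch a fresh one and remember when.
            self._token = self._fetch()
            self._fetched_at = now
        return self._token
```

In practice you would call `.get()` each time you construct a client, rebuilding the client (or the `api_key`) whenever the token rolls over.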
## Confirmed working with LangChain
It also works with `ChatAnthropic` from `langchain-anthropic` by simply replacing `anthropic_api_url`.
```shell
pip install langchain-anthropic
```

```python
from aws_bedrock_token_generator import provide_token
from langchain_anthropic import ChatAnthropic

token = provide_token(region="ap-northeast-1")

llm = ChatAnthropic(
    model="anthropic.claude-opus-4-7",
    anthropic_api_key=token,
    anthropic_api_url="https://bedrock-mantle.ap-northeast-1.api.aws/anthropic",
    max_tokens=64,
)

response = llm.invoke("Amazon Bedrockとは?1行で。")  # "What is Amazon Bedrock? In one line."
print(response.content)
```
```
Amazon Bedrockとは、AWSが提供する、複数の基盤モデル(FM)をAPI経由で利用できるフルマネージド型の生成AIサービスです。
```

(Translation: "Amazon Bedrock is a fully managed generative AI service provided by AWS that makes multiple foundation models (FMs) available via an API.")
I confirmed that ecosystems based on the Anthropic SDK can use Bedrock models just by replacing the endpoint.
## Conclusion
Bedrock Mantle has added an Anthropic Messages API-compatible endpoint (`/anthropic/v1/messages`).
Until now, Mantle provided only OpenAI-compatible APIs (Chat Completions / Responses). Given the strategic partnership between OpenAI and Amazon announced in February 2026, this addition may be the start of an expansion toward bundling each vendor's native API.
Currently, there are some limitations:
- Only Opus 4.7 is supported (Opus 4.6 / Sonnet 4.6 are not)
- The `/anthropic/v1/messages` path is not yet documented in the official documentation
While it may be too early for production use, the ability to seamlessly call Bedrock models from workflows using the Anthropic SDK is very attractive. I look forward to future developments.