[Update] Amazon Bedrock now supports OpenAI's Responses API #AWSreInvent
Hello! This is Takakuni (@takakuni_) from the Consulting Department, Cloud Business Division.
At re:Invent 2025, Amazon Bedrock gained support for OpenAI's Responses API.
The year is almost over, but I'd like to walk through this update.
Responses API
The Responses API is one of the APIs OpenAI provides and is available through the OpenAI SDK.
When working with text generation models via the OpenAI SDK, there are two options: the Chat Completions API and the Responses API. The Responses API is the newer of the two.
OpenAI even publishes a migration guide from the Chat Completions API to the Responses API, as shown below. (Takes me back!)
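To make the difference concrete, here is an illustrative comparison of the two call shapes in the plain OpenAI SDK. This is a minimal sketch only; the model name is just an example and is unrelated to this update.

from openai import OpenAI

client = OpenAI()

# Older shape: Chat Completions API
chat = client.chat.completions.create(
    model="gpt-4o-mini",  # example model, for illustration only
    messages=[{"role": "user", "content": "Hello!"}],
)
print(chat.choices[0].message.content)

# Newer shape: Responses API
resp = client.responses.create(
    model="gpt-4o-mini",  # example model, for illustration only
    input="Hello!",
)
print(resp.output_text)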
What's in the update
This update is, in short, "Amazon Bedrock now supports OpenAI's Responses API."
I didn't know this myself, but it was already possible to call OpenAI's open-weight models (hosted on Amazon Bedrock) from the OpenAI SDK by using an Amazon Bedrock API key.
However, the only API you could call that way was the Chat Completions API mentioned above.
from openai import OpenAI

client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1",
    api_key="$AWS_BEARER_TOKEN_BEDROCK"  # Replace with actual API key
)

completion = client.chat.completions.create(
    model="openai.gpt-oss-20b-1:0",
    messages=[
        {
            "role": "developer",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ]
)

print(completion.choices[0].message)
With this update, the Responses API is now supported as well.
As of today, the supported models are gpt-oss-20b and gpt-oss-120b, and support for other models is reportedly coming soon.
Responses API support is available today starting with OpenAI's GPT OSS 20B/120B models, with support for other models coming soon.
Amazon Bedrock now supports Responses API from OpenAI
Project Mantle
The approach introduced above, calling the Chat Completions API with a Bedrock API key, went through the Amazon Bedrock Runtime endpoint.
- Make an HTTP request with an Amazon Bedrock Runtime endpoint.
- Use an OpenAI SDK request with an Amazon Bedrock Runtime endpoint.
There isn't much public information yet, but Mantle has been announced as a new distributed inference engine on Amazon Bedrock.
In this update, OpenAI-compatible API endpoints are served through Mantle.
Amazon Bedrock provides OpenAI compatible API endpoints for model inference, powered by Mantle, a distributed inference engine for large-scale machine learning model serving.
Generate responses using OpenAI APIs
Incidentally, the endpoints provided by Project Mantle support both the Chat Completions API and the Responses API. Nice.
Overview
Below is a quick run-through of the key points from the documentation.
Authentication methods
First of all, there are two ways to call the Responses API: sending requests over plain HTTP, or going through the OpenAI SDK.
Next, Amazon Bedrock can be used with either Amazon Bedrock API keys or AWS IAM.
On top of that, an Amazon Bedrock API key is required when using the OpenAI SDK, while AWS IAM credentials are supported for HTTP requests.
Environment variables
Using the following environment variables keeps you from hard-coding credentials and endpoints in your source; use them liberally (see the sketch after this list).
- OPENAI_API_KEY
- Set to your Amazon Bedrock API key
- OPENAI_BASE_URL
- Set to the Amazon Bedrock endpoint for your region (for example, https://bedrock-mantle.us-east-1.api.aws)
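A minimal sketch of how these variables are consumed: the OpenAI Python SDK reads OPENAI_API_KEY and OPENAI_BASE_URL from the environment, so a bare OpenAI() is equivalent to passing them explicitly (the values referenced below are placeholders).

import os
from openai import OpenAI

# Both forms are equivalent once the environment variables are exported.
client = OpenAI()  # reads OPENAI_API_KEY and OPENAI_BASE_URL automatically

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],    # your Amazon Bedrock API key
    base_url=os.environ["OPENAI_BASE_URL"],  # e.g. https://bedrock-mantle.us-east-1.api.aws/v1
)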
Trying it out
Let's give it a try right away.
gpt-oss-20b and gpt-oss-120b are available in the N. Virginia, Ohio, and Oregon Regions, so I went with Oregon this time.
I'll try the following three access methods.
- Short-term Amazon Bedrock API key
- Long-term Amazon Bedrock API key
- AWS IAM
Via the OpenAI SDK
Registering the endpoint
To use models through Mantle, you need to point requests at a different URL, which means setting the OPENAI_BASE_URL environment variable.
Since I'm using the Oregon Region, I set the variable as follows. Don't forget the trailing /v1.
export OPENAI_BASE_URL=https://bedrock-mantle.us-west-2.api.aws/v1
Check Supported Regions and Endpoints for the available Regions and endpoint names.
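If you prefer not to rely on the environment variable, the endpoint can also be passed directly when constructing the client. This is just an equivalent sketch, assuming OPENAI_API_KEY is already exported.

from openai import OpenAI

# Equivalent to exporting OPENAI_BASE_URL; the API key still comes from
# the OPENAI_API_KEY environment variable.
client = OpenAI(
    base_url="https://bedrock-mantle.us-west-2.api.aws/v1",
)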
Models API
Before running inference, let's use the Models API to list the models available through Mantle.
# List all available models using the OpenAI SDK
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI

client = OpenAI()

models = client.models.list()
for model in models.data:
    print(model.id)
Short-term Amazon Bedrock API key
Let's start with a short-term API key.
In addition to OPENAI_BASE_URL, set AWS_REGION and OPENAI_API_KEY.
export AWS_REGION=us-west-2
export OPENAI_API_KEY=bedrock-api-key-hoge # Obtained from the Amazon Bedrock console
export OPENAI_BASE_URL=https://bedrock-mantle.us-west-2.api.aws/v1
It ran without any issues!
takakuni@ short-term % uv run models.py
mistral.magistral-small-2509
deepseek.v3.1
google.gemma-3-27b-it
moonshotai.kimi-k2-thinking
openai.gpt-oss-safeguard-120b
nvidia.nemotron-nano-3-30b
qwen.qwen3-coder-30b-a3b-instruct
openai.gpt-oss-120b
qwen.qwen3-next-80b-a3b-instruct
mistral.ministral-3-14b-instruct
openai.gpt-oss-20b
minimax.minimax-m2
nvidia.nemotron-nano-9b-v2
google.gemma-3-4b-it
openai.gpt-oss-safeguard-20b
nvidia.nemotron-nano-12b-v2
mistral.mistral-large-3-675b-instruct
qwen.qwen3-32b
qwen.qwen3-vl-235b-a22b-instruct
google.gemma-3-12b-it
zai.glm-4.6
mistral.voxtral-small-24b-2507
qwen.qwen3-coder-480b-a35b-instruct
mistral.ministral-3-3b-instruct
mistral.voxtral-mini-3b-2507
qwen.qwen3-235b-a22b-2507
mistral.ministral-3-8b-instruct
Long-term Amazon Bedrock API key
Next up is a long-term API key. I swapped the API key as follows.
export AWS_REGION=us-west-2
+ export OPENAI_API_KEY=bedrock-api-key-hoge # Long-term key obtained from the Amazon Bedrock console
- export OPENAI_API_KEY=bedrock-api-key-hoge # Short-term key used earlier
export OPENAI_BASE_URL=https://bedrock-mantle.us-west-2.api.aws/v1
This time it errored out.
takakuni@ openai-api % uv run models.py
Traceback (most recent call last):
File "/Users/takakuni/Documents/openai-api/models.py", line 8, in <module>
models = client.models.list()
^^^^^^^^^^^^^^^^^^^^
File "/Users/takakuni/Documents/openai-api/.venv/lib/python3.12/site-packages/openai/resources/models.py", line 91, in list
return self._get_api_list(
^^^^^^^^^^^^^^^^^^^
File "/Users/takakuni/Documents/openai-api/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1308, in get_api_list
return self._request_api_list(model, page, opts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/takakuni/Documents/openai-api/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1159, in _request_api_list
return self.request(page, options, stream=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/takakuni/Documents/openai-api/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1047, in request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404
Digging into it, a new API namespace starting with bedrock-mantle appears to have been introduced.
In this case, for example, the relevant action is bedrock-mantle:ListModels.
Long-term Amazon Bedrock API keys are granted the AmazonBedrockLimitedAccess policy by default, and that policy does not include any bedrock-mantle permissions.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BedrockAPIs",
"Effect": "Allow",
"Action": [
"bedrock:Get*",
"bedrock:List*",
"bedrock:CallWithBearerToken",
"bedrock:BatchDeleteEvaluationJob",
"bedrock:CreateEvaluationJob",
"bedrock:CreateGuardrail",
"bedrock:CreateGuardrailVersion",
"bedrock:CreateInferenceProfile",
"bedrock:CreateModelCopyJob",
"bedrock:CreateModelCustomizationJob",
"bedrock:CreateModelImportJob",
"bedrock:CreateModelInvocationJob",
"bedrock:CreatePromptRouter",
"bedrock:CreateProvisionedModelThroughput",
"bedrock:DeleteCustomModel",
"bedrock:DeleteGuardrail",
"bedrock:DeleteImportedModel",
"bedrock:DeleteInferenceProfile",
"bedrock:DeletePromptRouter",
"bedrock:DeleteProvisionedModelThroughput",
"bedrock:StopEvaluationJob",
"bedrock:StopModelCustomizationJob",
"bedrock:StopModelInvocationJob",
"bedrock:TagResource",
"bedrock:UntagResource",
"bedrock:UpdateGuardrail",
"bedrock:UpdateProvisionedModelThroughput",
"bedrock:ApplyGuardrail",
"bedrock:InvokeModel",
"bedrock:InvokeModelWithResponseStream"
],
"Resource": "*"
},
{
"Sid": "DescribeKey",
"Effect": "Allow",
"Action": ["kms:DescribeKey"],
"Resource": "arn:*:kms:*:::*"
},
{
"Sid": "APIsWithAllResourceAccess",
"Effect": "Allow",
"Action": [
"iam:ListRoles",
"ec2:DescribeVpcs",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups"
],
"Resource": "*"
},
{
"Sid": "MarketplaceOperationsFromBedrockFor3pModels",
"Effect": "Allow",
"Action": [
"aws-marketplace:Subscribe",
"aws-marketplace:ViewSubscriptions",
"aws-marketplace:Unsubscribe"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:CalledViaLast": "bedrock.amazonaws.com"
}
}
}
]
}
This update also added three new AWS managed policies: AmazonBedrockMantleFullAccess, AmazonBedrockMantleReadOnly, and AmazonBedrockMantleInferenceAccess.
Let's use AmazonBedrockMantleInferenceAccess (a sketch of attaching it follows the policy document below).
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BedrockMantleInference",
"Effect": "Allow",
"Action": [
"bedrock-mantle:Get*",
"bedrock-mantle:List*",
"bedrock-mantle:CreateInference"
],
"Resource": "arn:aws:bedrock-mantle:*:*:project/*"
},
{
"Sid": "BedrockMantleCallWithBearerToken",
"Effect": "Allow",
"Action": ["bedrock-mantle:CallWithBearerToken"],
"Resource": "*"
}
]
}
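For reference, here is a hypothetical sketch of attaching that managed policy with boto3. The user name is a placeholder for the IAM user that backs your long-term Amazon Bedrock API key, and the ARN simply follows the standard arn:aws:iam::aws:policy/&lt;name&gt; pattern for AWS managed policies.

import boto3

iam = boto3.client("iam")

# Placeholder user name: substitute the IAM user associated with your
# long-term Amazon Bedrock API key.
iam.attach_user_policy(
    UserName="bedrock-long-term-key-user",
    PolicyArn="arn:aws:iam::aws:policy/AmazonBedrockMantleInferenceAccess",
)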
After granting this permission, it worked fine as well.
takakuni@ long-term % uv run models.py
qwen.qwen3-vl-235b-a22b-instruct
openai.gpt-oss-safeguard-120b
qwen.qwen3-32b
mistral.ministral-3-8b-instruct
openai.gpt-oss-safeguard-20b
mistral.voxtral-mini-3b-2507
mistral.magistral-small-2509
nvidia.nemotron-nano-3-30b
mistral.ministral-3-3b-instruct
openai.gpt-oss-20b
zai.glm-4.6
google.gemma-3-4b-it
nvidia.nemotron-nano-12b-v2
mistral.ministral-3-14b-instruct
deepseek.v3.1
google.gemma-3-27b-it
openai.gpt-oss-120b
qwen.qwen3-coder-480b-a35b-instruct
nvidia.nemotron-nano-9b-v2
google.gemma-3-12b-it
moonshotai.kimi-k2-thinking
qwen.qwen3-next-80b-a3b-instruct
qwen.qwen3-coder-30b-a3b-instruct
minimax.minimax-m2
mistral.voxtral-small-24b-2507
qwen.qwen3-235b-a22b-2507
mistral.mistral-large-3-675b-instruct
AWS IAM
According to the prerequisites in the documentation, an Amazon Bedrock API key is described as required when using the OpenAI SDK.
Amazon Bedrock API key (required for OpenAI SDK)
In contrast, AWS IAM credentials are documented as supported for HTTP requests.
AWS credentials (supported for HTTP requests)
That said, as in the article below, IAM authentication can apparently also be used from the OpenAI SDK by wiring IAM credentials into the http_client passed when constructing the client.
Here we build such an http_client and send the request through it.
import boto3
import httpx
from openai import OpenAI
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest


class AWSBedrockMantleAuth(httpx.Auth):
    def __init__(self, service: str, region: str, credentials):
        self.service = service
        self.region = region
        self.credentials = credentials

    def auth_flow(self, request: httpx.Request):
        # 1. Sign only a minimal set of headers.
        #    Including every header causes signature errors because httpx
        #    rewrites some of them on send, so keep just the essentials
        #    with normalized (lowercase) names.
        headers_to_sign = {
            'host': request.url.host,
            'x-amz-date': None,  # SigV4Auth fills this in later
        }

        # Build the request to be signed
        aws_request = AWSRequest(
            method=request.method,
            url=str(request.url),
            data=request.content,
            headers=headers_to_sign
        )

        # Sign it
        signer = SigV4Auth(self.credentials, self.service, self.region)
        signer.add_auth(aws_request)

        # Copy the headers generated by signing (Authorization, X-Amz-Date,
        # X-Amz-Security-Token) back onto the original httpx request
        for key, value in aws_request.headers.items():
            request.headers[key] = value

        yield request


# --- Configuration ---
region = "us-west-2"
session = boto3.Session()
credentials = session.get_credentials().get_frozen_credentials()

client = OpenAI(
    api_key="dummy",
    http_client=httpx.Client(
        auth=AWSBedrockMantleAuth("bedrock-mantle", region, credentials)
    )
)

models = client.models.list()
for model in models.data:
    print(model.id)
This one also goes through without a problem.
takakuni@ iam % uv run models.py
openai.gpt-oss-20b
google.gemma-3-27b-it
mistral.voxtral-mini-3b-2507
moonshotai.kimi-k2-thinking
qwen.qwen3-coder-30b-a3b-instruct
mistral.magistral-small-2509
google.gemma-3-4b-it
mistral.ministral-3-8b-instruct
mistral.mistral-large-3-675b-instruct
zai.glm-4.6
nvidia.nemotron-nano-9b-v2
qwen.qwen3-next-80b-a3b-instruct
qwen.qwen3-235b-a22b-2507
qwen.qwen3-coder-480b-a35b-instruct
openai.gpt-oss-120b
nvidia.nemotron-nano-12b-v2
mistral.ministral-3-3b-instruct
openai.gpt-oss-safeguard-20b
deepseek.v3.1
nvidia.nemotron-nano-3-30b
google.gemma-3-12b-it
openai.gpt-oss-safeguard-120b
minimax.minimax-m2
mistral.ministral-3-14b-instruct
qwen.qwen3-vl-235b-a22b-instruct
mistral.voxtral-small-24b-2507
qwen.qwen3-32b
Responses API
Now let's run model inference with the Responses API.
# Create a basic response using the OpenAI SDK
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="openai.gpt-oss-120b",
    input=[
        {"role": "user", "content": "Hello! How can you help me today?"}
    ]
)

# Pull the message text out of response.output
messages = [
    item for item in getattr(response, "output", [])
    if getattr(item, "type", None) == "message"
]

texts = []
for msg in messages:
    for content in getattr(msg, "content", []) or []:
        text = getattr(content, "text", None)
        if text:
            texts.append(text)

text = "\n".join(texts) if texts else None

if text:
    print(text)
else:
    # Fall back to printing the whole response just in case
    print(response)
The output varies a bit between runs, but everything executes just fine.
Short-term Amazon Bedrock API key
takakuni@ short-term % uv run responses.py
Hello! 👋 I’m here to help with almost anything you need. Some of the things I can do include:
- **Answering questions** – from quick facts to deep‑dive explanations on science, history, tech, art, and more.
- **Writing & editing** – essays, blog posts, emails, stories, poems, jokes, resumes, cover letters, etc.
- **Brainstorming & planning** – ideas for projects, gifts, events, business strategies, study schedules, travel itineraries, and so on.
- **Learning & tutoring** – step‑by‑step help with math, programming, language learning, exam prep, or any subject you’re curious about.
- **Coding assistance** – writing, debugging, or explaining code in many languages (Python, JavaScript, Java, C++, SQL, …).
- **Data & analysis guidance** – tips on working with spreadsheets, statistics, visualizations, or interpreting results.
- **Creative fun** – role‑playing, world‑building, character creation, puzzles, riddles, or just a friendly chat.
Just let me know what you’d like to work on or learn about, and we’ll dive right in! 🚀
Long-term Amazon Bedrock API key
takakuni@ long-term % uv run responses.py
Hey there! 👋 I’m here to assist with a wide range of tasks, such as:
- **Answering questions** – from quick facts to deeper explanations on science, history, tech, and more.
- **Writing & editing** – blog posts, essays, reports, creative stories, emails, resumes, cover letters, social‑media copy, etc.
- **Problem‑solving** – math, programming, logic puzzles, data‑analysis guidance, troubleshooting tech issues.
- **Learning & tutoring** – breaking down concepts, creating study guides, designing practice problems, language practice.
- **Planning & organization** – itineraries, project outlines, meeting agendas, goal‑setting frameworks, habit trackers.
- **Brainstorming** – ideas for products, marketing campaigns, gifts, events, scripts, game mechanics, etc.
- **Research assistance** – finding reliable sources, summarizing articles, comparing options.
- **Conversational support** – practicing interview answers, role‑playing scenarios, offering motivation or a friendly chat.
…and pretty much anything else that can be expressed in text.
What’s on your mind today? Let me know how I can help!
AWS IAM
import boto3
import httpx
from openai import OpenAI
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest


class AWSBedrockMantleAuth(httpx.Auth):
    def __init__(self, service: str, region: str, credentials):
        self.service = service
        self.region = region
        self.credentials = credentials

    def auth_flow(self, request: httpx.Request):
        # 1. Sign only a minimal set of headers.
        #    Including every header causes signature errors because httpx
        #    rewrites some of them on send, so keep just the essentials
        #    with normalized (lowercase) names.
        headers_to_sign = {
            'host': request.url.host,
            'x-amz-date': None,  # SigV4Auth fills this in later
        }

        # Build the request to be signed
        aws_request = AWSRequest(
            method=request.method,
            url=str(request.url),
            data=request.content,
            headers=headers_to_sign
        )

        # Sign it
        signer = SigV4Auth(self.credentials, self.service, self.region)
        signer.add_auth(aws_request)

        # Copy the headers generated by signing (Authorization, X-Amz-Date,
        # X-Amz-Security-Token) back onto the original httpx request
        for key, value in aws_request.headers.items():
            request.headers[key] = value

        yield request


# --- Configuration ---
region = "us-west-2"
session = boto3.Session()
credentials = session.get_credentials().get_frozen_credentials()

client = OpenAI(
    api_key="dummy",
    http_client=httpx.Client(
        auth=AWSBedrockMantleAuth("bedrock-mantle", region, credentials)
    )
)

response = client.responses.create(
    model="openai.gpt-oss-120b",
    input=[
        {"role": "user", "content": "Hello! How can you help me today?"}
    ]
)

# Pull the message text out of response.output
messages = [
    item for item in getattr(response, "output", [])
    if getattr(item, "type", None) == "message"
]

texts = []
for msg in messages:
    for content in getattr(msg, "content", []) or []:
        text = getattr(content, "text", None)
        if text:
            texts.append(text)

text = "\n".join(texts) if texts else None

if text:
    print(text)
else:
    # Fall back to printing the whole response just in case
    print(response)
takakuni@ iam % uv run responses.py
Hello! 👋 I’m here to help with almost anything you might need—whether that’s answering a question, brainstorming ideas, polishing a piece of writing, tackling a bit of code, learning a new concept, planning a trip, drafting an email, solving a puzzle, or just having a friendly chat.
**A quick snapshot of what I can do:**
| Category | Examples of what I can help with |
|----------|-----------------------------------|
| **Information & Research** | Summarize articles, explain complex topics, fact‑check, provide up‑to‑date statistics (as of 2024‑06) |
| **Writing & Editing** | Draft essays, blog posts, letters, resumes, cover letters, poetry, scripts; improve style, grammar, tone, structure |
| **Creative Projects** | Brainstorm story ideas, characters, plot twists, world‑building, jokes, marketing copy, slogans |
| **Programming & Tech** | Explain algorithms, debug code snippets (Python, JavaScript, Java, C++, etc.), write small programs, suggest libraries, help with Git |
| **Learning & Tutoring** | Walk through math problems, science concepts, language practice, test‑prep tips |
| **Productivity & Planning** | Create to‑do lists, project plans, meeting agendas, time‑management strategies |
| **Personal & Lifestyle** | Meal planning, fitness routines, travel itineraries, gift ideas, hobby recommendations |
| **Conversation & Support** | Offer a listening ear, role‑play scenarios, practice interviews, discuss philosophy, explore hobbies |
Just let me know what you’re working on or curious about, and we’ll dive in together! 🌟
Chat Completions API
In fact, the Chat Completions API can now also be called through Mantle (a sketch follows the recap below).
To recap, the previous mechanism went through the Amazon Bedrock Runtime endpoint, as shown below.
from openai import OpenAI

client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1",
    api_key="$AWS_BEARER_TOKEN_BEDROCK"  # Replace with actual API key
)

completion = client.chat.completions.create(
    model="openai.gpt-oss-20b-1:0",
    messages=[
        {
            "role": "developer",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ]
)

print(completion.choices[0].message)
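And here is a minimal sketch of the same call going through the Mantle endpoint instead. It assumes the OPENAI_API_KEY and OPENAI_BASE_URL environment variables are set as described earlier (e.g. https://bedrock-mantle.us-west-2.api.aws/v1) and uses the model ID format that appears in the Models API output above.

from openai import OpenAI

# OPENAI_API_KEY and OPENAI_BASE_URL are read from the environment.
client = OpenAI()

completion = client.chat.completions.create(
    model="openai.gpt-oss-20b",
    messages=[
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(completion.choices[0].message)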
Via HTTP
Next, HTTP. Let's try it with cURL, attaching the API key in the Authorization header of each request.
# Create a basic response
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
curl -X POST $OPENAI_BASE_URL/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "openai.gpt-oss-120b",
"input": [
{"role": "user", "content": "Hello! How can you help me today?"}
]
}'
Models API
# List all available models
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
curl -X GET $OPENAI_BASE_URL/models \
-H "Authorization: Bearer $OPENAI_API_KEY"
Short-term Amazon Bedrock API key
takakuni@ short-term % curl -X GET $OPENAI_BASE_URL/models \
-H "Authorization: Bearer $OPENAI_API_KEY"
{"data":[{"created":1764460800,"id":"qwen.qwen3-32b","object":"model","owned_by":"system"},{"created":1764460800,"id":"moonshotai.kimi-k2-thinking","object":"model","owned_by":"system"},{"created":1763923750,"id":"mistral.ministral-3-8b-instruct","object":"model","owned_by":"system"},{"created":1763769600,"id":"nvidia.nemotron-nano-12b-v2","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-12b-it","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.magistral-small-2509","object":"model","owned_by":"system"},{"created":1765065600,"id":"nvidia.nemotron-nano-3-30b","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-120b","object":"model","owned_by":"system"},{"created":1763923654,"id":"mistral.ministral-3-3b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-235b-a22b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-next-80b-a3b-instruct","object":"model","owned_by":"system"},{"created":1763769600,"id":"nvidia.nemotron-nano-9b-v2","object":"model","owned_by":"system"},{"created":1764460800,"id":"deepseek.v3.1","object":"model","owned_by":"system"},{"created":1763923896,"id":"mistral.mistral-large-3-675b-instruct","object":"model","owned_by":"system"},{"created":1763923865,"id":"mistral.ministral-3-14b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-safeguard-120b","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-coder-30b-a3b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.voxtral-mini-3b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-safeguard-20b","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-vl-235b-a22b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-27b-it","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.voxtral-small-24b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-coder-480b-a35b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"minimax.minimax-m2","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-20b","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-4b-it","object":"model","owned_by":"system"},{"created":1764460800,"id":"zai.glm-4.6","object":"model","owned_by":"system"}],"object":"list"}%
Long-term Amazon Bedrock API key
takakuni@ long-term % curl -X GET $OPENAI_BASE_URL/models \
-H "Authorization: Bearer $OPENAI_API_KEY"
{"data":[{"created":1763923654,"id":"mistral.ministral-3-3b-instruct","object":"model","owned_by":"system"},{"created":1763923750,"id":"mistral.ministral-3-8b-instruct","object":"model","owned_by":"system"},{"created":1763769600,"id":"nvidia.nemotron-nano-12b-v2","object":"model","owned_by":"system"},{"created":1765065600,"id":"nvidia.nemotron-nano-3-30b","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-120b","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-20b","object":"model","owned_by":"system"},{"created":1763923896,"id":"mistral.mistral-large-3-675b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.voxtral-small-24b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-next-80b-a3b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-235b-a22b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-safeguard-20b","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.magistral-small-2509","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-coder-30b-a3b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-27b-it","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-safeguard-120b","object":"model","owned_by":"system"},{"created":1763769600,"id":"nvidia.nemotron-nano-9b-v2","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-vl-235b-a22b-instruct","object":"model","owned_by":"system"},{"created":1763923865,"id":"mistral.ministral-3-14b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.voxtral-mini-3b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-32b","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-coder-480b-a35b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"deepseek.v3.1","object":"model","owned_by":"system"},{"created":1764460800,"id":"minimax.minimax-m2","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-12b-it","object":"model","owned_by":"system"},{"created":1764460800,"id":"moonshotai.kimi-k2-thinking","object":"model","owned_by":"system"},{"created":1764460800,"id":"zai.glm-4.6","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-4b-it","object":"model","owned_by":"system"}],"object":"list"}%
AWS IAM
IAM requires a separate SigV4-signed request, so I'll take the easy route with awscurl.
awscurl --service bedrock-mantle \
--region us-west-2 \
"https://bedrock-mantle.us-west-2.api.aws/v1/models"
This also returns the list without any problems.
takakuni@ iam % awscurl --service bedrock-mantle \
--region us-west-2 \
"https://bedrock-mantle.us-west-2.api.aws/v1/models"
{"data":[{"created":1764460800,"id":"mistral.voxtral-small-24b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-12b-it","object":"model","owned_by":"system"},{"created":1765065600,"id":"nvidia.nemotron-nano-3-30b","object":"model","owned_by":"system"},{"created":1763769600,"id":"nvidia.nemotron-nano-9b-v2","object":"model","owned_by":"system"},{"created":1764460800,"id":"zai.glm-4.6","object":"model","owned_by":"system"},{"created":1763923896,"id":"mistral.mistral-large-3-675b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-coder-30b-a3b-instruct","object":"model","owned_by":"system"},{"created":1763769600,"id":"nvidia.nemotron-nano-12b-v2","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.magistral-small-2509","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-32b","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-safeguard-120b","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-120b","object":"model","owned_by":"system"},{"created":1763923654,"id":"mistral.ministral-3-3b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"moonshotai.kimi-k2-thinking","object":"model","owned_by":"system"},{"created":1764460800,"id":"mistral.voxtral-mini-3b-2507","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-27b-it","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-safeguard-20b","object":"model","owned_by":"system"},{"created":1764460800,"id":"minimax.minimax-m2","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-235b-a22b-2507","object":"model","owned_by":"system"},{"created":1763923865,"id":"mistral.ministral-3-14b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-next-80b-a3b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-vl-235b-a22b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"deepseek.v3.1","object":"model","owned_by":"system"},{"created":1763923750,"id":"mistral.ministral-3-8b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"openai.gpt-oss-20b","object":"model","owned_by":"system"},{"created":1764460800,"id":"qwen.qwen3-coder-480b-a35b-instruct","object":"model","owned_by":"system"},{"created":1764460800,"id":"google.gemma-3-4b-it","object":"model","owned_by":"system"}],"object":"list"}
Responses API
Not much changes here, but let's run the Responses API over HTTP as well; the only real difference is that the endpoint becomes /responses.
# Create a basic response
# Requires OPENAI_API_KEY and OPENAI_BASE_URL environment variables
curl -X POST $OPENAI_BASE_URL/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "openai.gpt-oss-120b",
"input": [
{"role": "user", "content": "Hello! How can you help me today?"}
]
}'
Short-term Amazon Bedrock API key
takakuni@ short-term % curl -X POST $OPENAI_BASE_URL/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "openai.gpt-oss-120b",
"input": [
{"role": "user", "content": "Hello! How can you help me today?"}
]
}'
{"background":false,"created_at":1766909534.0,"error":null,"id":"resp_JXQHGDL3F37PJMKIZRMNRZK6GGOUMONYLMD36E5ENUQMFQNLB77Q","incomplete_details":null,"instructions":null,"metadata":{},"model":"openai.gpt-oss-120b","object":"response","output":[{"content":[{"text":"The user asks \"Hello! How can you help me today?\" This is a casual greeting and question. We should respond friendly, explain capabilities. Follow policies: don't mention policies. Provide a brief overview of what we can do: answering questions, summarizing, tutoring, creative writing, etc. Ask if they have a specific request.","type":"reasoning_text"}],"id":"msg_8015ade90331c021","status":"completed","summary":[],"type":"reasoning"},{"content":[{"annotations":[],"text":"Hey there! 👋 I’m here to help with a wide range of things, such as:\n\n- **Answering questions** – factual info, explanations, or advice on everyday topics. \n- **Research & summarizing** – pulling together and condensing information from multiple sources. \n- **Writing & editing** – essays, reports, emails, stories, poems, code snippets, you name it. \n- **Learning & tutoring** – walking through concepts, solving problems, or providing study tips. \n- **Creative brainstorming** – ideas for projects, recipes, travel plans, marketing copy, etc. \n- **Technical support** – debugging code, explaining algorithms, helping with software tools.\n\nAnything specific on your mind today? Let me know how I can assist you!","type":"output_text"}],"id":"msg_b02edcdf73954830","role":"assistant","status":"completed","type":"message"}],"parallel_tool_calls":true,"service_tier":"default","status":"completed","temperature":1.0,"text":{"format":{"type":"text"},"verbosity":"medium"},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled"}%
Long-term Amazon Bedrock API key
takakuni@ long-term % curl -X POST $OPENAI_BASE_URL/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "openai.gpt-oss-120b",
"input": [
{"role": "user", "content": "Hello! How can you help me today?"}
]
}'
{"background":false,"created_at":1766909552.0,"error":null,"id":"resp_IYXJEOQGDA3TFZOYC6PXJY3HYZ4MGRTYPU5CRCJQXT35GIADROYA","incomplete_details":null,"instructions":null,"metadata":{},"model":"openai.gpt-oss-120b","object":"response","output":[{"content":[{"text":"User greeted and asked \"How can you help me today?\" We need to respond as AI assistant. It's general greeting and question. Should introduce capabilities and ask follow-up. Keep friendly tone.","type":"reasoning_text"}],"id":"msg_8f283b5d077e772f","status":"completed","summary":[],"type":"reasoning"},{"content":[{"annotations":[],"text":"Hey there! 👋\n\nI’m here to help with a wide range of things, including:\n\n- **Answering questions** – facts, explanations, how‑tos, or deep‑dive research on almost any topic. \n- **Writing & editing** – essays, reports, blog posts, creative stories, emails, cover letters, social‑media copy, scripts, poetry, you name it. \n- **Learning & tutoring** – break down complex concepts, walk through problems step‑by‑step, generate practice questions, or create study guides. \n- **Planning & organizing** – itineraries, project outlines, to‑do lists, meal plans, budgeting spreadsheets, event schedules, etc. \n- **Technical assistance** – code snippets, debugging help, algorithm explanations, data‑analysis ideas, workflow automation, and more. \n- **Creative brainstorming** – product names, marketing slogans, game mechanics, plot twists, character backstories, design concepts. \n- **Personal guidance** – productivity tips, habit‑building strategies, career advice, interview prep, communication skills, etc. \n\nIf you have something specific in mind—or just want to explore ideas—let me know and we’ll dive in together! 🚀","type":"output_text"}],"id":"msg_a24ec82c3c372942","role":"assistant","status":"completed","type":"message"}],"parallel_tool_calls":true,"service_tier":"default","status":"completed","temperature":1.0,"text":{"format":{"type":"text"},"verbosity":"medium"},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled"}%
AWS IAM
awscurl --service bedrock-mantle \
--region us-west-2 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"model": "openai.gpt-oss-120b",
"input": [
{
"role": "user",
"content": "Hello! How can you help me today?"
}
]
}' \
"https://bedrock-mantle.us-west-2.api.aws/v1/responses"
takakuni@ iam % awscurl --service bedrock-mantle \
--region us-west-2 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"model": "openai.gpt-oss-120b",
"input": [
{
"role": "user",
"content": "Hello! How can you help me today?"
}
]
}' \
"https://bedrock-mantle.us-west-2.api.aws/v1/responses"
{"background":false,"created_at":1766909761.0,"error":null,"id":"resp_7IKB4GCGBKJGINLHOLPIP2M2IL2IQYMRP2SMIN6K5PMCVDS37UTQ","incomplete_details":null,"instructions":null,"metadata":{},"model":"openai.gpt-oss-120b","object":"response","output":[{"content":[{"text":"The user says \"Hello! How can you help me today?\" It's a general greeting. We need to respond with a friendly greeting and explain capabilities. Use the style guidelines. Probably ask how can assist. Provide short overview.","type":"reasoning_text"}],"id":"msg_aa80debcdef3f7c3","status":"completed","summary":[],"type":"reasoning"},{"content":[{"annotations":[],"text":"Hi there! 👋 \n\nI’m here to help with a wide range of tasks, such as:\n\n* **Answering questions** – facts, explanations, troubleshooting, or quick look‑ups. \n* **Writing & editing** – drafts, emails, essays, reports, creative pieces, or polishing existing text. \n* **Learning & tutoring** – breaking down concepts, generating practice problems, or guiding you through step‑by‑step solutions. \n* **Planning & organization** – to‑do lists, schedules, project outlines, travel itineraries, recipes, etc. \n* **Coding assistance** – explaining code, debugging, writing snippets, or teaching programming concepts. \n* **Fun & brainstorming** – ideas for stories, games, gifts, hobbies, or just a friendly chat.\n\nWhat would you like to work on or discuss today?","type":"output_text"}],"id":"msg_8c819d9e7b34b057","role":"assistant","status":"completed","type":"message"}],"parallel_tool_calls":true,"service_tier":"default","status":"completed","temperature":1.0,"text":{"format":{"type":"text"},"verbosity":"medium"},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled"}
Summary
That wraps up "Amazon Bedrock now supports OpenAI's Responses API."
I'm very curious about Project Mantle.
Will Bedrock keep adding support for LLM providers' native SDKs, the way it did this time?
I'm looking forward to it. This was Takakuni (@takakuni_) from the Consulting Department, Cloud Business Division!