I tried running Kiro CLI on Lambda functions

2026.04.28

Introduction

On 2026/04/13, Kiro CLI 2.0 was released. Among its new features is a headless mode, a new operation mode that makes it possible to authenticate with API keys in addition to the previously required browser-based authentication.

※For more details, please read the following article by suzuki.ryo.

https://dev.classmethod.jp/articles/kiro-cli-2-0-headless-mode-api-key-auth/

As a result, it has become easier to integrate Kiro into CI/CD pipelines, for example by installing Kiro CLI in Lambda functions to have Kiro review code bases or write summaries of changes.

In this article, I would like to explore and test how to run Kiro CLI in Lambda.

Prerequisites

For this verification, the following environment was used:

  • AWS Region: us-east-1
    • For this verification, all resources are placed in the us-east-1 region for convenience. For production use, please check the available regions for each resource
  • Lambda base image: public.ecr.aws/lambda/python:3.14 (latest version at time of writing)
  • Kiro CLI: version 2.1.1 (latest version at time of writing)
  • Local build environment: macOS (Apple Silicon) + Docker CLI + colima
  • Kiro API key has been issued
    • You need to log in to Kiro Web via IAM Identity Center and issue an API key from the account settings

Function Creation

Store Kiro API Key in Secrets Manager

First, let's store the Kiro API key in a Secrets Manager secret. Since API keys are long-term credentials, they should always be stored in a secure location. Never hardcode them in Lambda function source code.

aws secretsmanager create-secret \
  --name kiro-credentials \
  --region us-east-1 \
  --secret-string '{"KIRO_API_KEY":"ksk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}'
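The handler we create later reads this secret back and parses the JSON payload. As a quick local sanity check of the expected shape (using a dummy key value, not a real one):

```python
import json

# Dummy SecretString in the same JSON shape as the secret created above
secret_string = '{"KIRO_API_KEY":"ksk_dummy_value_for_illustration"}'

# The handler extracts the API key from the parsed JSON
api_key = json.loads(secret_string)["KIRO_API_KEY"]
print(api_key)  # ksk_dummy_value_for_illustration
```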

Preparation for Container Image Creation

Next, we'll create a Lambda function with Kiro CLI.

There are two methods for creating Lambda functions: zipping the source code and uploading it, or building and deploying a container image. Since Kiro CLI exceeds the deployment package size quota for zip uploads (250 MB unzipped), the former method is not an option.
Therefore, we will build and deploy a container image. First, create the following directory structure:

lambda-image/
├── Dockerfile
├── handler.py
└── .dockerignore

Next, create the Dockerfile as follows:

I chose a Python 3.14-based image as the base image. The language runtime itself doesn't matter much, but you should choose a base image built on Amazon Linux 2023, since Kiro CLI requires glibc 2.34 or later.
The setup is simple: it just runs the installation command from the official documentation and sets up the dependency files.

FROM public.ecr.aws/lambda/python:3.14

RUN dnf install -y unzip ca-certificates && dnf clean all

ENV HOME=/opt/kiro-home
RUN mkdir -p $HOME && \
    curl -fsSL https://cli.kiro.dev/install | bash

RUN pip install --no-cache-dir boto3

COPY handler.py ${LAMBDA_TASK_ROOT}/

ENV HOME=/tmp \
    PATH=/opt/kiro-home/.local/bin:${PATH}

CMD ["handler.handler"]

The Lambda function source code is as follows:
It's a simple implementation that retrieves the Kiro API key from Secrets Manager and calls the Kiro CLI binary using subprocess.

The key point is to use the --no-interactive option, which enables non-interactive execution mode, allowing it to operate autonomously in environments like Lambda functions where user interaction is not possible.
Additionally, you can use the --trust-all-tools option to allow all tool executions, or --trust-tools to allow a subset of them; see the Kiro CLI documentation for details. In general, the options work exactly as they do in a local environment, except that --no-interactive is practically mandatory when running Kiro CLI in a Lambda function.

Another point is that the strings Kiro writes to standard output and standard error contain ANSI escape sequences, which the CLI uses to colorize its output.
If you don't remove them, they become noise for any subsequent processing that handles the output programmatically. (Example: \x1b[38;5;141m> \x1b[0mI)
The official documentation says coloring can be disabled with the NO_COLOR or KIRO_LOG_NO_COLOR environment variables, but since control characters still remained in practice, I added a step that strips ANSI escape sequences with a regular expression.

import json
import re
import os
import subprocess
import boto3

KIRO_BIN = "/opt/kiro-home/.local/bin/kiro-cli"
SECRET_ID = os.environ.get("KIRO_SECRET_ID", "kiro-credentials")
SECRET_REGION = os.environ.get("KIRO_SECRET_REGION", "us-east-1")

# Function to remove ANSI escape sequences (details below)
ANSI_ESCAPE_RE = re.compile(r'\x1b(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')

def _strip_ansi(s: str) -> str:
  return ANSI_ESCAPE_RE.sub("", s) if s else s

# Get Kiro API key from Secrets Manager secret (with cache)
_api_key_cache = None

def _get_api_key():
  global _api_key_cache
  if _api_key_cache:
    return _api_key_cache

  client = boto3.client("secretsmanager", region_name=SECRET_REGION)
  resp = client.get_secret_value(SecretId=SECRET_ID)
  secret = json.loads(resp["SecretString"])
  _api_key_cache = secret["KIRO_API_KEY"]
  return _api_key_cache

# Main processing
def handler(event, context):
  prompt = event.get("prompt")
  if not prompt:
    # Guard against a missing prompt; otherwise subprocess would receive None
    return {"statusCode": 400, "error": "prompt is required"}
  trust_all_tools = event.get("trust_all_tools", True)

  env = os.environ.copy()
  env["KIRO_API_KEY"] = _get_api_key()
  env["HOME"] = "/tmp"
  env["NO_COLOR"] = "1"  # Suppress color control (TUI control is removed separately by regex)

  cmd = [KIRO_BIN, "chat", "--no-interactive"]
  if trust_all_tools:
    cmd.append("--trust-all-tools")
  cmd.append(prompt)

  completed = subprocess.run(
    cmd,
    env=env,
    cwd="/tmp",
    capture_output=True,
    text=True,
    timeout=context.get_remaining_time_in_millis() / 1000 - 5,
  )
  return {
    "statusCode": 200 if completed.returncode == 0 else 500,
    "returncode": completed.returncode,
    "stdout": _strip_ansi(completed.stdout),
    "stderr": _strip_ansi(completed.stderr),
  }
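To sanity-check the ANSI-stripping regex locally before deploying, you can run it against the example escape sequence mentioned above:

```python
import re

# Same pattern as in handler.py: two-character escapes and CSI color sequences
ANSI_ESCAPE_RE = re.compile(r'\x1b(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')

def strip_ansi(s: str) -> str:
    return ANSI_ESCAPE_RE.sub("", s) if s else s

# The example from above: color codes wrapped around "> I"
colored = "\x1b[38;5;141m> \x1b[0mI"
print(repr(strip_ansi(colored)))  # '> I'
```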

Write the following for .dockerignore.

*
!Dockerfile
!handler.py

With this, the preparation is complete.

Creating the Container Image

Now we just need to build and push the image based on the files we created.

# Create repository ※Only for the first time
aws ecr create-repository \
  --repository-name "{repository-name}" \
  --region "us-east-1" \
  --image-scanning-configuration scanOnPush=true \
  --encryption-configuration encryptionType=AES256

# ECR login ※Only for the first time
aws ecr get-login-password --region "us-east-1" \
  | docker login --username AWS --password-stdin "{AWSAccountID}.dkr.ecr.us-east-1.amazonaws.com"

# Build
docker buildx build \
  --platform linux/amd64 \
  --provenance=false \
  -t kiro-on-lambda:latest \
  --load .

# Tag & Push
docker tag kiro-on-lambda:latest "{AWSAccountID}.dkr.ecr.us-east-1.amazonaws.com/{repository-name}:v1"
docker push "{AWSAccountID}.dkr.ecr.us-east-1.amazonaws.com/{repository-name}:v1"

Creating IAM Role for Lambda Function Execution

Since we're creating the Lambda function from the CLI, we'll also create the IAM role manually from the CLI.
The trust policy allows the Lambda service principal (lambda.amazonaws.com) to assume the role, and we attach the AWSLambdaBasicExecutionRole managed policy.

cat > lambda-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name kiro-on-lambda-role \
  --assume-role-policy-document file://lambda-trust-policy.json

aws iam attach-role-policy \
  --role-name kiro-on-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

In the permission policy, allow retrieving the Kiro API key from the Secrets Manager secret.

cat > lambda-inline-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "arn:aws:secretsmanager:us-east-1:{AWSAccountID}:secret:kiro-credentials*"
    }
  ]
}
EOF

aws iam put-role-policy \
  --role-name kiro-on-lambda-role \
  --policy-name ReadKiroSecret \
  --policy-document file://lambda-inline-policy.json

Creating the Lambda Function

Create the function by specifying the container image and execution role created earlier.

aws lambda create-function \
  --function-name kiro-on-lambda \
  --region "us-east-1" \
  --package-type Image \
  --code "ImageUri={AWSAccountID}.dkr.ecr.us-east-1.amazonaws.com/{repository-name}:v1" \
  --role "arn:aws:iam::{AWSAccountID}:role/kiro-on-lambda-role" \
  --timeout 300 \
  --memory-size 2048 \
  --ephemeral-storage Size=1024 \
  --architectures x86_64 \
  --description "Kiro CLI on Lambda"

I set a rather long timeout (300 seconds) to account for cold start time and the time Kiro needs to process the prompt. I also set the ephemeral storage to 1024 MB so that Kiro CLI, which requires several hundred MB of working space, has enough room, and the memory size to 2048 MB after measuring a peak usage of around 350 MB. Production use may require more careful parameter tuning, but I hope these values serve as a reference.
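The subprocess timeout in the handler is derived from the remaining invocation time, minus a safety margin so the handler can still return a response before Lambda kills it. A minimal sketch of that calculation, using a hypothetical stub in place of the real Lambda context object:

```python
# Hypothetical stub standing in for the Lambda context object
class FakeContext:
    def __init__(self, remaining_ms: int):
        self._remaining_ms = remaining_ms

    def get_remaining_time_in_millis(self) -> int:
        return self._remaining_ms

def subprocess_timeout(context, margin_s: float = 5.0) -> float:
    # Reserve a few seconds for post-processing and returning the response
    return context.get_remaining_time_in_millis() / 1000 - margin_s

# With the full 300-second timeout remaining, the subprocess gets 295 seconds
print(subprocess_timeout(FakeContext(300_000)))  # 295.0
```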

Verification

Let's start the Lambda function from the CLI and check the results.

aws lambda invoke \
  --cli-binary-format raw-in-base64-out \
  --function-name kiro-on-lambda \
  --region us-east-1 \
  --payload '{"prompt":"Hello from AWS Lambda. Please introduce yourself in one sentence."}' \
  out.json

cat out.json | jq -r '.stdout'

The string generated by Kiro is returned as follows, confirming it works:

> Hi! I'm Kiro, an AI agent that helps with coding, writing, analysis, research,
  and other professional tasks — ready to assist with whatever you need.

Let's also try accessing files under /tmp using the write and read tools.
We'll prompt Kiro to create a file, read it back, and output its contents, then execute the function.
To verify that Kiro actually used the tools (rather than hallucinating that it created a file when it hadn't), we'll add code to the function that checks the file output:

+ verifications = []
+ for p in event.get("verify_paths", []):
+   item = {"path": p, "exists": os.path.exists(p)}
+   if item["exists"]:
+     try:
+       item["size"] = os.path.getsize(p)
+       with open(p) as f:
+         item["content"] = f.read()[:500]
+     except Exception as e:
+       item["error"] = repr(e)
+   verifications.append(item)

return {
  "statusCode": 200 if completed.returncode == 0 else 500,
  "returncode": completed.returncode,
  "stdout": _strip_ansi(completed.stdout),
  "stderr": _strip_ansi(completed.stderr),
+   "cmd": cmd[:-1] + ["<prompt>"],
+   "verifications": verifications,
}

aws lambda invoke \
  --cli-binary-format raw-in-base64-out \
  --function-name kiro-on-lambda \
  --region us-east-1 \
  --payload '{
    "prompt":"Use the fs_write tool to create /tmp/hello.txt containing exactly the text \"Hello from Lambda\", then use fs_read to read it back and report the content.",
    "verify_paths":["/tmp/hello.txt"]
  }' \
  out.json

cat out.json | jq '.verifications'

The actual response is as follows, confirming that Kiro was able to output the string "Hello from Lambda" to /tmp/hello.txt via the tool:

[
  {
    "path": "/tmp/hello.txt",
    "exists": true,
    "size": 18,
    "content": "Hello from Lambda\n"
  }
]
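If a CI/CD pipeline consumes this response instead of a human running jq, the same check can be automated. A minimal sketch, assuming the response shape shown above:

```python
# Sample "verifications" array in the same shape as the response above
verifications = [
    {
        "path": "/tmp/hello.txt",
        "exists": True,
        "size": 18,
        "content": "Hello from Lambda\n",
    }
]

def all_files_written(verifications: list, expected_text: str) -> bool:
    # Fail if Kiro only claimed to write a file without actually doing so
    return all(
        v["exists"] and v.get("content", "").strip() == expected_text
        for v in verifications
    )

print(all_files_written(verifications, "Hello from Lambda"))  # True
```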

Conclusion

In this article, I introduced how to run Kiro CLI on Lambda.
You can use Kiro for purposes such as document generation, dependency auditing, PR summaries, etc., as introduced in the AWS official blog, expanding the range of Kiro's applications.

Additionally, our company is developing a no-code business support tool that combines Kiro CLI's subagent and hook features, and we use the mechanism described in this article as the foundation for running its prompts.
Kiro can serve not only as an IDE for development support but also as a no-code/low-code agent workflow tool, which may expand its range of applications in development and business operations.

I hope the content of this article will be of some help in your use of generative AI.
Thank you for reading to the end.
