Trying to connect to Aurora PostgreSQL Serverless Express configuration with Drizzle ORM
Aurora PostgreSQL Serverless with express configuration (referred to as Aurora Express) is a new Aurora option that allows you to create clusters in seconds without requiring a VPC.
For more details, please check our company blog posts here:
In this article, I'll demonstrate connecting to Aurora Express using TypeScript ORM "Drizzle ORM" (https://orm.drizzle.team/). The key point will be handling IAM authentication tokens.
Note that there's another article about using Drizzle with Aurora DSQL. Please be aware that this is a different service from the Express configuration we're covering today.
Key Points First
- Aurora Express supports IAM authentication only
- To connect from Drizzle, generate an IAM authentication token with @aws-sdk/rds-signer and pass it as the password
- Pass an async function to the node-postgres (pg) connection settings to dynamically obtain an IAM authentication token for each connection
- Since it's standard Aurora PostgreSQL, Drizzle features like generatedAlwaysAsIdentity() work as expected
What is Aurora PostgreSQL Serverless Express Configuration?
Aurora Express is a new cluster creation option for Aurora.
Its biggest feature is that it includes an Internet Access Gateway, allowing direct connection from the internet without a VPC.
This makes it very convenient for personal development and prototyping.
Prerequisites
I'll proceed with the assumption that you've completed:
- Creating an Aurora Express cluster
- Installing AWS CLI and configuring AWS credentials
What We'll Build
We'll create a simple REST API server using Hono + Drizzle ORM.
The complete sample code is available on GitHub.
The directory structure is as follows:
.
├── src/
│   ├── db/
│   │   ├── index.ts    # DB connection settings (IAM token retrieval & caching)
│   │   └── schema.ts   # Schema definition
│   ├── app.ts          # Hono route definition
│   ├── logger.ts       # Logger configuration (pino)
│   └── index.ts        # Server startup
├── scripts/
│   └── migrate.sh      # Migration script
├── drizzle.config.ts
├── .env
└── package.json
Let's Run It
Installing Dependencies
First, install the dependencies.
$ npm install
Setting Up .env
Copy the .env.example file.
$ cp .env.example .env
Set your DB connection information in the .env file. Specify your cluster endpoint for DB_HOST.
DB_HOST=my-express-cluster.cluster-xxxxxxxxxxxx.ap-northeast-1.rds.amazonaws.com
DB_USER=postgres
DB_NAME=postgres
AWS_REGION=ap-northeast-1
TZ=Asia/Tokyo
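Before connecting, it's worth failing fast if any of these variables are missing. A minimal sketch of such a check (the `requireEnv` helper is hypothetical and not part of the sample repo, which may load its configuration differently):

```typescript
// Hypothetical helper: resolve an environment variable or a fallback,
// throwing early if neither is available.
function requireEnv(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Placeholder endpoint mirroring the .env example above.
process.env.DB_HOST ??=
  "my-express-cluster.cluster-xxxxxxxxxxxx.ap-northeast-1.rds.amazonaws.com";

const config = {
  host: requireEnv("DB_HOST"),
  user: requireEnv("DB_USER", "postgres"),
  database: requireEnv("DB_NAME", "postgres"),
  region: requireEnv("AWS_REGION", "ap-northeast-1"),
};
```

Failing at startup with a clear message beats a cryptic connection error minutes later when the pool first tries to authenticate.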
Migration
Run npm run db:migrate to create the tables.
This requires the AWS CLI and valid AWS credentials, since the command generates an IAM authentication token.
$ npm run db:migrate
Generating IAM authentication token...
Running migration...
[✓] Changes applied
Starting the API Server and Testing
Start the server.
Here too, AWS credentials are needed to generate IAM authentication tokens; provide them via environment variables or another credential source.
$ npm start
[HH:MM:SS.mmm] INFO (XXXXX): [db/index] module initialized
[HH:MM:SS.mmm] INFO (XXXXX): http://localhost:3000
...
Verify the API CRUD operations using curl.
# POST - Create user
$ curl -s -X POST http://localhost:3000/users \
    -H 'Content-Type: application/json' \
    -d '{"name":"ClassMethod Taro","email":"mesota@example.com"}'
{
  "id": 1,
  "name": "ClassMethod Taro",
  "email": "mesota@example.com",
  "createdAt": "2025-03-26T10:00:00.000Z"
}

# GET - List all
$ curl -s http://localhost:3000/users
[
  { "id": 1, "name": "ClassMethod Taro", "email": "mesota@example.com", "createdAt": "2025-03-26T10:00:00.000Z" }
]

# PUT - Update
$ curl -s -X PUT http://localhost:3000/users/1 \
    -H 'Content-Type: application/json' \
    -d '{"name":"ClassMethod Jiro"}'
{
  "id": 1,
  "name": "ClassMethod Jiro",
  "email": "mesota@example.com",
  "createdAt": "2025-03-26T10:00:00.000Z"
}

# DELETE - Delete
$ curl -s -X DELETE http://localhost:3000/users/1
{
  "id": 1,
  "name": "ClassMethod Jiro",
  "email": "mesota@example.com",
  "createdAt": "2025-03-26T10:00:00.000Z"
}
It works perfectly!
Implementation Details
From here, I'll explain what differs from a regular Drizzle + PostgreSQL connection.
Obtaining and Caching IAM Authentication Tokens (src/db/index.ts)
Since Aurora Express supports IAM authentication only, we use an IAM authentication token instead of a password.
Tokens can be generated with the Signer class from @aws-sdk/rds-signer.
- Reference: Amazon RDS examples using SDK for JavaScript (v3) - AWS SDK for JavaScript
- Reference: @aws-sdk/rds-signer - AWS SDK for JavaScript v3
import { Signer } from "@aws-sdk/rds-signer";

const signer = new Signer({
  hostname: DB_HOST,
  port: 5432,
  username: DB_USER,
  region: AWS_REGION,
});
You can pass an async function to the password option of pg.Pool. We use this to pass a function that retrieves a token each time a new connection is established.
- Reference: Connecting | node-postgres
const pool = new Pool({
  host: DB_HOST,
  // ...
  ssl: true, // SSL is required
  password: () => signer.getAuthToken(), // called for each new connection
});
However, IAM authentication tokens expire after 15 minutes. Calling the AWS API for every connection is inefficient, so we cache the token and only refresh it starting 5 minutes before expiration.
Also, pg.Pool's idleTimeoutMillis (the time before idle connections are destroyed) defaults to 10 seconds.
Since establishing a new connection is slow, I raised this to keep connections alive longer.
Aurora Serverless drops idle connections on the server side after about 5 minutes, so I set it to 4 minutes to stay just under that limit.
- Reference: Pool | node-postgres
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";

const TOKEN_TTL_MS = 15 * 60 * 1000; // tokens are valid for 15 minutes
const TOKEN_REFRESH_BUFFER_MS = 5 * 60 * 1000; // refresh 5 minutes before expiry

let cachedToken = "";
let tokenExpiresAt = 0;

async function getAuthToken(): Promise<string> {
  if (Date.now() < tokenExpiresAt - TOKEN_REFRESH_BUFFER_MS) {
    return cachedToken;
  }
  cachedToken = await signer.getAuthToken();
  tokenExpiresAt = Date.now() + TOKEN_TTL_MS;
  return cachedToken;
}

const pool = new Pool({
  host: DB_HOST,
  // ...
  ssl: true,
  idleTimeoutMillis: 4 * 60 * 1000, // shorter than Aurora's ~5-minute server-side disconnect
  password: getAuthToken,
});

export const db = drizzle({ client: pool });
Migration: Embedding Tokens in URLs (scripts/migrate.sh)
drizzle-kit references DATABASE_URL when executing migrations. For regular password authentication, you could set a fixed URL in .env, but with IAM authentication, you need to generate the token and build the URL at runtime.
Since tokens contain characters that aren't allowed in URLs (like /, +, =), URL encoding is also necessary.
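To see why the encoding matters, here is the same transformation the script's node one-liner performs, with a made-up token value (real tokens are much longer, but contain the same unsafe characters):

```typescript
// A made-up token fragment containing the URL-unsafe characters / + =
const rawToken = "my-host:5432/?Action=connect&X-Amz-Signature=abc+def=";

// encodeURIComponent percent-encodes every character that is not safe
// inside a URL component: "/" -> %2F, "+" -> %2B, "=" -> %3D, etc.
const encodedToken = encodeURIComponent(rawToken);

// Now the token can be embedded as the password portion of a connection URL
const databaseUrl = `postgresql://postgres:${encodedToken}@my-host:5432/postgres?sslmode=require`;
```

Without this step, the `/` and `?` inside the token would be parsed as URL path and query delimiters, and the connection string would break in hard-to-diagnose ways.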
scripts/migrate.sh combines these steps.
<details>
<summary>scripts/migrate.sh</summary>
#!/bin/bash
set -e

# Load environment variables from .env
if [ -f .env ]; then
  export $(grep -v '^#' .env | grep -v '^$' | xargs)
fi

# Check required variables
: "${DB_HOST:?DB_HOST is not set in .env}"
: "${DB_USER:=postgres}"
: "${DB_NAME:=postgres}"
: "${AWS_REGION:=ap-northeast-1}"

echo "Generating IAM authentication token..."
TOKEN=$(aws rds generate-db-auth-token \
  --hostname "$DB_HOST" \
  --port 5432 \
  --region "$AWS_REGION" \
  --username "$DB_USER")

# Tokens contain URL-unsafe characters (/ + =), so URL-encode before embedding
ENCODED_TOKEN=$(node -e "process.stdout.write(encodeURIComponent(process.argv[1]))" "$TOKEN")

export DATABASE_URL="postgresql://${DB_USER}:${ENCODED_TOKEN}@${DB_HOST}:5432/${DB_NAME}?sslmode=require"

echo "Running migration..."
npx drizzle-kit push
</details>
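For reference, the drizzle.config.ts that consumes this DATABASE_URL can be as small as the following sketch (field names follow drizzle-kit's defineConfig format; the actual file in the sample repo may differ):

```typescript
// drizzle.config.ts -- a minimal sketch; drizzle-kit reads the DATABASE_URL
// that migrate.sh exports before invoking `npx drizzle-kit push`.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./src/db/schema.ts",
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
});
```

Because the URL already carries the encoded IAM token as its password, nothing in this config needs to know about IAM authentication at all.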
Schema Definition (src/db/schema.ts)
The schema definition is the same as regular Drizzle + PostgreSQL. Let's review it again.
import { integer, pgTable, varchar, timestamp } from "drizzle-orm/pg-core";

export const usersTable = pgTable("users", {
  id: integer().primaryKey().generatedAlwaysAsIdentity(),
  name: varchar({ length: 255 }).notNull(),
  email: varchar({ length: 255 }).notNull().unique(),
  createdAt: timestamp().defaultNow().notNull(),
});
generatedAlwaysAsIdentity() uses PostgreSQL sequences for auto-numbering. Since Express Configuration is a regular Aurora PostgreSQL, these standard Drizzle features work as expected.
Examining Execution Times in Logs
Let's look at how much execution time it actually takes using the execution logs.
Aurora Express automatically stops after 300 seconds (5 minutes) of inactivity by default. Let's check the actual logs to see how long it takes during a cold start.
- Reference: Scaling to zero ACUs with automatic pause and resume for Aurora Serverless v2 - Amazon Aurora
From the very first cold start state to establishing a connection, it takes about 16 seconds.
[16:20:54.070] INFO (99349): [db] query: select "id", "name", "email", "createdAt" from "users" -- []
[16:20:55.137] INFO (99349): [getAuthToken] called
[16:20:55.137] INFO (99349): [getAuthToken] cache miss. fetching new token...
[16:20:55.155] INFO (99349): [getAuthToken] token fetched. expires at 2026-03-27T16:35:55
[16:21:10.336] INFO (99349): [pool] new connection established
[16:21:10.478] INFO (99349): [db] result: 1 rows
After the connection is established, responses come back in milliseconds.
[16:22:03.743] INFO (99349): [db] query: select "id", "name", "email", "createdAt" from "users" -- []
[16:22:03.782] INFO (99349): [db] result: 1 rows
Connections are destroyed after a certain time due to the idleTimeoutMillis setting.
When an API is executed afterward and reconnection occurs, it takes 3-4 seconds.
[16:34:53.692] INFO (99349): [db] query: select "id", "name", "email", "createdAt" from "users" -- []
[16:34:54.567] INFO (99349): [getAuthToken] called
[16:34:54.569] INFO (99349): [getAuthToken] cache hit. expires at 2026-03-27T16:35:55
[16:34:57.019] INFO (99349): [pool] new connection established
[16:34:57.076] INFO (99349): [db] result: 1 rows
Things to Note
Internet Access Gateway Cannot Be Disabled
The Internet Access Gateway for Express Configuration is always enabled. While the convenience of direct internet connection is appealing, access control becomes important. Pay attention to properly configuring IAM authentication for production use.
Conclusion
I've tried connecting to Aurora PostgreSQL Serverless Express configuration using Drizzle ORM.
Despite the IAM authentication-only constraint, you can handle it smoothly by combining @aws-sdk/rds-signer with node-postgres (pg). Adding caching minimizes AWS API calls.
As a serverless PostgreSQL that can be used without a VPC, it seems very useful for personal development and prototyping.
I'd like to try using it from AWS Lambda next.
I hope this blog is helpful to someone.