Tried Directly Connecting to Amazon Aurora DSQL with Cloudflare Workers TCP Socket

I verified a direct connection to Amazon Aurora DSQL from Cloudflare Workers using TCP sockets. Beyond the implementation steps with the official connector, I cover architectural considerations that matter in production, such as edge-dependent latency and the 100 connections per second limit that follows from the inability to reuse connections.
2026.03.09


In the previous article, we tested the write and read performance when connecting to Aurora DSQL from AWS Lambda.

https://dev.classmethod.jp/articles/aurora-dsql-lambda-performance/

This time, we'll check if we can connect to Aurora DSQL from Cloudflare Workers using TCP sockets.

Why Connect to DSQL from Workers?

We're considering microservicing some of our blog feature APIs with Cloudflare Workers + D1. While D1 is SQLite-based and excellent for edge read performance, it's not ideal for heavy aggregations such as complex JOINs or window functions.

Therefore, we're thinking about using Aurora DSQL, which is PostgreSQL-compatible, as a "master data and aggregation DB" and periodically syncing the results to D1 using Cron Triggers. To achieve this architecture, being able to directly access DSQL from Workers is a fundamental requirement.

Cloudflare Workers has supported TCP sockets (connect() API) since May 2023, making direct PostgreSQL connections possible. However, it was uncertain whether DSQL's IAM token authentication would work properly in the Workers runtime environment, so we started by checking this connectivity.

Test Environment

| Item | Value |
|---|---|
| DSQL Region | us-west-2 |
| Docker Image | node:22-slim + wrangler 4.71.0 |
| Bundle Size | 147.79 KiB / gzip: 37.37 KiB |
| Worker Startup Time | 14-16 ms |

I set up wrangler, the Cloudflare Workers development CLI, in a Docker container to keep my local environment clean during testing.

# Dockerfile
FROM node:22-slim
RUN npm install -g wrangler
WORKDIR /app
EXPOSE 8787
# compose.yaml
services:
  worker:
    build: .
    volumes:
      - ./app:/app
    ports:
      - "8788:8787"
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
    working_dir: /app
    command: sleep infinity

Here's the wrangler.toml. The nodejs_compat flag is essential: without it, TCP sockets are unavailable.

name = "dsql-test"
main = "src/index.ts"
compatibility_date = "2024-09-23"
compatibility_flags = ["nodejs_compat"]

Install the dependencies:

npm install postgres @aws-sdk/dsql-signer @aws/aurora-dsql-postgresjs-connector

Connecting with the Official Connector

Let's connect using the official @aws/aurora-dsql-postgresjs-connector. The connector transparently handles token generation and connection parameter optimization, resulting in less code.

import { auroraDSQLPostgres } from "@aws/aurora-dsql-postgresjs-connector";

interface Env {
  AWS_ACCESS_KEY_ID: string;
  AWS_SECRET_ACCESS_KEY: string;
  AWS_SESSION_TOKEN: string;
  DSQL_ENDPOINT: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const sql = auroraDSQLPostgres({
      host: env.DSQL_ENDPOINT,
      username: "admin",
      customCredentialsProvider: async () => ({
        accessKeyId: env.AWS_ACCESS_KEY_ID,
        secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
        sessionToken: env.AWS_SESSION_TOKEN,
      }),
    });

    try {
      const [row] = await sql`SELECT current_timestamp as ts`;
      await sql.end();
      return Response.json({ ok: true, timestamp: row.ts });
    } catch (e: any) {
      try { await sql.end(); } catch {}
      return Response.json({ ok: false, error: e.message }, { status: 500 });
    }
  },
};

Let's deploy with wrangler deploy and send a request.

{ "ok": true, "timestamp": "2026-03-08T08:49:29.290Z" }

The connection works perfectly.

Querying Real Data

Next, let's retrieve data from actual tables.

const posts = await sql`SELECT id, title FROM blogpost_meta LIMIT 3`;
const tags = await sql`SELECT * FROM tag_master LIMIT 3`;

The response:

{
  "posts": [
    { "id": "00oAKWv1w3mtdzVecJwdU", "title": "[2025年7月21日から]BigQueryの..." }
  ],
  "tags": [
    { "tag_name": "1on1", "usage_count": 14, "tag_category": "Methodology" }
  ]
}

The data is retrieved successfully. Japanese text also displays without any character encoding issues.

Latency Measurements

I added Date.now() measurements to the previous code to measure processing time within the Worker (connection establishment + query execution). Here are the results from measuring two clusters: us-west-2, which is in the same region as the Worker edge (PDX), and ap-northeast-1, which is across the Pacific Ocean.

| Measurement | us-west-2 (near edge) | ap-northeast-1 (across Pacific) |
|---|---|---|
| SELECT 1 row | median 100-170 ms | median 1,400-1,800 ms |
| INSERT 100 rows (serial) | ~16 ms per row | ~147 ms per row |

In us-west-2, which is close to the Worker edge (PDX), SELECT queries complete in about 100-170ms, while for ap-northeast-1 across the Pacific, latency increases about 10x. These figures include the full TCP/TLS handshake + IAM token generation + query execution.

Note that since I called the Worker from an EC2 instance in Oregon, the Worker launched at the Oregon edge (PDX = Portland International Airport's IATA code). I'll cover this point in more detail in the next section.
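The measurement wrapper itself isn't shown above; as a sketch, the Date.now() bracketing I describe looks roughly like the following (the `timed` helper name is my own, not part of any API):

```typescript
// Hypothetical helper illustrating the Date.now() bracketing used for the
// measurements above: it times an async operation such as a DSQL query.
async function timed<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await fn();
  return { result, ms: Date.now() - start };
}

// Inside the fetch handler, usage would look like:
//   const { result, ms } = await timed(() => sql`SELECT current_timestamp as ts`);
//   console.log(`query took ${ms}ms`);
```

Because the timer wraps connection establishment as well as query execution when placed around the first query, this is consistent with the "full handshake + token generation + query" figures in the table.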

Considerations for Workers × DSQL

AWS Credential Management

While Lambda transparently handles IAM authentication through execution roles, Workers require explicitly providing access keys since they run outside the AWS environment. For this test, I used temporary credentials (access key + session token) issued by STS assume-role for convenience. However, since STS temporary tokens expire after a maximum of 12 hours, they're not suitable for static registration in Cloudflare Secrets. For production, I recommend setting up an IAM user with minimal DSQL connection privileges and securely managing those access keys as secrets.
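For reference, registering those keys as Worker secrets is done with `wrangler secret put` (the value is prompted interactively and stored encrypted). The variable names below match the `Env` interface used in this article:

```shell
# Store long-lived IAM user credentials as encrypted Worker secrets
wrangler secret put AWS_ACCESS_KEY_ID
wrangler secret put AWS_SECRET_ACCESS_KEY
# The endpoint is not strictly sensitive, but a secret keeps it out of wrangler.toml
wrangler secret put DSQL_ENDPOINT
```

With long-lived IAM user keys there is no session token, so `AWS_SESSION_TOKEN` can simply be omitted from the credentials object passed to the connector.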

Edge Location Cannot Be Controlled

Workers launch at the edge closest to the request origin; you cannot control which edge your application runs on. For this test, I called the Worker from an EC2 instance in Oregon, so the Worker launched at the Oregon edge (PDX) and could connect to DSQL in the same region.

However, if a user from Japan accesses the service, the Worker would launch at the Tokyo edge, adding Pacific-crossing RTT to the us-west-2 DSQL connection. With Lambda, you could "deploy to Tokyo region and connect to Tokyo DSQL" to fix the region, but Workers doesn't allow this. Your design needs to account for potential latency increases due to distance from DSQL.

Connection Reuse Is Not Possible

It might seem natural to think "Why not store the connection in a global variable and reuse it?", but this doesn't work with Workers:

Cannot perform I/O on behalf of a different request.
I/O objects (such as streams, request/response bodies, and others)
created in the context of one request handler cannot be accessed
from a different request's handler.

Workers isolate I/O contexts per request, preventing TCP sockets created in one request from being reused in another. Each request must establish a new connection (TCP/TLS handshake + IAM token generation).

In the previous Lambda article, connection reuse was the key performance factor. Reuse improved INSERT times to under 10ms and maintained high throughput, but Workers cannot benefit from this.

Another important consideration is DSQL's connection rate limit (100 connections per second, a hard limit that cannot be increased) and how it interacts with Workers. Since Workers create a 1:1 ratio between requests and new DB connections, if traffic exceeds 100 rps, connections will be rejected by DSQL. For high-frequency access scenarios, consider architectures with caching or asynchronous processing between Workers and DSQL rather than direct connections.
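One mitigation pattern worth noting: while I/O objects cannot cross requests, plain data stored in the isolate's global scope can. Below is a minimal sketch (the `TtlCache` helper is hypothetical, not a Workers API) of caching query results so that cache hits consume none of the 100 connections/second budget:

```typescript
// Sketch: cache query *results*, not connections, in the isolate's global
// scope. Plain data survives across requests served by the same isolate,
// unlike TCP sockets, which are bound to the request that created them.
type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();
  // `now` is injectable to make the cache testable; defaults to Date.now.
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.store.delete(key); // lazily evict stale entries
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}

// In a Worker, a module-scope instance would let cache hits skip the
// connect-per-request cost entirely; only misses open a DSQL connection.
```

This only helps read-heavy paths, and the cache is per isolate (a cold isolate starts empty), so it reduces pressure on the connection limit rather than eliminating it.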

Reference: Implementation Without the Connector

You can also generate tokens directly with @aws-sdk/dsql-signer and pass them to postgres.js without using the official connector:

import { DsqlSigner } from "@aws-sdk/dsql-signer";
import postgres from "postgres";

const signer = new DsqlSigner({
  hostname: env.DSQL_ENDPOINT,
  region: "us-west-2",
  credentials: {
    accessKeyId: env.AWS_ACCESS_KEY_ID,
    secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
    sessionToken: env.AWS_SESSION_TOKEN,
  },
});
const token = await signer.getDbConnectAdminAuthToken();

const sql = postgres({
  host: env.DSQL_ENDPOINT,
  port: 5432,
  database: "postgres",
  username: "admin",
  password: token,
  ssl: "require",
});

This also works fine in the Workers runtime. SigV4 token generation is pure signature computation with no Node-specific dependencies, so it runs in Workers without compatibility issues. However, the official connector eliminates the need to handle token generation and to tune parameters like idleTimeoutMillis / maxLifetimeSeconds, so I recommend using it unless you have a specific reason not to.

Conclusion

I've confirmed that Aurora DSQL can be successfully connected to using a combination of Workers' TCP sockets and the official connector.

However, Workers has different constraints than Lambda: you can't control the edge location, and connection reuse isn't possible, making it easier to hit DSQL's connection limit (100 connections/second). Given these characteristics, direct DSQL connections from Workers are best suited for use cases with less stringent latency requirements, such as periodic synchronization via Cron Triggers or background processing.

For user request paths requiring low latency or high throughput, more practical architectures would involve synchronizing DSQL data to D1 via Cron Triggers to leverage edge reads, or utilizing Workers' page caching.
