Avoiding Docker Hub 429 Errors in CodeBuild with ECR Public Gallery
I resolved Docker Hub's 429 (Too Many Requests) errors during docker build in CodeBuild by combining the Dockerfile ARG feature with the ECR Public Gallery.
While ECR Pull Through Cache with Docker Hub credentials would be the standard approach, I wanted to minimize credential management, so I adopted a method to switch registries between local development and CI using build arguments.
I've also included a ready-to-use CloudFormation template.
The Problem
In the Build stage of CodePipeline, the following error occurred during docker build, causing the build to fail:
ERROR: failed to resolve source metadata for docker.io/library/node:24-alpine:
429 Too Many Requests
toomanyrequests: You have reached your unauthenticated pull rate limit.
Docker Hub limits anonymous users to 100 pulls per 6 hours per IP address. Since CodeBuild reaches Docker Hub from AWS-managed shared IP addresses, it shares that limit with other customers, making it easy to hit.
Solution
Use Dockerfile's ARG feature to switch registry prefixes via --build-arg and leverage ECR Public Gallery.
Dockerfile
ARG REGISTRY=""
FROM ${REGISTRY}node:24-alpine
Local Development (Docker Hub)
# REGISTRY not specified → default "" → pull from Docker Hub
docker build -t myapp .
CI Environment (ECR Public)
# Inject ECR Public prefix with --build-arg
docker build --build-arg REGISTRY="public.ecr.aws/docker/library/" -t myapp .
That's all there is to it. There is no need to pre-pull images or retag them (docker tag); Docker's standard ARG feature alone solves the problem.
How It Works
[Local Development] REGISTRY="" (default)
docker build → FROM node:24-alpine → Docker Hub
[CI Environment] REGISTRY="public.ecr.aws/docker/library/"
docker build --build-arg REGISTRY=... → FROM public.ecr.aws/docker/library/node:24-alpine → ECR Public
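The substitution above is ordinary variable prefixing; a quick shell sketch (the variable name mirrors the build arg) shows how the two settings resolve:

```shell
# Default (local development): empty prefix -> short name -> Docker Hub
REGISTRY=""
echo "${REGISTRY}node:24-alpine"    # node:24-alpine

# CI override: ECR Public prefix -> fully qualified reference
REGISTRY="public.ecr.aws/docker/library/"
echo "${REGISTRY}node:24-alpine"    # public.ecr.aws/docker/library/node:24-alpine
```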
- Implemented using only Docker's standard features (ARG), no shell script hacks needed
- Works generically with any type of base image (node, python, nginx, etc.)
- No buildspec modifications needed when changing Node versions
- Works safely with --platform specification and multi-stage builds
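On the multi-stage point: an ARG declared before the first FROM is in scope for every FROM line, so a single REGISTRY build-arg covers all stages. A minimal sketch (the file name and stage contents are illustrative, not part of the demo):

```shell
# Generate a hypothetical multi-stage Dockerfile; the quoted heredoc keeps
# ${REGISTRY} literal so Docker, not the shell, expands it at build time.
cat > Dockerfile.demo <<'EOF'
ARG REGISTRY=""
FROM ${REGISTRY}node:24-alpine AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM ${REGISTRY}nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EOF

# Both stages pick up the same prefix from the one build-arg:
grep -c 'FROM ${REGISTRY}' Dockerfile.demo    # -> 2
```

Note that using REGISTRY inside a stage (e.g. in a RUN) would require re-declaring it with a bare `ARG REGISTRY` after that stage's FROM; for FROM lines alone, the top-level declaration is enough.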
Demonstration
I use a demo CloudFormation template to create an ECR repository, a CodeBuild project, and a CodePipeline together, then confirm that the build and push work.
File Structure
demo-ecr-ratelimit/
├── template.yaml # CloudFormation (ECR + CodeBuild + CodePipeline)
└── Dockerfile # ARG REGISTRY="" + FROM ${REGISTRY}node:24-alpine
Dockerfile
ARG REGISTRY=""
FROM ${REGISTRY}node:24-alpine
RUN echo "Hello from node:$(node -v)" > /hello.txt
CMD ["cat", "/hello.txt"]
CloudFormation Template Explanation
| Resource | Description |
|---|---|
| ECRRepository | For storing application images. Lifecycle policy retains the latest 3 images |
| CodeBuildProject | ARM64 build environment. Specifies ECR Public with --build-arg REGISTRY |
| Pipeline | 2-stage structure: Source (S3) → Build |
| IAM Roles | For CodeBuild and CodePipeline |
The core part of the buildspec is just this:
build:
commands:
# ★ Inject ECR Public prefix with --build-arg
- docker build --build-arg REGISTRY="public.ecr.aws/docker/library/" -t ${IMAGE_NAME} .
Deployment
REGION="us-west-2"
# Create stack
aws cloudformation deploy \
--stack-name demo-ecr-ratelimit \
--template-file template.yaml \
--capabilities CAPABILITY_IAM \
--region ${REGION}
# Get source bucket name
BUCKET=$(aws cloudformation describe-stacks \
--stack-name demo-ecr-ratelimit \
--query 'Stacks[0].Outputs[?OutputKey==`SourceBucket`].OutputValue' \
--output text --region ${REGION})
# Zip Dockerfile and upload
zip source.zip Dockerfile
aws s3 cp source.zip s3://${BUCKET}/source.zip --region ${REGION}
# Run pipeline
aws codepipeline start-pipeline-execution \
--name demo-ecr-ratelimit-pipeline \
--region ${REGION}
Verification
In the CodeBuild logs, you can confirm that the image is being retrieved via ECR Public:
[Container] Running command docker build --build-arg REGISTRY="public.ecr.aws/docker/library/" -t ${IMAGE_NAME} .
#2 [internal] load metadata for public.ecr.aws/docker/library/node:24-alpine
#2 DONE 0.4s ← Retrieved from ECR Public (not Docker Hub)
#4 [1/2] FROM public.ecr.aws/docker/library/node:24-alpine@sha256:01743339...
#4 resolve public.ecr.aws/docker/library/node:24-alpine done
#4 DONE 3.0s ← Stable pull without rate limiting
Hello from node:v24.14.1
Pushed 123456789012.dkr.ecr.us-west-2.amazonaws.com/demo-ecr-ratelimit-app:20260409105429
The FROM reference resolves to public.ecr.aws; Docker Hub is never contacted.
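The timestamp tag in the push line comes from the buildspec's post_build phase; a sketch of the same tag derivation:

```shell
# The buildspec tags each image with the build time, yielding a 14-digit tag
# (e.g. 20260409105429) that sorts chronologically as a plain string.
IMAGE_TAG=$(date +%Y%m%d%H%M%S)
echo "${IMAGE_TAG}"
```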
Confirming Automatic Version Following
I changed the Dockerfile to node:23-alpine and ran it again. The buildspec was not changed at all.
ARG REGISTRY=""
- FROM ${REGISTRY}node:24-alpine
+ FROM ${REGISTRY}node:23-alpine
#2 [internal] load metadata for public.ecr.aws/docker/library/node:23-alpine
#2 DONE 0.3s ← Automatically retrieved version 23
Hello from node:v23.11.1
| Run | Dockerfile | buildspec changes | Result |
|---|---|---|---|
| 1st run | FROM ${REGISTRY}node:24-alpine | - | Hello from node:v24.14.1 |
| 2nd run | FROM ${REGISTRY}node:23-alpine | No change | Hello from node:v23.11.1 |
Note: How to Check Available Versions in ECR Public
# Check for specific version
docker manifest inspect public.ecr.aws/docker/library/node:23-alpine
# Also viewable on ECR Public Gallery webpage
# https://gallery.ecr.aws/docker/library/node
Cleanup
Before deleting the stack, empty the versioned artifact bucket (all objects and object versions); otherwise CloudFormation cannot delete the bucket and stack deletion fails.
aws ecr delete-repository \
--repository-name demo-ecr-ratelimit-app \
--force --region ${REGION}
aws cloudformation delete-stack \
--stack-name demo-ecr-ratelimit \
--region ${REGION}
Comparison of Docker Hub Rate Limit Avoidance Methods
| Method | Benefits | Drawbacks |
|---|---|---|
| ARG + ECR Public (this article) | Free, Docker standard feature, versatile | Requires adding ARG REGISTRY="" to Dockerfile |
| ECR Pull Through Cache | Automatic caching, transparent, standard approach | Requires registering Docker Hub credentials in Secrets Manager |
| Running CodeBuild in VPC (via NAT Gateway) | Fixed EIP prevents sharing rate limits with other users | NAT Gateway costs (about $45/month+), requires VPC setup |
| Docker Hub paid plan | Rate limits removed | Monthly cost |
| Docker Hub authentication via Secrets Manager | Relaxed to 200 pulls/6h | Requires credential management, not complete avoidance |
| Copying to private ECR | Completely independent of Docker Hub | Requires operational management of image updates |
Summary
I introduced combining Dockerfile's ARG feature with ECR Public Gallery as a method to avoid Docker Hub's rate limits (429) in CodeBuild. It can be implemented using only Docker's standard features and works generically regardless of base image type or version. If you're struggling with Docker Hub rate limits in CodeBuild, please give it a try.
References
- Advice for customers dealing with Docker Hub rate limits, and a Coming Soon announcement | AWS Blog
- Resolve "error pulling image configuration: toomanyrequests" errors in CodeBuild | AWS re:Post
- Amazon ECR Public Gallery
- Docker Hub rate limiting
- ECR Pull Through Cache - Docker Hub support
Full CloudFormation Template
AWSTemplateFormatVersion: "2010-09-09"
Description: >
Demo: Docker Hub Rate Limit avoidance - Safely implement CI/CD with ARG REGISTRY + ECR Public
Resources:
ECRRepository:
Type: AWS::ECR::Repository
Properties:
RepositoryName: !Sub "${AWS::StackName}-app"
LifecyclePolicy:
LifecyclePolicyText: |
{"rules":[{"rulePriority":1,"selection":{"tagStatus":"any","countType":"imageCountMoreThan","countNumber":3},"action":{"type":"expire"}}]}
ArtifactBucket:
Type: AWS::S3::Bucket
DeletionPolicy: Delete
Properties:
VersioningConfiguration:
Status: Enabled
CodeBuildServiceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service: codebuild.amazonaws.com
Action: sts:AssumeRole
Policies:
- PolicyName: Policy
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents
Resource: !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*"
- Effect: Allow
Action: ecr:GetAuthorizationToken
Resource: "*"
- Effect: Allow
Action:
- ecr:BatchCheckLayerAvailability
- ecr:PutImage
- ecr:InitiateLayerUpload
- ecr:UploadLayerPart
- ecr:CompleteLayerUpload
Resource: !GetAtt ECRRepository.Arn
- Effect: Allow
Action:
- s3:GetObject
- s3:GetObjectVersion
- s3:PutObject
Resource: !Sub "${ArtifactBucket.Arn}/*"
CodeBuildProject:
Type: AWS::CodeBuild::Project
Properties:
Name: !Sub "${AWS::StackName}-build"
ServiceRole: !GetAtt CodeBuildServiceRole.Arn
TimeoutInMinutes: 10
Environment:
Type: ARM_CONTAINER
ComputeType: BUILD_GENERAL1_SMALL
Image: aws/codebuild/amazonlinux-aarch64-standard:3.0
PrivilegedMode: true
EnvironmentVariables:
- Name: ECR_REPOSITORY_URI
Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${ECRRepository}"
- Name: IMAGE_NAME
Value: !Sub "${AWS::StackName}-app"
Source:
Type: CODEPIPELINE
BuildSpec: |
version: 0.2
phases:
pre_build:
commands:
- aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin ${ECR_REPOSITORY_URI}
build:
commands:
# ★ Inject ECR Public prefix with --build-arg
- docker build --build-arg REGISTRY="public.ecr.aws/docker/library/" -t ${IMAGE_NAME} .
- docker run --rm ${IMAGE_NAME}
post_build:
commands:
- IMAGE_TAG=$(date +%Y%m%d%H%M%S)
- docker tag ${IMAGE_NAME}:latest ${ECR_REPOSITORY_URI}:${IMAGE_TAG}
- docker push ${ECR_REPOSITORY_URI}:${IMAGE_TAG}
- echo "Pushed ${ECR_REPOSITORY_URI}:${IMAGE_TAG}"
Artifacts:
Type: CODEPIPELINE
CodePipelineServiceRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service: codepipeline.amazonaws.com
Action: sts:AssumeRole
Policies:
- PolicyName: Policy
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- s3:GetObject
- s3:GetObjectVersion
- s3:GetBucketVersioning
- s3:PutObject
Resource:
- !Sub "${ArtifactBucket.Arn}"
- !Sub "${ArtifactBucket.Arn}/*"
- Effect: Allow
Action:
- codebuild:BatchGetBuilds
- codebuild:StartBuild
Resource: !GetAtt CodeBuildProject.Arn
Pipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: !Sub "${AWS::StackName}-pipeline"
RoleArn: !GetAtt CodePipelineServiceRole.Arn
PipelineType: V2
ArtifactStore:
Type: S3
Location: !Ref ArtifactBucket
Stages:
- Name: Source
Actions:
- Name: S3Source
ActionTypeId:
Category: Source
Owner: AWS
Provider: S3
Version: "1"
Configuration:
S3Bucket: !Ref ArtifactBucket
S3ObjectKey: source.zip
PollForSourceChanges: false
OutputArtifacts:
- Name: SourceCode
- Name: Build
Actions:
- Name: Build
ActionTypeId:
Category: Build
Owner: AWS
Provider: CodeBuild
Version: "1"
Configuration:
ProjectName: !Ref CodeBuildProject
InputArtifacts:
- Name: SourceCode
Outputs:
SourceBucket:
Value: !Ref ArtifactBucket
PipelineName:
Value: !Ref Pipeline
ECRRepositoryUri:
Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${ECRRepository}"