Amazon S3 Files is GA — Mounting S3 Buckets as a File System, Compared with EFS

Amazon S3 Files, announced in April 2026, is a new service that lets you mount S3 buckets via NFS v4.2. It can be accessed from EC2, Lambda, EKS, and ECS, and lets existing legacy applications use S3 without code changes.
2026.04.08


On April 7, 2026, general availability (GA) of Amazon S3 Files was announced.

https://aws.amazon.com/about-aws/whats-new/2026/04/amazon-s3-files/

This is a new feature that lets you mount S3 buckets as a file system. I'll show how to create an S3 bucket, prepare an IAM role, build an S3 Files file system using the AWS CLI, and mount it on Amazon Linux 2023.
After mounting, I'll verify file operations and locking behavior, and share measurements of I/O performance and synchronization lag compared to EFS.

What is S3 Files

S3 Files is a service that provides a file system interface for data in S3 buckets. Built on Amazon EFS, it can be mounted from EC2, Lambda, EKS, and ECS using the NFS v4.2 protocol.

  • Access existing data in S3 buckets directly as files
  • Writes to the file system are automatically synchronized to the S3 bucket
  • Only actively used data is cached in high-performance storage
  • Supports simultaneous access from up to 25,000 compute resources

Test Environment

Item               Value
OS                 Amazon Linux 2023 (aarch64)
Instance type      r7gd.medium
Region / AZ        us-west-2 / us-west-2a
VPC                Default VPC
AWS CLI            2.34.26
amazon-efs-utils   3.0.0

Step 1: Update AWS CLI

To use the S3 Files CLI commands (aws s3files), AWS CLI 2.34 or later is required. I updated to the latest version.

curl -s "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o /tmp/awscliv2.zip
cd /tmp && unzip -qo awscliv2.zip
sudo ./aws/install --update

For x86_64 environments, change the URL to https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip.

After updating, confirm that aws s3files help displays the list of subcommands.

$ aws --version
aws-cli/2.34.26 Python/3.14.3 Linux/6.1.161-183.298.amzn2023.aarch64

$ aws s3files help
...
AVAILABLE COMMANDS
       o create-file-system
       o create-mount-target
       o get-file-system
       o list-file-systems
       o list-mount-targets
       ...

Step 2: Create an S3 Bucket

S3 Files requires an S3 bucket with versioning enabled. I created a bucket and enabled versioning.

BUCKET_NAME="s3files-demo-$(date +%Y%m%d)-${RANDOM}"
REGION="us-west-2"

# Create bucket
aws s3api create-bucket \
  --bucket "${BUCKET_NAME}" \
  --region "${REGION}" \
  --create-bucket-configuration LocationConstraint="${REGION}"

# Enable versioning
aws s3api put-bucket-versioning \
  --bucket "${BUCKET_NAME}" \
  --versioning-configuration Status=Enabled

Step 3: Create IAM Role

I created an IAM role for S3 Files to access the bucket.

Policy Creation

Trust Policy

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

cat > /tmp/s3files-trust-policy.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "elasticfilesystem.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "${ACCOUNT_ID}"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3files:${REGION}:${ACCOUNT_ID}:file-system/*"
                }
            }
        }
    ]
}
EOF

aws iam create-role \
  --role-name S3FilesRole-demo \
  --assume-role-policy-document file:///tmp/s3files-trust-policy.json

Inline Policy

cat > /tmp/s3files-policy.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3BucketPermissions",
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:ListBucketVersions"],
            "Resource": "arn:aws:s3:::${BUCKET_NAME}",
            "Condition": {
                "StringEquals": { "aws:ResourceAccount": "${ACCOUNT_ID}" }
            }
        },
        {
            "Sid": "S3ObjectPermissions",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload", "s3:DeleteObject*",
                "s3:GetObject*", "s3:List*", "s3:PutObject*"
            ],
            "Resource": "arn:aws:s3:::${BUCKET_NAME}/*",
            "Condition": {
                "StringEquals": { "aws:ResourceAccount": "${ACCOUNT_ID}" }
            }
        },
        {
            "Sid": "UseKmsKeyWithS3Files",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey", "kms:Encrypt", "kms:Decrypt",
                "kms:ReEncryptFrom", "kms:ReEncryptTo"
            ],
            "Condition": {
                "StringLike": {
                    "kms:ViaService": "s3.${REGION}.amazonaws.com",
                    "kms:EncryptionContext:aws:s3:arn": [
                        "arn:aws:s3:::${BUCKET_NAME}",
                        "arn:aws:s3:::${BUCKET_NAME}/*"
                    ]
                }
            },
            "Resource": "arn:aws:kms:${REGION}:${ACCOUNT_ID}:*"
        },
        {
            "Sid": "EventBridgeManage",
            "Effect": "Allow",
            "Action": [
                "events:DeleteRule", "events:DisableRule", "events:EnableRule",
                "events:PutRule", "events:PutTargets", "events:RemoveTargets"
            ],
            "Condition": {
                "StringEquals": { "events:ManagedBy": "elasticfilesystem.amazonaws.com" }
            },
            "Resource": ["arn:aws:events:*:*:rule/DO-NOT-DELETE-S3-Files*"]
        },
        {
            "Sid": "EventBridgeRead",
            "Effect": "Allow",
            "Action": [
                "events:DescribeRule", "events:ListRuleNamesByTarget",
                "events:ListRules", "events:ListTargetsByRule"
            ],
            "Resource": ["arn:aws:events:*:*:rule/*"]
        }
    ]
}
EOF

aws iam put-role-policy \
  --role-name S3FilesRole-demo \
  --policy-name S3FilesBucketAccess \
  --policy-document file:///tmp/s3files-policy.json

The policy content complies with the official documentation.

Additionally, the EC2 instance profile requires the following permissions:

  • AmazonS3FilesClientFullAccess (or AmazonS3FilesClientReadOnlyAccess)
  • Direct read access to the S3 bucket (for optimized read performance)
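As a sketch, these permissions can be granted with two CLI calls. The role name here is hypothetical, and the managed policy ARN assumes the standard arn:aws:iam::aws:policy/ prefix for AWS managed policies:

```shell
# Hypothetical name -- replace with your EC2 instance's actual role
INSTANCE_ROLE="s3files-demo-ec2-role"

# Attach the S3 Files client managed policy (ARN prefix assumed)
aws iam attach-role-policy \
  --role-name "${INSTANCE_ROLE}" \
  --policy-arn "arn:aws:iam::aws:policy/AmazonS3FilesClientFullAccess"

# Grant direct read access to the bucket created in Step 2
aws iam put-role-policy \
  --role-name "${INSTANCE_ROLE}" \
  --policy-name S3FilesDirectRead \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::'"${BUCKET_NAME}"'",
        "arn:aws:s3:::'"${BUCKET_NAME}"'/*"
      ]
    }]
  }'
```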

Step 4: Create File System

ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/S3FilesRole-demo"

aws s3files create-file-system \
  --region "${REGION}" \
  --bucket "arn:aws:s3:::${BUCKET_NAME}" \
  --role-arn "${ROLE_ARN}"

Note the fileSystemId from the response.

FS_ID="fs-0123456789abcdef0"  # Replace with the fileSystemId from output

Step 5: Create Mount Target

I created a mount target in the same subnet as the EC2 instance.

SUBNET_ID="subnet-xxxxxxxx"  # Same subnet as EC2

aws s3files create-mount-target \
  --region "${REGION}" \
  --file-system-id "${FS_ID}" \
  --subnet-id "${SUBNET_ID}"

I waited a few minutes for it to become available. You can check the status with:

aws s3files list-mount-targets \
  --region "${REGION}" \
  --file-system-id "${FS_ID}" \
  --query 'mountTargets[0].{status:status,ip:ipv4Address}'
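Rather than re-running the check by hand, a small polling loop can wait for readiness. A minimal sketch, assuming the status field eventually reads "available" (by analogy with EFS mount targets):

```shell
# Fetch the first mount target's status (same query as above)
mt_status() {
  aws s3files list-mount-targets \
    --region "${REGION}" \
    --file-system-id "${FS_ID}" \
    --query 'mountTargets[0].status' --output text
}

# Poll every 10 seconds, giving up after ~10 minutes
wait_available() {
  for _ in $(seq 1 60); do
    if [ "$(mt_status)" = "available" ]; then
      echo "mount target available"
      return 0
    fi
    sleep 10
  done
  echo "timed out waiting for mount target" >&2
  return 1
}
```

Running wait_available then blocks until the mount target is ready or the timeout expires.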

Step 6: Configure Security Group

The mount target's security group needed to allow NFS (TCP 2049) inbound from EC2.

# Check mount target SG (get from ENI)
MT_IP="172.31.xx.xx"  # Mount target's IP
aws ec2 describe-network-interfaces \
  --filters "Name=addresses.private-ip-address,Values=${MT_IP}" \
  --query 'NetworkInterfaces[0].Groups[0].GroupId' \
  --output text

# Allow TCP 2049 from EC2's SG to mount target's SG
MT_SG="sg-xxxxxxxxx"   # Mount target's SG
EC2_SG="sg-yyyyyyyyy"  # EC2's SG

aws ec2 authorize-security-group-ingress \
  --group-id "${MT_SG}" \
  --ip-permissions '[{
    "FromPort": 2049,
    "ToPort": 2049,
    "IpProtocol": "tcp",
    "UserIdGroupPairs": [{"GroupId": "'${EC2_SG}'", "Description": "NFS from EC2 for S3 Files"}]
  }]'

Step 7: Install amazon-efs-utils

The S3 Files mount helper (mount.s3files) is included in amazon-efs-utils 3.0.0 or later.

# Add efs-utils repository
sudo bash -c 'cat > /etc/yum.repos.d/efs-utils.repo << EOF
[efs-utils]
name=efs-utils repository
baseurl=https://amazon-efs-utils.aws.com/repo/rpm/amazon/2023
priority=1
enabled=1
repo_gpgcheck=1
type=rpm
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-efs-utils.gpg
EOF'

sudo curl -fsSL https://amazon-efs-utils.aws.com/efs-utils-armored.gpg \
  -o /etc/pki/rpm-gpg/RPM-GPG-KEY-efs-utils.gpg
sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-efs-utils.gpg

# Install
sudo dnf install -y amazon-efs-utils-3.0.0

# Install botocore for root (used for IAM authentication)
sudo python3 -m pip install botocore

$ rpm -q amazon-efs-utils
amazon-efs-utils-3.0.0-1.amzn2023.aarch64

$ which mount.s3files
/sbin/mount.s3files

Step 8: Mount and Verification

sudo mkdir -p /mnt/s3files
sudo mount -t s3files ${FS_ID}:/ /mnt/s3files

$ df -hT /mnt/s3files
Filesystem   Type  Size  Used Avail Use% Mounted on
127.0.0.1:/  nfs4  8.0E     0  8.0E   0% /mnt/s3files

8.0E (8 exabytes) reflects S3's virtually unlimited storage capacity.
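To survive reboots, an /etc/fstab entry should work with the s3files mount type. This is an assumption by analogy with the efs mount helper, so verify the supported options against the S3 Files documentation:

```shell
# Assumed fstab syntax, by analogy with the efs mount helper;
# _netdev defers mounting until the network is up
echo "${FS_ID}:/ /mnt/s3files s3files _netdev 0 0" | sudo tee -a /etc/fstab

# -f (fake) with -a -v verifies the entry without actually mounting
sudo mount -fav
```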

I created a file and verified its reflection to S3.

echo "Hello from S3 Files!" > /mnt/s3files/test.txt
cat /mnt/s3files/test.txt
Hello from S3 Files!

# Check on S3 side (synchronization may take ~1 minute)
sleep 60
aws s3 ls s3://${BUCKET_NAME}/
2026-04-08 07:45:12         21 test.txt

I confirmed that writes to the file system were automatically synchronized to the S3 bucket.

Brief Performance Comparison with EFS

S3 Files is built on EFS. I mounted both S3 Files and EFS (Elastic throughput mode) on the same instance for a simple comparison.

I/O Performance

Each test was run 3 times, and I used the median value. Page cache was cleared before read tests.

Test               S3 Files                  EFS                       Difference
1GB write          4.283s (about 239 MB/s)   4.224s (about 242 MB/s)   Nearly identical
1GB read           1.601s (about 639 MB/s)   1.474s (about 694 MB/s)   S3 Files about 9% slower
1KB×1,000 writes   11.265s                   9.048s                    S3 Files about 24% slower
1KB×1,000 reads    4.916s                    3.665s                    S3 Files about 34% slower

Performance for large file writes was almost identical. S3 Files is designed to stream reads larger than 1MB directly from S3, which is why the difference in large file reads is small. EFS outperformed for multiple small file operations, but S3 Files still completed writes at about 11ms per file, which is sufficient for practical use.
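For reproducibility, here is a rough sketch of how such measurements can be made. The dd parameters, the small-file loop, and the TARGET variable are my assumptions, not the exact commands behind the numbers above:

```shell
# TARGET is the mount point under test (/mnt/s3files or the EFS mount)
TARGET=${TARGET:-/mnt/s3files}

# Clear the kernel page cache so reads hit the file system, not memory
drop_caches() { sync; echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null; }

# 1 GB sequential write (flushed via conv=fsync), then a cold read;
# dd prints elapsed time and throughput for each
bench_large() {
  dd if=/dev/zero of="${TARGET}/bench.dat" bs=1M count=1024 conv=fsync
  drop_caches
  dd if="${TARGET}/bench.dat" of=/dev/null bs=1M
}

# 1,000 x 1 KB file writes, timed with date
bench_small() {
  local start
  start=$(date +%s.%N)
  for i in $(seq 1 1000); do
    head -c 1024 /dev/zero > "${TARGET}/small_${i}.dat"
  done
  sync
  awk -v s="${start}" -v e="$(date +%s.%N)" \
    'BEGIN { printf "1KB x 1000 writes: %.3fs\n", e - s }'
}

if mountpoint -q "${TARGET}"; then
  bench_large
  bench_small
fi
```

Running it once against /mnt/s3files and once against the EFS mount (e.g. TARGET=/mnt/efs) yields comparable numbers.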

Synchronization Lag

Synchronization between S3 Files and S3 buckets happens automatically in both directions. According to official documentation, writes to the file system aggregate changes for up to 60 seconds before syncing with a single S3 PUT, while direct changes to S3 buckets are detected through S3 Event Notifications and reflected in the file system.

Direction          Measurement Method                         Median
File system → S3   After write, polling with head-object      About 63-66 seconds
S3 → File system   After s3 cp, polling for file appearance   About 30 seconds

The file system → S3 direction matched the official documentation's specification of "aggregating changes for up to 60 seconds." The S3 → file system direction varied depending on S3 Event Notifications delivery intervals.
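The file system → S3 measurement can be sketched with a small polling helper. The 2-second interval and the file name are my assumptions:

```shell
# Poll a command until it succeeds, then print the elapsed seconds
wait_until() {
  local start
  start=$(date +%s)
  until "$@" >/dev/null 2>&1; do
    sleep 2
  done
  echo $(( $(date +%s) - start ))
}

# File system -> S3: write a file, then poll head-object until it appears
if mountpoint -q /mnt/s3files; then
  echo "sync test $(date +%s)" > /mnt/s3files/synctest.txt
  SECS=$(wait_until aws s3api head-object \
    --bucket "${BUCKET_NAME}" --key synctest.txt)
  echo "file system -> S3 lag: about ${SECS}s"
fi
```

The opposite direction works the same way: aws s3 cp to the bucket, then poll test -f on the mount.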

S3 Object Metadata

Objects synchronized to S3 retained POSIX permissions, owner, and timestamps as S3 object metadata.

Metadata:
  user-agent: aws-s3-files
  file-permissions: 0100644
  file-owner: 0
  file-group: 0
  file-mtime: 1775634941827000000ns

File Lock Behavior Verification

S3 Files supports NFS v4.2 file locks. I verified the behavior of exclusive locks using flock.

FILE=/mnt/s3files/locktest.txt
echo "test" > $FILE

# Process 1: Acquire exclusive lock and hold for 3 seconds
(
  flock -x 200
  echo "P1 lock acquired at $(date +%T)"
  sleep 3
  echo "P1 lock released at $(date +%T)"
) 200>$FILE &

sleep 0.5

# Process 2: Request exclusive lock (wait until P1 releases)
(
  echo "P2 waiting at $(date +%T)"
  flock -x 200
  echo "P2 lock acquired at $(date +%T)"
) 200>$FILE &

wait
P1 lock acquired at 08:15:48
P2 waiting at 08:15:48
P1 lock released at 08:15:51
P2 lock acquired at 08:15:51

Both exclusive locks and non-blocking mode (flock -n) worked as expected.
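The non-blocking check can be reproduced like this; flock -n exits immediately with a non-zero status when the lock is already held:

```shell
FILE=/mnt/s3files/locktest.txt

if mountpoint -q /mnt/s3files; then
  # Hold an exclusive lock in the background for 3 seconds
  ( flock -x 200; sleep 3 ) 200>"$FILE" &
  sleep 0.5

  # -n: fail immediately instead of blocking until release
  if ! flock -n -x "$FILE" -c 'true'; then
    echo "lock busy, not waiting"
  fi
  wait
fi
```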

S3 API Changes Bypass Locks

I tested overwriting the same file via S3 API while holding an exclusive lock in the file system.

Timing                                File System             S3 API
Before lock                           original content        original content
After S3 API overwrite while locked   OVERWRITTEN BY S3 API   OVERWRITTEN BY S3 API
After lock release                    OVERWRITTEN BY S3 API   OVERWRITTEN BY S3 API

The S3 API completely ignores file system locks, overwriting the file successfully even while the lock was held. The changes were immediately visible to the process holding the lock.

Summary

I built S3 Files from scratch using AWS CLI, mounted it, compared it with EFS, and verified file lock behavior.

Test Results

I/O performance was nearly equivalent to EFS, with no difference in large file writes and sufficient performance for practical use even with multiple small file operations. Synchronization to S3 completed in about 1 minute, and updates from S3 to the file system were reflected in about 30 seconds. NFS file locks worked correctly, but operations from S3 API bypassed the locks, requiring caution for workloads that update data through both paths.

Use Cases for S3 Files

Its greatest value is enabling S3 usage with minimal changes for workloads that cannot easily adopt the S3 SDK or CLI natively. Existing NFS v4.2-compatible applications can work with data on S3 simply by changing the mount point, with no code changes.

Because S3 versioning is required, change history is preserved automatically, and you benefit from S3's eleven nines of durability. Since only actively used data is cached in high-performance storage, this can reduce costs compared to configurations that separately manage data in both S3 and a file system.

When Higher NFS Performance is Needed

When adopting S3 Files, thorough testing is recommended to ensure synchronization lag and file lock constraints meet your workload requirements. For workloads handling many small files, be aware that API request costs for S3 synchronization may accumulate. If latency, throughput, or cost don't meet requirements, Amazon FSx for NetApp ONTAP is a strong alternative. It supports multiple protocols (NFS/SMB/iSCSI) and offers automatic tiering to S3 (FabricPool), combining high-performance file access with S3 cost-efficiency.
