AWS Interconnect - multicloud, which directly connects AWS and Google Cloud, is now GA!

AWS Interconnect - multicloud, for connecting to Google Cloud, is now generally available! Pricing is a time-based fee based on bandwidth only, with no charges for the amount of data sent, so you can transfer as much as you want!!!
2026.04.15

I'm a big fan of whisky, cigars, and pipes, and I go by Oguri.

AWS Interconnect - multicloud, announced as a preview in November 2025, has finally reached GA after about five months! Alongside it, Google Cloud's Partner Cross-Cloud Interconnect for AWS has also come out of preview. This is a feature I had been looking forward to since hearing about it at AWS re:Invent 2025, so I'd like to introduce it.

What is AWS Interconnect - multicloud?

AWS Interconnect - multicloud is a service that directly connects Amazon VPC and other cloud service providers' (CSP) networks with private high-speed connections. Until now, when building multi-cloud connections, the only options involved significant lead time and operational overhead, such as setting up Direct Connect and routing through colocation facilities or connecting through third-party fabrics.

With AWS Interconnect - multicloud, you only need to prepare a Direct Connect Gateway on the AWS side and a Cloud Router on the Google Cloud side. The private connection can be established in minutes through a simple two-step process of creation and approval. There's no need to worry about customer routers, BGP, or peer IP addresses. The physical infrastructure is pre-provisioned by AWS and Google Cloud, resulting in a quadruple-redundant configuration distributed across 2 or more physical facilities and 4 routers.

Interconnect architecture diagram
Amazon Web Services. "Interconnect architecture diagram". What is AWS Interconnect?. AWS Documentation. https://docs.aws.amazon.com/interconnect/latest/userguide/what-is-interconnect.html, (referenced 2026-04-15).

For more details on the mechanism and architecture, please refer to my previous entries on the preview release.

Changes from Preview to GA

I've summarized the main differences between the preview and GA in a comparison table.

| Item | Preview (November 2025) | GA (April 2026) |
| --- | --- | --- |
| Offering | Public Preview | General Availability |
| Production Traffic | Not recommended | Supported |
| Bandwidth | 1 Gbps only | 1 Gbps to 100 Gbps (select from pre-approved speeds) |
| Pricing | Free | Single pricing structure based on bandwidth and geographic scope |
| Free Tier | Entire connection was free | Starting in May, one free 500 Mbps Interconnect per region |
| Supported Regions | 5 region pairs | 5 region pairs (no change) |
| Target CSP | Google Cloud | Google Cloud (Microsoft Azure planned for late 2026) |

The most significant changes are the ability to select bandwidth from 1 Gbps to 100 Gbps and the clarification of the pricing structure. During the preview, production traffic was not recommended, but now it's officially supported for production use.

Partner Cross-Cloud Interconnect for AWS also appears to be GA

Google Cloud's corresponding feature, Partner Cross-Cloud Interconnect for AWS, also seems to have reached GA as the preview designation has been removed from the documentation. Let's review the differences between this and the existing Cross-Cloud Interconnect.

| Item | Cross-Cloud Interconnect | Partner Cross-Cloud Interconnect for AWS |
| --- | --- | --- |
| Physical provisioning | Required | Not required |
| Physical connections and ports | Required | Not required |
| Connection speeds | 10 Gbps or 100 Gbps | 1 Gbps to 100 Gbps at pre-approved speeds |
| Provisioning time | 1-4 weeks | Minutes to within 1 day |
| Connection initiation direction | Initiated from Google Cloud | Can be initiated from either Google Cloud or AWS |
| Supported CSPs | OCI, AWS, Azure, Alibaba, etc. | AWS only |

Partner Cross-Cloud Interconnect for AWS has a constraint of one transport resource per region per project. If you need multiple Interconnects in the same region, you'll need to consider using separate projects.

Pricing

You need to check pricing on both AWS and Google Cloud. Google Cloud's pricing is straightforward, but AWS's pricing structure is more complex and requires attention.

Google Cloud Pricing Structure

The pricing for Partner Cross-Cloud Interconnect for AWS is summarized in the Cloud Interconnect pricing's Partner Cross-Cloud Interconnect section. Key points are:

  • Hourly billing for the connection transport
  • No data transfer charges for both inbound and outbound traffic
  • Pricing is based on a combination of bandwidth and region (North America / Europe / Asia Pacific / South America), with higher bandwidth and geographically distant regions having higher rates
  • Bandwidths not explicitly listed in the pricing table (e.g., 20 Gbps) are calculated linearly from the previous tier rate (see the example after the table)
| Transport location (hourly rate) | 1 Gbps | 5 Gbps | 10 Gbps | 100 Gbps |
| --- | --- | --- | --- | --- |
| North America | $3.50 | $17.30 | $19.00 | $146.60 |
| Europe | $3.50 | $17.30 | $19.00 | $146.60 |
| Asia Pacific | $5.00 | $24.90 | $26.40 | $196.10 |
| South America | $7.60 | $38.00 | $46.90 | $299.60 |
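
As a made-up illustration of the linear calculation (this is my reading of the pricing note, so please confirm on the official pricing page before budgeting): a 20 Gbps transport in North America would scale from the 10 Gbps rate, roughly 2 × $19.00 = $38.00 per hour.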

If you access from a region different from the connection location, standard inter-region communication charges will apply additionally.

AWS Pricing Structure

A pricing structure has been introduced with GA. Here are several key points:

  • Hourly billing based on bandwidth and automatically assigned pricing tiers
  • No data transfer charges for both inbound and outbound traffic
  • Tiers range from 1 to 5 (Tier 5 being the most expensive)
  • Tiers are determined by the combination of source AWS region for VPC traffic and local AWS region for the Interconnect (generally Tier 1 when access and connection regions are the same)
  • Greater geographical distances result in higher tier assignments
  • Only a single tier is assigned to an Interconnect, with higher tiers covering all routes of lower tiers
  • Billing starts when the Interconnect is created and continues hourly until it's deleted
  • If you're using AWS Cloud WAN, the tier is determined by the highest-tier core network edge (CNE) in the topology, not just the CNE in the Interconnect's local region

500 Mbps Free Tier

Personally, the most exciting part of the GA announcement is that, starting in May, one 500 Mbps local Interconnect will be free in each region. This lets you run PoC-level validation or small-scale development environments without worrying about costs, and it's a powerful incentive for anyone who wants to try multi-cloud connections.

SLA

AWS SLA

As of April 15, 2026, the SLA documentation for AWS Interconnect - multicloud doesn't appear to be published yet. This article will be updated once the official numbers are announced.

Also, like AWS Direct Connect, the scope of responsibility for the SLA is expected to be limited to the AWS side, so you'll need to separately check Google Cloud's Interconnect SLA for Google Cloud-side availability.

Google Cloud SLA

As of April 15, 2026, while Google Cloud Interconnect SLA documentation exists, Partner Cross-Cloud Interconnect doesn't appear to be included in the covered services. This article will be updated once it's officially included.

However, during the walkthrough in the "Let's Try It" section below, the Google Cloud console displayed an SLA of 99.9%.

Supported Regions

The region pairs supported at GA are as follows, unchanged from the preview:

| AWS Region | Google Cloud Location |
| --- | --- |
| us-east-1 US East (Northern Virginia) | us-east4 (Northern Virginia) |
| us-west-1 US West (Northern California) | us-west2 (Los Angeles) |
| us-west-2 US West (Oregon) | us-west1 (Oregon) |
| eu-west-2 Europe (London) | europe-west2 (London) |
| eu-central-1 Europe (Frankfurt) | europe-west3 (Frankfurt) |

As with the preview, cross-region combinations like us-east-1 to us-west-1 are not supported with a single Interconnect. If you want to connect across regions, you'll need an architecture that combines the Interconnect with AWS Cloud WAN.

However, Google Cloud Partner Cross-Cloud Interconnect for AWS's location documentation lists the following, which suggests that Singapore is supported:

| Google Cloud locations | AWS locations |
| --- | --- |
| asia-southeast1 | ap-southeast-1 Asia Pacific (Singapore) |
| europe-west2 | eu-west-2 Europe (London) |
| europe-west3 | eu-central-1 Europe (Frankfurt) |
| us-east4 | us-east-1 US East (Northern Virginia) |
| us-west1 | us-west-2 US West (Oregon) |
| us-west2 | us-west-1 US West (Northern California) |

Let's Try It

Here, we'll connect the Oregon regions (AWS: us-west-2, Google Cloud: us-west1). To keep the setup simple, we'll assume you already have VPC subnets in the Oregon region on both AWS and Google Cloud. A Direct Connect gateway on the AWS side and a Cloud Router on the Google Cloud side are also required; we'll create both in the preparation steps below.

While there was no console available on Google Cloud during the preview, it seems a console has been prepared with GA.

AWS Preparation

Existing Configuration

| Public / Private | Subnet Name | Availability Zone | CIDR |
| --- | --- | --- | --- |
| Public | interconnect-subnet-public1-us-west-2a | us-west-2a | 10.0.0.0/20 |
| Public | interconnect-subnet-public2-us-west-2b | us-west-2b | 10.0.16.0/20 |
| Public | interconnect-subnet-public3-us-west-2c | us-west-2c | 10.0.32.0/20 |
| Private | interconnect-subnet-private1-us-west-2a | us-west-2a | 10.0.128.0/20 |
| Private | interconnect-subnet-private2-us-west-2b | us-west-2b | 10.0.144.0/20 |
| Private | interconnect-subnet-private3-us-west-2c | us-west-2c | 10.0.160.0/20 |

Network Configuration

In the AWS Direct Connect gateway console, click Create a Direct Connect gateway.

Screenshot 2026-04-15 12.49.31 copy

Enter a name and ASN, add tags if needed, and click Create Direct Connect gateway.

Screenshot 2026-04-15 12.50.31 copy
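
For reference, the same gateway can also be created from the AWS CLI. This is just a minimal sketch; the gateway name and ASN are placeholder values I chose for illustration:

$ aws directconnect create-direct-connect-gateway \
    --direct-connect-gateway-name interconnect-dxgw \
    --amazon-side-asn 64512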

Open the VPC virtual private gateway console in the Oregon region and click Create virtual private gateway.

Screenshot 2026-04-15 15.23.10 copy

Set a name tag and click Create virtual private gateway.

Screenshot 2026-04-15 12.52.58 copy

Select the created virtual private gateway and click Attach to VPC.

Screenshot 2026-04-15 12.55.26 copy

Select a VPC and click Attach to VPC.

Screenshot 2026-04-15 12.55.38 copy

Select the created Direct Connect gateway and click Associate gateway.

Screenshot 2026-04-15 12.56.07 copy

Select the created virtual private gateway and click Associate gateway.

Screenshot 2026-04-15 12.57.16 copy
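
If you prefer the CLI, the virtual private gateway steps can be scripted along these lines; the resource IDs are placeholders for your own VPC, virtual private gateway, and Direct Connect gateway:

$ aws ec2 create-vpn-gateway --type ipsec.1 --region us-west-2
$ aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-xxxxxxxx --vpc-id vpc-xxxxxxxx --region us-west-2
$ aws directconnect create-direct-connect-gateway-association \
    --direct-connect-gateway-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
    --virtual-gateway-id vgw-xxxxxxxx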

Google Cloud Preparation

Enable the Network Connectivity API if it's not already enabled.

In the Cloud Router console, click Create router.

Screenshot 2026-04-15 13.09.43 copy

For the association, select VPC network, enter a name, select a VPC for the network, and select Oregon for the region. Enter an ASN and BGP keepalive interval, then click Create.

Screenshot 2026-04-15 13.10.52 copy
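
As a CLI alternative, the API can be enabled and the Cloud Router created like this (a sketch; the router name, ASN, and keepalive interval are placeholder values, so use the ones from your own design):

$ gcloud services enable networkconnectivity.googleapis.com
$ gcloud compute routers create interconnect-router \
    --network=interconnect-aws \
    --region=us-west1 \
    --asn=65001 \
    --keepalive-interval=20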

Creating Interconnect - multicloud

In the AWS Interconnect console, click Create multicloud interconnect.

Screenshot 2026-04-15 12.57.43 copy

Select Google Cloud as the provider and click Next.

Screenshot 2026-04-15 12.58.13 copy

Select Oregon for both AWS and Google Cloud regions and click Next.

Screenshot 2026-04-15 12.58.44 copy

Enter a description, bandwidth, Direct Connect gateway, and Google Cloud Project ID, then click Next.

Screenshot 2026-04-15 13.00.38 copy

Confirm that the settings are correct and click Finish.

Screenshot 2026-04-15 13.00.47 copy

Click Copy activation key to copy the activation key.

Screenshot 2026-04-15 13.01.02 copy

Creating Partner Cross-Cloud Interconnect Transport

In the Google Cloud Partner Cross-Cloud Interconnect console, click Create transport.

Screenshot 2026-04-15 13.19.00 copy

For the initial setup location, select Remote cloud service provider. Enter the copied activation key and click Verify. Then click Continue.

Screenshot 2026-04-15 13.19.58 copy

For the transport profile, select Amazon Web Services Oregon (us-west-2) and click Continue.

Screenshot 2026-04-15 13.21.37 copy

Enter a transport name, specify the bandwidth, and click Continue. Note that the bandwidth displays 1 GB/sec, but this is likely a typo for 1 Gbps.

Screenshot 2026-04-15 13.22.34 copy

Select a VPC for the network, enter the subnet CIDR for advertised routes, and click Create.

Screenshot 2026-04-15 13.23.22 copy

After waiting a few minutes, the transport will be created.

Screenshot 2026-04-15 13.32.45

Setting Up Peering

Here, we'll use gcloud commands on Cloud Shell.

Get the peering network name. Replace to-aws-transport with your transport name as needed, and note the value of peeringNetwork.

$ gcloud network-connectivity transports describe to-aws-transport --region us-west1
advertisedRoutes:
- 10.1.0.0/20
- 10.1.128.0/20
bandwidth: BPS_1G
createTime: '2026-04-15T04:24:14.235013205Z'
name: projects/project-name/locations/us-west1/transports/to-aws-transport
network: projects/project-name/global/networks/interconnect-aws
peeringNetwork: projects/123456789012345678901/global/networks/transport-1234567890123456-vpc
providedActivationKey: 12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567=
remoteProfile: projects/project-name/locations/us-west1/remoteTransportProfiles/aws-us-west-2
stackType: IPV4_ONLY
state: ACTIVE
updateTime: '2026-04-15T04:36:19.599387613Z'

Run the gcloud compute networks peerings create command to establish VPC network peering. There's a warning about MTU mismatch, but we'll proceed anyway for testing purposes. For production use, align the MTUs between AWS and Google Cloud.

$ gcloud compute networks peerings create "to-aws-transport" \
    --network="interconnect-aws" \
    --peer-network="projects/123456789012345678901/global/networks/transport-1234567890123456-vpc" \
    --stack-type=IPV4_ONLY \
    --import-custom-routes \
    --export-custom-routes
Updated [https://www.googleapis.com/compute/v1/projects/project-name/global/networks/interconnect-aws].
WARNING: Some requests generated warnings:
 - Network MTU 1460B does not match the peer's MTU 8896B
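
You can also confirm that the peering has become ACTIVE as follows (the network name is the one used above):

$ gcloud compute networks peerings list --network=interconnect-aws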

Setting Up Routing

In the AWS console, edit the routes for the target route table.

Screenshot 2026-04-15 14.24.28 copy

Set up routing to the virtual private gateway for the Google Cloud subnet's CIDR.

Screenshot 2026-04-15 14.25.30 copy

To propagate routing information to Google Cloud, click Edit route propagation in the Route propagation tab.

Screenshot 2026-04-15 14.26.08 copy

Set propagation to Enable and click Save. Now routing information will propagate to Google Cloud, allowing connectivity.

Screenshot 2026-04-15 14.26.17 copy
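
The same routing changes can also be made from the CLI; the route table and gateway IDs below are placeholders, and the destination CIDR is the Google Cloud subnet advertised earlier:

$ aws ec2 create-route \
    --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 10.1.0.0/20 \
    --gateway-id vgw-xxxxxxxx
$ aws ec2 enable-vgw-route-propagation \
    --route-table-id rtb-xxxxxxxx \
    --gateway-id vgw-xxxxxxxx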

Connection Verification

Set up virtual machines running web servers on both AWS and Google Cloud, and allow 80/TCP and ICMP in the AWS security groups and Google Cloud firewall rules.
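
For example, the rules could look like the following sketch (the security group ID and firewall rule name are placeholders; the CIDRs are the Google Cloud subnet and the AWS private subnets used in this walkthrough):

$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 10.1.0.0/20
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol icmp --port -1 --cidr 10.1.0.0/20
$ gcloud compute firewall-rules create allow-from-aws \
    --network=interconnect-aws \
    --allow=tcp:80,icmp \
    --source-ranges=10.0.128.0/20,10.0.144.0/20,10.0.160.0/20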

Connection from AWS Side

Run commands on EC2 to verify connectivity.

Running on Amazon Linux 2023:

$ uname -a
Linux ip-10-0-163-224.us-west-2.compute.internal 6.1.166-197.305.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Mar 23 09:53:26 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux

IP addresses are as follows:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:48:d9:3b:ba:33 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname eni-00cca6a6ee63812ab
    altname device-number-0.0
    inet 10.0.163.224/20 metric 512 brd 10.0.175.255 scope global dynamic ens5
       valid_lft 2024sec preferred_lft 2024sec
    inet6 fe80::848:d9ff:fe3b:ba33/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

Let's try pinging. The latency between AWS Oregon region and Google Cloud Oregon region is about 10ms.

$ ping -c 10 10.1.0.2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=62 time=11.1 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=3 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=4 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=5 ttl=62 time=10.2 ms
64 bytes from 10.1.0.2: icmp_seq=6 ttl=62 time=10.0 ms
64 bytes from 10.1.0.2: icmp_seq=7 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=8 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=9 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=10 ttl=62 time=10.1 ms

--- 10.1.0.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 10.038/10.191/11.130/0.314 ms

Let's run a TCP traceroute. It takes 5 hops.

$ sudo traceroute -T -p 80 10.1.0.2
traceroute to 10.1.0.2 (10.1.0.2), 30 hops max, 60 byte packets
 1  169.254.249.41 (169.254.249.41)  0.395 ms 169.254.249.45 (169.254.249.45)  0.491 ms  0.321 ms
 2  169.254.161.50 (169.254.161.50)  7.971 ms 169.254.80.58 (169.254.80.58)  6.287 ms 169.254.51.98 (169.254.51.98)  5.713 ms
 3  142.250.232.45 (142.250.232.45)  7.936 ms * *
 4  * * 142.250.232.46 (142.250.232.46)  7.974 ms
 5  * * ip-10-1-0-2.us-west-2.compute.internal (10.1.0.2)  11.105 ms

Connection from Google Cloud side

I'll confirm the connection by executing commands on Compute Engine.

These commands are being run on Debian GNU/Linux 12.

$ uname -a
Linux interconnect 6.1.0-44-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.164-1 (2026-03-09) x86_64 GNU/Linux

The IP address is as follows.

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:01:00:02 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet 10.1.0.2/32 metric 100 scope global dynamic ens4
       valid_lft 83166sec preferred_lft 83166sec
    inet6 fe80::4001:aff:fe01:2/64 scope link 
       valid_lft forever preferred_lft forever

Let's try a ping test. It appears to be about 10ms from Google Cloud's Oregon region to AWS's Oregon region.

$ ping -c 10 10.0.163.224
PING 10.0.163.224 (10.0.163.224) 56(84) bytes of data.
64 bytes from 10.0.163.224: icmp_seq=1 ttl=124 time=11.2 ms
64 bytes from 10.0.163.224: icmp_seq=2 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=3 ttl=124 time=10.2 ms
64 bytes from 10.0.163.224: icmp_seq=4 ttl=124 time=10.0 ms
64 bytes from 10.0.163.224: icmp_seq=5 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=6 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=7 ttl=124 time=10.3 ms
64 bytes from 10.0.163.224: icmp_seq=8 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=9 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=10 ttl=124 time=10.0 ms

--- 10.0.163.224 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9012ms
rtt min/avg/max/mdev = 10.032/10.217/11.183/0.330 ms

Let's try a TCP traceroute. It's 3 hops.

$ sudo traceroute -T -p 80 10.0.163.224
traceroute to 10.0.163.224 (10.0.163.224), 30 hops max, 60 byte packets
 1  142.250.232.46 (142.250.232.46)  4.240 ms 142.251.78.214 (142.251.78.214)  6.754 ms 142.250.232.45 (142.250.232.45)  4.191 ms
 2  169.254.235.90 (169.254.235.90)  8.225 ms 169.254.80.58 (169.254.80.58)  4.217 ms  4.199 ms
 3  10.0.163.224 (10.0.163.224)  9.586 ms  10.685 ms  15.121 ms

Conclusion

About 5 months after the announcement at AWS re:Invent 2025, private connectivity between AWS and Google Cloud is now available as a production-ready service. During the preview, it was limited to 1 Gbps with unclear pricing information, making it difficult to conduct tests beyond a PoC. Now with GA, bandwidth has expanded to up to 100 Gbps, and the pricing structure has been clearly defined, enabling integration into full-scale multi-cloud architecture designs.

The 500 Mbps free tier starting in May is an extremely welcome benefit that makes it easy to try multi-cloud connectivity. It's a perfect opportunity to gain experience with direct connections between AWS and Google Cloud, so I'd encourage those interested to try it out after May.

Regarding pricing, particular attention should be paid to the tier determination logic when combined with Cloud WAN. Since the highest tier across the entire core network topology applies, not just the Interconnect's local AWS region, the placement of core network edges (CNEs) should be carefully examined in advance to avoid unexpected costs. That said, since there are no charges for the amount of data sent, it seems very suitable for use cases involving large data transfers.

Personally, I hope that, as an extension of AWS Direct Connect, it will eventually support connections to on-premises environments over the same quadruple-redundant infrastructure. I also expect this network-level integration to accelerate collaboration between AWS and Google Cloud across other services.

And as I've been saying since the preview, please bring this to the Japan region soon!!!!!!
