AWS Interconnect - multicloud, a direct connection between AWS and Google Cloud, is now GA!

AWS Interconnect - multicloud, which directly connects AWS and Google Cloud, is now generally available! Pricing is a time-based charge determined by bandwidth, with no charge for the amount of data transferred, so you can send as much traffic as you like!
2026.04.15

I am Oguri, someone who loves whiskey, cigars, and pipes.

The AWS Interconnect - multicloud, which had been in preview since November 2025, has finally become generally available after about 5 months! Along with this, Google Cloud's Partner Cross-Cloud Interconnect for AWS has also moved out of preview. I've been looking forward to this feature since hearing the announcement at AWS re:Invent 2025, so I'd like to introduce it.

What is AWS Interconnect - multicloud?

AWS Interconnect - multicloud is a service that directly connects Amazon VPC with other Cloud Service Provider (CSP) networks through a private high-speed connection. Previously, when building multi-cloud connections, the only options involved significant lead times and operational overhead, such as setting up Direct Connect through colocation facilities or connecting through third-party fabrics.

With AWS Interconnect - multicloud, you only need to prepare a Direct Connect Gateway on the AWS side and a Cloud Router on the Google Cloud side, and a private connection can be established in minutes with just two simple steps: creation and approval. There's no need to worry about customer routers, BGP, or peer IP addresses. The physical infrastructure is pre-provisioned by AWS and Google Cloud, and it has quadruple redundancy distributed across 2 or more physical facilities and 4 routers.

Interconnect architecture diagram
Amazon Web Services. "Interconnect architecture diagram". What is AWS Interconnect?. AWS Documentation. https://docs.aws.amazon.com/interconnect/latest/userguide/what-is-interconnect.html, (accessed 2026-04-15).

For more details on the mechanism and architecture, please see the following previous entries:

Changes from Preview to GA

Here's a comparison table of the main differences between preview and GA:

| Item | Preview (November 2025) | GA (April 2026) |
| --- | --- | --- |
| Offering | Public preview | General availability |
| Production traffic | Not recommended | Supported |
| Bandwidth | 1 Gbps only | 1 Gbps to 100 Gbps (select from pre-approved speeds) |
| Pricing | Free | Single pricing structure based on bandwidth and geographic scope |
| Free tier | Entire connection free | Starting in May, one 500 Mbps Interconnect free in each region |
| Supported regions | 5 region pairs | 5 region pairs (no change) |
| Target CSP | Google Cloud | Google Cloud (Microsoft Azure planned for late 2026) |

The most significant changes are that bandwidth can now be selected from 1 Gbps up to 100 Gbps and that the pricing structure has been clarified. Production traffic, which was not recommended during the preview, is now officially supported.

Partner Cross-Cloud Interconnect for AWS is also GA

Google Cloud's corresponding feature, Partner Cross-Cloud Interconnect for AWS, also appears to be generally available as the preview notation has been removed from the documentation. Here's a recap of the differences compared to the existing Cross-Cloud Interconnect:

| Item | Cross-Cloud Interconnect | Partner Cross-Cloud Interconnect for AWS |
| --- | --- | --- |
| Physical provisioning | Required | Not required |
| Physical connection and ports | Required | Not required |
| Connection speed | 10 Gbps or 100 Gbps | 1 Gbps to 100 Gbps from pre-approved speeds |
| Provisioning time | 1-4 weeks | Minutes to within 1 day |
| Connection initiation direction | Initiated from Google Cloud | Can be initiated from either Google Cloud or AWS |
| Supported CSPs | OCI, AWS, Azure, Alibaba, etc. | AWS only |

Partner Cross-Cloud Interconnect for AWS has a constraint of one transport resource per region per project. If you want to configure multiple Interconnects in the same region, you'll need to consider using separate projects.

Pricing

You need to check pricing on both the AWS and Google Cloud sides. Google Cloud's pricing is straightforward, but AWS's pricing structure is more complex, so caution is needed.

Google Cloud Pricing Structure

The pricing for Partner Cross-Cloud Interconnect for AWS is summarized in the Cloud Interconnect pricing's Partner Cross-Cloud Interconnect section. Key points:

  • Charged hourly for the connection transport
  • No data transfer charges for both inbound and outbound
  • Pricing is determined by the combination of bandwidth and region (North America / Europe / Asia Pacific / South America), with higher prices for higher bandwidth and geographically distant regions
  • Bandwidths not explicitly listed in the pricing table (e.g., 20 Gbps) are calculated as a linear multiple from the previous tier

Hourly transport charges (USD):

| Transport location | 1 Gbps | 5 Gbps | 10 Gbps | 100 Gbps |
| --- | --- | --- | --- | --- |
| North America | $3.50 | $17.30 | $19.00 | $146.60 |
| Europe | $3.50 | $17.30 | $19.00 | $146.60 |
| Asia Pacific | $5.00 | $24.90 | $26.40 | $196.10 |
| South America | $7.60 | $38.00 | $46.90 | $299.60 |

If accessing from a region different from the connection location, additional regular inter-region communication charges apply.
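
To make the "linear multiple of the previous tier" rule concrete, here is a small Python sketch. The price table is copied from above; the interpolation logic for unlisted bandwidths is my reading of the pricing documentation, so treat it as an assumption:

```python
# Hourly transport prices (USD) for Partner Cross-Cloud Interconnect for AWS,
# copied from the pricing table above. Keys are bandwidths in Gbps.
PRICES = {
    "North America": {1: 3.50, 5: 17.30, 10: 19.00, 100: 146.60},
    "Europe":        {1: 3.50, 5: 17.30, 10: 19.00, 100: 146.60},
    "Asia Pacific":  {1: 5.00, 5: 24.90, 10: 26.40, 100: 196.10},
    "South America": {1: 7.60, 5: 38.00, 10: 46.90, 100: 299.60},
}

def hourly_price(location: str, gbps: int) -> float:
    """Price for a listed bandwidth, or a linear multiple of the
    previous (next lower) listed tier for unlisted bandwidths."""
    table = PRICES[location]
    if gbps in table:
        return table[gbps]
    # e.g. 20 Gbps in North America -> 2 x the 10 Gbps price
    base = max(b for b in table if b < gbps)
    return table[base] * (gbps / base)

print(hourly_price("North America", 20))
```

Under this reading, a 20 Gbps transport in North America would come to 2 × $19.00 = $38.00 per hour.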

AWS Pricing Structure

A pricing structure has been introduced with GA. There are several key points:

  • Hourly billing based on bandwidth and automatically assigned pricing tier
  • No data transfer charges for both inbound and outbound
  • Tiers range from 1 to 5 (Tier 5 being the most expensive)
  • Tier is determined by the combination of the source AWS region for VPC traffic and the local AWS region for the Interconnect (generally Tier 1 if access source and connection location are in the same region)
  • Higher tiers are assigned for greater geographical distances
  • Only a single tier is assigned to one Interconnect, and higher tiers cover all routes of lower tiers
  • Billing starts when the Interconnect is created and continues hourly until it's deleted
  • If using AWS Cloud WAN, the tier is determined by the highest tier among the Core Network Edges (CNEs) in the topology, not just the CNE in the local region
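
The tier rules above can be sketched as follows. AWS assigns tiers automatically and the real per-pair table isn't reproduced here, so the `TIER` values below are hypothetical placeholders; only the "highest tier wins" rule for Cloud WAN comes from the description above:

```python
# Hypothetical tier assignments:
# (traffic source region, interconnect local region) -> tier.
# Real tiers are assigned automatically by AWS; these are placeholders.
TIER = {
    ("us-west-2", "us-west-2"): 1,  # same region -> generally Tier 1
    ("us-east-1", "us-west-2"): 3,  # cross-continent example
    ("eu-west-2", "us-west-2"): 4,
}

def interconnect_tier(source_regions, local_region):
    """One Interconnect gets a single tier: the highest tier among all
    (source, local) pairs it serves. With Cloud WAN, the sources are the
    regions of all Core Network Edges (CNEs) in the topology."""
    return max(TIER[(src, local_region)] for src in source_regions)

# A Cloud WAN topology with CNEs in us-west-2 and eu-west-2
print(interconnect_tier(["us-west-2", "eu-west-2"], "us-west-2"))
```

Because the higher tier covers all routes of the lower tiers, adding a distant CNE to the topology raises the price of the whole Interconnect, not just the distant routes.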

500 Mbps Free Tier

Personally, the most exciting part of the GA announcement is that starting in May, one 500 Mbps local Interconnect will be free in each region. This allows for cost-free testing of PoCs or small development environments. It's a powerful incentive for those wanting to try multi-cloud connections.

SLA

AWS SLA

At the time this article was first published (April 15, 2026), the SLA document for AWS Interconnect - multicloud had not yet been made public; it was subsequently published on April 16, 2026. The service credit rates are as follows:

| Monthly Uptime Percentage | Service Credit Rate |
| --- | --- |
| 99.99% to 99.0% | 10% |
| 99.0% to 95.0% | 25% |
| Below 95.0% | 100% |

Also, the scope of responsibility for the SLA is expected to be limited to the AWS side, similar to AWS Direct Connect, so you'll need to check Google Cloud's Interconnect SLA separately for Google Cloud's availability.

Google Cloud SLA

As of April 15, 2026, there is an SLA document for Google Cloud Interconnect, but Partner Cross-Cloud Interconnect doesn't appear to be included in the covered services. This article will be updated once it's officially included.

However, in the Let's Try It section below, the Google Cloud console displayed an SLA of 99.9%.

Supported Regions

The region pairs supported at GA are as follows. There are no changes from the preview:

| AWS Region | Google Cloud Location |
| --- | --- |
| us-east-1 US East (N. Virginia) | us-east4 (Northern Virginia) |
| us-west-1 US West (N. California) | us-west2 (Los Angeles) |
| us-west-2 US West (Oregon) | us-west1 (Oregon) |
| eu-west-2 Europe (London) | europe-west2 (London) |
| eu-central-1 Europe (Frankfurt) | europe-west3 (Frankfurt) |

As with the preview, cross-region combinations like us-east-1 to us-west-1 are not supported with a single Interconnect. If you want to connect across regions, you'll need an architecture that combines with Cloud WAN.

However, according to the location documentation for Google Cloud Partner Cross-Cloud Interconnect for AWS, it appears that Singapore is supported:

| Google Cloud locations | AWS locations |
| --- | --- |
| asia-southeast1 | ap-southeast-1 Asia Pacific (Singapore) |
| europe-west2 | eu-west-2 Europe (London) |
| europe-west3 | eu-central-1 Europe (Frankfurt) |
| us-east4 | us-east-1 US East (N. Virginia) |
| us-west1 | us-west-2 US West (Oregon) |
| us-west2 | us-west-1 US West (N. California) |
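
If you're scripting around these constraints, the supported pairs can be captured as a simple lookup. The pairs are copied from the two tables above (Singapore per the Google Cloud location documentation); the helper name is mine:

```python
# Supported region pairs (AWS region -> Google Cloud region), per the tables above.
SUPPORTED_PAIRS = {
    "us-east-1": "us-east4",              # N. Virginia
    "us-west-1": "us-west2",              # N. California / Los Angeles
    "us-west-2": "us-west1",              # Oregon
    "eu-west-2": "europe-west2",          # London
    "eu-central-1": "europe-west3",       # Frankfurt
    "ap-southeast-1": "asia-southeast1",  # Singapore (Google Cloud docs only)
}

def is_supported(aws_region: str, gcp_region: str) -> bool:
    """A single Interconnect only supports these fixed pairs; cross-region
    combinations (e.g. us-east-1 to us-west1) require Cloud WAN instead."""
    return SUPPORTED_PAIRS.get(aws_region) == gcp_region

print(is_supported("us-west-2", "us-west1"))  # True
print(is_supported("us-east-1", "us-west1"))  # False: needs Cloud WAN
```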

Let's Try It

Here, we'll try connecting in the Oregon region (AWS: us-west-2, Google Cloud: us-west1). For simplicity, we'll assume you already have VPC subnets in the Oregon region on both AWS and Google Cloud. We'll also assume you already have a Direct Connect Gateway on AWS and a Cloud Router on Google Cloud.

During the preview, there was no console available on Google Cloud, but it seems a console has been created with GA.

AWS Preparation

Existing Configuration

| Public / Private | Subnet Name | Availability Zone | CIDR |
| --- | --- | --- | --- |
| Public | interconnect-subnet-public1-us-west-2a | us-west-2a | 10.0.0.0/20 |
| Public | interconnect-subnet-public2-us-west-2b | us-west-2b | 10.0.16.0/20 |
| Public | interconnect-subnet-public3-us-west-2c | us-west-2c | 10.0.32.0/20 |
| Private | interconnect-subnet-private1-us-west-2a | us-west-2a | 10.0.128.0/20 |
| Private | interconnect-subnet-private2-us-west-2b | us-west-2b | 10.0.144.0/20 |
| Private | interconnect-subnet-private3-us-west-2c | us-west-2c | 10.0.160.0/20 |

Network Configuration

In the AWS Direct Connect gateway console, click Create a Direct Connect gateway.

Screenshot 2026-04-15 12.49.31 copy

Enter a name and ASN, add tags if needed, and click Create Direct Connect gateway.

Screenshot 2026-04-15 12.50.31 copy

Open the Virtual Private Gateway console in the Oregon region and click Create virtual private gateway.

Screenshot 2026-04-15 15.23.10 copy

Set a name tag and click Create virtual private gateway.

Screenshot 2026-04-15 12.52.58 copy

Select the created virtual private gateway and click Attach to VPC.

Screenshot 2026-04-15 12.55.26 copy

Select the VPC and click Attach to VPC.

Screenshot 2026-04-15 12.55.38 copy

Select the created Direct Connect gateway and click Associate gateway.

Screenshot 2026-04-15 12.56.07 copy

Select the created virtual private gateway and click Associate gateway.

Screenshot 2026-04-15 12.57.16 copy

Google Cloud Preparation

Enable the Network Connectivity API if it's not already enabled.

In the Cloud Router console, click Create router.

Screenshot 2026-04-15 13.09.43 copy

For association, select VPC network, enter a name, select the VPC for the network, and select Oregon for the region. Enter the ASN and BGP keepalive interval, then click Create.

Screenshot 2026-04-15 13.10.52 copy

Creating Interconnect - multicloud

In the AWS Interconnect console, click Create multicloud interconnect.

Screenshot 2026-04-15 12.57.43 copy

Select Google Cloud as the provider and click Next.

Screenshot 2026-04-15 12.58.13 copy

Select Oregon for both AWS Region and Google Cloud Region, then click Next.

Screenshot 2026-04-15 12.58.44 copy

Enter a description, select the bandwidth and the Direct Connect gateway, enter the Google Cloud project ID, and click Next.

Screenshot 2026-04-15 13.00.38 copy

Confirm that the configuration is correct and click Finish.

Screenshot 2026-04-15 13.00.47 copy

Click Copy activation key to copy the activation key.

Screenshot 2026-04-15 13.01.02 copy

Creating Partner Cross-Cloud Interconnect Transport

In the Google Cloud Partner Cross-Cloud Interconnect console, click Create transport.

Screenshot 2026-04-15 13.19.00 copy

In the initial setup location, select Remote cloud service provider. Enter the copied activation key and click Verify, then click Continue.

Screenshot 2026-04-15 13.19.58 copy

Select Amazon Web Services Oregon (us-west-2) for the transport profile and click Continue.

Screenshot 2026-04-15 13.21.37 copy

Enter a transport name, specify the bandwidth, and click Continue. Note that the console displays "1 GB/sec", but this should read 1 Gbps.

Screenshot 2026-04-15 13.22.34 copy

Select the VPC for the network, enter the subnet CIDR for advertised routes, and click Create.

Screenshot 2026-04-15 13.23.22 copy

After waiting a few minutes, the transport will be created.

Screenshot 2026-04-15 13.32.45

Setting up Peering

Here, we'll use gcloud commands in Cloud Shell.

Describe the transport to get the peering network name (replace to-aws-transport with your own transport name as needed) and note the value of peeringNetwork.

$ gcloud network-connectivity transports describe to-aws-transport --region us-west1
advertisedRoutes:
- 10.1.0.0/20
- 10.1.128.0/20
bandwidth: BPS_1G
createTime: '2026-04-15T04:24:14.235013205Z'
name: projects/project-name/locations/us-west1/transports/to-aws-transport
network: projects/project-name/global/networks/interconnect-aws
peeringNetwork: projects/123456789012345678901/global/networks/transport-1234567890123456-vpc
providedActivationKey: 12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567=
remoteProfile: projects/project-name/locations/us-west1/remoteTransportProfiles/aws-us-west-2
stackType: IPV4_ONLY
state: ACTIVE
updateTime: '2026-04-15T04:36:19.599387613Z'
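
If you're scripting this step, you can extract peeringNetwork from the describe output instead of copying it by hand. A minimal Python sketch that parses the top-level fields of output like the above (the sample text is abridged from the output shown):

```python
# Abridged sample of the `gcloud network-connectivity transports describe` output above.
describe_output = """\
bandwidth: BPS_1G
network: projects/project-name/global/networks/interconnect-aws
peeringNetwork: projects/123456789012345678901/global/networks/transport-vpc
state: ACTIVE
"""

def field(output: str, key: str) -> str:
    """Extract a top-level 'key: value' field from gcloud describe output."""
    for line in output.splitlines():
        if line.startswith(key + ":"):
            return line.split(":", 1)[1].strip()
    raise KeyError(key)

print(field(describe_output, "peeringNetwork"))
```

In practice, gcloud can do this directly with its built-in output formatting, e.g. appending --format='value(peeringNetwork)' to the describe command.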

Run the gcloud compute networks peerings create command to establish VPC Network Peering. A warning about an MTU mismatch appears, but we'll ignore it here since this is only a connectivity test. For production use, align the MTUs between the AWS and Google Cloud sides.

$ gcloud compute networks peerings create "to-aws-transport" \
    --network="interconnect-aws" \
    --peer-network="projects/123456789012345678901/global/networks/transport-1234567890123456-vpc" \
    --stack-type=IPV4_ONLY \
    --import-custom-routes \
    --export-custom-routes
Updated [https://www.googleapis.com/compute/v1/projects/project-name/global/networks/interconnect-aws].
WARNING: Some requests generated warnings:
 - Network MTU 1460B does not match the peer's MTU 8896B

Setting up Routing

In the AWS console, edit the routes for the target route table.

Screenshot 2026-04-15 14.24.28 copy

Configure routing to the virtual private gateway for the Google Cloud subnet CIDR.

Screenshot 2026-04-15 14.25.30 copy

To propagate routing information to Google Cloud, click Edit route propagation in the Route propagation tab.

Screenshot 2026-04-15 14.26.08 copy

Set propagation to Enable and click Save. Now routing information is propagated to Google Cloud, and the connection is ready.

Screenshot 2026-04-15 14.26.17 copy

Connection Test

Set up virtual machines with web servers on AWS and Google Cloud, and allow 80/TCP and ICMP in the security group/firewall settings.

Connection from AWS

Run commands on the EC2 instance to test the connection.

Running on Amazon Linux 2023:

$ uname -a
Linux ip-10-0-163-224.us-west-2.compute.internal 6.1.166-197.305.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Mar 23 09:53:26 UTC 2026 x86_64 x86_64 x86_64 GNU/Linux

IP address configuration:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 0a:48:d9:3b:ba:33 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname eni-00cca6a6ee63812ab
    altname device-number-0.0
    inet 10.0.163.224/20 metric 512 brd 10.0.175.255 scope global dynamic ens5
       valid_lft 2024sec preferred_lft 2024sec
    inet6 fe80::848:d9ff:fe3b:ba33/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever

Let's try pinging. Round-trip latency from the AWS Oregon region to the Google Cloud Oregon region is around 10 ms.

$ ping -c 10 10.1.0.2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
64 bytes from 10.1.0.2: icmp_seq=1 ttl=62 time=11.1 ms
64 bytes from 10.1.0.2: icmp_seq=2 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=3 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=4 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=5 ttl=62 time=10.2 ms
64 bytes from 10.1.0.2: icmp_seq=6 ttl=62 time=10.0 ms
64 bytes from 10.1.0.2: icmp_seq=7 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=8 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=9 ttl=62 time=10.1 ms
64 bytes from 10.1.0.2: icmp_seq=10 ttl=62 time=10.1 ms

--- 10.1.0.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 10.038/10.191/11.130/0.314 ms

Let's try traceroute over TCP. It's 5 hops.

$ sudo traceroute -T -p 80 10.1.0.2
traceroute to 10.1.0.2 (10.1.0.2), 30 hops max, 60 byte packets
 1  169.254.249.41 (169.254.249.41)  0.395 ms 169.254.249.45 (169.254.249.45)  0.491 ms  0.321 ms
 2  169.254.161.50 (169.254.161.50)  7.971 ms 169.254.80.58 (169.254.80.58)  6.287 ms 169.254.51.98 (169.254.51.98)  5.713 ms
 3  142.250.232.45 (142.250.232.45)  7.936 ms * *
 4  * * 142.250.232.46 (142.250.232.46)  7.974 ms
 5  * * ip-10-1-0-2.us-west-2.compute.internal (10.1.0.2)  11.105 ms

Connection from Google Cloud

I'll verify the connection by executing commands on Compute Engine.

Running on Debian GNU/Linux 12.

$ uname -a
Linux interconnect 6.1.0-44-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.164-1 (2026-03-09) x86_64 GNU/Linux

IP addresses are as follows:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
    link/ether 42:01:0a:01:00:02 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet 10.1.0.2/32 metric 100 scope global dynamic ens4
       valid_lft 83166sec preferred_lft 83166sec
    inet6 fe80::4001:aff:fe01:2/64 scope link 
       valid_lft forever preferred_lft forever

Let's try ping. The latency from Google Cloud's Oregon region to AWS's Oregon region appears to be around 10ms.

$ ping -c 10 10.0.163.224
PING 10.0.163.224 (10.0.163.224) 56(84) bytes of data.
64 bytes from 10.0.163.224: icmp_seq=1 ttl=124 time=11.2 ms
64 bytes from 10.0.163.224: icmp_seq=2 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=3 ttl=124 time=10.2 ms
64 bytes from 10.0.163.224: icmp_seq=4 ttl=124 time=10.0 ms
64 bytes from 10.0.163.224: icmp_seq=5 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=6 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=7 ttl=124 time=10.3 ms
64 bytes from 10.0.163.224: icmp_seq=8 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=9 ttl=124 time=10.1 ms
64 bytes from 10.0.163.224: icmp_seq=10 ttl=124 time=10.0 ms

--- 10.0.163.224 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9012ms
rtt min/avg/max/mdev = 10.032/10.217/11.183/0.330 ms

Let's try a TCP traceroute. It's 3 hops.

$ sudo traceroute -T -p 80 10.0.163.224
traceroute to 10.0.163.224 (10.0.163.224), 30 hops max, 60 byte packets
 1  142.250.232.46 (142.250.232.46)  4.240 ms 142.251.78.214 (142.251.78.214)  6.754 ms 142.250.232.45 (142.250.232.45)  4.191 ms
 2  169.254.235.90 (169.254.235.90)  8.225 ms 169.254.80.58 (169.254.80.58)  4.217 ms  4.199 ms
 3  10.0.163.224 (10.0.163.224)  9.586 ms  10.685 ms  15.121 ms

Conclusion

About 5 months after the announcement at AWS re:Invent 2025, private connectivity between AWS and Google Cloud is now available as a production-ready service. During the preview phase, only 1 Gbps was available with unclear pricing information, making it difficult to validate beyond proof of concept. Now with GA, bandwidth has expanded to up to 100 Gbps, and the pricing structure is clearly defined, enabling integration into full-scale multi-cloud architecture designs.

The 500 Mbps free tier starting in May is a very welcome benefit that allows for easy testing of multi-cloud connections. It's an excellent opportunity for those interested in gaining experience with direct connections between AWS and Google Cloud.

On the pricing side, careful attention should be paid to the tier determination logic, especially when combining with Cloud WAN. Since the highest tier across the entire Core Network topology applies, not just the local AWS region's tier, you should review your Core Network Edge (CNE) placement in advance to avoid unexpected costs. That said, with no data transfer charges, this service is well suited to use cases that exchange large volumes of data.

Personally, I hope that as an extension of AWS Direct Connect, they will leverage the existing quadruple redundant infrastructure to support on-premises connections in the future. I expect that network connectivity service integration will accelerate collaboration between other AWS and Google Cloud services.

And as I've been saying since the preview: please bring this to the Japan region soon!!!!!!
