Comparing Private Connectivity of AWS, Microsoft Azure, and Google Cloud


We clarify the private connectivity differences between these major hyperscalers.

This post accompanies our webinar,
Network Transformation: Mastering Multicloud.

With the fast-growing adoption of multicloud strategies, understanding the private connectivity models of these hyperscalers becomes increasingly important. Private connectivity can, in many cases, increase bandwidth throughput, reduce overall network costs, and provide a more predictable and stable network experience compared to internet connections. So, whether it is time to spin up private connectivity to a new cloud service provider (CSP) or get rid of your ol’ internet VPN, this article can lend a helping hand in understanding the different connectivity models, vernacular, and components of the Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) private connectivity offerings.



These cloud providers use terminology that is often similar, but sometimes different. Let’s kick things off with some CSP terminology alignment.

CSP terminology

Now that we’ve got a better idea of the CSP terminology, let’s jump into some more of the meat and potatoes. We’ll start with breaking down AWS Direct Connect.

AWS Direct Connect

The main ingredients for AWS Direct Connect are the virtual interfaces (VIFs), the Gateways — Virtual Private Gateway (VGW), Direct Connect Gateway (DGW/DXGW), and Transit Gateway (TGW) — and the physical/Direct Connect Circuit.

AWS Direct Connect has varying connectivity models: Dedicated Connections, Hosted Connections, and hosted VIFs. To choose the best one for your business, it’s important to first understand each model and how it maps to your use case.

Dedicated Connection: This is a physical connection requested through the AWS console and associated with a single customer. You download the LOA-CFA (Letter of Authorization and Connecting Facility Assignment) and work with your DC operator or AWS partner to get the cross connect run from your equipment to AWS. The available port speeds are 1 Gbps and 10 Gbps.

Hosted Connection: This is a physical connection that an AWS Direct Connect Partner provisions on behalf of a customer. Customers request a hosted connection by contacting an AWS partner who provisions the connection. The available speeds are 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, and 10 Gbps.

Hosted VIF: This is a virtual interface provisioned on behalf of a customer by the account that owns a physical Direct Connect circuit. Bandwidth is shared across all VIFs on the parent connection.
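If you manage Direct Connect programmatically, the same connection details are visible through the AWS API. Below is a minimal sketch using boto3 (the AWS SDK for Python) that lists your connections and pulls down the LOA-CFA for a dedicated connection; the connection ID shown is a placeholder, not a real resource.

```python
import boto3

# Direct Connect API client (credentials and region come from your AWS config).
dx = boto3.client("directconnect", region_name="us-east-1")

# List existing Direct Connect connections with their bandwidth and state.
for conn in dx.describe_connections()["connections"]:
    print(conn["connectionId"], conn["connectionName"],
          conn["bandwidth"], conn["connectionState"])

# For a dedicated connection, download the LOA-CFA to hand to your DC operator
# or AWS partner for the cross connect. "dxcon-xxxxxxxx" is a placeholder ID.
loa = dx.describe_loa(connectionId="dxcon-xxxxxxxx",
                      loaContentType="application/pdf")
with open("loa-cfa.pdf", "wb") as f:
    f.write(loa["loaContent"])
```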

Unlike other CSPs, AWS also has different types of gateways that can be used with your Direct Connect: Virtual Private Gateways, Direct Connect Gateways, and Transit Gateways. 

Virtual Private Gateway (VGW): This is a logical, fully redundant, distributed edge-routing function that is attached to a VPC to allow traffic to privately route in/out of the VPC.


Direct Connect Gateway (DGW): A Direct Connect Gateway is a globally available resource that you can use to attach multiple VPCs to a single Direct Connect circuit (or to multiple circuits). This gateway doesn’t, however, provide inter-VPC connectivity.

Transit Gateway (TGW): A Transit Gateway connects both your VPCs and on-premises networks together through a central hub. This simplifies your network and puts an end to complex peering relationships.
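To make those gateway relationships concrete, here is a minimal boto3 sketch that creates a Direct Connect gateway and associates it with an existing gateway. The ASN and gateway IDs are placeholder values.

```python
import boto3

dx = boto3.client("directconnect")

# Create a globally available Direct Connect gateway.
# 64512 is a placeholder private ASN for the Amazon side of the BGP sessions.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="example-dxgw",
    amazonSideAsn=64512,
)["directConnectGateway"]

# Associate the Direct Connect gateway with an existing gateway.
# gatewayId accepts a Virtual Private Gateway or Transit Gateway ID
# (the ID below is hypothetical).
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId="vgw-0123456789abcdef0",
)
```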

The type of gateway you are using, and what type of public or private resources you ultimately need to reach, will determine the type of VIF you will use.

Let’s dive into the three different VIF types: private, public, and transit.

Private VIF – A private virtual interface is used to access an Amazon VPC using private IP addresses. You can advertise up to 100 prefixes to AWS.

Note: You can attach the Private VIF to a Virtual Private Gateway (VGW) or Direct Connect Gateway (DGW).
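As a rough illustration of what provisioning a private VIF involves, here is a short boto3 sketch. The connection ID, VLAN, ASN, peer addressing, and Direct Connect gateway ID are all placeholder values; in this example the VIF lands on a Direct Connect gateway rather than a VGW.

```python
import boto3

dx = boto3.client("directconnect")

# Create a private VIF on an existing Direct Connect connection.
# All IDs and addressing below are placeholders.
vif = dx.create_private_virtual_interface(
    connectionId="dxcon-xxxxxxxx",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "example-private-vif",
        "vlan": 101,                         # 802.1Q tag on the cross connect
        "asn": 65000,                        # your on-premises BGP ASN
        "mtu": 1500,
        "amazonAddress": "169.254.10.1/30",  # AWS side of the BGP peering
        "customerAddress": "169.254.10.2/30",
        "addressFamily": "ipv4",
        "directConnectGatewayId": "<direct-connect-gateway-id>",
    },
)
print(vif["virtualInterfaceId"], vif["virtualInterfaceState"])
```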

Public VIF – A public virtual interface is used to access all AWS public services (e.g. S3, DynamoDB) using public IP addresses. You can advertise up to 1,000 prefixes to AWS.

Note: Public VIFs are not associated or attached to any type of gateway.

  • Connect to all AWS public IP addresses globally (public IP for BGP peering required).
  • Access publicly routable Amazon services in any AWS Region (except the AWS China Region).

Transit VIF – A transit virtual interface is used to access one or more Amazon VPCs through a Transit Gateway that is associated with a Direct Connect gateway. You can use transit virtual interfaces with 1, 2, 5, or 10 Gbps AWS Direct Connect connections, and you can advertise up to 100 prefixes to AWS.
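A transit VIF is created in much the same way, except it can only attach to a Direct Connect gateway, which is in turn associated with a Transit Gateway (as in the gateway sketch earlier). Again, the IDs, VLAN, and ASN are placeholders.

```python
import boto3

dx = boto3.client("directconnect")

# Create a transit VIF attached to a Direct Connect gateway; that gateway is
# then associated with a Transit Gateway. Placeholder IDs and values throughout.
response = dx.create_transit_virtual_interface(
    connectionId="dxcon-xxxxxxxx",
    newTransitVirtualInterface={
        "virtualInterfaceName": "example-transit-vif",
        "vlan": 102,
        "asn": 65000,
        "mtu": 1500,
        "addressFamily": "ipv4",
        "directConnectGatewayId": "<direct-connect-gateway-id>",
    },
)
vif = response["virtualInterface"]
print(vif["virtualInterfaceId"], vif["virtualInterfaceState"])
```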

Azure ExpressRoute

Whether you are using ExpressRoute Direct or the Partner model, the main components remain the same: the peerings (private or Microsoft), VNet Gateways, and the physical ExpressRoute circuit. Unlike the other CSPs, each Azure ExpressRoute circuit comes with two links (a primary and a secondary) for HA/redundancy and SLA purposes.

Much like the AWS dedicated and hosted models, Azure has its own similar offerings of ExpressRoute Direct and Partner ExpressRoute.

With Azure ExpressRoute Direct, the customer owns the ExpressRoute port and the LOA-CFA is provided by Azure. The fibre cross connects are ordered by the customer in their data centre. The supported port speeds are 10 Gbps and 100 Gbps.

With the ExpressRoute Partner model, the service provider connects to the ExpressRoute port. The LOA-CFA is provided by Azure and given to the service provider or partner, who provisions the fibre cross connects. The customer works with the partner to provision ExpressRoute circuits using the connections the partner has already set up; the service provider owns the physical connections to Microsoft. Customers can create ExpressRoute circuits with the following bandwidths: 50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, and 10 Gbps.
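For readers who automate their Azure estate, the following is a rough sketch of requesting a Partner-model circuit with the azure-mgmt-network SDK for Python. The resource group, location, provider name, peering location, and bandwidth are placeholder values, and the exact model classes can vary between SDK versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ExpressRouteCircuit,
    ExpressRouteCircuitSku,
    ExpressRouteCircuitServiceProviderProperties,
)

# Subscription ID and all circuit properties below are placeholders.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

circuit = ExpressRouteCircuit(
    location="australiaeast",
    sku=ExpressRouteCircuitSku(
        name="Standard_MeteredData", tier="Standard", family="MeteredData"
    ),
    service_provider_properties=ExpressRouteCircuitServiceProviderProperties(
        service_provider_name="Megaport",  # the ExpressRoute partner
        peering_location="Sydney",         # Azure peering location
        bandwidth_in_mbps=1000,            # one of the bandwidths listed above
    ),
)

poller = client.express_route_circuits.begin_create_or_update(
    "example-rg", "example-er-circuit", circuit
)
result = poller.result()
# The service key is what you hand to the partner to complete provisioning.
print(result.service_key, result.service_provider_provisioning_state)
```

Once the partner provisions their side, the circuit’s provider provisioning state moves to Provisioned and the peerings can be configured.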

Azure also has a unique connectivity model called Azure ExpressRoute Local. Somewhat of an outlier when stacked up against the other CSPs’ connectivity models, ExpressRoute Local allows Azure customers to connect at a specific Azure peering location. Connecting to the one or two local regions associated with that peering location provides the added benefit of unlimited data usage. For a more detailed overview of ExpressRoute Local, read our recent blog post: Avoid Cloud Bill Shock with Azure ExpressRoute Local and Megaport.

Azure has two types of peerings that we can directly compare — apples to apples — with AWS’s private VIF and public VIF.

Private Peering — Private peering supports connections from a customer’s on-premises / private data centre to access their Azure Virtual Networks (VNets).

  • Access Azure compute services, primarily virtual machines (IaaS) and cloud services (PaaS), that are deployed within a virtual network (VNet).
  • Private IPs are used for the peering (RFC 1918). Customers will need a /28 broken into two /30s: one for the primary and one for the secondary peer (see the addressing sketch after this list).
  • Private peering is supported over logical connections. BGP is established between the customer’s on-premises devices and Microsoft Enterprise Edge routers (MSEEs).

Note: The location of the MSEEs that you will peer with is determined by the peering location that was selected during the provisioning of the ExpressRoute.

  • Depending on the selected ExpressRoute SKU, a single private peer can support 10+ VNets across geographical regions.
  • The maximum number of prefixes supported per peering is 4000 by default; up to 10,000 can be supported on the premium SKU.
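To make the private peering addressing requirement concrete, here is a small Python sketch (standard library only) that carves the two /30 peering subnets out of a /28. The /28 shown is a placeholder from RFC 1918 space; by convention the customer side takes the first usable address of each /30 and the MSEE takes the second.

```python
import ipaddress

# Placeholder /28 from RFC 1918 space set aside for the ExpressRoute peerings.
block = ipaddress.ip_network("10.0.0.0/28")

# Split the /28 into /30s and use the first two:
# one for the primary peering and one for the secondary peering.
primary, secondary, *spare = block.subnets(new_prefix=30)

for label, subnet in (("primary", primary), ("secondary", secondary)):
    customer_ip, msee_ip = list(subnet.hosts())  # two usable addresses per /30
    print(f"{label}: {subnet} -> customer {customer_ip}, MSEE {msee_ip}")
```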

Microsoft Peering — Microsoft peering is used to connect to Azure public resources such as blob storage. Connectivity to Microsoft online services (Office 365 and Azure PaaS services) occurs through Microsoft peering.

  • Office 365 was created to be accessed securely and reliably via the internet. Approval from Microsoft is required to receive Office 365 routes over ExpressRoute.
  • Route filters must be created before customers will receive routes over Microsoft peering. BGP communities are used with route filters to select which services’ routes are received.

With Azure ExpressRoute, there is only one type of gateway: VNet Gateway.

VNet Gateway: A VNet gateway is a logical routing function similar to AWS’s VGW. ExpressRoute VNet Gateway is used to send network traffic on a private connection, using the gateway type ‘ExpressRoute’. This is also referred to as an ExpressRoute gateway.

With a standard Azure ExpressRoute, multiple VNets can be natively attached to a single ExpressRoute circuit in a hub and spoke model, making it possible to access resources in multiple VNets over a single circuit. In this way the standard Azure ExpressRoute offering is considered comparable to the AWS Direct Connect Gateway model.

Google Cloud Interconnect

The last, but certainly not least, CSP private connectivity offering that we will cover is GCP Cloud Interconnect. Luckily for us, GCP keeps its connectivity and components pretty straightforward, and it is arguably the simplest of the three.

Like AWS and Azure, GCP offers both Partner Interconnect and Dedicated Interconnect models.

Dedicated Interconnect: GCP Dedicated Interconnect provides a direct physical connection between your on-premises network and Google’s network. Similar to the other CSPs, you take the LOA-CFA from GCP and work with your colo provider/DC operator to set up the cross connect.

  • A 10 Gbps or 100 Gbps interface dedicated to the customer
  • IPv4 link-local addressing (peer addresses must be selected from the 169.254.0.0/16 range; see the sketch after this list)
  • LACP, even if you’re using a single circuit
  • EBGP-4 with multi-hop
  • 802.1Q VLANs
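As a small illustration of the link-local requirement, this standard-library sketch checks that a pair of BGP peer addresses falls inside 169.254.0.0/16. The addresses themselves are placeholders; in practice Google allocates the peering subnet for each VLAN attachment.

```python
import ipaddress

LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

def valid_peer_pair(cloud_router_ip: str, on_prem_ip: str) -> bool:
    """Both BGP peer addresses must come from the IPv4 link-local range."""
    return all(
        ipaddress.ip_address(ip) in LINK_LOCAL
        for ip in (cloud_router_ip, on_prem_ip)
    )

# Placeholder addresses for the Cloud Router and on-premises sides of the session.
print(valid_peer_pair("169.254.67.201", "169.254.67.202"))  # True
print(valid_peer_pair("10.0.0.1", "169.254.67.202"))        # False
```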

Partner Interconnect: Like Dedicated Interconnect, Partner Interconnect provides connectivity between your on-premises network and your VPC network using a provider or partner. A Partner Interconnect connection is ideal if your data centre is in a separate facility from the Dedicated Interconnect colocation, or if your data needs don’t warrant an entire 10 Gbps connection.

The only gateway option for GCP Interconnect is the Google Cloud Router.

Google Cloud Router: A Cloud Router dynamically exchanges routes between your VPC network and your on-premises network using Border Gateway Protocol (BGP).

Not only is a GCP Cloud Router restricted to a single VPC, but it is also restricted to a single region of that VPC. Think of this as a one-to-one mapping or relationship. On top of the Google Cloud Router are the peering setups, which GCP terms VLAN attachments.

VLAN Attachments: Also known as an interconnect attachment, a VLAN attachment is a logical connection between your on-premises network and a single region in your VPC network. Unlike Azure and AWS, GCP only offers a private peering option over their interconnect. In order to reach GCP’s public services and APIs, you can set up Private Google Access over your interconnect to accommodate your on-premises hosts. However, Private Google Access does not enable G Suite connectivity. To access G Suite, you would need to set up a connection/peering to Google via an internet exchange (IX for short), or access these services via the internet.
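A rough sketch of that one-to-one model, assuming the google-cloud-compute SDK for Python: it creates a regional Cloud Router and then a Partner Interconnect VLAN attachment on top of it. The project, network, region, and resource names are placeholders, and field names can differ slightly between library versions.

```python
from google.cloud import compute_v1

PROJECT = "example-project"   # placeholder project ID
REGION = "us-west1"           # the Cloud Router (and attachment) is regional

# 1. A Cloud Router in a single VPC network and region (the one-to-one mapping).
router = compute_v1.Router(
    name="example-cloud-router",
    network=f"projects/{PROJECT}/global/networks/example-vpc",
    bgp=compute_v1.RouterBgp(asn=16550),  # ASN 16550 is used for Partner Interconnect
)
compute_v1.RoutersClient().insert(
    project=PROJECT, region=REGION, router_resource=router
).result()

# 2. A Partner Interconnect VLAN attachment bound to that Cloud Router.
attachment = compute_v1.InterconnectAttachment(
    name="example-vlan-attachment",
    type_="PARTNER",
    router=f"projects/{PROJECT}/regions/{REGION}/routers/example-cloud-router",
    edge_availability_domain="AVAILABILITY_DOMAIN_1",
    admin_enabled=False,  # typically enabled after the partner completes provisioning
)
compute_v1.InterconnectAttachmentsClient().insert(
    project=PROJECT, region=REGION, interconnect_attachment_resource=attachment
).result()
```

The attachment’s pairing key is then shared with the partner (for example, Megaport) to complete the connection on their side.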

Key Takeaways

Let’s wrap things up with some highlights. Hopefully, you can now walk away with some additional insight and a better understanding of the private connectivity options offered by these CSPs.

AWS Direct Connect has multiple types of gateways and connectivity models that can be leveraged to reach public and private resources from your on-premises infrastructure. Whether that takes the form of a Transit Gateway associated with a Direct Connect gateway, or a one-to-one mapping of a private VIF landing on a VGW, will be completely determined by your particular case and future plans.

With Azure ExpressRoute, you can configure both a Microsoft peering (to access public resources) and a private peering over the single logical layer 2 connection. Each ExpressRoute circuit includes two configurable links (a primary and a secondary). With the standard SKU, you can connect multiple VNets within the same geographical region to a single ExpressRoute circuit, and you can configure the premium SKU for global reach, allowing connectivity from any VNet in the world to the same ExpressRoute circuit.

GCP keeps its interconnect easily understandable. Over GCP’s interconnect, you can only natively access private resources. If connectivity to GCP public resources (such as cloud storage) is required, you can configure Private Google Access for your on-premises resources. This does not include GCP’s SaaS offering, G Suite. In order to reach G Suite, you can always ride the public internet or configure a peering to Google using an IX. With the GCP Cloud Router having a 1:1 mapping with a single VPC and region, the peerings (or rather, VLAN attachments) are created on top of the Cloud Router. This model is similar to AWS Direct Connect when creating a VIF directly on a VGW.

Seeing how you made it this far, I’ll end by telling you that Megaport can not only connect you to all three of these CSPs (and many others), but we can also enable cloud-to-cloud connectivity between the providers without the need to back-haul that traffic to your on-premises infrastructure. 

So, please feel free to reach out to us. We would love to hear about your cloud journey, the challenges you are facing, and how we can help. 

• • •

More on Mastering Multicloud

This blog post is the first in a series that accompanies Megaport’s webinar, Network Transformation: Mastering Multicloud, in which we dive into not only the private connectivity models, but also the cost components and the SLAs surrounding these CSPs’ private connectivity offerings.

Kyle Moreta
Solutions Architect

