Azure Global Vnet Peering: Across the world in milliseconds


My new role on the Azure Fabric team had me playing with Azure Virtual Networks (commonly referred to as vNets).  More specifically, I was investigating vNet peering and global peering.

Virtual network peering enables you to seamlessly connect two Azure virtual networks. Once peered, the virtual networks appear as one, for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same virtual network, through private IP addresses only.  Global vNet peering is vNet peering when each virtual network resides in a different Azure region.  Read Azure Virtual Network peering at docs.microsoft.com for more information.
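As a rough illustration, here is how a single peering might be set up with the Azure CLI. The resource group and vNet names below are placeholders, not taken from this post. Note that a peering must be created from both sides before traffic flows:

```shell
# Hypothetical sketch: peer two vNets with the Azure CLI.
# "my-rg", "vnet-west", and "vnet-east" are placeholder names.

# vnet-west -> vnet-east
az network vnet peering create \
  --name west-to-east \
  --resource-group my-rg \
  --vnet-name vnet-west \
  --remote-vnet vnet-east \
  --allow-vnet-access

# vnet-east -> vnet-west (the reverse direction must also be created)
az network vnet peering create \
  --name east-to-west \
  --resource-group my-rg \
  --vnet-name vnet-east \
  --remote-vnet vnet-west \
  --allow-vnet-access
```

If the vNets live in different resource groups or subscriptions, `--remote-vnet` takes the full resource ID of the remote vNet instead of its name.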

Microsoft does a good job of explaining what connectivity should look like with peered vNets in the same region:

The network latency between virtual machines in peered virtual networks in the same region is the same as the latency within a single virtual network. The network throughput is based on the bandwidth that’s allowed for the virtual machine, proportionate to its size. There isn’t any additional restriction on bandwidth within the peering.  The traffic between virtual machines in peered virtual networks is routed directly through the Microsoft backbone infrastructure, not through a gateway or over the public Internet.

So, peered vNets are said to be low-latency, high-bandwidth connections where the only bandwidth restrictions are those imposed by the virtual machine, and the traffic stays off the public Internet.  Also, peered vNets in the same region have the same latency as a single virtual network.  Great! What about globally peered vNets?  What is the latency there (given that bandwidth is mostly determined by the virtual machine size)?

The Test

To get a basic idea of the latency between globally peered vNets, I created four virtual networks, each in its own Azure region:

  • West US
  • East US
  • East Asia
  • Southeast Australia

I then globally peered each vNet with the other three, creating a mesh virtual network that resembles the following:

[Figure: global vNet peering mesh topology]

Then, I created one Windows Server 2016 Datacenter Azure virtual machine (Standard DS1 v2 [1 vCPU, 3.5 GB memory]) in each region and connected it to that region's vNet.  I then opened the Windows Firewall on each virtual machine to allow inbound IPv4 ICMP packets.

Peered virtual networks are not transitive, which is why I created a mesh topology: each vNet must be peered directly with every other vNet it needs to reach.  Also, a peering is configured on one vNet pointing at the other, so connectivity between a pair of vNets requires two peering resources, one created on each side.  This is why I show two connections for each regional vNet, one for each direction, giving each regional vNet three pairs of connections.  With all this configured, I have created one large virtual network that spans four Azure regions.
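The full mesh over four vNets could be scripted along these lines with the Azure CLI, creating the twelve peering resources (two per pair of vNets). This is a sketch with hypothetical resource group and vNet names, not the script used for this post:

```shell
# Hypothetical sketch: build a full peering mesh over four vNets.
# All names below are placeholders; assumes all vNets share one
# resource group, so --remote-vnet can take a name rather than an ID.
RG=my-rg
VNETS="vnet-westus vnet-eastus vnet-eastasia vnet-australiasoutheast"

for src in $VNETS; do
  for dst in $VNETS; do
    # Skip self-peering; every ordered pair gets its own peering resource.
    [ "$src" = "$dst" ] && continue
    az network vnet peering create \
      --name "${src}-to-${dst}" \
      --resource-group "$RG" \
      --vnet-name "$src" \
      --remote-vnet "$dst" \
      --allow-vnet-access
  done
done
```

Looping over ordered pairs creates both directions of each peering automatically, which is exactly the "pair of connections" per link described above.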

I then pinged the virtual machine in each globally peered virtual network, sending 10 packets per run and varying the ICMP payload size across three runs: 128, 1024, and 1472 bytes.  The following table shows the results.
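For reference, the sweep described above boils down to commands like the following, run from one of the Windows VMs; the target address is a placeholder for the remote VM's private IP:

```shell
# Hypothetical sketch of the ping sweep, run from a Windows VM.
# 10.1.0.4 is a placeholder for the remote VM's private IP address.
# On Windows, -n sets the packet count and -l the ICMP payload size in bytes.
ping -n 10 -l 128 10.1.0.4
ping -n 10 -l 1024 10.1.0.4
ping -n 10 -l 1472 10.1.0.4
```

A 1472-byte payload is the largest that fits in a standard 1500-byte Ethernet MTU once the 8-byte ICMP header and 20-byte IPv4 header are added, so the three sizes cover small, medium, and maximum unfragmented packets.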

Source               Destination            128 bytes   1024 bytes   1472 bytes
East US              West US                64ms        64ms         64ms
West US              East US                63ms        64ms         63ms
East Asia            West US                145ms       145ms        145ms
West US              East Asia              145ms       145ms        145ms
Southeast Australia  West US                148ms       148ms        148ms
West US              Southeast Australia    148ms       148ms        148ms
East Asia            Southeast Australia    122ms       122ms        123ms
Southeast Australia  East Asia              123ms       123ms        123ms

I did not perform a full test from East US.  I expect the RTT (round-trip time) to be similar to the results from West US to East Asia or West US to Southeast Australia, give or take a few milliseconds (I may go back and test that for completeness): around 140 to 150 milliseconds.

I noticed that the first time I used one or two of the new globally peered vNet connections, the first packet timed out, but this was not consistent across all of them.  I suspect this was a race condition, since I was using the network immediately after creating it.  After that first dropped packet, the connection never dropped anything again, so packet loss does not appear to be something you need to worry about.

I’m not going to interpret or classify these results.  The ping command is hardly the be-all and end-all of diagnostic tools, and each application has different network requirements.

Comparison

Here are some other tests for comparison.

This is what I see when I ping the default gateway on my home Wi-Fi (as mediocre and basic as it gets) from my laptop.

Source   Destination      128 bytes   1024 bytes   1472 bytes
Laptop   Default Gateway  8ms         8ms          8ms

Then, I pinged several public DNS servers in various locations (I am based in Redmond, WA).

IP Address       Location    128 bytes   1024 bytes   1472 bytes
151.202.0.85     New York    94ms        96ms         98ms
68.87.74.162     Miami       93ms        94ms         95ms
50.7.154.3       London      168ms       174ms        171ms
203.173.39.131   Adelaide    236ms       367ms        249ms
202.55.11.100    Hong Kong   222ms       225ms        227ms

I was pleasantly surprised with the results, especially considering that the RTTs to DNS servers in Australia and Hong Kong are considerably slower than the RTTs over the globally peered vNets.

That wraps up this session on Azure globally peered virtual networks.  Get yourself a trial subscription (here) and give it a try.

–Mike Stephens
