Fun and Games with Overlay Tunnels, Part 2: How to Set Up a Working 3-Tier Hierarchy

 

Recently my teaching colleagues from VMware sent me this set of questions:

"Can I create a full, global mesh even using different hubs? Gateways are not an option in this scenario.

In other words, I have:

  • AMER DC with Hub Cluster
  • EMEA DC with Hub Cluster
  • APAC DC with Hub Cluster

 And I have profiles that use dynamic E2E VPN set to use the regional hub.

 Can we, in this topology, get, essentially, a full overlay mesh between Edges directly? Like, can I actually build a tunnel from, say, a Tokyo Edge to a Chicago Edge even with different hubs? 

 Will secondary hubs in the VPN config provide the meet-in-the-middle connectivity in order to create the E2E VPN?  My understanding of the hub cluster order in the Cloud VPN config is that we simply use the first cluster, but if that is unavailable, we use the next cluster in the list."

My first assumption was:

In my opinion, (static or dynamic) E2E VPN works only when there is a continuous set of permanent overlay tunnels of at most two hops between the two Edge devices.

So you would need a multi-tier hierarchy (see the VMLIVE session titled "Design Principles for Scaling a Global SD-WAN Network").

Or you create/use a separate (non-SD-WAN) backbone from an MSP, a telco, AWS or Azure, MPLS, …

After all, SD-WAN is a last-mile and mid-mile technology.

If you have a high-speed, lossless, low-latency, low-jitter backbone, any overlay would be a disadvantage (CPU cycles on the Edges for encryption/decryption, reduced MTU due to the additional tunnel headers and encapsulation).
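To make the MTU penalty concrete, here is a small back-of-the-envelope calculation. The overhead byte counts below are rough, illustrative assumptions for a typical IPsec-over-UDP overlay, not VMware-published values; the exact numbers vary by implementation and cipher.

```python
# Illustrative only: per-packet overhead of a typical encrypted
# overlay (outer IP + UDP + tunnel header + ESP/crypto overhead).
# All byte counts below are assumptions for the sake of the example.

LINK_MTU = 1500  # classic Ethernet MTU on the underlay

OVERHEAD = {
    "outer IPv4 header": 20,
    "outer UDP header": 8,
    "tunnel header (assumed)": 8,
    "ESP header + IV (assumed)": 16,
    "ESP trailer/ICV + padding (assumed)": 30,
}

effective_mtu = LINK_MTU - sum(OVERHEAD.values())
print(f"Underlay MTU: {LINK_MTU}")
for name, size in OVERHEAD.items():
    print(f"  - {size:3d} bytes {name}")
print(f"Effective overlay MTU: ~{effective_mtu} bytes")
```

Under these assumed numbers the payload MTU shrinks by roughly 80 bytes, which is exactly the kind of tax you avoid by running natively over a high-quality backbone.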

In the mentioned VMLIVE presentation the presenter showed examples of a multi-tier hierarchy where Branch-to-Branch tunnels are only possible within the same region.

The original presentation also defined a full mesh of permanent tunnels between all regional hubs, with the mentioned restriction that there is no way to have Branch-to-Branch tunnels between branches of different regions.

 
Basically the presenter was correct in stating the restriction: all regional Branch Edges refuse to install branch routes belonging to Edges of other regions, because they have no direct or 2-hop overlay to those destinations.
 

  • The OFC (Overlay Flow Control) table on the Orchestrator only shows the connected branch networks directly learned from the corresponding Edge.
  • Branch Edges do not show any non-regional branch routes.
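The acceptance rule can be modelled in a few lines. This is a hypothetical sketch of the behaviour described above, not VeloCloud's actual code; the node names and hop counts are invented to mirror the question's Tokyo/Chicago example.

```python
# Hypothetical model of the route-acceptance rule: a Branch Edge only
# installs an overlay route whose next hop is reachable within at most
# 2 permanent overlay hops. All names below are illustrative.

# Permanent overlay hop counts as seen from a Tokyo Branch Edge:
# a direct tunnel to the regional hub, the hub full mesh at 2 hops,
# and a branch behind a remote hub at 3 hops.
overlay_hops = {
    "Hub-APAC": 1,        # direct tunnel from the branch
    "Hub-AMER": 2,        # via Hub-APAC (hub full mesh)
    "Hub-EMEA": 2,
    "Edge-Chicago": 3,    # branch behind Hub-AMER
}

def installs_route(next_hop: str) -> bool:
    """Install only routes whose overlay next hop is <= 2 hops away."""
    return overlay_hops.get(next_hop, 99) <= 2

print(installs_route("Hub-APAC"))      # True  - regional hub route
print(installs_route("Edge-Chicago"))  # False - other-region branch
```

This is exactly why the Tokyo Edge drops the Chicago branch route: the advertising Edge sits 3 overlay hops away.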

But SD-WAN is networking, and as a networking veteran I know that you can often find ways to overcome such restrictions.

In order to test that behaviour and to find a possible workaround, I set up a multi-tier lab with 2 regions, 2 Gateways, 9 Edges and 4 different profiles.
 

 

The idea was to re-instate (re-learn) all branch routes on a hub device that is reachable by all other Branch Edges via a direct or 2-hop overlay, since only routes reachable via a direct or 2-hop overlay are considered valid.

So on the two DC-Hub Edges I implemented, I enabled OSPF towards a local L3 switch and thereby re-instated the branch routes learned on one DC Hub into the other one.
 
First you need to enable the redistribution of overlay routes into OSPF.
 
Those redistributed routes are advertised into OSPF as E1 routes:

In the OFC you need to enable the redistribution of those OSPF external routes back into the overlay (1) and make sure that Hub routes are preferred over Router (underlay) routes (2).

In the above OFC route table you can then see that those branch routes are also advertised by the DC-Hub Edges.

Note: The same mechanism should also work with BGP.
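The round trip a branch prefix takes can be sketched as a simple next-hop rewrite. This is a conceptual model of the trick, with invented names and data structures; it is not how the Edge software represents routes internally.

```python
# Hypothetical sketch of the re-instatement trick: a branch prefix
# learned via the overlay on DC-Hub-1 is exported to OSPF as an E1
# route, learned by DC-Hub-2 over the L3 switch, and re-advertised
# into the overlay with DC-Hub-2 as the new next hop.

route = {"prefix": "10.3.202.0/24", "next_hop": "Edge-B2",
         "proto": "overlay"}

# Step 1: DC-Hub-1 redistributes the overlay route into OSPF (E1).
ospf_route = {**route, "proto": "ospf-e1", "advertiser": "DC-Hub-1"}

# Step 2: DC-Hub-2 learns the E1 route via the L3 switch and
# redistributes it back into the overlay; the overlay next hop is
# now DC-Hub-2 itself.
reinstated = {"prefix": ospf_route["prefix"],
              "next_hop": "DC-Hub-2", "proto": "overlay"}

# Every branch reaches a DC hub within 2 overlay hops, so the
# re-advertised route now passes the acceptance rule everywhere.
print(reinstated)
```

The key effect is the rewritten next hop: the prefix is unchanged, but it is now advertised from a device that every Branch Edge can reach within 2 overlay hops.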

Still, the DC Hub is reachable from any Edge via a 2-hop overlay:

VPC-A1> trace 10.1.201.2 (VPC-DC)
trace to 10.1.201.2, 8 hops max, press Ctrl+C to stop
 1   10.2.201.1   28.072 ms  1.780 ms  1.321 ms            (VCE-A1)
 2   100.64.112.2   84.274 ms  7.221 ms  8.106 ms    OVL 1 (Hub-A2)
 3   100.64.121.2   13.552 ms  10.555 ms  12.114 ms  OVL 2 (Hub-DC-1)
 4   10.0.201.2   27.505 ms  14.941 ms  17.847 ms          (R-DC)
 5   *10.1.201.2   11.181 ms (ICMP type:3, code:3, Destination port unreachable)

Now the re-instated routes are accepted into the local route table of our regional Branch Edges, and we can reach branches in another region via a 3-hop overlay:

VPC-A1> trace 10.3.202.79 (VPC-B2)
trace to 10.3.202.79, 8 hops max, press Ctrl+C to stop
 1   10.2.201.1   4.118 ms  1.293 ms  1.840 ms               (VCE-A1)
 2   100.64.112.2   348.161 ms  10.031 ms  13.470 ms   OVL 1 (Hub-A2)
 3   100.64.113.2   265.984 ms  39.569 ms  17.451 ms   OVL 2 (Hub-B)
 4   100.64.104.2   102.274 ms  54.166 ms  64.786 ms   OVL 3 (VCE-B2)
 5   *10.3.202.79   186.271 ms (ICMP type:3, code:3, Destination port unreachable) 
 
Also, in SD-WAN routing is done per hop, and every hop makes its routing decision independently of the other hops. In our case the Edge routing table assumes that the destination branch network in the other region is reachable via the DC Hub, so it sends the packet via the overlay to the next hop, which is one of the regional Hubs. There the route lookup says to forward it via the overlay to the destination regional Hub, and from there it is delivered via an established overlay tunnel to the destination Edge.
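That hop-by-hop behaviour can be simulated with a toy forwarding walk along the 3-hop path from the traceroute above. The per-node tables below are invented to mirror the lab topology, not real Edge configuration.

```python
# Toy hop-by-hop forwarding walk: each node performs its own
# independent route lookup for the destination prefix. The routing
# tables are invented to mirror the lab traceroute (VCE-A1 -> Hub-A2
# -> Hub-B -> VCE-B2).

routing_tables = {
    "VCE-A1": {"10.3.202.0/24": "Hub-A2"},   # re-instated route
    "Hub-A2": {"10.3.202.0/24": "Hub-B"},    # hub-to-hub full mesh
    "Hub-B":  {"10.3.202.0/24": "VCE-B2"},   # hub-to-branch tunnel
    "VCE-B2": {"10.3.202.0/24": "local"},    # directly connected LAN
}

def forward(src: str, prefix: str) -> list[str]:
    """Follow each node's independent lookup until the prefix is local."""
    path, node = [src], src
    while routing_tables[node][prefix] != "local":
        node = routing_tables[node][prefix]
        path.append(node)
    return path

print(forward("VCE-A1", "10.3.202.0/24"))
# -> ['VCE-A1', 'Hub-A2', 'Hub-B', 'VCE-B2']
```

No single node needs to know the whole path; the 3-hop overlay emerges purely from each hop's local lookup, which is why the per-Edge 2-hop restriction could be sidestepped at all.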

In that way I could show that in networking you can nearly always find a clever way to achieve things the implementers assumed to be impossible.
 

Thanks to Vladimir Franca de Sousa, who asked the right question regarding redundancy; here is an additional note on that topic:

Both DC Hubs need to be running, because the OSPF routes are learned on one Hub and forwarded to the other; learning routes from a neighbor and sending them back to that same neighbor does not work due to the split-horizon rule.
So if you need redundancy, you either need 3 Hubs in the DC (Vladimir's idea) or a second set of 2 Hubs in another DC (my idea).
