Posts

3 (or even 4) Different Ways to Connect Your Data Center to SD-WAN

As you might already know, we can design our SD-WAN-to-DC connectivity in various ways. Besides the most common connectivity via a Hub Cluster, we can also use 2 or more separate Hubs to connect the SD-WAN network to our DC. The third way, less common in enterprise environments, is to use Gateways in Partner Gateway mode and connect the DC via the Handoff Interface. There is also a fourth possibility: go from a Cloud Gateway via IPsec and treat the DC as an NSD (Non SD-WAN Destination). I want to explain the different methods, including the advantages and disadvantages of each of them.

1. Using a Hub Cluster
This method uses 2-8 Edges in parallel, and each branch Edge will connect to only one of the Hubs of that cluster. The implementation therefore needs an internal L3 switch or router, typically with eBGP connectivity, to forward routing information (a control plane function) and data packets between the Hubs inside the cluster; a minimal configuration sketch follows below. Details on the setup can be foun...
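To make the eBGP part concrete, here is a minimal Python sketch that renders an FRR-style BGP stanza for the internal DC L3 switch, with one neighbor per Hub in the cluster. All ASNs, addresses, and hub names are hypothetical examples of mine, not values from the post.

# Render an FRR-style eBGP configuration for the DC L3 switch that peers
# with every Hub in the cluster, so routes learned from one Hub can be
# forwarded to the others. All values below are made-up examples.
DC_ASN = 65000
HUB_PEERS = {  # hub name -> (peer IP, hub-side ASN), hypothetical values
    "HUB-1": ("10.0.0.1", 65001),
    "HUB-2": ("10.0.0.2", 65001),
}

def render_bgp_config(dc_asn: int, peers: dict[str, tuple[str, int]]) -> str:
    lines = [f"router bgp {dc_asn}"]
    for name, (ip, asn) in peers.items():
        lines.append(f" neighbor {ip} remote-as {asn}")
        lines.append(f" neighbor {ip} description {name}")
    return "\n".join(lines)

print(render_bgp_config(DC_ASN, HUB_PEERS))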

Fun and Games with Overlay Tunnels: Part 1, Dynamic Edge2Edge

Last week a VMware instructor colleague sent me an interesting question which came up during an SD-WAN course: "If I set up Profile A using Gateways, and Profile B using Hubs, would we set up a dynamic VPN from Edge A with Profile A and Edge B with Profile B?" My first thought was that such a configuration cannot work, but then I decided to try it in my lab first, before answering that question with a categorical NO. So I defined 3 different branch profiles: BR-3 is using the Gateway for E2E (with no Branch-to-Hub connectivity), whereas BR-7 is using the DC Hub for E2E. When starting communication from BR-7, we see that no dynamic E2E tunnel is established:

VPCS> ping 10.1.103.75
84 bytes from 10.1.103.75 icmp_seq=1 ttl=61 time=49.366 ms
84 bytes from 10.1.103.75 icmp_seq=2 ttl=61 time=16.159 ms
84 bytes from 10.1.103.75 icmp_seq=3 ttl=61 time=23.028 ms
84 bytes from 10.1.103.75 icmp_seq=4 ttl=61 time=20.930 ms
84 bytes from 10.1.103.75 icmp_seq=5 ttl=61 time=19.896 ms
VPC...
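The lab output is truncated here, but one simple way to watch whether a dynamic edge-to-edge tunnel eventually comes up is to monitor the RTT from the branch host: when a direct tunnel replaces the path via Gateway or Hub, the latency typically drops. The Python sketch below is my own helper, not part of the post; the target address comes from the output above, and the halving threshold is an arbitrary heuristic.

# Ping the remote branch host repeatedly and report when the RTT drops
# sharply, hinting that a direct dynamic E2E tunnel replaced the longer path.
import re
import subprocess
import time

TARGET = "10.1.103.75"  # remote branch host from the lab output above

def ping_rtt_ms(host: str) -> float | None:
    """Return the RTT of a single ping in milliseconds, or None on loss."""
    out = subprocess.run(["ping", "-c", "1", host],
                         capture_output=True, text=True).stdout
    match = re.search(r"time=([\d.]+) ms", out)
    return float(match.group(1)) if match else None

baseline = ping_rtt_ms(TARGET)
print(f"baseline RTT via Gateway/Hub: {baseline} ms")
for _ in range(60):  # watch for roughly a minute
    time.sleep(1)
    rtt = ping_rtt_ms(TARGET)
    if baseline and rtt and rtt < baseline / 2:  # arbitrary heuristic
        print(f"RTT dropped to {rtt} ms -- a direct tunnel may have come up")
        break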

Tips and Tricks: (Part 1) Build Permanent Overlay Tunnels to 2 (or 3) Data Centers

In a recent course, when we spoke about Cloud VPN and how to build permanent tunnels to a data center, students asked the very valid question: "How can we build permanent tunnels if we have 2 data centers?", as normally only one DC will have permanent tunnels established. When defining 2 different Hubs, only the preferred Hub (Order 1) will get a permanent overlay tunnel established. But there is a second possibility to select Hubs, with a different order: when we choose Branch-to-Branch VPN via Hub and reverse the order of our 2 Hub locations, we now have permanent tunnels to both DCs (DC1-Cluster and BR-3). Alternatively, we can also set up Cloud VPN in this way: again, the branches will set up permanent tunnels to both DCs (DC1-Cluster and BR-3). The caveat here is that we should have either a cluster or high availability chosen for the Hub selected as Backhaul, as we do not have a second one for backup; a small model of this selection logic follows below. And by combining both setups we can even establish permanent tunnels to 3 DCs ...
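To summarize the selection logic, here is a tiny Python model of my own (not VMware code) of which Hubs a branch keeps permanent tunnels to: the Order-1 Hub from the Hub list, plus the Hub chosen as Branch-to-Branch backhaul. The names mirror the lab (DC1-Cluster and BR-3).

# Model of the permanent-tunnel selection described above (my own sketch):
# only the preferred (Order 1) Hub is permanent, plus the B2B-via-Hub
# backhaul Hub if one is configured.
def permanent_tunnels(hub_order: list[str], backhaul_hub: str | None) -> set[str]:
    tunnels = {hub_order[0]}  # preferred Hub (Order 1)
    if backhaul_hub:
        tunnels.add(backhaul_hub)  # backhaul Hub also stays permanent
    return tunnels

# Hub list ordered with DC1 first, backhaul pointing at the other DC:
print(permanent_tunnels(["DC1-Cluster", "BR-3"], backhaul_hub="BR-3"))
# -> {'DC1-Cluster', 'BR-3'}: permanent tunnels to both DCs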

NAT, PAT, What?!: Part 2: IPv6 and NAT

Some weeks ago on LinkedIn there was a discussion about NAT and IPv6, and one of the engineers argued that, as the IPv6 standard does not define NAT and PAT, using NAT and/or PAT with IPv6 is not a good implementation practice. Now that VMware SD-WAN has full IPv6 support in underlay and overlay, it uses NAT, and specifically SNAT, with IPv6 as well. The implemented IPv6 NAT features are:
- Default NAT66 on the VCG
- DIA NAT66 at the Edge (many-to-one)
- 1:1 NAT66 and Port Forwarding
- Policy NAT66 on Edge and VCG
- SNAT66 when forwarding to the Internet underlay
And in my opinion this is a valid and sound decision; a small sketch of the prefix-translation idea behind NAT66 follows below. But let's look at possible alternative solutions to NAT in IPv6. The standard defines a kind of source routing with the use of the IPv6 Routing Header to force traffic via specific intermediate hops. Unfortunately, outside a provider Segment Routing environment, where a slightly different header is used, that method is a very bad idea from a security point of view....
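To illustrate the basic idea behind NAT66, here is a minimal Python sketch of an NPTv6-style prefix translation: swap the internal /64 prefix for a routable one and keep the interface identifier. This is my own naive illustration (real NPTv6 per RFC 6296 also makes the rewrite checksum-neutral), not the VMware SD-WAN implementation; both prefixes are ULA/documentation examples.

# Naive NAT66 prefix translation: replace the inside /64 prefix with the
# outside prefix while preserving the 64 host bits (interface identifier).
import ipaddress

INSIDE = ipaddress.ip_network("fd00:abcd:1234::/64")  # internal ULA prefix
OUTSIDE = ipaddress.ip_network("2001:db8:5678::/64")  # routable example prefix

def nat66(addr: str) -> ipaddress.IPv6Address:
    a = ipaddress.IPv6Address(addr)
    assert a in INSIDE, "address must come from the inside prefix"
    host_bits = int(a) & ((1 << 64) - 1)  # keep the interface identifier
    return ipaddress.IPv6Address(int(OUTSIDE.network_address) | host_bits)

print(nat66("fd00:abcd:1234::1f"))  # -> 2001:db8:5678::1f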

NAT, PAT, What?!: Part 1: Policy NAT

When teaching or discussing VMware SD-WAN features, even with experienced people, I often notice that when it comes to NAT, PAT, and specifically Policy NAT, no one pays attention to the feature unless it is needed. So let me explain Policy NAT in this blog from the SD-WAN, aka customer, side. Let's start with the involved components.

Partner Gateway
A Partner Gateway connects overlay customer/segment traffic via the Handoff Interface to separate per-customer/segment connectivity, using a mechanism known as VRF Lite. But you can also use that mechanism to hand off all customer traffic to the same destination. However, customers often use private, non-unique addresses in their SD-WAN environment. In that case we need a Source NAT (SNAT) mechanism to translate the customer addresses to a unique, routable address before reaching the shared destination network; a small model of this follows below. But where is that SNAT address defined? A Service Provider will typically avoid customer-specific NATting on its Provider ...
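Here is a tiny Python model of mine (not VMware code) showing why the SNAT is needed on the handoff: two customers reuse the same private source address, and a per-customer/segment translation table maps them to unique, routable addresses before they reach the shared destination network. All addresses are documentation examples.

# Per-customer/segment SNAT: overlapping private sources are rewritten to
# unique routable addresses (many-to-one; a real PAT would also track ports).
SNAT_POOL = {  # (customer, segment) -> unique SNAT address, hypothetical
    ("CustomerA", "Global"): "198.51.100.10",
    ("CustomerB", "Global"): "198.51.100.20",
}

def snat(customer: str, segment: str, src_ip: str) -> str:
    # src_ip is ignored: every source in this customer/segment maps to
    # the same unique address (many-to-one SNAT).
    return SNAT_POOL[(customer, segment)]

# Both customers use 10.1.1.5, yet the shared network sees distinct sources:
print(snat("CustomerA", "Global", "10.1.1.5"))  # 198.51.100.10
print(snat("CustomerB", "Global", "10.1.1.5"))  # 198.51.100.20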

Fighting at the forefront: Early 5.0.0.x experiences

5.0.0.0 came around with a ton of exciting new features I was eager to test:
- IPv6 (dual stack) in underlay and overlay
- Better Gateway throughput
- Data Loss Prevention in SASE
However: new version, new bugs. I first upgraded my company Orchestrator to 5.0.0.0 (after creating a snapshot to be able to roll back to 4.5). This worked quite well; only after going through all parts of the new UI did I find out that I could not reach the "General Settings" in the new UI, although the content was perfectly visible in the old UI. Fortunately, a 5.0.0.1 upgrade solved that problem. Another strange item, still present in version 5.0.0.1, is the fact that in both the old and the new UI our Edges now show 0 % memory utilization, which is either incredibly efficient new code or simply a bug. Next I tried IPv6. As my Internet provider at home still does not support IPv6, I used the new 5.0 IPv6 features to build IPv6 connectivity using the dual-stack overlay and the fact...