I have recently had multiple discussions about NSX and its Layer 2 bridging capabilities with various service providers. Let me summarize some important points and considerations about when to use which option.
Let’s start with a simple question – why would you need Layer 2 bridging? Here are some use cases:
- The end user wants to burst their application to the cloud but keep certain components on-site; because it is a legacy application, it cannot be re-IP’d or requires single-subnet communication.
- The service provider is building a new cloud offering next to a legacy business (colocation, managed services) and wants to enable existing customers to migrate or extend their workloads to the cloud seamlessly (no IP address changes).
NSX offers three ways to bridge Layer 2 networks.
Layer 2 VPN
This is a proprietary VPN solution which creates an encrypted tunnel across IP networks between Edge Services Gateways, stitching together one or more L2 networks. These Edge Gateways can be deployed in different management domains, and there is also the option of deploying a standalone Edge, which does not require an NSX license. This is great for the cloud bursting use case. I have blogged about L2VPN in the past here.
While this option is very flexible, it is also quite CPU intensive for both the L2 VPN client and server Edge VMs. This option provides up to 2 Gbps of throughput.
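To make the L2 VPN option more concrete, here is a minimal sketch of composing the XML body that would be pushed to the NSX-v REST API to configure an L2 VPN server on an Edge. The endpoint path and element names (listenerIp, listenerPort, encryptionAlgorithm) are assumptions based on the NSX-v API guide and may differ between NSX versions – verify against your release before use.

```python
# Sketch only: builds an L2 VPN server configuration payload for NSX-v.
# Endpoint and element names are assumptions; check your NSX API guide.
import xml.etree.ElementTree as ET


def build_l2vpn_server_payload(listener_ip: str, listener_port: int,
                               encryption: str = "null-md5") -> bytes:
    """Build a minimal <l2Vpn> XML body, intended for
    PUT /api/4.0/edges/{edge-id}/l2vpn/config (hypothetical path)."""
    root = ET.Element("l2Vpn")
    ET.SubElement(root, "enabled").text = "true"
    server = ET.SubElement(root, "server")
    cfg = ET.SubElement(server, "configuration")
    ET.SubElement(cfg, "listenerIp").text = listener_ip
    ET.SubElement(cfg, "listenerPort").text = str(listener_port)
    # NULL-MD5 trades encryption for throughput, as mentioned for the
    # migration workaround later in this article.
    ET.SubElement(cfg, "encryptionAlgorithm").text = encryption
    return ET.tostring(root)


payload = build_l2vpn_server_payload("203.0.113.10", 443)
```

The payload would then be sent with an authenticated HTTPS PUT to the NSX Manager; the client-side Edge (or standalone Edge) gets a mirrored configuration pointing at the server’s listener IP.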
NSX Native L2 Bridging
An L2 bridge is created in the ESXi VMkernel by deploying a Logical Router control VM. The control VM is used only for bridge configuration and for pinning the bridge to a particular ESXi host. As the bridging happens in the VMkernel, it is possible to achieve impressive line-rate (10 Gb) throughput.
It is possible to bridge only a VXLAN-based logical switch with a VLAN-based port group. The same physical uplinks must be utilized, which means the VLAN port group must be on the same vSphere Distributed Switch (vDS) that is prepared with the VXLAN VTEP and where the VXLAN logical switch port groups are created.
The above fact prohibits a scenario where you would have an Edge cluster with multiple pairs of uplinks connected to separate vDS switches – one for VLAN-based traffic and the other for VXLAN traffic. You cannot create an NSX native L2 bridge instance between two vDS switches.
This is especially important for the colocation use case mentioned at the beginning. In order to use the L2 bridge, the customer VLAN must be brought to the Edge cluster’s top-of-rack pair of switches.
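A native bridge instance simply pairs a VXLAN logical switch (virtual wire) with a VLAN-backed dvPortgroup on the Distributed Logical Router. The sketch below builds such a bridging payload; the REST path and element names (virtualWire, dvportGroup) are assumptions based on the NSX-v API and may vary by release, and the object IDs are placeholders.

```python
# Sketch only: builds a native L2 bridge payload for the DLR.
# Element names and the REST path are assumptions; object IDs are placeholders.
import xml.etree.ElementTree as ET


def build_bridge_payload(name: str, virtual_wire_id: str,
                         dvportgroup_id: str) -> str:
    """Build a <bridges> XML body, intended for
    PUT /api/4.0/edges/{dlr-id}/bridging/config (hypothetical path).
    Note: the virtual wire and the dvPortgroup must live on the SAME vDS."""
    root = ET.Element("bridges")
    bridge = ET.SubElement(root, "bridge")
    ET.SubElement(bridge, "name").text = name
    ET.SubElement(bridge, "virtualWire").text = virtual_wire_id
    ET.SubElement(bridge, "dvportGroup").text = dvportgroup_id
    return ET.tostring(root).decode()


payload = build_bridge_payload("customer-a-bridge",
                               "virtualwire-10", "dvportgroup-20")
```

Because the configuration references a single DLR instance, the bridge is pinned to whichever ESXi host runs the active control VM – which is why the customer VLAN must reach that host’s top-of-rack switches.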
If this is not possible, as a workaround the service provider can use the L2 VPN option. It is even possible to run both the L2 VPN server and client Edges on the same host, connected through a transit VXLAN network, where one Edge is connected to a trunk with VLAN networks from one vDS and the other to a trunk with VXLAN networks on another vDS. Unfortunately, this has a performance impact (even if NULL-MD5 encryption is used) and should be used only for temporary migration use cases.
Hardware VTEP Bridging
The last bridging option is a new feature of NSX 6.2. It is possible to extend a VXLAN logical switch all the way to a compatible hardware device (a switch from a VMware partner) that acts as a Layer 2 gateway and bridges the logical switch with a VLAN network. The device performing the hardware VTEP function is managed from NSX via the OVSDB protocol, while the control plane is still managed by the NSX Controllers. More details are in the following white paper.
As this option requires new, dedicated, NSX-compatible switching hardware, it is more suitable for permanent use cases.
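To illustrate the OVSDB control channel, here is a sketch of the kind of JSON-RPC transaction (per RFC 7047) a controller issues to program a remote MAC address into the hardware VTEP’s hardware_vtep database. The hardware_vtep schema and its Ucast_Macs_Remote table come from the Open vSwitch VTEP schema; the UUID references and values below are placeholders for illustration, not what NSX Controllers actually send.

```python
# Sketch only: an RFC 7047 OVSDB "transact" request against the
# hardware_vtep schema. UUIDs and addresses are illustrative placeholders.
import json


def build_mac_program_request(mac: str, remote_vtep_ip: str,
                              logical_switch_uuid: str,
                              locator_uuid: str) -> str:
    """Build a JSON-RPC request inserting a row into Ucast_Macs_Remote,
    telling the hardware VTEP which remote VTEP owns the given MAC."""
    request = {
        "method": "transact",
        "params": [
            "hardware_vtep",  # database name from the OVS VTEP schema
            {
                "op": "insert",
                "table": "Ucast_Macs_Remote",
                "row": {
                    "MAC": mac,
                    "ipaddr": remote_vtep_ip,  # tunnel endpoint for this MAC
                    # OVSDB encodes references as ["uuid", "<uuid>"] pairs;
                    # these UUIDs are placeholders.
                    "logical_switch": ["uuid", logical_switch_uuid],
                    "locator": ["uuid", locator_uuid],
                },
            },
        ],
        "id": 1,
    }
    return json.dumps(request)


req = build_mac_program_request("00:50:56:aa:bb:cc", "192.0.2.50",
                                "0f3b0a6c-0000-0000-0000-000000000001",
                                "0f3b0a6c-0000-0000-0000-000000000002")
```

In the real deployment this exchange runs over a persistent connection between the NSX Controllers and the partner switch, so the data plane stays in hardware while the control plane stays with NSX.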