When NSX-T 3.1 was released a few days ago, the feature I was most looking forward to was the ability to share the Geneve overlay transport VLAN between ESXi transport nodes and Edge transport nodes.
Before NSX-T 3.1, in a collapsed design where Edge transport nodes ran on ESXi transport nodes (in other words, NSX-T Edge VMs deployed to an NSX-T-prepared ESXi cluster), you could not share the same transport (TEP) VLAN unless you dedicated separate physical uplinks to Edge traffic and ESXi host underlay traffic. The reason is that Geneve encapsulation/decapsulation happened only at physical uplink ingress/egress, and that point was skipped on the intra-host datapath between the Edge VM and the host TEP VMkernel port.
This was quite annoying because the two transport VLANs need to route to each other at full jumbo frame size (MTU > 1600). In lab scenarios you therefore had to have an additional router taking care of that, and I have seen issues multiple times caused by a misconfigured router MTU.
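As a back-of-the-envelope check of why the transport VLAN needs MTU > 1600, the fixed Geneve overhead on a standard 1500-byte inner payload can be worked out as follows (figures are illustrative; the actual Geneve option length varies per packet):

```python
# Rough Geneve overhead math for a 1500-byte inner payload.
INNER_PAYLOAD = 1500   # standard guest MTU
INNER_ETH = 14         # inner Ethernet header
OUTER_IP = 20          # outer IPv4 header
OUTER_UDP = 8          # outer UDP header
GENEVE_BASE = 8        # Geneve header without options

fixed_overhead = INNER_ETH + OUTER_IP + OUTER_UDP + GENEVE_BASE
min_transport_mtu = INNER_PAYLOAD + fixed_overhead
print(min_transport_mtu)  # 1550, before any variable Geneve options
```

That lands at 1550 bytes before variable-length Geneve options, which is why the recommendation is at least 1600 on the transport path; in practice many labs simply run 9000 jumbo frames end to end.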
After upgrading my lab to NSX-T 3.1 I was eager to test it.
Here are the steps I used to migrate to single transport VLAN:
- The collapsed Edge Nodes will need to use trunk uplinks created as NSX-T logical segments. My Edge Nodes used regular VDS port groups, so I renamed the old ones in vCenter and created new trunk segments in NSX-T Manager.
- (Optional) Create a new TEP IP Address Pool for the Edges. You can obviously use the ESXi host IP Pool, as they will now share the same subnet, or you can use static IP addressing. I opted for a new IP Address Pool with the same subnet as my ESXi host TEP IP Address Pool but a different range, so I can easily distinguish host and Edge TEP IPs.
- Create a new Edge Uplink Profile with its transport VLAN set to match the ESXi transport VLAN.
- Now repeat this process for each Edge node: edit the node in the Edge Transport Node Overview tab and change its Uplink Profile, IP Pool and uplinks to the ones created in steps #1, #2 and #3. Refresh and observe the tunnel health.
- Clean up the now-unused Uplink Profile, IP Pool and VDS port groups.
- Deprovision the now-unused Edge transport VLAN from the physical switches and from the physical router interface.
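The pool layout from the optional step above can be sanity-checked with a short script. The subnet and ranges below are made-up lab values, not the ones from my environment; the point is that both ranges sit in one shared transport subnet but never overlap:

```python
import ipaddress

# Hypothetical lab values: one shared transport subnet, two disjoint
# ranges so host and Edge TEP addresses are easy to tell apart.
subnet = ipaddress.ip_network("172.16.10.0/24")
host_tep_range = [ipaddress.ip_address(f"172.16.10.{i}") for i in range(11, 50)]
edge_tep_range = [ipaddress.ip_address(f"172.16.10.{i}") for i in range(100, 120)]

# Both ranges must live in the same subnet (single transport VLAN)...
assert all(ip in subnet for ip in host_tep_range + edge_tep_range)
# ...but must not overlap, so TEP ownership stays obvious at a glance.
assert not set(host_tep_range) & set(edge_tep_range)
print("pool layout OK")
```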
During the migration I saw only one or two dropped pings, but that was it. If you see tunnel issues, try putting the Edge node briefly into NSX Maintenance Mode.
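The per-node edit in step #4 can also be scripted against the NSX-T Manager API instead of clicking through the UI. The sketch below only builds the relevant fragment of a transport-node PUT body; the profile and pool IDs are placeholders, and the field names follow the Manager API's `host_switch_spec` as I understand it, so verify them against your NSX-T version's API reference before use:

```python
import json

def edge_tep_patch(uplink_profile_id: str, ip_pool_id: str) -> dict:
    """Build the fragment of a transport-node body that swaps the
    uplink profile and TEP pool. Field names are assumptions based on
    the NSX-T Manager API's StandardHostSwitchSpec -- verify them."""
    return {
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [{
                "host_switch_profile_ids": [{
                    "key": "UplinkHostSwitchProfile",
                    "value": uplink_profile_id,   # placeholder ID
                }],
                "ip_assignment_spec": {
                    "resource_type": "StaticIpPoolSpec",
                    "ip_pool_id": ip_pool_id,     # placeholder ID
                },
            }],
        },
    }

body = edge_tep_patch("uplink-profile-shared-vlan", "pool-edge-tep")
print(json.dumps(body, indent=2))
```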
9 thoughts on “NSX-T 3.1: Sharing Transport VLAN between Host and Edge Nodes”
What about the uplink interfaces of the T0? Are they connected to segments created within the “nsx-vlan-transportzone”, or did you create another transport zone for those uplink segments? In other words, are the Edge nodes connected to a third transport zone which is hosting some uplink segments?
I mean, there is no way to specify a VLAN ID on a T0 uplink interface, so some other segments must be used for the T0 uplink interfaces.
This is completely transparent to existing transport zones. So no need to change anything on your Tier-0 GWs.
If the Edge TEPs can now exist in the same VLAN as the Host TEPs, why did you create a TEP Pool for VLAN 11?
It was used before the migration.
In my lab, when I create a VLAN-backed segment it does not create a port group in vCenter. It works fine in the case of an overlay segment.
Wrong VLAN Transport zone? Check its scope.
Yeah, I found the issue: I missed adding the VLAN Transport Zone to the transport node profile in my Ansible variable file! Thanks Tomas
Thanks for this post. I’d been tearing my hair out for a couple of evenings using a DvS-backed VLAN port group, but only got it working when I realised that the transport VLAN has to be carried by NSX on the overlay first.
On the edge appliance, what port groups did you connect the nic interfaces to after the first management nic?