Poor Man’s VXLAN – VLAN Bridging

I was wondering about the following interesting use case: there is a need to extend a physical layer 2 network across layer 3 boundaries and to connect physical servers and virtual machines without routing. Is this possible with today's technology?

VXLAN enables the creation of virtual layer 2 networks that cross layer 3 boundaries. It is already possible today with vSphere 5.1 and vCloud Networking and Security to create virtual layer 2 wires between two clusters that do not share any layer 2 networks. However, only virtual machines can be connected to such a network. What if I would like to connect physical devices as well? vShield Edge allows communication between virtual and external (and potentially physical) networks, but this is accomplished with layer 3 routing and does not allow the same IP segment to be used on both the virtual and the external network.
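To illustrate the underlying idea of a layer 2 segment stretched over a layer 3 underlay, here is a minimal sketch using the native VXLAN support in the Linux kernel (not vSphere's implementation); the interface names, VNI, multicast group, and addresses are assumptions for the example:

```shell
# Sketch: a VXLAN segment on plain Linux (kernel 3.7+), hypothetical names/IPs.
# VNI 5001 is carried in UDP across the layer 3 underlay; both endpoints join
# the same multicast group for flooding and MAC learning.
ip link add vxlan0 type vxlan id 5001 group 239.1.1.1 dev eth0 dstport 4789
ip addr add 192.168.1.10/24 dev vxlan0
ip link set vxlan0 up
# A host on the far side of the layer 3 boundary runs the same commands with
# its own address (e.g. 192.168.1.11/24), and both then share one flat
# 192.168.1.x/24 segment even though no VLAN spans the two sites.
```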

Arista and Brocade have already announced physical VTEP devices (VXLAN tunnel end points) that will allow bridging between VXLAN and physical networks, but I wanted to see if I could achieve the same with a Vyatta virtual bridge.


I was using vCloud Director 5.1 with vSphere 5.1 for my testing, but the same should also work with VXLAN networks created manually with vShield Manager. I will not go through all the prerequisites needed to get VXLAN working; I will go straight to the bridging test.

      1. I created a VXLAN Organization vDC network in vCloud Director with the IP range 192.168.1.x/24, without a gateway, and deployed VM1 there. This creates a port group on the virtual distributed switch(es) with one VM in it.
      2. I created a port group with VLAN 191 which simulated my physical external network. In this VLAN I could have physical or virtual servers in the same IP range as the VXLAN, 192.168.1.x/24. I deployed VM2 in it with an IP address from that range.
      3. Now, in order to enable communication between VM1 and VM2, I needed to set up a bridge. For this I used the open source Vyatta firewall. I registered, downloaded the newest vyatta-livecd-virt_VC6.4-2012.05.31_i386.iso, and installed it in a VM with 2 vCPUs and two vnics representing the network interfaces I was going to bridge. Note: I was managing Vyatta over the console, so if you want to manage it over the network, add another management vnic.
      4. The bridge setup: log in to Vyatta, enter configure mode, and run:
        set interfaces bridge br0
        set interfaces ethernet eth0 bridge-group bridge br0
        set interfaces ethernet eth1 bridge-group bridge br0
        commit
        save
        run show bridge
        where eth0 and eth1 are the interfaces we want to bridge (the run prefix executes the operational show bridge command from configure mode).

      5. In order to bridge Ethernet frames from one LAN segment to the other, Vyatta needs to be able to listen to all frames in the port group. That means the port groups containing Vyatta's eth0 and eth1 interfaces (and only those) must have promiscuous mode enabled in the vSwitch security settings.
      6. Vyatta basically forwards frames from one bridged interface to the other. It does not change the source MAC address of the frame (as proxy ARP would), which means we also need to allow Forged Transmits, as the source MAC of a forwarded frame does not correspond to the MAC address of the vnic interface.
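If the two port groups live on a standard vSwitch rather than a distributed switch, the security settings from steps 5 and 6 can also be applied from the ESXi shell with esxcli; this is a sketch with hypothetical port group names (on a distributed switch these policies are edited in the vSphere Client instead):

```shell
# Sketch for a *standard* vSwitch; "vyatta-eth0"/"vyatta-eth1" are assumed names.
# Enable promiscuous mode and forged transmits only on the bridged port groups.
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name "vyatta-eth0" --allow-promiscuous true --allow-forged-transmits true
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name "vyatta-eth1" --allow-promiscuous true --allow-forged-transmits true
# Verify the effective policy:
esxcli network vswitch standard portgroup policy security get --portgroup-name "vyatta-eth0"
```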

That’s it. We are all set and can test with ping whether it works. I also ran some simple performance tests with netperf, and it was clear the performance is CPU bound. On a host running only the Vyatta VM I was pushing 600 Mbps in and out with very high CPU utilization. As I had vShield App installed on that host as well, I had to exclude the Vyatta VM from the vShield App scope; otherwise I got only half the throughput, because the vShield App service VM had to inspect every packet coming in and out and itself ran at high CPU utilization.
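For reference, a throughput test like the one above can be reproduced with a netperf TCP stream across the bridged segment; the addresses and port here are assumptions for the example:

```shell
# Hypothetical setup: VM2 (VLAN side) runs the netperf server,
# VM1 (VXLAN side) drives traffic through the Vyatta bridge.
# On VM2:
netserver -p 12865
# On VM1: a 60-second TCP stream test to VM2's assumed address
netperf -H 192.168.1.20 -p 12865 -t TCP_STREAM -l 60
# Watch the Vyatta VM's CPU in esxtop on the host while the test runs
# to confirm the bridge is the bottleneck.
```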


2 thoughts on “Poor Man’s VXLAN – VLAN Bridging”

  1. Disclaimer: I work for Intel as a Solution Architect in the LAN Access Group.

    If you are using the Intel Ethernet Converged Network Adapter X520 or X540, you can try enabling RSS to spread the VXLAN UDP traffic across multiple cores. We have worked with VMware to enable RSS on our 10Gb CNAs so that VXLAN traffic can be distributed among multiple hardware Rx queues. Basically, we modified how VMware NetQueue and VMDq work to help address the Rx overhead seen when using VXLAN. RSS is only available for 10Gb Intel Ethernet CNAs on vSphere 5.1 and can be enabled by unloading and reloading the module with vmkload_mod ixgbe RSS="4" for each port on the server that will be used for VXLAN traffic.

    Check out the Performance Brief from VMware…

    1. I have read the whitepaper. VXLAN’s up to 30% CPU overhead due to the lack of hardware offloads is pretty scary. I hope Intel will soon come up with silicon that can help. Nicira’s STT tunneling protocol has an advantage over VXLAN on this point.
