New Networking Features in VMware Cloud Director 10.2

From a networking perspective, the 10.2 release of VMware Cloud Director is a massive one. The NSX-V vs NSX-T feature gap has been closed, and in some cases NSX-T backed Org VDCs now provide more networking functionality than NSX-V backed ones. The UI has been redesigned with new dedicated Networking sections; however, some new features are currently available only in the API.
Let me dive straight in so you do not miss any of them.

NSX-T Advanced Load Balancing (Avi) support

This is a big feature that deserves its own blog post; please read it here. In short, NSX-T backed Org VDCs can now consume network load balancing services provided by the new NSX-T ALB / Avi.

Distributed Firewall and Data Center Groups

Another big feature combines Cross VDC networking, shared networks and distributed firewall (DFW) functionality. The service provider must first create a Compute Provider Scope. This is basically a tag – an abstraction of compute fault domains / availability zones – and is defined either at the vCenter Server level or at the Provider VDC level.

The same is done for each NSX-T Manager, where you define a Network Provider Scope.

Once that is done, the provider can create Data Center Group(s) for a particular tenant. This is done from the new networking UI in the Tenant portal by selecting one or multiple Org VDCs. The Data Center Group then becomes a routing domain, with networks spanning all Org VDCs that are part of the group, a single egress point (the Org VDC Edge Gateway) and the distributed firewall.
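
For automation-minded providers, the same can be done through the CloudAPI using the /cloudapi/1.0.0/vdcGroups endpoint (API version 35). Treat the payload below as a minimal sketch only: the URNs are placeholders and the exact set of required fields (for example additional org or site references inside participatingOrgVdcs) should be verified against the API schema.

POST https://{{host}}/cloudapi/1.0.0/vdcGroups

{
    "name": "tenant-dc-group",
    "description": "Data Center Group spanning two Org VDCs",
    "orgId": "urn:vcloud:org:<org-id>",
    "networkProviderType": "NSX_T",
    "type": "LOCAL",
    "participatingOrgVdcs": [
        { "vdcRef": { "id": "urn:vcloud:vdc:<first-org-vdc-id>" } },
        { "vdcRef": { "id": "urn:vcloud:vdc:<second-org-vdc-id>" } }
    ]
}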

Routed networks are automatically added to a Data Center Group if they are connected to the group's Org VDC Edge Gateway. Isolated networks must be added explicitly. An Org VDC can be a member of multiple Data Center Groups.

If you want the tenant to use the DFW, it must be explicitly enabled and the tenant Organization must be granted the corresponding rights. The DFW supports IP Sets and Security Groups containing network objects, whose rules apply to all connected VMs.
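
As an illustration, an IP Set scoped to the whole Data Center Group can be created via the /cloudapi/1.0.0/firewallGroups endpoint and then referenced in DFW rules. Again, this is only a hedged sketch – the field names (type, ipAddresses, ownerRef) follow the API version 35 schema but should be double-checked against the documentation, and the URN is a placeholder:

POST https://{{host}}/cloudapi/1.0.0/firewallGroups

{
    "name": "web-servers",
    "type": "IP_SET",
    "ipAddresses": [
        "172.16.100.0/24",
        "fd13:5905:f858:e502::/64"
    ],
    "ownerRef": {
        "id": "urn:vcloud:vdcGroup:<data-center-group-id>"
    }
}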

Note that only one Org VDC Edge Gateway can be added to a Data Center Group. This is due to the limitation that an NSX-T logical segment can be attached to and routed via only a single Tier-1 GW. The Tier-1 GW is in active / standby mode and can theoretically span multiple sites, but only a single instance is active at a time (no multi-egress).

VRF-Lite Support

VRF-Lite is an object that allows slicing a single NSX-T Tier-0 GW into up to 100 independent virtual routing instances. "Lite" means that while these instances are very similar to a real Tier-0 GW, they support only a subset of its features: routing, firewalling and NAT.

In VCD, when a tenant requires direct connectivity to an on-prem WAN/MPLS with fully routed networks (instead of just NAT-routed ones), the provider in the past had to dedicate a whole external network backed by a Tier-0 GW to that tenant. Now the same can be achieved with a VRF, which greatly enhances the scalability of this feature.

There are some limitations:

  • a VRF inherits its parent Tier-0's deployment mode (HA A/A vs A/S, Edge Cluster), BGP local ASN and graceful restart setting
  • all VRFs share their parent's uplink physical bandwidth
  • VRF uplinks and peering with upstream routers must be configured individually, using VLANs from a VLAN trunk or unique Geneve segments (if the upstream router is another Tier-0)
  • as an alternative to the previous point, EVPN can be used, which allows a single MP-BGP session for all VRFs and upstream routers with data-plane VXLAN encapsulation; the upstream routers obviously must support EVPN
  • the provider can import into VCD as an external network either the parent Tier-0 GW or its child VRFs, but not both (no mixed mode)

IPv6

VMware Cloud Director now supports dual-stack IPv4/IPv6 (both for NSX-V and NSX-T backed networks). This currently must be enabled via the API (version 35), either during network creation or via a PUT on the OpenAPI network object, by specifying:

"enableDualSubnetNetwork": true

In the same payload you also have to add the 2nd subnet definition.

 

PUT https://{{host}}/cloudapi/1.0.0/orgVdcNetworks/urn:vcloud:network:c02e0c68-104c-424b-ba20-e6e37c6e1f73

...
    "subnets": {
        "values": [
            {
                "gateway": "172.16.100.1",
                "prefixLength": 24,
                "dnsSuffix": "fojta.com",
                "dnsServer1": "10.0.2.210",
                "dnsServer2": "10.0.2.209",
                "ipRanges": {
                    "values": [
                        {
                            "startAddress": "172.16.100.2",
                            "endAddress": "172.16.100.99"
                        }
                    ]
                },
                "enabled": true,
                "totalIpCount": 98,
                "usedIpCount": 1
            },
            {
                "gateway": "fd13:5905:f858:e502::1",
                "prefixLength": 64,
                "dnsSuffix": "",
                "dnsServer1": "",
                "dnsServer2": "",
                "ipRanges": {
                    "values": [
                        {
                            "startAddress": "fd13:5905:f858:e502::2",
                            "endAddress": "fd13:5905:f858:e502::ff"
                        }
                    ]
                },
                "enabled": true,
                "totalIpCount": 255,
                "usedIpCount": 0
            }
        ]
    }
...
    "enableDualSubnetNetwork": true,
    "status": "REALIZED",
...

 

The UI will still show only the primary subnet and IP address. The allocation of the secondary IP to a VM must be done either from its guest OS or via automated network assignment (DHCP, DHCPv6 or SLAAC). DHCPv6 and SLAAC are only available for NSX-T backed Org VDC networks, but for NSX-V backed networks you could use IPv6 as the primary subnet (with an IPv6 pool) and IPv4 with DHCP addressing as the secondary.

To enable IPv6 capability in NSX-T, the provider must enable it in the Global Networking Config.
VCD automatically creates ND (Neighbor Discovery) Profiles in NSX-T for each NSX-T backed Org VDC Edge GW, and via the /1.0.0/edgeGateways/{gatewayId}/slaacProfile API the tenant can set the Edge GW profile to either DHCPv6 or SLAAC. For example:
PUT https://{{host}}/cloudapi/1.0.0/edgeGateways/urn:vcloud:gateway:5234d305-72d4-490b-ab53-02f752c8df70/slaacProfile
{
    "enabled": true,
    "mode": "SLAAC",
    "dnsConfig": {
        "domainNames": [],
        "dnsServerIpv6Addresses": [
            "2001:4860:4860::8888",
            "2001:4860:4860::8844"
        ]
    }
}

And here is the corresponding view from NSX-T Manager:

And finally a view on deployed VM’s networking stack:

DHCP

Speaking of DHCP, NSX-T supports two modes: Network mode, where the DHCP service is attached directly to a network and needs an IP from that network, and Edge mode, where the DHCP service runs on the Tier-1 GW loopback address. VCD now supports both modes (via API only). DHCP in Network mode works for isolated networks and is portable with the network (meaning the network can be attached to or disconnected from the Org VDC Edge GW without DHCP service disruption). However, before you can deploy a DHCP service in Network mode you need to specify a Services Edge Cluster (for Edge mode that is not needed, as the service runs on the Tier-1 Edge GW). The cluster definition is done via a Network Profile at the Org VDC level.
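
As a hedged example, switching an Org VDC network to DHCP in Network mode could look roughly like this via the /cloudapi/1.0.0/orgVdcNetworks/{networkId}/dhcp endpoint. The mode values EDGE and NETWORK come from API version 35; the remaining fields (ipAddress for the DHCP service address in Network mode, leaseTime, dhcpPools) and the addresses are illustrative and should be validated against the schema:

PUT https://{{host}}/cloudapi/1.0.0/orgVdcNetworks/urn:vcloud:network:<network-id>/dhcp

{
    "mode": "NETWORK",
    "ipAddress": "172.16.100.254",
    "leaseTime": 86400,
    "dhcpPools": [
        {
            "enabled": true,
            "ipRange": {
                "startAddress": "172.16.100.100",
                "endAddress": "172.16.100.199"
            }
        }
    ]
}

For Edge mode you would use "mode": "EDGE" and no service IP address; for Network mode, remember that the Services Edge Cluster must already be defined in the Org VDC Network Profile.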

In order to use DHCPv6, the network must be configured in Network mode and attached to an Org VDC Edge GW whose SLAAC profile is set to DHCPv6 mode.

Other Features

  • vSphere Distributed Switch support for NSX-T segments (also known as Converged VDS); this feature was already available in VCD 10.1.1+
  • NSX-T IPSec VPN support in UI
  • NSX-T L2VPN support, API only
  • port group backed external networks (used for NSX-V backed Org VDCs) can now consist of multiple port groups from the same vCenter Server instance (useful if you have a vDS per cluster, for example)
  • /31 external network subnets are supported
  • Org VDC Edge GW object now supports metadata

NSX-V vs NSX-T Feature Parity

Let me conclude with an updated chart comparing NSX-V and NSX-T features in VMware Cloud Director 10.2. New additions are highlighted in green.

33 thoughts on “New Networking Features in VMware Cloud Director 10.2”

  1. Thanks for the post. VRF seems to solve the issue of having to deploy additional edges each time I deploy a dedicated external network, saving on resources. Question: will VRF be incorporated in the next V-T migration tool release at all?

      1. Hi Tom, is it possible to use Data Center Groups when the VDCs belong to different Provider VDCs? All Provider VDCs are backed by their individual vCenter but all by the same NSX-T Manager.

  2. Hi Tom!
    Do you have any references about multi-cloud in 10.2 with NSX-T?
    I need to know if it is possible to connect 2 VCD sites to the same NSX-T Manager, or whether I need one NSX-T Manager for each VCD site.
    Thanks!

  3. vCD 10.2 UI feedback reported to GSS:
    – When opening a VMRC console, it seems to have lost the VM name from the VMRC header (tested with VMRC 11.1 & 12). It does still show the name when using the web console.
    – When listing VMs in an Org VDC, changing the sort criteria or removing columns works fine. The custom settings are, however, reverted to default when switching to another VDC. It would be useful for the changes to persist across VDCs if possible.

      1. Update 21/04/21 – Upgrading to 10.2.2 makes no difference to the console bug issue; it will apparently be fixed in the next VMRC 12.1 release according to GSS.

  4. Hi Tom, have you come across any issues with LB console proxy health monitors to vCD 10.2 cells causing VM console session disconnects? A suggested GSS workaround is to remove LB monitoring of the console proxy port being used for now, which I am hesitant to do. Thanks.

    1. Update, if anyone else experiences the same issue: the workaround for the console disconnects seems to be to add the following setting (it will be resolved permanently in 10.2.2):

      /opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n consoleproxy.cores.max -v "0"

  5. Hello,

    Thanks for this article!
    Is it possible to create a Data Center Group for two VDCs stored on two vCloud Directors which are linked by pairing (a multi-site case)?
    For an NSX-T network with vCloud Director multi-site, do you need one NSX-T per site?

    thank you !

  6. Thanks for all the good information!
    One question about oVDC Edge Logs: in the last table, you wrote “Requires multitenant proxying of Edge node logs”. What do you mean by this? We are looking for a way to make the logs of each tenant’s edge available to them.
    Is there any solution? We thought about heavy filtering in LogInsight, automated via API, to forward logs to a public syslog server of the tenant inside his oVDC, which is published via DNAT and secured by FW rules to only be reachable by our LogInsight server, but this seems like overkill and an unreliable solution.

    1. There is no easy way to do so. The issue is that the only way to filter the Edge Node GW FW logs is by rule ID, so it is hard to map log entries to a particular tenant unless you allocate the whole Edge Node to them.

  7. Hi Tom, it is not the topic of this post, but I need your help to figure out an issue with removing a NIC from a running VM. In vCD 10.1.2 it looks like this is not supported, and the Terraform page says it is a known bug. I would like to confirm whether it is really a known bug and in which release/patch it is fixed. We need to perform vNIC removal while the VM is running, otherwise the operation we need to apply takes very long. Any advice appreciated.

    https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/vapp_vm

  8. Hello Tomas,

    Sorry, the question is related to the vCD appliance itself, not VDCs

    Is it possible to use dual-stack IPv4/IPv6 on the Appliance interface? Using ./vami_config_net just gives the options:


    6) IP Address Allocation for eth0
    7) IP Address Allocation for eth1
    Enter a menu number [0]: 6
    Configure an IPv4 address for eth0? y/n [n]:

    Also, there is a flag shown when setting the IPv4 address:

    net.ipv6.conf.eth1.disable_ipv6 = 1

    Is it possible to change the option and use IPv6 on eth0?

    Many thanks for your attention.

    1. While VAMI might support IPv6, you really need IPv4 to communicate with NSX Manager or the Avi Controllers, so there is no point in using IPv6. Also, the VCD service listens only on the primary IP.

      1. Hello Tomas,

        You mentioned the VAMI might support IPv6. Is there a way to set up IPv6? I found nothing about it.

  9. Hi Tom,
    with T0 External Networks, Org VDC networks must have RFC 1918 ranges and source NAT is used for external connectivity. How about IPv6? Can tenants use /64 publicly routable networks for their Org VDC networks without NAT on the Edge GW?

          1. Okay, thank you for the clarification. So does IPv6 work at all in some shared T0 environments? I guess source NAT from IPv6 private to IPv6 public addresses is not possible.

          2. Correct. Unless you configure the network advertisement directly in NSX-T, there is no point in using IPv6 on a shared Tier-0. We do plan to enhance this functionality in the future though.
