Provider Networking in VMware Cloud Director

This is going to be a bit longer than usual and more of a summary / design option type blog post where I want to discuss provider networking in VMware Cloud Director (VCD). By provider networking I mean the part that must be set up by the service provider and that is then consumed by tenants through their Org VDC networking and Org VDC Edge Gateways.

With the introduction of NSX-T we also need to dive into the differences between NSX-V and NSX-T integration in VCD.

Note: The article is applicable to VMware Cloud Director 10.2 release. Each VCD release is adding new network related functionality.

Provider Virtual Datacenters

Provider Virtual Datacenter (PVDC) is the main object that provides compute, networking and storage resources for tenant Organization Virtual Datacenters (Org VDCs). When a PVDC is created it is backed by vSphere clusters that should be prepared for NSX-V or NSX-T. Also during the PVDC creation the service provider must select which Network Pool is going to be used – VXLAN backed (NSX-V) or Geneve backed (NSX-T). A PVDC can thus be backed by either NSX-V or NSX-T (not both at the same time, and not neither), and the backing cannot be changed after the fact.

Network Pool

Speaking of Network Pools – they are used by tenants to create on-demand routed/isolated networks. Network Pools are independent from PVDCs and can be shared across multiple PVDCs (of the same backing type). There is an option to automatically create a VXLAN network pool during PVDC creation, but I would recommend against using it as you lose the ability to manage the transport zone backing the pool on your own. A VLAN backed network pool can still be created but can be used only in a PVDC backed by NSX-V (the same applies to the very legacy port group backed network pool, now available only via API). Individual Org VDCs can (optionally) override the Network Pool assignment of their parent PVDC.

External Networks

Deploying virtual machines without the ability to connect to them via network is not that useful. External networks are VCD objects that Org VDC Edge Gateways connect to in order to reach the outside world – internet, dedicated direct connections or the provider’s service area. An external network has one or more associated subnets and IP pools that VCD manages and uses to allocate external IP addresses to connected Org VDC Edge Gateways.

There is a major difference in how external networks are created for NSX-V backed PVDCs and for NSX-T backed ones.

Port Group Backed External Network

As the name suggests, these networks are backed by an existing vCenter port group (or multiple port groups) that must be created upfront and is usually backed by a VLAN (but could be a VXLAN port group as well). These external networks are (currently) supported only in NSX-V backed PVDCs. An Org VDC Edge Gateway connected to this network is represented by an NSX-V Edge Service Gateway (ESG) with an uplink in this port group. The uplinks are assigned IP address(es) from the allocated external IPs.

A directly connected Org VDC network attached to the external network can also be created (only by the provider); VMs connected to such a network are placed directly in the port group.

Tier-0 Router Backed External Network

These networks are backed by an existing NSX-T Tier-0 Gateway or Tier-0 VRF (note that if you import a Tier-0 VRF into VCD you can no longer import its parent Tier-0, and vice versa). The Tier-0/VRF must be created upfront by the provider with the correct uplinks and routing configuration.

Only Org VDC Edge Gateways from an NSX-T backed PVDC can be connected to such an external network and they are going to be backed by a Tier-1 Gateway. The Tier-1 to Tier-0/VRF transit network is auto-plumbed by NSX-T using the 100.64.0.0/16 subnet. The allocated external network IPs are not explicitly assigned to any Tier-1 interface. Instead, when a service (NAT, VPN, Load Balancer) on the Org VDC Edge Gateway starts using an assigned external address, it will be advertised by the Tier-1 GW to the linked Tier-0 GW.

There are two main design options for the Tier-0/VRF.

The recommended option is to configure BGP on the Tier-0/VRF uplinks with the upstream physical routers. The uplinks are just redundant point-to-point transits. IPs assigned from any external network subnet will be automatically advertised (when used) via BGP upstream. When the provider runs out of public IPs, an additional subnet is simply assigned to the external network. This makes the design very flexible, scalable and relatively simple.

Tier-0/VRF with BGP
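For illustration, this provider-side BGP configuration is done in NSX-T, not in VCD. A minimal sketch using the NSX-T Policy API could look like the calls below; the Tier-0 ID (t0-provider), the locale services ID (default), the ASNs and the neighbor address are placeholders, and the exact payloads may differ between NSX-T versions:

PATCH https://{{nsxt-manager}}/policy/api/v1/infra/tier-0s/t0-provider/locale-services/default/bgp
{
    "enabled": true,
    "ecmp": true,
    "local_as_num": "65100"
}

PATCH https://{{nsxt-manager}}/policy/api/v1/infra/tier-0s/t0-provider/locale-services/default/bgp/neighbors/upstream-rtr-1
{
    "neighbor_address": "192.0.2.1",
    "remote_as_num": "65000"
}

With the neighbors established, the used IPs from any external network subnet assigned in VCD get advertised upstream as described above.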

An alternative is to use a design similar to the NSX-V port group approach, where the Tier-0 uplinks are directly connected to the external subnet port group. This can be useful when transitioning from NSX-V to NSX-T, where there is a need to retain routability between NSX-V ESGs and NSX-T Tier-1 GWs on the same external network.

The picture below shows that the Tier-0/VRF has uplinks directly connected to the external network and a static route towards the internet. The Tier-0 will proxy ARP requests for external IPs that are allocated and used by connected Tier-1 GWs.

Tier-0 with Proxy ARP

The disadvantage of this option is that you waste public IP addresses on the Tier-0 uplinks and router interfaces for each subnet you assign.

Note: Proxy ARP is supported only if the Tier-0/VRF is in Active/Standby mode.

Tenant Dedicated External Network

If the tenant requires a direct link via MPLS or a similar technology, this is accomplished by creating a tenant dedicated external network. With an NSX-V backed Org VDC this is represented by a dedicated VLAN backed port group, with an NSX-T backed Org VDC it would be a dedicated Tier-0/VRF. Both will provide connectivity to the MPLS router. With NSX-V the ESG would run BGP, with NSX-T the BGP would have to be configured on the Tier-0. In VCD the NSX-T backed Org VDC Edge Gateway can be explicitly enabled in the dedicated mode, which gives the tenant (and also the provider) the ability to configure Tier-0 BGP.

There are separate rights for BGP neighbor configuration and route advertisement, so the provider can keep the BGP neighbor configuration as a provider managed setting.

Note that only one Org VDC Edge GW can be connected to a Tier-0/VRF in the explicit dedicated mode. In case the tenant requires more Org VDC Edge GWs connected to the same (dedicated) Tier-0/VRF, the provider will not enable the dedicated mode and will instead manage BGP directly in NSX-T (as a managed service).
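For reference, in the dedicated mode the Tier-0 BGP configuration is surfaced through the Org VDC Edge Gateway routing endpoints of the OpenAPI. A hedged sketch of such a call is below; the endpoint path and property names are assumptions modeled on the /1.0.0/edgeGateways pattern used elsewhere in this post and may differ between API versions, and the gateway URN is a placeholder:

PUT https://{{host}}/cloudapi/1.0.0/edgeGateways/urn:vcloud:gateway:xxxx/routing/bgp
{
    "enabled": true,
    "localASNumber": 65201
}

BGP neighbors and route advertisement would then be managed under similar sub-resources, subject to the separate rights mentioned above.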

A frequently used scenario is when the provider directly connects an Org VDC network to such a dedicated external network without using an Org VDC Edge GW. This is however currently not possible in an NSX-T backed PVDC. Instead, you will have to import an Org VDC network backed by an NSX-T logical segment (overlay or VLAN).

Internet with MPLS

The last case I want to describe is when the tenant wants to access both the Internet and MPLS via the same Org VDC Edge GW. In an NSX-V backed Org VDC this is accomplished by attaching the internet and dedicated external network port groups to the ESG uplinks and leveraging static or dynamic routing there. In an NSX-T backed Org VDC the provider will have to provision a Tier-0/VRF that has transit uplinks both to MPLS and to the Internet. The external (Internet) subnet will be assigned to this Tier-0/VRF with a small IP Pool for IP allocation that should not clash with any other IP Pools.

If the tenant has the route advertisement right assigned, then route filters should be set on the Tier-0/VRF uplinks to allow only the correct prefixes to be advertised towards the Internet or MPLS. The route filters can be configured either in NSX-T directly or in VCD (if the Tier-0 is explicitly dedicated).

The diagram below shows an example of an Org VDC that has two Org VDC Edge GWs, each having access to the Internet and MPLS. Org VDC GW 1 is using a static route to MPLS VPN B and also has the MPLS transit network accessible as an imported Org VDC network, while Org VDC GW 2 is using BGP to MPLS VPN A. Connectivity to the internet is provided by another layer of NSX-T Tier-0 GW, which allows usage of overlay segments as VRF uplinks and does not waste physical VLANs.

One comment on the usage of NAT in such a design. Usually the tenant wants to source NAT only towards the Internet but not towards the MPLS. In an NSX-V backed Org VDC Edge GW this is easily set on a per uplink interface basis. However, that option is not possible on a Tier-1 backed Org VDC Edge GW as it has only one transit towards the Tier-0/VRF. Instead, a NO SNAT rule with a destination must be used in conjunction with the SNAT rule.

An example:

NO SNAT: internal 10.1.1.0/22 destination 10.1.0.0/16
SNAT: internal 10.1.1.0/22 translated 80.80.80.134

The above example will source NAT the 10.1.1.0/22 network only towards the internet.
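The same pair of rules can be created programmatically against the Org VDC Edge Gateway NAT endpoint. A rough sketch follows; the gateway URN is a placeholder and the property names are assumptions that may vary slightly between API versions:

POST https://{{host}}/cloudapi/1.0.0/edgeGateways/urn:vcloud:gateway:xxxx/nat/rules
{
    "name": "no-snat-to-mpls",
    "ruleType": "NO_SNAT",
    "enabled": true,
    "internalAddresses": "10.1.1.0/22",
    "snatDestinationAddresses": "10.1.0.0/16"
}

POST https://{{host}}/cloudapi/1.0.0/edgeGateways/urn:vcloud:gateway:xxxx/nat/rules
{
    "name": "snat-to-internet",
    "ruleType": "SNAT",
    "enabled": true,
    "internalAddresses": "10.1.1.0/22",
    "externalAddresses": "80.80.80.134"
}

Assuming the NO SNAT rule takes precedence, traffic destined to 10.1.0.0/16 (MPLS) leaves untranslated while everything else is source NATed to the public address.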

New Networking Features in VMware Cloud Director 10.2

From a networking perspective the 10.2 release of VMware Cloud Director was a massive one. The NSX-V vs NSX-T gap has been closed and in some cases NSX-T backed Org VDCs now provide more networking functionality than the NSX-V backed ones. The UI has been redesigned with new dedicated Networking sections; however, some new features are currently available only via API.
Let me dive straight in so you do not miss any.

NSX-T Advanced Load Balancing (Avi) support

This is a big feature that requires its own blog post. Please read here. In short, NSX-T backed Org VDCs can now consume network load balancer services that are provided by the new NSX-T ALB / Avi.

Distributed Firewall and Data Center Groups

Another big feature combines Cross VDC networking, shared networks and distributed firewall (DFW) functionality. The service provider first must create a Compute Provider Scope. This is basically a tag, an abstraction of compute fault domains / availability zones, and is defined either at the vCenter Server level or at the Provider VDC level.

The same can be done for each NSX-T Manager where you would define Network Provider Scope.

Once that is done, the provider can create Data Center Group(s) for a particular tenant. This is done from the new networking UI in the Tenant portal by selecting one or multiple Org VDCs. The Data Center Group will now become a routing domain with networks spanning all Org VDCs that are part of the group, with a single egress point (Org VDC Gateway) and the distributed firewall.
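Under the hood this maps to the new vdcGroups OpenAPI endpoint. Below is a rough, hedged sketch of the call; the URNs are placeholders and the payload is illustrative rather than exhaustive (additional references such as orgRef or siteRef may be required depending on the API version):

POST https://{{host}}/cloudapi/1.0.0/vdcGroups
{
    "name": "tenant1-dc-group",
    "orgId": "urn:vcloud:org:xxxx",
    "type": "LOCAL",
    "networkProviderType": "NSX_T",
    "participatingOrgVdcs": [
        { "vdcRef": { "id": "urn:vcloud:vdc:xxxx" } },
        { "vdcRef": { "id": "urn:vcloud:vdc:yyyy" } }
    ]
}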

Routed networks will automatically be added to a Data Center Group if they are connected to the group Org VDC Edge Gateway. Isolated networks must be added explicitly. An Org VDC can be a member of multiple Data Center Groups.

If you want the tenant to use DFW, it must be explicitly enabled and the tenant Organization has to have the correct rights. The DFW supports IP Sets and Security Groups containing network objects that apply rules to all connected VMs.
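For orientation, a DFW rule on the Data Center Group default policy could look roughly like the sketch below. Treat the endpoint path and the field names as assumptions (they mirror the edge gateway firewall rule model) and the URNs as placeholders:

PUT https://{{host}}/cloudapi/1.0.0/vdcGroups/urn:vcloud:vdcGroup:xxxx/dfwPolicies/default/rules
{
    "values": [
        {
            "name": "web-to-db",
            "enabled": true,
            "action": "ALLOW",
            "ipProtocol": "IPV4",
            "sourceFirewallGroups": [ { "id": "urn:vcloud:firewallGroup:aaaa" } ],
            "destinationFirewallGroups": [ { "id": "urn:vcloud:firewallGroup:bbbb" } ]
        }
    ]
}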

Note that only one Org VDC Edge Gateway can be added to the Data Center Group. This is due to the limitation that an NSX-T logical segment can be attached and routed only via a single Tier-1 GW. The Tier-1 GW is in active / standby mode and can theoretically span multiple sites, but only a single instance is active at a time (no multi-egress).

VRF-Lite Support

VRF-Lite is an object that allows slicing a single NSX-T Tier-0 GW into up to 100 independent virtual routing instances. Lite means that while these instances are very similar to the real Tier-0 GW, they support only a subset of its features: routing, firewalling and NATing.

In VCD, when a tenant requires direct connectivity to on-prem WAN/MPLS with fully routed networks (instead of just NAT-routed ones), the provider in the past had to dedicate a whole external network backed by a Tier-0 GW to such a tenant. Now the same can be achieved with a VRF, which greatly enhances the scalability of the feature.

There are some limitations:

  • VRF inherits its parent Tier-0 deployment mode (HA A/A vs A/S, Edge Cluster), BGP local ASN and graceful restart setting
  • all VRFs share their parent’s physical uplink bandwidth
  • VRF uplinks and peering with upstream routers must be individually configured by utilizing VLANs from a VLAN trunk or unique Geneve segments (if upstream router is another Tier-0)
  • as an alternative to the previous point, EVPN can be used, which allows a single MP-BGP session for all VRFs and upstream routers with data plane VXLAN encapsulation. Upstream routers obviously must support EVPN.
  • the provider can import into VCD as an external network either the parent Tier-0 GW or its child VRFs, but not both (mixed mode)

IPv6

VMware Cloud Director now supports dual stack IPv4/IPv6 (both for NSX-V and NSX-T backed networks). This must currently be enabled via API version 35, either during network creation or via a PUT on the OpenAPI network object, by specifying:

"enableDualSubnetNetwork": true

In the same payload you also have to add the 2nd subnet definition.

 

PUT https://{{host}}/cloudapi/1.0.0/orgVdcNetworks/urn:vcloud:network:c02e0c68-104c-424b-ba20-e6e37c6e1f73

...
    "subnets": {
        "values": [
            {
                "gateway": "172.16.100.1",
                "prefixLength": 24,
                "dnsSuffix": "fojta.com",
                "dnsServer1": "10.0.2.210",
                "dnsServer2": "10.0.2.209",
                "ipRanges": {
                    "values": [
                        {
                            "startAddress": "172.16.100.2",
                            "endAddress": "172.16.100.99"
                        }
                    ]
                },
                "enabled": true,
                "totalIpCount": 98,
                "usedIpCount": 1
            },
            {
                "gateway": "fd13:5905:f858:e502::1",
                "prefixLength": 64,
                "dnsSuffix": "",
                "dnsServer1": "",
                "dnsServer2": "",
                "ipRanges": {
                    "values": [
                        {
                            "startAddress": "fd13:5905:f858:e502::2",
                            "endAddress": "fd13:5905:f858:e502::ff"
                        }
                    ]
                },
                "enabled": true,
                "totalIpCount": 255,
                "usedIpCount": 0
            }
        ]
    }
...
    "enableDualSubnetNetwork": true,
    "status": "REALIZED",
...

 

The UI will still show only the primary subnet and IP address. The allocation of the secondary IP to a VM must be done either from its guest OS or via automated network assignment (DHCP, DHCPv6 or SLAAC). DHCPv6 and SLAAC are only available for NSX-T backed Org VDC networks, but for NSX-V backed networks you could use IPv6 as the primary subnet (with an IPv6 pool) and IPv4 with DHCP addressing as the secondary.

To enable the IPv6 capability in NSX-T the provider must enable it in the Global Networking Config.
VCD automatically creates ND (Neighbor Discovery) Profiles in NSX-T for each NSX-T backed Org VDC Edge GW. Via the /1.0.0/edgeGateways/{gatewayId}/slaacProfile API the tenant can then set the Edge GW profile either to DHCPv6 or SLAAC. For example:
PUT https://{{host}}/cloudapi/1.0.0/edgeGateways/urn:vcloud:gateway:5234d305-72d4-490b-ab53-02f752c8df70/slaacProfile
{
    "enabled": true,
    "mode": "SLAAC",
    "dnsConfig": {
        "domainNames": [],
        "dnsServerIpv6Addresses": [
            "2001:4860:4860::8888",
            "2001:4860:4860::8844"
        ]
    }
}

And here is the corresponding view from NSX-T Manager:

And finally a view of the deployed VM’s networking stack:

DHCP

Speaking of DHCP, NSX-T supports two modes: Network mode, where the DHCP service is attached directly to a network and needs an IP from that network, and Edge mode, where the DHCP service runs on a Tier-1 GW loopback address. VCD now supports both modes (via API only). The DHCP Network mode will work for isolated networks and is portable with the network (meaning the network can be attached to or disconnected from the Org VDC Edge GW) without DHCP service disruption. However, before you can deploy a DHCP service in Network mode you need to specify a Services Edge Cluster (for Edge mode that is not needed as the service runs on the Tier-1 Edge GW). The cluster definition is done via a Network Profile at the Org VDC level.
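As an example of the API-only configuration, here is a minimal sketch of enabling DHCP in Network mode on an Org VDC network; the network URN is a placeholder and the property names are assumptions that may differ between API versions (for Edge mode no listener IP from the network and no Services Edge Cluster are needed):

PUT https://{{host}}/cloudapi/1.0.0/orgVdcNetworks/urn:vcloud:network:xxxx/dhcp
{
    "enabled": true,
    "mode": "NETWORK",
    "ipAddress": "172.16.100.250",
    "leaseTime": 86400,
    "dhcpPools": [
        {
            "enabled": true,
            "ipRange": {
                "startAddress": "172.16.100.100",
                "endAddress": "172.16.100.199"
            }
        }
    ]
}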

In order to use DHCPv6 the network must be configured in Network mode and attached to an Org VDC Edge GW whose SLAAC profile is configured in DHCPv6 mode.

Other Features

  • vSphere Distributed Switch support for NSX-T segments (also known as Converged VDS), although this feature was already available in VCD 10.1.1+
  • NSX-T IPSec VPN support in UI
  • NSX-T L2VPN support, API only
  • port group backed external networks (used for NSX-V backed Org VDCs) can now have multiple port groups from the same vCenter Server instance (useful if you have vDS per cluster for example)
  • /31 external network subnets are supported
  • Org VDC Edge GW object now supports metadata

NSX-V vs NSX-T Feature Parity

Let me conclude with an updated chart showing a comparison of NSX-V and NSX-T features in VMware Cloud Director 10.2. I have highlighted the new additions in green.

How to Migrate VMware Cloud Director from NSX-V to NSX-T

Update January 28, 2021: Version 1.2 of VMware NSX Migration for VMware Cloud Director has been released with support for VMware Cloud Director 10.2 and its new networking features (load balancing, distributed firewall, VRF) as well as enhancements in migrations of isolated Org VDC networks with DHCP and of multiple Org VDC Edge GWs and external networks. Rollback can now be performed at any point.

VMware Cloud Director as a cloud management solution is built on top of the underlying compute and networking platforms that virtualize the physical infrastructure. For the compute and storage part VMware vSphere was always used. However, the networking platform is more interesting. It all started with vShield Edge which was later rebranded to vCloud Networking and Security, Cisco Nexus 1000V was briefly an option, but currently NSX for vSphere (NSX-V) and NSX-T Data Center are supported.

VMware has announced the sunsetting of NSX-V (the current end of general support is planned for January 2022) and is fully committed going forward to the NSX-T Data Center flavor. The two NSX flavors, while similar, are completely different products and there is no direct upgrade path from one to the other. So it is natural that all existing NSX-V users are asking how to migrate their environments to NSX-T.

NSX-T Data Center Migration Coordinator has been available for some time, but the way it works is quite destructive for Cloud Director and it cannot be used in such environments.

Therefore, with VMware Cloud Director 10.1, VMware is releasing a compatible migration tool called VMware NSX Migration for VMware Cloud Director.

The philosophy of the tool is the following:

  • Enable granular migration of tenant workloads and networking at Org VDC granularity with minimum downtime from NSX-V backed Provider VDC (PVDC) to NSX-T backed PVDC.
  • Check and allow migration of only supported networking features
  • Evolve with new releases of NSX-T and Cloud Director

In other words, it is not an in-place migration. The provider will need to stand up new NSX-T backed cluster(s) next to the NSX-V backed ones in the same vCenter Server. Also, the current NSX-T feature set in Cloud Director is not equivalent to the NSX-V one. Therefore there are networking features that in principle cannot be migrated. For a comparison of the NSX-V and NSX-T Cloud Director feature sets see the table at the end of this blog post.

The service provider will thus need to evaluate which Org VDCs can be migrated today based on the existing limitations and functionality. Start with the simple Org VDCs and, as new releases are provided, migrate the rest.

How does the tool work?

  • It is a Python based CLI tool that is installed and run by the system administrator. It uses public APIs to communicate with Cloud Director, NSX-T and vCenter Server to perform the migrations.
  • The environment must be prepared in such a way that an NSX-T backed PVDC exists in the same vCenter Server as the source NSX-V PVDC and that their external networks are equivalent at the infrastructure level, as existing external IP addresses are seamlessly migrated as well.
  • The service provider defines which source Org VDC (NSX-V backed) is going to be migrated and what the target Provider VDC (NSX-T backed) is.
  • The service provider must prepare dedicated NSX-T Edge Cluster whose sole purpose is to perform Layer-2 bridging of source and destination Org VDC networks. This Edge Cluster needs one node for each migrated network and must be deployed in the NSX-V prepared cluster as it will perform VXLAN port group to NSX-T Overlay (Geneve) Logical Segment bridging.
  • When the tool is started, it will first discover the source Org VDC feature set and assess if there are any incompatible (unsupported) features. If so, the migration will be halted.
  • Then it will create the target Org VDC and start cloning the network topology, establish bridging, disconnect target networks and run basic network checks to see if the bridges work properly. If not then roll-back is performed (bridges and target Org VDC are destroyed).
  • In the next step the north / south networking flow will be reconfigured to flow through the target Org VDC. This is done by disconnecting the source networks from the gateway and reconnecting the target ones. During this step brief N/S network disruption is expected. Also notice that the source Org VDC Edge GW needs to be connected temporarily to a dummy network as NSX-V requires at least one connected interface on the Edge at all times.
  • Each vApp is then vMotioned from the source Org VDC to the target one. As this is live vMotion no significant network/compute disruption is expected.
  • Once the provider verifies the correct functionality of the target VDC she can manually trigger the cleanup step that migrates source catalogs, destroys bridges and the source Org VDC and renames the target Org VDC.
  • Rinse and repeat for the other Org VDCs.

Please make sure you read the release notes and user guide for the list of supported solutions and features. The tool will rapidly evolve – short roadmap already includes pre-validation and roll-back features. You are also encouraged to provide early feedback to help VMware decide how the tool should evolve.

VMware Cloud Director 10.1: NSX-T Integration

This is an updated blog post of the original vCloud Director 10: NSX-T Integration to include all VMware Cloud Director 10.1 related updates.

Intro

VMware Cloud Director relies on the NSX network virtualization platform to provide on-demand creation and management of networks and networking services. NSX for vSphere has been supported for a long time and vCloud Director allows most of its features to be used by its tenants. However, as VMware slowly shifts away from NSX for vSphere and pushes forward the modern, fully rewritten NSX-T networking platform, I want to focus in this article on its integration with vCloud Director.

History

Let me start with highlighting that NSX-T is evolving very quickly. It means each release (now at version 3.0) adds major new functionality. Contrast that with NSX-V, which is essentially feature complete in the sense that no major functionality change is happening there. The fast pace of NSX-T development is a challenge for any cloud management platform, as it has to play the catch up game.

The first release of vCloud Director that supported NSX-T was 9.5. It supported only NSX-T version 2.3 and the integration was very basic. All vCloud Director could do was import NSX-T overlay logical segments (virtual networks) created manually by the system administrator. These networks were imported into a specific tenant Org VDC as Org VDC networks.

The next version of vCloud Director, 9.7, supported only NSX-T 2.4 and from the feature perspective not much had changed. You could still only import networks. Under the hood, however, the integration used a completely new set of NSX-T policy based APIs and there were some minor UI improvements in registering the NSX-T Manager.

vCloud Director version 10 for the first time introduced on-demand creation of NSX-T based networks and network services. NSX-T version 2.5 was required.

The latest Cloud Director version 10.1 is extending NSX-T support with new features.

Note: Cloud Director 10.1.0 does not support NSX-T 3.0. That support will come in the next patch release (10.1.1).

NSX-T Primer

While I do not want to go too deep into the actual NSX-T architecture, I fully expect that not all readers of this blog are fully familiar with NSX-T and how it differs from NSX-V. Let me quickly highlight the major points that are relevant for the topic of this blog post.

  • NSX-T is vCenter Server independent, which means it scales independently from the vCenter domain. NSX-T essentially communicates with ESXi hosts directly (they are called host transport nodes). The hosts must be prepared with NSX-T vibs that are incompatible with NSX-V, which means a particular host cannot be used by NSX-V and NSX-T at the same time.
  • Overlay virtual networks use the Geneve encapsulation protocol, which is incompatible with VXLAN. The concepts of a Controller cluster that keeps state and of transport zones are very similar to NSX-V. The independence from VC mentioned in the previous point means the vSphere distributed switch cannot be used; instead NSX-T brings its own N-VDS switch. It also means that there is a concept of underlay (VLAN) networks managed by NSX-T. All overlay and underlay networks managed by NSX-T are called logical segments.
  • Networking services (such as routing, NATing, firewalling, DNS, DHCP, VPN, load balancing) are provided by Tier-0 or Tier-1 Gateways that are functionally similar to NSX-V ESGs but are not instantiated as dedicated VMs. Instead they are services running on a shared Edge Cluster. The meaning of Edge Cluster is very different from its usage in the NSX-V context. An Edge Cluster is not a vSphere cluster; instead it is a cluster of Edge Transport Nodes where each Edge Node is a VM or a bare metal host.
  • While T0 and T1 Gateways are similar they are not identical, and each has a specific purpose or set of services it can offer. Distributed routing is implicitly provided by the platform unless a stateful networking service requires routing through a single point. T1 GWs are usually connected to a single T0 GW and that connection is managed automatically by NSX-T.
  • Typically you would have one or a small number of T0 GWs in ECMP mode providing north-south routing (the concept of a Provider Edge) and a large number of T1 GWs connected to a T0 GW, each for a different tenant to provide tenant networking (the concept of a Tenant Edge).

VMware Cloud Director Integration

As mentioned above since NSX-T is not vCenter Server dependent, it is attached to Cloud Director independently from VC.

(Geneve) network pool creation is the same as with VXLAN – you provide mapping to an existing NSX-T overlay transport zone.


Now you can create a Provider VDC (PVDC) which is, as usual, mapped to a vSphere cluster or resource pool. A particular cluster used by a PVDC must be prepared for NSX-V or NSX-T and all clusters must share the same NSX flavor. It means you cannot mix NSX-V clusters with NSX-T ones in the same PVDC. However, you can easily have NSX-V and NSX-T in the same vCenter Server; you will then just have to create multiple PVDCs. Although NSX-T can span VCs, a PVDC cannot – that limitation still remains. When creating an NSX-T backed PVDC you will have to specify the Geneve Network Pool created in the previous step.

Within PVDC you can start creating Org VDCs for your tenants – no difference there.

Org VDCs without routable networks are not very useful. To remedy this we must create external networks and Org VDC Edge Gateways. Here the concept differs quite a bit from NSX-V. Although you could deploy provider ECMP Edges with NSX-V as well (and I described here how to do so), it is mandatory with NSX-T. You will have to pre-create a T0 GW in NSX-T Manager (ECMP active-active is recommended). This T0 GW will provide external networking access for your tenants and should be routable from the internet. Instead of just importing an external network port group as you would do with NSX-V, you will import the whole T0 GW into Cloud Director.

During the import you will also have to specify IP subnets and pools that the T0 GW can use for IP sub-allocation to tenants.

Once the external network exists you can create tenant Org VDC Edge Gateways. The service provider can pick a specific existing NSX-T Edge Cluster for their placement.

T1 GWs are always deployed in Active/Standby configuration; the placement of the active node is automated by NSX-T. The router interlink between T0 and T1 GWs is also created automatically by NSX-T. It is possible to disconnect an Org VDC Edge GW from the Tier-0 GW (this is for example used in the NSX-V to NSX-T migration scenario).

During the Org VDC Edge Gateway creation the service provider also allocates a range of IPs from the external network. Whereas with NSX-V these would actually be assigned to the Org VDC Edge Gateway uplink, this is not the case with NSX-T. Once they are actually used in a specific T1 NAT rule, NSX-T will automatically create a static route on the T0 GW and start routing to the correct T1 GW.

Tenant Networks

There are four major types of NSX-T based Org VDC networks; three of them can be created via the UI:

  • Isolated: Layer 2 segment not connected to T1 GW. DHCP service is not available on this network (contrary to NSX-V implementation).
  • Routed: Network that is connected to a T1 GW. The default is NAT-routed, which means its subnet is not announced to the upstream T0 GW and the only way to reach it from the outside is to use a DNAT rule on the T1 GW with an allocated external IP address.
    Cloud Director version 10.1 introduces a fully routed network option; more on it below.
  • Imported: Existing NSX-T overlay logical segment can be imported (same as in VCD 9.7 or 9.5). Its routing/external connectivity must be managed outside of vCloud Director.
  • In OpenAPI (POST /1.0.0/OrgVdcNetwork) you will find one more network type: DIRECT_UPLINK. This is for a specific NFV use case. Such a network is connected directly to the T0 GW with an external interface. Note this feature is not officially supported!

Note that only Isolated and routed networks can be created by tenants.
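To illustrate, a NAT-routed Org VDC network could be created via the OpenAPI roughly as follows; the URNs are placeholders and the property names are assumptions modeled on the orgVdcNetworks payload format, so the exact schema may differ per API version:

POST https://{{host}}/cloudapi/1.0.0/orgVdcNetworks
{
    "name": "tenant-web-net",
    "networkType": "NAT_ROUTED",
    "orgVdc": { "id": "urn:vcloud:vdc:xxxx" },
    "connection": {
        "routerRef": { "id": "urn:vcloud:gateway:yyyy" },
        "connectionType": "INTERNAL"
    },
    "subnets": {
        "values": [
            {
                "gateway": "192.168.10.1",
                "prefixLength": 24,
                "ipRanges": {
                    "values": [
                        { "startAddress": "192.168.10.10", "endAddress": "192.168.10.99" }
                    ]
                }
            }
        ]
    }
}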

In the direct connect use case it is desirable to announce routed Org VDC networks upstream so workloads are reachable directly without any NAT. This is possible in Cloud Director version 10.1, but it requires a dedicated Tier-0 GW for the particular tenant. The provider must create a new Tier-0, connect it to the tenant’s particular direct connect transit VLAN and then, when deploying the Org VDC Edge GW, select the Dedicate External Network switch.

Cloud Director will make sure that the dedicated External Network Tier-0 GW is not accessible to any other Org VDC Edge Gateway.

The tenant can then configure BGP routing on its Org VDC Edge GW, which is in fact set by Cloud Director on the dedicated Tier-0 GW (while Tier-0 to Tier-1 routes are auto-plumbed by NSX).

Tenant Networking Services

Currently the following T1 GW networking services are available to tenants:

  • Firewall (with IP Sets and Security Groups based on network objects)
  • NAT
  • DHCP (without binding and relay)
  • DNS forwarding
  • IPSec VPN: policy based with a pre-shared key is supported.

All other services are currently not supported. This might be due to NSX-T not having them implemented yet, or Cloud Director not catching up yet. Expect big progress here with each new Cloud Director and NSX-T release.

Networking API

All NSX-T related features are available in the Cloud Director OpenAPI (CloudAPI). The pass-through API approach that you might be familiar with from the Advanced Networking NSX-V implementation is not used!
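If you want to explore these endpoints yourself, the flow is a standard bearer token authentication against CloudAPI. A minimal sketch; the host and credentials are placeholders and API version 34.0 corresponds to Cloud Director 10.1:

POST https://{{host}}/cloudapi/1.0.0/sessions/provider
Authorization: Basic <base64 of administrator@System:password>
Accept: application/json;version=34.0

The X-VMWARE-VCLOUD-ACCESS-TOKEN response header then carries the bearer token for subsequent calls, for example:

GET https://{{host}}/cloudapi/1.0.0/edgeGateways
Authorization: Bearer <token>
Accept: application/json;version=34.0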

Feature Comparison

I have summarized all Cloud Director networking features in the following table for quick comparison between NSX-V and NSX-T.

Automate Let’s Encrypt Certificates – Part 2

Some time ago I blogged about how I automate the acquisition of Let’s Encrypt certificates for my lab (NSX + vCloud Director) with PowerShell. The old script no longer works due to some changes on the Let’s Encrypt side, hence the need for part 2.

To quickly summarize my situation: my lab consists of vCloud Director with multiple cells fronted by an NSX-V Load Balancer. I need a public certificate for vCloud Director which is uploaded to the NSX-V Load Balancer (that does L7 SSL termination) and to the vCloud Director public addresses.

Prerequisites:

  • Web server on the domain you are getting the certificate for. It is necessary for the HTTP challenge that proves you own the domain you are requesting the certificate for. I am using IIS on the machine I trigger the script from and supply the root folder where the challenge file needs to be placed.
  • NSX-V API access information – needed to replace the certificate on the NSX Edge
  • Details about the load balancer (on which Edge it is running and what is the LB application profile of vCloud Director)
  • vCloud Director API access information – needed to upload the new certificate and the full chain to the vCloud Director public addresses.
  • PowerShell modules: Posh-ACME and PowerCLI

$Username = "admin"
$Password = "default"
$NSXManager = "nsx01.fojta.com"
$LBEdge = 'edge-1'
$ApplicationProfile = 'applicationProfile-1'
$Email = "mailto:admin@fojta.com"
$Domain = "vcloud.fojta.com"
$Vcd = "vcloud.fojta.com"
$VcdAdmin = "administrator"
$VcdPassword = "vcloud"
$IisAcmeRoot = "C:\inetpub\wwwroot\.well-known\acme-challenge"

$RootCert = "-----BEGIN CERTIFICATE-----
MIIDSjCCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/
MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT
DkRTVCBSb290IENBIFgzMB4XDTAwMDkzMDIxMTIxOVoXDTIxMDkzMDE0MDExNVow
PzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMRcwFQYDVQQD
Ew5EU1QgUm9vdCBDQSBYMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AN+v6ZdQCINXtMxiZfaQguzH0yxrMMpb7NnDfcdAwRgUi+DoM3ZJKuM/IUmTrE4O
rz5Iy2Xu/NMhD2XSKtkyj4zl93ewEnu1lcCJo6m67XMuegwGMoOifooUMM0RoOEq
OLl5CjH9UL2AZd+3UWODyOKIYepLYYHsUmu5ouJLGiifSKOeDNoJjj4XLh7dIN9b
xiqKqy69cK3FCxolkHRyxXtqqzTWMIn/5WgTe1QLyNau7Fqckh49ZLOMxt+/yUFw
7BZy1SbsOFU5Q9D8/RhcQPGX69Wam40dutolucbY38EVAjqr2m7xPi71XAicPNaD
aeQQmxkqtilX4+U9m5/wAl0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNV
HQ8BAf8EBAMCAQYwHQYDVR0OBBYEFMSnsaR7LHH62+FLkHX/xBVghYkQMA0GCSqG
SIb3DQEBBQUAA4IBAQCjGiybFwBcqR7uKGY3Or+Dxz9LwwmglSBd49lZRNI+DT69
ikugdB/OEIKcdBodfpga3csTS7MgROSR6cz8faXbauX+5v3gTt23ADq1cEmv8uXr
AvHRAosZy5Q6XkjEGB5YGV8eAlrwDPGxrancWYaLbumR9YbK+rlmM6pZW87ipxZz
R8srzJmwN0jP41ZL9c8PDHIyh8bwRLtTcm1D9SZImlJnt1ir/md2cXjbDaJWFBM5
JDGFoqgCWjBH4d1QB7wCCZAA62RjYJsWvIjJEubSfZGL+T0yjWW06XyxV3bqxbYo
Ob8VZRzI9neWagqNdwvYkQsEjgfbKbYK7p2CNTUQ
-----END CERTIFICATE-----
"

#Set-PAServer LE_STAGE
Set-PAServer LE_PROD

## Read https://github.com/rmbolger/Posh-ACME/wiki/%28Advanced%29-Manual-HTTP-Challenge-Validation

New-PAAccount -AcceptTOS -Contact $Email
New-PAOrder $Domain

$auths = Get-PAOrder | Get-PAAuthorizations
$token = $auths[0].HTTP01Token
$toPublish = Get-KeyAuthorization $token

## Upload challenge file to the IIS web server
New-Item -Path $IisAcmeRoot -Name $token -Value $toPublish

$auths.HTTP01Url | Send-ChallengeAck
New-PACertificate $Domain
$cert = Get-PACertificate

$IssuerCert = [IO.File]::ReadAllText($cert.ChainFile)
$PrivateKey = [IO.File]::ReadAllText($cert.KeyFile)
$LBCertificate = [IO.File]::ReadAllText($cert.CertFile)

## Create authorization string and store in $head
$auth = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($Username + ":" + $Password))
$head = @{"Authorization"="Basic $auth"}

##Upload certificate
$Uri = "https://$NSXManager/api/2.0/services/truststore/certificate/" + $LBEdge
$Body = "
<trustObject>
<pemEncoding>" + $LBCertificate + $IssuerCert + $RootCert + "</pemEncoding>
<privateKey>" + $PrivateKey + "</privateKey>
<description>vCloud Certificate</description>
</trustObject>"
$r = Invoke-WebRequest -URI $Uri -Method Post -Headers $head -ContentType "application/xml" -Body $Body -ErrorAction:Stop
$NewCertificateId = ([xml]$r).certificates.certificate.objectId

##Delete Root and intermediate certificate from the Edge as they are not needed
$Uri = "https://$NSXManager/api/2.0/services/truststore/certificate/" + $NewCertificateId[0]
$r = Invoke-WebRequest -URI $Uri -Method Delete -Headers $head -ContentType "application/xml" -ErrorAction:Stop
$Uri = "https://$NSXManager/api/2.0/services/truststore/certificate/" + $NewCertificateId[1]
$r = Invoke-WebRequest -URI $Uri -Method Delete -Headers $head -ContentType "application/xml" -ErrorAction:Stop

##Replace certificate in the application profile
$Uri = "https://$NSXManager/api/4.0/edges/" + $LBEdge + "/loadbalancer/config/applicationprofiles/" + $ApplicationProfile
$r = Invoke-WebRequest -URI $Uri -Method Get -Headers $head -ContentType "application/xml" -ErrorAction:Stop
[xml]$sxml = $r.Content
$OldCertificateId = $sxml.applicationProfile.clientSsl.serviceCertificate
$sxml.applicationProfile.clientSsl.serviceCertificate = $NewCertificateId[2]
$r = Invoke-WebRequest -Uri $Uri -Method Put -Headers $head -ContentType "application/xml" -Body $sxml.OuterXML -ErrorAction:Stop

##Delete old certificate from the Edge
$Uri = "https://$NSXManager/api/2.0/services/truststore/certificate/" + $OldCertificateId
$r = Invoke-WebRequest -URI $Uri -Method Delete -Headers $head -ContentType "application/xml" -ErrorAction:Stop

##Update vCloud Director with new certificates

$VcdSession = Connect-CIServer $Vcd -User $VcdAdmin -Password $VcdPassword

$Uri = "https://"+$Vcd+"/api/admin/extension/settings/general"
$head = @{"x-vcloud-authorization"=$VcdSession.SessionSecret} + @{"Accept"="application/*;version=33.0"}
$r = Invoke-WebRequest -URI $Uri -Method Get -Headers $head -ErrorAction:Stop
[xml]$sxml = $r.Content

$sxml.GeneralSettings.RestApiBaseUriPublicCertChain = $LBCertificate + $IssuerCert + $RootCert
$sxml.GeneralSettings.SystemExternalAddressPublicCertChain = $LBCertificate + $IssuerCert + $RootCert

$r = Invoke-WebRequest -URI $Uri -Method Put -Headers $head -ContentType "application/vnd.vmware.admin.generalSettings+xml" -Body $sxml.OuterXML -ErrorAction:Stop