How to Scale Up NSX Advanced Load Balancer Cloud In VCD?

VMware Cloud Director relies on NSX Advanced Load Balancer (Avi) integration to offer load balancing as a service to tenants – more on that here. This article discusses a particular workaround to scale above the current NSX-ALB NSX-T Cloud limits, which are 128 Tier-1 objects per NSX-T Cloud and 300 Tier-1 objects per Avi Controller cluster. As you can have up to 1000 Tier-1 objects in a single NSX-T instance, could we have a load balancing service on each of them?

Yes, but let’s first recap the integration facts.

  • VCD integrates with NSX-T via registration of NSX-T Managers
  • The next step is the creation of a VCD Network Pool by mapping it to a Geneve Transport Zone provided by the above NSX-T Managers
  • Individual Org VDCs consume NSX-T resources for network creation based on assigned VCD Network Pool
  • VCD integrates with NSX ALB via registration of Avi Controller Cluster
  • NSX ALB integrates with NSX-T via the NSX-T Cloud construct that is defined by an NSX-T Manager, an Overlay Transport Zone and a vCenter Server
  • NSX-T Cloud is imported into VCD and matched with an existing Network Pool based on the same NSX-T Manager/Overlay Transport Zone combination
  • Service Engine Groups are imported from available NSX-T Clouds to be used as templates for resources to run tenant-consumed LB services and are assigned to Edge Gateways

Until VCD version 10.4.2 you could map only a single NSX-T Cloud to a Geneve Network Pool. However, with VCD 10.5 and 10.4.2.2 and newer you can map multiple NSX-T Clouds to the same Geneve Network Pool (even coming from different Avi Controller clusters). This essentially allows you to have more than 128 Tier-1 load balancing enabled GWs per such Network Pool, and with multiple NSX ALB instances you could scale all the way to 1000 Tier-1 GWs (which implies at least eight NSX-T Clouds spread across at least four Avi Controller clusters).

The issue is that VCD currently is not smart enough to pick the most suitable NSX-T Cloud for placement from a capacity perspective. The only logic VCD uses is prioritization based on the alphabetical ordering of NSX-T Clouds in the list. So it is up to the service provider to make sure that the NSX-T Cloud with the most available capacity is at the top.

As can be seen above, I used A- and B- prefixed names to change the prioritization. Note that the UI does not allow the name edit; an API PUT call must be used instead.

Note: The assignable Service Engine Groups depend on whether the Edge Gateway (Tier-1) has already been assigned to a particular NSX-T Cloud or not. The API endpoint will reflect that:
https://{{host}}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups?filter=(_context==<gateway URN>;_context==assignable)
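For illustration, a minimal curl sketch of that query (assuming you already have a bearer session token; {{host}}, the token and the gateway URN are placeholders, and the API version in the Accept header should match your VCD release):

curl -k -s -H 'Accept: application/json;version=37.0' -H 'Authorization: Bearer <session token>' 'https://{{host}}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups?filter=(_context==<gateway URN>;_context==assignable)'

The response then lists only the Service Engine Groups that are still assignable to that particular Edge Gateway.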

Lastly, I want to state that the above is considered a workaround and better capacity management will be addressed in future VCD releases.

New Networking Features in VMware Cloud Director 10.4.1

The recently released new version of VMware Cloud Director 10.4.1 brings quite a lot of new features. In this article I want to focus on those related to networking.

External Networks on Org VDC Gateway (NSX-T Tier-1 GW)

External networks that are NSX-T segment backed (VLAN or overlay) can now be connected directly to the Org VDC Edge Gateway and not routed through the Tier-0 or VRF the Org VDC GW is connected to. This connection is done via a service interface (aka Centralized Service Port – CSP) on the service node of the Tier-1 GW that is backing the Org VDC Edge GW. The Org VDC Edge GW still needs a parent Tier-0/VRF (now called Provider Gateway), although it can be disconnected from it.

What are some of the use cases for such direct connection of the external network?

  • routed connectivity via dedicated VLAN to tenant’s co-location physical servers
  • transit connectivity across multiple Org VDC Edge Gateways to route between different Org VDCs
  • service networks
  • MPLS connectivity to direct connect while internet is accessible via shared provider gateway

The connection is configured by the system administrator. It can use only a single subnet from which multiple IPs can be allocated to the Org VDC Edge GW. One is configured directly on the interface while the others are just routed through when used for NAT or other services.

If the external network is backed by a VLAN segment, it can be connected to only one Org VDC Edge GW. This is due to an NSX-T limitation that a particular edge node can use a VLAN segment only for a single logical router instantiated on that edge node – and as VCD does not give you the ability to select particular edge nodes for instantiation of Org VDC Edge Tier-1 GWs, it simply will not allow you to connect such an external network to multiple Edge GWs. If you are sharing the Edge Cluster also with Tier-0 GWs, make sure that they do not use the same VLAN for their uplinks either. Typically you would use VLAN for the co-location or MPLS direct connect use case.

An overlay (Geneve) backed external network has no such limitation, which makes it a great fit for connectivity across multiple GWs for transits or service networks.

The NSX-T Tier-1 GW does not provide any dynamic routing capabilities, so routing to such a network can be configured only via static routes. Also note that the Tier-1 GW always has a default route (0.0.0.0/0) pointing towards its parent Tier-0/VRF GW. Therefore, if you want to set the default route to the segment backed external network, you need to use two more specific routes. For example:

0.0.0.0/1 next hop <MPLS router IP> scope <external network>
128.0.0.0/1 next hop <MPLS router IP> scope <external network>

Slightly related to this feature is the ability to scope gateway firewall rules to a particular interface. This is done via the Applied To field:

You can select any CSP interface on the Tier-1 GW (used by the external network or non-distributed Org VDC networks) or nothing, which means the rule will be applied to the uplink to Tier-0/VRF as well as to any CSP interface.

IP Spaces

IP Spaces is a big feature that will be delivered across multiple releases, where 10.4.1 is the first one. In short, IP Spaces are a new VCD object that allows managing individual IPs (floating IPs) and subnets (prefixes) independently across multiple networks/gateways. The main goal is to simplify the management of public IPs or prefixes for the provider as well as private subnets for the tenants; additional benefits are being able to route on a shared Provider Gateway (Tier-0/VRF), use the same dedicated parent Tier-0/VRF for multiple Org VDC Edge GWs, or the ability to re-connect Org VDC Edge GWs to a different parent Provider Gateway.

Use cases for IP Spaces:

  • Self-service for request of IPs/prefixes
  • routed public subnet for Org VDC network on shared Provider Gateway (DMZ use case, where VMs get public IPs with no NATing performed)
  • IPv6 routed networks on a shared Provider Gateway
  • tenant dedicated Provider Gateway used by multiple tenant Org VDC Edge Gateways
  • simplified management of public IP addresses across multiple Provider Gateways (shared or dedicated) all connected to the same network (internet)

In the legacy way, the system administrator would create subnets at the external network / provider gateway level and then add static IP pools from those subnets for VCD to use. IPs from those IP pools would be allocated to tenant Org VDC Edge Gateways. The IP Spaces mechanism instead creates standalone IP Spaces, which are collections of IP ranges (e.g. 192.168.111.128-192.168.111.255) and blocks of IP prefixes (e.g. two /28 blocks, 192.168.111.0/28 and 192.168.111.16/28, and one /27 block, 192.168.111.32/27).

A particular IP Space is then assigned to a Provider Gateway (NSX-T Tier-0 or VRF GW) as IP Space Uplink:

An IP Space can be used by multiple Provider Gateways.

The tenant Org VDC Edge Gateway connected to such an IP Space enabled provider gateway can then request floating IPs (from IP ranges)

or assign an IP block to a routable Org VDC network, which results in route advertisement for such a network.

If in the above case such a network should also be advertised to the internet, the parent Tier-0/VRF needs to have route advertisement manually configured (see the IP-SPACES-ROUTING policy below), as VCD will not do so (contrary to the NAT/LB/VPN case).

Note: The IP / IP Block assignments are done from the tenant UI. The tenant needs to have the new rights added to see those features in the UI.

The number of IPs or prefixes the tenant can request is managed via a quota system at the IP Space level. Note that the system administrator can always exceed the quota when making the request on behalf of the tenant.

A provider gateway must be designated for IP Spaces use. So you cannot combine the legacy and IP Spaces methods of managing IPs on the same provider gateway. There is currently no way of converting a legacy provider gateway to an IP Spaces enabled one, but such functionality is planned for a future release.

Tenants can create their own private IP Spaces which are used for their Org VDC Edge GWs (and routed Org VDC networks); these are implicitly enabled for IP Spaces. This simplifies the creation of new Org VDC networks, where a unique prefix is automatically requested from the private IP Space. The uniqueness is important as it allows multiple Edge GWs to share the same parent provider gateway.

Transparent Load Balancing

VMware Cloud Director 10.4.1 adds support for Avi (NSX ALB) Transparent Load Balancing. This allows the pool member to see the source IP of the client, which might be needed for certain applications.

The actual implementation in NSX-T and Avi is fairly complicated and described in detail here: https://avinetworks.com/docs/latest/preserve-client-ip-nsxt-overlay/. This is due to the Avi service engine data path and the need to preserve the client source IP while routing the pool member return traffic back to the service engine node.

VCD hides most of the complexity, so to actually enable the service only three steps need to be taken:

  1. Transparent mode must be enabled at the Org VDC GW (Tier-1) level.
  2. The pool members must be created via an NSX-T Security Group (IP Set).
  3. Preserve client IP must be configured on the virtual service.

Due to the data path implementation there are quite a few restrictions when using this feature:

  • the Avi version must be at least 21.1.4
  • the service engine group must be in A/S mode (legacy HA)
  • the LB service engine subnet must have at least 8 IPs available (/28 subnet – 2 for engines, 5 floating IPs and 1 GW) – see the Floating IPs explanation below
  • only in-line topology is supported, meaning a client that accesses the LB VIP without going via the T0 > T1 path will not be able to reach the VS
  • only IPv4 is supported
  • the pool members cannot be accessed via DNAT on the same port as the VS port
  • transparent LB should not be combined with non-transparent LB on the same GW as it will cause the health monitor to fail for the non-transparent LB
  • the pool member NSX-T security group should not be reused for another VS on the same port

Floating IPs

When transparent load balancing is enabled, the return traffic from the pool member cannot be sent directly to the client (source IP) but must go back to the service engine, otherwise asymmetric routing happens and the traffic flows will be broken. This is implemented in NSX-T via an N/S Service Insertion policy where the pool member (defined via security group) return traffic is redirected to the active engine's floating IP instead of to its default GW. Floating IPs are from the service engine network subnet but are not from the DHCP range which assigns service engine nodes their primary IP. VCD will dedicate 5 IPs from the LB service network range for floating IPs. Note that multiple transparent VIPs on the same SEG/service network will share a floating IP.

Control System Admin Access to VMware Cloud Director

When VMware Cloud Director is deployed in a public environment, it is a good practice to restrict system admin access to specific networks so that no brute force attack can be mounted against the publicly available UI/API endpoints.

There is actually a relatively easy way to achieve this via any web application firewall (WAF) with a URI access filter. The strategy is to protect only the provider authentication endpoints, which is much easier than trying to distinguish between provider and tenant URIs.

As the access (attack) can be done either through the UI or the API, the solution should address both. Let us first talk about the UI. The tenants and the provider use specific URLs to access their system/org context, but we do not really need to care about this at all. The UI is actually using (public) APIs, so there is nothing needed to harden the UI specifically if we harden the API endpoint. Well, the OAuth and SAML logins are an exception, so let me tackle them separately.

So how can you authenticate to VCD via API?

Integrated Authentication

The integrated basic authentication consisting of login/password is used for VCD local accounts and LDAP accounts. The system admin (provider context) uses the /cloudapi/1.0.0/sessions/provider API endpoint while the tenants use /cloudapi/1.0.0/sessions.
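
For illustration only, a minimal curl sketch of a provider login ({{host}}, the credentials and the API version are placeholders; the session token is returned in the X-VMWARE-VCLOUD-ACCESS-TOKEN response header):

curl -k -i -X POST -u 'administrator@System:<password>' -H 'Accept: application/json;version=37.0' 'https://{{host}}/cloudapi/1.0.0/sessions/provider'

A tenant user would authenticate the same way against /cloudapi/1.0.0/sessions with user@<org> credentials.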

The legacy (common for both providers and tenants) API endpoint /api/sessions has been deprecated since API version 33.0 (introduced in VCD 10.0). Note that deprecated does not mean removed; it is still available even with API version 36.x, so you can expect it to be around for some time as VCD keeps APIs backward compatible for a few years.

You might notice that in the Feature Flags section there is the possibility to enable “Legacy Login Removal”.

Feature Flags

Enabling this feature will disable the legacy login both for tenants and providers, however only if you use the alpha API version (in the case of VCD 10.3.3.1 it is 37.0.0-alpha-1652216327). So this is really only useful for testing your own tooling where you can force the usage of that particular API version. The UI and any 3rd party tooling will still use the main (supported) API versions where the legacy endpoint will still work.

However, you can forcefully disable it for the provider context for any API version with the following CMT command (run from any cell, no need to restart the services):

/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n vcloud.api.legacy.nonprovideronly -v true

The providers will then need to use only the new /cloudapi/1.0.0/sessions/provider endpoint. So be careful as it might break some legacy tools!
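
If it does and you need to roll back, reverting should (presumably) be just a matter of setting the same property back to false – verify in a test environment first:

/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n vcloud.api.legacy.nonprovideronly -v false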

API Access Token Authentication

This is a fairly new method of authentication to VCD (introduced in version 10.3.1) that uses a once-generated secret token for API authentication. It is mainly used by automation or orchestration tools. The actual method of generating a session token requires access to the tenant or provider OAuth API endpoints:

/oauth/tenant/<tenant_name>/token

/oauth/provider/token
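
As a hedged sketch, exchanging an already generated API access token for a session token in the provider context would look roughly like this ({{host}} and the token are placeholders):

curl -k -s -X POST -H 'Accept: application/json' -H 'Content-Type: application/x-www-form-urlencoded' -d 'grant_type=refresh_token&refresh_token=<api token>' 'https://{{host}}/oauth/provider/token'

The returned access token is then passed as a bearer token in subsequent API calls.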

This makes it easy to disable the provider context via a URI filter.

SAML/OAuth Authentication via UI

Here we must distinguish between the API and UI behavior. For SAML, the UI is using the /login/org/<org-name>/… endpoint. The provider context is using the default SYSTEM org as the org name. So we must filter URIs starting with /login/org/SYSTEM.

For OAuth the UI is using the same endpoints as the API access token authentication (/oauth/tenant vs /oauth/provider), with the addition of /login/oauth?service=provider.

For API SAML/OAuth logins the /cloudapi/1.0.0/sessions vs /cloudapi/1.0.0/sessions/provider endpoints are used.

WAF Filtering Example

Here is an example of how to set up URI filtering with VMware NSX Advanced Load Balancer.

  1. We obviously need to set up a VCD cell (SSL) pool and a Virtual Service for the external IP and port 443 (SSL).
  2. The virtual service application profile must be set to System-Secure-HTTP as we need to terminate SSL sessions on the load balancer in order to inspect the URI. That means the public SSL certificate must be uploaded to the load balancer as well. The cells can actually use self-signed certs, especially if you use the new console proxy that does not require SSL pass-through and works on port 443.
  3. In the virtual service go to Policies > HTTP Request and create the following rules:
    Rule Name: Provider Access
    Client IP Address: Is Not: <admin subnets>
    Path: Criteria – Begins with:
    /cloudapi/1.0.0/sessions/provider
    /oauth/provider
    /login/oauth?service=provider
    /login/org/SYSTEM
    Content Switch: Local response – Status Code: 403.
WAF Access Rule
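
To verify the rule, a simple curl check from a non-authorized subnet ({{host}} and the credentials are placeholders) should receive the 403 local response from the load balancer instead of a session token:

curl -k -i -X POST -u 'administrator@System:<password>' -H 'Accept: application/json;version=37.0' 'https://{{host}}/cloudapi/1.0.0/sessions/provider'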

And this is what you can observe when trying to log in via integrated authentication from non-authorized subnets:

And here is an example of SAML login:

Load Balancing with Avi in VMware Cloud Director

VMware Cloud Director 10.2 is adding network load balancing (LB) functionality in NSX-T backed Organization VDCs. It is not using the native NSX-T load balancer capabilities but instead relies on Avi Networks technology that was acquired by VMware about a year ago and since then rebranded to VMware NSX Advanced Load Balancer. I will call it Avi for short in this article.

The way Avi works is quite different from the way load balancing worked in NSX-V or NSX-T. Understanding the differences and Avi architecture is essential to properly use it in multitenant VCD environments.

I will focus only on the comparison with NSX-V LB as this is relevant for VCD (the NSX-T legacy LB was never a viable option for VCD environments).

In VCD, in an NSX-V backed Org VDC, the LB runs on the Org VDC Edge Gateway (VM) which can have four different sizes (compact, large, quad large and extra large) and be in standalone or active/standby configuration. That Edge VM also needs to perform routing, NATing, firewalling, VPN, DHCP and DNS relay. Load balancer on a stick is not an option with NSX-V in VCD. The LB VIP must be an IP assigned to one of the external or internally attached network interfaces of the Org VDC Edge GW.

Enabling load balancing on an Org VDC Edge GW in such a case is easy as the resource is already there.

In the case of Avi LB the load balancing is performed by external (dedicated to load balancing) components which adds more flexibility, scalability and features but also means more complexity. Let’s dive into it.

You can look at Avi as another separate platform solution similar to vSphere or NSX: where vSphere is responsible for compute and storage and NSX for routing, switching and security, Avi is now responsible for load balancing.

A picture is worth a thousand words, so let me put this diagram here first and then dig deeper (click for larger version).

 

Control Path

You start by deploying the Avi Controller cluster (3 highly available nodes) which integrates with vSphere (used for compute/storage) and NSX-T (for routing LB data and control plane traffic). The controllers would sit somewhere in your management infrastructure next to all the other management solutions.

The integration is done by setting up a so-called NSX-T Cloud in Avi where you define the vCenter Server (only one is supported per NSX-T Cloud) and NSX-T Manager endpoints and the NSX-T overlay transport zone (with a 1:1 relationship between the TZ and the NSX-T Cloud definition). Those would be your tenant/workload VC/NSX-T.

You must also point to a pre-created management network segment that will be used to connect all load balancing engines (more on them later) so they can communicate with the controllers for management and control traffic. To do so, in NSX-T you would set up a dedicated Tier-1 (Avi Management) GW with the Avi Management segment connected and DHCP enabled. The expectation is that the Tier-1 GW would be able to reach the Avi Controllers through the Tier-0.

Data Path

Avi Service Engines (SE) are VM resources that perform the load balancing. They are similar to NSX-T Edge Nodes in the sense that the load balancing virtual services can be placed on any SE node based on capacity or reservations (just as a Tier-1 GW can be placed on any Edge Node). Per se there is no strict relationship between a tenant's LB and an SE node. An SE node can be shared across Org VDC Edge GWs or even tenants. An SE node is a VM with up to 10 network interfaces (NICs). One NIC is always needed for the management/control traffic (blue network). The rest (9) are used to connect to the Org VDC Edge GW (Tier-1 GW) via a Service Network logical segment (yellow and orange). The service networks are created by VCD when you enable the load balancing service on the Org VDC Edge GW, together with a DHCP service to provide IP addresses for the attached SEs. It will by default get the 10.255.255.0/25 subnet, but the system admin can change it if it clashes with existing Org VDC networks. Service Engines run each service interface in a different VRF context so there is no worry about IP conflicts or even cross-tenant communication.

When a load balancing pool and virtual service are configured by the tenant, Avi will automatically pick a Service Engine to instantiate the LB service. It might even need to first deploy (automatically) an SE node if there is no existing capacity. When the SE is assigned, Avi will configure a static route (/32) on the Org VDC Edge GW pointing the virtual service VIP (virtual IP) to the service engine IP address (from the tenant's LB service network).

Note: The VIP, contrary to NSX-V LB, can be almost any arbitrary IP address. It can be a routable external IP address allocated to the Org VDC Edge GW or any non-externally routed address, but it cannot clash with any existing Org VDC networks or with the LB service network. If you use an external Org VDC Edge GW allocated IP address, you cannot use that address for anything else (e.g. SNAT or DNAT). That's the way NSX-T works (no NAT and static routing at the same time). So for example, if you want to use public address 1.2.3.4 for LB on port 80 but at the same time use it for SNAT, use an internal IP for the LB (e.g. 172.31.255.100) and create a DNAT port forwarding rule to it (1.2.3.4:80 to 172.31.255.100:80).

Service Engine Groups

With the basics out of the way, let's discuss how the service provider can manage the load balancing quality of service – performance, capacity and availability. This is done via Service Engine Groups (SEG).

SEGs are (today) configured directly in Avi Controller and imported into VCD. They specify SE node sizing (CPU, RAM, storage), bandwidth restrictions, virtual services maximums per node and availability mode.

The availability mode needs more explanation. Avi supports four availability modes:
A/S … legacy (only two nodes are deployed), service is active only on one node at a time and stand by on the other, no scale out support (service across nodes), very fast failover

A/A … elastic, service is active on at least two SEs, session info is proactively replicated, very fast failover

N+M … elastic, N is the number of SE nodes the service is scaled over, M is a buffer in the number of failures the group can sustain, slow failover (due to the controller needing to re-assign services), but efficient SE utilization

N+0 … same as N+M but no buffer, the controller will deploy new SE nodes when failure occurs. The most efficient use of resources but the slowest failover time.

The base Avi licensing supports only the legacy A/S high availability mode. For best availability and performance, usage of the elastic A/A mode is recommended.

As mentioned Service Engine Groups are imported into VCD where the system administrator makes a decision whether SEG is going to be dedicated (SE nodes from that group will be attached to only one Org VDC Edge GW) or shared.

Then, when load balancing is enabled on a particular Org VDC Edge GW, the service provider assigns one or more SEGs to it together with a capacity reservation and a maximum in terms of virtual services for that particular Org VDC Edge GW.

Use case examples:

  • A/S dedicated SEG for each tenant / Org VDC Edge GW. Avi will create two SE nodes for each LB-enabled Org VDC Edge GW and will provide a similar service as the LB on an NSX-V backed Org VDC Edge GW did. Does not require additional licensing, but a SEG must be pre-created for each tenant / Org VDC Edge GW.
  • A/A elastic shared across all tenants. Avi will create a pool of SE nodes that are going to be shared. Only one SEG is created. Capacity allocation is managed in VCD; Avi elastically deploys and undeploys SE nodes based on actual usage (the usage is measured in the number of virtual services, not actual throughput or requests per second).

Service Engine Node Placement

The service engine nodes are deployed by Avi into the (single) vCenter Server associated with the NSX-T Cloud and they live outside of VMware Cloud Director management. The placement is defined in the Service Engine Group definition (you must use Avi 20.1.2 or newer). You can select a vCenter Server folder and limit the scope of deployment to a list of ESXi hosts and datastores. Avi has no understanding of vSphere host and datastore clusters or resource pools. Avi will also not configure any DRS anti-affinity for the deployed nodes (but you can do so post-deployment).

Conclusion

The whole Avi deployment process for the system admin is described in detail here. The guide in the link refers to general Avi deployment of NSX-T Cloud, however for VCD deployment you would just stop before the step Creating Virtual Service as that would be done from VCD by the tenant.

Avi licensing is either basic or enterprise and is set at the Avi Controller cluster level. So it is possible to mix both licenses for a two-tier LB service by deploying two Avi Controller cluster instances and associating each with a different NSX-T transport zone (two vSphere clusters or Provider VDCs).

The feature differences between the basic and enterprise editions are quite extensive and complex. Besides the Service Engine high availability modes, the other important differences are access to metrics and the number of application types, health monitors and pool selection algorithms.

The Avi usage metering for licensing purposes is currently done via a Python script that is run on the Avi Controller to measure the Service Engine total high-water mark vCPU usage during a given period and must be reported manually. The basic license is included for free with VCPP NSX usage and is capped at 1 vCPU per 640 GB of reported vRAM of NSX base usage.

Update 2020/10/23: Make sure to check interoperability matrix. As of today only Avi 20.1.1 is supported with VCD 10.2.