Load Balancing with Avi in VMware Cloud Director

VMware Cloud Director 10.2 is adding network load balancing (LB) functionality to NSX-T backed Organization VDCs. It does not use the native NSX-T load balancer capabilities but instead relies on Avi Networks technology, which was acquired by VMware about a year ago and has since been rebranded as VMware NSX Advanced Load Balancer. I will call it Avi for short in this article.

The way Avi works is quite different from the way load balancing worked in NSX-V or NSX-T. Understanding the differences and the Avi architecture is essential to using it properly in multitenant VCD environments.

I will focus only on the comparison with the NSX-V LB, as this is what is relevant for VCD (the NSX-T legacy LB was never a viable option for VCD environments).

In VCD, in an NSX-V backed Org VDC, the LB runs on the Org VDC Edge Gateway (a VM) that comes in four different sizes (compact, large, quad large and extra large) and can be deployed standalone or in active/standby configuration. That Edge VM also has to perform routing, NAT, firewalling, VPN, DHCP and DNS relay. Load balancer on a stick is not an option with NSX-V in VCD. The LB VIP must be an IP address assigned to one of the externally or internally attached network interfaces of the Org VDC Edge GW.

Enabling load balancing on an Org VDC Edge GW in such a case is easy, as the resource is already there.

In the case of Avi, the load balancing is performed by external components dedicated to load balancing, which adds flexibility, scalability and features but also means more complexity. Let’s dive into it.

You can look at Avi as another separate platform solution similar to vSphere or NSX: where vSphere is responsible for compute and storage and NSX for routing, switching and security, Avi is now responsible for load balancing.

A picture is worth a thousand words, so let me put this diagram here first and then dig deeper.


Control Path

You start by deploying the Avi Controller cluster (three nodes for high availability), which integrates with vSphere (used for compute/storage) and NSX-T (for routing of the LB data and control plane traffic). The controllers would sit somewhere in your management infrastructure next to all the other management solutions.

The integration is done by setting up a so-called NSX-T Cloud in Avi, where you define the vCenter Server (only one is supported per NSX-T Cloud) and NSX-T Manager endpoints and the NSX-T overlay transport zone (with a 1:1 relationship between the TZ and the NSX-T Cloud definition). Those would be your tenant/workload VC/NSX-T.

You must also point to a pre-created management network segment that will be used to connect all load balancing engines (more on them later) so they can communicate with the controllers for management and control traffic. To do so, in NSX-T you would set up a dedicated Tier-1 (Avi Management) GW with the Avi Management segment connected and DHCP enabled. The expectation is that the Tier-1 GW can reach the Avi Controllers through the Tier-0 GW.

Data Path

Avi Service Engines (SE) are the VMs that perform the actual load balancing. They are similar to NSX-T Edge Nodes in the sense that load balancing virtual services can be placed on any SE node based on capacity or reservations (just as a Tier-1 GW can be placed on any Edge Node). There is no strict relationship between a tenant’s LB and an SE node; an SE node can be shared across Org VDC Edge GWs or even tenants.

An SE node is a VM with up to 10 network interfaces (NICs). One NIC is always needed for the management/control traffic (the blue network in the diagram). The remaining nine are used to connect to Org VDC Edge GWs (Tier-1 GWs) via a Service Network logical segment (yellow and orange). The service networks are created by VCD when you enable the load balancing service on the Org VDC Edge GW, together with a DHCP service that provides IP addresses for the attached SEs. By default it gets the 192.168.255.0/25 subnet, but the system admin can change it if it clashes with existing Org VDC networks. Service Engines run each service interface in a different VRF context, so there is no worry about IP conflicts or cross-tenant communication.

When a load balancing pool and virtual service are configured by the tenant, Avi will automatically pick a Service Engine to instantiate the LB service. It might even need to first deploy (automatically) an SE node if there is no existing capacity. When an SE is assigned, Avi will configure a static route (/32) on the Org VDC Edge GW pointing the virtual service VIP (virtual IP) to the Service Engine IP address (from the tenant’s LB service network).
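This all happens behind the scenes, but to illustrate what ends up on the Tier-1 GW, the resulting static route corresponds roughly to an NSX-T Policy API object like the one below (a sketch only – the IDs and addresses are made up for this example):

PATCH https://<NSX-Manager>/policy/api/v1/infra/tier-1s/<org-vdc-edge-gw-id>/static-routes/lb-vip-route

{
    "display_name": "lb-vip-route",
    "network": "1.2.3.4/32",
    "next_hops": [
        {
            "ip_address": "192.168.255.10",
            "admin_distance": 1
        }
    ]
}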

Note: Contrary to the NSX-V LB, the VIP can be almost any arbitrary IP address. It can be a routable external IP address allocated to the Org VDC Edge GW or any non-externally routed address, but it cannot clash with any existing Org VDC network or with the LB service network. If you use an external IP address allocated to the Org VDC Edge GW, you cannot use that address for anything else (e.g. SNAT or DNAT). That is the way NSX-T works (no NAT and static routing for the same address at the same time). So for example, if you want to use the public address 1.2.3.4 for LB on port 80 but at the same time use it for SNAT, use an internal IP for the LB VIP (e.g. 172.31.255.100) and create a DNAT port forwarding rule to it (1.2.3.4:80 to 172.31.255.100:80).
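The tenant would create such a DNAT rule in the VCD Edge Gateway UI or API; in NSX-T Policy API terms the resulting rule would look roughly like this (a sketch with made-up IDs, assuming the predefined HTTP service is used for the port 80 match):

PATCH https://<NSX-Manager>/policy/api/v1/infra/tier-1s/<org-vdc-edge-gw-id>/nat/USER/nat-rules/lb-dnat-80

{
    "display_name": "lb-dnat-80",
    "action": "DNAT",
    "destination_network": "1.2.3.4",
    "translated_network": "172.31.255.100",
    "service": "/infra/services/HTTP",
    "firewall_match": "MATCH_INTERNAL_ADDRESS"
}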

Service Engine Groups

With the basics out of the way, let’s discuss how the service provider can manage the load balancing quality of service – performance, capacity and availability. This is done via Service Engine Groups (SEG).

SEGs are (today) configured directly in the Avi Controller and imported into VCD. They specify SE node sizing (CPU, RAM, storage), bandwidth restrictions, virtual service maximums per node and the availability mode.
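For illustration, a SEG can also be created directly against the Avi Controller REST API. The sketch below assumes the /api/serviceenginegroup endpoint and commonly used sizing fields; the exact field names, units and defaults may differ between Avi versions, and the cloud and placement references are omitted:

POST https://<Avi-Controller>/api/serviceenginegroup

{
    "name": "vcd-dedicated-seg",
    "ha_mode": "HA_MODE_LEGACY_ACTIVE_STANDBY",
    "vcpus_per_se": 1,
    "memory_per_se": 2048,
    "disk_per_se": 15,
    "max_vs_per_se": 10,
    "buffer_se": 0
}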

The availability mode needs more explanation. Avi supports four availability modes:
  • A/S … legacy; only two SE nodes are deployed, the service is active on one node at a time and standby on the other, no scale-out support (service across nodes), very fast failover
  • A/A … elastic; the service is active on at least two SEs, session info is proactively replicated, very fast failover
  • N+M … elastic; N is the number of SE nodes the service is scaled over, M is a buffer expressed in the number of failures the group can sustain, slow failover (the controller needs to re-assign services) but efficient SE utilization
  • N+0 … same as N+M but with no buffer; the controller will deploy new SE nodes when a failure occurs. The most efficient use of resources but the slowest failover time.

The basic Avi license supports only the legacy A/S high availability mode. For the best availability and performance, the elastic A/A mode is recommended.

As mentioned, Service Engine Groups are imported into VCD, where the system administrator decides whether a SEG is going to be dedicated (SE nodes from that group will be attached to only one Org VDC Edge GW) or shared.

Then, when load balancing is enabled on a particular Org VDC Edge GW, the service provider assigns one or more SEGs to it, together with a reservation and maximum in terms of virtual services for that Org VDC Edge GW.
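This assignment can also be scripted against the VCD OpenAPI (cloudapi). A rough sketch follows – the endpoint path and property names are taken from the VCD 10.2 OpenAPI as I recall them and should be treated as assumptions, so verify them in the API Explorer of your instance:

POST https://<VCD>/cloudapi/1.0.0/loadBalancer/serviceEngineGroups/assignments

{
    "gatewayRef": { "id": "urn:vcloud:gateway:<uuid>" },
    "serviceEngineGroupRef": { "id": "<seg-urn>" },
    "minVirtualServices": 0,
    "maxVirtualServices": 10
}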

Use case examples:

  • A/S dedicated SEG for each tenant / Org VDC Edge GW: Avi will create two SE nodes for each LB-enabled Org VDC Edge GW and will provide a service similar to what the LB on an NSX-V backed Org VDC Edge GW provided. This does not require additional licensing, but a SEG must be pre-created for each tenant / Org VDC Edge GW.
  • A/A elastic SEG shared across all tenants: Avi will create a pool of SE nodes that are going to be shared. Only one SEG is created. Capacity allocation is managed in VCD; Avi elastically deploys and undeploys SE nodes based on actual usage (the usage is measured in the number of virtual services, not actual throughput or requests per second).

Service Engine Node Placement

The Service Engine nodes are deployed by Avi into the (single) vCenter Server associated with the NSX-T Cloud and they live outside of VMware Cloud Director management. The placement is defined in the Service Engine Group definition (you must use Avi 20.1.2 or newer). You can select a vCenter Server folder and limit the scope of deployment to a list of ESXi hosts and datastores. Avi has no understanding of vSphere host clusters, datastore clusters or resource pools. Avi will also not configure any DRS anti-affinity for the deployed nodes (but you can do so post-deployment).

Conclusion

The whole Avi deployment process for the system admin is described in detail here. The guide in the link covers a general Avi NSX-T Cloud deployment; for a VCD deployment you would simply stop before the Creating Virtual Service step, as that part is done from VCD by the tenant.

Avi licensing comes in basic and enterprise editions and is set at the Avi Controller cluster level. So it is possible to mix both licenses for a two-tier LB service by deploying two Avi Controller cluster instances and associating each with a different NSX-T transport zone (two vSphere clusters or Provider VDCs).

The feature differences between the basic and enterprise editions are quite extensive and complex. Besides the Service Engine high availability modes, the other important differences are access to metrics and the number of supported application types, health monitors and pool selection algorithms.

The Avi usage metering for licensing purposes is currently done via a Python script that is run on the Avi Controller to measure the Service Engine total high-water mark vCPU usage during a given period, and the result must be reported manually. The basic license is included for free with VCPP NSX usage and is capped at 1 vCPU per 640 GB of reported NSX base usage vRAM.

vCloud Director H5 UI Error: 431 Request Header Fields Too Large

This is just a short blog post to describe an issue you might get with the tenant or provider portal HTML5 UI in vCloud Director, where you will see errors in the browser related to request header fields being too large.

You will see it more likely with the Chrome browser and if your cloud domain is shared with other services. The root cause is that the browser API calls stop working once the request header gets larger than 8 KB. While 8 KB seems like a big enough size – especially as the request headers vCloud Director uses contain only the session ID, JWT token and possibly load balancer headers – the header also includes all the browser cookies applicable to the vCloud Director domain stored by other web services.

The temporary fix is for the end user to delete their browser cookies. But is there something the provider could do?

In our case we saw a situation where the vCloud Director instance was on the *.vmware.com domain and the browser contained lots of large OAM cookies related to the VMware Single Sign-On solution. While those cookies are essential for multiple VMware internal applications, there is no reason for vCloud Director to receive them in every API request. One way to block the cookies and thus decrease the request header size is to remove them at the load balancer. With the NSX-V load balancer this can be accomplished by utilizing L7 SSL termination and an application rule (see my older blog post on how to configure the NSX-V Edge load balancer).

In my case the application rule I use is:

Update 2019/10/24: The initial rule would remove all Cookies. I have now amended it with another rule that removes all but vcloud_session_id and vcloud_jwt cookies if they are present.

reqirep ^Cookie:\s.*(vcloud_session_id=[^;]*)|(vcloud_jwt=[^;]*) Cookie:\ \1;\ \2
reqidel ^Cookie:.*OAM*

The reqidel rule deletes any Cookie header from the request that contains the OAM string.


Load Balancing vCloud Director with NSX-T

I have just had a chance, for the first time, to set up a vCloud Director installation that was fronted by an NSX-T based load balancer (version 2.4.1). In the past I have blogged about how to load balance vCloud Director cells with NSX-V:

Load Balancing vCloud Director Cells with NSX Edge Gateway

vCloud OpenAPI – Large Payload Issue with Load Balancer

NSX-T differs quite a lot from NSX-V, therefore the need for this article. The load balancer instance is deployed into the NSX-T Edge Cluster, which is a set of virtual or physical NSX-T Edge Nodes. There are also strict sizing guidelines related to the size and number of LB instances and the size of the Edge Nodes – see the official docs.

Certificates

Import your VCD public certificate in the NSX Manager UI: System > Certificates > Import Certificate. You will need to provide the name, full certificate chain and private key, and set it as a Service Certificate. If it is signed by an enterprise CA, you will also need to import the CA certificate first.

Monitor

Create a new monitor in Networking > Load Balancing > Monitors > Add Active Monitor HTTPS (a Policy API equivalent is sketched after the list):

  • protocol HTTPS
  • monitoring port 443
  • default timers
  • HTTP Request Configuration: GET /cloud/server_status, HTTP Request Version: 1
  • HTTP Response Configuration: HTTP response body: Service is up.
  • SSL Configuration: Enabled, Client Certificate: None
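If you prefer to script it, the equivalent monitor can be created via the NSX-T Policy API roughly as follows (a sketch – I am assuming HTTP/1.0 for the request version and leaving the SSL configuration at its defaults; the field names come from the LBHttpsMonitorProfile schema and may differ slightly between NSX-T versions):

PATCH https://<NSX-Manager>/policy/api/v1/infra/lb-monitor-profiles/vcd-https-monitor

{
    "resource_type": "LBHttpsMonitorProfile",
    "display_name": "vcd-https-monitor",
    "monitor_port": 443,
    "request_url": "/cloud/server_status",
    "request_version": "HTTP_VERSION_1_0",
    "response_body": "Service is up."
}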

Profiles

Application Profile

Networking > Load Balancing > Profiles > Select Profile Type: Application > Add Application Profile > HTTP

Here in the UI we can only set the Request Header Size and Request Body Size. Set the header size to its 65535 maximum and the body size to at least 52428800 bytes (ISO/OVA uploads use 50 MB chunks). We will later use the API to also configure the Response Header Size.

Persistence and SSL Profiles

I will reuse existing default-source-ip-lb-persistence-profile and default-balanced-client-ssl-profile.

Server Pools

Networking > Load Balancing > Server Pools > Add Server Pool (a Policy API equivalent is sketched after the list)

  • Algorithm: Least Connection
  • Active Monitor: pick the one created before
  • Select members: enter individual members (do not enter a port, as we will reuse the pool for multiple ports)
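The same pool via the NSX-T Policy API would look roughly like this (a sketch with made-up cell IP addresses, referencing the hypothetical monitor name used earlier):

PATCH https://<NSX-Manager>/policy/api/v1/infra/lb-pools/vcd-cell-pool

{
    "display_name": "vcd-cell-pool",
    "algorithm": "LEAST_CONNECTION",
    "active_monitor_paths": ["/infra/lb-monitor-profiles/vcd-https-monitor"],
    "members": [
        { "display_name": "vcd-cell-01", "ip_address": "192.168.110.11" },
        { "display_name": "vcd-cell-02", "ip_address": "192.168.110.12" }
    ]
}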


Virtual Servers

We will add two virtual servers: one for UI/API and another for VM Remote Console connections. For both I have picked the same IP address from the cell logical segment. The ports will be different (443 vs 8443).

vCloud UI

  • Add virtual server: L7 HTTP
  • Ports: 443
  • Ignore Load Balancer placement for now
  • Server Pool: the one we created before
  • Application Profile: the one we created before
  • Persistence: default-source-ip-lb-persistence-profile
  • SSL Configuration: Client SSL: Enabled, Default Certificate: the one we imported before, Client SSL Profile: default-balanced-client-ssl-profile
    Server SSL: Enabled, Client Certificate: None, Server SSL Profile: default-balanced-client-ssl-profile

vCloud Console

  • Add virtual server: L4 TCP
  • Ports: 8443
  • Ignore Load Balancer placement for now
  • Server Pool: the one we created before
  • Application Profile: default-tcp-lb-app-profile
  • Persistence: disabled

Load Balancer

Now we can create the load balancer instance and associate the virtual servers with it. Create the LB instance on the Tier-1 Gateway which routes to your VCD cell network. Make sure the Tier-1 Gateway runs on an Edge Node of the proper size (see the doc link above). A Policy API equivalent of the LB instance is sketched after the list below.

Networking > Load Balancing > Load Balancers > Add Load Balancer

  • Size: small
  • Tier 1 Gateway
  • Add Virtual Servers: add the 2 virtual servers created in the previous step
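For reference, the LB service instance itself maps to an NSX-T Policy API object roughly like this (a sketch – the Tier-1 ID is a placeholder; the virtual servers then reference the LB service via their lb_service_path attribute):

PATCH https://<NSX-Manager>/policy/api/v1/infra/lb-services/vcd-lb

{
    "display_name": "vcd-lb",
    "size": "SMALL",
    "connectivity_path": "/infra/tier-1s/<tier-1-gw-id>",
    "enabled": true
}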

Now that the load balancer is up and running, you should see all green in the status column. We are not done yet, though.

First, we need to increase the response header size, as the vCloud Director OpenAPI sends huge headers with links. Without this, you would get H5 UI errors (Nginx 502 Bad Gateway) and some API calls would fail. This can currently be done only via the NSX Policy API. Fire up Postman or curl and do a GET and then a PUT on the following URI:

https://<NSX-Manager>/policy/api/v1/infra/lb-app-profiles/<profile-name>

In the payload, change the response_header_size to at least 50000 bytes.
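A minimal sketch of the sequence with curl, assuming basic auth and a profile named vcd-http-profile (adjust both to your environment; the PUT body is the JSON returned by the GET with response_header_size changed, keeping the rest of the payload including _revision):

curl -k -u admin 'https://<NSX-Manager>/policy/api/v1/infra/lb-app-profiles/vcd-http-profile' > profile.json
(edit profile.json and set "response_header_size": 50000)
curl -k -u admin -X PUT -H 'Content-Type: application/json' -d @profile.json 'https://<NSX-Manager>/policy/api/v1/infra/lb-app-profiles/vcd-http-profile'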

And finally, we will need to set up NAT so our load balanced virtual servers are reachable both from the outside world (on the Tier-0 Gateway) and from the internal networks. This is quite network topology specific, but do not forget that the cells themselves must be able to reach the public (load balanced) URL configured in the vCloud Director public addresses.

vCloud OpenAPI – Large Payload Issue with Load Balancer

With vCloud Director version 9, a new API (cloudapi) based on the OpenAPI specification has been introduced next to the legacy XML-based API. In vCloud Director 9.5, the API Explorer enables consumption of the API directly from the vCloud UI endpoint (read here). Most of the new features use this OpenAPI, such as H5 UI branding, extensions, vRealize Orchestrator service integrations, Cross-VDC networking and Roles management.

OpenAPI is very simple to use: it is JSON based, with links provided in headers. However, there might be issues when a load balancer with SSL termination is involved, as the request or response might not get through the load balancer due to its header or payload size.

One such issue is documented in the vCloud Director 9.5 release notes. Attempting to edit Global Roles in the new H5 UI will fail with an error:

unexpected character at line 1 column 1 of the JSON data.

In my case I am using NSX Edge Load balancer with SSL termination and below is the error screenshot:

There are multiple workarounds described in the release notes, but none of them actually worked for me:

  • increasing the header maximum at the Edge LB as described in KB 52553 did not help, as the number of headers is not the only issue in this particular scenario – the body payload size is as well
  • limiting the maximum page size in vCloud Director with cell-management-tool manage-config -n restapi.queryservice.maxPageSize -v 25 fixes the above API call, but the subsequent call made by the UI ignores the setting and the response will not get through the LB again.

After some investigation and troubleshooting I discovered that there is a way to increase the Edge LB buffer size above the default 32 KB with a call similar to the one in KB 52553:

PUT https://<NSX-Manager>/api/4.0/edges/<Edge-ID>/systemcontrol/config

<systemControl>
    <property>lb.global.tune.http.maxhdr=1024</property>
    <property>lb.global.tune.bufsize=65536</property>
</systemControl>
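For reference, a minimal sketch of making this call with curl, assuming the XML above is saved as systemcontrol.xml and you have NSX Manager admin credentials:

curl -k -u admin -X PUT -H 'Content-Type: application/xml' -d @systemcontrol.xml 'https://<NSX-Manager>/api/4.0/edges/<Edge-ID>/systemcontrol/config'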

The above call (NSX 6.4) was enough to fix the issue for me and I can now edit Global Roles in the UI.

How to Enable TLS1.0 on NSX Edge

In one of my previous articles I wrote about how the NSX upgrade to 6.2.4 impacts PowerCLI, as it disables TLS 1.0 ciphers on the Edge load balancer. The fix for PowerCLI was easy, but what if there are other applications still using TLS 1.0 that cannot be fixed or updated?

An example is vSphere Replication 6.1.1 which does not support TLS 1.2.

There is a workaround. It is possible to create an application rule that specifically enables TLS 1.0. The rule syntax is:

tlsv1 enable


Once the rule is created, it can be added in the Advanced Configuration of the virtual server.
