Load Balancing with Avi in VMware Cloud Director

VMware Cloud Director 10.2 adds network load balancing (LB) functionality to NSX-T backed Organization VDCs. It does not use the native NSX-T load balancer capabilities; instead it relies on technology from Avi Networks, which VMware acquired about a year ago and has since rebranded to VMware NSX Advanced Load Balancer. I will call it Avi for short in this article.

The way Avi works is quite different from the way load balancing worked in NSX-V or NSX-T. Understanding the differences and the Avi architecture is essential to using it properly in multitenant VCD environments.

I will focus only on the comparison with NSX-V LB as this is what is relevant for VCD (the NSX-T legacy LB was never a viable option for VCD environments).

In an NSX-V backed Org VDC, the LB runs on the Org VDC Edge Gateway (VM), which comes in four sizes (compact, large, quad large and extra large) and can be deployed standalone or in active/standby configuration. That Edge VM also has to perform routing, NAT, firewalling, VPN, DHCP and DNS relay. Load balancer on a stick is not an option with NSX-V in VCD. The LB VIP must be an IP assigned to one of the external or internal network interfaces of the Org VDC Edge GW.

Enabling load balancing on an Org VDC Edge GW in that case is easy, as the resource is already there.

In the case of Avi, the load balancing is performed by external components dedicated to the task, which adds flexibility, scalability and features, but also more complexity. Let's dive into it.

You can look at Avi as another standalone platform solution similar to vSphere or NSX: where vSphere is responsible for compute and storage and NSX for routing, switching and security, Avi is responsible for load balancing.

A picture is worth a thousand words, so let me put this diagram here first and then dig deeper.

[Diagram: Avi architecture in VCD – Controller cluster, Service Engines, NSX-T and vSphere integration]

Control Path

You start by deploying the Avi Controller cluster (three nodes for high availability), which integrates with vSphere (used for compute/storage) and NSX-T (for routing of LB data and control plane traffic). The controllers sit somewhere in your management infrastructure, next to all the other management solutions.

The integration is done by setting up a so-called NSX-T Cloud in Avi, where you define the vCenter Server (only one is supported per NSX-T Cloud) and NSX-T Manager endpoints and the NSX-T overlay transport zone (with a 1:1 relationship between the TZ and the NSX-T Cloud definition). Those would be your tenant/workload VC/NSX-T.

You must also point to a pre-created management network segment that will be used to connect all load balancing engines (more on them later) so they can communicate with the controllers for management and control traffic. To do so, in NSX-T you set up a dedicated Tier-1 (Avi Management) GW with the Avi Management segment connected and DHCP enabled. The expectation is that the Tier-1 GW can reach the Avi Controllers through the Tier-0. A minimal API sketch of this step follows.
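To make the control-path setup more concrete, here is a minimal sketch of creating an NSX-T Cloud through the Avi REST API using the avisdk Python package (pip install avisdk). The controller address and credentials are placeholders, and the nested nsxt_configuration field names are illustrative assumptions rather than the verified schema; check the Avi API reference for your version.

# Minimal sketch: create an NSX-T Cloud on the Avi Controller via its REST API.
# The nsxt_configuration field names below are illustrative placeholders;
# consult the Avi API reference for the real schema.
from avi.sdk.avi_api import ApiSession

session = ApiSession.get_session(
    "avi-ctrl.example.com", "admin", "VMware1!", api_version="20.1.1")

cloud_spec = {
    "name": "nsxt-cloud-01",
    "vtype": "CLOUD_NSXT",                    # one NSX-T Cloud per overlay TZ
    "nsxt_configuration": {                   # illustrative field names
        "nsxt_url": "nsx-mgr.example.com",
        "transport_zone": "overlay-tz-01",    # 1:1 with this NSX-T Cloud
        "management_segment": "avi-mgmt",     # pre-created Avi Management segment
    },
}
resp = session.post("cloud", data=cloud_spec)
print(resp.status_code, resp.json())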

Data Path

Avi Service Engines (SE) are the VM resources that perform the load balancing. They are similar to NSX-T Edge nodes in the sense that load balancing virtual services can be placed on any SE node based on capacity or reservations (just as a Tier-1 GW can be placed on any Edge node). There is no strict relationship between a tenant's LB and a particular SE node; an SE node can be shared across Org VDC Edge GWs or even tenants.

An SE node is a VM with up to 10 network interfaces (NICs). One NIC is always needed for the management/control traffic (blue network). The remaining nine are used to connect to Org VDC Edge GWs (Tier-1 GWs) via service network logical segments (yellow and orange). The service networks are created by VCD when you enable the load balancing service on the Org VDC Edge GW, together with a DHCP service to provide IP addresses for the attached SEs. By default the service network gets the 192.168.255.0/25 subnet, but the system admin can change it if it clashes with existing Org VDC networks. Service Engines run each service interface in a different VRF context, so there is no worry about IP conflicts or even cross-tenant communication.

When a load balancing pool and virtual service are configured by the tenant, Avi automatically picks a Service Engine to instantiate the LB service on. It might even need to first deploy (automatically) an SE node if there is no existing capacity. When an SE is assigned, Avi configures a static route (/32) on the Org VDC Edge GW pointing the virtual service VIP (virtual IP) to the Service Engine IP address (from the tenant's LB service network).

Note: Contrary to NSX-V LB, the VIP can be almost any arbitrary IP address. It can be a routable external IP address allocated to the Org VDC Edge GW or any non-externally-routed address, but it cannot clash with any existing Org VDC network or with the LB service network. If you use an external IP address allocated to the Org VDC Edge GW, you cannot use that address for anything else (e.g. SNAT or DNAT); that is the way NSX-T works (no NAT and static routing at the same time). So for example, if you want to use the public address 1.2.3.4 for LB on port 80 but at the same time use it for SNAT, use an internal IP for the LB VIP (e.g. 172.31.255.100) and create a DNAT port-forwarding rule to it (1.2.3.4:80 to 172.31.255.100:80), as sketched below.
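For illustration, here is a minimal sketch of creating that DNAT rule against the VCD CloudAPI NAT endpoint. The host, bearer token and gateway URN are placeholders, and the field names follow my reading of the 10.2 CloudAPI NAT rule model; verify them against your API version.

# Minimal sketch: create the DNAT rule 1.2.3.4:80 -> 172.31.255.100:80 via the
# VCD CloudAPI. Host, token and gateway URN are placeholders.
import requests

VCD = "https://vcloud.example.com"
TOKEN = "<JWT bearer token>"                  # placeholder
GW = "urn:vcloud:gateway:xxxxxxxx"            # Org VDC Edge GW URN, placeholder

rule = {
    "name": "lb-http-dnat",
    "enabled": True,
    "ruleType": "DNAT",
    "externalAddresses": "1.2.3.4",           # public IP allocated to the GW
    "internalAddresses": "172.31.255.100",    # internal LB VIP
    "dnatExternalPort": "80",
}
resp = requests.post(
    f"{VCD}/cloudapi/1.0.0/edgeGateways/{GW}/nat/rules",
    json=rule,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/json;version=35.0"},
)
resp.raise_for_status()   # rule creation is asynchronous (202 Accepted)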

Service Engine Groups

With the basics out of the way, let's discuss how the service provider can manage the load balancing quality of service – performance, capacity and availability. This is done via Service Engine Groups (SEG).

SEGs are (today) configured directly in the Avi Controller and imported into VCD. They specify SE node sizing (CPU, RAM, storage), bandwidth restrictions, maximum virtual services per node and the availability mode.

The availability mode needs more explanation. Avi supports four availability modes:

  • A/S … legacy (only two nodes are deployed); the service is active on one node at a time and standing by on the other; no scale-out support (service across nodes); very fast failover
  • A/A … elastic; the service is active on at least two SEs; session info is proactively replicated; very fast failover
  • N+M … elastic; N is the number of SE nodes the service is scaled over, M is a buffer of node failures the group can sustain; slow failover (the controller needs to re-assign services) but efficient SE utilization
  • N+0 … same as N+M but with no buffer; the controller deploys new SE nodes only when a failure occurs; the most efficient use of resources but the slowest failover time

The base Avi licensing supports only the legacy A/S high availability mode. For best availability and performance, the elastic A/A mode is recommended.

As mentioned, Service Engine Groups are imported into VCD, where the system administrator decides whether a SEG is going to be dedicated (its SE nodes will be attached to only one Org VDC Edge GW) or shared.

Then, when load balancing is enabled on a particular Org VDC Edge GW, the service provider assigns one or more SEGs to it, together with a capacity reservation and a maximum number of virtual services for that particular Org VDC Edge GW (see the sketch below).
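Here is a minimal sketch of what such an assignment could look like against the VCD CloudAPI. The host, token and URNs are placeholders, and the assignment model follows my reading of the 10.2 CloudAPI; verify the exact field names for your version.

# Minimal sketch: assign an imported SEG to an Org VDC Edge GW with reserved
# and maximum virtual service counts. URNs are placeholders.
import requests

VCD = "https://vcloud.example.com"
TOKEN = "<JWT bearer token>"                  # placeholder

assignment = {
    "gatewayRef": {"id": "urn:vcloud:gateway:xxxxxxxx"},
    "serviceEngineGroupRef": {"id": "urn:vcloud:serviceEngineGroup:yyyyyyyy"},
    "minVirtualServices": 2,                  # reserved capacity
    "maxVirtualServices": 10,                 # cap for this Edge GW
}
resp = requests.post(
    f"{VCD}/cloudapi/1.0.0/loadBalancer/serviceEngineGroups/assignments",
    json=assignment,
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/json;version=35.0"},
)
resp.raise_for_status()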

Use case examples:

  • A/S dedicated SEG for each tenant / Org VDC Edge GW. Avi creates two SE nodes for each LB-enabled Org VDC Edge GW and provides a similar service to the LB on an NSX-V backed Org VDC Edge GW. This does not require additional licensing, but a SEG must be pre-created for each tenant / Org VDC Edge GW.
  • A/A elastic SEG shared across all tenants. Avi creates a pool of SE nodes that are shared. Only one SEG is created. Capacity allocation is managed in VCD; Avi elastically deploys and undeploys SE nodes based on actual usage (the usage is measured in number of virtual services, not actual throughput or requests per second).

Service Engine Node Placement

The Service Engine nodes are deployed by Avi into the (single) vCenter Server associated with the NSX-T Cloud and live outside of VMware Cloud Director management. The placement is defined in the Service Engine Group definition (you must use Avi 20.1.2 or newer). You can select a vCenter Server folder and limit the scope of deployment to a list of ESXi hosts and datastores. Avi has no understanding of vSphere host clusters, datastore clusters or resource pools. Avi will also not configure any DRS anti-affinity for the deployed nodes (but you can do so post-deployment). A sketch of the placement settings follows.
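As a rough illustration, the placement scoping could be set on the Service Engine Group object via the Avi API like this. The vcenter_folder and vcenter_hosts field names are my assumptions about the ServiceEngineGroup schema, so treat this as a sketch and consult the Avi object model documentation.

# Minimal sketch: scope SE node placement in a Service Engine Group via the
# Avi API (20.1.2+). Field names are illustrative of the ServiceEngineGroup
# object; check the Avi object model for the exact schema.
from avi.sdk.avi_api import ApiSession

session = ApiSession.get_session(
    "avi-ctrl.example.com", "admin", "VMware1!", api_version="20.1.2")

seg = session.get_object_by_name("serviceenginegroup", "Default-Group")
seg["vcenter_folder"] = "AviSE"               # vCenter VM folder for SE nodes
seg["vcenter_hosts"] = {                      # limit deployment to listed hosts
    "include": True,
    "host_refs": ["/api/vimgrhostruntime?name=esx01.example.com"],
}
session.put(f"serviceenginegroup/{seg['uuid']}", data=seg)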

Conclusion

The whole Avi deployment process for the system admin is described in detail here. The linked guide covers the general Avi deployment of an NSX-T Cloud; for a VCD deployment you would just stop before the step Creating Virtual Service, as that is done from VCD by the tenant.

Avi licensing is either basic or enterprise and is set at the Avi Controller cluster level. It is therefore possible to mix both licenses for a two-tier LB service by deploying two Avi Controller cluster instances and associating each with a different NSX-T transport zone (two vSphere clusters or Provider VDCs).

The feature differences between the basic and enterprise editions are quite extensive and complex. Besides the Service Engine high availability modes, the other important differences are access to metrics and the number of supported application types, health monitors and pool selection algorithms.

The Avi usage metering for licensing purposes is currently done via a Python script that is run on the Avi Controller to measure the Service Engine high-water-mark total vCPU usage during a given period; the result must be reported manually. The basic license is included for free with VCPP NSX usage and is capped at 1 vCPU per 640 GB of reported NSX base-usage vRAM.

VMware Cloud Director on VMware Cloud Foundation

There has been more and more interest lately among service providers in using VMware Cloud Foundation (VCF) as the underlying virtualization platform in their datacenters. VCF is maturing and offers the automated lifecycle capabilities that service providers appreciate when operating infrastructure at scale.

I want to focus on how you would design and deploy VMware Cloud Director (VCD) on top of VCF, with a specific example. While there are whitepapers written on this topic, they do not go into the nitty-gritty details. This should not be considered a prescribed architecture – just one way to skin a cat that should inspire your own design.

VCF 4.0 consists of a management domain – a smaller infrastructure with one vSphere 7 cluster, NSX-T 3 and vRealize components (vRealize Suite Lifecycle Manager, vRealize Operations Manager, vRealize Log Insight). It is also used for the deployment of management components for workload domains, which are separate vSphere 7 + NSX-T 3 environments.

VCF has a prescribed architecture, based on VMware Validated Designs (VVD), for how all the management components are deployed. Some are on VLAN backed networks, but some are on overlay logical segments created in NSX-T (VVD calls them application virtual networks – AVN) and routed via NSX-T Edge Gateways. The following picture shows the typical logical architecture of the management cluster, which we will start with:

Reg-MGMT and X-Reg-MGMT are overlay segments; the rest are VLAN networks.
VC Mgmt … Management vCenter Server
VC Res … Workload domain (resource) vCenter Server
NSX Mgmt … Management NSX-T Managers (3x)
Res Mgmt … Workload domain (resource) NSX-T Managers (3x)
SDDC Mgr … SDDC Manager
Edge Nodes … NSX-T Edge node VMs (2x) that provide resources for Tier-0 and Tier-1 gateways and the load balancer
vRLCM … vRealize Suite Lifecycle Manager
vROps … vRealize Operations Manager (two or more nodes)
vROps RC … vRealize Operations Remote Collectors (optional)
vRLI … vRealize Log Insight (two or more nodes)
WS1A … Workspace ONE Access (formerly VIDM; one or more nodes)

Now we are going to add VMware Cloud Director solution. I will focus on the following components:

  • VCD cells
  • RabbitMQ (needed for extensibility such as vROps Tenant App or Container Service Extension)
  • vRealize Operations Tenant App (provides multitenant vROps view in VCD and Chargeback functionality)
  • Usage Meter

I have followed these design principles:

  • VCD solution will utilize overlay (AVN) networks
  • leverage existing VCF infrastructure when it makes sense
  • consider future scalability
  • separate internet traffic from the management one

And here is the proposed design:

A new overlay segment (AVN) called VCD DMZ has been added to separate the internet traffic. It is routed via a separate Tier-1 GW but connected to the existing Tier-0. The VCD cells (three or more) have their primary (eth0) interface on this network, behind an NSX-T load balancer (running in its own Tier-1, similar to the vROps one). The vRealize Operations Tenant App VM sits on this network as well.

The existing Reg-MGMT is used for the secondary interface of the VCD cells, the Usage Meter VM and the vSAN File Services NFS share that the VCD cells require.

And finally, the cross-region X-Reg-MGMT is utilized for the RabbitMQ nodes (two or more) in order to leverage the existing vROps load balancer and avoid deploying an additional one just for RabbitMQ.

Additional notes:

  • VCF deploys two NSX-T Edge nodes in a 2-node NSX-T Edge cluster. These currently cannot easily be scaled out. Therefore I would recommend deploying additional Edge nodes in a separate NSX-T Edge cluster (directly in NSX-T) for the DMZ Tier-1 gateway and the VCD load balancer. This guarantees compute and networking resources, especially for the load balancer that will perform SSL termination (might not apply if you choose to use a different load balancer, e.g. Avi). It also adds the possibility of deploying a separate Tier-0 for more N/S bandwidth.
  • vSAN FS NFS deployment is described here. Do not forget to enable MAC learning on the Reg-MGMT NSX-T logical segment (via a segment profile).
  • Both Tier-1 gateways can provide north-south firewalling for additional security.
  • As all the incoming internet traffic to VCD goes over the VCD load balancer, which provides source NAT, I have opted to put the default route of the VCD cells on the management interface, avoiding any static routes that would otherwise be necessary to separate tenant and management traffic.

Let me know in the comments if you plan VCD on VCF and if you are facing any challenges.

Enable MAC Learning as Default on vSphere Distributed Switch

This short PowerCLI script changes the vSphere Distributed Switch default port group configuration to enable the MAC learning policy. This means every port group on such a switch inherits this configuration and will have MAC learning enabled unless it is specifically disabled.

For more information on why you would need that, read William Lam's blog.

# Requires VMware PowerCLI (Get-VDSwitch comes from the VMware.VimAutomation.Vds module)
$vds = Get-VDSwitch 'DSwitch1'

# Build a vDS config spec that changes only the default port configuration
$spec = New-Object VMware.Vim.VMwareDVSConfigSpec
$spec.ConfigVersion = $vds.ExtensionData.Config.ConfigVersion   # required; must match the switch's current config version
$spec.DefaultPortConfig = New-Object VMware.Vim.VMwareDVSPortSetting
$spec.DefaultPortConfig.MacManagementPolicy = New-Object VMware.Vim.DVSMacManagementPolicy
$spec.DefaultPortConfig.MacManagementPolicy.MacLearningPolicy = New-Object VMware.Vim.DVSMacLearningPolicy

# Enable MAC learning: learn up to 4000 MACs per port and drop frames above the limit
$spec.DefaultPortConfig.MacManagementPolicy.MacLearningPolicy.Enabled = $True
$spec.DefaultPortConfig.MacManagementPolicy.MacLearningPolicy.AllowUnicastFlooding = $True
$spec.DefaultPortConfig.MacManagementPolicy.MacLearningPolicy.Limit = 4000
$spec.DefaultPortConfig.MacManagementPolicy.MacLearningPolicy.LimitPolicy = "DROP"

# Apply the change to the distributed switch
$vds.ExtensionData.ReconfigureDvs_Task($spec)


Update 08/07/2020

In case you are using this approach for a nested vSphere lab instead of the old promiscuous mode, make sure the vmk0 VMkernel port has a different MAC address than the MAC address of the vmnic of the nested ESXi host. This is because when vmk0 is migrated to a different ESXi host uplink, the vDS will not learn the MAC address on the new switch port, as it conflicts with the MAC assigned on the first uplink port (the same MAC cannot be learnt on two ports).

The VMkernel port MAC can be easily changed by editing the /etc/vmware/esx.conf file.
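For orientation, the relevant entry looks similar to the following excerpt; the key path and MAC value are illustrative from memory and may differ per ESXi build, so locate the vmk0 entry in your own esx.conf before editing.

# Illustrative /etc/vmware/esx.conf entry for the vmk0 MAC address
/net/vmkernelnic/child[0000]/mac = "00:50:56:aa:bb:cc"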

vCloud Director 9.7 JMS Certificate Issue

Are you still on vCloud Director 9.7 (VCD) in a multi-cell configuration? Then you are susceptible to a Java Message Service (JMS) certificate expiration issue. Read on.

Background

In a multi-cell setup, the VCD cells need to communicate among themselves. They use a shared database, but for much faster and more efficient communication they also use an internal ActiveMQ message bus. It is used for activity sharing and vCenter Server event notifications. If the message bus is dysfunctional, it slows all operations almost to a halt. For this particular certificate issue you will see a message similar to this in the logs:

Could not accept connection from tcp://<primary-cell-IP:port> : javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown

In vCloud Director 9.7 the bus communication became encrypted in preparation for other use cases (read here). On upgrade or new deployment of each cell, a new certificate was issued by the internal VCD_CA with a 365-day validity. In vCloud Director 10.0 and VMware Cloud Director 10.1 the certificate is regenerated upon upgrade and its validity is extended to 3 years.

To find out the certificates' expiry dates, run the following command from any cell:

/opt/vmware/vcloud-director/bin/cell-management-tool jms-certificates -status

It will print out the JMS certificate details for every cell:

Cell with UUID fd0d2ca0-e357-4aae-9a3b-1c1c5d143d05 and IP 192.168.3.12 has jms certificate: [
[
Version: V3
Subject: CN=vcd-node2.vmware.local
Signature Algorithm: SHA256withRSA, OID = 1.2.840.113549.1.1.11

Key: Sun RSA public key, 2048 bits
modulus: 25783371233977292378120920630797680107189459997753528550984552091043966882929535251723319624716621953887829800012972122297123129787471789561707788386606748136996502359020718390547612728910865287660771295203595369999641232667311679842803649012857218342116800466886995068595408721695568494955018577009303299305701023577567636203030094089380853265887935886100721105614590849657516345704143859471289058449674598267919118170969835480316562172830266483561584406559147911791471716017535443255297063175552471807941727578887748405832530327303427058308095740913857590061680873666005329704917078933424596015255346720758701070463
public exponent: 65537
Validity: [From: Wed Jun 12 15:38:11 UTC 2019,
To: Thu Jun 11 15:38:11 UTC 2020]


Yes, this particular cell's certificate will expire on June 11, 2020 – in less than two months!

The Fix

Set a calendar reminder (or automate the check as sketched further below) and, when the certificate expiration day is approaching, run the following command:

/opt/vmware/vcloud-director/bin/cell-management-tool jms-certificates --certgen

Or upgrade to vCloud Director 10.0 or newer.
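If you would rather automate the reminder, here is a minimal sketch that parses the status output shown above and warns when a certificate approaches expiry. The date pattern is taken from the sample output in this post; verify it matches the output of your cells.

#!/usr/bin/env python3
# Sketch: parse 'cell-management-tool jms-certificates -status' output and
# warn when a JMS certificate is about to expire.
import datetime
import re
import subprocess

output = subprocess.run(
    ["/opt/vmware/vcloud-director/bin/cell-management-tool",
     "jms-certificates", "-status"],
    capture_output=True, text=True).stdout

# Match lines like 'To: Thu Jun 11 15:38:11 UTC 2020' from the validity block
for match in re.finditer(r"To: (\w{3} \w{3} \d+ [\d:]+ UTC \d{4})", output):
    expiry = datetime.datetime.strptime(match.group(1), "%a %b %d %H:%M:%S UTC %Y")
    days_left = (expiry - datetime.datetime.utcnow()).days
    print(f"JMS certificate expires {expiry:%Y-%m-%d} ({days_left} days left)")
    if days_left < 30:
        print("  -> run cell-management-tool jms-certificates --certgen")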

Update 21/05/2020: KB 78964 has been published on this topic. Also, if the CA signing certificate is expired, you will need to disable SSL altogether, restart the cell, regenerate the certificate and re-enable SSL.

VMware Cloud Director: Push Notifications in Tenant Context

In VMware Cloud Director 10.1 (VCD), organization users can subscribe to event and task push notifications, which might be useful if the tenant needs to keep track of activity in the cloud or connect a CMDB or any other custom solution, and does not want to permanently poll the audit log via the API.

Access to notifications was in the past only in the realm of service providers, who needed to deploy RabbitMQ and connect their Cloud Director cells to it. They can still do so, and in fact have to if they need blocking tasks or use VCD API extensions (for example Container Service Extension, App Launch Pad or vRealize Operations Tenant App).

The new functionality is enabled by the internal Artemis ActiveMQ bus that runs on every VCD cell. The MQTT client connects to the public HTTPS endpoint and uses a WebSocket connection to the bus. Authentication is provided via the JWT authentication token. The official documentation provides some detail here, but not enough to actually set this up.

Therefore I want to demonstrate here, with a very simple Python script, how to set up the connection and start utilizing this feature.

The Python 3 script leverages the pyvcloud module (22.0 or newer is required) and the Paho MQTT Python client. Both can be installed simply with pip:

pip install pyvcloud paho-mqtt

In the example, org admin credentials are used, which allows subscribing to all organization messages via the publish/<org UUID>/* subscription string. The script can also be used by a system administrator after changing the subscription string to publish/*/*.

#!/usr/bin/python3

import paho.mqtt.client as mqtt
import json
import datetime
import pyvcloud.vcd.client

vcdHost = 'vcloud.fojta.com'
vcdPort = 443
path = "/messaging/mqtt"    # MQTT over WebSocket endpoint of VCD
logFile = 'vcd_log.log'

# org admin credentials
user = 'acmeadmin'
password = 'VMware1!'
org = 'acme'

# Log in to VCD and retrieve the JWT access token used for the MQTT handshake
credentials = pyvcloud.vcd.client.BasicLoginCredentials(user, org, password)
vcdClient = pyvcloud.vcd.client.Client(vcdHost + ":" + str(vcdPort), None, True, logFile)
vcdClient.set_credentials(credentials)
accessToken = vcdClient.get_access_token()
headers = {"Authorization": "Bearer " + accessToken}

# API version 34.0 corresponds to VMware Cloud Director 10.1
if max(vcdClient.get_supported_versions_list()) < "34.0":
    exit('VMware Cloud Director 10.1 or newer is required')

# An org admin sees a single org; take its UUID for the subscription string
org = vcdClient.get_org_list()
orgId = (org[0].attrib).get('id').split('org:', 1)[1]

def on_message(client, userdata, message):
    # The event is JSON with the details in a nested, escaped 'payload' JSON string
    event = message.payload.decode('utf-8')
    event = event.replace('\\', '')
    eventPayload = event.split('payload":"', 1)[1]
    eventPayload = eventPayload[:-2]
    event_json = json.loads(eventPayload)
    print(datetime.datetime.now())
    print(event_json)

# Enable for logging
# def on_log(client, userdata, level, buf):
#     print("log: ", buf)

client = mqtt.Client(client_id="PythonMQTT", transport="websockets")
client.ws_set_options(path=path, headers=headers)
client.tls_set_context(context=None)    # default TLS context; expects a trusted VCD certificate
# client.tls_insecure_set(True)         # uncomment for self-signed certificates
client.on_message = on_message
# client.on_log = on_log  # enable for logging
client.connect(host=vcdHost, port=vcdPort, keepalive=60)
print('Connected')
client.subscribe("publish/" + orgId + "/*")
client.loop_forever()

Notice that the client needs to connect to the /messaging/mqtt path on the VCD endpoint and must provide a valid JWT authentication token in the header. That rules out MQTT WebSocket clients that do not support custom headers (e.g. browser JavaScript).

The actual event is in JSON format, with a nested payload JSON providing the details. The code example prints the time when the event was received and just the nested payload JSON. The script runs forever until interrupted with Ctrl+C.

Note: The actual RabbitMQ extensibility configuration in VCD and the Non-blocking AMQP Notifications setting in the UI have no impact on this functionality and can be disabled if not used by the service provider.