Edge Gateway Deployment Speed in vCloud Director 8.10

In vCloud Director 8.10 there is a massive improvement in the deployment (and configuration) speed of Edge Gateways. This is especially noticeable in use cases where a large number of routed vApps must be provisioned in as short a time as possible – for example nightly builds for testing, or labs for training purposes. It is also important for customer onboarding – the SLA for the time from the swipe of the credit card to logging in to a cloud VM.

Theory

How is the speed improvement achieved? It is actually not really a vCloud Director accomplishment. The deployment and configuration of Edge Gateways have always been done by vShield Manager or NSX Manager. However, there is a big difference in how vShield Manager and NSX Manager communicate with the Edge Gateway to push its configuration (IP addresses, NAT, firewall and other network services configurations).

As the Edge Gateway can be deployed to any network, which can be completely isolated from any external traffic, its configuration cannot be done over the network; instead, an out-of-band communication channel must be used. vShield Manager has always used the VIX API (Guest Operations API), which involves communication with vCenter Server, the hostd process on the ESXi host running the Edge Gateway VM and finally VMware Tools running inside the Edge Gateway VM (see this older post for more detail).
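For intuition, this is the same out-of-band guest operations path that PowerCLI exposes through Invoke-VMScript: the script reaches the guest through vCenter, the host and VMware Tools rather than over the guest network. A minimal illustrative sketch – not what vShield Manager literally runs, and the VM name and credentials are placeholders:

# Run a command inside a guest via vCenter -> hostd -> VMware Tools (no guest networking needed)
Connect-VIServer -Server vcenter.example.com

Invoke-VMScript -VM (Get-VM -Name 'some-vm') `
    -ScriptText 'uname -a' `
    -GuestUser 'root' -GuestPassword 'VMware1!' `
    -ScriptType Bash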

NSX Manager uses a different mechanism. As long as the ESXi host is properly prepared for NSX, a message bus connection is established between the NSX Manager and the vsfwd user space process on the ESXi host. The configuration is then pushed to the Edge Gateway VM over a VMCI channel.

Prerequisites

There are prerequisites that must be met in order to use the faster message bus communication instead of the VIX API. If any of them is not fulfilled, the communication mechanism falls back to the VIX API.

  • The host running the Edge Gateway must be prepared for NSX. So if your vCloud Director uses solely VLAN (or even VCDNI) backed network pools and you skipped the NSX preparation of the underlying clusters, message bus communication cannot be used as the host is missing the NSX VIBs and the vsfwd process (a quick check is sketched after this list).
  • The Edge Gateway must be version 6.x. It cannot be the legacy Edge version 5.5 deployed by older vCloud Director releases (8.0, 5.6, etc.). vCloud Director 8.10 deploys Edge Gateway version 6.x; however, Edges deployed before the upgrade to 8.10 must be redeployed in vCloud Director or upgraded in NSX (read this whitepaper for a script that does it all at once).
  • Obviously NSX Manager must be used (as opposed to vShield Manager) – vCloud Networking and Security is no longer supported with vCloud Director 8.10 anyway.
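A quick way to verify the first prerequisite from PowerCLI is to check whether the NSX VIBs are present on the hosts. A minimal sketch, assuming the NSX 6.2 VIB names esx-vsip (which provides vsfwd) and esx-vxlan; the vCenter and cluster names are placeholders:

# Check every host in the cluster for the NSX VIBs (esx-vsip carries the vsfwd process)
Connect-VIServer -Server vcenter.example.com

foreach ($vmhost in Get-Cluster 'Provider-Cluster' | Get-VMHost) {
    $esxcli  = Get-EsxCli -VMHost $vmhost -V2
    $nsxVibs = $esxcli.software.vib.list.Invoke() |
        Where-Object { $_.Name -match 'esx-vsip|esx-vxlan' }
    if ($nsxVibs) {
        Write-Host "$($vmhost.Name): NSX VIBs installed ($($nsxVibs.Name -join ', '))"
    } else {
        Write-Host "$($vmhost.Name): not prepared for NSX, Edges will fall back to VIX"
    }
}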

Performance Testing

I have done a quick proof of concept test to see what the relative improvement between the older and newer deployment mechanisms is.

I used 3 different combinations of the same environment (I was upgrading from one combination to the next).

  • vCloud Director 5.6.5 + vCloud Networking and Security 5.5.4
  • vCloud Director 8.0.1 + NSX 6.2.3 (uses legacy Edges)
  • vCloud Director 8.10 + NSX 6.2.3 (uses NSX Edges)

All 3 combinations used the same hardware and the same vSphere 5.5 environment with nested ESXi hosts, so the point is to look at the relative differences rather than the absolute deployment times.

With PowerCLI I measured the sequential deployment speed of 10 vApps with one isolated network and of 10 vApps with one routed network, with multiple runs to calculate the average per vApp. The first scenario measures the difference in provisioning speed of VXLAN logical switches, to see the impact of the controller based control plane mode. The second adds the provisioning of an Edge Gateway on top of the logical switch. The vApps were otherwise empty (no VMs).
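For illustration, here is a simplified PowerCLI sketch of such a measurement loop; the server, template and Org VDC names are placeholders and only the vApp deployment itself is timed:

# Time the sequential deployment of 10 vApps from a catalog template
Connect-CIServer -Server vcd.example.com -Org 'TestOrg'

$template = Get-CIVAppTemplate -Name 'routed-vapp-template'   # empty vApp with one routed network
$orgVdc   = Get-OrgVdc -Name 'TestOrg-VDC'
$times    = @()

1..10 | ForEach-Object {
    $start = Get-Date
    $vapp  = New-CIVApp -Name "perf-test-$_" -VAppTemplate $template -OrgVdc $orgVdc
    $times += ((Get-Date) - $start).TotalSeconds
    Remove-CIVApp -VApp $vapp -Confirm:$false    # cleanup, not included in the timing
}

'Average deployment time: {0:N1} s' -f ($times | Measure-Object -Average).Average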

Note: if you want to do a similar test in your environment, I captured the two empty vApps with only the routed or isolated network to a catalog with the vCloud API (PowerCLI), as it cannot be done from the vCloud UI.

Here are the average deployment times of each vApp.

vCloud Director 5.6.5 + vCloud Networking and Security 5.5.4

  • Isolated 5-5.5 seconds
  • Routed 2:17 min

vCloud Director 8.0.1 + NSX 6.2.3

  • Isolated approx. 6.8 seconds (Multicast), 7.5 seconds (Unicast)
  • Routed 2:20 min

vCloud Director 8.10 + NSX 6.2.3

  • Isolated 7.7 s (Multicast), 8.1 s (Unicast)
  • Routed 1:35 min

While the speed of logical switch provisioning goes down a little with NSX and with the Unicast control plane mode, the Edge Gateway deployment gets a massive boost with NSX and vCloud Director 8.10. While the OVF deployment of the NSX Edge takes a little longer (30 s instead of 20 s), the faster configuration more than makes up for it (down from well over a minute to about 30 s).

Just for comparison, here are the tasks performed during the deployment of each routed vApp, as reported in the vSphere Client Recent Tasks window.

vCloud Director 5.6.5 + vCloud Networking and Security

vCloud Director 8.10 + NSX 6.2.3

vCloud Networking and Security Upgrade to NSX in vCloud Director Environments – Update

In April I wrote a whitepaper describing all the considerations that need to be taken into account when upgrading vCloud Networking and Security to NSX in vCloud Director environments. I have updated the whitepaper to include additional information related to new releases:

  • updates related to vCloud Director 8.10 release
  • update related to VMware NSX 6.2.3 release
  • updates related to vCenter Chargeback Manager 2.7.1 release
  • NSX Edge Gateway upgrade script example
  • extended upgrade scenario to include vCloud Director 8.10

The whitepaper will be posted later this month on the vCloud Architecture Toolkit for Service Providers website; until then it can be downloaded from the link below.

Edit 7/17/2016: The new vCAT website is up: http://www.vmware.com/solutions/cloud-computing/vcat-sp.html

VMware vCloud Networking and Security to VMware NSX Upgrade v2.1.pdf … link

vCenter Chargeback Manager Notes

This blog post summarizes up-to-date (July 2016) information related to vCenter Chargeback Manager.

Chargeback Manager (CBM) is available only for service providers. Although its end of support was announced in the past, it has been extended until the end of 2017. The reason for the extension is to give service providers more time to migrate to its successor – vRealize Business for Cloud Advanced (vRB).

Both CBM and vRB were removed from all but the standard vCloud Service Provider Bundles (with an exception for current users) and can be licensed separately. The reason was to give partners more choice in which metering tool to use.

  • The latest CBM version is 2.7.1 and is downloadable from here. Note that this version has the following support limitations:
    • vSphere 6 is not supported – this is due to new storage APIs introduced in vSphere 6
    • Both vCloud Networking and Security and NSX are supported (in the vCloud Director context)
    • vCloud Director 8.10 is not supported (see the next bullet point)
  • A CBM patch for vCloud Director 8.10 was released in the following KB: https://kb.vmware.com/kb/2146041. It replaces one JAR file of the vCloud Director data collector so that it properly identifies the new vCloud API versions. It is also backward compatible with older vCloud Director versions.
  • vSphere 6 support is expected in the next release of CBM later this year.

In case you are upgrading from CBM 2.7.0 to 2.7.1, here are some notes and considerations:

  • CBM 2.7.0 and older were 32-bit applications. CBM 2.7.1 is a 64-bit application.
  • There are two ways to upgrade CBM. You can either do an in-place upgrade, or you can uninstall CBM and install fresh binaries while reusing the same Chargeback database
    • In-place upgrade:
      • Make sure you shut down all services before running the installer (vCenter-CB.exe).
      • Always run the installer with the Run as administrator option.
      • Even though CBM 2.7.1 is a 64-bit application, it will still be installed in the default 32-bit Program Files (x86) folder.
    • Fresh install:
      • Uninstall CBM by running Uninstall.bat located in the …\VMware vCenter Chargeback\Uninstall_VMware vCenter Chargeback folder. If non-embedded collectors were used, uninstall them in a similar way as well.
      • Reboot and install the new binaries while pointing to the same CBM database. Remember to use the Run as administrator option when executing the installer (vCenter-CB.exe).
      • The installer does not remove the old collectors from the database. Wait a few minutes until they are identified as failed (red X in CBM System Health, Data Collectors UI), note their IDs and then manually remove them from the CBM database – CB_DC_STATUS table. You can locate them by their ID in the DC_ID column (a cleanup sketch follows this list).
      • CBM 2.7.1 will be installed by default in the 64-bit Program Files folder.
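For that last manual step, here is a minimal cleanup sketch, assuming the CBM database runs on Microsoft SQL Server and the SqlServer PowerShell module (Invoke-Sqlcmd) is available; the server, database name and collector ID are placeholders, so review the row before deleting it:

# Remove a failed (orphaned) data collector row from the CBM database
$dcId = 'REPLACE-WITH-FAILED-COLLECTOR-ID'   # ID shown in CBM System Health > Data Collectors

# Review the row first
Invoke-Sqlcmd -ServerInstance 'cbm-sql.example.com' -Database 'CBM_DB' `
    -Query "SELECT * FROM CB_DC_STATUS WHERE DC_ID = '$dcId'"

# Then delete it
Invoke-Sqlcmd -ServerInstance 'cbm-sql.example.com' -Database 'CBM_DB' `
    -Query "DELETE FROM CB_DC_STATUS WHERE DC_ID = '$dcId'"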

vCloud Director Fundamentals E-learning Course

VMware has released a free vCloud Director Fundamentals e-learning course. It is based on the latest vCloud Director 8.10 and goes into quite some depth over about 3-4 hours.

Course Outline:

  1. Cloud Computing and VMware vCloud Director – Overview
  2. VMware vCloud Director Architecture and Components
  3. VMware vCloud Director Installation and Configuration
  4. VMware vCloud Director Administration
  5. Network Administration in VMware vCloud Director
  6. VMware vCloud Director End-User Tasks

Register for the e-learning course here.

Limit Maximum Size of Disk in vCloud Director

Although cloud services provide access to abstracted, seemingly infinite physical resources, the truth is that the physical infrastructure is not limitless. Pooling and distributed resource scheduling of compute, storage and network help, but in the end there is always a physical host, LUN or network uplink that constrains the granularity of scaling.

When it comes to storage, it is the datastore size that limits the maximum size of virtual disk a cloud consumer can attach to his/her VM. Thin provisioning, fast provisioning and deduplication (NFS/VSAN) can be used to fit in more data, and Storage DRS can shuffle the data around when a particular datastore is filling up. Still, the service provider should not allow creation of virtual disks of arbitrary size (the vSphere maximum is 62 TB) in order to avoid a datastore out-of-space condition. For example, letting customers provision 4 TB thin disks on 3 TB LUNs is just asking for trouble.

Before vCloud Director 8.10, service providers were leveraging blocking tasks with custom orchestration to check whether a provisioned VM is within provider-specified limits (RAM size, vDisk size, max vCPUs). There is a reference implementation published here: CPU and Memory Limit enforcement for vCloud Director.

vCloud Director 8.10 brings a hidden configuration option with which the service provider can globally set the maximum allowed size of a virtual disk.

The option can be set with the cell-management-tool command on a vCloud cell with the following syntax:

$VCLOUD_HOME/bin/cell-management-tool manage-config -n vmlimits.disk.capacity.maxMb -v 1000000

which would set the maximum disk size to 1,000,000 MB, i.e. 1 TB.

Note: the command is run on one vCloud cell and its effect is immediate (no need to restart anything).
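If you prefer to derive the value rather than hard-code it, here is a small helper sketch; it assumes the option expects decimal megabytes, consistent with the 1 TB = 1,000,000 MB example above, and only prints the command to run on the cell:

# Build the manage-config command for a given disk limit in TB (assumes decimal MB)
$limitTB = 2
$limitMB = $limitTB * 1000000
Write-Host ('$VCLOUD_HOME/bin/cell-management-tool manage-config -n vmlimits.disk.capacity.maxMb -v {0}' -f $limitMB)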

If the tenant tries to provision a larger vDisk, the following error is displayed:

Disk Limit

Note that the limit is not enforced for system administrators and existing disks are not affected.

What the limit should be is out of scope for this post, as there are many considerations that should be taken into account:

  • datastore size
  • can the datastore grow?
  • thin provisioning
  • fast provisioning
  • tenant snapshots
  • provider snapshots (backup software generated)
  • yellow and red datastore thresholds
  • Storage DRS
  • deduplication on the array