vCloud Director with NSX: Edge Cluster

I see more and more new and existing vCloud Director deployments leveraging NSX as the networking component instead of the legacy vShield / vCloud Networking and Security (vCNS). The main reasons are the announced end-of-life of vCNS and the additional features that NSX brings to the table (although most of them are not yet tenant consumable in vCloud Director – as of version 5.6.4).

When deploying NSX with vCloud Director, what new considerations should be included when designing the architecture? In this post I want to concentrate on the concept of the Edge Cluster.

What is an Edge Cluster?

VMware has published a very good NSX-V Network Virtualization Design Guide. It is a very detailed document describing all NSX concepts as well as how they should be properly architected. The concept of the Edge Cluster is discussed in quite some detail as well, so let me just summarize it here.

NSX overlay networks allow the creation of logical networks over an existing IP network fabric. This enables a highly scalable network design using a leaf/spine architecture, where the boundary between L2 and L3 networks is at the rack level (the leafs) and all communication between racks is L3 only, going through a set of spine routers.

NSX spans logical networks across all racks; however, in the end we need to connect virtual workloads from the logical networks to the outside physical world (WAN, Internet, colocated physical servers, etc.). These networks are represented by a set of VLAN networks, and because we are not stretching L2 across the racks, we cannot trunk them everywhere – so they are connected only to one rack (or two for redundancy), which thus becomes the Edge Cluster.

So the purpose of the Edge Cluster is to host virtual routers – Edge Service Gateways – that provide the connectivity between the physical world (VLANs) and the virtual world (VXLAN logical switches). Note that this does not mean that every Edge Gateway needs to be deployed there. If an Edge Gateway provides connectivity between two VXLAN logical switches, it can be deployed anywhere, as logical switches span all clusters.

vCloud Director Edges

vCloud Director deploys Edge VMs in order to provide Organization VDC or vApp connectivity. The actual deployment is done through vCNS or NSX Manager, but it is vCloud Director that makes the decision about the placement and configuration of the Edges. A vCloud Director Edge Gateway provides connectivity between one or more vCloud Director External Networks and one or more Organization VDC Networks. It is deployed inside the Provider VDC in a special System VDC Resource Pool, on a datastore belonging to the Org VDC default storage policy. The vCloud Director placement engine selects the most appropriate cluster where the Edge Gateway VM will be deployed – based on which clusters belong to the Provider VDC, their available capacity and, most importantly, their access to the right storage and external networks.
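
To see where these placement decisions end up, the deployed Edge Gateways can be listed through the vCloud Director query service. The following is a minimal Python sketch, assuming a VCD 5.6 API endpoint at vcd.example.com and system administrator credentials (both hypothetical):

    # Minimal sketch: list Edge Gateways via the vCloud Director query service
    # to review where vCloud Director has placed them.
    # Hypothetical host and credentials; API version 5.6 assumed.
    import requests

    VCD_HOST = "vcd.example.com"
    ACCEPT = "application/*+xml;version=5.6"

    # Log in as system administrator to obtain the x-vcloud-authorization token
    login = requests.post(
        "https://{}/api/sessions".format(VCD_HOST),
        auth=("administrator@System", "password"),
        headers={"Accept": ACCEPT},
        verify=False)                      # lab only
    login.raise_for_status()
    token = login.headers["x-vcloud-authorization"]

    # The edgeGateway query records reference the owning Org VDC and the
    # external networks that drove the placement decision
    result = requests.get(
        "https://{}/api/query?type=edgeGateway&pageSize=128".format(VCD_HOST),
        headers={"Accept": ACCEPT, "x-vcloud-authorization": token},
        verify=False)
    result.raise_for_status()
    print(result.text)                     # XML QueryResultRecords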

vApp Edges provide connectivity between an Organization VDC network and a vApp network. They always have only one external and one internal interface. They are also deployed by vCloud Director to the Provider VDC System VDC Resource Pool and exist only when the vApp is in deployed mode (Powered On).

Transport Zone

A Transport Zone defines the scope of a VXLAN logical switch. It consists of one or more vSphere clusters. A Transport Zone can be created manually; however, vCloud Director automatically creates one Transport Zone for each Provider VDC, matching the clusters that are added to the Provider VDC, and associates it with a VXLAN Network Pool. When an Organization VDC is created by the vCloud System Administrator, a Network Pool must be assigned – all Organization VDC and vApp networks will then have its scope.
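
The transport zones that vCloud Director created can be checked against the Provider VDC cluster membership through the NSX Manager API. A minimal sketch, assuming an NSX-V Manager reachable at nsxmanager.example.com with admin credentials (both hypothetical):

    # Minimal sketch: list transport zones (network scopes) on NSX Manager and
    # verify they match the clusters of the corresponding Provider VDC.
    # Hypothetical NSX Manager address and credentials; NSX-V API, XML output.
    import requests

    NSX_MANAGER = "nsxmanager.example.com"

    resp = requests.get(
        "https://{}/api/2.0/vdn/scopes".format(NSX_MANAGER),
        auth=("admin", "password"),
        verify=False)                      # lab only
    resp.raise_for_status()
    print(resp.text)   # vdnScopes XML: each scope lists its member clusters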

Design Option I – Traditional

In the traditional Access/Aggregation/Core network architecture, the L2/L3 boundary is at the aggregation switches. This means all racks connected to the same set of aggregation switches have access to the same VLANs, and thus there is no need for an Edge Cluster, as an Edge connecting VLAN-based with VXLAN-based networks can run in any given rack. In vCloud Director this means that as long as the external networks (VLANs) are trunked to the aggregation switches, we do not need to worry about Edge placement. The set of racks (clusters) connected to the same aggregation domain usually maps to a vCloud Director Provider VDC. The transport zone is then identical to the aggregation domain.

Traditional Access/Aggregation/Core architecture

The drawback of such a design is that Provider VDCs cannot span multiple aggregation domains.

Design Option II – Combined Edge/Compute Cluster

In case a spine/leaf network architecture is used, the VLANs backing vCloud Director external networks are trunked to only one cluster. In this design option we will call it the Edge/Compute Cluster. As explained above, the vCloud Director placement engine deploys Edge VMs to a cluster based on VLAN connectivity – therefore it will automatically place all Edge Gateways into the Edge/Compute cluster, as this is the only cluster where the external connectivity (VLANs) exists. vCloud Director will, however, also opportunistically place regular tenant VMs into this cluster (hence its name, Edge/Compute).

Spine/leaf with Edge/Compute Cluster

This design option has all the scale advantages of the spine/leaf architecture; the drawback, however, is the possibility of tenant workloads taking up the limited space of the Edge/Compute cluster. There are two potential options to remediate this:

  1. vCloud Director Edge Gateways are always deployed by the vCloud System Administrator. He/she could make sure that, prior to an Edge Gateway deployment, there is enough capacity in the Edge/Compute cluster. If not, some tenant workloads can be migrated away to another cluster – this must be done from within vCloud Director (Resource Pool / Migrate to option). Live migration is, however, possible only if the Edge/Compute Cluster shares the same VXLAN-prepared vSphere Distributed Switch (vDS) with the other clusters, and this requires at least four network uplinks on the Edge/Compute Cluster hosts (two uplinks for the vDS with external VLANs and two uplinks for the VXLAN vDS).
  2. Artificially limit the size of the Edge/Compute Cluster so the placement engine does not choose it for regular tenant workloads. This can be done by leveraging a Resource Pool which is created manually in the Edge/Compute cluster and attached to the Provider VDC instead of the whole cluster. An artificial limit is then set by the System Administrator and increased only when a new Edge Gateway needs to be deployed (see the sketch below).

Both options unfortunately introduce significant operational overhead.
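
To give an idea of what the second option looks like in practice, here is a minimal pyVmomi sketch that raises the limits on the manually created resource pool before a new Edge Gateway is deployed. The vCenter address, credentials, inventory path and limit values are all hypothetical assumptions:

    # Minimal sketch (assumptions marked): raise the artificial CPU/memory limit
    # on the manually created Edge resource pool before deploying a new Edge.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()           # lab only
    si = SmartConnect(host="vcenter.example.com",    # hypothetical vCenter
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)

    # Hypothetical inventory path to the resource pool attached to the Provider VDC
    rp = si.content.searchIndex.FindByInventoryPath(
        "Datacenter/host/Edge-Compute-Cluster/Resources/Edge-RP")

    def allocation(limit):
        # A complete allocation spec is required when reconfiguring a resource pool
        alloc = vim.ResourceAllocationInfo()
        alloc.reservation = 0
        alloc.expandableReservation = True
        alloc.limit = limit
        alloc.shares = vim.SharesInfo(level=vim.SharesInfo.Level.normal, shares=4000)
        return alloc

    spec = vim.ResourceConfigSpec()
    spec.cpuAllocation = allocation(10000)           # MHz, example value
    spec.memoryAllocation = allocation(16384)        # MB, example value

    rp.UpdateConfig(config=spec)                     # keeps the existing pool name
    Disconnect(si)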

Design Option IIb – Combined Edge/Compute Cluster with Non-elastic VDC

While elastic Org VDC types (such as Pay-As-You-Go or Allocation Pool) can span multiple clusters, what would be the impact of a non-elastic VDC, such as Reservation Pool, in this design option?

In a non-elastic Org VDC all tenant workloads are deployed into the primary Provider VDC resource pool. However, Edge VMs can be deployed into secondary resource pools. This means that as long as the Edge/Compute cluster is added as a secondary Resource Pool into the Provider VDC, this design option can still be used.

Spine/leaf with Edge/Compute Cluster and non-elastic VDC

Design Option III – Dedicated Edge Cluster

This design option extends the previous one, but in this case we will have a dedicated Edge Cluster which is not managed by vCloud Director at all. We will also introduce a new Edge Gateway type – Provider Edges. These are deployed manually by the service provider, completely outside of vCloud Director, into the Edge Cluster. Their external uplinks are connected to external VLAN-based networks and their internal interfaces are connected to transit VXLAN Logical Switches spanning all compute clusters and the Edge Cluster (a manually created transport zone containing all clusters). The transit network(s) are then consumed by vCloud Director as External Networks – note that a little workaround is needed to do so – read here.

The Provider Edges can provide all NSX functionality (dynamic routing protocols on the external uplinks, L2 bridging, L2 VPN, etc.). They can scale as additional vCloud Director External Networks are added (the current maximum in VCD 5.6 is 750 External Networks). The Edges deployed by vCloud Director then go into the compute clusters, as all their interfaces connect to VXLAN logical switches spanning the entire Provider VDC.
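
A quick way to check which Edges exist and where they actually landed is the NSX Manager API, which lists both the manually deployed Provider Edges and the Edges created by vCloud Director. A minimal sketch with hypothetical NSX Manager address and credentials; the exact XML element names may differ slightly between NSX-V versions:

    # Minimal sketch: list all NSX Edges (Provider Edges and vCloud Director
    # deployed ones) and print their names for a placement sanity check.
    # Hypothetical NSX Manager address and credentials.
    import requests
    import xml.etree.ElementTree as ET

    NSX_MANAGER = "nsxmanager.example.com"

    resp = requests.get(
        "https://{}/api/4.0/edges".format(NSX_MANAGER),
        auth=("admin", "password"),
        verify=False)                      # lab only
    resp.raise_for_status()

    # Element names per the NSX-V API; adjust if your version differs
    for edge in ET.fromstring(resp.content).iter("edgeSummary"):
        print(edge.findtext("objectId"), edge.findtext("name"))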

Spine/leaf with Dedicated Edge Cluster

Read vCloud Director with NSX: Edge Cluster (Part 2) here.

vRealize Automation with Multiple Cloud Endpoints

One of my customers had deployed a true hybrid vRealize Automation environment with multiple cloud endpoints: vCloud Air, internal vCloud Director and AWS. I was called in to troubleshoot a strange issue where sometimes the deployment of a cloud multimachine blueprint (vApp) would work, but most often it would fail with the following message:

VCloud Clone VM failed for machine: XXX100 [Workflow Instance Id=19026]
System.InvalidOperationException: Error occurred while getting vApp template with ID: urn:vcloud:vapptemplate:a21de50d-8b5e-41a6-81d1-acfd8ab8364b

INNER EXCEPTION: com.vmware.vcloud.sdk.utility.VCloudException: [ 8ae6fbca-e0d2-43e7-bc94-5bc9d776bf8d ] No access to entity “com.vmware.vcloud.entity.vapptemplate:a21de50d-8b5e-41a6-81d1-acfd8ab8364b”

The endpoint was properly configured and the template existed, so what could be wrong? Why were we denied access to the template?

It turns out that, by design, vRealize Automation does not match a template to a particular endpoint. It identifies it just by name. So in our case it would sometimes try to deploy the blueprint to the wrong endpoint, where a template of that particular name did not exist.

The fix is simple:

  • Define reservation policies which identify each endpoint.
  • Assign them to the proper reservations.
  • Assign the reservation policies to the cloud vApp blueprints. This way there will never be confusion about which template to provision to which endpoint.

vCloud Connector and Offline Data Transfer

Offline Data Transfer (ODT) is a feature of vCloud Connector that allows migration of VMs from a customer's own datacenter to vCloud Air with a NAS appliance which is shipped via regular mail. The point is to avoid slow wide area network connectivity and leverage the awesome bandwidth (but high latency) of sneakernet.

Have you ever wondered why it is supported only with vCloud Air and not with any other public or private cloud based on vCloud Director? Well, I am going to lay down the whole process here in this blog post so nothing is stopping anyone from testing this feature on their own.

Let me first paste a picture from the manual which describes at a high level how the process works:

Offline Data Transfer Process

vCloud Connector (vCC) is leveraged to manage the whole process. The customer (on the left) deploys his own vCloud Connector Server and Node, which he attaches to his on-premises infrastructure (vSphere based). He then requests the ODT service. The provider deploys an ODT node in the public cloud (on the right) and also its own vCC Server to manage it. A regular NAS appliance is prepared – its only purpose is to provide storage capacity which is fast and reliable enough, accessible via the NFS protocol, and which can be easily packaged and shipped.

The customer mounts the NAS appliance to his vCC Node (to a directory, via an NFS mount). Both the ODT and vCC Nodes are registered in his vCC Server. Then, via the traditional vSphere Client and vCC plugin, only the local vSphere environment is managed (here it differs from the traditional vCC transfer).

vSphere in vCC

The actual export is done by selecting the objects to export (templates, vApps or VMs) and clicking the small Offline Data Transfer icon. The mount path is entered, together with the links and credentials for the target cloud and the ODT node. There is also an option to select whether a particular VM should be deployed and connected to a network. These steps are all described in the manual here.

But what about the provider side of the whole process?

ODT Node

The ODT Node is actually a regular vCloud Connector Node tweaked by running a script which can be found in the /opt/vmware/hcagent/scripts folder on the Node VM itself.

The ODT Node needs to have network access to the vCenter Server (and ESXi hosts) of the target vCloud VDC environment.


The actual import is done via the provider's vCloud Connector Server, which is again a regular vCloud Connector Server with no tweaks this time. The ODT Node is registered there, which enables the import menu in the vCC plugin GUI in the vSphere Client. The shipped NAS appliance must be mounted to the ODT Node, and the ODT URL and mount path are entered in the Import Wizard. The actual physical connection of the NAS appliance can be done using a dedicated VLAN with a point-to-point connection to the second ODT network interface.

Next we need to pick the target vCenter Server and credentials for it. The ODT Node imports the offline VMs, which are stored as encrypted OVFs on the NAS appliance, into the target vCenter Server. To do that it needs a big enough datastore and a dummy network in order to connect the imported VMs to it temporarily. Once that is done, the VMs are imported by vCloud Director into the target VDCs, catalogs and networks. The provider therefore needs to have a big enough datastore and create a dummy standard switch port group named 'VM Network' on every host. This network does not need to have external access.
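
Creating that dummy port group on every host can be scripted. Below is a minimal pyVmomi sketch, assuming a standard switch named vSwitch0 already exists on each host; the vCenter address, credentials and vSwitch name are hypothetical assumptions:

    # Minimal sketch: create the dummy 'VM Network' standard port group on every
    # host so the ODT import has a (non-routed) network to attach VMs to.
    # Hypothetical vCenter address, credentials and vSwitch name.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)

    content = si.content
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    for host in view.view:
        existing = [pg.spec.name for pg in host.config.network.portgroup]
        if "VM Network" in existing:
            continue                                # port group already present
        spec = vim.host.PortGroup.Specification(
            name="VM Network",
            vlanId=0,                               # no external access needed
            vswitchName="vSwitch0",                 # assumption: vSwitch0 exists
            policy=vim.host.NetworkPolicy())
        host.configManager.networkSystem.AddPortGroup(portgrp=spec)
        print("Created 'VM Network' on", host.name)

    view.Destroy()
    Disconnect(si)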

As you can see, contrary to the regular internet vCloud Connector transfer, where the VM is transferred from the original environment via the on-prem node to the public node and then to vCloud Director (through its API and transfer storage – see here for more detail), the transfer does not go through the vCloud Director cells and their transfer storage at all. This is possible because the final step of the process is handled by the provider himself (he has vCenter Server access), and it also makes the transfer faster (potentially one less step). On the other hand, this brings some security and operational process challenges (physical access to the management network, vCenter credentials) which must be properly addressed.