What’s New in VMware Cloud Director 10.1

It is time for another What’s New in (v)Cloud Director blog post. If you are not up to date, you can find the older articles for versions 10, 9.7, 9.5 and 9.1 here.

Let us start with the “important” announcement – a name change. vCloud Director has been re-branded to VMware Cloud Director. Fortunately we keep the same (unofficial) acronym – VCD. The current version is 10.1, which might look like a small increase from the previous 10.0, but that is just marketing numbering, so do not put too much emphasis on it; assess for yourself whether it is a big release or not.


VMware has added support for NSX-T 3.0, but vSphere 7 support is missing. It is expected to come shortly in a major patch release (correction: NSX-T 3.0 support did not make it into this release either). You can upgrade your management clusters and dedicated vCenter Servers, just not those that are backing Provider VDCs.


As previously announced, there is no more Adobe Flex UI (it cannot be enabled even with a secret switch). This should not be an issue, however, as the HTML 5 UI not only has 99.9% feature parity but is in fact significantly better than the Flex UI ever was. There are new features such as VM sizing and VM placement groups, advanced filtering, multiselect actions, badges, quick access to the VM console or network cards, tasks and events in vApp details, Org VDC ACLs, live import from vCenter Server, …

The UI team is no longer in feature-parity mode; they are in make-it-better mode and doing a great job, as can be seen from the screenshots below.

Platform Security

Certificate validation is now required for VC/NSX endpoints and will be required for LDAP in the next major release as well. It means your endpoints either have to have publicly trusted certificates, or you have to upload their signing certificate, or you have to approve their certificate on first use (when you add or edit such an endpoint). To ease the transition from older vCloud Director releases, you can run the cell-management-tool trust-infra-certs command after the upgrade, which will automatically retrieve and trust the certificates of all connected VC/NSX endpoints. If you forget to run this command, Cloud Director will not be able to talk to the VC/NSX endpoints!
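The trust-on-first-use approval boils down to showing the operator the endpoint certificate’s fingerprint so it can be compared and accepted. A minimal Python sketch of that step – the helper name and fingerprint format are my own for illustration, not a Cloud Director API:

```python
import hashlib
import ssl

def cert_fingerprint(pem_cert: str) -> str:
    """SHA-256 fingerprint of a PEM certificate, as colon-separated hex."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)  # strip PEM armor, base64-decode
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# In practice you would first retrieve the endpoint's certificate, e.g.:
#   pem = ssl.get_server_certificate(("vcenter.example.com", 443))
#   print(cert_fingerprint(pem))  # compare with what vCenter itself reports
```

Cloud Director does this for you when you approve a certificate in the UI or run trust-infra-certs; the sketch only illustrates the mechanics.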


While you can still use the Linux installer of Cloud Director with an external database, the appliance deployment factor has again been improved, and there are fewer and fewer reasons not to use it, especially for greenfield deployments.

The appliance API (on port 5480) has been enhanced to monitor the status of services, to see which node is running the active node of the embedded database (useful for load balancing access to the database for external tools), to monitor filesystem storage, and to trigger a database node switchover (planned failover) or promotion (after a failure).

The embedded appliance database for the first time has automated failover functionality. It is disabled by default, but you can enable it with the API. The appliance UI has also been improved and provides some of the API functionality.
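To illustrate the load-balancing use case: an external tool can poll the appliance API and direct database traffic to whichever node holds the primary embedded database. The JSON shape below is invented for the sketch – check the actual response of the port 5480 appliance API for your version before relying on any field names:

```python
import json

# Hypothetical payload -- field names are illustrative only,
# not the exact appliance API schema.
SAMPLE_NODES = """\
{"nodes": [
  {"name": "vcd-cell-01", "db_status": "primary"},
  {"name": "vcd-cell-02", "db_status": "standby"},
  {"name": "vcd-cell-03", "db_status": "standby"}
]}
"""

def primary_db_node(payload: str) -> str:
    """Return the name of the node running the primary embedded database."""
    nodes = json.loads(payload)["nodes"]
    return next(n["name"] for n in nodes if n["db_status"] == "primary")

print(primary_db_node(SAMPLE_NODES))  # vcd-cell-01
```

A load balancer health check would call this per node and keep only the primary in the database pool.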

Messaging Bus

As you might know, Cloud Director has an embedded messaging bus for inter-cell communication. In the past it used ActiveMQ (ports 61611 and 61616). If the service provider wanted to use blocking tasks, notifications or vCloud API (or is it now VCloud API?!) extensions, an external RabbitMQ messaging bus had to be deployed. In the current release, ActiveMQ has been replaced with Artemis, which is also available externally for blocking tasks and notifications, so RabbitMQ is no longer needed for these two use cases (update: blocking tasks did not make it into this release). Additionally, it can also be used by tenants.

Artemis uses the MQTT communication protocol, and the connection to it can be made via a WebSocket session, which is authenticated with the regular Cloud Director authentication token mechanism.
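A small sketch of what a client needs for that WebSocket session: the broker URL and the regular bearer token in the Authorization header. The /messaging/mqtt path follows the Cloud Director extensibility documentation, but verify it for your version; the token value is a placeholder:

```python
def mqtt_ws_endpoint(host: str, token: str) -> tuple[str, dict]:
    """Build the WebSocket URL and auth header for the embedded MQTT broker."""
    url = f"wss://{host}/messaging/mqtt"
    # Regular Cloud Director bearer token, e.g. from POST /cloudapi/1.0.0/sessions
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = mqtt_ws_endpoint("vcd.example.com", "<token>")
# Pass url and headers to your MQTT-over-WebSocket client of choice.
```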

Note that external RabbitMQ is still supported and still needed for the vCloud API extensibility use case.

I have written a full article on this feature here.


NSX-T integration has been enhanced with routed Org VDC networks (the previous release supported only NAT-routed) with the BGP routing protocol. This feature currently requires a dedicated Tier-0 Gateway for the tenant.
IPSec VPN is now in the UI (pre-shared key authentication and policy-based mode are supported). IP Sets and Security Groups have been split, with support for network objects that dynamically refer to all connected VMs.

The service provider can configure multiple NSX-T Edge Clusters and select which one will be used for a particular Tier-1 (Org VDC) Gateway. This enables separating Tier-0 and Tier-1 Gateways onto different Edge Clusters.

The NSX-V side of networking also has one new feature – you can now use Cross-VDC networking within the same vCenter Server. This, for example, enables multi-egress networks within a single Org VDC for the stretched-cluster use case.

Edit (14/4/2020): Each NSX-V backed Org VDC Edge GW can now be placed on a specific cluster via the API, which might be useful for the above use case. Previously you had to have the same Edge Cluster config for the whole Org VDC.

Edit (20/1/2022): The default (U)DLR transit address can be overridden with the following cell-management-tool command:

/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n gateway.udlr.default.subnet.cidr -v ""


vSphere VM Encryption is now supported within Cloud Director. The encryption happens in the hypervisor, which means the data is encrypted both at rest and in flight. The encryption is set up via vCenter Server storage policies by enabling host-based rules. A compatible external key management server must be deployed and connected to vCenter Server. It means the feature is fully in the realm of the service provider, and key management is not exposed to tenants.

Other Features

  • Proxying of dedicated vCenter Servers (the so-called Centralized Point of Management – CPoM feature) was improved with extra stats, more proxies and a browser extension to simplify usage
  • Support for VM (UI) and vApp (API only) live migration between Provider VDCs
  • Due to UI upgrade to Clarity 2+ custom themes will have to be recompiled
  • The provider can enable promiscuous mode and forged transmits on VXLAN backed Org VDC network (API only)
  • Blocking task support for OpenAPI tasks
  • Cloud Director 10.1 is the first release that enables automated NSX-V to NSX-T migration. More on that in a later blog post.

14 thoughts on “What’s New in VMware Cloud Director 10.1”

  1. Hi Tom, thanks as usual for the very informative and detailed post!
    Since you mention that now VM encryption policies are supported, and as your screenshot above suggests, does this mean this version of VCD will also include full support for native SIOC-based policies, perhaps removing the need of the convoluted additional setup currently required on earlier vCloud versions for IOPS management?
    We are using this feature at the moment, and we had to remove datastore clusters, apply vSphere custom values to datastores, make changes to Org-VDC storage policies via API (to enable the IopsSettings section), etc. Is all of this finally going away, and can we just rely on SIOC-based policies to handle VM/vdisk IOPS assignment?

    One more thing – what are those tiny “badges” icons you can see on VM cards in the first picture? Is that something we can somehow use to assign custom status to VMs?


    1. IOPS host-based policies should also work. Badges are predefined color tags that can be assigned to a VM/vApp. The UI team is assessing the popularity of the feature for further development.

  2. Hi Tomas,

    You say that “Due to UI upgrade to Clarity 2+ custom themes will have to be recompiled” – is there a new tool to compile the theme for 10.1?


  3. Hi Tomas, when can we expect OpenAPI to be more complete? In swagger I could not find anything about working with vApps and VMs. Best Regards, Matt

  4. Hi Tomas, I’m trying to install tcpdump on a 10.1.2 appliance following vmware github Photon OS Linux Troubleshooting Guide and get –
    # tdnf install tcpdump
    Found 1 problem(s) while resolving
    1. installed package photon_vasecurity- obsoletes tcpdump provided by tcpdump-4.9.3-1.ph2.x86_64
    Error(1301) : Hawkey general runtime error

    Is it possible to get around this problem and install tcpdump? Regards, Luke

  5. Hi Thomas,

    thank you for this blog post. A few others were a big help to me. Since we are migrating between vCenter clusters at the moment, I was hoping that you could explain this feature in “Other Features”: Support for VM (UI) and vApp (API only) live migration between Provider VDCs

    We updated to 10.2 in hopes that the missing UI elements would finally be added, seeing as VMware disabled the Flex UI completely. I can redeploy edge GWs on the new resource pool, and I can create new VMs on this resource pool, but how do I live migrate between Provider VDCs?

    Thank you very much for an answer and have a nice evening,

  6. Hello Tomas,

    We are trying to perform the step ‘Support for VM (UI) and vApp (API only) live migration between Provider VDCs’, but the migration of live VMs from one PVDC to another PVDC is failing in the UI. I have opened a PR, but at the moment they are stating that, to their knowledge, this feature has not yet been rolled out. Do you have any prerequisites for this to happen successfully? Customer is on VCD 10.1.2. Any information would be fantastic. SR: 20150724808

    Kind regards,
    Paul Cahalane

    1. Sorry, VM vMotion in the UI is possible only within the same Org VDC (between different vApps). VM vMotion across Org VDCs is possible with the recompose API. I will correct the text in the article.
