The reference table below summarizes how the different vCloud Director Org VDC allocation types consume vSphere resources; in other words, how the choice of allocation model for a particular Org VDC and its parameters (allocation, guarantees, quota, vCPU speed) translate to resource pool and VM resource settings (CPU/RAM): reservations and limits.
valid for vCloud Director 9.5 and older (down to 5.5)
RP … resource pool
Elastic … Org VDC can be divided across multiple RPs across clusters
Although there are currently only three Org VDC allocation types, the Allocation Pool can be elastic or non-elastic based on an instance-wide vCloud Director setting in General Settings.
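To make the translation concrete, here is a minimal sketch of the Allocation Pool math as I understand it (resource pool limit = allocation, reservation = allocation x guarantee); the 10000 MHz allocation and 20 % guarantee are hypothetical numbers, not defaults:

```shell
# Hypothetical non-elastic Allocation Pool Org VDC:
CPU_ALLOCATION_MHZ=10000   # CPU allocation configured on the Org VDC
CPU_GUARANTEE_PCT=20       # CPU resources guaranteed (%)

# The backing resource pool is limited to the allocation and reserves
# the guaranteed fraction of it.
CPU_LIMIT_MHZ=$CPU_ALLOCATION_MHZ
CPU_RESERVATION_MHZ=$(( CPU_ALLOCATION_MHZ * CPU_GUARANTEE_PCT / 100 ))

echo "RP CPU limit:       ${CPU_LIMIT_MHZ} MHz"
echo "RP CPU reservation: ${CPU_RESERVATION_MHZ} MHz"
```

The same reservation-from-guarantee pattern applies to memory.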
vCloud Director version 9.5 is the first release to provide IPv6 networking support. In this article I want to go into a little more detail on the level of IPv6 functionality than I did in my What's New in vCloud Director 9.5 post.
IPv6 functionality is mostly driven by the support in the underlying networking platform, which is provided by NSX. The level of IPv6 support in NSX-V changes from release to release (for example, the NAT64 feature was introduced in NSX 6.4). My feature list therefore assumes the latest NSX 6.4.4 is used.
Additionally, it should be noted that vCloud Director 9.5 also supports NSX-T, although in a very limited way. Currently no Layer 3 functionality is available for NSX-T based Org VDC networks, which are imported from pre-existing logical switches as isolated networks with IPv4-only subnets.
Here is the feature list (vCloud Director 9.5 and NSX 6.4.4):
Create an external network with an IPv6 subnet (provider only). Note: mixing IPv4 and IPv6 subnets is supported.
Create an Org VDC network with an IPv6 subnet (direct or routed). Note: distributed Org VDC networks are not supported with IPv6.
Use vCloud Director IPAM (static/manual IPv6 assignments via guest customization).
IPv6 (static only) routing via the Org VDC Edge Gateway.
IPv6 firewall rules on the Org VDC Edge Gateway or Org VDC Distributed Firewall via IP Sets.
NAT64 (IPv6-to-IPv4) on the Org VDC Edge Gateway.
Load balancing on the Org VDC Edge Gateway: IPv6 VIP and/or IPv6 pool members.
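As an illustration of the IP Set approach for IPv6 firewall rules, here is a sketch of creating one through the NSX-V API. The NSX Manager host, credentials and the documentation prefix 2001:db8::/64 are placeholders; verify the exact endpoint against the NSX API guide for your release:

```shell
# Define an IPv6 IP Set payload (2001:db8: prefix stands in for a real
# Org VDC network subnet).
cat > ipset.xml <<'EOF'
<ipset>
  <name>ipv6-web-tier</name>
  <description>IPv6 subnet of the web tier</description>
  <value>2001:db8:0:10::/64</value>
</ipset>
EOF

# POST it to NSX Manager at the global scope; skipped unless NSX_MANAGER
# points at a real manager.
if [ -n "${NSX_MANAGER:-}" ]; then
  curl -k -u admin:password -H 'Content-Type: application/xml' \
    -d @ipset.xml \
    "https://${NSX_MANAGER}/api/2.0/services/ipset/globalroot-0"
fi
```

The resulting IP Set can then be referenced as source or destination in Edge Gateway or Distributed Firewall rules.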
VMware has recently released a vCloud Availability update that adds vCloud Director 9.5 compatibility. The tenant needs to install vSphere Replication 8.1.1, which supports vSphere from 6.7U1 all the way down to 6.0U3.
The on-prem upgrade from an older vSphere Replication appliance (e.g. 6.5.1) is side-by-side: you deploy the new 8.1.1 appliance and it connects to the existing one to migrate the data over.
I have noticed that with the new 8.1.1 appliance my cloud replications were not active.
The reason was that the vcta service was not running on the appliance. This service is responsible for establishing the tunnel to the cloud endpoint and transferring the replicated data. Note that it is not needed for regular vSphere-to-vSphere replications.
In a lab environment where you need to apply a custom endpoint certificate as described here, you might not notice this issue immediately, because the service is started manually after the certificate change with service vcta restart. After an appliance reboot, however, the service will be down again.
The fix is easy, just enable the service with:
systemctl enable vcta
command from the appliance CLI (via the console, or SSH if you enabled it earlier). This is one more thing to remember when setting up cloud replications, next to the ESXi vr2c-firewall.vib issue I documented here.
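Put together, the check and the permanent fix are just the standard systemd commands, run on the appliance:

```shell
# Check whether the tunnel service is running after a reboot:
systemctl status vcta

# Enable it so it starts on every boot, and start it right away:
systemctl enable vcta
systemctl start vcta
# (equivalently in one step: systemctl enable --now vcta)
```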
With vCloud Director 9.5, VMware has for the first time released vCloud Director in a fully supported appliance format. It is the first iteration of a longer process to provide the whole solution as an appliance, so an external NFS share, database (PostgreSQL/MS SQL) and RabbitMQ are still needed, but this will change in future releases. I would therefore advise using the 9.5 appliance today only for greenfield environments, and not mixing it with RHEL/CentOS based vCloud Director setups.
If you are going to deploy the appliance here are some tips:
Use vSphere Web Client (FLEX) or OVFTool to deploy the appliance. The HTML5 client is not supported.
OVF appliance networking (DNS/gateway) is provided through a Network Profile for the particular port group the appliance will be connected to. If one does not exist, the vSphere Web Client creates it the first time you deploy an appliance to that port group.
The appliance is deployed with only one vNIC and one IP address. That means the NFS share and the database must be accessible from that vNIC (directly or via a routed connection). The API/UI and the Console Proxy share the same IP address, but the Console Proxy uses port 8443, so you must adjust your Console Proxy load balancer pool to this port.
The appliance uses a vcloud user with UID 1002, which most likely differs from the vcloud user ID on RHEL/CentOS cells and will cause NFS permission issues. That is why I do not recommend a mixed setup.
The appliance copies the responses.properties file to the NFS share for other cells to use when connecting to the database. Note that the file contains not only the encrypted database login credentials but also the encryption key, so make sure access to the NFS share is controlled.
If you need to change appliance network configuration after the fact, use the following command: /opt/vmware/share/vami/vami_config_net. The appliance currently has no admin UI.
The appliance is Photon OS based, so you can install additional packages with the tdnf install command.
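Some of the points above in command form. The export path, IP addresses and package name are placeholders of my own, not values from the official documentation:

```shell
# NFS server side: limit the transfer share to the cell addresses only,
# since responses.properties on it contains the encryption key.
# Example /etc/exports entry (path and IPs are placeholders):
#   /nfs/vcd-transfer 10.0.0.10(rw,sync,no_root_squash) 10.0.0.11(rw,sync,no_root_squash)

# Appliance side: reconfigure networking (there is no admin UI yet):
# /opt/vmware/share/vami/vami_config_net

# Appliance side: install extra packages with Photon's package manager:
# tdnf install -y tcpdump
```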
With vCloud Director version 9, a new API (cloudapi) based on the OpenAPI specification was introduced alongside the legacy XML API. In vCloud Director 9.5, the API Explorer enables consumption of this API directly from the vCloud UI endpoint (read here). Most of the new features use the OpenAPI, such as H5 UI branding, extensions, vRealize Orchestrator service integrations, Cross VDC networking and Roles management.
The OpenAPI is very simple to use: JSON based, with links provided in headers. However, there might be issues when a load balancer with SSL termination is involved, because due to the header or payload size the response will not get through the load balancer.
In my case I am using an NSX Edge load balancer with SSL termination, and the UI fails with the error shown in the screenshot below: unexpected character at line 1 column 1 of the JSON data.
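To confirm that response size really is the problem, you can measure the header and body sizes of the failing call with curl. The endpoint URL and the session token below are placeholders for your own environment:

```shell
# Print how many bytes of headers and body the endpoint returns; compare
# against the load balancer's buffer size (32 KB default on the Edge LB).
if [ -n "${VCD_TOKEN:-}" ]; then
  curl -sk -o /dev/null \
    -w 'header bytes: %{size_header}\nbody bytes: %{size_download}\n' \
    -H "x-vcloud-authorization: ${VCD_TOKEN}" \
    "https://vcloud.example.com/cloudapi/1.0.0/roles"
fi
```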
There are multiple workarounds described in the release notes, but none of them worked for me:
increasing the header maximum on the Edge LB as described in KB 52553 did not help, because the number of headers is not the only issue in this particular scenario; the body payload size is as well
limiting the maximum page size in vCloud Director with cell-management-tool manage-config -n restapi.queryservice.maxPageSize -v 25 fixes the above API call, but a subsequent call made by the UI ignores the setting and its response again does not get through the LB
After some investigation and troubleshooting I discovered that there is a way to increase the Edge LB buffer size above the default 32 KB with a call similar to the one in KB 52553:
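As a sketch of the shape of such a call: the property name lb.global.tune.bufsize, the 65536 value and the edge-57 ID below are my reconstruction/placeholders, so verify them against KB 52553 and the NSX 6.4 API guide before applying:

```shell
# Push a system-control property to the Edge to raise the LB buffer size
# above the 32 KB default (property name and value are assumptions).
cat > systemcontrol.xml <<'EOF'
<systemControl>
  <property>lb.global.tune.bufsize=65536</property>
</systemControl>
EOF

# PUT it to the Edge's system-control config; skipped unless NSX_MANAGER
# points at a real manager.
if [ -n "${NSX_MANAGER:-}" ]; then
  curl -k -u admin:password -H 'Content-Type: application/xml' \
    -d @systemcontrol.xml \
    -X PUT "https://${NSX_MANAGER}/api/4.0/edges/edge-57/systemcontrol/config"
fi
```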
The above call (NSX 6.4) was enough to fix the issue for me, and I can now edit Global Roles in the UI.