VMware Cloud Director 10.4.1 Appliance Upgrade Notes

VMware Cloud Director 10.4.1 (VCD) has just been released, and with it comes a change of the appliance's internal PostgreSQL database from version 10 to 14 (as well as a replication manager upgrade). This means the standard upgrade process has changed: it is now a bit more complicated and can take significantly longer than in the past, so pay attention to this change. The goal of this article is to explain what is going on in the background, so you can better understand why certain steps are necessary.

VCD 10.4.1 is still interoperable with PostgreSQL 10-14 if you use the Linux deployment format, so this blog applies only if you use the appliance deployment. PostgreSQL version 10 is no longer maintained, which is the main reason for the switch, besides the fact that newer is always better :-).

  • The database upgrade will happen during the vamicli update process.
  • All appliance nodes must be up, but the vmware-vcd service should be shut down.
  • It is always recommended to take cold snapshots of all nodes before the upgrade (make sure the snapshots are taken while all DB nodes are powered off at the same time, as you do not want to restore a snapshot of the primary to an older state while the secondary nodes are ahead).
  • The primary database appliance node is where you have to start the upgrade process. It will most likely take the longest time, as the new database version 14 will be installed side-by-side and all the data converted. That means you will also need enough free space on the database partition. You can check by running this on the primary DB node:

    df -h|grep postgres
    /dev/mapper/database_vg-vpostgres 79G 17G 58G 23% /var/vmware/vpostgres


    The above shows that the partition size is 79 GB, the DB is currently using 17 GB, and 58 GB are free, so I am good to go. The actual additional space needed is less than 17 GB, as the database logs and write-ahead logs are not copied over and can be deducted. You can quickly get their size by running:

    du -B G --summarize /var/vmware/vpostgres/current/pgdata/pg_wal/
    du -B G --summarize /var/vmware/vpostgres/current/pgdata/log/


    Or just use this one-liner:

    du -sh /var/vmware/vpostgres/current/pgdata/ --exclude=log --exclude=pg_wal

    If needed, the DB partition can be easily expanded.
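    For illustration, here is a minimal sketch of such an expansion, assuming the underlying virtual disk has already been grown in vSphere and that the database volume is the LVM volume from the df output above (the device name sdX and the ext4 filesystem are assumptions – verify both on your appliance first):

    echo 1 > /sys/class/block/sdX/device/rescan        # make the kernel see the grown disk
    pvresize /dev/sdX                                  # grow the LVM physical volume
    lvextend -l +100%FREE /dev/database_vg/vpostgres   # extend the logical volume
    resize2fs /dev/mapper/database_vg-vpostgres        # grow the ext4 filesystem online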
  • During the vami upgrade process, the secondary DB nodes clone the upgraded DB from the primary via replication. These nodes simply drop their current DB, so they will not need any additional space.
  • The DB upgrade process can be monitored by tailing the update-postgres-db.log file in another ssh session:

    tail -f /opt/vmware/var/log/vcd/update-postgres-db.log
  • After all nodes (DB and regular ones) are upgraded, the database schema is upgraded via the /opt/vmware/vcloud-director/bin/upgrade command. Then you must reboot all nodes (cells).
  • The vcloud database password will be changed to a new autogenerated 14-character string and replicated to all nodes of the VCD cluster. If you use your own tooling to access the DB directly, you might want to change the password, as there is no way to retrieve the autogenerated one. This must be done by running psql in the elevated postgres account context.

    root@vcloud1 [ /tmp ]# su postgres
    postgres@vcloud1 [ /root ]$ psql -c "ALTER ROLE vcloud WITH PASSWORD 'VMware12345678'"



    and then you must update the vcd service on each cell via the CMT reconfigure-database command. This can be done live, in fan-out mode, from a single cell by running:

    /opt/vmware/vcloud-director/bin/cell-management-tool reconfigure-database -dbpassword 'VMware12345678' --private-key-path=/opt/vmware/vcloud-director/id_rsa --remote-sudo-user=postgres -i `cat /var/run/vmware-vcd-cell.pid`

    The command above will change DB configuration properties on the local and all remote cells. It will also refresh the running service to use the new password.

    Note that the DB password must have at least 14 characters.
  • Any advanced PostgreSQL configuration options will not be retained. In fact, they may be incompatible with PostgreSQL 14 (they are backed up in /var/vmware/vpostgres/current/pgdata/postgresql.auto.old).

VMware Cloud Director Cells Behind Internet Proxy

Update 4/25/2022: This configuration no longer works with VMware Cloud Director 10.3.x.

Update 9/6/2022: This configuration works again in VMware Cloud Director 10.4 and 10.3.3.3.

VMware Cloud Director cells are usually deployed in the management cluster, and their access to the Internet might be limited due to security considerations. This can be a problem because certain features do require outgoing access to external (Internet) resources:

  • Catalog subscription: the cell will need access to the published catalog URL
  • Multisite: if you associate multiple Organizations together, some API calls are fanned out by the cell to the respective associated API endpoints, therefore the cell needs to be able to access them (even its own external API endpoint)
  • Cell Appliance VAMI repository for patches or upgrades

The latest VCD release, 10.2.1, now supports an Internet proxy, which means there is no need to provide full Internet access to the management environment.

On the VCD appliance, the proxy can be configured by editing the /etc/sysconfig/proxy file:

root@vcloud1 [ ~ ]# cat /etc/sysconfig/proxy
# Enable a generation of the proxy settings to the profile.
# This setting allows to turn the proxy on and off while
# preserving the particular proxy setup.
#
PROXY_ENABLED="yes"

# Some programs (e.g. wget) support proxies, if set in
# the environment.
# Example: HTTP_PROXY="http://proxy.provider.de:3128/"
HTTP_PROXY="http://proxy.fojta.com:3128"

# Example: HTTPS_PROXY="https://proxy.provider.de:3128/"
HTTPS_PROXY="http://proxy.fojta.com:3128"

You need to restart the vmware-vcd service to apply the configuration.
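To quickly verify that the cell can reach external resources through the proxy, you can source the file and issue a test request with curl. A minimal sketch – the catalog URL is a placeholder for whatever endpoint your cells actually need to reach:

source /etc/sysconfig/proxy
curl -x "$HTTPS_PROXY" -I https://published-catalog.example.com/vcsp/lib/catalog.json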

vCloud Director 9.7 Appliance Tips

About half a year ago I published a blog post with a similar title related to the vCloud Director 9.5 appliance. The changes between appliance versions 9.5 and 9.7 are so significant that I am dedicating a whole new article to the new appliance.

Introduction

The main difference compared to the 9.5 version is that vCloud Director 9.7 now comes with an embedded PostgreSQL database option that supports replication, with manually triggered, semi-automated failover. An external database is no longer supported with the appliance. Service providers can still use the Linux installable version of vCloud Director with external PostgreSQL or Microsoft SQL databases.

The appliance is provided as a single OVA file that contains five different configurations (flavors): primary node (small and large), standby node (small and large), and vCloud Director cell application node.

All node configurations include the vCloud Director cell application services; the primary and standby nodes also include the database and the replication manager binaries. It is possible to deploy a non-DB-HA architecture with just the primary and cell nodes, however for production DB HA is recommended, which requires a minimum of three nodes: one primary and two standbys. The reason two standbys are needed is that, from the moment replication is configured, the PostgreSQL database will not process any write requests unless it can synchronously replicate them to at least one standby node. This also has implications for how to remove nodes from the cluster, which I will get to.

I should also mention that primary and standby nodes, once deployed, are equivalent from the appliance perspective, so a standby node can become primary and vice versa. There is always only one primary DB node in the cluster.

An NFS transfer share is required and is crucial for sharing information among the nodes about the cluster topology. In the appliance-nodes folder on the transfer share you will find data from each node (name, IP addresses, ssh keys) that are used to automate operations across the cluster.

Contrary to other HA database solutions, there is no network load balancing or single floating IP used here; instead, for database access, all vCloud Director cells are always pointed to the eth1 IP address of the (current) primary node. During a failover the cells are dynamically repointed to the IP of the node that takes over the primary role.

Speaking of network interfaces, the appliance has two – eth0 and eth1. Both must be used, and they must be on different subnets. The first one (eth0) is primarily used for the vCloud Director services (http – ports 80, 443, console proxy – port 8443, jmx – ports 61611, 61616); the second one's (eth1) primary role is database communication (port 5432). You can use both interfaces for other purposes (ssh, management, ntp, monitoring, communication with vSphere / NSX, ..). Make sure you follow the correct order during their configuration, as it is easy to mix up the subnets or port groups.
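A quick way to sanity-check that the database port on eth1 is reachable from another node is a curl TCP probe (a sketch; 10.0.4.62 is a placeholder for the primary node's eth1 address):

curl -v telnet://10.0.4.62:5432 </dev/null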

Appliance Deployment

Before you start deploying the appliance(s), make sure the NFS transfer share is prepared and empty. Yes, it must be empty. When the primary node is deployed, responses.properties and other files are stored on the share and used to bootstrap the other appliances in the server group and the database cluster.

The process always starts with the primary node (small or large). I would recommend large for production and small for everything else. Quite a lot of data must be provided in the form of OVF properties (transfer share path, networking, appliance and DB passwords, vCloud Director initial configuration data). As it is easy to make a mistake, I recommend snapshotting the VM before the first power-on so you can always revert and fix whatever was wrong (the inputs can be changed in the vCenter Flex UI, VM Edit Settings, vApp Options).

To see whether the deployment succeeded, or why it failed, examine the following log files on the appliance:

firstboot: /opt/vmware/var/log/firstboot
vcd setup: /opt/vmware/var/log/vcd/setupvcd.log

config data can be checked in: /opt/vmware/etc/vami/ovfEnv.xml

Successful deployment of the primary node results in a single-node vCloud Director instance with a non-replicated DB running on the same node and with the responses.properties file saved to the transfer share, ready for the other nodes. The file contains database connection information, certificate keystore information, and the secret to decrypt encrypted passwords. Needless to say, this is pretty sensitive information, so make sure access to the NFS share is restricted.

Note about certificates: the appliance generates its own self-signed certificates for the vCloud Director UI/API endpoint (http) and consoleproxy access and stores them in the certificates.ks keystore in /opt/vmware/vcloud-director, which is protected with the same password as the initial appliance root password. This is important, as the encrypted keystore password in the responses.properties file will be used for the configuration of all other appliances, and thus you must deploy them with the same appliance root password. If not, you might end up with a half-working node, where the database is working but the vcd service is not, due to failed access to the certificates.ks keystore.
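To inspect the generated keystore, standard Java keytool can be used (a quick sketch; keytool ships with the cell's JRE but may not be on the PATH, and 'RootPassword' is a placeholder for your initial appliance root password). You should see entries for the UI/API and console proxy certificates:

keytool -list -keystore /opt/vmware/vcloud-director/certificates.ks -storepass 'RootPassword'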

To deploy additional appliance nodes you use the standby or pure VCD cell node configurations. For HA DB you need (at least) two standbys. As all these nodes run the VCD service, deploying additional pure VCD cell nodes is needed only for large environments. The size of the primary and standbys should always be the same.

Database Cluster Operations

Update 2019/06/14: The official documentation has been updated to include this information.

The database appliances currently provide a very simple UI on port 5480 showing the cluster state, with the only available operation being the promotion of a standby node, and only when the primary has failed (you cannot promote a standby in the UI while the primary is running).

Here is a cheat sheet of other database-related operations you might need to perform through the CLI:

  • Start, stop and reload configuration of database on a particular node:
    systemctl start vpostgres.service
    systemctl stop vpostgres.service
    systemctl reload vpostgres.service
  • Show cluster status as seen by a particular node:
    sudo -i -u postgres /opt/vmware/vpostgres/10/bin/repmgr -f /opt/vmware/vpostgres/10/etc/repmgr.conf cluster show
  • Planned DB failover (for example, for node maintenance). On the standby cell run:
    sudo -i -u postgres /opt/vmware/vpostgres/current/bin/repmgr standby switchover -f /opt/vmware/vpostgres/current/etc/repmgr.conf --siblings-follow

Location of important database related files:
psql (DB CLI client): /opt/vmware/vpostgres/current/bin/psql
configuration, logs and data files: /var/vmware/vpostgres/current/pgdata

How to Rejoin Failed Database Node to the Cluster

The only supported way is to deploy a new node. You should deploy it as a standby node and, as mentioned in the deployment chapter, it will automatically bootstrap and replicate the database content. That can take some time depending on the database size. You will also need to clean up the old failed VCD cell in the vCloud Director Admin UI – Cloud Cells section.

There is an unsupported way to rejoin a failed node without redeploying it, but use it at your own risk – all commands are triggered on the failed node:

Stop the DB service:
systemctl stop vpostgres.service

Delete stale DB data:
rm -rf /var/vmware/vpostgres/current/pgdata

Clone DB from the primary (use its eth1 IP):
sudo -i -u postgres /opt/vmware/vpostgres/current/bin/repmgr -h <primary_database_IP> -U repmgr -d repmgr -f /opt/vmware/vpostgres/current/etc/repmgr.conf standby clone

Start the DB service:
systemctl start vpostgres.service

Add the node to the repmgr cluster:
sudo -i -u postgres /opt/vmware/vpostgres/current/bin/repmgr -h <primary_database_IP> -U repmgr -d repmgr -f /opt/vmware/vpostgres/current/etc/repmgr.conf standby register --force

How to Remove Failed Standby Node from the Cluster

On the primary node find the failed node ID via the repmgr cluster status command:
sudo -i -u postgres /opt/vmware/vpostgres/10/bin/repmgr -f /opt/vmware/vpostgres/10/etc/repmgr.conf cluster show

Now unregister the failed node by providing its ID (e.g. 13416):
sudo -i -u postgres /opt/vmware/vpostgres/10/bin/repmgr -f /opt/vmware/vpostgres/10/etc/repmgr.conf standby unregister --node-id=13416

Clean up the failed VCD cell in the Cloud Cells section of the VCD Admin UI.

How to Revert from DB Cluster to Single DB Node Deployment

As mentioned in the introduction, if you shut down both (all) standby nodes, your primary database will stop serving write I/O requests. So how do you get out of this pickle?

First, unregister both (deleted) standbys via the previously mentioned commands:

sudo -i -u postgres /opt/vmware/vpostgres/10/bin/repmgr -f /opt/vmware/vpostgres/10/etc/repmgr.conf cluster show
sudo -i -u postgres /opt/vmware/vpostgres/10/bin/repmgr -f /opt/vmware/vpostgres/10/etc/repmgr.conf standby unregister --node-id=<id1>
sudo -i -u postgres /opt/vmware/vpostgres/10/bin/repmgr -f /opt/vmware/vpostgres/10/etc/repmgr.conf standby unregister --node-id=<id2>

Delete the appliance-nodes subfolders on the transfer share corresponding to these nodes. Use grep -R standby /opt/vmware/vcloud-director/data/transfer/appliance-nodes to find out which folders should be deleted.

For example:
rm -Rf /opt/vmware/vcloud-director/data/transfer/appliance-nodes/node-38037bcd-1545-49fc-86f2-d0187b4e9768

And finally, edit postgresql.conf and change the synchronous_standby_names line to synchronous_standby_names = ''. This disables waiting for the transaction commit on at least one standby.

vi /var/vmware/vpostgres/current/pgdata/postgresql.conf

Reload the DB config: systemctl reload vpostgres.service. The database should start serving write I/O requests again.
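To confirm the setting took effect, you can query it directly with the bundled psql client:

sudo -i -u postgres /opt/vmware/vpostgres/current/bin/psql -c 'SHOW synchronous_standby_names;'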

Upgrade and Migration to Appliance

Moving from either Linux cells or the 9.5 appliance to the 9.7 appliance with the embedded DB requires a migration. Unfortunately, it is not possible to just upgrade the 9.5 appliance to 9.7, due to the embedded database design.

The way to get to the 9.7 appliance is to first upgrade the existing environment to 9.7, then deploy a brand new 9.7 appliance-based environment and transplant the old database content into it.

It is not a simple process. I recommend testing it up front on a clone of production so you are not surprised during the actual migration maintenance window. The procedure is documented in the official docs; I will provide only the high-level process and my notes.

  • Upgrade the existing setup to the 9.7(.0.x) version. Shut down the VCD service and back up the database, global.properties, responses.properties and certificate files. Shut down the nodes if you are going to reuse their IPs.
  • Prepare a clean NFS share and deploy a single-node appliance-based VCD instance. I prefer to do the migration on a single-node instance and then expand it to multi-node HA when the transplant is done.
  • Shut down the vcd service on the appliance and delete its vcloud database so we can start with the transplant.
  • We will restore the database (if the source is MS SQL, we will use the cell-management-tool migration) and overwrite the global.properties and responses.properties files. Do not overwrite the certificates.ks file.
  • Now we will run the configure script to finalize the transplant. At this point on the 9.7.0.1 appliance I hit a bug related to SSL DB communication. In case your global.properties file contains a vcloud.ssl.truststore.password line, comment it out and run the configure script with SSL disabled. This is my example:
    /opt/vmware/vcloud-director/bin/configure --unattended-installation --database-type postgres --database-user vcloud \
    --database-password "VMware1!" --database-host 10.0.4.62 --database-port 5432 \
    --database-name vcloud --database-ssl false --uuid --keystore /opt/vmware/vcloud-director/certificates.ks \
    --keystore-password "VMware1!" --primary-ip 10.0.1.62 \
    --console-proxy-ip 10.0.1.62 --console-proxy-port-https 8443
  • Update 2019/05/24: The correct way to resolve the bug is to also copy the truststore file from the source (if the file does not exist, which can happen if the source was freshly upgraded to 9.7.0.1 or later, start the vmware-vcd service at least once). The official docs will be updated shortly. The configure script can then be run with ssl set to true:
    /opt/vmware/vcloud-director/bin/configure --unattended-installation --database-type postgres --database-user vcloud \
    --database-password "VMware1!" --database-host 10.0.4.62 --database-port 5432 \
    --database-name vcloud --database-ssl true --uuid --keystore /opt/vmware/vcloud-director/certificates.ks \
    --keystore-password "VMware1!" --primary-ip 10.0.1.62 \
    --console-proxy-ip 10.0.1.62 --console-proxy-port-https 8443
    Note that the keystore password is the initial appliance root password! We are still reusing the appliance autogenerated self-signed certs at this point.
  • If this went right, start the vcd service and deploy additional nodes as needed.
  • On each node, replace the self-signed certificate with the CA-signed one.

Backup and Restore

Backing up the appliance is very easy; the restore, less so. The backup is triggered from the primary node with the command:

/opt/vmware/appliance/bin/create-db-backup

It creates a single tar file with the database content and additional data needed to fully restore the vCloud Director instance. The problem is that partial restores (that would reuse existing nodes) are nearly impossible (at least in the HA DB cluster scenario) and the restore involves basically the same procedure as the migration.

CA Certificate Replacement

There are probably many ways to accomplish this. You can create your own keystore and import certificates from it with the cell-management-tool certificates command into the existing appliance /opt/vmware/vcloud-director/certificates.ks keystore. Or you can replace the appliance certificates.ks file and re-run the configure command. See here for a deep dive.
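As an illustration of the keystore preparation for the second approach, here is a sketch that uses standard Java keytool to import a CA-signed certificate (with its private key) from a PKCS#12 bundle; the bundle name, passwords, and source alias are placeholders, and http / consoleproxy are the destination aliases vCloud Director conventionally uses for the UI/API and console proxy certificates:

keytool -importkeystore \
  -srckeystore ca-signed.p12 -srcstoretype PKCS12 -srcstorepass 'SourcePass' -srcalias 1 \
  -destkeystore /opt/vmware/vcloud-director/certificates.ks -deststorepass 'KeystorePass' -destalias http

Repeat with the console proxy bundle and -destalias consoleproxy, then restart the vmware-vcd service so the cell picks up the new certificates.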

Note that the appliance UI (on port 5480) uses different certificates. These are stored in /opt/vmware/appliance/etc/ssl. I will update this post with the procedure once it is available.

External DB Access

In case you need to access the vCloud Director database externally, you must edit the pg_hba.conf file with the IP address or subnet of the external host. However, the pg_hba.conf file is dynamically generated, and any manual changes will be quickly overwritten. The correct procedure is to create a new file (with any name) on the DB appliance node in the /opt/vmware/appliance/etc/pg_hba.d folder, with a line similar to:

host all all 10.0.2.0/24 md5

This means that any host from the 10.0.2.0/24 subnet will be able to log in via the password (md5) authentication method with any database user account and access any database.
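For example, to create such a file in one step (the file name external-access is arbitrary):

cat > /opt/vmware/appliance/etc/pg_hba.d/external-access << EOF
host all all 10.0.2.0/24 md5
EOF

The appliance regenerates pg_hba.conf from these fragments; if the change is not picked up promptly, a configuration reload (systemctl reload vpostgres.service) should do it.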

There is currently no easy way to use a network load balancer to always point to the primary node. This is planned for the next vCloud Director release.

Postgres User Time Bomb

Both the vCloud Director 9.7 and 9.7.0.1 appliance versions unfortunately have a time bomb issue where the postgres user account expires in 60 days (counted from the appliance creation, not its deployment). When that happens, the repmgr commands triggered via ssh stop working, so, for example, a UI-initiated failover with the promote button will not work.

The 9.7 appliance postgres user expires May 25 2019; the 9.7.0.1 appliance postgres user expires July 9 2019. The fix is to run the following command as root on each DB appliance (see KB 70332):
chage -M -1 -d 1 postgres

You can check the postgres account status with:
chage -l postgres


Patching and Upgrading vCloud Director 9.7+ Appliance

Update 23/09/2019: The same process can be used to upgrade the vCloud Director appliance to version 10. You can also use the VMware patch repository if your appliances have Internet connectivity. To reset the repo location from local to the VMware-provided one, just use the following command:

vamicli update --repo ""

The vCloud Director 9.7.0.1 patch has just been released, and it is the first opportunity to patch the appliance edition of vCloud Director. Let me describe the process.

I have a three-appliance deployment with each node running the embedded database in an active – standby – standby configuration. While in theory you could treat the appliance as a regular Linux deployment and use the same patching process that has been used for years, by simply running vmware-vcloud-director-distribution-9.7.0-13635483.bin, this would patch only the vCloud Director binaries and not the appliance packages. Therefore, we must follow a completely different process.

It should also be noted that currently we cannot use the automated orchestrated upgrade procedure or the appliance UI. Hopefully both will come in the future as the appliance version matures.

Download the appliance upgrade file, VMware_vCloud_Director_9.7.0.4264-13635483_update.tar.gz, and unpack it to a transfer directory that is available to all the cells.

mkdir /opt/vmware/vcloud-director/data/transfer/update

tar xzf VMware_vCloud_Director_9.7.0.4264-13635483_update.tar.gz -C /opt/vmware/vcloud-director/data/transfer/update

Now, on each cell, we will have to set the repo, check if we need to update, shut down the vCloud Director service, and patch.

vamicli update --repo file:///opt/vmware/vcloud-director/data/transfer/update/

vamicli update --check

/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell -s

vamicli update --install latest

Note that during the whole process the embedded database is still running on each node, so vCloud Director stays functional until the vcd service is shut down on the last node.

Once the last node is patched, we can upgrade the database schema. Before we do that, we will make a database backup. This is done from the primary DB node (which node is primary can be checked in the vCD Database Availability UI running on each node on port 5480).

/opt/vmware/appliance/bin/create-db-backup

The backup is created in the pgdb-backup folder in the transfer share (e.g. /opt/vmware/vcloud-director/data/transfer/pgdb-backup/db-backup-2019-05-20-090502.tgz).
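Before proceeding, it does not hurt to verify that the backup archive is readable (the file name below is the example from above):

tar tzf /opt/vmware/vcloud-director/data/transfer/pgdb-backup/db-backup-2019-05-20-090502.tgz > /dev/null && echo "backup archive OK"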

Now we can finally proceed with the database schema upgrade:

/opt/vmware/vcloud-director/bin/upgrade

If everything went right, we can start the vcd service on each cell and enjoy our updated vCloud Director instance.

service vmware-vcd start

vCloud Director 9.5 Appliance Tips

With vCloud Director 9.5, VMware for the first time released vCloud Director in a fully supported appliance format. It is the first iteration of a longer process to provide the whole solution in the appliance format; therefore, an external NFS share, database (PostgreSQL/MS SQL) and RabbitMQ are still needed, but this will change in future releases. I would therefore advise using the 9.5 appliance today only for greenfield environments, and not mixing it with RHEL/CentOS based vCloud Director setups.

If you are going to deploy the appliance, here are some tips:

  • Use vSphere Web Client (FLEX) or OVFTool to deploy the appliance. The HTML5 client is not supported.
  • OVF appliance networking (DNS/gateway) is provided through the Network Profile of the particular port group the appliance is going to be connected to. If it does not exist, vSphere Web Client will create it the first time you deploy an appliance to the port group.
  • The appliance is deployed with only one vNIC and one IP address. That means NFS and the DB must be accessible from that vNIC (directly or via a routed connection). API/UI and Console Proxy share the same IP, but Console Proxy uses port 8443, so you must adjust your Console Proxy load balancer network pool to this port.
  • The appliance uses a vcloud user with ID 1002, which most likely differs from the RHEL/CentOS vcloud user ID and will cause NFS permission issues. That’s why I do not recommend a mixed setup.
  • The appliance will copy the responses.properties file to the NFS share for other cells to use to connect to the database. Note that the file contains not only encrypted database login credentials but also the encryption key, so make sure access to the NFS share is controlled (see the example export below).
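    For example, a restrictive /etc/exports entry on the NFS server could look like this (path and subnet are placeholders; no_root_squash is commonly required for the VCD transfer share):

    /export/vcd-transfer 10.0.1.0/24(rw,sync,no_root_squash)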
  • If you need to change the appliance network configuration after the fact, use the following command: /opt/vmware/share/vami/vami_config_net. The appliance currently has no admin UI.
  • The appliance is Photon-based, so you can install additional packages with the tdnf install command, as shown below.
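    For example (the package name is just an illustration – check availability with tdnf list first):

    tdnf install -y tcpdump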