vCloud Availability – Cloud Proxy with Multiple NICs

Cloud Proxy is an important component of the vCloud Availability solution: it sits in the DMZ and tunnels replication traffic in and out of the provider's environment. For a deep dive on the traffic flows, see this older article. Cloud Proxy is very similar to a vCloud Director cell; it runs on a Linux VM and can be multihomed with internet-facing and management-facing interfaces.

By default, Cloud Proxy uses its primary network interface for both to-the-cloud (port 443) and from-the-cloud (port 31031) traffic. When multihoming is used, it might be beneficial to move the listener for the from-the-cloud traffic to the internal interface. This can be accomplished by adding the following line to the $VCLOUD_HOME/etc/global.properties file, with the IP address of the internal interface:

cloudproxy.fromcloudtunnel.host = 192.168.250.110

After restarting the cell, the listener will be moved to the new IP address.
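As a minimal sketch, the whole change can be applied from the shell as follows (the IP address is the one from this lab and the vmware-vcd service restart is the assumed way to bounce the cell; adjust both to your environment):

# Append the from-the-cloud listener address (use the internal interface IP)
echo 'cloudproxy.fromcloudtunnel.host = 192.168.250.110' >> $VCLOUD_HOME/etc/global.properties

# Restart the cell service so the listener moves to the new address
service vmware-vcd restart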

Here is an example from my lab:

Cloud Proxy with two NICs:

[root@vcd-01a ~]# ifconfig
eno16780032: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.110.40 netmask 255.255.255.0 broadcast 192.168.110.255
inet6 fe80::250:56ff:fe3f:969 prefixlen 64 scopeid 0x20<link>
inet6 fdba:dd06:f00d:a400:250:56ff:fe3f:969 prefixlen 64 scopeid 0x0<global>
ether 00:50:56:3f:09:69 txqueuelen 1000 (Ethernet)
RX packets 45153159 bytes 11625785984 (10.8 GiB)
RX errors 0 dropped 1118 overruns 0 frame 0
TX packets 52432329 bytes 14266764397 (13.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.250.110 netmask 255.255.255.0 broadcast 192.168.250.255
inet6 fe80::570a:1196:4322:521f prefixlen 64 scopeid 0x20<link>
inet6 fdba:dd06:f00d:a400:3495:c013:e72:cc58 prefixlen 64 scopeid 0x0<global>
ether 00:50:56:37:03:81 txqueuelen 1000 (Ethernet)
RX packets 4409 bytes 279816 (273.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 26 bytes 2691 (2.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Before the edit:

[root@vcd-01a ~]# netstat -an|grep 31031
tcp6 0 0 192.168.110.40:31031 :::* LISTEN

After the edit and cell restart:

[root@vcd-01a ~]# netstat -an|grep 31031
tcp6 0 0 192.168.250.110:31031 :::* LISTEN

vCloud Director 8.20: Orchestrated Upgrade

The vCloud Director architecture consists of multiple cells that share a common database. The upgrade process involves shutting down services on all cells, upgrading them, upgrading the database, and starting the cells. In large environments with three or more cells this can be quite labor intensive.

vCloud Director 8.20 brings a new feature: an orchestrated upgrade. All cells and the vCloud database can be upgraded with a single command from the primary cell VM. This brings two advantages. Simplicity: it is no longer necessary to log in to each cell VM, upload binaries and execute the upgrade process manually. Availability: downtime during the upgrade maintenance window is reduced.

Prerequisites

Set up SSH private key login from the primary cell to all other cells in the vCloud Director instance for the vcloud user.

  1. On the primary cell, generate a private/public key pair (with no passphrase):

    ssh-keygen -t rsa -f $VCLOUD_HOME/etc/id_rsa
    chown vcloud:vcloud $VCLOUD_HOME/etc/id_rsa
    chmod 600 $VCLOUD_HOME/etc/id_rsa

  2. Copy the public key to the authorized_keys file on each additional cell in the instance. This can be done with a one-line command run from the primary cell, or with ssh-copy-id. Use the IP/FQDN the cell is registered with in vCloud Director.

    cat $VCLOUD_HOME/etc/id_rsa.pub | ssh root@<cell-IP> "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

  3. Verify that login with the private key works for each secondary cell in the environment (a loop sketch for checking multiple cells follows this list):

    sudo -u vcloud ssh -i $VCLOUD_HOME/etc/id_rsa root@<cell IP/FQDN>
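If you have more than one secondary cell, the same check can be run in a quick loop (the cell names below are hypothetical lab values; replace them with the IP/FQDN of your own cells). The -o BatchMode=yes option makes ssh fail instead of prompting for a password, so a broken key setup is immediately visible:

    # Hypothetical secondary cells – replace with your own IP/FQDNs
    for cell in vcd-02a.corp.local vcd-03a.corp.local; do
      sudo -u vcloud ssh -i $VCLOUD_HOME/etc/id_rsa -o BatchMode=yes root@$cell hostname
    done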

Multi-cell Installation

Upload the vCloud Director binary to the primary cell and make it executable. Execute the file with the --private-key-path option pointing to the private key.

/root/vmware-vcloud-director-distribution-8.20.0-5070903.bin --private-key-path $VCLOUD_HOME/etc/id_rsa

Optionally, a maintenance cell can be specified with the --maintenance-cell option.

For troubleshooting, the upgrade log is located on the primary cell in $VCLOUD_HOME/logs/upgrade-<date and time>.log

For no-prompt execution you can add the --unattended-upgrade option.
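Putting the options together, a fully unattended run with a designated maintenance cell could look like the sketch below (the maintenance cell address is a placeholder, and combining the flags in a single invocation is an assumption; check the binary's help output for the exact syntax of your build):

/root/vmware-vcloud-director-distribution-8.20.0-5070903.bin \
  --private-key-path $VCLOUD_HOME/etc/id_rsa \
  --maintenance-cell <maintenance-cell-IP> \
  --unattended-upgrade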

Workflow

This is the workflow that is automatically executed:

  1. Quiescing, shutdown and upgrade of the primary cell. The cell is not started yet.
  2. If a maintenance cell was specified, it is put into maintenance mode.
  3. Quiescing and shutdown of all the other cells.
  4. Upgrade of the vCloud database (with a prompt for a database backup).
  5. Upgrade and start of all the other cells (except the maintenance cell).
  6. If a maintenance cell was specified, it is upgraded and started.
  7. Start of the primary cell.

What is the difference between a quiesced cell and a cell in maintenance mode?

Quiesced cell:

  • finishes existing long running operations
  • accepts new requests and queues them
  • does not dequeue any operations (they stay in the queue)
  • VC listener keeps running
  • Console proxy keeps running

Cell in maintenance mode:

  • waits for long running operations to finish but fails all queued operations
  • answers most requests with HTTP error code 504 (unavailable)
  • still issues auth tokens for /api/sessions login requests
  • no VC listener
  • no Console proxy
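For reference, both states can also be entered manually with the cell-management-tool that ships with each cell. The sketch below assumes the standard $VCLOUD_HOME/bin location and a system administrator account (the tool prompts for the password):

# Quiesce the cell: let running tasks finish, queue new ones
$VCLOUD_HOME/bin/cell-management-tool -u administrator cell --quiesce true

# Check whether the cell still has active jobs before shutting it down
$VCLOUD_HOME/bin/cell-management-tool -u administrator cell --status

# Put the cell into maintenance mode and take it out again
$VCLOUD_HOME/bin/cell-management-tool -u administrator cell --maintenance true
$VCLOUD_HOME/bin/cell-management-tool -u administrator cell --maintenance false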

Interoperability with vCloud Availability

vCloud Availability uses Cloud Proxies to terminate replication tunnels from the internet. Cloud Proxies are essentially stripped-down vCloud Director cells and are therefore treated as regular cells during the orchestrated upgrade.

A quiesced Cloud Proxy has no impact on replication operations and traffic. A Cloud Proxy in maintenance mode still preserves existing replications; however, new replications cannot be established.

2/27/2017: Multiple edits based on feedback from engineering. Thank you Matthew Frost!