A customer asked how to resize a disk of a very large VM (a file server) that is protected with vCloud Availability and thus replicated to the cloud.
This is not straightforward, as the underlying replication engine relies on tracking changed blocks and both the source and target disks must have the same size. In short, the replication must be stopped for a moment and then re-established after the necessary disk resizing. Here is the step-by-step process:
1. Fail over the VM via the vCloud Availability UI/API without powering it on (leave the on-prem VM running).
2. Consolidate the VM in the cloud (this must be done by the service provider, or use the workaround of copying to a catalog and deploying back).
3. Stop the replication of the on-prem VM (via the vSphere UI plugin).
4. Resize the disk of the on-prem VM (including the partition and file system).
5. Resize the disk of the cloud VM from step #2 (hardware only).
6. Set up the replication from scratch using the ‘Use replication seeds’ option, selecting the seed of the failed-over cloud VM from step #5.
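The guest-level part of step 4 depends on the guest OS. As a minimal sketch for a Linux file server, assuming the grown virtual disk is /dev/sdb with a single ext4 partition (the device name, partition number and filesystem are assumptions; growpart comes from the cloud-utils package, and a Windows guest would use Disk Management instead):

```
# Rescan the disk so the guest sees the new size
echo 1 > /sys/class/block/sdb/device/rescan
# Grow partition 1 to fill the disk, then grow the ext4 filesystem on it
growpart /dev/sdb 1
resize2fs /dev/sdb1
```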
Some time ago I blogged about the possibility of linking to the vCloud Availability Portal directly from the vCloud Director UI (here and here). This was done by inserting custom links into the vCloud Director Flex UI.
The vCloud Director 9.x tenant HTML5 UI provides much richer possibilities for embedding additional links, pages and full websites. My colleague Kelby Valenti wrote two whitepapers and one blog post describing how to do so.
Documentation – Static page with links to vCloud Director documentation.
The last module is the vCloud Availability 2.0 portal – the subject of this article:
It is also embedded using an iframe.
I am attaching the source files so you can download and adapt them for your purposes. You will also need the SDK, and I recommend the deployment automation created by Kelby as described in his blog post listed above.
The actual link to the portal is in the src/main/vcav.component.ts file. In my case it is https://portal.proxy.cpsbu.local, so replace it with the correct link for your environment.
For security reasons, the vCloud Availability Portal prohibits being rendered in a browser frame by setting the X-Frame-Options header to DENY. To work around this limitation, I am replacing the header with X-Frame-Options: ALLOW-FROM <VCD-url> on the existing load balancer that balances my two vCloud Availability Portal nodes and also redirects external port 443 to the appliances’ port 8443. This is done with an NSX Edge Gateway, SSL termination and the following application rule:
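The rule itself did not survive in this copy of the post. NSX Edge application rules use HAProxy syntax, so a rule along the following lines would do the header replacement (this is a sketch, not the original rule; the vCloud Director URL is a placeholder to replace with your own):

```
# Replace the portal's X-Frame-Options: DENY response header
# with ALLOW-FROM pointing at the vCloud Director UI URL
rspirep ^X-Frame-Options:.* X-Frame-Options:\ ALLOW-FROM\ https://vcloud.example.com
```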
The link to the portal also passes the vCloud Director session authentication token for Single Sign-On. Note, however, that in the current release (2.0.1) this functionality is broken.
I have updated my vCAT-SP vCloud Availability whitepaper to reflect changes that came with vCloud Availability 2.0 and vSphere 6.5/6.7.
It can be downloaded from the vCAT-SP site in the Storage and Availability section. The direct link to the PDF is here. You will know you have the latest document if you see the June 2018 date on the title page.
Cloud Proxy is an important component of the vCloud Availability solution; it sits in the DMZ and tunnels replication traffic in and out of the provider’s environment. For a deep dive on the traffic flows, see this older article. Cloud Proxy is very similar to a vCloud Director cell: it runs on a Linux VM and can be multihomed with internet-facing and management-facing interfaces.
By default, Cloud Proxy uses its primary network interface for both to-the-cloud (port 443) and from-the-cloud (port 31031) traffic. When multihoming is used, it might be beneficial to move the listener for the from-the-cloud traffic to the internal interface. This can be accomplished by adding the following line to the $VCLOUD_HOME/etc/global.properties file, with the IP address of the internal interface:
cloudproxy.fromcloudtunnel.host = 192.168.250.110
After restarting the cell, the listener will be moved to the new IP address.
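As a sketch, the whole change amounts to appending the property and restarting the cell. In the snippet below, VCLOUD_HOME defaults to the current directory purely for illustration; on a real Cloud Proxy it is the cell's installation directory, and the restart line is commented out:

```shell
# Illustrative only: on a Cloud Proxy cell, VCLOUD_HOME points at the
# vCloud Director installation directory (e.g. /opt/vmware/vcloud-director).
VCLOUD_HOME="${VCLOUD_HOME:-.}"
mkdir -p "$VCLOUD_HOME/etc"
# Bind the from-the-cloud (port 31031) listener to the internal interface IP
echo 'cloudproxy.fromcloudtunnel.host = 192.168.250.110' >> "$VCLOUD_HOME/etc/global.properties"
# Verify the property is present
grep 'cloudproxy.fromcloudtunnel.host' "$VCLOUD_HOME/etc/global.properties"
# service vmware-vcd restart   # restart the cell to move the listener
```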
A minor patch, vCloud Availability 2.0.1, was released last week. Besides many bug fixes, improved documentation and support for Cassandra 3.x, I want to highlight two undocumented features and add an upgrade note.
Provider vSphere Web Client Plugin
This is the return of an experimental feature from version 1.0: the provider can monitor the state of the vSphere Replication Manager Server, the vSphere Replication Servers and all incoming and outgoing replications from inside the vSphere Web Client plugin in the particular (provider-side) vCenter Server. This is especially useful for quick troubleshooting.
Complex vSphere SSO Domain Support
Although it is not recommended to have multiple vCloud Director / vCloud Availability instances sharing the same vSphere SSO domain, it is now possible to accommodate such a scenario. The reason it is not recommended is that it creates an unnecessary dependency between the instances and limits the upgradability and scale of each instance.
Upon startup, the vSphere Replication Cloud Service (vRCS) queries the SSO Lookup Service for Cassandra nodes and resource vCenter Servers. To limit the scope of that query to only those that belong to the particular vCloud Availability instance, create a text file /opt/vmware/hms/conf/sites on all vRCS nodes with the SSO site names that should be queried (one site per line).
There might be some confusion between a vCenter SSO domain and a vCenter SSO site. The vCenter SSO domain name is usually vsphere.local, and the domain is defined by the span of all replicated PSCs. Any single node contains all the replicated data, such as IdP configurations, lookup service registrations, solution users, tags, license information, etc.
vCenter SSO domain can contain multiple SSO sites. The first site name is defined with the first PSC deployment (in vSphere 6.7 the default name is default-site).
The other SSO sites are created when joining an existing SSO domain.
So, based on the example above, the /opt/vmware/hms/conf/sites file could have a single line with the text string default-site. vRCS will then ignore any other SSO sites in the vsphere.local domain.
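Creating the file is a one-liner. In the sketch below, SITES_FILE defaults to a local path purely for illustration; on a vRCS node it would be /opt/vmware/hms/conf/sites:

```shell
# One SSO site name per line; vRCS will query only these sites.
SITES_FILE="${SITES_FILE:-sites}"
printf '%s\n' default-site > "$SITES_FILE"
cat "$SITES_FILE"   # prints: default-site
```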
You can upgrade to vCloud Availability 2.0.1 from both version 1.0.x and 2.0; however, you need to use different upgrade ISO images for upgrading the replication components (vRMS, vRCS and vRS). The installer and UI appliances are redeployed fresh, as they are all stateless.