vCloud Availability: Replication Traffic Deep Dive

VMware this week released vCloud Availability 1.0.1, a disaster-recovery-as-a-service solution that extends vCloud Director and enables VM replication between vSphere environments and a multitenant public cloud.

One of its unique features is that no private network is needed for replication traffic between the tenant on-prem environment and the public cloud. The bidirectional replication traffic can securely traverse the Internet, and in this post I will dive deeper into how this is technically achieved.

On-Premises Components

The tenant on-prem environment can run almost any version of vSphere together with the vSphere Replication (VR) appliance. The appliance must have access to the internet, but it does not need an externally routable (public) IP address. It can sit behind a firewall with source NAT and only one outbound port open – TCP 443.
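As a quick sanity check of this connectivity requirement, a minimal sketch in Python is shown below: it simply verifies that an outbound TCP 443 connection can be opened from the appliance's network. The endpoint FQDN is a hypothetical placeholder, not a real product address.

```python
import socket

# Hypothetical Cloud Proxy public endpoint published by the provider;
# only outbound TCP 443 from the VR appliance has to be allowed.
ENDPOINT = ("vcav.provider.example.com", 443)

try:
    with socket.create_connection(ENDPOINT, timeout=5):
        print("Outbound TCP 443 to the cloud endpoint is open.")
except OSError as exc:
    print(f"Cannot reach {ENDPOINT[0]}:{ENDPOINT[1]} - check firewall/SNAT rules: {exc}")
```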

The appliance runs various services:

  • vSphere Replication Manager Service (vRMS) – the brain of the on-prem VR solution with an internal or optionally external database. It also provides the vSphere Web Client plugin extension for managing the replication.
  • vSphere Replication Server (vRS) – staging point for incoming (from-the-cloud) replications before they are de-staged via the ESXi hosts to the target datastores. This component can scale out if needed by deploying additional appliances (approximately 200 incoming replications per vRS; see the sizing sketch after this list). It is not in the path of outgoing (to-the-cloud) replications.
  • vCloud Tunneling Agent (vCTA) – component that provides secure tunnel connections to the cloud. It also keeps a control connection open so that reverse (from-the-cloud) replication can be initiated as well. More on that later.
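To put the vRS scale-out guideline above into numbers, here is a trivial sizing helper; the figure of roughly 200 incoming replications per vRS comes from the text above, and the function name is purely illustrative.

```python
import math

# Approximate capacity of one vSphere Replication Server appliance
# (roughly 200 incoming replications per vRS, per the text above).
REPLICATIONS_PER_VRS = 200

def vrs_appliances_needed(incoming_replications: int) -> int:
    """How many vRS appliances are needed for a given number of
    incoming (from-the-cloud) replications."""
    return max(1, math.ceil(incoming_replications / REPLICATIONS_PER_VRS))

print(vrs_appliances_needed(450))  # 450 incoming replications -> 3 appliances
```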

That is it – nothing else needs to be installed by the tenant. This is because the actual replication engine (the vSphere Replication agent and the vSCSI filter) is already present in the hypervisor, the ESXi VMkernel. This also means there is no dependency whatsoever on the storage hardware.

To-the-cloud replication flow:

ESXi host (VR Agent) > vSphere Replication Appliance (vCTA) > Internet > vCloud Availability public endpoint (Cloud Proxy load balanced VIP)

From-the-cloud replication flow:

vCloud Availability public endpoint (Cloud Proxy) > Internet > vSphere Replication Appliance (vCTA) > vSphere Replication Server (either embedded on VR appliance or standalone) > ESXi host

Public Cloud Components

In the cloud we need a supported version of vCloud Director, which consists of multiple load-balanced vCloud Director cells, a database, and resource vSphere environments.

For vCloud Availability we need the vSphere Replication components – vRMS appliances for each resource vCenter Server and vRS appliances for replication staging, which scale out based on the number of replications.

Additionally we need:

  • vSphere Replication Cloud Service (vRCS) – highly available appliances that form the brain of the solution and expose the extended vCloud VR APIs. They need an external Cassandra database and RabbitMQ to communicate with vCloud Director.
  • Cloud Proxies – load-balanced, vCloud Director cell-like components with all vCloud services disabled and only the Cloud Proxy service running (a multitenant vCTA).
  • vCloud Availability Portal appliances – load-balanced, stateless components that provide a portal for managing replications in the cloud when the on-prem vSphere Replication UI is not available (in disaster situations).

The provider can serve hundreds of customers with thousands of concurrent tunnels. To achieve this level of scalability, the Cloud Proxies are deployed in a scale-out fashion with a load balancer in front. The load balancer provides a single endpoint both for the on-prem vCTA control connection and for the to-the-cloud replication traffic.

To-the-cloud replication flow:

Tenant on-prem VR Appliance > Internet > Load balancer > Cloud Proxy (tunnel termination and decryption) > vRS > ESXi host.

From-the-cloud replication flow:

In order not to require public visibility of the on-prem tenant VR appliance, the from-the-cloud replication is set up in quite a clever way. There are two options for doing this – one with a load balancer using L7 application rules and the other without. The second approach is more scalable and recommended, so let me describe it.

The following diagram shows the workflow:

[Diagram: from-the-cloud replication workflow]

As mentioned before, the connection is always initiated from the on-prem environment. That is why we have the control connection (1), which is load balanced to one of the Cloud Proxies (in our example Cloud Proxy 2). The replication traffic originates from the in-cloud resource ESXi host, which sends it (2) via an internal load balancer to one of the Cloud Proxies – in our case Cloud Proxy 1 (3). Through the control connection, the on-prem vCTA endpoint is notified which Cloud Proxy is used for this particular replication (4-7). Now the on-prem vCTA can establish a new connection to the correct Cloud Proxy 1 – it does not use the load-balanced address, but instead a direct IP/FQDN that is DNATed 1:1 to that Cloud Proxy (9). Finally, the two connections (3) and (9) are stitched together (10) and the from-the-cloud replication traffic can start flowing all the way to the on-prem environment.
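The vCTA wire protocol is not publicly documented, so the following is only a rough Python sketch of the mechanism just described: a long-lived outbound control connection to the load-balanced endpoint, and a new outbound data connection opened towards whichever Cloud Proxy the control message names. The message format, FQDNs, and function names here are all assumptions, not the product's actual implementation.

```python
import json
import socket
import ssl

# Hypothetical load-balanced endpoint (CloudProxyBase VIP) -- connection (1).
CONTROL_ENDPOINT = ("vcav.provider.example.com", 443)

def open_tls(host, port):
    """Open an outbound TLS connection; all connections are initiated on-prem."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

def relay_to_vrs(conn, replication_id):
    """Placeholder: hand the stitched connection to the local vRS / ESXi hosts."""
    pass

def control_loop():
    """Keep the control connection open and react to reverse-connect requests."""
    control = open_tls(*CONTROL_ENDPOINT)
    try:
        while True:
            # Assumed message shape (simplified: one whole JSON message per recv):
            # {"op": "reverse_connect",
            #  "proxy": "proxy1.cloudproxy.example.com:443",
            #  "replication_id": "..."}
            msg = json.loads(control.recv(4096).decode())
            if msg.get("op") != "reverse_connect":
                continue
            host, port = msg["proxy"].rsplit(":", 1)
            # Step (9): new outbound connection straight to the named Cloud
            # Proxy via its dedicated DNATed FQDN, not via the VIP.
            data = open_tls(host, int(port))
            # Step (10): the cloud side stitches this connection to the one it
            # already holds from the resource ESXi host; from here on the
            # incoming replication data is relayed to the local VR stack.
            relay_to_vrs(data, msg["replication_id"])
    finally:
        control.close()
```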

To summarize, we need:

  • Cloud Proxy load balancer with CloudProxyBase VIP that is used for the Control connection and to-the-cloud replications.
  • Internal load balancer for resource ESXi to Cloud Proxy (from-the-cloud) traffic.
  • Additional public IP/FQDN for each Cloud Proxy for from-the-cloud traffic. This FQDN is configured on the Cloud Proxy cell in the global.properties file (cloudproxy.reverseconnection.fqdn=FQDN:443).
  • As a consequence of using the same Cloud Proxy under different FQDNs (the CloudProxyBase VIP and the Cloud Proxy reverse connection), we need to make sure the Cloud Proxy cell HTTP certificate is valid for both FQDNs. Probably the easiest way to achieve this is to use a wildcard certificate on the Cloud Proxies (CN *.cloudproxy.example.com); see the verification sketch after this list.
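To quickly confirm that the certificate really is valid for both names, a verified TLS handshake against each FQDN is enough. The sketch below assumes two illustrative host names (a VIP and a per-cell reverse-connection FQDN covered by such a wildcard); substitute your own.

```python
import socket
import ssl

# Two names under which the same Cloud Proxy cell is reached; both must be
# covered by its certificate (e.g. a *.cloudproxy.example.com wildcard).
FQDNS = [
    "vip.cloudproxy.example.com",      # CloudProxyBase VIP
    "proxy1.cloudproxy.example.com",   # reverse-connection FQDN of this cell
]

context = ssl.create_default_context()

for host in FQDNS:
    try:
        with socket.create_connection((host, 443), timeout=5) as raw:
            # wrap_socket() verifies the chain and that the certificate
            # matches `host`, which is exactly the property we care about.
            with context.wrap_socket(raw, server_hostname=host) as tls:
                sans = [v for k, v in tls.getpeercert().get("subjectAltName", ())
                        if k == "DNS"]
                print(f"{host}: certificate OK, SANs: {sans}")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}: certificate check FAILED: {exc}")
```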

10 thoughts on “vCloud Availability: Replication Traffic Deep Dive”

  1. Hi, Tom. I'm starting to learn the installation process for vCloud Availability 1.0.1, and my impression is that the documentation is a mess.

    On the page “Download vCloud Availability for vCloud Director Appliances” (http://pubs.vmware.com/vcloud-availability-for-vcd-101/index.jsp#com.vmware.vcavcd.install.doc/GUID-AE8C5E67-9DFD-43EC-AF8B-4278DAA38FD1.html) there is a section “Download the vSphere Replication 6.1.0 Appliances” which links to “vCloud Availability for vCloud Director 1.0.0 (zip)”.

    All of this makes a huge mess if you plan to deploy. I’m at a loss. =))

    1. While the instructions might be confusing, the linked zip file contains the various vCloud Availability appliances (vRCS, vRMS and vRS). The portal and installer are separate downloads.

  2. Hi Tom,
    excellent article and very helpful. Would I be correct in thinking that the vCloud Availability product can have as its target any ESXi platform or vCloud Director for SP platform, as long as the appliance and connectivity requirements are met? The wording seems to imply that the only option is a target in a public cloud.

    regards

    1. No, there is not. Obviously, compliance with your RPO will depend on whether the replication link has enough bandwidth to sync the changes within the set RPO time frame.

  3. Hi Tom, do you have a step-by-step guide for this deployment?
    Right now I already have vSphere Replication on the DC and DRC sites, and vCloud Director on the DRC site.
    What else do I need to make replication to vCloud Director succeed?

      1. Okay, got it.
        And do we need to enable replication on the vCloud Director org/virtual datacenter?
        If yes, do you have any documentation on it?
