Layer 2 VPN to the Cloud – Part III

I feel like it is time for another update on VMware Cloud Director (VCD) capabilities regarding establishing L2 VPN between an on-prem location and an Org VDC. The previous blog posts were written in 2015 and 2018 and do not reflect the changes that came with NSX-T as the underlying cloud network platform.

The primary use case for L2 VPN to the cloud is migration of workloads to the cloud, where the L2 VPN tunnel is established temporarily until the migration of all VMs on a single network is done. The secondary use case is Disaster Recovery, but I feel that running L2 VPN permanently is not the right approach.

But that is not the topic of today’s post. VCD has supported setting up L2 VPN on the tenant’s Org VDC Gateway (Tier-1 GW) since version 10.2, however it is still a hidden, API-only feature (the GUI is finally coming soon … in VCD 10.3.1). The actual setup is not trivial, as the underlying NSX-T technology requires an IPSec VPN tunnel to be established first to secure the L2 VPN client-to-server communication. VMware Cloud Director Availability (VCDA) version 4.2 is an add-on disaster recovery and migration solution for tenant workloads on top of VCD, and it simplifies the setup of both the server (cloud) and client (on-prem) L2 VPN endpoints from its own UI. To reiterate, VCDA is not needed to set up L2 VPN, but it makes it much easier.

The screenshot above shows the VCDA UI plugin embedded in the VCD portal. You can see that three L2 VPN sessions have been created on the VDC Gateway GW1 (NSX-T Tier-1 backed) in the ACME-PAYG Org VDC. Each session uses a different L2 VPN client endpoint type.

The on-prem client can be an existing NSX-T Tier-0 or Tier-1 GW, an NSX-T autonomous edge, or a standalone Edge client. Each requires a different type of configuration, so let me discuss them separately.

NSX-T Tier-0 or Tier-1 Gateway

This is mostly suitable for tenants who are already running an NSX-T environment on-prem. They will need to set up both the IPSec and L2 VPN tunnels directly in NSX-T Manager, which is the most complicated process of the three options. On either the Tier-0 or Tier-1 GW they first need to set up the IPSec VPN and L2 VPN client services, then create the L2 VPN session with the local and remote endpoint IPs and a Peer Code that must be retrieved beforehand via the VCD API (it is not available in the VCDA UI, but will be available in the VCD UI in 10.3.1 or newer). The peer code contains all the necessary configuration for the parent IPSec session in Base64 encoding.
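
Since the peer code is just a Base64-encoded blob, it can be decoded locally to sanity-check the IPSec parameters it carries before it is pasted into the NSX-T session configuration. A minimal sketch, assuming the code retrieved from the VCD API has been saved into a file named peer-code.txt (a placeholder name):

# Decode the Base64 peer code to inspect the embedded IPSec/L2 VPN session parameters
base64 -d peer-code.txt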

Lastly, the local NSX-T segments to be bridged to the cloud can be configured for the session. The parent IPSec session is created automagically by NSX-T, and after a while you should see green status for both the IPSec and L2 VPN sessions.

Standalone Edge Client

This option leverages a very light (150 MB) OVA appliance that can be downloaded from the NSX-T download site and works with both NSX-V and NSX-T L2 VPN server endpoints. It does not require any NSX installation. It provides no UI, and its configuration must be done at deployment time via OVF parameters. Again, the peer code must be provided.
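
Because the appliance has no UI, all settings (peer code, peer endpoint address and so on) have to be passed as OVF properties during deployment. Below is a minimal ovftool sketch; the OVA file name, network names and the --prop keys are illustrative placeholders only – the real property names are listed in the OVA deployment wizard or by probing the OVA with ovftool without a deployment target.

# Deploy the standalone Edge L2 VPN client with ovftool (property keys are placeholders)
ovftool --name=l2vpn-client --acceptAllEulas --powerOn \
  --datastore=datastore1 --net:"Public"="VM Network" \
  --prop:peer_code="<Base64 peer code retrieved from the VCD API>" \
  --prop:peer_address="<cloud L2 VPN server endpoint IP>" \
  standalone-edge-client.ova vi://vcenter.example.com/DC/host/Cluster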

Autonomous Edge

This is the preferred option for non-NSX environments. The autonomous edge is a regular NSX-T edge node that is deployed from OVA but is not connected to NSX-T Manager. During the OVA deployment the Is Autonomous Edge checkbox must be checked. It provides its own UI and much better performance and configurability. Additionally, the client tunnel configuration can be done via the VCDA on-premises appliance UI: you just need to deploy the autonomous edge appliance and VCDA will discover it and let you manage it from then on via its UI.

With this option there is no need to retrieve the Peer Code, as the VCDA plugin retrieves all the necessary information from the cloud site.

vCloud Availability – Resizing Disk of Protected VM

A customer asked how to resize a disk of a very large VM (a file server) which is protected with vCloud Availability and thus replicated to the cloud.

It is not straightforward, as the underlying replication engine relies on tracking changed blocks and both the source and target disks must have the same size. In short, the replication must be stopped for a moment and then re-established after the necessary disk resizing. Here is the step-by-step process:

  1. Fail over the VM via the vCloud Availability UI/API without powering it on (leave the on-prem VM running).
  2. Consolidate the VM in the cloud (this must be done by the SP, or use a workaround with copy to catalog and deploy back).
  3. Stop the replication of the on-prem VM (via the vSphere UI plugin).
  4. Resize the disk of the on-prem VM, including the partition and file system (see the sketch after this list for a Linux guest).
  5. Resize the disk of the cloud VM from step #2 (hardware only).
  6. Set up the replication from scratch by using the ‘Use replication seeds’ option while selecting the seed of the failed-over cloud VM from step #5.
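
For step 4, the in-guest part of the resize depends on the operating system. A minimal sketch for a Linux file server, assuming the grown data disk is /dev/sdb with an ext4 file system on its first partition (device names and tools are examples only):

# After increasing the VMDK size in vSphere, make the guest see the new size and grow the volume
echo 1 > /sys/block/sdb/device/rescan    # rescan the SCSI device
growpart /dev/sdb 1                      # extend partition 1 to the end of the disk (cloud-utils-growpart)
resize2fs /dev/sdb1                      # grow the ext4 file system to fill the partition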

 

Embedding vCloud Availability Portal into vCloud Director UI

Some time ago I blogged about the possibility to link to the vCloud Availability Portal directly from the vCloud Director UI (here and here). This was done by inserting custom links into the vCloud Director Flex UI.

The vCloud Director 9.x tenant HTML5 UI provides much richer possibilities to embed additional links, pages and full websites. My colleague Kelby Valenti wrote two whitepapers and one blog post on how to do so:

Extending VMware vCloud Director User Interface Using Portal Extensibility

Extending VMware vCloud Director User Interface Using Portal Extensibility – Ticketing Example

Publishing vCloud Director User Interface Extensions

VMware has also already released one service that integrates its UI into vCloud Director – the vRealize Operations Tenant App.

In the screenshot below you can see the VCD UI extended with five new sections that appear as additional menu options next to Datacenters, Libraries and Administration:

  • Stub Module – the default example included in the UI Extensibility SDK, providing a static page example (Terms of Service, etc.).
  • Operations Manager – the above-mentioned vRealize Operations Tenant App.
  • Blog – this blog embedded as an iframe.
  • Documentation – a static page with links to the vCloud Director documentation.

The last module is the vCloud Availability 2.0 portal – the subject of this article:

It is also embedded using an iframe.

I am attaching the source files so you can download and adapt them for your purposes. You will also need the SDK, and I recommend the deployment automation created by Kelby as described in his blog post listed above.

Some notes:

  • The actual link to the portal is in the src/main/vcav.component.ts file. In my case it is https://portal.proxy.cpsbu.local, so replace it with the correct link for your environment.
  • For security reasons the vCloud Availability portal prohibits being rendered in a browser frame by setting the X-Frame-Options header to DENY. To work around this limitation I am replacing the header with X-Frame-Options: ALLOW-FROM <VCD-url> on the existing load balancer that balances my two vCloud Availability Portal nodes and also redirects external port 443 to the appliances’ port 8443. This is done with an NSX Edge Gateway, SSL termination and an application rule along the lines of the sketch after this list.
  • The link to the portal also passes the vCloud Director session authentication token for Single Sign-On. Note, however, that in the current release (2.0.1) this functionality is broken.
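
The original application rule is not reproduced here, but since NSX Edge application rules use HAProxy syntax, a minimal sketch of such a header rewrite could look like the following (https://vcd.example.com is a placeholder for your VCD URL):

# Remove the X-Frame-Options: DENY header set by the vCloud Availability portal
rspidel ^X-Frame-Options
# Re-add the header, allowing framing only from the vCloud Director UI
rspadd X-Frame-Options:\ ALLOW-FROM\ https://vcd.example.com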

 

vCloud Availability – Updated Whitepaper

I have updated my vCAT-SP vCloud Availability whitepaper to reflect changes that came with vCloud Availability 2.0 and vSphere 6.5/6.7.

It can be downloaded from the vCAT-SP site in the Storage and Availability section. The direct link to the PDF is here. You will know you have the latest document if you see the June 2018 date on the title page.

Edit highlights:

  • Installer Appliance section
  • Tenant and Provider portal sections
  • PSC section update
  • Supported Org VDC Topologies
  • Application Network Design
  • Network Bandwidth Requirements
  • Monitoring updates
  • Updates and Upgrades section
  • Monitoring with vRealize Operations

vCloud Availability – Cloud Proxy with Multiple NICs

Cloud Proxy is an important component of the vCloud Availability solution that sits in the DMZ and tunnels replicated traffic in and out of the provider’s environment. For a deep dive on the traffic flows see this older article. Cloud Proxy is very similar to a vCloud Director cell: it runs on a Linux VM and can be multihomed with internet- and management-facing interfaces.

By default, Cloud Proxy uses its primary network interface for both to-the-cloud (port 443) and from-the-cloud (port 31031) traffic. When multihoming is used, it might be beneficial to move the listener for the from-the-cloud traffic to the internal interface. This can be accomplished by adding the following line to the $VCLOUD_HOME/etc/global.properties file, with the IP address of the internal interface:

cloudproxy.fromcloudtunnel.host = 192.168.250.110

After restarting the cell, the listener will be moved to the new IP address.
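
A minimal sketch of applying the change on the Cloud Proxy cell (the IP address below is the internal interface of my lab cell; the restart command assumes the standard vmware-vcd service used by vCloud Director cells):

# Append the from-the-cloud listener address (internal interface IP) to global.properties
echo "cloudproxy.fromcloudtunnel.host = 192.168.250.110" >> $VCLOUD_HOME/etc/global.properties

# Restart the cell so the from-the-cloud listener re-binds to the new address
service vmware-vcd restart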

Here is an example from my lab:

Cloud Proxy with two NICs:

[root@vcd-01a ~]# ifconfig
eno16780032: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.110.40 netmask 255.255.255.0 broadcast 192.168.110.255
inet6 fe80::250:56ff:fe3f:969 prefixlen 64 scopeid 0x20<link>
inet6 fdba:dd06:f00d:a400:250:56ff:fe3f:969 prefixlen 64 scopeid 0x0<global>
ether 00:50:56:3f:09:69 txqueuelen 1000 (Ethernet)
RX packets 45153159 bytes 11625785984 (10.8 GiB)
RX errors 0 dropped 1118 overruns 0 frame 0
TX packets 52432329 bytes 14266764397 (13.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.250.110 netmask 255.255.255.0 broadcast 192.168.250.255
inet6 fe80::570a:1196:4322:521f prefixlen 64 scopeid 0x20<link>
inet6 fdba:dd06:f00d:a400:3495:c013:e72:cc58 prefixlen 64 scopeid 0x0<global>
ether 00:50:56:37:03:81 txqueuelen 1000 (Ethernet)
RX packets 4409 bytes 279816 (273.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 26 bytes 2691 (2.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Before the edit:

[root@vcd-01a ~]# netstat -an|grep 31031
tcp6 0 0 192.168.110.40:31031 :::* LISTEN

After the edit and cell restart:

[root@vcd-01a ~]# netstat -an|grep 31031
tcp6 0 0 192.168.250.110:31031 :::* LISTEN