vCloud Director 8.20: Distributed Firewall

NSX Distributed Firewall (DFW) is the most popular feature of NSX. It enables microsegmentation of networks with vNIC-level firewalls enforced in the hypervisor. For a real technical deep dive into the feature I recommend reading Wade Holmes' free e-book, available here.

vCloud Director 8.20 provides this feature to tenants with a brand new HTML5 UI and API. It is managed at the Org VDC level from the Manage Firewall link, which opens a new tab with the new user interface.

manage-firewall

dfw-ui

Firewall Comparison

vCloud Director now offers three different firewall types for tenants, which might be confusing, so let me quickly compare them.

firewall-comparison

The picture above shows two Org VDCs, each with a different network topology. Org VDC 1 is using an Org VDC Edge Gateway that provides firewalling as well as other networking services (load balancing, VPNs, NAT, routing, etc.). It also has a brand new UI and Network API. Firewalling at this level is enforced only on packets routed through the Edge Gateway.

One level below we see vApps with vApp Edges. These provide routing, firewalling and NAT between a routed vApp network and an Org VDC network. There is no change in the firewall capability of the vApp Edge in vCloud Director 8.20; the old Flash UI and vCloud API can be used for its configuration. Firewalling at the vApp Edge level is enforced only on packets routed between Org VDC and vApp networks.

The distributed firewall is applied at the vNIC level of virtual machines. This means it can inspect every packet and frame entering and leaving the VM, so it is completely independent from the network topology and can be used for microsegmentation of a layer 2 network. Both layer 3 and layer 2 rules can be created.

Obviously all three firewall types can be combined and used together.

Managing Access to Distributed Firewall

There are four new access rights related to DFW in vCloud Director.

  • Manage Firewall
  • Configure Distributed Firewall Rules
  • View Distributed Firewall Rules
  • Enable / Disable Distributed Firewall

The last right is by default available only to system administrators, so the provider can control which tenants can and cannot use DFW, and DFW can thus be offered as a value-added service. The provider can either enable DFW selectively for specific Org VDCs, or grant the Enable/Disable Distributed Firewall right to a specific organization via the API so the tenant can enable DFW themselves.

Distributed Firewall under the Hood

Each tenant is given a section in the NSX firewall table and can only apply rules to VMs and Edge Gateways in its domain. There is one section for each Org VDC that has DFW enabled, and by default it is created at the top of the table.

Edit 3/14/2017: In fact it is possible to create the section at the bottom, just above the default section. This allows the provider to create its own section at the top, which will always be enforced first. The use case for this could be a service network.

To force creation of the section at the bottom, the firewall must be enabled with an API call with ?append=true at the end of the URL.

Example: 

POST https://vcloud.fojta.com/network/firewall/vdc/be0f2baa-d36f-47f0-8443-3c5cac231ba5?append=true
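
For completeness, the same call can be made with curl. This is only a hedged sketch: it assumes a valid vCloud API session token in the x-vcloud-authorization header and API version 27.0 (vCloud Director 8.20); adjust both to your environment.

# Log in first (system administrator credentials) and capture the session token
curl -k -i -X POST -u 'administrator@system:password' \
  -H "Accept: application/*+xml;version=27.0" \
  https://vcloud.fojta.com/api/sessions

# ...then enable DFW for the Org VDC with its section appended at the bottom
curl -k -X POST \
  -H "x-vcloud-authorization: <token from the login response>" \
  -H "Accept: application/*+xml;version=27.0" \
  "https://vcloud.fojta.com/network/firewall/vdc/be0f2baa-d36f-47f0-8443-3c5cac231ba5?append=true"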

Org VDC Section Appended at the Bottom

As tenants could have overlapping IPs, all rules in the section are scoped to a security group with dynamic membership of the tenant Org VDC resource pools, and thus will be applied only to VMs in the Org VDC.

nsx-dfw-section
Org VDC section in NSX DFW
org-vdc-security-group
Org VDC Security Group

Tenants can create layer 3 (IP based) or layer 2 (MAC based) rules and can use the following objects when defining them:

  •  IP address, IP/MAC sets
  • Virtual Machine
  • Org VDC Network
  • Org VDC

Note that using L3 non-IP-based rules requires NSX to learn the IP address(es) of the guest VM. One of the following mechanisms must be enabled:

  • VMware Tools installed in guest VM
  • DHCP Snooping IP Detection Type
  • ARP Snooping IP Detection Type

The IP Detection Type is configured in NSX at the cluster level in the Host Preparation tab.

host-preparation

ip-detection-type

The scope of each rule can be defined in the Applied To column. As mentioned before, by default it is set to the Org VDC; however, the tenant can further limit the scope of the rule to a particular VM or Org VDC network (note that a vApp network cannot be used). It is also possible to apply the rule to an Org VDC Edge Gateway; in such a case the rule is actually created and enforced on the Edge Gateway as a pre-rule, which has precedence over all other firewall rules defined on that Edge Gateway.

DFW Rule Applied to Edge GW

The tenant can enable logging of a specific firewall rule with the API by editing the <rule … logged="true|false"> element. NSX then logs the first session packet matching the rule to the ESXi host log with a tenant-specific tag (a substring of the Org VDC UUID). The provider can then filter such logs and forward them to tenants with its own syslog solution.
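
A minimal sketch of the rule element with logging enabled. Only the logged attribute comes from the paragraph above; the remaining attributes and child elements are illustrative of the NSX DFW rule schema and will differ in a real section retrieved via the API.

<!-- Illustrative fragment: GET the existing rule, set logged="true" and PUT it back -->
<rule id="1012" disabled="false" logged="true">
    <name>web-to-app</name>
    <action>allow</action>
    <!-- sources, destinations and services omitted for brevity -->
</rule>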

logging
NSX DFW Rule Tenant Tag

VCDNI to VXLAN Migration

vCloud Network Isolation (VCDNI or VCNI) is a legacy mechanism used to create overlay logical networks independently of the physical networking underlay. It was originally used in VMware vCenter Lab Manager (where it was known as Cross Host Fencing). vCloud Director offers it as one of several mechanisms for the creation of logical networks (next to VXLAN, VLAN and port group backings). VCDNI uses a VMware proprietary MAC-in-MAC encapsulation performed by the vCloud Agent running in the ESXi host vmkernel.

It has for some time been superseded by VXLAN, which is much more scalable, provides better performance and is an industry-standard technology. VXLAN network pools have been available in vCloud Director since version 5.1.

VCDNI is consumed by manually creating a vCloud Network Isolation backed network pool that is mapped to an underlay VLAN network, with up to 1000 logical networks for each pool (VLAN).

As a deprecated and obsolete technology it is no longer supported in vSphere 6.5, and vCloud Director 8.20 is the last release that will support such network pools. vCloud Director 8.20 also provides a simple mechanism to perform low-disruption migrations of Org VDC and vApp networks to VXLAN backed networks. Such a migration must be done before the upgrade to vSphere 6.5 (see more in KB 2148381).

The migration can be performed via the UI or API by the system administrator with Org VDC granularity.

Migration via UI

  1. For an Org VDC using a VCDNI network pool, open the Org VDC properties in the System tab – Manage & Monitor (note that doing the same from the Org tab will not work).
    org-vdc
  2. Go to the Network Pool & Services tab, change the VCDNI backed network pool to a VXLAN backed one and click OK.
    network-pool
  3. Open the Network Pool & Services tab of the Org VDC again. A Migrate to VXLAN button will now appear.
    migrate-to-vxlan
  4. Click the button, confirm the message and start the migration.
    confirmation
  5. After a while the Org VDC status will change from busy back to ready and the migration is finished. Details (and possible errors) can be reviewed in the Recent Tasks pane or the Audit Log.
    audit-log

Migration with vCloud API

The Org VDC network migration is triggered by a single API POST call at the Org VDC level.

POST /api/admin/vdc/<org VDC UUID>/migrateVcdniToVxlan
Content Type: application/vnd.vmware.admin.vdcnitovxlanmigration+xml
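
A hedged curl sketch of the same call; it assumes a system administrator session token (x-vcloud-authorization) and API version 27.0 (vCloud Director 8.20), and the hostname is illustrative.

curl -k -X POST \
  -H "x-vcloud-authorization: <session token>" \
  -H "Accept: application/*+xml;version=27.0" \
  -H "Content-Type: application/vnd.vmware.admin.vdcnitovxlanmigration+xml" \
  "https://vcloud.example.com/api/admin/vdc/<org VDC UUID>/migrateVcdniToVxlan"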

The Process

The following happens in the background for each VCDNI backed network in an Org VDC when the migration is triggered:

  1. A ‘dummy’ VXLAN logical switch is created
  2. All VMs connected to the VCDNI network are reconnected to the new VXLAN logical switch
  3. Edge Gateways connected to the VCDNI network are connected to the new VXLAN logical switch
  4. The Org VDC/vApp network backing is changed in the vCloud DB to use the new VXLAN logical switch
  5. The original VCDNI port group is deleted

A small network disruption is expected during the VM and Edge Gateway reconnections. The following Recent Tasks screenshot from the vSphere Client shows what is happening at the vCenter Server level and how much time each task can take. In this example one Org VDC network and one vApp network were migrated, with VM1 and Edge Gateway ACME-GW2 involved.

vc-recent-tasks

vCloud Availability: Replication Traffic Deep Dive

VMware this week released vCloud Availability 1.0.1. It is a disaster recovery as a service solution that extends vCloud Director and enables VM replications between vSphere environments and a multitenant public cloud.

One of the unique features it offers is that there is no need for private networks for replication traffic between the tenant on-prem environment and the public cloud. The bi-directional replication can securely traverse the Internet and in this post I will dive deeper into how this is technically achieved.

On-Premises Components

The tenant on-prem environment can run almost any version of vSphere together with the vSphere Replication (VR) appliance. The appliance must have access to the internet, but it does not need an externally routable IP address. It can sit behind a firewall with source NATing and only one port open – TCP 443.

The appliance runs various services:

  • vSphere Replication Manager Service (vRMS) – the brain of the on-prem VR solution with an internal or optionally external database. It also provides the vSphere Web Client plugin extension for managing the replication.
  • vSphere Replication Server (vRS) – staging point for incoming (from-the-cloud) replications before they are de-staged via ESXi hosts to the target datastores. This component can scale out if needed by deploying additional appliances (approximately 200 incoming replications per vRS). It is not in the path of outgoing (to-the-cloud) replications.
  • vCloud Tunneling Agent (vCTA) – component that provides secure tunnel connections to the cloud. It also keeps a control connection open so that reverse replication can be initiated as well. More on that later.

That is it. Nothing else needs to be installed by the tenant, because the actual replication engine (the vSphere Replication agent and vSCSI filter) is already present in the hypervisor – the ESXi VMkernel. This also means there is no dependency whatsoever on the storage hardware.

To-the-cloud replication flow:

ESXi host (VR Agent) > vSphere Replication Appliance (vCTA) > Internet > vCloud Availability public endpoint (Cloud Proxy load balanced VIP)

From-the-cloud replication flow:

vCloud Availability public endpoint (Cloud Proxy) > Internet > vSphere Replication Appliance (vCTA) > vSphere Replication Server (either embedded on VR appliance or standalone) > ESXi host

Public Cloud Components

In the cloud we need a supported version of vCloud Director. It consists of multiple load-balanced vCloud Director cells, a database and resource vSphere environments.

For vCloud Availability we need the vSphere Replication components – vRMS appliances for each resource vCenter Server and vRS appliances for replication staging that scale out based on the number of replications.

Additionally we need:

  • vSphere Replication Cloud Service (vRCS) – highly available appliances; the brain of the solution with extended vCloud VR APIs. It needs an external Cassandra database and RabbitMQ to communicate with vCloud Director.
  • Cloud Proxies – load-balanced, vCloud Director cell-like components with all vCloud services disabled and only the Cloud Proxy service running (a multitenant vCTA).
  • vCloud Availability Portal appliances – load-balanced stateless components that provide a portal to manage replications in the cloud when the on-prem vSphere Replication UI is not available (in disaster situations).

The provider can serve hundreds of customers with thousands of concurrent tunnels. To achieve this level of scalability, the Cloud Proxies are deployed in a scale-out fashion with a load balancer in front. The load balancer provides a single endpoint for the on-prem vCTA control connection as well as for the to-the-cloud replication traffic.

To-the-cloud replication flow:

Tenant on-prem VR Appliance > Internet > Load balancer > Cloud Proxy (tunnel termination and decryption) > vRS > ESXi host.

From-the-cloud replication flow:

In order not to require public visibility of the on-prem tenant VR appliance, the from-the-cloud replication is set up in quite a clever way. There are two options for doing this – one with a load balancer using L7 application rules and one without. The second approach is more scalable and recommended, so let me describe it.

The following diagram shows the workflow:

from-the-cloud-replication

As was said before, the connection is always initiated by the on-prem environment. That is why we have the control connection (1) that is load balanced to one of the Cloud Proxies (in our example Cloud Proxy 2). The replicated traffic originates from the in-cloud resource ESXi host, which sends it (2) via an internal load balancer to one of the Cloud Proxies – in our case Cloud Proxy 1 (3). Through the control connection the on-prem vCTA endpoint is notified which Cloud Proxy is used for the particular replication (4-7). Now the on-prem vCTA can establish a new connection to the correct Cloud Proxy 1 – it does not use its load-balanced address, but instead a direct IP/FQDN that is DNATed 1:1 to the Cloud Proxy (9). Finally the two connections (3) and (9) are stitched together (10) and the from-the-cloud replication traffic can flow all the way to the on-prem environment.

To summarize, we need:

  • Cloud Proxy load balancer with the CloudProxyBase VIP that is used for the control connection and to-the-cloud replications.
  • Internal load balancer for the resource ESXi to Cloud Proxy (from-the-cloud) traffic.
  • An additional public IP/FQDN for each Cloud Proxy for from-the-cloud traffic. This FQDN is configured on each Cloud Proxy cell in the global.properties file (cloudproxy.reverseconnection.fqdn=FQDN:443) – see the snippet after this list.
  • As a consequence of using the same Cloud Proxy under different FQDNs (the CloudProxyBase VIP and the Cloud Proxy reverse connection), the Cloud Proxy cell HTTP certificate must be valid for both FQDNs. Probably the easiest way to achieve this is to use a wildcard certificate on the Cloud Proxies (CN *.cloudproxy.example.com).
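
As an illustration of the third bullet, this is roughly how the property would be added on a Cloud Proxy cell; the hostname is just an example and the file path is the usual vCloud Director cell location.

# On each Cloud Proxy cell (hostname illustrative), then restart the cell service
echo "cloudproxy.reverseconnection.fqdn=cloudproxy1.cloudproxy.example.com:443" \
  >> /opt/vmware/vcloud-director/etc/global.properties
service vmware-vcd restart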

Setup Site-to-Site VPN between Azure and vCloud Director

My previous blog post was about setting up IPSec VPN tunnel between AWS VPC and vCloud Director Org VDC. This time I will describe how to achieve the same with Microsoft Azure.

vCloud Director is not on Azure's list of supported IPsec VPN endpoints; however, it is possible to set up such a VPN, although it is not straightforward.

I will describe the setup of both the Azure and VCD endpoints only briefly, as it is very similar to the one I described in my previous article.

Azure Configuration

  • Resource Group (logical container object) – in my example RG UK
  • Virtual network (a large address space, similar to the AWS VPC subnet in the previous article) – 172.30.0.0/16
  • Subnets – at least one for VMs (172.30.0.0/24) and one for the Gateway (172.30.255.0/29)
  • Virtual Network Gateway – the Azure VPN endpoint with a public IP address, associated with the Gateway subnet above. The Gateway type is VPN and the VPN type is Policy-based (Route-based gateways use IKEv2, which is not supported by the NSX platform used by vCloud Director). A CLI sketch of these resources follows the screenshot below.
  • Local Network Gateway – the vCloud VPN endpoint definition with its public IP address and the subnets that should be reachable behind the vCloud VPN endpoint (81.x.x.x, 192.168.100.0/24)
  • Connection – definition of the tunnel:
    • Connection type: Site-to-site (IPSec)
    • Virtual network gateway and local network gateway are straightforward (those created previously)
    • Connection name: whatever
    • Shared Key (PSK): create your own 32+ character key using upper and lower case characters and numbers
  • Test VM connected to the VM subnet (IP 172.30.0.4)

azure-resources
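
The same Azure resources can also be created with the Azure CLI. The sketch below is only illustrative – the resource names, location and gateway SKU are my assumptions, and the masked public IP must be replaced with the real vCloud endpoint address.

# Resource group and virtual network with VM and Gateway subnets
az group create -n RG-UK -l uksouth
az network vnet create -g RG-UK -n vnet-uk --address-prefixes 172.30.0.0/16 \
  --subnet-name vm-subnet --subnet-prefixes 172.30.0.0/24
az network vnet subnet create -g RG-UK --vnet-name vnet-uk -n GatewaySubnet \
  --address-prefixes 172.30.255.0/29

# Virtual Network Gateway - Policy-based, because Route-based gateways use IKEv2
az network public-ip create -g RG-UK -n vpn-gw-ip
az network vnet-gateway create -g RG-UK -n vpn-gw --vnet vnet-uk \
  --public-ip-address vpn-gw-ip --gateway-type Vpn --vpn-type PolicyBased --sku Basic

# Local Network Gateway - the vCloud endpoint (81.x.x.x) and the subnets behind it
az network local-gateway create -g RG-UK -n vcd-edge \
  --gateway-ip-address 81.x.x.x --local-address-prefixes 192.168.100.0/24

# Connection with the pre-shared key
az network vpn-connection create -g RG-UK -n azure-to-vcd \
  --vnet-gateway1 vpn-gw --local-gateway2 vcd-edge --shared-key "<32+ character PSK>"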

vCloud Configuration

As explained above, we created a Policy-based VPN endpoint in Azure. A Policy-based VPN uses IKE version 1, Diffie-Hellman Group 2 and no Perfect Forward Secrecy.

However, the selection of the DH group and PFS is not available to the tenant in vCloud Director on the legacy Org VDC Edge Gateway, so the following workaround is proposed:

The tenant configures the VPN on their Org VDC Edge Gateway as follows:

  • Name: Azure
  • Enable this VPN configuration
  • Establish VPN to: a remote network
  • Local Networks: 192.168.100.0/24 (Org VDC network(s))
  • Peer Networks: 172.30.0.0/24
  • Local Endpoint: Internet (interface facing internet)
  • Local ID: 10.0.2.121 (Org VDC Edge GW internet interface)
  • Peer ID: 51.x.x.x (public IP of the Azure Virtual network gateway)
  • Peer IP: 51.x.x.x (same as previous)
  • Encryption protocol: AES256
  • Shared Key: the same as in Azure Connection definition

Now we need to ask the service provider to disable PFS and change the DH Group to DH2 directly in NSX in the Edge VPN configuration.
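
For reference, this is roughly what the provider-side change could look like against the NSX Manager API. It is a hedged sketch: the endpoint follows the NSX 6.x Edge IPsec config API, the Edge ID and hostname are illustrative, and the element names should be verified against the NSX API guide for the version in use.

# Download the current IPsec configuration of the tenant Edge (IDs illustrative)
curl -k -u admin -o ipsec.xml \
  "https://nsxmgr.example.com/api/4.0/edges/edge-57/ipsec/config"

# In the relevant <site> element set:
#   <enablePfs>false</enablePfs>
#   <dhGroup>dh2</dhGroup>

# Upload the modified configuration
curl -k -u admin -X PUT -H "Content-Type: application/xml" -d @ipsec.xml \
  "https://nsxmgr.example.com/api/4.0/edges/edge-57/ipsec/config"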

nsx-vpn

Note that this workaround is not necessary on an Org VDC Edge Gateway that has been enabled with Advanced Networking services. This feature is at the moment available only in vCloud Air, but it will soon be available to all vCloud Air Network service providers.

If all firewall rules are properly set up we should be able to ping between Azure and vCloud VMs.

ping

Setup Site-to-Site VPN between AWS and vCloud Director

In today's multi-cloud world, customers are asking how to set up connectivity between clouds. In this article I am going to demonstrate how to set up an IPsec VPN tunnel between an AWS VPC and a vCloud Director Org VDC.

IPsec is a standard protocol suite that works at OSI Layer 3 and allows encryption of IP packet communication. It is supported by many software, hardware and cloud vendor implementations; however, it is also quite complex to set up due to the large set of settings that both tunnel endpoints must agree on. Additionally, as it does not rely on the TCP L4 protocol, NAT traversal can be a challenge.

In my example I am using my home lab vCloud Director instance running behind a NATed internet connection. So what could go wrong 🙂

The diagram below shows the set up.


The AWS Virtual Private Cloud on the left is created with a large subnet, 172.31.0.0/16, a few instances, and Internet and VPN gateways.

On the right is vCloud Director Org VDC with a network 192.168.100.0/24 behind an Org VDC Edge Gateway which is connected to the Internet via my home ADSL router.

    1. We start by taking care of IPsec NAT traversal over the ADSL router. As I have dd-wrt on the router, I enabled port forwarding of UDP ports 500 and 4500 to the Edge GW IP 10.0.2.121 and added DNAT rules for IP protocols 50 (ESP) and 51 (AH) to the router startup script.
      udp-port-forwarding
      iptables -t nat -A PREROUTING -p 50 -j DNAT --to 10.0.2.121
      iptables -t nat -A PREROUTING -p 51 -j DNAT --to 10.0.2.121
    2. Now we can proceed with the AWS VPN configuration (the same steps can also be scripted with the AWS CLI – see the sketch after this list). In the AWS console, go to VPC, VPN Connections – Customer Gateways and create a Customer Gateway – the definition of the vCloud Director Org VDC Edge Gateway endpoint. Give it a name, set it to static routing and provide its public IP address (in my case the public address of the ADSL router).
      customer-gateway
    3. Next we define the other end of the tunnel – the Virtual Private Gateway – in the menu below. Give it a name and, right after it is created, associate it with the VPC by right-clicking on it.
      virtual-private-gateway
    4. Now we can create the VPN Connection in the next menu below (VPN Connections). Give it a descriptive name and associate the Virtual Private Gateway from step #3 with the Customer Gateway from step #2. Select static routing and provide the subnet at the other end of the tunnel, which in our case is 192.168.100.0/24. This step might take some time to finish.
    5. When the VPN Connection is created we need to download its configuration. AWS provides the configuration in various formats customized for the appliance on the other side of the tunnel; the Generic format will do for our purposes. Needless to say, AWS does not allow custom settings for any of the given parameters – it is take it or leave it.
      download-configuration
    6. Before leaving the AWS console we need to make sure that the subnet on the other side of the tunnel is propagated to the VPC routing table. This can be done in the Route Tables menu: select the existing route table, and in the Route Propagation tab find the Virtual Private Gateway from step #3 and check the Propagate check box.
      route-table
    7. To configure the other side of the VPN tunnel – the Org VDC Edge Gateway – we need to collect the following information from the configuration file obtained in step #5.
      Virtual Private Gateway IP: 52.x.y.z
      Encryption Algorithm: AES-128
      Perfect Forward Secrecy: Diffie-Hellman Group 2
      Pre-Shared Key (PSK): 32 random characters
      MTU: 1436.
      Note: As was said before, none of these parameters can be changed on the AWS side, so the router on the other side must support all of them. And here we hit a little issue. The AWS pre-shared key is generated with numbers, letters (upper and lower case) and a special character – like a dot, underscore, etc. Unfortunately vShield Edge does not support a PSK with a special character. NSX Edge does, but the legacy vCloud Director UI/API will not allow us to create an IPsec VPN configuration with a PSK containing a special character. There are various ways to solve this: one is not to use the native AWS VPN Gateway and instead use the software VPN option, another is to create/edit the VPN configuration directly in NSX Manager (only the service provider can do this), and lastly to convert the Edge Gateway to an Advanced Gateway and take advantage of the new networking UI and API that do not have this limitation (this functionality is currently available only in vCloud Air, but will soon be available to all vCloud Air Network providers).
    8. In the vCloud Director UI go to Administration, select your Virtual Datacenter, open the Edge Gateways tab and right-click the correct Edge GW to select Edge Gateway Services.
      edge-gw-services
    9. In the VPN tab, enable VPN by clicking the checkbox. In my NATed example I also had to configure the public IP of the Edge GW (which is the address of the ADSL router).
      enable-vpn
    10. Finally we can create the VPN tunnel by clicking the Add button and selecting the Establish VPN to a remote network option. Select the local network(s) (192.168.100.0/24), in Peer Networks enter the AWS VPC subnet (172.31.0.0/16), select the internet interface of the Edge as the Local Endpoint and enter its IP address (10.0.2.121). For Peer ID and Peer IP use the public address of the Virtual Private Gateway from step #7. Change the encryption algorithm to AES and paste the Shared Key (see the note in #7). Finally modify the MTU size (1436).
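
For those who prefer scripting, the AWS-side steps (2–6) can also be performed with the AWS CLI. The sketch below is only illustrative – the resource IDs and the customer gateway IP are placeholders, and the IDs returned by the create commands feed the subsequent ones.

# Step 2: Customer Gateway = the vCloud side (public IP of the ADSL router)
aws ec2 create-customer-gateway --type ipsec.1 --bgp-asn 65000 --public-ip 203.0.113.10

# Step 3: Virtual Private Gateway, attached to the VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Step 4: static VPN Connection and the route towards the Org VDC network
aws ec2 create-vpn-connection --type ipsec.1 \
  --customer-gateway-id cgw-0123456789abcdef0 --vpn-gateway-id vgw-0123456789abcdef0 \
  --options StaticRoutesOnly=true
aws ec2 create-vpn-connection-route --vpn-connection-id vpn-0123456789abcdef0 \
  --destination-cidr-block 192.168.100.0/24

# Step 5: dump the generic tunnel configuration
aws ec2 describe-vpn-connections --vpn-connection-ids vpn-0123456789abcdef0 \
  --query 'VpnConnections[0].CustomerGatewayConfiguration' --output text

# Step 6: propagate the remote subnet into the VPC route table
aws ec2 enable-vgw-route-propagation --route-table-id rtb-0123456789abcdef0 \
  --gateway-id vgw-0123456789abcdef0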

If everything was set up correctly then back in the AWS console, under VPN Connections – Tunnel Details, we should see the tunnel status change to UP.

AWS offers two tunnel endpoints for redundancy, however in our case we are using only Tunnel 1.

tunnel-status-in-aws

If the firewall in the Org VDC and the Security Groups in AWS are properly set up, we should be able to verify tunnel communication with pings from an AWS instance to the Org VDC VM.

ping-test