vRealize Operations Management Pack for NSX-V and Log Insight Integration

Quick post about an issue I discovered in my lab during an upgrade to NSX 6.3.3. This particular NSX version has a silent new feature that verifies whether the syslog configuration on Edges is correct. If the syslog entry is incorrect (it is not an IP address or an FQDN with at least one dot character, or it does not have a TCP/UDP protocol specified), it will not let you save it. This however also means that older Edges (version 6.3.2 or older) that have an incorrect syslog setting will fail to be upgraded, as the incorrect config will not be accepted.

So how does it relate to the title of the article? If you have vROps in your environment with the NSX-V management pack and you have enabled Log Insight integration, the Management Pack will configure syslog on all NSX components. Unfortunately, in my case it configures them incorrectly, with only a hostname and no protocol. This reconfiguration happens roughly every hour. This can be especially annoying in a vCloud Director environment, where all the Edges are initially deployed with the syslog setting specified by VCD, but are then changed within an hour by vROps to something different.

Anyway, the remediation is simple: disable the Log Insight integration of the vROps NSX Management Pack as shown in the picture below.
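If you need to audit or fix the Edge syslog settings in bulk before the upgrade, this can also be done against the NSX Manager API. The following is a minimal Python sketch only, not a definitive procedure: the NSX Manager address, credentials, Edge ID and syslog server are placeholders, and the exact XML element names should be verified against what a GET of the syslog config returns on your NSX version.

# Minimal sketch: check and (re)set an Edge syslog configuration via the NSX Manager API.
# Placeholders: nsx-manager.example.com, edge-21, credentials and the syslog server IP.
import requests

NSX = "https://nsx-manager.example.com"
AUTH = ("admin", "password")
EDGE = "edge-21"

# Read the current syslog configuration of the Edge
r = requests.get(f"{NSX}/api/4.0/edges/{EDGE}/syslog/config", auth=AUTH, verify=False)
print(r.status_code, r.text)

# Push a valid configuration: server address plus an explicit protocol
valid_syslog = """<syslog>
  <enabled>true</enabled>
  <protocol>udp</protocol>
  <serverAddresses>
    <ipAddress>192.168.110.10</ipAddress>
  </serverAddresses>
</syslog>"""
r = requests.put(f"{NSX}/api/4.0/edges/{EDGE}/syslog/config", auth=AUTH,
                 data=valid_syslog, headers={"Content-Type": "application/xml"},
                 verify=False)
print(r.status_code)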

vCloud Director 8.20: Distributed Firewall

NSX Distributed Firewall (DFW) is the most popular NSX feature, enabling microsegmentation of networks with vNIC-level firewalls in the hypervisor. For a real technical deep dive into the feature I recommend reading Wade Holmes' free e-book available here.

vCloud Director 8.20 provides this feature to tenants with a brand new HTML5 UI and API. It is managed at the Org VDC level from the Manage Firewall link, which opens a new tab with the new user interface.

manage-firewall

dfw-ui

Firewall Comparison

vCloud Director now offers three different firewall types for tenants, which might be confusing, so let me quickly compare them.

firewall-comparison

The picture above shows two Org VDCs, each with a different network topology. Org VDC 1 is using an Org VDC Edge Gateway that provides firewalling as well as other networking services (load balancing, VPNs, NAT, routing, etc.). It also has a brand new UI and network API. Firewalling at this level is enforced only on packets routed through the Edge Gateway.

One level below we see vApps with vApp Edges. These provide routing, firewalling and NAT between a routed vApp network and an Org VDC network. There is no change in the firewall capability of the vApp Edge in vCloud Director 8.20; the old Flash UI and vCloud API can be used for its configuration. Firewalling at the vApp Edge level is enforced only on packets routed between Org VDC and vApp networks.

The distributed firewall is applied at the vNIC level of virtual machines. This means it can inspect every packet and frame entering and leaving the VM; it is therefore completely independent of the network topology and can be used for microsegmentation of a layer 2 network. Both layer 3 and layer 2 rules can be created.

Obviously all three firewall types can be combined and used together.

Managing Access to Distributed Firewall

There are four new access rights related to DFW in vCloud Director.

  • Manage Firewall
  • Configure Distributed Firewall Rules
  • View Distributed Firewall Rules
  • Enable / Disable Distributed Firewall

The last right is by default available only to system administrators, so the provider can control which tenants can and cannot use DFW, and it can thus be offered as a value-added service. The provider can either enable DFW selectively for specific Org VDCs or grant the Enable/Disable Distributed Firewall right to a specific organization via the API, in which case the tenant can enable DFW on their own.

Distributed Firewall under the Hood

Each tenant is given a section in the NSX firewall table and can only apply rules to VMs and Edge Gateways in their domain. There is one section for each Org VDC that has DFW enabled, and it is always created at the top.

Edit 3/14/2017: In fact, it is possible to create the section at the bottom, just above the default section. This allows the provider to create its own section at the top, which will always be enforced first. A use case for this could be a service network.

To force creation of the section at the bottom, the firewall must be enabled with an API call with ?append=true at the end.

Example: 

POST https://vcloud.fojta.com/network/firewall/vdc/be0f2baa-d36f-47f0-8443-3c5cac231ba5?append=true
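For completeness, the same call could be scripted. The sketch below is a rough Python example with a placeholder vCloud URL and credentials (the Org VDC UUID is the one from the example above), and it assumes the /network endpoint accepts the standard vCloud session token:

# Minimal sketch: enable DFW for an Org VDC with the section appended at the bottom.
import requests

VCD = "https://vcloud.example.com"
VDC_ID = "be0f2baa-d36f-47f0-8443-3c5cac231ba5"   # Org VDC UUID from the example above

# Log in as system administrator and grab the session token
s = requests.Session()
s.headers.update({"Accept": "application/*+xml;version=27.0"})
login = s.post(f"{VCD}/api/sessions", auth=("administrator@System", "password"), verify=False)
s.headers.update({"x-vcloud-authorization": login.headers["x-vcloud-authorization"]})

# Enable DFW; ?append=true forces the Org VDC section to the bottom (just above the default section)
r = s.post(f"{VCD}/network/firewall/vdc/{VDC_ID}?append=true", verify=False)
print(r.status_code)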

Org VDC Section Appended at the Bottom

As tenants could have overlapping IPs, all rules in the section are scoped to a security group with dynamic membership of the tenant's Org VDC resource pools and are thus applied only to VMs in that Org VDC.

nsx-dfw-section
Org VDC section in NSX DFW
org-vdc-security-group
Org VDC Security Group

Tenants can create layer 3 (IP based) or layer 2 (MAC based) rules using the following objects when defining them:

  • IP address, IP/MAC sets
  • Virtual Machine
  • Org VDC Network
  • Org VDC

Note that using L3 non-IP-based rules requires NSX to learn the IP address(es) of the guest VM. One of the following mechanisms must be enabled:

  • VMware Tools installed in guest VM
  • DHCP Snooping IP Detection Type
  • ARP Snooping IP Detection Type

The IP Detection Type is configured in NSX at the cluster level in the Host Preparation tab.

host-preparation

ip-detection-type

The scope of each rule can be defined in the Applied To column. As mentioned before, by default it is set to the Org VDC; however, the tenant can further limit the scope of the rule to a particular VM or Org VDC network (note that a vApp network cannot be used). It is also possible to apply the rule to an Org VDC Edge Gateway, in which case the rule is actually created and enforced on the Edge Gateway as a pre-rule, which has precedence over all other firewall rules defined on that Edge Gateway.

DFW Rule Applied to Edge GW

Tenants can enable logging of a specific firewall rule via the API by editing the <rule … logged="true|false"> element. NSX then logs the first session packet matching the rule to the ESXi host log with a tenant-specific tag (a substring of the Org VDC UUID). The provider can then filter such logs and forward them to tenants with its own syslog solution.
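For illustration only, a rule with logging enabled could look something like the fragment below; the attributes and child elements shown are simplified and depend on the NSX version:

<rule id="1021" disabled="false" logged="true">
  <name>allow-web</name>
  <action>allow</action>
</rule>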

logging
NSX DFW Rule Tenant Tag

VCDNI to VXLAN Migration

vCloud Network Isolation (VCDNI or VCNI) is a legacy mechanism for creating overlay logical networks independently of the physical networking underlay. It was originally used in VMware vCenter Lab Manager (where it was known as Cross Host Fencing). vCloud Director offers it as one of several mechanisms for the creation of logical networks (next to VXLAN, VLAN and port group backings). VCDNI uses a VMware proprietary MAC-in-MAC encapsulation performed by the vCloud Agent running in the ESXi host vmkernel.

It has for some time been superseded by VXLAN, which is much more scalable, provides better performance and is an industry standard technology. VXLAN network pools have been available in vCloud Director since version 5.1.

VCDNI is consumed through the manual creation of a vCloud Network Isolation backed network pool that is mapped to an underlay VLAN, with up to 1000 logical networks per pool (VLAN).

As a deprecated and obsolete technology, it is no longer supported in vSphere 6.5, and vCloud Director 8.20 is the last release that will support such network pools. vCloud Director 8.20 also provides a simple mechanism to perform low-disruption migrations of Org VDC and vApp networks to VXLAN backed networks. Such a migration must be done before the upgrade to vSphere 6.5 (see more in KB 2148381).

The migration can be performed via the UI or the API by the system administrator with Org VDC granularity.

Migration via UI

  1. For an Org VDC using a VCDNI network pool, open the Org VDC properties in the System tab – Manage & Monitor (note that doing the same from the Org tab will not work).
    org-vdc
  2. Go to the Network Pool & Services tab, change the VCDNI backed network pool to a VXLAN backed one and click OK.
    network-pool
  3. Open the Network Pool & Services tab of the Org VDC again. The Migrate to VXLAN button will now appear.
    migrate-to-vxlan
  4. Click the button, confirm the message and start the migration.
    confirmation
  5. After a while the Org VDC status will change from busy to ready and the migration is finished. Details (and possible errors) can be reviewed in Recent Tasks or the Audit Log.
    audit-log

Migration with vCloud API

The Org VDC network migration is triggered by a single API POST call at the Org VDC level.

POST /api/admin/vdc/<org VDC UUID>/migrateVcdniToVxlan
Content-Type: application/vnd.vmware.admin.vdcnitovxlanmigration+xml
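A rough Python sketch of the call (placeholder vCloud URL, credentials and Org VDC UUID; the request body is assumed to be empty):

# Minimal sketch: trigger the VCDNI to VXLAN migration of one Org VDC as system administrator.
import requests

VCD = "https://vcloud.example.com"
VDC_ID = "00000000-0000-0000-0000-000000000000"   # Org VDC UUID (placeholder)

# Log in and grab the session token
s = requests.Session()
s.headers.update({"Accept": "application/*+xml;version=27.0"})
login = s.post(f"{VCD}/api/sessions", auth=("administrator@System", "password"), verify=False)
s.headers.update({"x-vcloud-authorization": login.headers["x-vcloud-authorization"]})

# Trigger the migration; the response should contain a task that can be monitored
r = s.post(f"{VCD}/api/admin/vdc/{VDC_ID}/migrateVcdniToVxlan",
           headers={"Content-Type": "application/vnd.vmware.admin.vdcnitovxlanmigration+xml"},
           verify=False)
print(r.status_code)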

The Process

The following happens in the background for each VCDNI backed network in an Org VDC when the migration is triggered:

  1. A ‘dummy’ VXLAN logical switch is created
  2. All VMs connected to the VCDNI network are reconnected to the new VXLAN logical switch
  3. Edge Gateways connected to the VCDNI network are connected to the new VXLAN logical switch
  4. The Org VDC/vApp network backing is changed in the vCloud DB to use the new VXLAN logical switch
  5. The original VCDNI port group is deleted

A small network disruption is expected during the VM and Edge Gateway reconnections. The following Recent Tasks picture from the vSphere Client shows what is happening at the vCenter Server level and how much time each task could take. In the example, one Org VDC network and one vApp network were migrated, with VM1 and Edge Gateway ACME-GW2 involved.

vc-recent-tasks

Update 5/8/2017: Engineering informed me that, due to a reported vSphere bug, the fenced parameters are not removed from the NSX Edge VM's vmx file during the migration. This impacts the Edge's connectivity to the migrated network. As a workaround, redeploy the Edge Gateway after the migration.
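If the redeploy itself needs to be scripted, the vCloud API exposes a redeploy action on the Edge Gateway. A minimal sketch, reusing the authenticated session from the earlier examples and a placeholder Edge Gateway UUID:

# Minimal sketch: redeploy an Edge Gateway via the vCloud API after the migration.
# "s" and "VCD" are the authenticated session and URL from the previous sketch; EDGE_ID is a placeholder.
EDGE_ID = "00000000-0000-0000-0000-000000000000"
r = s.post(f"{VCD}/api/admin/edgeGateway/{EDGE_ID}/action/redeploy", verify=False)
print(r.status_code)   # returns a task that can be monitored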

vCloud Availability: Replication Traffic Deep Dive

VMware this week released vCloud Availability 1.0.1. It is a disaster recovery as a service solution that extends vCloud Director and enables VM replication between vSphere environments and a multitenant public cloud.

One of the unique features it offers is that there is no need for private networks for the replication traffic between the tenant on-prem environment and the public cloud. The bi-directional replication can securely traverse the Internet, and in this post I will dive deeper into how this is technically achieved.

On-Premises Components

The tenant on-prem environment can run almost any version of vSphere together with the vSphere Replication (VR) appliance. The appliance must have access to the internet, but does not need an externally routable IP address. It can sit behind a firewall with source NAT and only one port open – TCP 443.

The appliance runs various services:

  • vSphere Replication Manager Service (vRMS) – the brain of the on-prem VR solution with an internal or optionally external database. It also provides the vSphere Web Client plugin extension for managing the replication.
  • vSphere Replication Server (vRS) – staging point for incoming (from-the-cloud) replications before they are de-staged via ESXi hosts to the target datastores. This component can scale out if needed by deploying additional appliances (approximately 200 incoming replications per vRS). It is not in the path of outgoing (to-the-cloud) replications.
  • vCloud Tunneling Agent (vCTA) – component that provides secure tunnel connections to the cloud. It also keeps a control connection open so that reverse replication can be initiated as well. More on that later.

That is it; nothing else needs to be installed by the tenant. This is because the actual replication engine (the vSphere Replication agent and vSCSI filter) is already present in the hypervisor – the ESXi VMkernel. This also means there is no dependency whatsoever on the storage hardware.

To-the-cloud replication flow:

ESXi host (VR Agent) > vSphere Replication Appliance (vCTA) > Internet > vCloud Availability public endpoint (Cloud Proxy load balanced VIP)

From-the-cloud replication flow:

vCloud Availability public endpoint (Cloud Proxy) > Internet > vSphere Replication Appliance (vCTA) > vSphere Replication Server (either embedded on VR appliance or standalone) > ESXi host

Public Cloud Components

In the cloud we need a supported version of vCloud Director. It consists of multiple load balanced vCloud Director cells, a database and resource vSphere environments.

For vCloud Availability we need the vSphere Replication components – vRMS appliances for each resource vCenter Server and vRS appliances for replication staging that scale out based on the number of replications.

Additionally we need:

  • vSphere Replication Cloud Service (vRCS) – highly available appliances; the brain of the solution with extended vCloud VR APIs. It needs an external Cassandra database and RabbitMQ to communicate with vCloud Director.
  • Cloud Proxies – load balanced, vCloud Director cell-like components with all vCloud services disabled and only the Cloud Proxy service running (a multitenant vCTA).
  • vCloud Availability Portal appliances – load balanced, stateless components that provide a portal for managing replications in the cloud when the on-prem vSphere Replication UI is not available (in disaster situations).

The provider can serve hundreds of customers with thousands of concurrent tunnels. To achieve such a level of scalability, the Cloud Proxies are deployed in a scale-out fashion with a load balancer in front. The load balancer provides a single endpoint for the on-prem vCTA control connection as well as for the to-the-cloud replication traffic.

To-the-cloud replication flow:

Tenant on-prem VR Appliance > Internet > Load balancer > Cloud Proxy (tunnel termination and decryption) > vRS > ESXi host.

From-the-cloud replication flow:

In order not to require public visibility of the on-prem tenant VR appliance, the from-the-cloud replication is set up in quite a clever way. There are two options for doing this – one using a load balancer with L7 application rules and the other without. The second approach is more scalable and recommended, so let me describe it.

The following diagram shows the workflow:

from-the-cloud-replication

As was said before, the connection is always initiated by the on-prem environment. That is why we have the control connection (1) that is load balanced to one of the Cloud Proxies (in our example Cloud Proxy 2). The replicated traffic comes from the in-cloud resource ESXi host, which sends it (2) via an internal load balancer to one of the Cloud Proxies – in our case Cloud Proxy 1 (3). Through the control connection the on-prem vCTA endpoint is notified which Cloud Proxy is used for the particular replication (4-7). Now the on-prem vCTA can establish a new connection to the correct Cloud Proxy 1 – it does not use its load balanced address, but instead a direct IP/FQDN that is DNATed 1:1 to the Cloud Proxy (9). Finally, the two connections (3) and (9) are stitched together (10) and the from-the-cloud replication traffic can start flowing all the way to the on-prem environment.

To summarize, we need:

  • Cloud Proxy load balancer with CloudProxyBase VIP that is used for the Control connection and to-the-cloud replications.
  • Internal load balancer for resource ESXi to Cloud Proxy (from-the-cloud) traffic.
  • Additional public IP/FQDN for each Cloud Proxy for from-the-cloud traffic. This FQDN is configured on the Cloud Proxy cell in the global.properties file (cloudproxy.reverseconnection.fqdn=FQDN:443).
  • As a consequence of using the same Cloud Proxy under different FQDNs (the CloudProxyBase VIP and the Cloud Proxy reverse connection), we need to make sure the Cloud Proxy cell HTTP certificate is valid for both FQDNs. Probably the easiest way to achieve this is to use a wildcard certificate on the Cloud Proxies (CN *.cloudproxy.example.com).

Setup Site-to-Site VPN between Azure and vCloud Director

My previous blog post was about setting up an IPsec VPN tunnel between an AWS VPC and a vCloud Director Org VDC. This time I will describe how to achieve the same with Microsoft Azure.

vCloud Director is not on Azure's list of supported IPsec VPN endpoints; however, it is possible to set up such a VPN, although it is not straightforward.

I will describe the setup of both the Azure and VCD endpoints only briefly, as it is very similar to the one I described in my previous article.

Azure Configuration

  • Resource Group (logical container object) – in my example RG UK
  • Virtual network (large address space similar to AWS VPN subnet) – 172.30.0.0/16
  • Subnets – at least one for VMs (172.30.0.0/24) and one for Gateway (172.30.255.0/29)
  • Virtual Network Gateway – Azure VPN endpoint with a public IP address, associated with the Gateway subnet above. The gateway type is VPN and the VPN type is Policy-based (this is because the Route-based type uses IKEv2, which is not supported by the NSX platform used by vCloud Director).
  • Local Network Gateway – definition of the vCloud VPN endpoint with its public IP address and the subnets that should be reachable behind it (81.x.x.x, 192.168.100.0/24)
  • Connection – definition of the tunnel:
    • Connection type: Site-to-site (IPSec)
    • Virtual network gateway and local network gateway are straightforward (those created previously)
    • Connection name: whatever
    • Shared Key (PSK): create your own 32+ character key using upper and lower case characters and numbers
  • Test VM connected to the VM subnet (IP 172.30.0.4)

azure-resources

vCloud Configuration

As explained above, we created a policy-based VPN endpoint in Azure. A policy-based VPN uses IKE version 1, Diffie-Hellman Group 2 and no Perfect Forward Secrecy.

However, the selection of the DH group and PFS is not available to the tenant in vCloud Director on the legacy Org VDC Edge Gateway. Therefore the following workaround is proposed:

The tenant configures the VPN on their Org VDC Edge Gateway with the following:

  • Name: Azure
  • Enable this VPN configuration
  • Establish VPN to: a remote network
  • Local Networks: 192.168.100.0/24 (Org VDC network(s))
  • Peer Networks: 172.30.0.0/24
  • Local Endpoint: Internet (interface facing internet)
  • Local ID: 10.0.2.121 (Org VDC Edge GW internet interface)
  • Peer ID: 51.x.x.x (public IP of the Azure Virtual network gateway)
  • Peer IP: 51.x.x.x (same as previous)
  • Encryption protocol: AES256
  • Shared Key: the same as in Azure Connection definition

Now we need to ask the service provider to disable PFS and change the DH Group to DH2 directly in the Edge VPN configuration in NSX.
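For the provider, this change can also be scripted against the NSX Manager API. The sketch below is only a rough Python outline with placeholder NSX Manager address, Edge ID and credentials; the element names (enablePfs, dhGroup) should be verified against the IPsec config returned by the GET call for the NSX version in use.

# Rough sketch: fetch the Edge IPsec config, disable PFS, set DH group 2 and push it back.
import requests
import xml.etree.ElementTree as ET

NSX = "https://nsx-manager.example.com"
AUTH = ("admin", "password")
EDGE = "edge-21"                      # Edge ID of the tenant Org VDC Edge Gateway (placeholder)

# Read the current IPsec VPN configuration of the Edge
r = requests.get(f"{NSX}/api/4.0/edges/{EDGE}/ipsec/config", auth=AUTH, verify=False)
config = ET.fromstring(r.content)

# Adjust every site: no Perfect Forward Secrecy, Diffie-Hellman group 2
for site in config.iter("site"):
    pfs = site.find("enablePfs")
    dh = site.find("dhGroup")
    if pfs is not None:
        pfs.text = "false"
    if dh is not None:
        dh.text = "dh2"

# Push the modified configuration back
r = requests.put(f"{NSX}/api/4.0/edges/{EDGE}/ipsec/config", auth=AUTH,
                 data=ET.tostring(config), headers={"Content-Type": "application/xml"},
                 verify=False)
print(r.status_code)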

nsx-vpn

Note that this workaround is not necessary on an Org VDC Edge Gateway that has been enabled with Advanced Networking services. This feature is at the moment available only in vCloud Air, but it will soon be available to all vCloud Air Network service providers.

If all firewall rules are properly set up, we should be able to ping between the Azure and vCloud VMs.

ping