vCloud Director version 9.5 is the first release to provide IPv6 networking support. In this article I want to go into a bit more detail on the level of IPv6 functionality than I did in my What’s New in vCloud Director 9.5 post.
IPv6 functionality is mostly driven by the underlying networking platform support, which is provided by NSX. The level of IPv6 support in NSX-V changes from release to release (for example, the NAT64 feature was introduced in NSX version 6.4). My feature list therefore assumes the latest NSX 6.4.4 is used.
Additionally, it should be noted that vCloud Director 9.5 supports NSX-T only in a very limited way. Currently no Layer 3 functionality is supported for NSX-T based Org VDC networks, which are imported based on pre-existing logical switches as isolated networks with IPv4-only subnets.
Here is the feature list (vCloud Director 9.5 and NSX 6.4.4):
- Create External network with IPv6 subnet (provider only). Note: mixing of IPv4 and IPv6 subnets is supported.
- Create Org VDC network with IPv6 subnet (direct or routed). Note: distributed Org VDC networks are not supported with IPv6.
- Use vCloud Director IPAM (static/manual IPv6 assignments via guest customization)
- IPv6 (static only) routing via Org VDC Edge Gateway
- IPv6 firewall rules on Org VDC Edge Gateway or Org VDC Distributed Firewall via IP Sets
- NAT 64 (IPv6-to-IPv4) on Org VDC Edge Gateway
- Load balancing on Org VDC Edge Gateway: IPv6 VIP and/or IPv6 pool members
And here is what is not supported:
- DHCP6, SLAAC (RA)
- Routed vApp networks with IPv6 subnets
- Isolated Org VDC/vApp networks with IPv6 subnets
- OSPF v3, IPv6 BGP dynamic routing on Org VDC Edge Gateway
- Distributed IPv6 Org VDC networks
- Dual stacking IPv4/IPv6 on Org VDC networks
- L2 VPN (tunnel only)
- SSL VPN (tunnel only)
- IPSec VPN (tunnel + inner subnets)
With vCloud Director version 9, a new API (cloudapi) based on the OpenAPI specification was introduced alongside the legacy XML API. In vCloud Director 9.5, the API Explorer enables consumption of the API directly from the vCloud UI endpoint (read here). Most of the new features use this OpenAPI, such as H5 UI branding, extensions, vRealize Orchestrator service integrations, Cross VDC networking and Roles management.
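As a quick illustration (my sketch, not from the official documentation), a minimal cloudapi request for the H5 UI branding could look like this; the placeholder endpoint and the version in the Accept header are assumptions for a typical 9.5 environment:

GET https://<vcd-endpoint>/cloudapi/branding
Accept: application/json;version=31.0
x-vcloud-authorization: <session token>

The response is plain JSON, and links are provided in the response headers rather than in the body.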
The OpenAPI is very simple to use: JSON based, with links provided in headers. However, there can be issues when a load balancer with SSL termination is involved, as due to the header or payload size the request or response will not get through the load balancer.
One such issue is documented in the vCloud Director 9.5 release notes. Attempting to edit Global Roles in the new H5 UI fails with an error:
unexpected character at line 1 column 1 of the JSON data.
In my case I am using an NSX Edge load balancer with SSL termination; below is the error screenshot:
There are multiple workarounds described in the release notes, but none of them actually worked for me:
- Increasing the header maximum at the Edge LB as described in KB 52553 did not help, as the number of headers is not the only issue in this particular scenario; the body payload size is as well.
- Limiting the maximum page size in vCloud Director with cell-management-tool manage-config -n restapi.queryservice.maxPageSize -v 25 fixes the above API call, but the subsequent call made by the UI ignores the setting and the response again does not get through the LB.
After some investigation and troubleshooting I discovered that there is a way to increase the Edge LB buffer size above the default 32 KB, with a call similar to the one in KB 52553:
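The call itself is a PUT of Edge system control properties against NSX Manager. A sketch follows; the buffer size property name is my assumption, by analogy to the max headers property from the KB (131072 bytes = 128 KB), so verify it against your NSX version:

PUT https://<NSX-Manager>/api/4.0/edges/<edge-id>/systemcontrol/config

<systemControl>
    <property>lb.global.tune.http.maxhdr=1024</property>
    <property>lb.global.tune.bufsize=131072</property>
</systemControl>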
The above call (NSX 6.4) was enough to fix the issue for me and I can now edit Global Roles in the UI:
Almost 3 years ago I published an article on how to set up a layer 2 VPN between an on-prem vSphere environment and a vCloud Director Org VDC.
As both vCloud Director and NSX have evolved quite a bit since then and simplified the whole setup, here comes part II.
Let me first summarize the use case:
The tenant has an application that resides on 3 different VLAN based networks running in its own (vSphere) datacenter. The networks are routed by an existing physical router. The tenant wants to extend 2 of these networks to the cloud for cloud bursting or DR purposes, but not the 3rd one (for example because a physical database server runs there).
The following diagram shows the setup.
The main advancements are:
- vCloud Director natively supports NSX L2 VPN (VCD 8.20 or newer needed).
- NSX now (since version 6.2) supports configuring unstretched networks directly (no static routes are necessary anymore)
- This means the full setup can be done by the tenant in a self-service fashion
Here are the steps:
- The tenant deploys the freely available NSX Standalone Edge in its datacenter, connected to a trunk port with 2 VLANs mapped (10 and 11). Additional network configuration is necessary (forged transmits and promiscuous mode, or sink port creation; see the link).
- In the cloud Org VDC the tenant deploys two routed Org VDC networks with subnets and gateways identical to networks A and B. These networks must be connected to the Org VDC Edge GW via a subinterface (there can be up to 200 such networks on a single Edge). The Org VDC Edge GW must have advanced networking enabled.
- The tenant enables and configures the L2 VPN server on its Org VDC Edge GW. Note that this is a premium feature that the service provider must first enable for the Organization (see this blog post).
- Before the L2 VPN tunnel is established, the following must be taken into account:
- The Org VDC Edge GW IP addresses are identical to those of the existing on-prem physical router. Therefore the Egress Optimization Gateway addresses must be entered in the Peer Site configuration. That will prevent the Org VDC Edge GW from sending ARP replies over the tunnel.
- The same must be performed on the Standalone NSX Edge via CLI (see egress-optimize command here).
- The non-stretched network (subnet C) must be configured on the Org VDC Edge GW so it knows that the subnet is reachable through the tunnel and not via its upstream interface(s). This option however is not available in the vCloud UI; instead, the vCloud networking API must be used.
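A sketch of how this can be done through the vCloud Director advanced networking API, which proxies the NSX Edge API (the placeholder names are assumptions; the exact XML schema is described in the NSX API guide):

GET https://<vcd-endpoint>/network/edges/<edge-id>/l2vpn/config
x-vcloud-authorization: <session token>

(edit the returned XML so the peer site lists subnet C among the unstretched networks, then PUT the modified configuration back to the same URL)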
Edit 3/26/2018: This does not work for standalone NSX Edges. See the end of the article for more details.
Alternatively, the provider could configure the non-stretched network directly in the NSX UI:
- Finally, the tunnel can be established by configuring the L2 VPN server details on the on-prem Standalone NSX Edge L2 VPN client (endpoint IP, port, credentials, encryption) and providing the VLAN to tunnel ID mappings.
- Note: to find the tunnel ID mapping of an Org VDC network subinterface, the vCloud API must be used again:
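A sketch via the same advanced networking proxy API (placeholders again assumed):

GET https://<vcd-endpoint>/network/edges/<edge-id>/vnics

In the response, the trunk vNic lists its sub-interfaces; the tunnel ID of the sub-interface backing the particular Org VDC network is the value to use in the client-side VLAN to tunnel ID mapping.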
After multiple questions regarding unstretched networks and some testing I need to make some clarifications.
The routing of unstretched networks through the tunnel is achieved via static routes configured on the Edge GW. So in principle it still works the same way as described in the original article; the difference when doing it via the UI/API is that the setting of the IPs and routes is automatic.
The server Edge routing table looks like this:
show ip route
S 0.0.0.0/0 [1/0] via 10.0.2.254
C 10.0.2.0/24 [0/0] via 10.0.2.121
C 169.254.64.192/26 [0/0] via 169.254.64.193
C 169.254.255.248/30 [0/0] via 169.254.255.249
C 192.168.100.0/24 [0/0] via 192.168.100.1
C 192.168.101.0/24 [0/0] via 192.168.101.1
S 192.168.102.0/24 [1/0] via 169.254.64.194
show ip address
17: vNic_4094@br-sub: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:50:56:88:31:21 brd ff:ff:ff:ff:ff:ff
inet 169.254.64.193/26 brd 169.254.64.255 scope global vNic_4094
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe88:3121/64 scope link
valid_lft forever preferred_lft forever
You can see that the 169.254.64.193 IP address was auto-assigned to the vNic_4094 tunnel interface and a static route was set to route the unstretched network to the other side via IP 169.254.64.194. The assignment of the .194 address on the other Edge happens only if that Edge is managed by NSX and is actually performing the routing! This is in fact not the case in the use case above (with a standalone Edge and an existing physical router). Therefore the following manual approach must be taken:
- Create an Org VDC transit network with an arbitrary small subnet (e.g. 169.254.200.0/29) in the cloud. Assign IP .1 as the gateway on the Org VDC Edge. This network will not be used for workloads; it is used just for routing to the unstretched network.
- Create a corresponding VLAN transit network on-prem. Assign IP .2 as its gateway interface on the existing router (note the IP addresses of the routing interfaces in #1 and #2 are different).
- Create the L2 VPN tunnel as before, but also stretch the transit network, without optimizing its GW (no need, as on-prem and cloud are using different IPs).
- Create static routes on the Org VDC Edge GW to route to the on-prem unstretched networks via the 169.254.200.2 transit network router IP, as illustrated below.
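For illustration, reusing the unstretched subnet from the routing table above (192.168.102.0/24), the resulting route on the Org VDC Edge GW would look like this:

S 192.168.102.0/24 [1/0] via 169.254.200.2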
Note that this approach is very similar to the original blog post. The only difference is that we must create a separate transit network, as vCloud Director does not support multiple subnets on the same Edge GW interface.
vCloud Director version 9 introduces support for the last major missing NSX feature: the distributed logical router (DLR). DLR provides an optimized router which performs routing between different logical switches in a distributed fashion in the hypervisor. The routing always happens in the hypervisor running the source VM, which means that traffic goes between at most two ESXi hosts (source and destination) and no tromboning through a third host running a router VM is necessary. Read here for a technical deep dive into how this works. This not only provides much better performance than traditional Edge GW routing, but also scales up to 1000 routed logical networks (as opposed to 10 on an Edge GW, or up to 209 if a trunk port is enabled).
Generally, DLR should be used only for routing between VXLAN based logical switches, although NSX supports VLAN networks with certain caveats as well. Dynamic routing protocols are supported too and are managed by the Control VM of the DLR.
Now let’s look at how vCloud Director implements DLR. The main focus was on making DLR very simple to use and seamlessly integrated with the existing Org VDC networking concepts.
The diagram below shows the networking topology of such a setup.
In the example you can see three Org VDC networks: one (blue) traditional (10.10.10.0/24) attached directly to the Org VDC Edge GW, and two (purple and orange) distributed (192.168.0.0/24 and 192.168.1.0/24) connected through the DLR instance. The P2P connection between the Org VDC Edge GW and the DLR instance is green.
- Up to 1000 distributed Org VDC networks can be connected to one Org VDC Edge GW (one DLR instance per Org VDC Edge GW).
- Some networking features (such as L2 VPN) are not supported on the distributed Org VDC networks.
- VLAN based Org VDC networks cannot be distributed. The Org VDC must use VXLAN network pool.
- IPv6 is not supported by DLR.
- vApp routed networks cannot be distributed.
- The tenant can override the automatic DHCP and static route configurations done by vCloud Director for distributed networks on the Org VDC Edge GW. The tenant cannot modify the P2P connection between the Edge and DLR instance.
- Disabling DLR on an Org VDC Edge Gateway is possible, but all distributed networks must be removed first.
- Both enabling and disabling DLR on an Org VDC Edge Gateway are by default system administrator only operations. It is possible to grant these rights to a tenant with the granular RBAC introduced in vCloud Director 8.20 (see the API sketch after this list).
- DLR feature is in the base NSX license in the VMware Cloud Provider Program.
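As a sketch of such a grant via the legacy XML API (the org rights endpoints exist since API version 27.0; the exact names of the DLR related rights vary between releases, so list the available rights in your environment first and pick the relevant ones):

GET https://<vcd-endpoint>/api/admin/org/<org-id>/rights

(add References for the DLR related rights to the returned OrgRights XML, then PUT it back)

PUT https://<vcd-endpoint>/api/admin/org/<org-id>/rights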
Edit 02/10/2017: Engineering (Abhinav Mishra) provided a way to change the P2P subnet between the Edge and the DLR instance. Add the following property value with CMT:
$VCLOUD_HOME/bin/cell-management-tool manage-config -n gateway.dlr.default.subnet.cidr -v <subnet CIDR>
Example: $VCLOUD_HOME/bin/cell-management-tool manage-config -n gateway.dlr.default.subnet.cidr -v 169.254.255.248/30
No cell reboot is needed.
Edit 03/10/2017: Existing Org VDC networks can be migrated between traditional, DLR and sub-interface based connections in all directions in a non-disruptive way, with running VMs attached.
vCloud Director uses Network Pools to programmatically create on-demand L2 networking segments for Org VDC and vApp networks. Network pools can be based on VLANs, VXLAN, port groups, or the legacy (deprecated) vCloud Network Isolation (VCDNI) technology.
The VXLAN Network Pool is the recommended one as it scales the best. Until version 9, vCloud Director would automatically create a new VXLAN Network Pool for each Provider VDC, backed by an NSX Transport Zone (again created automatically) scoped to the clusters that belong to the particular Provider VDC. This would create multiple VXLAN network pools and potential confusion about which one to use for a particular Org VDC.
In vCloud Director 9 we have the option to create our own VXLAN network pool, backed by a manually created NSX Transport Zone scoped to the clusters we choose (and using any control plane mode).
During creation of a Provider VDC we then have a choice to create a new VXLAN Network Pool (the legacy behavior) or use an existing one.
Advantages of the new feature are:
- No more clutter from a large number of VXLAN network pools (if there are many Provider VDCs)
- Simpler way to use hybrid or unicast control plane modes (vCloud Director would always default to multicast before)
- Control over the scope of VXLAN networks, which is especially useful for sharing Org VDC networks between Org VDCs from different Provider VDCs
- Adherence to the best practice of scoping the transport zone to the whole vDS (more here)