NSX-T supports Role-Based Access Control by integrating with VMware Identity Manager (VIDM), which provides access to third-party identity sources such as LDAP, Active Directory, SAML2, etc.
When NSX-T 2.3 is integrated with VIDM, you get a choice during login of which type of account you are going to provide (remote or local).
NSX-T 2.4 no longer provides this option and always defaults to the SAML source (VIDM). To force a login with a local account, use this specific URL:
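Assuming the standard NSX-T local-login form (verify against the documentation for your exact NSX-T version, and substitute your own NSX Manager address):

```
https://<nsx-manager-fqdn>/login.jsp?local=true
```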
vCloud Director version 9.5 is the first release to provide networking IPv6 support. In this article I want to go into a little more detail on the level of IPv6 functionality than I did in my What’s New in vCloud Director 9.5 post.
IPv6 functionality is mostly driven by the underlying networking platform, which is provided by NSX. The level of IPv6 support in NSX-V changes from release to release (for example, the NAT64 feature was introduced in NSX 6.4). My feature list therefore assumes the latest release, NSX 6.4.4, is used.
Additionally, it should be noted that vCloud Director 9.5 also supports NSX-T in a very limited way. Currently no Layer 3 functionality is supported for NSX-T based Org VDC networks, which are imported based on pre-existing logical switches as isolated networks with IPv4-only subnets.
Here is the feature list (vCloud Director 9.5 and NSX 6.4.4).
Supported:
- Create external network with IPv6 subnet (provider only). Note: mixing of IPv4 and IPv6 subnets is supported.
- Create Org VDC network with IPv6 subnet (direct or routed). Note: distributed Org VDC networks are not supported with IPv6.
- Use vCloud Director IPAM (static/manual IPv6 assignments via guest customization)
- IPv6 (static only) routing via Org VDC Edge Gateway
- IPv6 firewall rules on Org VDC Edge Gateway or Org VDC Distributed Firewall via IP sets
- NAT64 (IPv6-to-IPv4) on Org VDC Edge Gateway
- Load balancing on Org VDC Edge Gateway: IPv6 VIP and/or IPv6 pool members
Not supported:
- DHCPv6, SLAAC (RA)
- Routed vApp networks with IPv6 subnets
- Isolated Org VDC/vApp networks with IPv6 subnets
- OSPFv3, IPv6 BGP dynamic routing on Org VDC Edge Gateway
- Distributed IPv6 Org VDC networks
- Dual stacking IPv4/IPv6 on Org VDC networks
- L2 VPN (tunnel only)
- SSL VPN (tunnel only)
- IPsec VPN (tunnel + inner subnets)
During installation of vCloud Director you must supply an installation ID from the range 1-63. Its purpose is very similar to the vCenter Server ID: it is used for generation of unique MAC addresses for VMs running in the vCloud Director instance. The MAC address format is 00:50:56:ID:xx:xx. vCloud Director overrides the vCenter Server MAC assignments.
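As an illustrative sketch of the scheme described above (the function name and index mapping are my own; vCloud Director actually pre-generates these addresses in its database):

```python
def vcd_mac(installation_id: int, index: int) -> str:
    """Return a MAC address in the 00:50:56:ID:xx:xx format described above.

    Illustrative only: this just shows why installation IDs 1-63 partition
    the MAC space, leaving 65536 addresses (the last two octets) per instance.
    """
    if not 1 <= installation_id <= 63:
        raise ValueError("installation ID must be in the range 1-63")
    if not 0 <= index <= 0xFFFF:
        raise ValueError("only the last two octets vary within one instance")
    # VMware OUI prefix, then the installation ID as the fourth octet
    return "00:50:56:%02x:%02x:%02x" % (installation_id, index >> 8, index & 0xFF)
```

Because the installation ID occupies the fourth octet, two instances with different IDs can never produce the same MAC address.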
Obviously, on the same layer 2 network MAC addresses must always be unique, so when using L2 VPN or similar network extension mechanisms, care should be taken that each vCloud Director instance participating in such a network extension has a different installation ID to guarantee non-overlapping MAC addresses of deployed VMs.
Until vCloud Director 9.5 it was not possible to change the installation ID. The reason was that during installation vCloud Director actually generates all possible MAC addresses in its database, so that table would have to be regenerated when the ID changes.
This can now be accomplished with the cell-management-tool mac-address-management CLI command, which takes care of the MAC address table regeneration and also reports how many MAC addresses still exist that are based on the old ID. Existing VMs will keep their old MAC addresses unless they are manually reset/regenerated from the vCloud Director UI or via the vCloud API.
The CMT command can either regenerate MAC addresses with a specific ID that can differ from the installation ID (option --regenerate-with-seed), or you can change the installation ID in the database first (GSS alert!) and just use the --regenerate option.
The pgAdmin screenshot below shows the ID location in the vCloud DB. For production setups this should be done with help from GSS.
Finally, here is a screenshot showing the --show-seed option listing the actual MAC usage based on the seed IDs.
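As a sketch, the workflow might look like this (the option names are the ones discussed above; the cell install path is the default and may differ in your environment, and the seed value 12 is purely illustrative):

```shell
# Run on a vCloud Director cell; default install path assumed.
CMT=/opt/vmware/vcloud-director/bin/cell-management-tool

# List the actual MAC usage per seed ID
$CMT mac-address-management --show-seed

# Regenerate the MAC address table with a new seed ID
$CMT mac-address-management --regenerate-with-seed 12
```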
Almost 3 years ago I published an article on how to set up a layer 2 VPN between an on-prem vSphere environment and a vCloud Director Org VDC.
As both vCloud Director and NSX have evolved quite a bit since then and the whole setup got simpler, here comes part II.
Let me first summarize the use case:
The tenant has an application that resides on 3 different VLAN-based networks running in its own (vSphere) datacenter. The networks are routed with an existing physical router. The tenant wants to extend 2 of these networks to the cloud for cloud bursting or DR purposes, but not the 3rd one (for example, because a physical database server runs there).
The following diagram shows the setup.
The main advancements are:
- vCloud Director natively supports NSX L2 VPN (VCD 8.20 or newer needed).
- NSX now (since 6.2) supports configuration of unstretched networks directly (no static routes are necessary anymore).
- This means the full setup can be done by the tenant in a self-service fashion.
Here are the steps:
- The tenant deploys the freely available NSX Standalone Edge in its datacenter, connected to a trunk port with 2 VLANs mapped (10 and 11). Additional network configuration is necessary (forged transmits and promiscuous mode, or sink port creation – see the link).
- In the cloud Org VDC, the tenant deploys two routed Org VDC networks with identical subnets and gateways as networks A and B. These networks must be connected to the Org VDC Edge GW via subinterfaces (there can be up to 200 such networks on a single Edge). The Org VDC Edge must have advanced networking enabled.
- The tenant enables and configures the L2VPN server on its Org VDC Edge GW. Note that this is a premium feature that the service provider must first enable in the Organization (see this blog post).
- Before the L2VPN tunnel is established, the following must be taken into account:
- The Org VDC Edge GW IP addresses are identical to those of the existing on-prem physical router. Therefore, Egress Optimization Gateway addresses must be entered in the Peer Site configuration. That will prevent the Org VDC Edge GW from sending ARP replies over the tunnel.
- The same must be performed on the Standalone NSX Edge via the CLI (see the egress-optimize command here).
- The non-stretched network (subnet C) must be configured on the Org VDC Edge GW so it knows that the subnet is reachable through the tunnel and not via its upstream interface(s). This option, however, is not in the vCloud UI; the vCloud networking API must be used instead.
Edit 3/26/2018: This does not work for standalone NSX Edges. See the end of the article for more details.
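A hedged sketch of such a vCloud networking API call (the proxy path mirrors the NSX Edge API, but the exact endpoint and payload must be verified against the API reference for your version; the host name, credentials, and edge ID are placeholders):

```shell
# Retrieve the current L2VPN configuration of an Org VDC Edge GW
# through the vCloud Director network API proxy (all values illustrative).
curl -k -u 'administrator@system:password' \
  -H 'Accept: application/xml' \
  'https://vcd.example.com/network/edges/edge-42/l2vpn/config'
```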
Alternatively, the provider could configure the non-stretched network directly in the NSX UI:
- Finally, the tunnel can be established by configuring the L2VPN server details (endpoint IP, port, credentials, encryption) on the on-prem Standalone NSX Edge L2VPN client and providing the VLAN-to-tunnel mappings.
- Note that to find the Org VDC network subinterface tunnel mapping, the vCloud API must be used again:
After multiple questions regarding unstretched networks and some testing, I need to make some clarifications.
The routing of unstretched networks through the tunnel is achieved via static routes configured on the Edge GW. So in principle it still works the same way as described in the original article; the difference when doing it via the UI/API is that the setting of the IPs and routes is automatic.
The server Edge routing table looks like this:
show ip route
S 0.0.0.0/0 [1/0] via 10.0.2.254
C 10.0.2.0/24 [0/0] via 10.0.2.121
C 169.254.64.192/26 [0/0] via 169.254.64.193
C 169.254.255.248/30 [0/0] via 169.254.255.249
C 192.168.100.0/24 [0/0] via 192.168.100.1
C 192.168.101.0/24 [0/0] via 192.168.101.1
S 192.168.102.0/24 [1/0] via 169.254.64.194
show ip address
17: vNic_4094@br-sub: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:50:56:88:31:21 brd ff:ff:ff:ff:ff:ff
inet 169.254.64.193/26 brd 169.254.64.255 scope global vNic_4094
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe88:3121/64 scope link
valid_lft forever preferred_lft forever
You can see that the 169.254.64.193 IP address was autoassigned to the vNic_4094 tunnel interface, and a static route was set to route the unstretched network to the other side via IP 169.254.64.194. The assignment of the .194 address on the other Edge will happen only if that Edge is managed by NSX and is actually performing the routing! This is in fact not the case for the use case above (with a standalone Edge and an existing physical router). Therefore, the following manual approach must be taken:
- Create an Org VDC transit network with an arbitrary small subnet (e.g. 169.254.200.0/29) in the cloud. Assign IP .1 as the gateway on the Org VDC Edge. This network will not be used for workloads; it is used just for routing to the unstretched network.
- Create a corresponding VLAN transit network on-prem. Assign IP .2 as its gateway interface on the existing router (note that the IP addresses of the routing interfaces in #1 and #2 are different).
- Create the L2 VPN tunnel as before, but also stretch the transit network; do not optimize its GW (no need, as on-prem and cloud are using different IPs).
- Create static routes on the Org VDC Edge GW to route to the on-prem unstretched networks via the 169.254.200.2 transit network router IP.
Note that this approach is very similar to the original blog post. The only difference is that we must create a separate transit network, as vCloud Director does not support multiple subnets on the same Edge GW interface.
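Putting the steps together, the resulting addressing and routing look roughly like this (using the example subnets from the text; 192.168.102.0/24 stands for the unstretched on-prem subnet shown in the routing table above):

```
Transit network:           169.254.200.0/29 (stretched, GW not egress-optimized)
  Org VDC Edge GW:         169.254.200.1
  On-prem router:          169.254.200.2
Static route on Edge GW:   192.168.102.0/24 -> 169.254.200.2
```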
vCloud Director version 9 introduces support for the last major missing NSX feature – the distributed logical router (DLR). DLR provides an optimized router that performs routing between different logical switches in a distributed fashion, in the hypervisor. The routing always happens in the hypervisor running the source VM, which means that the traffic goes between at most two ESXi hosts (source and destination), and no tromboning through a third host running a router VM is necessary. Read here for a technical deep dive into how this works. This not only provides much better performance than traditional Edge GW routing, but also scales up to 1000 routed logical networks (as opposed to 10 on an Edge GW, or up to 209 if a trunk port is enabled).
Generally, DLR should be used for routing only between VXLAN-based logical switches, although NSX supports VLAN networks with certain caveats as well. Dynamic routing protocols are supported too and are managed by the Control VM of the DLR.
Now let’s look at how vCloud Director implements DLR. The main focus was on making DLR very simple to use and seamlessly integrated with the existing Org VDC networking concepts.
The diagram below shows the networking topology of such a setup.
In the example you can see three Org VDC networks: one (blue) traditional (10.10.10.0/24), attached directly to the Org VDC Edge GW, and two (purple and orange) distributed (192.168.0.0/24 and 192.168.1.0/24), connected through the DLR instance. The P2P connection between the Org VDC Edge GW and the DLR instance is green.
- Up to 1000 distributed Org VDC networks can be connected to one Org VDC Edge GW (one DLR instance per Org VDC Edge GW).
- Some networking features (such as L2 VPN) are not supported on the distributed Org VDC networks.
- VLAN based Org VDC networks cannot be distributed. The Org VDC must use VXLAN network pool.
- IPv6 is not supported by DLR
- vApp routed networks cannot be distributed
- The tenant can override the automatic DHCP and static route configurations done by vCloud Director for distributed networks on the Org VDC Edge GW. The tenant cannot modify the P2P connection between the Edge and DLR instance.
- Disabling DLR on an Org VDC Edge Gateway is possible, but all distributed networks must be removed first.
- Both enabling and disabling DLR on an Org VDC Edge Gateway are by default system-administrator-only operations. It is possible to grant these rights to a tenant with the granular RBAC introduced in vCloud Director 8.20.
- DLR feature is in the base NSX license in the VMware Cloud Provider Program.
Edit 02/10/2017: Engineering (Abhinav Mishra) provided a way to change the P2P subnet between the Edge and the DLR. Add the following property value with CMT:
$VCLOUD_HOME/bin/cell-management-tool manage-config -n gateway.dlr.default.subnet.cidr -v <subnet CIDR>
Example: $VCLOUD_HOME/bin/cell-management-tool manage-config -n gateway.dlr.default.subnet.cidr -v 169.254.255.248/30
No need for cell reboot.
Edit 03/10/2017: Existing Org VDC networks can be migrated between traditional, DLR-based, and sub-interface based networks in all directions in a non-disruptive way with running VMs attached.