In vCloud Director 8.20 (and older versions), system administrators (the provider context) could use local, LDAP and vSphere SSO accounts. vCloud Director 9.0 replaces vSphere SSO accounts with more generic SAML2 accounts, which means you can use the same IdP mechanism in both the tenant and the system context.
This change, however, breaks the previous vSphere SSO federation, which was as simple as entering the vSphere Lookup Service URL and enabling vSphere Single Sign-On with a check box (which is no longer there in vCloud Director 9.0).
Here is the procedure to enable vSphere Single Sign-On federation in vCloud Director 9.0:
1. Login to vCloud Director as a system administrator and from Administration > System Settings > Federation download the metadata document (spring_saml_metadata.xml) from the link provided (../cloud/org/System/saml/metadata/alias/vcd). Make sure the certificate (below) is valid.
2. Login to the vSphere Web Client as the SSO administrator and go to Administration > Single Sign-On > Configuration > SAML Service Providers.
3. Import the metadata from step #1.
4. Download the vsphere.local.xml metadata from the link provided there.
5. Go back to vCloud Director, check Use SAML Identity Provider and upload the metadata from step #4.
6. Note that the Import Users/Groups source now changes from vSphere SSO to SAML.
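If you want to script step #1, here is a minimal Python sketch that downloads the SP metadata from the URL mentioned above. The host name is a placeholder and I am assuming the metadata endpoint is reachable from your workstation; adjust certificate verification for your environment.

```python
# Minimal sketch: fetch the vCloud Director SP metadata (spring_saml_metadata.xml)
# referenced in step #1. The host name is a placeholder; verify=False is only
# acceptable for lab/self-signed certificates.
import requests

VCD_HOST = "vcd.example.com"  # placeholder - replace with your cell / load balancer FQDN
url = f"https://{VCD_HOST}/cloud/org/System/saml/metadata/alias/vcd"

resp = requests.get(url, verify=False)  # use verify=True with a trusted certificate
resp.raise_for_status()

with open("spring_saml_metadata.xml", "wb") as f:
    f.write(resp.content)
print(f"Saved SP metadata ({len(resp.content)} bytes)")
```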
vCloud Director version 9 introduces support for the last major missing NSX feature – the distributed logical router (DLR). DLR provides an optimized router that performs routing between different logical switches directly in the hypervisor, in a distributed fashion. The routing always happens in the hypervisor running the source VM, which means the traffic traverses at most two ESXi hosts (source and destination) and no tromboning through a third host running a router VM is necessary. Read here for a technical deep dive into how this works. This not only provides much better performance than traditional Edge GW routing, but also scales up to 1000 routed logical networks (as opposed to 10 on an Edge GW, or up to 209 if a trunk port is enabled).
Generally, DLR should be used only for routing between VXLAN based logical switches, although NSX supports VLAN networks with certain caveats as well. Dynamic routing protocols are also supported and are managed by the Control VM of the DLR.
Now let’s look at how vCloud Director implements DLR. The main focus was on making DLR very simple to use and seamlessly integrated with the existing Org VDC networking concepts.
DLR is enabled on an Org VDC Edge Gateway, which must already be converted to advanced networking. You cannot use DLR without an Org VDC Edge Gateway! There must be one free interface on the Edge (you will see why later on).
Once DLR is enabled, a logical DLR instance is created in NSX in headless mode, without a DLR Control VM (the instance is named in NSX vse-dlr-<GW name> (<UUID>)). vCloud Director can get away without the Control VM because dynamic routing is not necessary – see below.
The DLR instance uplink interface is connected to the Org VDC Edge GW with a P2P connection using the 10.255.255.248/30 subnet. The DLR uses the .250 IP address and the Org VDC Edge GW uses .249. This subnet is hardcoded and cannot overlap with existing Org VDC Edge GW subnets. Obviously, the Org VDC Edge GW needs at least one free interface.
The DLR has its default gateway set to the Org VDC Edge GW interface (10.255.255.249).
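Just to make the hardcoded transit addressing explicit, here is a tiny Python sketch (standard ipaddress module) showing that the 10.255.255.248/30 subnet leaves exactly two usable addresses – .249 for the Org VDC Edge GW and .250 for the DLR uplink:

```python
# Sketch: the hardcoded /30 transit subnet between the Org VDC Edge GW and the DLR.
from ipaddress import ip_network

p2p = ip_network("10.255.255.248/30")
hosts = list(p2p.hosts())        # a /30 has exactly two usable host addresses
edge_gw, dlr_uplink = hosts      # .249 = Edge GW (DLR default gateway), .250 = DLR uplink

print(f"Transit subnet : {p2p}")
print(f"Org VDC Edge GW: {edge_gw}")     # 10.255.255.249
print(f"DLR uplink     : {dlr_uplink}")  # 10.255.255.250
```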
New Org VDC networks can now be created in the Org VDC with the choice to attach them to the Edge Gateway (as a regular interface or a subinterface in a trunk) or to attach them to the DLR instance. For each distributed Org VDC network, a static route is created on the Org VDC Edge Gateway pointing to the DLR uplink interface. This means there is no need for dynamic routing protocols on the DLR instance.
The diagram below shows the networking topology of such a setup.
In the example you can see three Org VDC networks: one (blue) traditional network (10.10.10.0/24) attached directly to the Org VDC Edge GW, and two (purple and orange) distributed networks (192.168.0.0/24 and 192.168.1.0/24) connected through the DLR instance. The P2P connection between the Org VDC Edge GW and the DLR instance is green.
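To make the static-route mechanism from the example concrete, the following Python sketch builds the routes that end up on the Org VDC Edge GW for the two distributed networks above. The network names and the helper structure are purely illustrative – vCloud Director programs these routes itself:

```python
# Sketch: the static routes vCloud Director adds on the Org VDC Edge GW for each
# distributed Org VDC network in the example above. Names are illustrative only.
DLR_UPLINK = "10.255.255.250"   # DLR side of the P2P transit link

distributed_networks = {
    "purple-net": "192.168.0.0/24",
    "orange-net": "192.168.1.0/24",
}

edge_static_routes = [
    {"network": cidr, "next_hop": DLR_UPLINK, "description": f"route to {name}"}
    for name, cidr in distributed_networks.items()
]

for route in edge_static_routes:
    print(f"{route['network']:<16} via {route['next_hop']}  # {route['description']}")
```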
DHCP relay agents are automatically configured on the DLR instance for each distributed Org VDC network, pointing to the DHCP relay server – the Org VDC Edge GW interface (10.255.255.249). To enable the DHCP service for a particular distributed Org VDC network, a DHCP pool with the proper IP range just needs to be created manually on the Org VDC Edge Gateway. If Auto Configure DNS is enabled, DHCP will provide the IP address of the Org VDC Edge P2P interface to the DLR instance.
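The split of responsibilities can be sketched as well – relay agents are created automatically on the DLR, while the DHCP pools are the manual piece on the Org VDC Edge GW. Network names, gateways, and pool ranges below are illustrative assumptions, not an actual API payload:

```python
# Sketch: DHCP responsibilities for distributed Org VDC networks.
# - Relay agents: created automatically on the DLR, one per distributed network,
#   all pointing to the relay server 10.255.255.249 (the Edge GW P2P interface).
# - DHCP pools: created manually on the Org VDC Edge GW, one per network.
RELAY_SERVER = "10.255.255.249"

distributed_networks = {
    "purple-net": {"gateway": "192.168.0.1", "pool": ("192.168.0.100", "192.168.0.199")},
    "orange-net": {"gateway": "192.168.1.1", "pool": ("192.168.1.100", "192.168.1.199")},
}

dlr_relay_agents = [
    {"network": name, "listener": cfg["gateway"], "relay_to": RELAY_SERVER}
    for name, cfg in distributed_networks.items()
]
edge_dhcp_pools = [
    {"network": name, "range": f"{cfg['pool'][0]}-{cfg['pool'][1]}"}
    for name, cfg in distributed_networks.items()
]

print("Automatic on DLR :", dlr_relay_agents)
print("Manual on Edge GW:", edge_dhcp_pools)
```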
Up to 1000 distributed Org VDC networks can be connected to one Org VDC Edge GW (one DLR instance per Org VDC Edge GW).
Some networking features (such as L2 VPN) are not supported on the distributed Org VDC networks.
VLAN based Org VDC networks cannot be distributed. The Org VDC must use VXLAN network pool.
IPv6 is not supported by the DLR.
vApp routed networks cannot be distributed.
The tenant can override the automatic DHCP and static route configurations done by vCloud Director for distributed networks on the Org VDC Edge GW. The tenant cannot modify the P2P connection between the Edge and DLR instance.
Disabling DLR on an Org VDC Edge Gateway is possible, but all distributed networks must be removed first.
Both enabling and disabling DLR on an Org VDC Edge Gateway are, by default, system administrator only operations. It is possible to grant these rights to a tenant with the granular RBAC introduced in vCloud Director 8.20.
The DLR feature is included in the base NSX license in the VMware Cloud Provider Program.
Edit 02/10/2017: Engineering (Abhinav Mishra) provided a way to change the P2P subnet between the Edge and the DLR. Add the following property value with the cell management tool (CMT):
vCloud Director uses network pools to programmatically create on-demand L2 network segments for Org VDC and vApp networks. Network pools can be based on VLANs, VXLAN, port groups, or the legacy (deprecated) vCloud Network Isolation (VCDNI) technology.
The VXLAN network pool is the recommended option, as it scales the best. Until version 9, vCloud Director would automatically create a new VXLAN network pool for each Provider VDC, backed by an NSX Transport Zone (also created automatically) scoped to the clusters that belong to that particular Provider VDC. This would create multiple VXLAN network pools and potential confusion about which one to use for a particular Org VDC.
In vCloud Director 9 we have the option to create our own VXLAN network pool, backed by a manually created NSX Transport Zone scoped to whichever clusters we want (and using any control plane mode).
During creation of a Provider VDC we then have the choice to create a new VXLAN network pool (the legacy behavior) or use an existing one.
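If you want to check which network pools already exist before creating the Provider VDC, the vCloud API query service can list them. Below is a minimal Python sketch assuming vCloud Director 9.0 (API version 29.0) and placeholder host/credentials; error handling and result paging are omitted:

```python
# Sketch: list existing network pools via the vCloud API query service (vCD 9.0 / API 29.0).
# Host name and credentials are placeholders; verify=False is only for lab certificates.
import requests
import xml.etree.ElementTree as ET

VCD = "https://vcd.example.com"
ACCEPT = "application/*+xml;version=29.0"

# Log in as a system administrator; the session token is returned in a response header.
session = requests.post(f"{VCD}/api/sessions",
                        auth=("administrator@System", "password"),
                        headers={"Accept": ACCEPT}, verify=False)
session.raise_for_status()
token = session.headers["x-vcloud-authorization"]

# Query existing network pools so an existing VXLAN pool can be reused for the new Provider VDC.
result = requests.get(f"{VCD}/api/query?type=networkPool",
                      headers={"Accept": ACCEPT, "x-vcloud-authorization": token},
                      verify=False)
result.raise_for_status()

for record in ET.fromstring(result.content):
    name = record.get("name")   # NetworkPoolRecord elements carry a name attribute
    if name:
        print(name)
```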
Advantages of the new feature are:
No more clutter from a large number of VXLAN network pools (if there are many Provider VDCs)
A simpler way to use hybrid or unicast control plane modes (vCloud Director would always default to multicast before)
Control over the scope of VXLAN networks – especially useful for sharing Org VDC networks between Org VDCs from different Provider VDCs
Adherence to the best practice of scoping the transport zone to the whole vDS (more here)
I plan to blog about the major new features in the following days, so for now I will just provide a list of all the features, categorized by their exposure to tenants or the provider.
New features in tenant context:
HTML5 user interface, which provides simplified VM deployment workflows (no vApp needed), is customizable, and also provides VM metrics (those coming from the Cassandra DB)
Networking enhancements: NSX distributed routing with up to 1000 routing-optimized east-west Org VDC networks per Org VDC Edge Gateway, security groups and tags for distributed firewall policies, and VLAN tagging for VXLAN based Org VDC networks (Virtual Guest Tagging)
vMotion of VM between vApps in the same Org VDC
New features in tenant context with provider support:
Multisite – multiple vCloud Director instances can be federated with association of individual organizations
VLAN trunk support for vCloud Director external networks