Postman and vCloud Director 9.5 Access Token Authentication

Quick post on how to configure Postman to use the new vCloud API 31.0 bearer token authentication instead of the deprecated x-vcloud-authorization header.

    1. Create your environment, if you have not done so yet, by clicking the gear icon in the top right corner. Specify the environment name and a host variable with the FQDN of the vCloud Director instance.
    2. Select the environment in the drop-down selection box next to the gear icon.
    3. Create a new POST request with the URL https://{{host}}/api/sessions
      In the Headers section, add the Accept header: application/*+xml;version=31.0
    4. Go to the Tests section and add the following code snippet:
      // grab the bearer token from the login response header ...
      var bearer = pm.response.headers.get("X-VMWARE-VCLOUD-ACCESS-TOKEN");
      // ... and save it into an environment variable for subsequent calls
      pm.environment.set("X-VMWARE-VCLOUD-ACCESS-TOKEN", bearer);

    5. In the Authorization section, select the Basic Auth type and provide the username (including @org) and password.
    6. Click Send. You should see Status: 200 OK and the response Headers and Body. Save the request into an existing or new collection.

      If you did not get 200 OK, fix the error first (wrong credentials or a typo).
    7. Notice that the Headers section of the response includes the X-VMWARE-VCLOUD-ACCESS-TOKEN. You do not need to copy it manually for subsequent API calls; it has already been picked up and saved into the environment variable by the code provided in step #4.
    8. Create a new API call, for example: GET https://{{host}}/api/org. Keep the same Accept header. Go to the Authorization tab, change the type to Bearer Token and in the token field provide {{X-VMWARE-VCLOUD-ACCESS-TOKEN}}
    9. Click Send. You should get Status: 200 OK and a list of all Organizations the user is authorized in. Save the new call into the collection as Get Organizations.

    Add additional calls to your collection as needed by repeating steps #8-9. You can now reuse your collection anytime, even on different environments. Log in first with the POST Login call (specifying the correct credentials) and then run any other calls from the collection.
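
    If you want to script the same flow outside Postman, here is a minimal Python sketch using the requests library. The host, user and password values are placeholders to substitute; add verify=False to the calls if your instance uses a self-signed certificate.

      import requests

      HOST = "vcloud.example.com"  # placeholder FQDN of the vCloud Director instance

      # POST /api/sessions with Basic auth; the bearer token is returned in a response header
      login = requests.post(
          "https://" + HOST + "/api/sessions",
          auth=("user@org", "password"),  # username including @org
          headers={"Accept": "application/*+xml;version=31.0"},
      )
      login.raise_for_status()
      token = login.headers["X-VMWARE-VCLOUD-ACCESS-TOKEN"]

      # subsequent calls pass the token as a standard Bearer Authorization header
      orgs = requests.get(
          "https://" + HOST + "/api/org",
          headers={
              "Accept": "application/*+xml;version=31.0",
              "Authorization": "Bearer " + token,
          },
      )
      print(orgs.status_code)  # expect 200 and an OrgList XML body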

vCloud Director – vCenter Server Relationship

As a cloud management platform, vCloud Director needs resources to provision the target workloads to. These resources are provided by vCenter Server (compute, storage, networks) and NSX Manager (networks and networking services).

In the past vCloud Director required a tight grip on those resources, so the recommendation and best practice was to dedicate them to a particular vCloud Director instance. System admins were discouraged from running additional workloads not managed by vCloud Director on them. However, that has changed recently, hence the need for this blog article.

vCenter Server Extension

vCloud Director used to register itself as a vCenter Server extension. That allowed it to ‘protect’ VCD-managed VMs with a special icon and a warning pop-up during vCenter edits of such VMs.

vCloud Director specific VM icon

Today, vCloud Director is quite resilient against changes to a particular VM done directly in vCenter Server, so there is no more need for those warnings. vCloud Director 9.5 thus no longer registers itself as a vCenter Server extension, so you will no longer see these icons and pop-up warnings.

As a side note, you will also see a change in VM naming. The long UUID is no longer appended to the VM name; it is replaced by a shorter suffix of four random characters.

Host Preparation

In the past, during creation of a Provider VDC, the system admin was asked for ESXi host credentials. These were needed to upload the cloud agent VIB used for certain features (thumbnails, VCDNI network encapsulation). All of these features have either been replaced by different mechanisms or deprecated, so there is no need to upload any vCloud VIBs to ESXi hosts anymore.

Additionally, a custom attribute (system.service…) used to be set on each vCloud Director managed host and each vCloud Director managed VM. This provided a way to control where vCenter DRS could vMotion VMs, through the host-to-VM compatibility option. Disabling a host would remove the custom property, and a vCloud Director VM could not be vMotioned to an unprepared host, as vCenter would complain that the host is incompatible with the VM.

 

Host Custom Attributes
VM Custom Attribute

vMotion to Unprepared Host Error

In vCloud Director 9.5 this mechanism has been completely eliminated. When a host is put into maintenance mode, it is considered unavailable to vCloud Director, so there is no longer any need to disable it first in vCloud Director. You will no longer see any host preparation dialog, and the Hosts section is simplified to the bare minimum.

So What About the Relationship?

As you can see, the vCloud Director – vCenter Server relationship is now very loose. In fact, it is no longer monogamous: vCenter Server can be married (associated) with multiple vCloud Director instances at the same time.

Why would you do that?

I can think of three use cases, but obviously our smart service providers will come up with more.

Use case 1: Test & Dev

Do you need to test a new vCloud Director release? Or provide a test instance of vCloud Director for your internal developers? There is no need to spin up a whole vSphere + NSX environment with storage, etc. Just deploy one VM with the vCloud Director bits (you can even use the appliance if you have an external DB and NFS ready) and point it to the existing vSphere/NSX endpoints.

Use case 2: Whitelabeling / Reselling

This enables a three-tier mode where the SP provides the infrastructure and each reseller gets its own (branded) vCloud Director instance to resell to their end customers. The SP needs to set up one big vSphere/NSX infrastructure and have an automated way to deploy VCD instances on top of it. Each reseller gets its own instance with system-admin-equivalent rights and manages its own tenants.

Use case 3: Uber Org Admin Role

Some end customers ask for more than the Org Admin role. They want to create their own organizations and Org VDCs to better align with their business groups. The SP can dedicate a whole VCD instance to such a customer, without the need to provision dedicated vSphere/NSX as well.

Any Caveats, Recommendations?

  • Segment the vSphere environment with clusters and resource pools for each VCD instance.
  • Use a different VCD instance name and ID for each instance.
  • Use separate accounts for both vCenter Server and NSX for each VCD instance. Give each account permission only to the resources it should see (use the vCenter No Access privilege on the clusters/RPs/folders it should not see).
  • Dedicate storage resources to each VCD instance.
  • Use a separate NSX transport zone for each VCD instance.
  • Monitor the load of multiple VCD listeners on a single VC and scale out VCs if needed. VMware does not test this kind of setup at scale.

Fun Fact

You can in fact vMotion a running VM from one vCloud Director environment to another. To do so, move it in vCenter from the source Org VDC resource pool to the destination Org VDC resource pool. You must also move it out of the VM folder (remember the No Access privilege?) so it is visible to the target VCD. Obviously, it needs to be connected to the right target networks.

Finally, you will need to remove the original vCloud UUID (with PowerCLI or similar) and let the VM be auto-discovered by the target VCD. There is no auto-removal from the original VCD, so you will need to use the process described here.


How to Change vCloud Director Installation ID

During installation of vCloud Director you must supply an installation ID from the range 1-63. Its purpose is very similar to the vCenter Server ID: it is used for the generation of unique MAC addresses for VMs running in the vCloud Director instance. The MAC address format is 00:50:56:ID:xx:xx. vCloud Director overrides the vCenter Server MAC assignments.

Obviously, MAC addresses on the same layer 2 network must always be unique. When using L2 VPN or similar network extension mechanisms, care should be taken that each vCloud Director instance participating in the network extension has a different installation ID, to guarantee non-overlapping MAC addresses of deployed VMs.
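
To illustrate the format, here is a minimal Python sketch (how vCloud Director actually enumerates and stores the addresses in its database is an internal detail):

    def vcd_mac(installation_id, counter):
        # Build a MAC in the 00:50:56:ID:xx:xx format described above; the
        # installation ID becomes the fourth octet, so two instances with
        # different IDs can never generate overlapping addresses.
        assert 1 <= installation_id <= 63
        assert 0 <= counter <= 0xFFFF  # only the last two octets vary
        return "00:50:56:%02x:%02x:%02x" % (installation_id, counter >> 8, counter & 0xFF)

    print(vcd_mac(1, 0))      # 00:50:56:01:00:00
    print(vcd_mac(2, 65535))  # 00:50:56:02:ff:ff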

Until vCloud Director 9.5 it was not possible to change the installation ID. The reason was that during installation vCloud Director actually generates all possible MAC addresses in its database, so that table would have to be regenerated on an ID change.

This can now be accomplished with the cell-management-tool mac-address-management CLI command, which takes care of the MAC address table regeneration and also reports how many MAC addresses based on the old ID still exist. Existing VMs will keep their old MAC addresses unless they are manually reset/regenerated from the vCloud Director UI or via the vCloud API.

The CMT command can either regenerate the MAC addresses with a specific seed ID that can differ from the installation ID (the --regenerate-with-seed option), or you can change the installation ID in the database first (GSS alert!) and just use the --regenerate option.

The pgAdmin screenshot below shows the ID location in the vCloud DB. For production setups this should be done with help from GSS.

Finally, here is a screenshot showing the --show-seed option, which lists the actual MAC usage based on the seed IDs.
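
For reference, the invocations could look like this (a sketch based on the options named above; the exact argument syntax may vary by build, so check the command's built-in help first):

    # show the actual MAC usage based on the seed IDs (read-only)
    ./cell-management-tool mac-address-management --show-seed

    # regenerate the MAC address table with a seed that differs from the installation ID
    ./cell-management-tool mac-address-management --regenerate-with-seed 42

    # or, after changing the installation ID in the database first (GSS alert!)
    ./cell-management-tool mac-address-management --regenerate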

What’s New in vCloud Director 9.5

After the vCloud Director 9.1 release in March, we now have the new version 9.5 out!

Here are links to the release notes and a whitepaper describing the new features.

Let me also go through the new features here so I can link additional blogs that will dive deeper into each one.

New UI

  • The tenant HTML 5 UI (accessible via the /tenant link) has been further enhanced and now has almost full feature parity with the legacy Flex UI. There might be some corner cases or small features missing, but in general tenants no longer have a reason not to use it all the time. Also, new features (e.g. multi-site networking) are available only in the new UI.
  • Some highlights:
    • VDC dashboard working across all associated Org VDCs (across one or many vCloud Director instances)
    • Task pane
    • Ribbon
    • Multisite networking
    • Independent disk support
    • Networking services improvements
  • The UI can now be customized with custom themes (at the system level) with this CSS theme generator.
  • The Provider UI (accessible via the /provider link) has also been enhanced, although the legacy Flex UI is still needed and system administrators will probably spend most of their time there.
  • Some highlights:
    • User management, IdP
    • Roles and rights management

Networking

  • IPv6 support.
    Both external and Org VDC networks (including vApp networks) can be assigned an IPv6 subnet. Note that you cannot use distributed Org VDC networks with IPv6, as the NSX Distributed Logical Router supports only IPv4.
  • Cross-VDC networking.
    In a multi vCenter Server, single NSX domain architecture, it is now possible to create universal logical switches spanning Org VDCs across vCenter Servers, connected to a universal distributed router with multiple egress Org VDC Edge Gateways. There is also a new concept of VDC groups to create (site) groupings.
  • Limited NSX-T support.
    NSX-T is a new network virtualization platform that in many aspects differs from NSX-V. The NSX-T architecture is not tied to vCenter Server, has a different routing concept (T0 and T1 routers instead of ESGs) and uses Geneve instead of VXLAN as the encapsulation protocol. Due to the huge differences between NSX-T and NSX-V, vCloud Director 9.5 currently only allows importing existing logical switches as Org VDC networks, and distributed firewalling (API only).
  • Related to the above, it is now possible to register a vCenter Server in vCloud Director without an associated NSX-V Manager (API only).

Compute

  • Org VDC compute policies.
    A user with the Edit VM CPU and Memory reservation settings right can configure VM reservations, limits and shares in any Org VDC allocation model. Org VDC maximums (quotas) are still enforced. vCloud Director will also not override reservation configurations done at the vCenter Server level. This is groundwork for future enhancements.
  • It is no longer necessary to prepare ESXi hosts for use by vCloud Director. No agents or custom attributes need to be installed or set, so provisioning or decommissioning of an ESXi host is much simpler. Read more here.

Storage

  • VM moves across VDCs (the Move to… action) or clusters no longer use the cloning method; instead they use the more efficient vCenter relocate function.

Other

  • New role-based access control with rights bundles and global roles (published/delegated to one or more tenants). Also, system admins imported from an LDAP group can have a role assigned.
  • The new /cloudapi APIs are now auto-documented with Swagger and can be viewed and executed directly from the vCloud Director API Explorer endpoint at /api-explorer. Note that /cloudapi does not replace /api; the APIs are different, and only some new features are available via the /cloudapi endpoint (H5 UI branding, vRealize Orchestrator services, UI plugins, etc.).
  • The Oracle database can no longer be used as the vCloud Director database, and the MS SQL database is now announced as deprecated.
  • The creation of legacy Edge Gateways is now deprecated and will be removed in a future release.
  • The vCloud API version is now at 31.0. Some older versions were removed (notably 5.6 and 9.0), so make sure your scripts are updated. As always, check /api/versions for the supported and deprecated list (a quick way to query it is sketched after this list).
  • The vCloud Director cell is now available as a Photon OS appliance. This can simplify greenfield deployments, although the NFS transfer share, RabbitMQ and the vCloud databases (PostgreSQL or Cassandra) are not available in appliance format yet. You can still download the vCloud Director binaries to be used in a CentOS/RHEL VM as before.
  • As of the release day, vSphere 6.0U3, 6.5U1/U2 and 6.7.0, NSX-V 6.3.5, 6.3.6 and 6.4.0-6.4.3, and NSX-T 2.2 are supported. Always check for updates here.
  • VCD-CLI (a CLI tool to manage VCD for both tenants and sys admins) and pyvcloud (the Python SDK) have been updated as well.
  • vCloud Availability 2.0.1 and vCloud Availability for Cloud-to-Cloud DR 1.0 are not supported with vCloud Director 9.5. Both will require updates, which are coming soon.
  • Changing the vCloud installation ID is now possible.
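
Regarding the /api/versions check mentioned above, here is a quick unauthenticated Python sketch (the host value is a placeholder; add verify=False for self-signed certificates):

    import requests

    HOST = "vcloud.example.com"  # placeholder FQDN

    # /api/versions requires no authentication and returns an XML document
    # listing every supported API version and whether it is deprecated
    resp = requests.get("https://" + HOST + "/api/versions")
    resp.raise_for_status()
    print(resp.text)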

 

vCloud Availability – Resizing Disk of Protected VM

A customer asked how to resize a disk of a very large VM (a file server) which is protected with vCloud Availability and thus replicated to the cloud.

It is not straightforward, as the underlying replication engine relies on changed block tracking, and the source and target disks must have the same size. In short, the replication must be stopped for a moment and re-established after the necessary disk resizing. Here is the step-by-step process:

  1. Fail over the VM via the vCloud Availability UI/API without powering it on (leave the on-prem VM running).
  2. Consolidate the VM in the cloud (this must be done by the SP, or use workarounds such as copying to a catalog and deploying back).
  3. Stop the replication of the on-prem VM (via the vSphere UI plugin).
  4. Resize the disk of the on-prem VM (including the partition and file system).
  5. Resize the disk of the cloud VM from step #2 (hardware only).
  6. Set up the replication from scratch using the ‘Use replication seeds’ option, selecting the seed of the failed-over cloud VM from step #5.