Multitenant Logging with VMware Cloud Director

The topic of multitenant logging in VMware Cloud Director (VCD), especially with NSX-T, came up recently in multiple conversations, so I will summarize the capabilities and options as they stand today.

Audit and Event Logs

Information about VCD events such as user logins, vApp operations, VM power-ons etc. is available in the Audit and Event log. Tenants can access their particular logs via the API – AuditTrail Events and Tasks (type=task query) – or directly in the UI.

Event Log UI
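
For scripted access, the same data can be pulled via the API. Below is a minimal Python sketch, assuming a valid session bearer token and API version 36.x; it uses the AuditTrail (CloudAPI) endpoint and the legacy task query mentioned above, with placeholder host and token values.

import requests

VCD = "https://vcd-01a.corp.local"    # placeholder host
TOKEN = "<bearer token>"              # obtained from a sessions login

# AuditTrail events via CloudAPI - tenants see only their own org's events
events = requests.get(
    f"{VCD}/cloudapi/1.0.0/auditTrail",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/json;version=36.2"},
)
for e in events.json().get("values", []):
    print(e.get("eventType"), e.get("timestamp"))

# Tasks via the legacy query service (type=task)
tasks = requests.get(
    f"{VCD}/api/query?type=task&format=records",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/*+json;version=36.2"},
)
for t in tasks.json().get("record", []):
    print(t.get("name"), t.get("status"))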

An additional option is to receive events and notifications via the MQTT message bus, as was described here.

NSX-V Networking Logs

With NSX-V each tenant would have one or more Org VDC Gateways, each backed by its own NSX-V Edge Service Gateway (a single VM, or two VMs in HA mode). The tenant could configure their own syslog endpoint and have all Edge logs sent there. As each Edge supports two syslog entries, the provider could set up their own syslog endpoint as well to receive the same logs.

Tenant Edge Gateway Syslog Configuration
Provider Edge Gateway Syslog Configuration

And this is how the configuration looks in NSX-V:

Distributed firewall logs are generated in the vmkernel of each ESXi host. This means they are sent to the syslog endpoint configured at the ESXi level, which is not multitenant. However, VCD configures a rule tag that is unique for each Org VDC (its UUID), so it is possible to filter logs for a specific tenant.

Org VDC UUID rule tag configured for each DFW entry in NSX-V

NSX-T Networking Logs

Org VDC gateways in NSX-T backed Org VDCs are implemented as NSX-T Tier-1 gateways, which are logical entities on shared NSX-T Edge Nodes. This means it is not possible to configure a tenant's syslog on them as was done with NSX-V. Instead, VCD configures a log label for each GW firewall entry. The label is a composite of the first 10 characters of the UUID of the GW parent object (Org VDC or Data Center Group), 5 characters of the GW name and 5 characters of the GW UUID.

Note that you need at least NSX-T 3.2 and VCD 10.4.

For the distributed firewall the situation is identical to NSX-V. The only difference is that the log label is a composite of the first 5 characters of the Data Center Group name and 27 characters of its UUID.

NSX-T DFW Log Label
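
To make the label formats concrete, here is a small Python sketch that assembles the labels from the names/UUIDs as described above and filters matching syslog lines. The exact separator and concatenation are my assumption for illustration; the NSX-V case simply filters on the Org VDC UUID rule tag.

def gw_fw_label(parent_uuid: str, gw_name: str, gw_uuid: str) -> str:
    # Gateway firewall label: first 10 characters of the parent (Org VDC or
    # Data Center Group) UUID, 5 of the GW name and 5 of the GW UUID.
    return f"{parent_uuid[:10]}-{gw_name[:5]}-{gw_uuid[:5]}"

def dfw_label(dc_group_name: str, dc_group_uuid: str) -> str:
    # DFW label: first 5 characters of the Data Center Group name
    # plus 27 characters of its UUID.
    return f"{dc_group_name[:5]}-{dc_group_uuid[:27]}"

def tenant_lines(log_lines, tag: str):
    # Works for NSX-T log labels as well as the NSX-V Org VDC UUID rule tag.
    return (line for line in log_lines if tag in line)

# Usage with made-up identifiers and a hypothetical log file:
label = dfw_label("ACME-DCG", "1b8a200b-7ee7-47d5-81a1-a0dcb3161452")
with open("/var/log/dfw.log") as f:
    for line in tenant_lines(f, label):
        print(line, end="")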

Additional improvements are planned in NSX-T 4.1+ to allow multitenant access to other log types.

Log Filtering and Forwarding

To test the functionality you can use vRealize Log Insight, which supports up to 10 log forwarders. These can be configured in the Log Management section. In the screenshot below you can see Distributed Firewall (NSX-V and NSX-T) and Gateway Firewall (NSX-T) logs being forwarded to the tenant syslog endpoint 10.0.2.29.

vRealize Log Insight filtering and forwarding

Control System Admin Access to VMware Cloud Director

When VMware Cloud Director is deployed in a public environment, it is a good practice to restrict system admin access to specific networks only, so that no brute force attack can be mounted against the publicly available UI/API endpoints.

There is actually a relatively easy way to achieve this with any web application firewall (WAF) that supports URI access filtering. The strategy is to protect only the provider authentication endpoints, which is much easier than trying to distinguish between provider and tenant URIs.

As the access (attack) can come either through the UI or the API, the solution should address both. Let us first talk about the UI. The tenants and the provider use specific URLs to access their system/org context, but we do not really need to care about this at all. The UI actually uses the (public) APIs, so there is nothing needed to harden the UI specifically if we harden the API endpoint. Well, the OAuth and SAML logins are an exception, so let me tackle them separately.

So how can you authenticate to VCD via API?

Integrated Authentication

The integrated basic authentication consisting of login/password is used for VCD local accounts and LDAP accounts. The system admin (provider context) uses the /cloudapi/1.0.0/sessions/provider API endpoint, while tenants use /cloudapi/1.0.0/sessions.
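
As an illustration, here is a minimal Python sketch of the provider login flow, assuming a local or LDAP system administrator account and API version 36.x (host and credentials are placeholders). The session JWT is returned in the X-VMWARE-VCLOUD-ACCESS-TOKEN response header.

import requests

VCD = "https://vcd-01a.corp.local"        # placeholder host
ACCEPT = "application/json;version=36.2"

# Provider (system admin) login - note the user@System qualifier
resp = requests.post(
    f"{VCD}/cloudapi/1.0.0/sessions/provider",
    auth=("administrator@System", "<password>"),
    headers={"Accept": ACCEPT},
)
resp.raise_for_status()
token = resp.headers["X-VMWARE-VCLOUD-ACCESS-TOKEN"]   # JWT for subsequent calls

# A tenant login follows the same pattern against /cloudapi/1.0.0/sessions
# with user@<org-name> credentials.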

The legacy API endpoint /api/sessions (common for both providers and tenants) has been deprecated since API version 33.0 (introduced in VCD 10.0). Note that deprecated does not mean removed – it is still available even with API version 36.x – so you can expect it to be around for some time, as VCD keeps its APIs backward compatible for a few years.

You might notice that in the Feature Flags section there is the possibility to enable “Legacy Login Removal”.

Feature Flags

Enabling this feature will disable legacy login for both tenants and providers, however only if you use the alpha API version (in the case of VCD 10.3.3.1 it is 37.0.0-alpha-1652216327). So this is really only useful for testing your own tooling, where you can force the usage of that particular API version. The UI and any 3rd party tooling will still use the main (supported) API versions, where the legacy endpoint will still work.

However, you can forcefully disable it for the provider context for any API version with the following CMT command (run from any cell, no need to restart the services):

/opt/vmware/vcloud-director/bin/cell-management-tool manage-config -n vcloud.api.legacy.nonprovideronly -v true

Providers will then need to use only the new /cloudapi/1.0.0/sessions/provider endpoint. So be careful, as it might break some legacy tools!

API Access Token Authentication

This is a fairly new method of authentication to VCD (introduced in version 10.3.1) that uses a once-generated secret token for API authentication. It is mainly used by automation or orchestration tools. The actual method of generating a session token requires access to the tenant or provider OAuth API endpoints:

/oauth/tenant/<tenant_name>/token

/oauth/provider/token

This makes it easy to disable the provider context via a URI filter.
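
For completeness, this is roughly how such an API access token is exchanged for a session in the provider context (a hedged sketch; host and token are placeholders):

import requests

VCD = "https://vcd-01a.corp.local"           # placeholder host
API_TOKEN = "<generated API access token>"   # created beforehand in VCD

# Exchange the long-lived API token for a short-lived session (provider context)
resp = requests.post(
    f"{VCD}/oauth/provider/token",
    headers={"Accept": "application/json"},
    data={"grant_type": "refresh_token", "refresh_token": API_TOKEN},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Tenants would POST to /oauth/tenant/<tenant_name>/token instead,
# which is exactly why a URI filter on /oauth/provider is sufficient here.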

SAML/OAuth Authentication via UI

Here we must distinguish between the API and UI behavior. For SAML, the UI uses the /login/org/<org-name>/… endpoint. The provider context uses the default SYSTEM org as the org name. So we must filter URIs starting with /login/org/SYSTEM.

For OAuth the UI uses the same endpoints as API access token authentication (/oauth/tenant vs /oauth/provider), plus /login/oauth?service=provider for the provider login.

For API SAML/OAuth logins, the cloudapi/1.0.0/sessions vs cloudapi/1.0.0/sessions/provider endpoints are used.

WAF Filtering Example

Here is an example of how to set up URI filtering with VMware NSX Advanced Load Balancer.

  1. We obviously need to set up the VCD cell (SSL) pool and a Virtual Service for the external IP and port 443 (SSL).
  2. The virtual service application profile must be set to System-Secure-HTTP, as we need to terminate SSL sessions on the load balancer in order to inspect the URI. That means the public SSL certificate must be uploaded to the load balancer as well. The cells can actually use self-signed certs, especially if you use the new console proxy that does not require SSL pass-through and works on port 443.
  3. In the virtual service go to Policies > HTTP Request and create the following rule (the same logic is also sketched in code below):
    Rule Name: Provider Access
    Client IP Address: Is Not: <admin subnets>
    Path: Criteria – Begins with:
    /cloudapi/1.0.0/sessions/provider
    /oauth/provider
    /login/oauth?service=provider
    /login/org/SYSTEM
    Content Switch: Local response – Status Code: 403.
WAF Access Rule
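
If you want to sanity-check the rule logic outside of NSX Advanced Load Balancer, the same decision can be expressed in a few lines of Python (purely illustrative; the admin subnet is a placeholder, the paths are the ones from the rule above):

from ipaddress import ip_address, ip_network

ADMIN_SUBNETS = [ip_network("10.0.0.0/24")]    # placeholder admin networks
PROTECTED_PREFIXES = (
    "/cloudapi/1.0.0/sessions/provider",
    "/oauth/provider",
    "/login/oauth?service=provider",
    "/login/org/SYSTEM",
)

def allow(client_ip: str, path_and_query: str) -> bool:
    # Return False (i.e. respond with HTTP 403) for provider auth URIs
    # requested from non-admin subnets.
    if not path_and_query.startswith(PROTECTED_PREFIXES):
        return True
    return any(ip_address(client_ip) in net for net in ADMIN_SUBNETS)

print(allow("192.0.2.10", "/cloudapi/1.0.0/sessions/provider"))  # False -> 403
print(allow("10.0.0.5", "/cloudapi/1.0.0/sessions/provider"))    # True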

And this is what you can observe when trying to log in via integrated authentication from non-authorized subnets:

And here is an example of SAML login:

How to Move (Live) vApps Across Org VDCs

VMware Cloud Director has a secret, not well known, API-only feature that allows moving vApps across Org VDCs while they are running. This feature was purposely built for the NSX-V to NSX-T Migration Tool, but it can be used for other use cases as well, hence the reason to shed more light on it here.

We should start by mentioning that vApp migration across Org VDCs has been around since forever – in the UI you can select an existing vApp and you will find the Move command in the action menu. But that is something completely different – that method performs a (vSphere) cloning operation in the background followed by deletion of the source VM(s). Thus it is slow, requires the vApp to be powered off and creates a new identity for the vApp and its VMs after the move (their UUIDs will change). The UI uses the API method POST /vdc/{id}/action/cloneVApp with the flag IsSourceDelete set to true.

So the above method is *not* the subject of this article – instead we will talk about the API method POST /vdc/{id}/action/moveVApp.

The main differences are:

  • vMotion (live, shared-nothing and cross-vCenter) is used
  • the identity of the vApp and VMs does not change (UUIDs are retained)
  • the vApp can be in running state
  • VMs can be connected to Named (independent) disks
  • fast provisioning (linked clones) is supported

The moveVApp API is fairly new and still evolving. For example, VMware Cloud Director 10.3.2 added support for moving routed vApps. Movement of running encrypted vApps will be supported in the future. So be aware there might be limitations based on your VCD version.

The vApp can be moved across Org VDCs, Provider VDCs, clusters and vCenters of the same tenant, but it will not work across associated Orgs, for example. It also cannot be used for moving vApps across clusters/resource pools within the same Org VDC (for that use the Migrate VM UI/API). Obviously the underlying vSphere platform must support vMotion across the involved clusters or vCenters. Changing the NSX backing (V to T) is also supported.

The API method is called against the target Org VDC endpoint with a quite elaborate payload that must describe which vApp is being moved, what the target network configuration will look like (obviously the parent Org VDC networks will change) and which storage, compute or placement policies every vApp VM will use at the target.

Note that if a VM is connected to media (an ISO), it must be accessible to the target Org VDC (the ISO is not migrated).

An example is worth 1000 words:

POST https://{{host}}/api/vdc/5b2abda9-aa2e-4745-a33b-b4b8fa1dc5f4/action/moveVApp

Content-Type:application/vnd.vmware.vcloud.MoveVAppParams+xml
Accept:application/*+xml;version=36.2

<?xml version="1.0"?>
<MoveVAppParams xmlns="http://www.vmware.com/vcloud/v1.5" xmlns:ns7="http://schemas.dmtf.org/ovf/envelope/1" xmlns:ns8="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData" xmlns:ns9="http://www.vmware.com/schema/ovf">
  <Source href="https://vcd-01a.corp.local/api/vApp/vapp-96d3a015-4a08-4c59-93fa-384b41d4e453"/>
  <NetworkConfigSection>
    <ns7:Info>The configuration parameters for logical networks</ns7:Info>
       <NetworkConfig networkName="vApp-192.168.40.0">
            <Configuration>
                <IpScopes>
                    <IpScope>
                        <IsInherited>false</IsInherited>
                        <Gateway>192.168.40.1</Gateway>
                        <Netmask>255.255.255.0</Netmask>
                        <SubnetPrefixLength>24</SubnetPrefixLength>
                        <IsEnabled>true</IsEnabled>
                        <IpRanges>
                            <IpRange>
                                    <StartAddress>192.168.40.2</StartAddress>
                                    <EndAddress>192.168.40.99</EndAddress>
                            </IpRange>
                        </IpRanges>
                    </IpScope>
                </IpScopes>
                <ParentNetwork href="https://vcd-01a.corp.local/api/admin/network/1b8a200b-7ee7-47d5-81a1-a0dcb3161452" id="1b8a200b-7ee7-47d5-81a1-a0dcb3161452" name="Isol_192.168.33.0-v2t"/>
                <FenceMode>natRouted</FenceMode>
                <RetainNetInfoAcrossDeployments>false</RetainNetInfoAcrossDeployments>
                <Features>
                    <FirewallService>
                        <IsEnabled>true</IsEnabled>
                        <DefaultAction>drop</DefaultAction>
                        <LogDefaultAction>false</LogDefaultAction>
                        <FirewallRule>
                            <IsEnabled>true</IsEnabled>
                            <Description>ssh-VM6</Description>
                            <Policy>allow</Policy>
                            <Protocols>
                                <Tcp>true</Tcp>
                            </Protocols>
                            <DestinationPortRange>22</DestinationPortRange>
                            <DestinationVm>
                                <VAppScopedVmId>88445b8a-a9c4-43d5-bfd8-3630994a0a88</VAppScopedVmId>
                                <VmNicId>0</VmNicId>
                                <IpType>assigned</IpType>
                            </DestinationVm>
                            <SourcePortRange>Any</SourcePortRange>
                            <SourceIp>Any</SourceIp>
                            <EnableLogging>false</EnableLogging>
                        </FirewallRule>
                        <FirewallRule>
                            <IsEnabled>true</IsEnabled>
                            <Description>ssh-VM5</Description>
                            <Policy>allow</Policy>
                            <Protocols>
                                <Tcp>true</Tcp>
                            </Protocols>
                            <DestinationPortRange>22</DestinationPortRange>
                            <DestinationVm>
                                <VAppScopedVmId>e61491e5-56c4-48bd-809a-db16b9619d63</VAppScopedVmId>
                                <VmNicId>0</VmNicId>
                                <IpType>assigned</IpType>
                            </DestinationVm>
                            <SourcePortRange>Any</SourcePortRange>
                            <SourceIp>Any</SourceIp>
                            <EnableLogging>false</EnableLogging>
                        </FirewallRule>
                        <FirewallRule>
                            <IsEnabled>true</IsEnabled>
                            <Description>Allow all outgoing traffic</Description>
                            <Policy>allow</Policy>
                            <Protocols>
                                <Any>true</Any>
                            </Protocols>
                            <DestinationPortRange>Any</DestinationPortRange>
                            <DestinationIp>external</DestinationIp>
                            <SourcePortRange>Any</SourcePortRange>
                            <SourceIp>internal</SourceIp>
                            <EnableLogging>false</EnableLogging>
                        </FirewallRule>
                    </FirewallService>
                    <NatService>
                        <IsEnabled>true</IsEnabled>
                        <NatType>portForwarding</NatType>
                        <Policy>allowTraffic</Policy>
                        <NatRule>
                            <Id>65537</Id>
                            <VmRule>
                                <ExternalIpAddress>192.168.33.2</ExternalIpAddress>
                                <ExternalPort>2222</ExternalPort>
                                <VAppScopedVmId>e61491e5-56c4-48bd-809a-db16b9619d63</VAppScopedVmId>
                                <VmNicId>0</VmNicId>
                                <InternalPort>22</InternalPort>
                                <Protocol>TCP</Protocol>
                            </VmRule>
                        </NatRule>
                        <NatRule>
                            <Id>65538</Id>
                            <VmRule>
                                <ExternalIpAddress>192.168.33.2</ExternalIpAddress>
                                <ExternalPort>22</ExternalPort>
                                <VAppScopedVmId>88445b8a-a9c4-43d5-bfd8-3630994a0a88</VAppScopedVmId>
                                <VmNicId>0</VmNicId>
                                <InternalPort>22</InternalPort>
                                <Protocol>TCP</Protocol>
                            </VmRule>
                        </NatRule>
                    </NatService>
                </Features>
                <SyslogServerSettings/>
                <RouterInfo>
                    <ExternalIp>192.168.33.2</ExternalIp>
                </RouterInfo>
                <GuestVlanAllowed>false</GuestVlanAllowed>
                <DualStackNetwork>false</DualStackNetwork>
            </Configuration>
            <IsDeployed>true</IsDeployed>
        </NetworkConfig>
  </NetworkConfigSection>
  <SourcedItem>
    <Source href="https://vcd-01a.corp.local/api/vApp/vm-fa47982a-120a-421a-a321-62e764e10b80"/>
    <InstantiationParams>
      <NetworkConnectionSection>
        <ns7:Info>Network Connection Section</ns7:Info>
        <PrimaryNetworkConnectionIndex>0</PrimaryNetworkConnectionIndex>
                <NetworkConnection network="vApp-192.168.40.0" needsCustomization="false">
                    <NetworkConnectionIndex>0</NetworkConnectionIndex>
                    <IpAddress>192.168.40.2</IpAddress>
                    <IpType>IPV4</IpType>
                    <ExternalIpAddress>192.168.33.3</ExternalIpAddress>
                    <IsConnected>true</IsConnected>
                    <MACAddress>00:50:56:28:00:30</MACAddress>
                    <IpAddressAllocationMode>POOL</IpAddressAllocationMode>
                    <SecondaryIpAddressAllocationMode>NONE</SecondaryIpAddressAllocationMode>
                    <NetworkAdapterType>VMXNET3</NetworkAdapterType>
                </NetworkConnection>
      </NetworkConnectionSection>
    </InstantiationParams>
    <StorageProfile href="https://vcd-01a.corp.local/api/vdcStorageProfile/bdf68bda-8ab9-4ec1-970a-fafc34cdcf5b"/>
  </SourcedItem>
    <SourcedItem>
    <Source href="https://vcd-01a.corp.local/api/vApp/vm-a1f87b29-60e7-45ee-86e2-5b749a81ed19"/>
    <InstantiationParams>
      <NetworkConnectionSection>
        <ns7:Info>Network Connection Section</ns7:Info>
        <PrimaryNetworkConnectionIndex>0</PrimaryNetworkConnectionIndex>
                <NetworkConnection network="vApp-192.168.40.0" needsCustomization="false">
                    <NetworkConnectionIndex>0</NetworkConnectionIndex>
                    <IpAddress>192.168.40.3</IpAddress>
                    <IpType>IPV4</IpType>
                    <ExternalIpAddress>192.168.33.2</ExternalIpAddress>
                    <IsConnected>true</IsConnected>
                    <MACAddress>00:50:56:28:00:37</MACAddress>
                    <IpAddressAllocationMode>POOL</IpAddressAllocationMode>
                    <SecondaryIpAddressAllocationMode>NONE</SecondaryIpAddressAllocationMode>
                    <NetworkAdapterType>VMXNET3</NetworkAdapterType>
                </NetworkConnection>
      </NetworkConnectionSection>
    </InstantiationParams>
    <StorageProfile href="https://vcd-01a.corp.local/api/vdcStorageProfile/bdf68bda-8ab9-4ec1-970a-fafc34cdcf5b"/>
  </SourcedItem>
</MoveVAppParams>

In our case this is a routed two-VM vApp where both VMs are connected to the same routed vApp network named vApp-192.168.40.0, with a set of port forwarding NAT rules and firewall policies configured on the vApp router.

  • As said above, it is a POST call against the target Org VDC – in our case 5b2abda9-aa2e-4745-a33b-b4b8fa1dc5f4.
  • The payload starts with the source vApp (vapp-96d3a015-4a08-4c59-93fa-384b41d4e453).
  • Then follows the NetworkConfigSection. Here we are describing the target vApp network topology. In general this section should be identical to the source vApp payload, with the only difference being that the ParentNetwork must refer to an Org VDC network from the target Org VDC. So in our case we are describing the subnet and IP pools of the vApp network (vApp-192.168.40.0), its new parent Org VDC network (Isol_192.168.33.0-v2t) and the way these two are connected (bridged or natRouted). As we are using a routed vApp, it is natRouted in our case. Then follow the (optional) routed vApp features such as firewall policies or NAT rules. They should be pretty self-explanatory and again they are usually identical to the source vApp's NetworkConfig section. Note that VM object rules use VAppScopedVmId, which is a random-looking UUID that changes every time the vApp is moved.
    We should highlight that IP addresses allocated to the vApp (its VMs or vApp routers) from the source Org VDC network are retained during the migration (and must be available in the target Org VDC network static IP pool).
  • After the NetworkConfigSection follow details of every vApp VM (SourcedItem) – to which vApp network(s) defined above the VM network interface(s) will connect (with which IP/MAC and IPAM mode) and which storage, placement and compute policies (StorageProfile, VdcComputePolicy and ComputePolicy) it should use. For the NIC section you usually take the source VM's equivalent info. The vApp network name must be the one defined in the NetworkConfigSection. For the policies you must obviously use the target Org VDC policies, as these will change.
  • BTW, the storage policy can also be defined at the disk level with the DiskSettings element (the following excerpt shows the case when a named disk is connected):
            <DiskSettings>
                <DiskId>2016</DiskId>
                <SizeMb>8</SizeMb>
                <UnitNumber>0</UnitNumber>
                <BusNumber>1</BusNumber>
                <AdapterType>3</AdapterType>
                <ThinProvisioned>true</ThinProvisioned>
                <Disk href="https://vcd-01a.corp.local/api/disk/567bdd04-4905-4a62-95e7-9f4850f85240" id="urn:vcloud:disk:567bdd04-4905-4a62-95e7-9f4850f85240" type="application/vnd.vmware.vcloud.disk+xml" name="Disk1"/>
                <StorageProfile href="https://vcd-01a.corp.local/api/vdcStorageProfile/1f8bf2df-d28c-4bec-900c-726f20507b5b"/>
                <overrideVmDefault>true</overrideVmDefault>
                <iops>0</iops>
                <VirtualQuantityUnit>byte</VirtualQuantityUnit>
                <resizable>true</resizable>
                <encrypted>false</encrypted>
                <shareable>false</shareable>
                <sharingType>None</sharingType>
            </DiskSettings>

The actual vApp migration triggers an async operation that takes some time to complete. If you observe what is happening in VCD and vCenter, you will see that a new temporary “-generated” vApp is created in the target Org VDC, with the VMs being migrated there first. In the case of routed vApps the vApp routers (Edge Service Gateways or Tier-1 gateways) must be deployed as well. When all the vApp VMs are moved, the source vApp is removed, the target vApp with the same identity is created and the VMs from the generated vApp are relocated there. If all goes as expected, the generated vApp is removed.
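
Because the call is asynchronous, the returned Task should be polled until it reaches a terminal state. A minimal polling sketch (the task href comes from the Task element returned by the moveVApp POST; token and API version are placeholders):

import time
import requests

HEADERS = {"Authorization": "Bearer <token>",
           "Accept": "application/*+json;version=36.2"}

def wait_for_task(task_href: str, interval: int = 10) -> str:
    # Poll a VCD task href until it finishes and return the final status.
    while True:
        task = requests.get(task_href, headers=HEADERS).json()
        status = task.get("status")
        if status in ("success", "error", "aborted", "canceled"):
            return status
        time.sleep(interval)

# task_href looks like https://vcd-01a.corp.local/api/task/<task-uuid>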

Shout-out to Julian – the engineering brain behind this feature.

Layer 2 VPN to the Cloud – Part III

I feel like it is time for another update on VMware Cloud Director (VCD) capabilities regarding establishing an L2 VPN between an on-prem location and an Org VDC. The previous blog posts were written in 2015 and 2018 and do not reflect the changes related to the usage of NSX-T as the underlying cloud network platform.

The primary use case for L2 VPN to the cloud is migration of workloads to the cloud, where the L2 VPN tunnel is established temporarily until the migration of all VMs on a single network is done. The secondary use case is disaster recovery, but I feel that running an L2 VPN permanently is not the right approach.

But that is not the topic of today’s post. VCD does support setting up L2 VPN on a tenant’s Org VDC Gateway (Tier-1 GW) since version 10.2, however it is still a hidden, API-only feature (the GUI is finally coming soon … in VCD 10.3.1). The actual setup is not trivial, as the underlying NSX-T technology requires an IPSec VPN tunnel to be established first to secure the L2 VPN client-to-server communication. VMware Cloud Director Availability (VCDA) version 4.2 is an add-on disaster recovery and migration solution for tenant workloads on top of VCD, and it simplifies the setup of both the server (cloud) and client (on-prem) L2 VPN endpoints from its own UI. To reiterate, VCDA is not needed to set up L2 VPN, but it makes it much easier.

The screenshot above shows the VCDA UI plugin embedded in the VCD portal. You can see that three L2 VPN sessions have been created on VDC Gateway GW1 (NSX-T Tier-1 backed) in the ACME-PAYG Org VDC. Each session uses a different L2 VPN client endpoint type.

The on-prem client can be an existing NSX-T Tier-0 or Tier-1 GW, an NSX-T autonomous edge or a standalone Edge client. Each requires a different type of configuration, so let me discuss each separately.

NSX-T Tier-0 or Tier-1 Gateway

This is mostly suitable for tenants who are already running an NSX-T environment on-prem. They will need to set up both the IPSec and L2 VPN tunnels directly in NSX-T Manager, which is the most complicated process of the three options. On either the Tier-0 or Tier-1 GW they will first need to set up the IPSec VPN and L2 VPN client services, then the L2 VPN session must be created with the local and remote endpoint IPs and the Peer Code, which must first be retrieved via the VCD API (it is not available in the VCDA UI, but will be available in the VCD UI in 10.3.1 or newer). The peer code contains all necessary configuration for the parent IPSec session in Base64 encoding.
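
Since the peer code is just Base64-encoded configuration, you can peek inside it before pasting it into NSX-T Manager. A trivial sketch (the peer code value is a placeholder):

import base64

peer_code = "<peer code retrieved via the VCD API>"
decoded = base64.b64decode(peer_code)
print(decoded.decode("utf-8", errors="replace"))   # parent IPSec session settings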

Lastly, the local NSX-T segments to be bridged to the cloud can be configured for the session. The parent IPSec session will be created automagically by NSX-T, and after a while you should see a green status for both the IPSec and L2 VPN sessions.

Standalone Edge Client

This option leverages the very light (150 MB) OVA appliance that can be downloaded from the NSX-T download website and actually works with both NSX-V and NSX-T L2 VPN server endpoints. It does not require any NSX installation. It provides no UI and its configuration must be done at the time of deployment via OVF parameters. Again, the peer code must be provided.

Autonomous Edge

This is the preferred option for non-NSX environments. An autonomous edge is a regular NSX-T edge node that is deployed from OVA but is not connected to NSX-T Manager. During the OVA deployment the Is Autonomous Edge checkbox must be checked. It provides its own UI and much better performance and configurability. Additionally, the client tunnel configuration can be done via the VCDA on-premises appliance UI. You just need to deploy the autonomous edge appliance and VCDA will discover it and let you manage it from then on via its UI.

With this option there is no need to retrieve the Peer Code, as the VCDA plugin will retrieve all necessary information from the cloud site.

How to Unregister NSX-V Manager from VMware Cloud Director

After a successful migration from NSX-V to NSX-T in VMware Cloud Director, you might wish to unregister NSX-V Manager and completely delete it from vCenter. This is not so easy, as the whole VCD model was built on the assumption that vCenter Server and NSX-V Manager are tied together and thus retired together as well. This is obviously no longer the case, as you can now use NSX-T backed PVDCs or not use NSX at all (a new feature as of VCD 10.3).

VMware Cloud Director (version 10.3+) adds API support to unregister NSX-V Manager without removing the vCenter Server. To do so you need to use the OpenAPI PUT VirtualCenter call. You will first have to run a GET call with the VC URN to retrieve its current configuration payload, remove the nsxVManager element and then PUT it back.

Example:

GET https://{{host}}/cloudapi/1.0.0/virtualCenters/urn:vcloud:vimserver:cd0471d4-e48f-4669-8852-de1fdd2648aa

Response:

{
    "vcId": "urn:vcloud:vimserver:cd0471d4-e48f-4669-8852-de1fdd2648aa",
    "name": "vc-01a",
    "description": "",
    "username": "vcd@vsphere.local",
    "password": "******",
    "url": "https://vc-01a.corp.local",
    "isEnabled": true,
    "vsphereWebClientServerUrl": null,
    "hasProxy": false,
    "rootFolder": null,
    "vcNoneNetwork": null,
    "tenantVisibleName": "Site A",
    "isConnected": true,
    "mode": "IAAS",
    "listenerState": "CONNECTED",
    "clusterHealthStatus": "GREEN",
    "vcVersion": "7.0.0",
    "buildNumber": null,
    "uuid": "1da63a23-534a-4315-b3fa-29873d542ae5",
    "nsxVManager": {
        "username": "admin",
        "password": "******",
        "url": "http://192.168.110.24:443",
        "softwareVersion": "6.4.8"
    },
    "proxyConfigurationUrn": null
}

PUT https://{{host}}/cloudapi/1.0.0/virtualCenters/urn:vcloud:vimserver:cd0471d4-e48f-4669-8852-de1fdd2648aa

{
    "vcId": "urn:vcloud:vimserver:cd0471d4-e48f-4669-8852-de1fdd2648aa",
    "name": "vc-01a",
    "description": "",
    "username": "vcd@vsphere.local",
    "password": "******",
    "url": "https://vc-01a.corp.local",
    "isEnabled": true,
    "vsphereWebClientServerUrl": null,
    "hasProxy": false,
    "rootFolder": null,
    "vcNoneNetwork": null,
    "tenantVisibleName": "Site A",
    "isConnected": true,
    "mode": "IAAS",
    "listenerState": "CONNECTED",
    "clusterHealthStatus": "GREEN",
    "vcVersion": "7.0.0",
    "uuid": "1da63a23-534a-4315-b3fa-29873d542ae5",
    "proxyConfigurationUrn": null
}
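
The same GET / modify / PUT flow can be scripted. Here is a minimal Python sketch, assuming a provider session token and the same virtual center URN as above (host and token are placeholders):

import requests

VCD = "https://vcd-01a.corp.local"   # placeholder host
URN = "urn:vcloud:vimserver:cd0471d4-e48f-4669-8852-de1fdd2648aa"
HEADERS = {"Authorization": "Bearer <token>",
           "Accept": "application/json;version=36.2",
           "Content-Type": "application/json;version=36.2"}

url = f"{VCD}/cloudapi/1.0.0/virtualCenters/{URN}"
vc = requests.get(url, headers=HEADERS).json()

vc.pop("nsxVManager", None)          # drop the NSX-V Manager association
resp = requests.put(url, headers=HEADERS, json=vc)
resp.raise_for_status()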

In order for the NSX-V Manager removal to succeed you must make sure that:

  • Org VDCs using the vCenter Server do not have any NSX-V objects (VXLAN networks, Edge Gateways, vApp or DHCP Edges)
  • Org VDCs using the vCenter Server do not use VXLAN network pool
  • There is no VXLAN network pool managed by the to-be-removed NSX-V Manager

If all of the above is satisfied, you will not need to remove existing Provider VDCs (even if they were using NSX-V in the past). They will become NSX-less (so you will not be able to use NSX-T objects in them). NSX-T backed PVDCs will not be affected at all.