How to Migrate VMware Cloud Director from NSX-V to NSX-T

VMware Cloud Director, as a cloud management solution, is built on top of the underlying compute and networking platforms that virtualize the physical infrastructure. For compute and storage, VMware vSphere has always been used. The networking platform, however, has a more varied history: it all started with vShield Edge, which was later rebranded to vCloud Networking and Security; Cisco Nexus 1000V was briefly an option; and currently NSX for vSphere (NSX-V) and NSX-T Data Center are supported.

VMware has announced the sunsetting of NSX-V (end of general support is currently planned for January 2022) and is fully committed going forward to the NSX-T Data Center flavor. The two NSX editions, while similar, are completely different products and there is no direct upgrade path from one to the other. So it is natural that existing NSX-V users are asking how to migrate their environments to NSX-T.

NSX-T Data Center Migration Coordinator has been available for some time, but the way it works is quite destructive for Cloud Director, so it cannot be used in such environments.

Therefore, with VMware Cloud Director 10.1, VMware is releasing a compatible migration tool called VMware NSX Migration for VMware Cloud Director.

The philosophy of the tool is the following:

  • Enable granular migration of tenant workloads and networking at Org VDC granularity with minimum downtime from NSX-V backed Provider VDC (PVDC) to NSX-T backed PVDC.
  • Check and allow migration of only supported networking features.
  • Evolve with new releases of NSX-T and Cloud Director.

In other words, this is not an in-place migration. The provider will need to stand up new NSX-T backed cluster(s) next to the NSX-V backed ones in the same vCenter Server. Also, the current NSX-T feature set in Cloud Director is not equivalent to the NSX-V one, so there are networking features that cannot in principle be migrated. For a comparison of the NSX-V and NSX-T Cloud Director feature sets, see the table at the end of this blog post.

The service provider will thus need to evaluate which Org VDCs can be migrated today based on the existing limitations and functionality. Start with the simple Org VDCs and, as new releases become available, migrate the rest.
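
To get a quick picture of the environment, you can for example pull the list of Org VDCs and their backing Provider VDCs straight from the Cloud Director API. The snippet below is only an illustrative sketch and is not part of the migration tool; the endpoint, credentials and API version are placeholders for a Cloud Director 10.1 setup, and the record attribute names assume the standard adminOrgVdc query service response.

```python
# Illustrative only: list Org VDCs and their backing Provider VDCs via the
# Cloud Director query service, to help short-list NSX-V backed candidates.
# Host, credentials and API version are placeholders; adjust to your setup.
import requests
import xml.etree.ElementTree as ET

VCD_HOST = "vcd.example.com"                            # placeholder endpoint
HEADERS = {"Accept": "application/*+xml;version=34.0"}  # assumed API version for VCD 10.1

# Log in as a system administrator; the legacy sessions endpoint returns the
# session token in the x-vcloud-authorization response header.
# Assumes the VCD certificate is trusted by the client.
login = requests.post(
    f"https://{VCD_HOST}/api/sessions",
    headers=HEADERS,
    auth=("administrator@System", "********"),          # placeholder credentials
)
login.raise_for_status()
HEADERS["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

# Query all Org VDCs; each record is expected to carry the name of its Provider VDC.
result = requests.get(
    f"https://{VCD_HOST}/api/query?type=adminOrgVdc&format=records",
    headers=HEADERS,
)
result.raise_for_status()

for record in ET.fromstring(result.content):
    name, pvdc = record.get("name"), record.get("providerVdcName")
    if name:                                            # skip Link elements in the result
        print(f"{name} -> backed by {pvdc}")
```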

How does the tool work?

  • It is a Python-based CLI tool that is installed and run by the system administrator. It uses public APIs to communicate with Cloud Director, NSX-T and vCenter Server to perform the migrations.
  • The environment must be prepared in such a way that an NSX-T backed PVDC exists in the same vCenter Server as the source NSX-V backed PVDC, and that their external networks are equivalent at the infrastructure level, because existing external IP addresses are seamlessly migrated as well.
  • The service provider defines which source Org VDC (NSX-V backed) is going to be migrated and which Provider VDC (NSX-T backed) is the target.
  • The service provider must prepare a dedicated NSX-T Edge Cluster whose sole purpose is to perform Layer 2 bridging of the source and destination Org VDC networks. This Edge Cluster needs one node for each migrated network and must be deployed in the NSX-V prepared cluster, as it will perform the bridging from the VXLAN port group to the NSX-T Overlay (Geneve) Logical Segment.
  • When the tool is started, it will first discover the source Org VDC feature set and assess whether any incompatible (unsupported) features are used. If so, the migration will be halted (a minimal sketch of such a check follows this list).
  • Then it will create the target Org VDC, start cloning the network topology, establish bridging, disconnect the target networks and run basic network checks to verify that the bridges work properly. If they do not, a roll-back is performed (the bridges and the target Org VDC are destroyed).
  • In the next step the north/south networking flow will be reconfigured to go through the target Org VDC. This is done by disconnecting the source networks from the gateway and reconnecting the target ones. A brief north/south network disruption is expected during this step. Also note that the source Org VDC Edge GW needs to be temporarily connected to a dummy network, as NSX-V requires at least one connected interface on the Edge at all times.
  • Each vApp is then vMotioned from the source Org VDC to the target one. As this is a live vMotion, no significant network or compute disruption is expected.
  • Once the provider verifies the correct functionality of the target Org VDC, she can manually trigger the cleanup step, which migrates the source catalogs, destroys the bridges and the source Org VDC, and renames the target Org VDC.
  • Rinse and repeat for the other Org VDCs.
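
To make the assessment step above more concrete, here is a minimal, self-contained sketch of the kind of feature check the tool performs before touching anything. The feature names and the supported set are made-up assumptions for illustration only; the authoritative compatibility list is in the tool's release notes and user guide.

```python
# Illustrative pre-migration check: compare the networking features an Org VDC
# uses against the set the NSX-T backed target supports, and report blockers.
# The feature names and SUPPORTED_TARGET_FEATURES are assumptions for this
# example, not the tool's actual compatibility matrix.

SUPPORTED_TARGET_FEATURES = {"nat", "firewall", "dhcp", "ipsec_vpn", "static_routing"}

def assess_org_vdc(name, used_features):
    """Return a sorted list of features that would block migration of this Org VDC."""
    blockers = sorted(set(used_features) - SUPPORTED_TARGET_FEATURES)
    if blockers:
        print(f"{name}: migration halted, unsupported features: {', '.join(blockers)}")
    else:
        print(f"{name}: no blockers found, eligible for migration")
    return blockers

# Example run with made-up Org VDCs and feature sets.
assess_org_vdc("tenant-a-vdc", {"nat", "firewall", "dhcp"})
assess_org_vdc("tenant-b-vdc", {"nat", "load_balancer", "l2_vpn"})
```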

Please make sure you read the release notes and user guide for the list of supported solutions and features. The tool will evolve rapidly; the short-term roadmap already includes pre-validation and roll-back features. You are also encouraged to provide early feedback to help VMware decide how the tool should evolve.

17 thoughts on “How to Migrate VMware Cloud Director from NSX-V to NSX-T”

    1. Thank you for the answer, Tomas. For us as a service provider this is very important; we have a lot of customers who are running outside of VXLAN, in normal VLANs, on vCloud Director.

    2. Does the migration tool support migrating “vSphere port group-backed” networks? If not, when will it support migrating them?

      1. Do you mean networks created from a Network Pool of type “Port groups backed”? Frankly, there was no intention of supporting this, as it exists only for legacy reasons and AFAIK the only legitimate use for the port group backed NP was with 3rd party switches (Cisco Nexus, IBM, …). What is your use case?

        Or do you mean a VLAN backed NP? Yes, that one will be supported.

    3. For the Network Pool type, there are “VXLAN-backed”, “VLAN-backed”, “Port groups backed” and “Geneve backed”. We are using a “Port groups backed” Network Pool in our existing deployment. Does the migration tool support “Port groups backed” as the source Network Pool? If not, when will it? And for the target network pool, which type does the migration tool support, and when?

      1. We currently support only VXLAN-backed. VLAN-backed will come in the next release.
        We do not plan to support Port groups backed, as we do not see much use for it, as I explained above. What is your use case? Why don’t you use VLAN-backed instead?

    4. We only use “Port group based” Network Pools. Our network team wants tight control over which VLAN ID is allocated to which Organization, and they requested that we use the “Port group based” Network Pool. We create the vSphere port groups on the vCenter VDS in advance, each containing one VLAN ID. Then the operator selects the port groups into the per-Org VDC network pool based on the VLAN ID.

  1. Hi T, is there an alternative method as opposed to an additional capital outlay for a new NSX-T prepared set of hosts solely used for migration purposes? This will be difficult to justify for budgetary purposes.

      1. NSX-V. In a two-tier ESG hierarchy, Org VDC networks act as point-to-point transit networks between the tenant-level ESG (exposed as an edge in VCD) and the perimeter-level ESG (not part of the VCD view).
        Conceptually, the role of those transit networks is very similar to the NSX-T auto-plumbed transit networks between T0 and T1 gateways.

        PE -> Perimeter ESG -> Tenant ESG -> VM-LAN

        – In some cases, IP-level traceability is required end to end, and public IPv4 addressing has to be used all the way to the tenant VM.

  2. Hi Thomas! Is this script supported with NSX-T 3.0 or only with NSX-T 2.5.1? I tried to migrate, but the workflow crashed at the “Create Uplink Profile” step. In the logs I see the message “List index out of range”.
