Cisco Nexus 1000V vCloud Director Considerations

I am currently designing a vCloud Director environment, and one of the requirements is to use the Cisco Nexus 1000V virtual distributed switch with VXLAN support. VXLAN is the new standard for creating virtualized networks. The other options for creating isolated networks are VLAN or VCDNI pools: VLAN is limited to 4095 virtual networks, which is usually not enough for service providers, while vCloud Director Network Isolation (VCDNI) is a VMware-proprietary MAC-in-MAC encapsulation that is harder for security or networking teams to accept. It is obvious that VXLAN is the future. However, the only way to use it in the current version of vCloud Director (1.5.1) is with the Cisco Nexus distributed switch.

Cisco Nexus 1000V also takes virtual networking out of the hands of the VMware administrators and puts it back under the responsibility of the networking team. On top of that, it can be managed with the physical Nexus 1010 appliance that runs the Virtual Supervisor Modules (VSMs). Why is that important? Some corporations have strict rules about virtual vs. hardware cost allocations, and the network team cannot take ownership of virtual entities. The hardware Nexus 1010 helps them overcome these process complexities.

vCloud Director Integration

There is no GUI option to create VXLAN-backed network pools in VCD 1.5.1. The way Nexus 1000V integrates with vCloud is that it replaces vCloud Director Network Isolation when used in combination with the Nexus vDS.

(Screenshot: Network pool options in vCloud Director 1.5.1)

To get it to work, some steps must be completed first (see the configuration sketch after the list):

  1. VXLAN must be enabled on the Nexus switch (the network segmentation manager and segmentation features must be enabled).
  2. The Nexus VSM must be registered with vShield Manager. The network creation workflow is basically: vCloud Director => vShield Manager => Virtual Supervisor Module. To avoid confusion: vShield Manager is abbreviated vSM, while the Virtual Supervisor Module is VSM. So in short: VCD => vSM => VSM.

    (Screenshot: vShield Manager VSM integration)
  3. A vmkernel port-profile with VXLAN capability must be created on the VSM.
  4. A vmknic has to be created on every ESX host that will participate in VXLAN networks. This basically acts as the uplink for the VXLAN-encapsulated traffic.
  5. The MTU on the Ethernet uplink must be increased to 1550 to accommodate the header overhead of the UDP-encapsulated packets.
  6. As multicast is used to limit broadcast traffic to only those ESX hosts that have VMs in a given VXLAN segment, IGMP snooping must be enabled on the upstream physical switches.
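
To illustrate steps 1, 3 and 5, here is a minimal NX-OS configuration sketch for the Nexus 1000V VSM. The port-profile names and the transport VLAN ID are placeholders of my own, not taken from any particular deployment, so treat it as a starting point and verify the exact syntax against the Cisco configuration guide for your software release. Steps 2 and 4 happen in vShield Manager and vCenter rather than on the switch CLI, so they are not shown here.

  ! Step 1: enable the VXLAN-related features on the VSM
  feature network-segmentation-manager
  feature segmentation

  ! Step 3: vethernet port-profile for the VXLAN vmknics
  ! (VLAN 100 is a placeholder transport VLAN)
  port-profile type vethernet VXLAN-VMK
    capability vxlan
    vmware port-group
    switchport mode access
    switchport access vlan 100
    no shutdown
    state enabled

  ! Step 5: larger MTU on the Ethernet uplink port-profile
  ! carrying the encapsulated traffic
  port-profile type ethernet SYSTEM-UPLINK
    mtu 1550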

Other considerations

There are, however, other important considerations related to the Nexus 1000V deployment. Each Nexus 1000V switch must be managed by its own VSM. The VSM is a virtual appliance and can run either directly on an ESX host or on the hardware Nexus 1010 appliance, which can host up to 6 VSMs. The VSM also needs to talk not only to vCenter over the management network, but also to the Virtual Ethernet Modules (VEMs) running as modules in the vmkernel on each ESX host. This communication happens over the Packet and Control networks, which can be placed in two separate VLANs or mixed into one. Either way, it is important to trunk these additional VLANs to the Nexus switch.
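
For reference, the VSM-to-VEM communication is defined in the svs-domain configuration on the VSM. Below is a minimal sketch assuming Layer 2 control mode; the domain ID and VLAN IDs are placeholders I made up, and Layer 3 control mode over the management network is an alternative.

  ! VSM-to-VEM communication (placeholder domain ID and VLANs, Layer 2 control mode)
  svs-domain
    domain id 10
    control vlan 260
    packet vlan 261
    svs mode L2
  ! VLANs 260 and 261 must also be trunked on the uplinks and on the upstream physical switches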

There are also important scalability considerations. One Nexus 1000V switch supports only 216 ports per ESX host and 2048 ports per switch. This can have quite an impact on the design when hosts with a lot of memory (300 GB+) are used: a tenant can create a large number of relatively small VMs in his organization vDC and exhaust all the available switch ports. If we consider an average of 2 GB per VM, we can fit about 150 VMs per host (300 GB / 2 GB) and exhaust the 2048-port switch with a cluster of roughly 14 hosts (2048 / 150 ≈ 13.7).

And there is another limit: only 1 VSM per vCenter datacenter. So we cannot just add another 14-host cluster to scale; we have to add another vCenter datacenter. The impact is that when an organization has two org vDCs, each in a different cluster/datacenter backed by a different Nexus 1000V, these two org vDCs cannot be connected with one organization routed or isolated network: those are backed by VXLAN, and VXLAN cannot span two Nexus 1000V switches. The spanning limitation also applies to the vSphere distributed switch (VDS), but VDS has much higher scalability limits, so it does not hurt as much (a VDS can have 1016 ports per host and 30,000 ports per switch, and there is no limit of 1 VDS per datacenter).

Hopefully a new release of vCloud Director or Nexus 1000V will address some of these limitations. Meanwhile, do not forget to include these constraints in your design considerations.

Edit 29 September 2012:

A Cisco representative informed me about some inaccuracies in the last paragraphs about the scalability considerations. There is no 1 VSM per vCenter datacenter limit. The correct statement is that the Cisco switch cannot be deployed across datacenters, but that is also true for the vSphere distributed switch, so there is no need to add additional datacenters. We still need to dedicate one Nexus 1000V per cluster due to the maximum port limit (so no vMotion between clusters), and we can have only 12 clusters per vCenter because vCloud Director supports at most 12 Nexus switches per vCenter. However, vCloud Director 5.1 enables VXLAN to span different switches, so org networks can then span clusters.

Regarding the port limits (per host, per switch), I was told that they are soft limits that were tested, so customers might exceed them. I am not sure of the supportability implications, but I guess it comes down to the customer's own scalability tests and how close their relationship with Cisco support is.
