Homelab: Downsizing NSX Controller

One of the problems with testing NSX in a home lab environment is that it is really resource hungry. For example, the NSX Manager VM deploys with 12 GB of RAM. While it is simple to edit its settings and lower the memory to about 8 GB without any major impact, the VMs that NSX deploys automatically (Controllers and Edges) cannot be edited in the vSphere Client because the Edit Settings menu option is disabled. Each NSX Controller requires 4 vCPUs and a 2.05 GHz CPU reservation. If you go by the book and deploy three of them, that creates quite a resource impact.

The vCPU count can be changed by editing the VM's VMX file or by hacking the Controller OVF file from which it is deployed, located on the NSX Manager in common/em/components/vdn/controller/ovf/nsx-controller-<version>.vxlan.ovf, if you know how to get to the support engineer mode or are not afraid of mounting Linux partitions. The CPU reservation, however, cannot be changed this way.

The approach I use is to disable the vCenter Server protection of the NSX Controller VMs. Find their MoRef IDs (for example with the vCenter Managed Object Browser) and then delete the respective records from the vCenter Server VPX_DISABLED_METHODS table. After a restart of the vCenter service the VMs are no longer protected and you can simply edit their settings in the vSphere Client.
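If you prefer PowerCLI to the vSphere Client for this, here is a minimal sketch of the lookup and the resizing (the VM name filter and the target values are only examples, adjust them to your deployment):

# Find the MoRef IDs of the Controller VMs; the Id looks like VirtualMachine-vm-123
Get-VM "NSX_Controller_*" | Select-Object Name, Id
# Once the VPX_DISABLED_METHODS records are deleted and the vCenter service restarted,
# the VMs can be reconfigured again, e.g. lower the vCPU count and drop the CPU reservation
# (changing the vCPU count requires the VM to be powered off)
Get-VM "NSX_Controller_*" | Set-VM -NumCpu 2 -Confirm:$false
Get-VM "NSX_Controller_*" | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuReservationMhz 0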

Disclaimer: this is unsupported and should not be done in production environments.

Edit 1/25/2018: I came up with a much simpler way of re-enabling the disabled methods on the NSX Controller VMs. Just use the MOB.

Shuttle PC – vSphere 5.5 White Box Gotchas

I love barebone Shuttle PCs for home lab purposes. They have a very compact design, can fit three hard disks (great for VSAN), have low power consumption, are quiet and can take up to 32 GB of RAM.

SH87R6

I have two of them (SZ68R5 and SH87R6). I was recently reinstalling them with a brand new vSphere 5.5 U1 to prep them for VSAN, and here are some problems I encountered.

On-board Realtek NIC

Although I always add a dual NIC Intel Pro/1000 PT card, there is one on-board Realtek 8111G NIC as well. This card used to work with vSphere 5.1; however, as of vSphere 5.5 the driver for it is no longer included. If you upgraded from vSphere 5.1 to 5.5 the card will still work, but a brand new installation will not recognize it.

To solve it I created a custom image with the vSphere 5.1 Realtek 8168 driver. Here is the Image Builder PowerCLI script I used (it also includes the Cisco and NetApp VIBs).

# Add the local offline bundles: ESXi 5.5 U1, Cisco Nexus 1000V VEM and the NetApp NAS plugin
Add-EsxSoftwareDepot .\update-from-esxi5.5-5.5_update01.zip
Add-EsxSoftwareDepot .\VEM550-201401164104-BG-release.zip
Add-EsxSoftwareDepot .\NetAppNasPlugin.v20.zip
# Clone the standard 5.5 U1 image profile into a custom one
New-EsxImageProfile -CloneProfile ESXi-5.5.0-20140302001-standard -name ESXi-5.5.0-20140302001-Cisco-Realtek-NetApp -vendor Fojta
# Add the VMware online depot, which still carries the vSphere 5.1 Realtek drivers
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
# Add the Realtek, Cisco VEM and NetApp packages to the custom profile
Add-EsxSoftwarePackage -ImageProfile ESXi-5.5.0-20140302001-Cisco-Realtek-NetApp -SoftwarePackage net-r8168
Add-EsxSoftwarePackage -ImageProfile ESXi-5.5.0-20140302001-Cisco-Realtek-NetApp -SoftwarePackage net-r8169
Add-EsxSoftwarePackage -ImageProfile ESXi-5.5.0-20140302001-Cisco-Realtek-NetApp -SoftwarePackage cisco-vem-v164-esx
Add-EsxSoftwarePackage -ImageProfile ESXi-5.5.0-20140302001-Cisco-Realtek-NetApp -SoftwarePackage NetAppNasPlugin
# Export the custom image profile to a bootable ISO
Export-EsxImageProfile -ImageProfile ESXi-5.5.0-20140302001-Cisco-Realtek-NetApp -ExportToIso -filepath ESXi-5.5.0-20140302001-Cisco-Realtek-NetApp.iso

Thanks go to Paul Braren and Erik Bussink for the hints.

USB Flash Disk

I am booting off a USB flash disk in order to preserve the internal SSD and HDDs for VSAN. I use an 8 GB SanDisk Cruzer Fit which is so tiny you don't even notice it in the USB slot.

I prepared the USB flash disk in Workstation, then plugged it in and did the initial network configuration. The strange thing I encountered was that I was able to boot from the disk, but the ESXi installation was stateless: any configuration changes I made would be lost after a reboot.

The reason was that although I could boot off the USB disk, ESXi did not recognize it and would not save the configuration. After some troubleshooting I found out that I needed to use the USB 2.0 ports (B6) and not the USB 3.0 ones (B7).

Shuttle USB 2.0 and 3.0 Ports
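A quick way to confirm whether ESXi actually recognized its boot device is to ask the host via PowerCLI and Get-EsxCli (a sketch, assuming an existing Connect-VIServer session; the host name is an example):

# Ask the host which device it booted from; when ESXi does not recognize
# its boot device, there is no boot filesystem to persist the configuration to
$esxcli = Get-EsxCli -VMHost esx01.lab.local
$esxcli.system.boot.device.get()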

Identical BIOS UUID

Both Shuttle boxes have an identical BIOS UUID:

esxcfg-info | grep "BIOS UUID"

results in 03000200-0400-0500-0006-000700080009.

This is a big problem for the Nexus 1000V, which identifies VEMs via the supposedly unique BIOS UUID. For now I have removed one of the boxes from the Nexus switch. I would be grateful for any info on how to flash the BIOS with a different UUID.
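The duplicates can also be spotted from vCenter with a small PowerCLI one-liner (a sketch, assuming an existing Connect-VIServer session):

# List the BIOS UUID reported by each host; duplicates cause the Nexus 1000V VEM issue described above
Get-VMHost | Select-Object Name, @{Name="BiosUuid"; Expression={$_.ExtensionData.Hardware.SystemInfo.Uuid}}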

EDIT 3/22/2014:

Thanks to the comments below from RonTom42, here are the steps needed to change the BIOS UUID:

1. Download the AMIDMI.EXE tool from here.
2. Download FreeDOS from here and put it onto a USB stick. I used FreeDOS-1.1-memstick-2-2048M.img with the Win32 Disk Imager option.
3. Copy AMIDMI.EXE to the USB disk.
4. Boot the Shuttle from the USB disk. Enter the fdos option 4.
5. Run the amidmi /u command.
6. Reboot the ESXi host and check the BIOS UUID with: esxcfg-info | grep "BIOS UUID"

Iomega ix4-200d: The Good, The Bad and The Ugly

Recently I added an Iomega ix4-200d NAS with four 2 TB hard drives to my home lab. Here are some of my thoughts about the product.

Iomega makes consumer disk-based products but has been part of EMC since April 2010. They now claim to deliver enterprise storage solutions to small and medium businesses. Iomega's biggest unit, the 12-disk ix12-300r, is rack mountable with dual hot-swappable power supplies, but it runs the same OS as my Iomega ix4, EMC Lifeline Linux. I have read many blogs saying that Iomega is a very good fit for VMware home labs. It is even on the VMware hardware compatibility list.

So what are my thoughts?

The Good

  • it is a very small, neat and quiet unit with four 2 TB hard drives; if RAID 5 is used you get around 5.5 TB of capacity
  • it is very simple to set up and use via the web-based GUI
  • it has many features: RAID 0/1/5, CIFS, NFS, iSCSI, FTP, TFTP, rsync and CIFS replication, Active Directory integration, quotas, print server, USB ports for external drives, Bluetooth dongle or UPS communication, scheduled backups, power management, torrent client, …
  • dual Ethernet ports, jumbo frames
  • as mentioned, it is on the VMware HCL
  • in this white paper EMC recommends the use of Iomega for Remote Office Branch Office (ROBO) deployments with a centralized backup repository to Celerra

The Bad

  • it uses software RAID (Linux mdadm), slow 5,900 rpm Seagate 2 TB drives and an ARM9 (ARM926EJ-S) CPU with 512 MB RAM, therefore the disk performance is not very good
  • the web-based GUI is sometimes too simple and limiting
  • although it has many features, most of them are implemented at a very basic level
  • the dual network ports do not support VLANs; using each port for a different network segment is possible, but the separation of services is not done very well. For example, you cannot limit the management interface to only one network.
  • only NFS is VMware certified, iSCSI is not. You cannot limit iSCSI to one network segment and separate the LAN from the SAN.

The Ugly

  • Very basic documentation
  • My hopes were that, because of the Active Directory integration, it could be used as a replacement for a Windows file server. Well, this is not the case. You can create shares with AD access control lists only at the root level and only via the GUI. You cannot create subfolders with different access rights, either from Windows or from the GUI. This makes it unusable for business deployments.

The Result

I think the unit is perfect for home use. It has enough capacity to be used for home media (movies, music and photos), for backups and for some light VMware home lab usage. However, for small businesses I would recommend using it only for backups. I was expecting Iomega to have Celerra-like features and am a little bit disappointed, but that is probably too much to ask in this price range.

HP StorageWorks P4000 Virtual SAN Appliance Now With VAAI



I am using the HP StorageWorks (formerly known as LeftHand) P4000 Virtual SAN Appliance (VSA) for my iSCSI storage. It is a virtual machine that takes any storage (in my case local disks) and presents it as iSCSI targets. It is enterprise-level software (as opposed to OpenFiler) with features such as high availability (network RAID), thin provisioning, snapshots, replication and a Site Recovery Manager plugin. The list price is 4,500 EUR, but the good thing is that it can be used for free without the advanced features such as replication, snapshots, HA, etc. Those features can be used in trial mode for 60 days, which is perfect for Site Recovery Manager testing. HP also sells hardware equivalents of the P4000, which are basically regular computers with the SAN appliance software.

The SAN appliance software is called SAN/iQ and the new version 9 update was released last week. For me the most interesting new feature is the vStorage API for Array Integration (VAAI). With it, some storage-related operations in vSphere ESX 4.1 are offloaded from the vmkernel to the storage processor. One of them is the zeroing of a newly created thick disk (eager zeroed thick disk), which is needed for fault tolerance VMs. To test whether VAAI works I compared the creation of a 10 GB FT-enabled disk. Without VAAI the disk was created in 249 seconds; with VAAI it took only 193 seconds, without any ESX host CPU overhead or unnecessary SAN traffic.
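If you want to repeat the comparison, a simple way is to time the creation of an eager zeroed thick disk with PowerCLI, once with and once without hardware acceleration enabled on the host (a sketch for a recent PowerCLI; the VM and datastore names are made up):

# Time how long it takes to add a 10 GB eager zeroed thick disk
Measure-Command {
    New-HardDisk -VM (Get-VM "TestVM") -CapacityGB 10 -StorageFormat EagerZeroedThick -Datastore (Get-Datastore "P4000-iSCSI-01")
}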

Here is the screenshot of a datastore with hardware acceleration.

I love it when you can play with enterprise technology at home.

The HP VSA can be downloaded here: www.hp.com/go/tryvsa

My vSphere Home Lab

Building a vSphere home lab is, in my opinion, essential. It is quite a popular subject and there was even a VMworld session dedicated to it.

My reasons for having a home lab are the following:

  • Run my home IT infrastructure (firewall, DHCP, Active Directory, Mail server, VoIP, …)
  • Try and learn new products
  • Get hands on experience needed for certifications

I tried many approaches (VMware Server, VMware Workstation, a Xeon server, …). My goal was to build quite a powerful lab that would fulfill the reasons stated above, but at the same time be cheap, have low power requirements, not be noisy and not make too much heat. Here is my current config:

I have two servers. One acts as a ‘unified’ storage server (NAS and iSCSI SAN) and one as the host for all other workloads.

Storage server

To try advanced virtualization features, shared storage is a must. I needed a huge file server even before I started with virtualization to store my DVD collection, so I built a multi-terabyte Linux-based (Debian) file server with software RAID, which later became an Openfiler NAS with the iSCSI protocol. However, to learn VMware Site Recovery Manager I needed storage which could replicate and was compatible with SRM. The best low-cost choice is a VSA (Virtual Storage Appliance). VSAs come in OVF format and need either VMware Workstation or an ESX server. I installed ESXi on my storage server. I virtualized the multi-terabyte file server with RDM disks and next to it I run the HP StorageWorks (LeftHand) VSA, FalconStor VSA or EMC Celerra VSA. For example, for SRM tests I had two copies of the LeftHand VSA replicating to each other.

Hardware

The server was built with the purpose of fitting as many hard drives as possible. I use a Chieftec Smart case with 9 external 5.25" drive bays that are multiplied with 3-in-2 or 4-in-3 disk backplanes. The motherboard is an ASUS P5B-VM DO with 7 SATA ports. I added 4 more with two PCI SATA controllers (Kouwell). I experimented with a hardware RAID card (Dell PERC 5/i) but it made too much heat and local Raw Device Mapping did not work. The OS is booted from a USB flash disk, so I can put 11 drives into this server. Currently I have five 1 TB drives (low-power green RE3 Western Digitals) for the NAS in RAID 5 and two 500 GB drives for the VSAs. An Intel Core 2 Duo E6400 CPU with 3 GB RAM is more than enough to run the 3 VMs I have at the moment (the RedHat-based NAS file server, FalconStor and LeftHand). One onboard Intel NIC is coupled with an Intel PRO/1000 PT dual port PCIe adapter cheaply bought off eBay. One NIC is used for management and file server traffic (with VLAN separation), and two NICs are teamed for iSCSI traffic.

Workload server

The purpose of the ‘workload server’ is to run all the VMs needed for infrastructure purposes and testing. From my experience I found out that the consolidation of non-production servers is usually limited by the available memory, therefore I was looking for the most cost-effective option to build a server with as much memory as possible. In the end I settled on the Asus P5Q-VM, which has 4 DIMM slots and supports up to 16 GB of DDR2 RAM. I bought it cheaply off a local eBay equivalent, added an Intel Core 2 Duo E8400 3 GHz processor (also bought used) and 12 GB of RAM (a brand new 4 GB DIMM costs around 110 EUR). The onboard Realtek NIC is not ESX compatible, so I added Intel PRO/1000 PT dual port PCIe, Intel Pro PCIe and PCI adapters to get 4 network interfaces. The server is diskless, boots from a USB flash disk and is very quiet, housed in a small micro ATX case.

At the moment I run my infrastructure VMs (two firewalls – Endian and Vyatta, a Domain Controller, a mail server, vCenter, vMA, TrixBox and Win XP) and vCloud Director test VMs (vCloud Director, Oracle DB, vShield Manager) without memory ballooning, and the CPU usage is at less than 25%. The heat and power consumption are acceptable, although the noise of 7 hard drives and notably the backplane fans is substantial. In the future I may go the Iomega StorCenter direction to replace the file serving functions, as it is very quiet and power efficient, but most likely it will not replace the flexibility of a VSA.