HP StorageWorks P4000 Virtual SAN Appliance Now With VAAI



I am using the HP StorageWorks (formerly known as LeftHand) P4000 Virtual SAN Appliance (VSA) for my iSCSI storage. It is a virtual machine that takes any storage (in my case local disks) and presents it as iSCSI targets. It is enterprise-level software (as opposed to OpenFiler) with features such as high availability (network RAID), thin provisioning, snapshots, replication and a Site Recovery Manager plugin. The list price is 4500 EUR, but the good thing is that it can be used for free without the advanced features such as replication, snapshots, HA, etc. Those features can be used in trial mode for 60 days, which is perfect for Site Recovery Manager testing. HP also sells hardware equivalents of the P4000, which are basically regular computers running the SAN appliance software.

The SAN appliance software is called SAN/iQ, and version 9 was released last week. For me, the most interesting new feature is the vStorage API for Array Integration (VAAI). In vSphere ESX 4.1, some storage-related operations can now be offloaded from the VMkernel to the storage processor. One of them is the zeroing of newly created thick disks (eager zeroed thick disks), which are required for Fault Tolerance VMs. To test whether VAAI works, I compared the creation of a 10 GB FT-enabled disk. Without VAAI the disk was created in 249 seconds; with VAAI it took only 193 seconds, with no ESX host CPU overhead or unnecessary SAN traffic.
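If you want to repeat the test, the eager zeroed thick disk can be created directly with vmkfstools and timed. Below is a minimal sketch of how such a test could be scripted; it assumes Python and vmkfstools are available in the ESX service console, and the datastore path is only a made-up example.

    import subprocess
    import time

    # Hypothetical VMDK path - adjust to your own datastore and folder.
    VMDK = "/vmfs/volumes/datastore1/vaai-test/ft-test.vmdk"

    start = time.time()
    # -c 10G creates a 10 GB disk; -d eagerzeroedthick zeroes it in full up front,
    # which is the block-zeroing operation VAAI can offload to the array.
    subprocess.check_call(["vmkfstools", "-c", "10G", "-d", "eagerzeroedthick", VMDK])
    print("Creation took %.0f seconds" % (time.time() - start))

Running it once against a datastore with hardware acceleration and once against one without should show a difference similar to the numbers above.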

Here is a screenshot of a datastore with hardware acceleration enabled.

I love it when you can play with enterprise technology at home.

The HP VSA can be downloaded here: www.hp.com/go/tryvsa

My vSphere Home Lab

Building a vSphere home lab is, in my opinion, essential. It is quite a popular subject, and there was even a VMworld session dedicated to it.

My reasons for having a home lab are the following:

  • Run my home IT infrastructure (firewall, DHCP, Active Directory, Mail server, VoIP, …)
  • Try and learn new products
  • Get hands on experience needed for certifications

I tried many approaches (VMware Server, VMware Workstation, a Xeon server, …). My goal was to build a reasonably powerful lab that would fulfill the reasons stated above while also being cheap, having low power requirements, not being noisy and not making too much heat. Here is my current config:

I have two servers. One acts as a ‘unified’ storage server (NAS and iSCSI SAN) and the other as the host for all other workloads.

Storage server

To try advanced virtualization features, shared storage is a must. Even before I started with virtualization, I needed a huge fileserver to store my DVD collection, so I built a multi-terabyte Linux-based (Debian) fileserver with software RAID, which later became an Openfiler NAS with the iSCSI protocol. However, to learn VMware Site Recovery Manager I needed storage that could replicate and was compatible with SRM. The best low-cost choice is a VSA – Virtual Storage Appliance. VSAs are distributed in OVF format and need either VMware Workstation or an ESX server, so I installed ESXi on my storage server. I virtualized the multi-terabyte fileserver with RDM disks, and next to it I run the HP StorageWorks (LeftHand) VSA, FalconStor VSA or EMC Celerra VSA. For example, for SRM tests I had two copies of the LeftHand VSA replicating to each other.

Hardware

The server was built with the purpose of fitting as many hard drives as possible. I use a Chieftec Smart case with 9 external 5.25" drive bays that are multiplied with 3-in-2 or 4-in-3 disk backplanes. The motherboard is an ASUS P5B-VM DO with 7 SATA ports, and I added 4 more with two PCI SATA controllers (Kouwell). I experimented with a hardware RAID card (Dell PERC 5/i), but it made too much heat and local Raw Device Mapping did not work. The OS boots from a USB flash disk, so I can put 11 drives into this server. Currently I have five 1 TB drives (low-power green RE3 Western Digitals) in RAID 5 for the NAS and two 500 GB drives for the VSAs. An Intel Core 2 Duo E6400 CPU with 3 GB RAM is more than enough to run 3 VMs at the moment (the RedHat-based NAS fileserver, FalconStor and LeftHand). The onboard Intel NIC is coupled with an Intel PRO/1000 PT dual-port PCIe adapter bought cheaply off eBay. One NIC is used for management and fileserver traffic (with VLAN separation), and two NICs are teamed for iSCSI traffic.

Workload server

The purpose of the ‘workload server’ is to run all the VMs needed for infrastructure and testing. From experience I found that consolidation of non-production servers is usually limited by the available memory, so I was looking for the most cost-effective way to build a server with as much memory as possible. In the end I settled on the ASUS P5Q-VM, which has 4 DIMM slots and supports up to 16 GB of DDR2 RAM. I bought it cheaply off the local eBay equivalent, added an Intel Core 2 Duo E8400 3 GHz processor (also bought used) and 12 GB of RAM (a brand new 4 GB DIMM costs around 110 EUR). The onboard Realtek NIC is not ESX compatible, so I added Intel PRO/1000 PT dual-port PCIe, Intel PRO PCIe and PCI adapters to get 4 network interfaces. The server is diskless, boots from a USB flash disk, and is very quiet housed in a small micro-ATX case.

At the moment I run my infrastructure VMs (two firewalls – Endian and Vyatta – plus a Domain Controller, mail server, vCenter, vMA, TrixBox and a Win XP machine) and vCloud Director test VMs (vCloud Director, Oracle DB, vShield Manager) without memory ballooning, and CPU usage stays below 25%. The heat and power consumption are acceptable, although the noise of 7 hard drives, and notably the backplane fans, is substantial. In the future I may go the Iomega StorCenter direction to replace the fileserving functions, as it is very quiet and power efficient, but it will most likely not replace the flexibility of a VSA.