I have recently added an Iomega ix4-200d NAS with 4×2 TB hard drives to my home lab. Here are some of my thoughts about the product.
Iomega makes consumer disk-based products and has been part of EMC since April 2010. They now claim to deliver enterprise storage solutions to small and medium businesses. Iomega's biggest unit – the 12-disk ix12-300r – is rack mountable with dual hot-swappable power supplies, but it runs the same OS as my Iomega ix4 – EMC Lifeline Linux. I have read many blogs saying that the Iomega is a very good fit for VMware home labs. It is even on the VMware hardware compatibility list.
So what are my thoughts?
- it is a very small, neat and quiet unit with four 2 TB hard drives; with RAID5 you get around 5.5 TB of usable capacity
- it is very simple to set up and use via the web-based GUI
- it has many features: RAID 0/1/5, CIFS, NFS, iSCSI, FTP, TFTP, rsync and CIFS replication, Active Directory integration, quotas, print server, USB ports for external drives, Bluetooth dongle or UPS communication, scheduled backups, power management, torrent client, …
- dual Ethernet ports and jumbo frame support
- as mentioned, it is on the VMware HCL
- in this white paper EMC recommends the use of Iomega for Remote Office Branch Office (ROBO) deployments, with a centralized backup repository on Celerra
- it uses software RAID (Linux mdadm), slow 5,900 rpm Seagate 2 TB drives and an ARM9 (ARM926EJ-S) CPU with 512 MB RAM, so disk performance is not very good
- the web-based GUI is sometimes too simple and limiting
- although it has many features, most of them are implemented at a very basic level
- the dual network ports do not support VLANs; using each port for a different network segment is possible, but the separation of services is not done very well – for example, you cannot limit the management interface to just one network
- only NFS is VMware certified, iSCSI is not; you also cannot limit iSCSI to one network segment and separate LAN from SAN
- very basic documentation
- my hope was that, thanks to the Active Directory integration, it could be used as a replacement for a Windows file server. Well, this is not the case. You can create shares with AD access control lists only at the root level, and only via the GUI. You cannot create subfolders with different access rights, neither from Windows nor from the GUI. This makes it unusable for that kind of business deployment.
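The "around 5.5 TB" figure from the capacity bullet above is easy to sanity-check: RAID5 gives up one disk's worth of space to parity, and the decimal terabytes on the drive label shrink once the OS displays them as binary TiB. A quick sketch:

```shell
# RAID5 usable space: (n-1) data disks; one disk's worth goes to parity.
disks=4
size_bytes=2000000000000   # 2 TB per drive, decimal (as marketed)
usable=$(( (disks - 1) * size_bytes ))
# Convert decimal bytes to binary TiB, which is what most OSes display.
awk -v b="$usable" 'BEGIN { printf "%.2f TiB usable\n", b / (1024^4) }'
# prints: 5.46 TiB usable
```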
I think the unit is perfect for home use. It has enough capacity for home media – movies, music and photos – for backups and for some light VMware home lab usage. For small businesses, however, I would recommend using it only for backups. I was expecting Iomega to have Celerra-like features and am a little bit disappointed, but that is probably too much to ask in this price range.
I am using the HP StorageWorks (formerly known as LeftHand) P4000 Virtual SAN Appliance (VSA) for my iSCSI storage. It is a virtual machine that takes any storage (in my case local disks) and presents it as iSCSI targets. It is enterprise-level software (as opposed to OpenFiler) with features such as high availability (network RAID), thin provisioning, snapshots, replication and a Site Recovery Manager plugin. The list price is 4500 EUR, but the good thing is that it can be used for free without the advanced features such as replication, snapshots, HA, etc. Those features can be used in trial mode for 60 days, which is perfect for Site Recovery Manager testing. HP also sells hardware equivalents of the P4000, which are basically regular computers running the SAN appliance software.
The SAN appliance software is called SAN/iQ, and a new version 9 update was released last week. For me the most interesting new feature is support for the vStorage APIs for Array Integration (VAAI). In vSphere ESX 4.1 some storage-related operations are offloaded from the VMkernel to the storage processor. One of them is the zeroing of a newly created thick disk (eager-zeroed thick), needed for Fault Tolerance VMs. To test whether VAAI works I compared the creation of a 10 GB FT-enabled disk. Without VAAI the disk was created in 249 seconds; with VAAI it took only 193 seconds, without any ESX host CPU overhead or unnecessary SAN traffic.
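For reference, this kind of comparison can be reproduced by timing eager-zeroed disk creation with vmkfstools on the ESX host, and the saving works out to roughly a fifth of the creation time. A hedged sketch – the datastore path and file name below are made-up examples, not from my setup:

```shell
# On the ESX host, an eager-zeroed thick disk can be created and timed with:
#   time vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/test.vmdk
# (path and file name are examples only)

# Relative saving from the measurements above:
without_vaai=249   # seconds, no hardware acceleration
with_vaai=193      # seconds, VAAI offload active
saved=$(( (without_vaai - with_vaai) * 100 / without_vaai ))
echo "VAAI saved ${saved}% of creation time"
# prints: VAAI saved 22% of creation time
```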
Here is the screenshot of a datastore with hardware acceleration.
I love it when you can play with enterprise technology at home.
The HP VSA can be downloaded here: www.hp.com/go/tryvsa
I am building my home lab for VMware vSphere testing. To use the advanced features vSphere offers, external storage is a must. There are many options. My choices were to buy a NAS (my favorite is the Iomega StorCenter ix4-200d, which is even on the vSphere HCL), to use my Linux file server box with NFS exports, or to build an open-source NAS that supports iSCSI. vSphere supports NFS, so standard Linux with nfs-kernel-server is an option; however, you cannot use vStorage thin provisioning with it. I needed flexibility, which buying a NAS would not offer (5×1 TB RAID5 for my files and 2×500 GB RAID1 for vSphere), so I decided to turn my file server into an Openfiler appliance. This way I could still use it as a file server with Samba shares for my Windows station and also as iSCSI storage for vSphere ESX 4 hosts. To maximize the number of disks I could put into the box I decided to boot it from an SD card. Unfortunately that is not an easy task. There are many guides on the Openfiler forum, but in the end I came up with my own solution with the help of VMware Workstation.
- Download ISO from here
- Create a VM in Workstation with a setup similar to the physical machine that will be used for Openfiler – in my case 2 GB HDD, 2 GB RAM and 3 e1000 NICs
- Install Openfiler with linux text expert option
- Select the Druid partitioning option and create these partitions (they will later be labeled /boot, / and /var):
  - 100 MB EXT2 (/boot)
  - 1200 MB EXT2 (/)
  - 512 MB EXT2 (/var)
- After installation, log in via the web interface and update Openfiler. Use the background update – it takes some time. Reboot.
- Before moving the installation to the SD card we must add USB storage drivers to the boot image (in /boot; substitute your own kernel version, e.g. from uname -r):
cd /boot
mv initrd-$(uname -r).img initrd-$(uname -r).img.old
mkinitrd --preload=ehci-hcd --with=usb-storage initrd-$(uname -r).img $(uname -r)
- Edit the fstab options to protect the SD card's limited write cycles with the noatime option and move some folders to a ramdrive:
LABEL=/ / ext2 defaults,noatime 0 0
LABEL=/boot /boot ext2 defaults,noatime 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults,noatime 0 0
/proc /proc proc defaults,noatime 0 0
/sys /sys sysfs defaults,noatime 0 0
LABEL=/var /var ext2 defaults,noatime 0 0
tmpfs /tmp tmpfs defaults,noatime 0 0
tmpfs /var/tmp tmpfs defaults,noatime 0 0
- Do any additional customizations (admin password, NIC setup, etc.)
- Once we are done we can transfer the image to the SD card. For that I used another Linux VM running Debian. I added the VMDK disk from the Openfiler VM to it and connected the SD card reader directly to the Debian VM (the little icon in the right corner of VMware Workstation – Disconnect from Host).
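Before partitioning or dd-ing anything, it is worth double-checking which device node the SD card actually got – wiping the wrong disk is the classic failure here. A small sketch (the device names used later are assumptions; on an older Debian without lsblk, /proc/partitions gives the same information):

```shell
# List block devices with size and model so the small Openfiler VMDK and
# the SD card can be told apart before any dd is run.
lsblk -o NAME,SIZE,TYPE,MODEL 2>/dev/null || cat /proc/partitions
```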
- With fdisk, create the /boot, / and /var partitions with the same size and filesystem as in the Openfiler install
- Copy the boot partition with dd (/dev/sdb is the Openfiler disk, /dev/sdd is the SD card):
dd if=/dev/sdb1 of=/dev/sdd1
mkdir -p /mnt1 /mnt2
mount -t ext2 /dev/sdb2 /mnt1
mount -t ext2 /dev/sdd2 /mnt2
cp -a /mnt1/. /mnt2
umount /mnt1 /mnt2
mount -t ext2 /dev/sdb3 /mnt1
mount -t ext2 /dev/sdd3 /mnt2
cp -a /mnt1/. /mnt2
umount /mnt1 /mnt2
- label the partitions so they can be mounted properly
e2label /dev/sdd1 /boot
e2label /dev/sdd2 /
e2label /dev/sdd3 /var
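To confirm the labels took, e2label with no label argument reads them back. A sketch, demonstrated on a scratch image file (substitute /dev/sdd1–3 on the real card); needs e2fsprogs:

```shell
# Create a tiny ext2 filesystem in a file, label it, and read the label back.
truncate -s 8M label-demo.img
mke2fs -q -F -t ext2 label-demo.img
e2label label-demo.img /var     # set the label, as done for the SD partitions
e2label label-demo.img          # prints: /var
```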
- We are done. Now the SD card is ready to be inserted into the physical Openfiler machine.