iSCSI and ESXi: multipathing and jumbo frames

For my home lab I have decided to run ESXi from a 2 GB USB flash disk without any local storage. The main reason is that the host now produces far less noise and heat than before. VMware has stated that it will retire classic ESX in the future, so I take this as an opportunity to learn and to be prepared for the switch to the service-console-less hypervisor.

For shared storage I am currently using Openfiler with the iSCSI protocol. I am dedicating a pair of 1 Gb NICs on the ESXi host and on the Openfiler server to the iSCSI traffic. Taking advantage of all the available bandwidth requires a not-so-trivial setup, which I am going to describe below.

iSCSI multipathing

In order to use iSCSI multipathing in vSphere, we need to create two VMkernel ports, bind each of them to a different uplink, and attach them to the software iSCSI HBA.
My ESXi host has four NICs. Two are assigned to vSwitch0, which has a Management VM port group and three VMkernel ports: one for management and two for iSCSI. The following picture shows vSwitch0 in the networking tab of the vSphere Client:

[Screenshot: vSwitch0 with its port groups, VMkernel ports and uplinks in the vSphere Client]
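Roughly the same view is available from the command line. A quick way to check the configuration once the vMA (introduced below) is set up; the host name esxi01 is a placeholder for your own host:

    # List all vSwitches with their port groups, uplinks and MTU
    esxcfg-vswitch --server esxi01 -l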

The management traffic is untagged; the iSCSI traffic is on VLAN 1000. As I also wanted to use jumbo frames (yes, ESXi supports jumbo frames, even though the official documentation claimed otherwise for a long time), I had to create the VMkernel ports from the CLI. The binding of the iSCSI VMkernel ports to the software iSCSI HBA must also be done from the CLI. ESXi does not have a service console, so the first step is to install the vMA (vSphere Management Assistant), which replaces it.
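The following is a minimal sketch of those steps as run from the vMA. The host name esxi01, the IP addresses, the port group names iSCSI1/iSCSI2, the VMkernel interface names vmk1/vmk2 and the adapter name vmhba33 are placeholders from my lab; adjust them to your environment:

    # Enable jumbo frames on the vSwitch itself (the vSphere Client
    # offers no MTU option, hence the CLI)
    esxcfg-vswitch --server esxi01 -m 9000 vSwitch0

    # Create two port groups on VLAN 1000, one per iSCSI path
    esxcfg-vswitch --server esxi01 -A iSCSI1 vSwitch0
    esxcfg-vswitch --server esxi01 -v 1000 -p iSCSI1 vSwitch0
    esxcfg-vswitch --server esxi01 -A iSCSI2 vSwitch0
    esxcfg-vswitch --server esxi01 -v 1000 -p iSCSI2 vSwitch0

    # Create a VMkernel port with MTU 9000 in each port group
    esxcfg-vmknic --server esxi01 -a -i 10.0.100.11 -n 255.255.255.0 -m 9000 iSCSI1
    esxcfg-vmknic --server esxi01 -a -i 10.0.100.12 -n 255.255.255.0 -m 9000 iSCSI2

    # After overriding the NIC teaming in the vSphere Client so that each
    # iSCSI port group has exactly one active uplink (the other uplink set
    # to "Unused"), bind both VMkernel ports to the software iSCSI HBA
    esxcli --server esxi01 swiscsi nic add -n vmk1 -d vmhba33
    esxcli --server esxi01 swiscsi nic add -n vmk2 -d vmhba33

    # Verify the bindings
    esxcli --server esxi01 swiscsi nic list -d vmhba33

After a rescan of the software iSCSI HBA, each iSCSI LUN should then show two paths, one per VMkernel port.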