vSphere Upgrade Saga: Setting up a RHEL iSCSI Server

When using the RHEL scsi-target-utils, there is some special mojo needed when connecting to vSphere 5 (perhaps any version of vSphere). Unlike the iSCSI Enterprise Target (IET), the new service makes use of modern iSCSI targeting techniques, and these did not work as expected with vSphere out of the box. For a few days I was confused as to what was happening, but not anymore, so my iSCSI server for my vSphere environment is back in running shape after its hardware upgrade, new operating system, and upgraded disk drives.

I use a whitebox RHEL 6 installation as an NFS server, iSCSI server, KVM server, vCLI and Perl SDK host, and for VMware Workstation, not to mention my myriad development projects for virtualization. As an iSCSI server, it needs at minimum enough storage to fit my entire disk environment, as this server becomes a Storage vMotion target when I have to upgrade my production SAN. As such, it has four shiny new 2TB disks managed via LVM. LVM is set up to stripe across all four disks, affording me slightly better performance. In addition, there is a 1Gb dedicated link to the iSCSI component of this server through a switch dedicated to iSCSI. A very cool feature of HP Flex-10 in my BladeSystem is that it recognizes iSCSI traffic and will optimize it.
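
For the curious, the striped volume is nothing exotic. A minimal sketch of the LVM layout, assuming the four disks show up as /dev/sdb through /dev/sde and using the vg_iscsi/lv_iscsi names that appear in the targets.conf further down (device names and stripe size are illustrative):

# Assumed device names; adjust to match your hardware
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_iscsi /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Stripe across all four physical volumes (-i 4), 64KB stripe size, using the whole VG
lvcreate -i 4 -I 64 -l 100%FREE -n lv_iscsi vg_iscsi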

The setup of scsi-target-utils was pretty straightforward: I pointed /etc/tgt/targets.conf at my 7.8TB LVM LUN for use by iSCSI, which meant I had to use VMFS-5 to access it all, as was my intention. But that is where several issues occurred:

  • vSphere 5 requires the iSCSI vmkernel port to hang off a dedicated vSwitch and not a vNetwork Distributed Switch (vDS). This is a change from vSphere 4.

This necessitated a slight change to my networking, which led to the creation of a dedicated network just for iSCSI and the use of one of my free Flex-10 NICs just for iSCSI. Unfortunately, in order to change the networking within the Server Profile within Virtual Connect for Flex-10, you first have to shut down the node. Since my server upgrade, I have more than enough memory to run everything on my remaining nodes, so powering off one will not adversely affect anything. So, I went through the process of upgrading server profiles, putting nodes into maintenance mode, shutting them down, saving the Virtual Connect profile, rebooting the node, and voilà, it was done. If I had more than four nodes, this would have been a weekend nightmare; bigger shops manage this with quite a bit of scripting.

  • vSphere 5 would not allow me to migrate my vmkernel device from a vDS back to a vSwitch.

The second problem I faced was that I had to recreate all my vmkernel ports for iSCSI instead of migrating them off the vDS on which they sat, because there was no migration path available. You can migrate vmkernel ports from a vSwitch to a vDS, but apparently not back again.
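
If you have to recreate the ports by hand, it goes quickly from the ESXi shell or vCLI. A rough sketch, assuming a new standard switch vSwitch1 with free uplink vmnic3, a port group named iSCSI, a vmkernel interface vmk1, and a placeholder address on the dedicated iSCSI network (all of these names are illustrative):

# Create the dedicated standard vSwitch and attach the iSCSI uplink (assumed vmnic3)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Add a port group and a vmkernel interface for iSCSI, then give it a static address
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=W.X.Y.10 --netmask=255.255.255.0 --type=static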

  • vSphere 5 required me to bind a vmkernel device for iSCSI utilization.

Not a huge issue, but it is a definite change from setting up iSCSI with older versions of vSphere, where all you needed to do was have the proper IP address and it would route appropriately.
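
The binding itself is only a couple of commands. A sketch, assuming the software iSCSI adapter shows up as vmhba33 (check with esxcli iscsi adapter list) and the iSCSI vmkernel interface is vmk1:

# Enable the software iSCSI initiator if it is not already enabled
esxcli iscsi software set --enabled=true

# Bind the iSCSI vmkernel interface (assumed vmk1) to the software adapter (assumed vmhba33)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Verify the binding
esxcli iscsi networkportal list --adapter=vmhba33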

  • vSphere 5 could not see the iSCSI Server, yet the iSCSI Server could ping the vmkernel devices.

This is where the mojo comes in. The first problem I had was iptables: it was set to deny iSCSI traffic as well as iSNS traffic. That was fixed temporarily by disabling iptables. However, it still would not connect. The solution ended up being a change to how iSNS traffic worked: I changed the iSNSAccessControl parameter from On to Off, restarted iSNS, and suddenly things started to work.
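
When chasing this sort of thing, it helps to verify the path from the ESXi side after each change. A sketch, assuming the same vmhba33 adapter, the W.X.Y.Z server address, and SendTargets dynamic discovery pointing at the target (adjust if you rely on static discovery instead):

# Check reachability from the ESXi host over the vmkernel stack
vmkping W.X.Y.Z

# Point dynamic (SendTargets) discovery at the tgt server (default port 3260) and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=W.X.Y.Z
esxcli storage core adapter rescan --adapter=vmhba33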

My final /etc/tgt/targets.conf file consists of the following:

default-driver iscsi
iSNSServerIP W.X.Y.Z
iSNSServerPort 3205
iSNSAccessControl Off
iSNS On
<target iqn.2008-09.com.example:server.target1>
        backing-store /dev/vg_iscsi/lv_iscsi
        lun 1
</target>
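
After editing targets.conf, tgtd has to re-read it (and re-register with the iSNS server). A quick sketch using the service and tools that ship with scsi-target-utils on RHEL 6:

# Restart tgtd so it re-reads /etc/tgt/targets.conf, and make sure it starts at boot
service tgtd restart
chkconfig tgtd on

# Confirm the target and LUN are exported
tgt-admin --show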

And for my security settings I ended up adding these rules to my /etc/sysconfig/iptables file:

-A INPUT -m state --state NEW -m tcp -p tcp -s W.X.Y.0/24 --dport 3260 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s W.X.Y.0/24 --dport 3205 -j ACCEPT
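
The rules only take effect once iptables reloads them; a quick sketch to apply and verify:

# Reload the ruleset and confirm the iSCSI (3260) and iSNS (3205) ports are now open
service iptables restart
iptables -L INPUT -n | grep -E '3260|3205'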

Now I have a functioning iSCSI server with close to 8TB available for use by vSphere 5. It will come in handy for some work I have to do on my SAN. Oh, and the hardware upgrade for this all-purpose node? 32GB of memory plus a Sandy Bridge quad core running very fast. The 1Gb dedicated link I am using runs through a switch that will only be used for iSCSI, so all my iSCSI accesses are segregated from the rest of my network. That should help with security as well as performance!
