vSphere Upgrade Saga: Moving to 10 G Switches

It was time to upgrade my storage network from 1 G to 10 G switching. More precisely, this was an upgrade of the external connections to my virtual environment; the internal connections already run at 20 G, as they use the backplane of my blade chassis. The goal was to add my Synology as a 10 G storage device. My existing iSCSI servers were already at 10 G, but my Synology, and any other non-VSA approach to storage, was not. In essence, I needed more 10 G switch ports.

My topology is pretty simple: two 10 G network ports to my Synology (thanks to an add-in card), two 10 G network ports uplinked through Open vSwitch to my secondary iSCSI server, and two uplinks from the iSCSI Flex-10 network, carrying a VMware vSphere Distributed Switch, to my HPE StoreVirtual VSA cluster. This is depicted in the Desired Lab Network Topology:

[Figure: Desired Lab Network Topology]

The previous network topology was missing the 10 GbE network switch.

[Figure: Current Lab Network Topology]

This works, as the Flex-10 can uplink to the Open vSwitch. However, the Synology does not participate in this network; it sits on a separate network. The goal was to have all my hypervisor storage run at a minimum of 10 GbE. To do this, I needed to purchase several items:

  • A 10 GbE switch with at least 8 ports (preferably 16 or more)
  • Two 10 GbE GBICs for the HPE Flex-10
  • One 10 GbE dual-port card for the Synology

The plan was to create an L2 VLAN on the 10 GbE switch, connect all the storage networks to it, and end up with the Desired Lab Network Topology (see the figures above). Unfortunately, the hardware purchase was the easiest part of this. I ended up purchasing a 24-port Netgear 10 GbE copper switch so that I could use my existing Cat 6e cables. This seemed pretty straightforward, but it turned out to be anything but simple.

Carving out the VLAN was actually the problem. It was simple to do on the switch, but two of the devices did not play well with it. I carved out ports 1–8 for the VLAN. However, the ports used to uplink to the Open vSwitch and the Synology would not fail over properly, even when LACP and failover modes were set. Each device thought all of its ports were active at the same time, yet the HPE Flex-10 did not have all ports active. This led to some difficulties.
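If you hit something similar, the mismatch is visible from the Open vSwitch side in the bond status. A minimal sketch of the checks I mean, assuming the two uplinks are configured as an OVS bond:

    # Which member links does Open vSwitch believe are active?
    ovs-appctl bond/show

    # What has LACP actually negotiated with the switch?
    ovs-appctl lacp/show

If the two outputs disagree, or LACP shows the partner as not negotiated while the bond still lists both links as enabled, the switch and the host are not seeing the same LAG.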

The solution was to configure the pair of Synology ports into their own LAG and the pair of Open vSwitch ports into their own LAG, then to join those LAGs to the VLAN. Once that was done, everything communicated properly. Not quite what I’d expected: I’d expected both the Synology and the Open vSwitch to handle failover properly on their own, given that that was how both were configured.
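On the Open vSwitch side, the working LAG amounts to something like the following. This is a rough sketch; the bridge, bond, and NIC names are placeholders rather than my actual configuration:

    # Bond the two 10 GbE uplinks into a single LACP-negotiated LAG
    ovs-vsctl add-bond br-storage bond-storage eth2 eth3 \
        lacp=active bond_mode=balance-tcp

On the Synology side, the equivalent is DSM's network bond with IEEE 802.3ad dynamic link aggregation. Either way, a matching LAG has to exist on the Netgear before joining it to the VLAN.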

Debugging this was made difficult by the fact that the only storage I had running at the time was the HPE StoreVirtual cluster, which kept becoming unavailable or showing very large ping times, with lots of dropped packets, even between its own nodes.

My summary of the problem may be a bit inaccurate, but the solution I found was to configure the VLAN properly on the Netgear, which required not only creating the VLAN but also setting the PVID (port VLAN ID) on each port within the VLAN. This two-step process was the final piece I needed to get my storage network speaking 10 GbE.
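On a Netgear smart switch this is all done in the web UI rather than a CLI, but the end state boils down to something like this (the VLAN ID is illustrative; ports 1–8 are the ones I carved out):

    802.1Q VLAN 100 "storage"
        Membership:  ports 1-8, untagged (U)
        PVID:        ports 1-8 set to 100

Without the second step, untagged frames arriving on those ports stay in the switch's default VLAN and never reach the storage VLAN.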

Perhaps there is an easier way, but this is the way that worked for me. Ideally, I would have preferred not to set up the LAGs at all.
