Currently, the VMware vCenter Server Appliance (vCSA) requires a Windows helper VM. This helper VM can host a number of different services, most notably vSphere Update Manager (VUM) and VMware Horizon View Composer; the full list appears later in this article. Suffice it to say, the helper VM (or VMs) becomes critical to your environment. I was running W2K8 R2 but decided to move to W2K12 R2. This was more of an effort than I had imagined. Most things worked just fine, but it took a bit of research to find out why some items did not. See my previous article, vSphere Upgrade Saga: vCSA Helper VM, for the first part of this series.
In my previous vSphere Upgrade Saga post, VSAN Upgrade Woes, I discussed upgrade problems in a relatively unsupported configuration. I finally figured out why I had such a problem. It was not the unsupported nature of the configuration, but the disk space used within VSAN. In effect, my VSAN was heavily overcommitted, and as such, there was no room to move things around to allow updates. I needed a new implementation of VSAN, one that is supported.
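A quick way to spot this kind of overcommitment before attempting an upgrade is to compare provisioned space against raw capacity on the VSAN datastore. Below is a minimal PowerCLI sketch of that check; the vCenter address and the datastore name vsanDatastore are placeholders, not my actual environment:

```powershell
# Connect to vCenter (placeholder address; replace with your own)
Connect-VIServer -Server vcsa.example.local

# "vsanDatastore" is the common default name; adjust to match your setup
$ds = Get-Datastore -Name 'vsanDatastore'

$capacityGB = [math]::Round($ds.CapacityGB, 1)
$freeGB     = [math]::Round($ds.FreeSpaceGB, 1)

# Uncommitted space is thin-provisioned capacity promised but not yet written
$uncommittedGB = [math]::Round($ds.ExtensionData.Summary.Uncommitted / 1GB, 1)
$provisionedGB = $capacityGB - $freeGB + $uncommittedGB

"Capacity: {0} GB  Free: {1} GB  Provisioned: {2} GB ({3:P0} of capacity)" -f `
    $capacityGB, $freeGB, $provisionedGB, ($provisionedGB / $capacityGB)
```

If provisioned space runs well past capacity and free space is thin, VSAN has no headroom to evacuate and rebuild components, which is exactly what an upgrade needs it to do.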
The upgrade to VSAN 6.2 did not go as smoothly as I had wished. It was possible, but it required me to rebuild not only VSAN but my cluster as a whole, because a rolling upgrade did not work as expected. Perhaps this is just a consequence of how my VSAN is configured.
In my environment I use the vCSA, as it is generally far easier to manage. However, to do so, you still need one or more Microsoft Windows helper VMs. These VMs run packages like SRM, vSphere Update Manager, HPE OneView, and other tools that integrate with VMware vCenter. I recently wanted to do some automation based on a few PowerCLI scripts, and my version of PowerCLI was out of date. To bring it up to date, I either needed to reconfigure my W2K8R2 helper VM or create a new helper VM based on W2K12R2. I chose the latter, as it was time to upgrade anyway.
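For anyone setting up a similar helper VM, current PowerCLI releases install from the PowerShell Gallery rather than the old MSI installer, which makes keeping them up to date much less painful. A minimal sketch, assuming PowerShell 5.x or later on the new W2K12R2 VM:

```powershell
# See whether (and which version of) PowerCLI is already present
Get-Module -ListAvailable -Name VMware.PowerCLI | Select-Object Name, Version

# Fresh install from the PowerShell Gallery
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# Or, if it is already installed, pull the latest release
Update-Module -Name VMware.PowerCLI
```

On an older VM with an MSI-based PowerCLI, the cleaner path is usually to uninstall the old package first rather than layer the module on top of it.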
My last 6.0 patch upgrade exhibited an interesting phenomenon. Staging of three of the patches worked without a hitch. Before I could install the next patch, however, I had to cold-reboot the nodes. A soft reboot produced a red message on the console complaining about a module that would not load: a module I did not recognize and now cannot remember. When this happened, I did a cold reboot, and everything appeared to work.
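For reference, the staging and remediation steps can be driven from PowerCLI through the Update Manager cmdlets. This is a sketch of the general pattern rather than my exact workflow; the host and baseline names are placeholders:

```powershell
# Requires the VMware.VumAutomation module that ships with PowerCLI
$vmhost   = Get-VMHost -Name 'esxi01.example.local'      # placeholder host
$baseline = Get-Baseline -Name 'Critical Host Patches'   # placeholder baseline

# Stage the patch payloads onto the host without installing them
Stage-Patch -Entity $vmhost -Baseline $baseline

# Remediation installs the staged patches; the host enters maintenance
# mode and soft-reboots itself as part of the process
Remediate-Inventory -Entity $vmhost -Baseline $baseline -Confirm:$false
```

Note that Update Manager can only trigger that soft reboot; the cold reboot that cleared my module error had to be done out of band, from the server's management controller or the power button.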
My infrastructure recently underwent a catastrophic failure from which recovery was more tedious than difficult. An iSCSI server running as a guest on a KVM host using Open vSwitch decided to go south. Why? I am still trying to figure that out. The long and short of it is that Open vSwitch running on a CentOS 7 KVM host has a pretty major performance glitch. I hope to fix that soon. So, what happened?