I was a little confused about EVC recently, and hopefully this will clear it up for many others. My vSphere 5 upgrade has been halted due to work, family, and my travel schedule, but I wanted to put out one more staging tip for vSphere 5 with respect to EVC. This will come in handy later as I add in a new blade with a pair of Intel Westmere chips so I can play with AES-NI as well as the new Intel TXT functionality.
My issue with EVC stemmed from trying to enable it on an existing set of nodes so that I could easily move VMs between them, which is exactly how EVC was intended to be used. However, somewhere along the line EVC had been disabled in my environment. To re-enable it, I had to find the EVC mode the cluster would accept. I first tried the mode for the chipset I had; that did not work. Then I tried the mode for the previous chipset generation; of course that did not work either. The solution was to enable it for a chipset I did not already have. In other words, I am running the generation before Westmere, so in order for EVC to enable on my cluster, I had to set the EVC mode to Westmere. In effect, I am masking off all of the Westmere-specific functions.
This is what confused me: we all know we use EVC to mask off what we do not want, but if you have an existing cluster with running VMs, you need to mask off the chipset generation ABOVE the one you are on. That is, unless you are willing to shut down all your existing VMs, set EVC to the mode you desire, and then power them all back on.
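To make that rule concrete, here is a toy model of the behavior I saw. This is my own sketch, not a VMware API: the function name, the generation list, and the strict comparison are all mine, chosen to mirror the experience above (same-generation and older modes refused, the generation above accepted).

```python
# Toy model of which EVC modes a cluster with running VMs will accept.
# Assumption (mine, mirroring the post): a VM powered on without EVC is
# already using its host generation's full feature set, so only a baseline
# strictly above that generation can be applied without a power cycle.

# Intel EVC generations, oldest first (these mode keys are real).
GENERATIONS = ["intel-merom", "intel-penryn", "intel-nehalem", "intel-westmere"]

def can_enable_live(vm_feature_level: str, evc_mode: str) -> bool:
    """Return True if the requested EVC mode can be enabled while VMs
    running at vm_feature_level stay powered on."""
    return GENERATIONS.index(evc_mode) > GENERATIONS.index(vm_feature_level)

# Hosts (and thus running VMs) are on the generation before Westmere:
print(can_enable_live("intel-nehalem", "intel-penryn"))    # False: older mode refused
print(can_enable_live("intel-nehalem", "intel-nehalem"))   # False: same mode refused
print(can_enable_live("intel-nehalem", "intel-westmere"))  # True: the mode above works
```

Shutting the VMs down resets their feature level, which is why the power-everything-off route lets you pick any mode the hosts support.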
That was not something I was willing to do.
So now I add a step to my pre-staging checklist: set EVC to mask off chipset functionality above what I already have. This will become a fairly major requirement when I add a Westmere-based blade into my cluster.