vSphere Upgrade Saga: Building Virtual-in-Virtual Labs

I often have to test code and tools on versions of vSphere and other hypervisors that I do not have deployed on physical hardware. To do so, I use a virtual-in-virtual (VinV) approach. However, it is not as simple as just laying down a hypervisor; quite a bit of planning goes into making a virtual lab. What follows is the approach that works for me to produce a segregated test environment that lives within my existing production environment.

There are several use cases for which a VinV lab is appropriate. Minimally, these are:

  • Testing out new code (such as SecureESX, Log Insight SOC) on older or even newer versions of hypervisors.
  • Testing a build procedure for a new hypervisor.
  • Judging the impact of virtualization security tools.
  • Etc.

The list is pretty endless, actually. I personally use VinV labs for all of these. A VinV lab is appropriate only if everything you need to do will stay within that lab. Once you have to go to physical hardware, the lab environment may need to change drastically. I once did a design for a VinV lab that worked with a physical lab bench approach. What was required to do this? Virtual firewalls and more physical networking. I will write this design up at a future date.

I now have a cookie-cutter approach to building out labs. Once I have one, adding more is pretty straightforward. The steps are as follows.

The Preparation:

  • Find one or two free physical NICs (pNICs) for east-west communication between all nodes on which the lab(s) will reside.
  • Set up a VDS dedicated to testing, using those pNICs as the uplinks. Ensure that east-west traffic flows between all nodes: a broken VDS will cause havoc and limit where you can place workloads, and the same happens with a VDS that has no uplink backing. The switches may not even need to be connected to your core; they exist purely to carry east-west traffic for the VDS. This simplifies VLANs, etc., as well.
  • Depending on workloads, you may wish to set up ingress access through a firewall. I do this through an NSX-V ESG from my production vSwitch/portgroup to my testing vSwitch/portgroup(s).
  • Set up a set of portgroups specific to each lab. How many depends on the needs of the lab; for a vSphere 6.5 or 6.7 lab, I have found that three is a good start. Give each portgroup a unique VLAN, since we really do not want one lab being able to talk to another lab.
  • Set up multiple legs off your firewall, with one to access the main area of the lab so that you can get from production to test. You can also use this approach to deny all access and then monitor what is being denied. You end up with:
    [Figure: VinV Lab]
  • For vSphere, create two more portgroups on the vTesting VDS for the internal and external networks associated with ESXi. I use a VLAN numbering that matches the lab number I am using: e.g., Lab 1 would have 101 for external and 102 for internal. These portgroups keep workloads from talking to your virtualization management components, which sit on VLAN 100 in this example. (A scripted sketch of creating these portgroups follows this list.)
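
Since the portgroups and their VLANs are the most repetitive part of the preparation, they are worth scripting. Below is a minimal pyVmomi sketch of creating the Lab 1 portgroups on the testing VDS; the VDS name "vTesting", the VLAN numbers, and the vCenter address and credentials are placeholders for your own environment.

    #!/usr/bin/env python3
    # Create per-lab portgroups with unique VLANs on the testing VDS.
    # A sketch only: names, VLANs, and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab-only: skip cert checks
    si = SmartConnect(host="vcsa.lab.local",
                      user="administrator@vsphere.local",
                      pwd="PASSWORD", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the testing VDS by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "vTesting")
    view.Destroy()

    # Lab 1: management (default), external, and internal portgroups.
    for name, vlan in [("Lab1-Mgmt", 100), ("Lab1-External", 101),
                       ("Lab1-Internal", 102)]:
        spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
        spec.name = name
        spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding
        spec.numPorts = 32
        spec.defaultPortConfig = \
            vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
        spec.defaultPortConfig.vlan = \
            vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                vlanId=vlan, inherited=False)
        WaitForTask(dvs.AddDVPortgroup_Task([spec]))

    Disconnect(si)

Repeat the loop with a different lab number and VLAN block for each additional lab.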

The Build-Out:

Now that we have everything prepared, we need to build out our tooling. Order is important:

  • Build the AD server. I use Server 2012. Almost everything will need some form of AD. I use the AD server as my DHCP, AD, and DNS server. AD and DNS contain just those items required by the lab, in their own unrelated forest/domain. I clone this from a previously built lab and change its IP, thereby inheriting quite a bit of setup.
  • Build a helper VM. I use Server 2012 for this as well. The helper VM ends up being the main login point for the environment. It is where any Windows-based virtualization management services need to be installed, and also where you can access all virtualization management components via web browsers. You could use Windows 10 for this, but I have found the server versions to be more useful. For example, SRM has Windows components; these would be installed in the helper VM. I usually clone this from a previously built lab and change its IP if not using DHCP. Ensure the users exist within AD to support multiple logins.
  • Build a Linux helper VM, or use my AAC-Lib VMA installer for CentOS/Debian installs. This allows you to run tools and other elements from a Linux command line as required. I have scripts running within mine to automatically simulate various actions and to run monitoring-style tools like SecureESX. These scripts provide data I use for the vRealize Log Insight SOC. I prefer Linux, but the same could be done within the helper VM as well. I usually clone this from a previously built lab and change its IP if not using DHCP.
  • Build your vCSA. I actually import my vCSA using the AAC-Lib ov-import script, which makes use of ovftool (a sketch of this approach appears after this list). After the vCSA is built, reboot it, then browse to its IP address on port 5480 to finish its configuration and installation. Assign your licenses. Hook it up to AD for non-service-user access. I use vCenter SSO for service accounts within vCSA and AD for human users, unless I am using HyTrust for everything; then that would change for certain service accounts.
  • Build your vRLI. I use the AAC-Lib ov-import script, which makes use of ovftool. Once vRLI is booted, you will need to configure it through its web page. Assign your licenses. You will also need to create a vRLI user within vCSA. Since this is a service user, I create it within vCenter SSO and not AD.
  • Build your NSX Manager. I use the AAC-Lib ov-import script, which makes use of ovftool. Once NSX Manager is booted, you will need to configure it through its web page. Assign your licenses. You will also need to create an NSX user within vCSA. Since this is a service user, I create it within vCenter SSO and not AD.
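
If you do not have the ov-import script handy, the same imports can be driven from a few lines of Python around ovftool itself. This is a generic sketch, not the AAC-Lib script: the OVA path, inventory names, and vi:// target are placeholders, and each appliance has its own --prop: settings, which are omitted here.

    #!/usr/bin/env python3
    # Deploy an appliance OVA with ovftool -- a generic stand-in for ov-import.
    # The paths, names, and the vi:// locator below are placeholders.
    import subprocess

    OVA = "/isos/appliance.ova"
    TARGET = ("vi://administrator%40vsphere.local@vcsa.lab.local"
              "/Lab/host/Lab1-Cluster")

    subprocess.run([
        "ovftool",
        "--acceptAllEulas",
        "--noSSLVerify",                # lab-only: self-signed certs
        "--diskMode=thin",
        "--datastore=datastore1",       # destination datastore
        "--network=Lab1-Mgmt",          # default VLAN for the lab, per above
        "--name=lab1-appliance",        # inventory name
        OVA,
        TARGET,
    ], check=True)                      # raise if ovftool exits non-zero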

Note that all of the above are installed into the default VLAN for the lab, not the external or internal portgroups. They are also not nested; they run within the regular hypervisors. Now we start to build out our ESXi hosts.

This is actually the least difficult part, as William Lam has done great work on preparing nested images, which I use myself. They are also vSAN ready. Visit William Lam’s Nested Virtualization page and use his Content Library; from there you can install three nested ESXi servers if you want to use vSAN. Important: on install, do not create VMFS automatically. Once the hosts are installed, you need to configure them into vCSA and set up vSAN and networking. We all know how to configure ESXi.
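
Configuring the nested hosts into vCSA can be scripted as well. Here is a pyVmomi sketch, with placeholder host names, cluster name, and credentials, that adds each nested ESXi host to a lab cluster and retries with the SSL thumbprint vCenter reports for the unknown host certificate.

    #!/usr/bin/env python3
    # Add nested ESXi hosts to a lab cluster in vCSA.
    # A sketch only: host names, cluster name, and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcsa.lab.local",
                      user="administrator@vsphere.local",
                      pwd="PASSWORD", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Lab1-Cluster")
    view.Destroy()

    for host in ["nested-esxi-1.lab.local", "nested-esxi-2.lab.local",
                 "nested-esxi-3.lab.local"]:
        spec = vim.host.ConnectSpec(hostName=host, userName="root",
                                    password="PASSWORD")
        try:
            WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
        except vim.fault.SSLVerifyFault as fault:
            # vCenter rejects the unknown host certificate; retry,
            # trusting the thumbprint it reported.
            spec.sslThumbprint = fault.thumbprint
            WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))

    Disconnect(si)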

The last bit is to set up the nested (external) network outbound for virtual machines, if necessary. This requires updates to the NSX-V ESG I use external to the lab, but an NSX-V install within the lab would also be beneficial for controlling all outgoing data. You then end up with defense in depth: the external firewall outside the lab and the one within the lab’s nested virtualization environment.
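
Because NSX-V exposes a REST API on the NSX Manager, those ESG updates can be scripted rather than clicked through the UI. A hedged sketch follows: the edge ID, the addresses, and the credentials are placeholders, and your NAT rule layout will certainly differ.

    #!/usr/bin/env python3
    # Append an SNAT rule to an NSX-V ESG via the NSX Manager REST API.
    # A sketch only: edge ID, addresses, and credentials are placeholders.
    import requests

    NSX = "https://nsxmanager.lab.local"
    EDGE = "edge-1"                       # the ESG in front of the lab

    rule = """<natRules>
      <natRule>
        <action>snat</action>
        <vnic>0</vnic>
        <originalAddress>10.0.101.0/24</originalAddress>
        <translatedAddress>192.168.1.50</translatedAddress>
        <enabled>true</enabled>
        <loggingEnabled>true</loggingEnabled>
      </natRule>
    </natRules>"""

    resp = requests.post(
        f"{NSX}/api/4.0/edges/{EDGE}/nat/config/rules",
        data=rule,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "PASSWORD"),
        verify=False,                     # lab-only: self-signed NSX cert
    )
    resp.raise_for_status()

Here the originalAddress is the Lab 1 external VLAN (101) subnet, and the translatedAddress is the lab-facing address on the production side of the ESG.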

Lastly, we end up with a virtualization management area that is separated from the workloads. This leads to a much more secure virtual-in-virtual lab.

2 Comments

  1. Do you plan on expanding on this post at a future date to go into greater detail on how you segmented networking using NSX?

    Great guide and thank you for sharing this with the community!

    1. Hello,

      Absolutely. I will drop my rules and configurations for NSX-V ESGs in the future. BTW, regardless of the network in place (VDS, VSS, VXLAN), an ESG keeps things very separate. I guess you could get the same segmentation with DFW. I use NAT and have yet to build out any other type of routing for my labs. That is coming.
