RHEV Upgrade Saga: Installing Open vSwitch on RHEL 7

I was finally going to install RHEV on my brand-new system running RHEL 7 RC with KVM. However, it has a dependency on DNS, which was fine, except that my DNS server was on another network, not the private network KVM uses with the standard virtual bridge. To fix this, I chose to move my KVM installation to Open vSwitch. I have written before about adding Open vSwitch to KVM as well as hooking VMs to the Open vSwitch; however, Open vSwitch 2.1 has its own idiosyncrasies, as does libvirt 1.1.

So, here are the steps to add Open vSwitch 2.x to RHEL 7. While I am using the RHEL 7 Release Candidate, this should also apply to RHEL 7 GA.

Step 1: Get the Open vSwitch 2.x source and place on a build VM specific to RHEL 7

The best place to get the Open vSwitch source code is http://openvswitch.org. I transferred the tarball to a virtual machine running on top of RHEL 7 that exists purely to build and update drivers for RHEL 7; we can call this VM “rhel7-build.” I would rather not have an entire build environment on my virtualization host, which does not even have a normal display head.
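
As a rough sketch, the download and transfer look something like the following (the release file name and destination path here are assumptions; use whichever 2.x tarball you actually pulled from openvswitch.org):

# curl -O http://openvswitch.org/releases/openvswitch-2.1.2.tar.gz
# scp openvswitch-2.1.2.tar.gz root@rhel7-build:/root/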

Step 2: Build the Open vSwitch package for RHEL

An interesting aspect of Open vSwitch is that the RHEL 7 RC kernel already contains the openvswitch drivers, so all we need to do is build the management packages.
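
You can confirm the in-kernel module is available before building anything (a quick sanity check on the RHEL 7 host; the exact output varies by kernel build):

# modinfo openvswitch | head -3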

# tar -xzf openvswitch-2.1.2.tar.gz
# cd openvswitch-2.1.2
# rpmbuild -bb rhel/openvswitch.spec
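
If rpmbuild stops on missing build dependencies on the build VM, install them first; the package list below is a guess at the usual Open vSwitch build requirements, so adjust as rpmbuild reports:

# yum install -y rpm-build gcc make python openssl-devel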

Transfer the resultant openvswitch packages to your RHEL 7 KVM host and install them.
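
A minimal sketch of that transfer and install (the RPM file name and the “kvm-host” name are placeholders; the actual file names depend on the version you built):

# scp ~/rpmbuild/RPMS/x86_64/openvswitch-2.1.2-1.x86_64.rpm root@kvm-host:/root/
# yum localinstall -y /root/openvswitch-2.1.2-1.x86_64.rpm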

Step 3: Start Open vSwitch and verify that it is working

We need to set up the Open vSwitch and ensure that it is working and configured. Note that at this time, both Open vSwitch and the standard virtual bridge for KVM are available for use; I even have four VMs running as well. First, add the bridge-compatibility setting to the Open vSwitch sysconfig file:

# echo "BRCOMPAT=yes" >> /etc/sysconfig/openvswitch

Then, start the openvswitch service(s) and configure them to start on reboot as well:

# /sbin/service openvswitch start
# /sbin/chkconfig openvswitch on
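
On RHEL 7, these legacy service and chkconfig commands are redirected to systemd. If you prefer the native tooling (assuming the service is registered as openvswitch.service; adjust the unit name if yours differs), the equivalent is:

# systemctl start openvswitch
# systemctl enable openvswitch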

Now, verify a few things:

# virsh version
Compiled against library: libvirt 1.1.1
Using library: libvirt 1.1.1
Using API: QEMU 1.1.1
Running hypervisor: QEMU 1.5.3
# lsmod |grep openvswitch
openvswitch            70743  0 
vxlan                  37584  1 openvswitch
gre                    13808  1 openvswitch
libcrc32c              12644  2 xfs,openvswitch
# ovs-vsctl show
XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
    ovs_version: "2.1.2"

All of this tells us we are ready to proceed.

Step 4: Create a few Open vSwitch bridges for management and virtual machine workloads

For security reasons, you will need two bridges or two VLANs: one for management traffic and one for non-management workloads. This is the best practice for a secure virtual environment. These steps have not changed from the first time I did this.

# ovs-vsctl add-br ovsbr0 
# ovs-vsctl add-br ovsbr1

The next step differs depending on whether you are bonding physical NICs together or using a single NIC.

# ovs-vsctl add-bond ovsbr0 bond0 eth0 eth2 lacp=active # only needed for bonding
# ovs-vsctl add-port ovsbr0 eth0 # only needed for single NIC installations
# ovs-vsctl add-bond ovsbr1 bond1 eth1 eth3 lacp=active # only needed for bonding
# ovs-vsctl add-port ovsbr1 eth1 # only needed for single NIC installations
# ovs-vsctl add-port ovsbr0 mgmt0 -- set interface mgmt0 type=internal
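
A quick way to confirm the bridges and ports were created as intended (output will vary with your NIC layout):

# ovs-vsctl list-ports ovsbr0
# ovs-vsctl list-ports ovsbr1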

However, this is far from complete. We need to modify the configuration files within /etc/sysconfig/network-scripts so that all the networks come back up on reboot. The config scripts look like the following, depending on whether we are using bonding or not. First, we have to specify the bridges themselves as an OVSBridge type.

# cat ifcfg-ovsbr0
DEVICE=ovsbr0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
HOTPLUG=no
USERCTL=no
# cat ifcfg-ovsbr1
DEVICE=ovsbr1
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
HOTPLUG=no
USERCTL=no

Now, define the local management port and ensure all networks come up in the proper form. For management, we define mgmt0 as type OVSIntPort and specify that it is part of the OVS_BRIDGE named ovsbr0. We give it the IP address assigned to the machine, as well as the proper netmask, gateway, and DNS server.

# cat ifcfg-mgmt0
DEVICE=mgmt0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=ovsbr0
USERCTL=no
BOOTPROTO=none
HOTPLUG=no
IPADDR=A.B.C.D
NETMASK=255.255.255.192
GATEWAY=W.X.Y.Z
DNS1=P.Q.R.S

Then we have to tell the network subsystem how to bring up each bridge’s physical component.

For a single NIC and the NIC bond (repeat for all bonds or single NICs in use):

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=no
BOOTPROTO=none
NAME="eth0"
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=ovsbr0
# cat ifcfg-bond0
DEVICE="bond0"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=no
BOOTPROTO=none
NAME="bond0"
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=ovsbr0

For NIC bonds only, we tell the network subsystem how to bring up each physical interface (repeat for all NIC bonds in use):

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=no
BOOTPROTO=none
# cat ifcfg-eth1
DEVICE="eth1"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=no
BOOTPROTO=none

The last thing we need to do is set the default route to use our new mgmt0 network for the KVM host. This should already happen for us, given the configurations, but for some reason it does not.

# cat route-mgmt0
default via W.X.Y.Z

Now we are ready to restart our network subsystem. If you do this while VMs are running, they will lose network connectivity for a bit until you can fix the networks in use by each VM. This will disable all old virtual bridge mechanisms, as we are disabling the ebtables service. You can run both together, but for my lab, I prefer just to use Open vSwitch.

# service network restart
# service ebtables stop
# chkconfig ebtables off
# ovs-vsctl show
XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "mgmt0"
            Interface "mgmt0"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.1.2"
# ping -c 3 W.X.Y.Z
PING W.X.Y.Z (W.X.Y.Z) 56(84) bytes of data.
64 bytes from W.X.Y.Z: icmp_seq=1 ttl=64 time=0.553 ms
64 bytes from W.X.Y.Z: icmp_seq=2 ttl=64 time=0.316 ms
64 bytes from W.X.Y.Z: icmp_seq=3 ttl=64 time=0.350 ms

--- W.X.Y.Z ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.316/0.406/0.553/0.105 ms
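
It is also worth confirming that the default route actually landed on mgmt0 (a quick check; output trimmed to the relevant line):

# ip route show | grep default
default via W.X.Y.Z dev mgmt0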

Step 5: Define the bridge networks within virsh for use by our VMs

Until we define our networks within virsh, the new Open vSwitch bridges will not be available for assignment to our VMs. In the following, I created ovsbr0; ovsbr1 was created using the exact same method, just substituting the other name within the XML and commands.

# cat ovsbr0.xml
<network>
  <name>ovsbr0</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-01' default='yes'>
  </portgroup>
</network>
# virsh net-define ovsbr0.xml
# virsh net-list --all
Name                 State     Autostart      Persistent
--------------------------------------------------
default              active    yes            yes
ovsbr0               inactive  no             yes
# virsh net-destroy default
# virsh net-start ovsbr0
# virsh net-info ovsbr0
Name:           ovsbr0
UUID:           XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Active:         yes 
Persistent:     yes 
Autostart:      yes 
Bridge:         ovsbr0 
# virsh net-autostart ovsbr0

Once completed, I ended up with the following:

# virsh net-list --all
 Name                 State     Autostart      Persistent
 --------------------------------------------------
 ovsbr0               active    yes            yes
 ovsbr1               active    yes            yes

As you can see, the bridges are available for use, and the old bridge is no longer defined.

Step 6: Assign the VMs

The penultimate step is to assign the VMs to the appropriate bridge. There are a number of ways to do this. If the VM is running, you can use virt-manager and graphically change the networking (which is what I did once my management network came back up, as my VMs were still running). Or you can shut down each VM using virsh and edit the XML by hand. If you go that route, change the networking segments to look like the following, which differs from the interface element used in my earlier Open vSwitch v1 write-up.

# virsh edit vmname
 ...
 <interface type='network'>
 <mac address='52:54:00:XX:YY:ZZ'/>
 <source network='ovsbr1'/>
 <model type='virtio'/>
 <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
 </interface>
 ...
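
Once a VM is back up (or while it is still running, for the virt-manager path), you can confirm which network each of its interfaces landed on. A quick check, with the VM name as a placeholder and the output being illustrative:

# virsh domiflist vmname
Interface  Type     Source   Model    MAC
-------------------------------------------------------
vnet0      network  ovsbr1   virtio   52:54:00:XX:YY:ZZ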

Once you are done, your Open vSwitch definition may look like:

# ovs-vsctl show
XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "mgmt0"
            Interface "mgmt0"
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge "ovsbr1"
        Port "ovsbr1"
            Interface "ovsbr1"
                type: internal
        Port "vnet3"
            Interface "vnet3"
        Port "vnet4"
            Interface "vnet4"
        Port "vnet2"
            Interface "vnet2"
        Port "vnet0"
            Interface "vnet0"
        Port "vnet1"
            Interface "vnet1"
        Port "eth1"
            Interface "eth1"
    ovs_version: "2.1.2"

Now I have successfully migrated my KVM node to Open vSwitch 2.x, with all my VMs on the proper networks: one for management and one for workloads. The only thing left is to install RHEV-M within a VM so that I can begin to manage the node. With the proper networking in place, my standard DNS configuration applies.

Comments

  1. Do these procedures still work for the final RHEL 7? Some parts of your post use OVS 1.7.1 and others use 2.1.2. I’m trying to install OVS on CentOS 7 and am having problems getting OVSDB to start under systemd.

    1. Hello Greg,

      Yes these work just fine on RHEL7 GA. Using it now actually.

      Best regards,
      Edward L. Haletky

  2. I only have a single NIC and have followed every step, but I cannot ping externally from the guest. I’ve tried everything from changing iptables to adding VLANs, to no avail. Do you have any suggestions?

    1. Generally that means that your OVS is not configured correctly (as in attached to the proper port on the system) or that your VM has an incorrect OVS port. When you list the OVS you should see a mix of physical and virtual ports represented. Make sure the physical matches your system.
