RHEV Upgrade Saga: KVM Client

Citrix has XenClient, but there is no equivalent for Hyper-V, vSphere, or KVM. Here are the steps to build a KVM client for your own use. Granted, this could be faked by using a graphical console on your KVM server (or on a Hyper-V host), but that defeats the purpose of keeping the consoles of the virtualization hosts unavailable for direct use; in fact, it would violate best security practices. A KVM client approach lets you use a local graphics adapter for a single VM and, with the proper hardware, for even more VMs with graphical capabilities. There are a number of circumstances in which a KVM client is desirable:

  • In a 100% virtual environment, for graphical access to management tools within the data center
  • For high-performance graphics at a desktop that is physically close to the data center
  • As part of a virtualization lab in which a lab host also doubles as a user access node
  • Where there is a need to run multiple VMs on bare metal on a laptop
  • When NVIDIA GRID cards are to be shared out.

Whatever the reasons (mine are pretty much all of the above), KVM needs a way to share desktops directly from the host, as well as through remote access, without exposing the hypervisor itself. The technique that follows works for any NVIDIA Quadro, Tesla, or GRID card, but not for any of the current NVIDIA gaming cards; it also works for several Radeon cards. This is a conscious decision by the vendors: what makes the technique work is in the driver, not the hardware.

Please be aware that this is a fairly technical undertaking and is not for those who are not intimately familiar with Linux. Try at your own risk. The steps are for Red Hat Enterprise Linux (RHEL), but they should work with any Fedora-based distribution running a 3.10 kernel, and similar steps work on other Linux distributions as well. I used RHEL 7 Release Candidate. Note that this cookbook does not cover RHEL installation, use of Red Hat Subscription Manager, or anything else outside the scope of the task at hand.

  1. Install RHEL 7 Release Candidate as a virtualization host with an administrative user that can sudo to root but is not root.
  2. Ensure the administrative user is able to use sudo. Log in as the administrative user, su to root, and run visudo. Uncomment the line that reads:  %wheel  ALL=(ALL)       ALL  and make sure the administrative user is a member of the wheel group.
  3. Using sudo, install all the virt-* packages, specifically virt-who (you will need this to use Subscription Manager within Linux VMs when using RHEL 7). Initially, virt-manager, virt-install, and virt-viewer are not installed. To set this up properly, you need those installed for the initial VM creation. Also install xorg-x11-xauth for remote X access.
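    For example, something along these lines should pull in what is needed (the package names are standard RHEL 7 names, but adjust to your repositories):
    [user@virtualization-host] sudo yum -y install virt-manager virt-install virt-viewer virt-who xorg-x11-xauth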
  4. Create a few pools for use by VMs. I have two pools as well as the default. The first is a volume group named vg_kvm for general VMs, and the second is one for my KVM client VM. The sequence for vg_kvm follows; repeat using a new volume group name for the KVM client VMs. I use 4 TB partitions for both volume groups.
    # virsh pool-define-as vg_kvm logical --target /dev/vg_kvm
    # virsh pool-start vg_kvm
    Pool vg_kvm started
    # virsh pool-autostart vg_kvm
    Pool vg_kvm marked as autostarted
    # virsh pool-list
     Name            State          Autostart
     -----------------------------------------
     default         active         yes
     vg_kvm          active         yes 
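    The logical pool assumes the LVM volume group already exists on the host. A minimal sketch of creating it first (the partition /dev/sdb1 is purely illustrative):
    # pvcreate /dev/sdb1
    # vgcreate vg_kvm /dev/sdb1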
  5. Using a remote X server (I use XQuartz on my rMBP), connect to the RHEL 7 virtualization host using: ssh -X root@rhel7-virtualization-host
  6. Now start virt-manager by typing at the prompt: virt-manager
  7. Create a RHEL 7 VM to use as your basic manager; I named this VM “rhel7-mgmt”. Ensure this VM has virtualization management capabilities: I chose the Server with GUI install method and then selected the virtualization management tools for installation. Since I do not currently use Spice, I changed the default graphics for the VM from Spice to VNC, with the Listen on all public network interfaces option checked. I also added a password for the VNC session and set the port to 5901 instead of automatic allocation; this way, I always know which VNC session to use for management.
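    For reference, a roughly equivalent virt-install sketch (the VM name matches mine, but the sizes, ISO path, and VNC password are illustrative and may need adjusting):
    # virt-install --name rhel7-mgmt --ram 4096 --vcpus 2 \
        --disk pool=vg_kvm,size=40 \
        --cdrom /var/lib/libvirt/images/rhel-server-7.0-x86_64-dvd.iso \
        --graphics vnc,listen=0.0.0.0,port=5901,password=changeme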
  8. Once the rhel7-mgmt VM was created, I killed the X session and logged in to the rhel7-mgmt VM over VNC using the newly created rhel7-virtualization-host:1 session. Once the VM was properly set up, I launched virt-manager from within it and connected to the virtualization host using the SSH method, pointing it at the bridge IP for the host (we have not installed Open vSwitch yet, so we can only use the default bridge) and logging in with the root password. I also removed the local hypervisor connection, as this is a VM, not a hypervisor.
  9. From the rhel7-mgmt node, connect to the hypervisor via SSH as the administrative user and move the RHEL 7 ISO to /var/lib/libvirt/images. I also placed a RHEL 6.5 ISO image in that location. Why? Because when you run virt-manager remotely, there are only so many places it can see installation ISOs. See figure 1; an example transfer follows.
    Figure 1: RHEL7-MGMT
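    For example (the ISO filename is illustrative):
    [user@rhel7-mgmt] scp rhel-server-7.0-x86_64-dvd.iso user@virtualization-host:/tmp/
    [user@rhel7-mgmt] ssh user@virtualization-host
    [user@virtualization-host] sudo mv /tmp/rhel-server-7.0-x86_64-dvd.iso /var/lib/libvirt/images/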
  10. Now we need to create a second VM in which to build a few kernel modules for RHEL 7. We do not want to pollute our virtualization host or management VM with a build environment, so we use another VM. This one is, once more, a stock RHEL 7 VM installed as a network server (there is no need for graphics on this VM) with nothing special added; we will install our build environment by hand. We need to rebuild the vfio-pci module with VFIO_PCI_VGA enabled, or else our KVM client will not get any graphics at all.
  11. Using a graphical terminal on rhel7-mgmt, access the rhel7-build machine and set up the node for building a kernel. Install the build prerequisites (kernel-devel pulls in a number of other dependencies), then prepare the kernel source tree:
    [user@rhel7-build] sudo yum -y install kernel-devel rpm-build rpmdevtools yum-utils ncurses-devel elinks
    [user@rhel7-build] rpmdev-setuptree
    [user@rhel7-build] wget ftp://ftp.redhat.com/redhat/rhel/rc/7/Server/source/tree/Packages/kernel-3.10.0-121.el7.src.rpm
    [user@rhel7-build] sudo yum-builddep kernel-3.10.0-121.el7.src.rpm
    [user@rhel7-build] rpm -ivh kernel-3.10.0-121.el7.src.rpm
    [user@rhel7-build] rpmbuild -bp ~/rpmbuild/SPECS/kernel.spec
    # if rpmbuild reports missing dependencies, install them, THEN rerun the previous command
  12. Once the kernel source tree is prepared, it is possible to rebuild it with just one change. We need to enable the VFIO_PCI_VGA option, which can be accomplished in one of three ways. The easiest is the following:
    [user@rhel7-build] cd rpmbuild/BUILD/kernel-3.10.0-121.el7/linux-3.10.0-121.el7.x86_64
    [user@rhel7-build] vi .config 

    Now, we search for CONFIG_VFIO_PCI_VGA, ensure that the line reads CONFIG_VFIO_PCI_VGA=y, and save the file:

    [user@rhel7-build] make oldconfig
    [user@rhel7-build] make
    [user@rhel7-build] make modules
    [user@rhel7-build] cd drivers/vfio/pci
    [user@rhel7-build] scp vfio-pci.ko user@virtualization-host:.
    [user@rhel7-build] ssh user@virtualization-host
    [user@virtualization-host] cd /lib/modules/`uname -r`/kernel/drivers/vfio
    [user@virtualization-host] sudo cp vfio-pci.ko vfio-pci.ko.orig
    [user@virtualization-host] sudo cp ~user/vfio-pci.ko .
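    Because a module under /lib/modules was replaced, I would also refresh the module dependency data and the initramfs as a precaution (my addition, not part of the original steps):
    [user@virtualization-host] sudo depmod -a
    [user@virtualization-host] sudo dracut -f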
    
  13. Now we have the bits we need built, and we just have to set up the virtualization host properly. This is the tricky bit. If you run into problems, refer to this long forum thread: https://bbs.archlinux.org/viewtopic.php?id=162768 . It helped me through most of the issues I faced while trying to use an NVIDIA GTX Titan; in the end, I had to invest in an NVIDIA Quadro K4000, and I considered picking up a K6000 or a GRID K2 card. To make a KVM client, however, you must have a card that connects to a display, so the GRID K2 is only suitable for remote access, not direct access.
  14. Now, on your rhel7-build VM, use elinks to download vfio-bind-1.0.0-7.1.noarch.rpm from rpm.pbone.net. Yes, this package is built for openSUSE, but it works on RHEL 7 as well. Transfer the RPM to your virtualization host and install it as follows; note the --nodeps option on the install.
    [user@rhel7-build] scp vfio-bind-1.0.0-7.1.noarch.rpm user@virtualization-host:.
    [user@rhel7-build] ssh user@virtualization-host
    [user@virtualization-host] sudo rpm -ivh --nodeps vfio-bind-1.0.0-7.1.noarch.rpm
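    A quick sanity check of what the package installed (my addition; the list should include the vfio-bind script and its configuration file):
    [user@virtualization-host] rpm -ql vfio-bind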
    
  15. Up until this point, we have just built some VMs, rebuilt the kernel, and installed RPMs. Now, things get more difficult. Please note that these steps, if not followed properly, could render your virtualization host inoperable. We now start modifying the boot parameters of the virtualization host. To do this, we first need to find the PCI IDs of the graphics adapter and the USB controller we will grant to the virtual machine we are about to build.
    [user@virtualization-host] lspci |egrep -i 'nv|usb'
    00:1a.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #2 (rev 06)
    00:1d.0 USB controller: Intel Corporation C600/X79 series chipset USB2 Enhanced Host Controller #1 (rev 06)
    06:00.0 VGA compatible controller: NVIDIA Corporation GK106GL [Quadro K4000] (rev a1)
    06:00.1 Audio device: NVIDIA Corporation GK106 HDMI Audio Controller (rev a1)
    [user@virtualization-host] lspci -n |egrep '06:00|00.1d'
    00:1d.0 0c03: 8086:1d26 (rev 06)
    06:00.0 0300: 10de:11fa (rev a1)
    06:00.1 0403: 10de:0e0b (rev a1)
    
  16. This gives us the data we need to ensure the system will boot while allowing us to use the graphics adapter and USB controller within a VM. The next step is to modify /etc/vfio-bind.conf to include the following line:
    DEVICES="0000:06:00.0 0000:06:00.1 0000:00:1d.0"
  17. Now we modify the kernel boot parameters. We do this by editing /boot/grub2/grub.cfg and appending the following to the linux16 line. Be sure to enable intel_iommu and to blacklist the nouveau driver so that the host never binds its own driver to the card being passed through. This works best when you have two distinct models of graphics adapter in your host. (An alternative that survives kernel updates is sketched after the kernel arguments.)
    rdblacklist=nouveau pci-stub.ids=10de:11fa,10de:0e0b,8086:1d26 intel_iommu=on
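    If you would rather not edit the generated grub.cfg directly, an alternative that survives kernel updates (my preference, not what the steps above used) is to append the same parameters to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the configuration:
    [user@virtualization-host] sudo vi /etc/default/grub
    GRUB_CMDLINE_LINUX="... rdblacklist=nouveau pci-stub.ids=10de:11fa,10de:0e0b,8086:1d26 intel_iommu=on"
    [user@virtualization-host] sudo grub2-mkconfig -o /boot/grub2/grub.cfg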
  18. Next we set up some specific module and system configurations for the virtualization host. Please note that while I used vi, you may use any editor you prefer. Each file must end up with the contents shown below the corresponding sudo vi command; all of these changes are necessary for things to work properly.
    [user@virtualization-host] sudo vi /etc/modprobe.d/blacklist.conf
    blacklist nouveau
    options nouveau modeset=0
    [user@virtualization-host] sudo vi /etc/modprobe.d/vfio_iommu_type1.conf
    options vfio_iommu_type1 allow_unsafe_interrupts=1
    [user@virtualization-host] cd /etc/systemd/system/multi-user.target.wants
    [user@virtualization-host] sudo ln -s /usr/lib/systemd/system/vfio-bind.service .
    [user@virtualization-host] sudo vi vfio-bind.service
    [Unit]
    Description=Bind devices to KVM vfio
    Before=libvirtd.service
    [Service]
    EnvironmentFile=/etc/vfio-bind.conf
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/usr/sbin/vfio-bind $DEVICES
    [Install]
    WantedBy=multi-user.target
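    After creating or editing the unit, I also have systemd reload its configuration and confirm the service is enabled (my addition, not in the original steps):
    [user@virtualization-host] sudo systemctl daemon-reload
    [user@virtualization-host] systemctl is-enabled vfio-bind.service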
  19. Edit /etc/libvirt/qemu.conf so that the following changes are made in two specific blocks of configuration: one for the user/group QEMU runs as and the other for the cgroup device ACL. The /dev/vfio/18 and /dev/vfio/10 entries correspond to the IOMMU groups of my passthrough devices; use the group numbers that appear on your host (see the next step). Then reboot the system.
    [user@virtualization-host] sudo vi /etc/libvirt/qemu.conf 
    ... 
    user = "root"
    # The group for QEMU processes run by the system instance. It can be
    # specified in a similar way to user.
    group = "root"
    ...
    cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
    "/dev/vfio/18", "/dev/vfio/10"
    ]
    ...
    [user@virtualization-host] sudo reboot
  20. If everything went as planned, you should now see numbered files in /dev/vfio. If you do not see any numbered files, then something has gone wrong: most likely the pci-stub.ids or intel_iommu=on option is missing from the kernel command line, or the vfio-bind service did not run.
    [user@virtualization-host] ls -al /dev/vfio
    total 0
    drwxr-xr-x. 2 root root 100 May 10 18:22 .
    drwxr-xr-x. 21 root root 3400 May 10 18:22 ..
    crw-------. 1 root root 246, 1 May 10 18:22 10
    crw-------. 1 root root 246, 0 May 10 18:22 18
    crw-rw-rw-. 1 root root 10, 196 May 10 18:22 vfio 
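    Another quick check, assuming the vfio-bind service ran, is that the passthrough devices are now claimed by vfio-pci:
    [user@virtualization-host] lspci -k -s 06:00.0
    # the output should include "Kernel driver in use: vfio-pci"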
  21. Now we are ready to create our KVM client VM. This VM starts out as a normal VM, but it has to be modified to support the passed-through graphics and USB devices. We start by using virt-manager to create a fairly normal-looking virtual machine with the appropriate amount of memory and storage. (I used 4 GB of memory and 4 TB of storage; after all, this VM is really replacing my Linux desktop/build machine/testing engine/etc.)
  22. Once we have the general VM built and installed, it is time to shut down the VM and make some significant changes to the default definition:
    1. Change the first line that defines the domain to include the QEMU XML namespace. My first line ended up looking like the following; basically, I added the xmlns:qemu bits:
      <domain type='kvm' id='4' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    2. Change the OS architecture to use a different machine definition and BIOS: in this case, the Q35 definition and seabios:
      <os>
        <type arch='x86_64' machine='pc-q35-rhel7.0.0'>hvm</type>
        <loader>/usr/share/seabios/bios.bin</loader>
        <boot dev='hd'/>
        <bootmenu enable='no'/>
      </os>
    3. Add a qemu:commandline stanza as the last stanza of the file, just before the closing </domain> tag. These extra command-line arguments are passed to QEMU when libvirt starts the VM, giving it direct access to the graphics adapter and the USB controller (for keyboard and mouse) that we set up with the rebuilt vfio-pci module.
      <qemu:commandline>
           <qemu:arg value='-device'/>
           <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
           <qemu:arg value='-device'/>
           <qemu:arg value='vfio-pci,host=06:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
           <qemu:arg value='-device'/>
           <qemu:arg value='vfio-pci,host=06:00.1,bus=root.1,addr=00.1'/>
           <qemu:arg value='-device'/>
           <qemu:arg value='vfio-pci,host=00:1d.0,bus=root.1,addr=1d.0'/>
         </qemu:commandline>
       </domain>
    4. Modify the <disk> stanza PCI device bus settings for the virtio virtual disks. I had to do this to get the VM to boot, as they otherwise conflict with the addresses used by the vfio-pci devices.
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
          </disk>
    5. Add in the Q35 SATA Controller and PCIe bridge devices, or else the VM will not find the passthrough devices.
      <controller type='sata' index='0'>
            <alias name='sata0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
      </controller>
      <controller type='pci' index='0' model='pcie-root'>
            <alias name='pcie.0'/>
      </controller>
    6. Now we boot the VM. (See below for the full domain XML file.)
  23. Once we boot the VM, we will have to modify the X.org configuration file; however, to do that we first must discover what the VM thinks is the PCI ID. We do that by running “X -configure” as the root user within the VM. This command will produce a file named /root/xorg.conf.new, which can be used to further configure X to run. It has one peculiarity: it has two graphical heads within it. One is for the Cirrus adapter we used to build the VM, but the first head (the important one) is the NVIDIA device for the KVM client.
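    To cross-check the address before editing, a quick look from inside the VM helps (the hostname and exact output here are illustrative); the bus number reported by the guest corresponds to the BusID used in xorg.conf, PCI:1:0:0 in my case:
    [root@kvm-client] lspci | grep -i nvidia
    01:00.0 VGA compatible controller: NVIDIA Corporation GK106GL [Quadro K4000] (rev a1)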
  24. From within the VM, copy the /root/xorg.conf.new file to /etc/X11/xorg.conf.
  25. Start graphics. Now, to get proper resolutions, you may want to refer to this post on configuring NVIDIA graphics for Linux (disregard the old version of Fedora; the instructions work generically for all X.org installations). The key, however, is to use the xorg.conf generated from X -configure as the basis for the changes you make. I have included my current xorg.conf below.

There you have it, a KVM client. Now I can do the same for any other VMs attached to an NVIDIA GRID card for remote display. The instructions are the same.

My current VM configuration in XML format can be imported into virsh. Please note that the VMNAME, MAC Address, and other identifying items have been tokenized to protect the innocent.

<domain type='kvm' id='4' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>VMNAME</name>
  <uuid>bda0a491-e21f-4ede-bd80-ccdf8a0faa91</uuid>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel7.0.0'>hvm</type>
    <loader>/usr/share/seabios/bios.bin</loader>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg_disk/VMNAME'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='UU:VV:WW:XX:YY:ZZ'/>
      <source network='default'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='XXXX' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=06:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=06:00.1,bus=root.1,addr=00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:1d.0,bus=root.1,addr=1d.0'/>
  </qemu:commandline>
</domain>

My current xorg.conf file for use within the VM.

Section "ServerLayout"         Identifier     "X.org Configured"         Screen      0  "Screen0" 0 0         Screen      1  "Screen1" RightOf "Screen0"         InputDevice    "Mouse0" "CorePointer"         InputDevice    "Keyboard0" "CoreKeyboard" EndSection Section "Files"         ModulePath   "/usr/lib64/xorg/modules"         FontPath     "catalogue:/etc/X11/fontpath.d"         FontPath     "built-ins" EndSection Section "Module"         Load  "glx" EndSection Section "InputDevice"         Identifier  "Keyboard0"         Driver      "kbd" EndSection Section "InputDevice"         Identifier  "Mouse0"         Driver      "mouse"         Option      "Protocol" "auto"         Option      "Device" "/dev/input/mice"         Option      "ZAxisMapping" "4 5 6 7" EndSection Section "Monitor"         Identifier   "Monitor0"         VendorName   "Monitor Vendor"         ModelName    "Monitor Model"         HorizSync       31.5 - 100.0         VertRefresh     40.0 - 150.0         ModeLine       "1920x1200_50.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync         Option         "DPMS" EndSection Section "Monitor"         Identifier   "Monitor1"         VendorName   "Monitor Vendor"         ModelName    "Monitor Model" EndSection Section "Device"         ### Available Driver options are:-         ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",         ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",         ### <percent>: "<f>%"         ### [arg]: arg optional         #Option     "SWcursor"                  # [<bool>]         #Option     "HWcursor"                  # [<bool>]         #Option     "NoAccel"                   # [<bool>]         #Option     "ShadowFB"                  # [<bool>]         #Option     "UseFBDev"                  # [<bool>]         #Option     "Rotate"                    # [<str>]         #Option     "VideoKey"                  # <i>         #Option     "FlatPanel"                 # [<bool>]         #Option     "FPDither"                  # [<bool>]         #Option     "CrtcNumber"                # <i>         #Option     "FPScale"                   # [<bool>]         #Option     "FPTweak"                   # <i>         #Option     "DualHead"                  # [<bool>]         Identifier  "Card0"         VendorName  "NVIDIA Corporation"         BoardName   "Quadro K4000"         Driver      "nvidia"         BusID       "PCI:1:0:0" EndSection Section "Device"         ### Available Driver options are:-         ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",         ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",         ### <percent>: "<f>%"         ### [arg]: arg optional         #Option     "SWcursor"                  # [<bool>]         #Option     "kmsdev"                    # <str>         #Option     "ShadowFB"                  # [<bool>]         Identifier  "Card1"         Driver      "modesetting"         BusID       "PCI:0:1:0" EndSection Section "Screen"         Identifier "Screen0"         Device     "Card0"         Monitor    "Monitor0"     DefaultDepth    24     Option         "NoVirtualSizeCheck"     Option         "DisableGLXRootClipping" "True"     Option         "RenderAccel" "true"     Option         "NoRenderExtension" "False"     Option         "AllowGLXWithComposite" "true"     Option         "UseEdidFreqs" "false"     Option         "AddARGBGLXVisuals" "true"     Option         "UseEdidDpi" "FALSE"     Option         "UseEdid" "FALSE"     Option         "NvAGP" "0"     Option         "DPI" "96 x 96"     
Option         "XAANoOffscreenPixmaps" "true"     Option         "DRI" "true"     Option         "HWcursor"     Option         "CursorShadow"     Option         "CursorShadowAlpha" "32"     Option         "CursorShadowXOffset" "2"     Option         "CursorShadowYOffset" "2"     Option         "TwinView" "0"     Option         "Stereo" "0"     Option         "nvidiaXineramaInfoOrder" "CRT-0"     Option         "metamodes" "1920x1440 +0+0; 1920x1200_50.00 +0+0; 1920x1200_60 +0+0; 1280x1024_60 +0+0; 1024x768_60 +0+0; 1024x768 +0+0; 800x600 +0+0; 640x480 +0+0; 1280x1024 +0+0"     Option         "SLI" "Off"     Option         "MultiGPU" "Off"     Option         "BaseMosaic" "off"     SubSection     "Display"         Depth       24     EndSubSection EndSection Section "Screen"         Identifier "Screen1"         Device     "Card1"         Monitor    "Monitor1"         SubSection "Display"                 Viewport   0 0                 Depth     1         EndSubSection         SubSection "Display"                 Viewport   0 0                 Depth     4         EndSubSection         SubSection "Display"                 Viewport   0 0                 Depth     8         EndSubSection         SubSection "Display"                 Viewport   0 0                 Depth     15         EndSubSection         SubSection "Display"                 Viewport   0 0                 Depth     16         EndSubSection         SubSection "Display"                 Viewport   0 0                 Depth     24         EndSubSection EndSection

Comments

  1. Apologies if I missed something in your post, but you seem to be rebuilding the RHEL7.0 kernel to enable VFIO-VGA and then hacking up your domain xml for Q35 and x-vga, but you’re assigning Nvidia Quadro cards that are supported for assignment on 7.0 as-is. Red Hat and Nvidia officially support Nvidia K-series Quadro, GRID, and Tesla cards for secondary GPU assignment. Therefore, you can take stock RHEL7.0 and simply use the standard 440FX device model in QEMU. The functional difference from what you have above is that the GPU will only be enabled when the Nvidia drivers are loaded; you’ll use the emulated graphics for boot and install.

    1. Where would I find this documentation as it was not prevalent when I went to do this the first time… I would prefer NOT to change my kernel but alas it seems the best way to get this performance. So documentation would be ideal here.

      Also would I not need to disable the nouveau driver/graphics else the Nvidia ones will never load when you hit initstate 5.

      1. I don’t think we have documentation specific to GPU assignment. The only special requirement is that graphics drivers are not as amenable to unbinding and re-binding as things like NIC drivers, so you need to use pci-stub or blacklisting to prevent them from being loaded in the host (as you list in step 17). In the guest, only the Nvidia driver is supported, and some versions of nouveau can produce AER errors on the device, so you want to avoid nouveau for the install, blacklist it, and install the Nvidia drivers. Note that blacklisting nouveau is required for any use case of the Nvidia driver, in a VM or on bare metal.

        For a Linux guest you will need to edit the xorg.conf file and add a BusID for the GPU. I recommend installing the guest w/o the GPU, blacklisting nouveau, then simply attach the GPU the same as you would a NIC to the VM. Note that the audio function of Quadro cards is not supported for assignment due to broken INTx support in the hardware, but if you can coax it to use MSI (modprobe snd_hda_intel enable_msi=1) it’s mostly ok (see my blog for how to coax Windows to use MSI). It’s perfectly acceptable to virsh nodedev-detach the audio function, but not assign it to the guest.
