VirtIO Drivers

From ProfitBricks Online Help

Virtio provides an efficient abstraction for hypervisors and a common set of I/O virtualization drivers. It was chosen as the main platform for I/O virtualization in KVM. Four drivers are available:

• Balloon - The balloon driver affects the memory management of the guest OS.

• VIOSERIAL - The serial driver, which addresses the single-serial-device limitation within KVM.

• NetKVM - The network driver, which affects Ethernet network adapters.

• VIOSTOR - The block driver, which affects SCSI-based controllers.


Installing VirtIO Drivers for Windows

Windows-based systems require VirtIO drivers primarily to recognize the VirtIO (SCSI) controller and network adapter presented by the ProfitBricks KVM-based hypervisor. This can be accomplished in a variety of ways depending on the state of the virtual machine.

ProfitBricks provides pre-configured Windows Server images which already contain the necessary VirtIO drivers and optimal network adapter configuration.

ProfitBricks also offers a VirtIO ISO to easily manage the driver installation process for Windows 2008 R2, Windows 2012 and Windows 2012 R2 systems. This ISO can be found in the CD-ROM drop-down menu under “ProfitBricks Images”. It can be used for new Windows installations (only required for customer-provided media) as well as for Windows images that have been migrated from other environments (e.g. via VMDK upload).

TIP: For older Windows operating systems, a VirtIO driver ISO can be downloaded from the Fedora archive.

The easiest way to install the VirtIO drivers on a Windows-based system is to boot directly from the ProfitBricks VirtIO ISO, which automates the installation process. This can be accomplished by mounting the ISO as a virtual CD-ROM, marking it as the boot device, and restarting the system to boot directly into the menu-driven (DOS-style) utility depicted in Figure 1. At this point, the ProfitBricks remote console can be used to follow the on-screen options and complete the driver installation.

Tip: After the driver installation, remember to remove the VirtIO virtual CD-ROM, set the appropriate disk as the boot device, and provision the changes to boot into the operating system.


Figure 1 – Install windows1.png


Alternatively, if the operating system was installed from a customer-provided ISO, the storage driver was likely loaded during the operating system installation. The remaining VirtIO drivers can be installed from within the operating system by updating the drivers for the devices listed under “Other devices” in the Device Manager, as shown in Figure 2 below:


Figure 2 – Install windows2.png
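As an alternative to clicking through Device Manager, the problem devices can be listed and the drivers installed from an elevated command prompt. The following is a sketch only; it assumes the ProfitBricks VirtIO ISO is mounted as drive D: and that the driver .inf files sit in its root (the D:\ path is an assumption — adjust it to the actual ISO layout):

```shell
:: List devices that currently report a driver problem
:: (devices missing VirtIO drivers appear here with a non-zero error code).
wmic path Win32_PnPEntity where "ConfigManagerErrorCode<>0" get Name,DeviceID

:: Add and install the driver packages found on the mounted VirtIO CD-ROM.
:: The D:\*.inf location is an assumption; point it at the actual .inf files.
pnputil -i -a D:\*.inf
```

After the drivers are installed, the entries should disappear from “Other devices” in Device Manager.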


There may also be situations where a virtual machine has been imported from a third-party environment (e.g. via VMDK upload) and the necessary drivers are not included in the VirtIO ISO provided by ProfitBricks. In this case, temporarily change the Bus Type of the drive containing the operating system boot partition from VirtIO to IDE so that the system can boot, then install the corresponding drivers from Device Manager as described in the previous section. Figure 3 depicts this configuration:


Figure 3 – Install windows3.png


TIP: If an imported machine has only one drive, temporarily attach a secondary VirtIO-enabled drive so that this hardware type is displayed in Device Manager. This allows the block driver to be installed, after which the original drive can be changed back from IDE to VirtIO.

Lastly, it is recommended to optimize the internal (LAN) adapters of Windows systems that were deployed from a non-ProfitBricks image with the following configuration:


Set the MTU Value to 64000

At a command prompt, execute the following:

netsh interface ipv4 set subinterface "Local Area Connection" mtu=64000 store=persistent
  • Replace "Local Area Connection" with the name of the adapter you want to configure.
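The new MTU can be verified immediately afterwards; for example:

```shell
:: Show the MTU of all IPv4 subinterfaces;
:: the configured adapter should report 64000 in the MTU column.
netsh interface ipv4 show subinterfaces
```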


Disable TCP Offloading / Chimney & activate TCP/IP Auto Tuning

At a command prompt, execute the following:

netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global congestionprovider=none
netsh int tcp set global netdma=disabled dca=disabled
netsh int tcp set global ecncapability=disabled
netsh int tcp set global timestamps=enabled
netsh interface tcp set global autotuninglevel=normal


The changes will be active after a restart. The following command can be used to verify the status of the configuration above:

netsh interface tcp show global

Installing VirtIO Drivers for Linux-based Systems

Most Linux distributions already contain the drivers required to interact with the KVM hypervisor; therefore, no additional drivers are required when installing a Linux-based machine at ProfitBricks.

Additionally, ProfitBricks Linux images are pre-configured with an optimal network configuration, which includes increasing the MTU size to 64000. This is recommended to take full advantage of the ProfitBricks InfiniBand-based internal network.

Linux systems installed from non-ProfitBricks media require setting the MTU size to 64000 after the initial operating system installation to optimize the network adapters. Below is a sample configuration for a Red Hat-based system:

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
TYPE=Ethernet
ONBOOT=yes
MTU=64000
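The new value can also be applied without a reboot by setting the MTU on the running interface; a sketch, assuming the interface is named eth0:

```shell
# Apply the new MTU immediately (the ifcfg-eth0 change above still
# takes care of persistence across reboots).
ip link set dev eth0 mtu 64000

# Verify the active MTU on the interface.
ip link show eth0
```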

Also, Linux systems that have been imported from a non-ProfitBricks environment (e.g. via VMDK upload) may fail to boot because the VirtIO drivers are not included in the initial ramdisk. To correct this, it may be necessary to create a new initial ramdisk image so that the kernel preloads the block device modules needed to access the file system. To do this, boot into “Linux Rescue” from Disc 1 of your Linux installation media. From the Linux rescue command prompt, create a new image with “mkinitrd” and the virtio parameters. Below is a sample list of commands that can be run from the rescue shell on a Red Hat-based system:

# chroot /mnt/sysimage
# cd /boot
# mv initrd-2.6.18-308.el5.img initrd-2.6.18-308.el5.img.backup
# mkinitrd --with virtio --with virtio_blk --with virtio_net --with virtio_pci -f initrd-2.6.18-308.el5.img 2.6.18-308.el5
  • It is always recommended to make a backup of your initrd, as shown above. Also, replace the kernel version accordingly.
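Before rebooting, the rebuilt initrd can be inspected to confirm that the VirtIO modules were actually included; a sketch, assuming a gzip-compressed cpio initrd as used on RHEL 5 (adjust the file name to your kernel version):

```shell
# List the contents of the new initrd and filter for the virtio kernel modules;
# virtio, virtio_blk, virtio_net and virtio_pci should all appear.
zcat /boot/initrd-2.6.18-308.el5.img | cpio -it | grep virtio
```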

At this point, the Linux virtual machine should be able to boot directly from the VirtIO disks. Note that servers running a graphical interface (runlevel 5) may encounter a graphics error because the video driver may not be the correct one. This can be fixed by reconfiguring “X” from runlevel 3. As an example, this is done by executing the following command on a Red Hat-based system:

# system-config-display --reconfig