Eight Things You Need To Know About Linux on Hyper-V

#1 All major Linux distributions will work out of the box on Hyper-V

By far the most common misconception is that Linux does not work on Hyper-V, or that only a small number of distributions do. This could not be further from the truth: out of the box, most distributions will work. I have personally installed Gentoo, Debian, Ubuntu, CentOS, Fedora, RHEL, OEL, SLES and OpenSUSE, and even some non-Linux operating systems such as FreeBSD and OpenSolaris.

One thing to keep in mind is that Hyper-V can expose two different kinds of virtual devices to guests: emulated and synthetic. To keep it simple, emulated devices are actual hardware devices (the S3 Trio video card, Intel 440BX chipset and Intel 21140 network adapter) that have been recreated in software. Synthetic devices, on the other hand, are purely virtual devices, for which distributions need specific drivers. The benefit of synthetic devices is that they perform much better, since they avoid the slower process of emulation. For this reason it is best to install the Linux Integration Components wherever possible to get access to the synthetic devices.
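A quick way to see whether a guest is actually using the synthetic devices is to check for the Hyper-V modules in the running kernel. This is a minimal sketch; the exact module names vary with the Integration Components version and kernel (the mainline kernel prefixes them with "hv_"):

    # List any Hyper-V modules loaded in the guest kernel.
    # Mainline kernels use hv_-prefixed names (e.g. hv_vmbus, hv_netvsc);
    # the standalone Integration Components use names like vmbus and netvsc.
    lsmod | grep -E 'hv_|vmbus'

If nothing turns up, the guest is almost certainly running on the emulated devices only.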

#2 Linux Integration Components are GPL and are in the 2.6.32+ Linux kernel

In July of 2009 Microsoft released the source code for the Linux Integration Components version 2.0, which in turn allowed these drivers to be put directly into future versions of the Linux kernel. Microsoft has since released a beta of version 2.1 of the Linux Integration Components, and these changes will be integrated back into the kernel source code (Linux kernel developers willing). Including these drivers in the kernel allows distributions to enable synthetic devices on Hyper-V out of the box, which improves performance significantly for Linux guests and can only increase adoption of Linux on Hyper-V. It also makes it easier for Independent Software Vendors to release Linux-based “virtual appliances” for Hyper-V.
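If you want to confirm that a given distribution kernel was built with these drivers, the kernel config is the place to look. A minimal sketch, assuming your distribution ships its config under /boot; the exact option names may differ between kernel versions, since the drivers initially lived in the staging tree:

    # Check whether the running kernel was built with the Hyper-V drivers.
    # The drivers entered the mainline tree via drivers/staging/hv.
    grep -i hyperv "/boot/config-$(uname -r)"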

It is also important to note that mouse integration is not included in the kernel, and it will not work with the kernel-included modules (because the kernel team appended “hv_” to the names of all four kernel modules).

#3 SMP Bug is fixed in 2.6.32.6 and Linux Integration Components v2.1 beta

The original version of vmbus.ko (the vmbus driver in the Linux Integration Components) was not SMP-safe. After the source was released under the GPL, Linux kernel developers found and fixed the offending code, and the issue was resolved in 2.6.32.6. The fix was also rolled into the Linux Integration Components v2.1.
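If you run SMP Linux guests, it is worth checking whether your kernel predates the fix. A minimal sketch using a version comparison (treat it as a hint only: distribution kernels frequently backport fixes, so a plain version check can give a false alarm):

    # Flag a running kernel older than 2.6.32.6 (requires GNU sort -V).
    current=$(uname -r | cut -d- -f1)
    if [ "$(printf '%s\n%s\n' "$current" 2.6.32.6 | sort -V | head -n1)" != "2.6.32.6" ]; then
        echo "Kernel $current may predate the SMP vmbus fix"
    fi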

#4 Microsoft’s Linux support statement is unnecessarily complicated

People seem to have a really hard time with this, and frankly that is completely understandable. Microsoft has chosen to list the specific Linux distributions (and versions) it will support; I have listed them below.

SUSE Linux Enterprise Server 10 SP1 (x86 and x64)

SUSE Linux Enterprise Server 10 SP2 (x86 and x64)

SUSE Linux Enterprise Server 11 (x86 and x64)

Red Hat Enterprise Linux 5.2, 5.3 and 5.4 (x86 and x64)

The problem here comes from Microsoft and its customers not reading from the same page when it comes to the definition of support. Customers want to know that if anything goes wrong with the hypervisor, Microsoft will fix the problem, regardless of whether they found it through an unsupported guest configuration. Microsoft, on the other hand, wants to know that if a user reports a problem that is traced back to an operating-system-level issue in Linux, there is another company which can step in and provide support at that point. This way Microsoft does not find itself in a position where it needs to help users recompile Linux kernels, or, inversely, has to show its customers the door before their problems are resolved. This requires another company to have a reciprocal support agreement with Microsoft, which allows Novell, for example, to pass hypervisor issues to Microsoft and Microsoft to pass distribution issues to Novell. Ultimately this is a good thing, but Microsoft really needs to clear the air and publish a test matrix for all of the major distributions describing what does and does not work (even if it is not supported).

#5 Time skew (clock drift) can be a problem

It is important to note that the problem of clock drift is not specific to Hyper-V or Microsoft; it exists in all guests on any virtualization platform, and even on physical hardware to an extent (though NTP has largely resolved the issue on physical hardware). Virtualization, however, has to emulate the hardware clock for its guests, and since CPU speeds vary with workload, the guest's notion of time varies with them. So while NTP and the Windows Time service keep your hosts on the correct time, time also needs to be synchronized between the host and the guest quite frequently. Hyper-V provides two levels of time sync: the first takes place when a guest starts, and the second is provided by a vmbus component included in the Integration Components. Linux experienced a much larger problem with time skew because this vmbus component was not included in the Linux Integration Components until the v2.1 beta. If you are not using the new version of the ICs, I recommend a combination of rdate and NTP to bring consistent time to your guests: NTP is focused on making small changes to correct minor (normal) drift, while rdate is better suited to making major time changes. So rdate makes the big changes and NTP keeps the time consistent.
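To make that concrete, here is a minimal sketch of the rdate-plus-NTP combination: ntpd runs continuously to absorb minor drift, and a cron job steps the clock with rdate to handle large skews. The time server name below is a placeholder for whatever time source you use internally:

    # Keep ntpd running for gradual drift correction
    # (service script name varies by distribution).
    /etc/init.d/ntpd start

    # Step the clock periodically to absorb large skews.
    # Add a line like this to /etc/crontab (time.example.com is a placeholder):
    #   */15 * * * *  root  /usr/bin/rdate -s time.example.com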

#6 Jumbo Frames are not supported

Jumbo frames are a networking technology which allows data to be transmitted inside larger network packets, reducing the overhead needed to send large amounts of data. Think of it like this: if you have a 500-page paper which you need to snail-mail to someone, a jumbo frame is the equivalent of a large manila envelope; your only other option is to split the paper into multiple bundles and fold away until they fit into a hundred or so standard envelopes. Once you have them all in envelopes (manila or standard), each one needs to be addressed, including a return address. Which would take you less time?

Jumbo frames are not for everything; you will mostly want them on dedicated storage area networks or dedicated backup networks. In my environments I like to have my virtual machine hosts connected to the storage area network, allowing my guests direct access to the iSCSI storage. I host my VHDs locally with data drives attached from the SAN, which gives me more flexibility and has saved me a lot of time on P2V conversions and network migrations. It is also a very economical setup for smaller customers with an entry-level SAN. Given this preference, I would like to enable jumbo frames on my guests' storage networks, and that is exactly what is not currently possible with Linux guests on Hyper-V.
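For reference, enabling jumbo frames on a Linux interface is normally just a matter of raising the MTU, which is what does not take effect on the Hyper-V synthetic adapter at this point. A minimal sketch (eth1 is a placeholder for a dedicated storage NIC; the switch and SAN must also be configured for the larger MTU):

    # Raise the MTU to a common jumbo-frame size on the storage interface.
    # The whole path (NIC, switch, SAN) must agree on the larger MTU.
    ip link set dev eth1 mtu 9000

    # Verify the change took effect.
    ip link show dev eth1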

#7 Integrated shutdown has just been made available

Shutting down a VM is basic management as far as I am concerned, and it is an area in which Microsoft had been lacking since the release of Hyper-V. Lately, however, they have made some inroads: the Linux Integration Components v2.1 beta implements integrated shutdown, which allows you to use the Shutdown button in the Hyper-V Management Console, or even have the graceful shutdown of the host trigger the shutdown of the Linux guest. This is performed by reaching inside the Linux operating system and issuing an init 0. The release candidate of the Linux Integration Components 2.1 also added a heartbeat to the vmbus, which allows integrated shutdown to detect the guest and the ICs inside it. This is required to shut down a VM through the SCVMM GUI.
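In other words, the host-initiated shutdown is the equivalent of running the following inside the guest yourself:

    # What integrated shutdown effectively issues inside the Linux guest:
    init 0            # switch to runlevel 0 (halt)
    # or, equivalently on most distributions:
    shutdown -h now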

Now, I would not be honest if I told you that the value here is merely the ability to shut down the guest. The truth is that there is much greater value in this functionality: the ability to reach inside the operating system could eventually allow the host to gather key system information from a Linux guest, such as the version of the Linux ICs, the kernel version, the distribution, the computer name, and other details which could be exposed to the Hyper-V Management Console or SCVMM to provide a richer management environment.

#8 System Center Virtual Machine Manager only has basic management of Linux guests (kinda)

My biggest concern with Virtual Machine Manager when it comes to Linux guests is its inability to pull data from the guests. This is not so much a limitation of the SCVMM product as a limitation of the communication channel between the vmbus and the hypervisor. For Windows guests, the Integration Components allow both the Hyper-V Management Console and Virtual Machine Manager to pull the computer name and the operating system version and display it or insert it into the database (as is the case in VMM). With the release candidate of the Linux Integration Components 2.1, Microsoft has built out this communication path between the hypervisor and the guest. Currently it only carries the heartbeat, but over time it can enable the richer management which will help customers across the board.