February 24th, 2011

BIND 9 gives us the ability to run a split DNS configuration on a single server. In this article we will go over the configuration of slave servers, which will transfer the zones from the master while still maintaining their views.

Now before we go any further, if you do not have a working master server with views configured you will want to review part 1 here.

Configure the Slaves

slavedns01:/# cat /etc/issue
Debian GNU/Linux 5.0 \n \l
slavedns01:/# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.0.12
    netmask 255.255.255.0
    gateway 10.0.0.1

auto eth0:0
iface eth0:0 inet static
    address 10.0.0.14
    netmask 255.255.255.0
slavedns01:/# cat /etc/bind/named.conf
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local

include "/etc/bind/named.conf.options";

include "/etc/bind/named.conf.local";
slavedns01:/# cat /etc/bind/named.conf.options
options {
    directory "/var/cache/bind";

    // If there is a firewall between you and nameservers you want
    // to talk to, you may need to fix the firewall to allow multiple
    // ports to talk.  See http://www.kb.cert.org/vuls/id/800113

    // If your ISP provided one or more IP addresses for stable
    // nameservers, you probably want to use them as forwarders.
    // Uncomment the following block, and insert the addresses replacing
    // the all-0's placeholder.

    // forwarders {
    //     0.0.0.0;
    // };

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
};
slavedns01:/# cat /etc/bind/named.conf.local
//
// Do any local configuration here
//

// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";

acl master { 10.0.0.11/32; };

acl internals { !10.0.0.1/32; 10.0.0.0/24; localhost; };

acl externals { 10.0.0.1/32; any; };

view "internal" {
match-clients { internals; };
query-source address 10.0.0.12 ;
transfer-source 10.0.0.12 ;
allow-recursion { any; };
zone "allanglesit.net" {
type slave;
file "/var/cache/bind/internal/db.example.org";
masters { 10.0.0.11; };
allow-notify { master; };
};
zone "0.0.10.in-addr.arpa" {
type slave;
file "/var/cache/bind/internal/db.reverse.10.0.0";
masters { 10.0.0.11; };
allow-notify { master; };
};
zone "." {
type hint;
file "/etc/bind/db.root";
};
zone "localhost" {
type master;
file "/etc/bind/db.local";
};
zone "127.in-addr.arpa" {
type master;
file "/etc/bind/db.127";
};
zone "0.in-addr.arpa" {
type master;
file "/etc/bind/db.0";
};
zone "255.in-addr.arpa" {
type master;
file "/etc/bind/db.255";
};
};

view "external" {
match-clients { externals; };
query-source address 10.0.0.14 ;
transfer-source 10.0.0.14 ;
allow-recursion { none; };
zone "example.org" {
type slave;
file "/var/cache/bind/external/db.example.org";
masters { 10.0.0.11; };
allow-notify { master; };
};
};

As I mentioned in part 1, we are using two slave servers. I won't go into how to configure the second one, as it is exactly the same as the first with the exception of the IP addresses; a sketch of the differing lines follows below. So now let's go through some of the configuration in a little more detail.
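For reference, here is a minimal sketch of the lines that differ on slavedns02, assuming it uses 10.0.0.13 (eth0) and 10.0.0.15 (eth0:0) as the master's internal-slaves and external-slaves ACLs from part 1 imply:

# /etc/network/interfaces on slavedns02
iface eth0 inet static
    address 10.0.0.13
    ...
iface eth0:0 inet static
    address 10.0.0.15
    ...

# /etc/bind/named.conf.local on slavedns02
view "internal" {
    query-source address 10.0.0.13;
    transfer-source 10.0.0.13;
    ...
view "external" {
    query-source address 10.0.0.15;
    transfer-source 10.0.0.15;
    ...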

The first thing we will talk about is the network configuration. Each view needs its own IP address so that the slave can transfer the zone from the correct view on the master. We do this using aliases (also called sub-interfaces). Below we have defined the interface (eth0) and the alias (eth0:0).

auto eth0
iface eth0 inet static
    address 10.0.0.12
    netmask 255.255.255.0
    gateway 10.0.0.1

auto eth0:0
iface eth0:0 inet static
    address 10.0.0.14
    netmask 255.255.255.0

When configuring the internals acl in /etc/bind/named.conf.local you will notice that I have included 10.0.0.0/24 while excluding 10.0.0.1. This is because my external DNS queries are NAT'd through my firewall, so they appear to come from the firewall's internal interface. This acl is later referenced in the match-clients statement of the view. Additionally, you will need to specify both the query-source address and the transfer-source; these parameters ensure that requests leave from the correct IP when transferring the zones from the master.

acl internals { !10.0.0.1/32; 10.0.0.0/24; localhost; };

acl externals { 10.0.0.1/32; any; };

view "internal" {
    match-clients { internals; };
    query-source address 10.0.0.12;
    transfer-source 10.0.0.12;
    allow-recursion { any; };

This should complete the configuration of your split DNS using BIND 9 views.
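To sanity-check the result you can query the slave from both sides of the views. As a hedged example (the expected answers assume the zone data from part 1):

From an internal client:
# dig @10.0.0.12 www.example.org +short
10.0.0.51

From an external client (whose query is NAT'd through the firewall):
# dig @10.0.0.12 www.example.org +short
1.1.1.3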

February 24th, 2011

In almost all organizations the network infrastructure needs to be designed to allow both internal and external name resolution authoritatively. In most organizations this has been accomplished by having separate internal and external servers. Clearly this approach is functional and simpler; however, it is also wasteful considering how few resources DNS actually requires. BIND 9 gives us another method to manage these configurations: views, which serve different zone data depending on the network location a query comes from.

This is how the series will be broken down.

Part 1 Configuring the master server.

Part 2 Configuring the slave server(s).

Environment Details

-1 Master (does not service requests from clients)

-2 Slaves (which service requests from clients)

-Service both Internal and External requests (allowing recursion on Internal Requests only)

 

Configure the Master

masterdns01:/# cat /etc/issue
Debian GNU/Linux 5.0 \n \l
masterdns01:/# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    gateway 10.0.0.1
masterdns01:/# cat /etc/bind/named.conf
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local

include "/etc/bind/named.conf.options";

include "/etc/bind/named.conf.local";
masterdns01:/# cat /etc/bind/named.conf.options
options {
    directory "/var/cache/bind";

    // If there is a firewall between you and nameservers you want
    // to talk to, you may need to fix the firewall to allow multiple
    // ports to talk.  See http://www.kb.cert.org/vuls/id/800113

    // If your ISP provided one or more IP addresses for stable
    // nameservers, you probably want to use them as forwarders.
    // Uncomment the following block, and insert the addresses replacing
    // the all-0's placeholder.

    // forwarders {
    //     0.0.0.0;
    // };

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
};
masterdns01:/# cat /etc/bind/named.conf.local
//
// Do any local configuration here
//

// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";

acl internal-slaves { 10.0.0.12/32; 10.0.0.13/32; };
acl external-slaves { 10.0.0.14/32; 10.0.0.15/32; };

acl internals { !10.0.0.1/32; !10.0.0.14/32; !10.0.0.15/32; 10.0.0.0/24; localhost; };
acl externals { 10.0.0.1/32; 10.0.0.14/32; 10.0.0.15/32; any; };

view "internal" {
    match-clients { internals; };
    allow-recursion { any; };
    zone "example.org" {
        type master;
        file "/etc/bind/internal/db.example.org";
        allow-transfer { internal-slaves; };
    };
    zone "0.0.10.in-addr.arpa" {
        type master;
        file "/etc/bind/internal/db.reverse.10.0.0";
        allow-transfer { internal-slaves; };
    };
    zone "." {
        type hint;
        file "/etc/bind/db.root";
    };
    zone "localhost" {
        type master;
        file "/etc/bind/db.local";
    };
    zone "127.in-addr.arpa" {
        type master;
        file "/etc/bind/db.127";
    };
    zone "0.in-addr.arpa" {
        type master;
        file "/etc/bind/db.0";
    };
    zone "255.in-addr.arpa" {
        type master;
        file "/etc/bind/db.255";
    };
};

view "external" {
    match-clients { externals; };
    allow-recursion { none; };
    zone "example.org" {
        type master;
        file "/etc/bind/external/db.example.org";
        allow-transfer { external-slaves; };
    };
};

masterdns01:/# cat /etc/bind/internal/db.example.org
;BIND db file for example.org INTERNAL
;
$TTL 1d
;
@       IN      SOA     masterdns01.example.org. hostmaster.example.org. (
        110223001       ;serial number YYMMDDNNN
        8h              ;refresh
        2h              ;retry
        2d              ;expire
        6h              ;min ttl
)
IN      NS      masterdns01.example.org.
IN      NS      slavedns01.example.org.
IN      NS      slavedns02.example.org.

$ORIGIN example.org.

masterdns01     IN      A       10.0.0.11
slavedns01      IN      A       10.0.0.12
slavedns02      IN      A       10.0.0.13
www             IN      A       10.0.0.51
server          IN      A       10.0.0.55

masterdns01:/# cat /etc/bind/external/db.example.org
;BIND db file for example.org EXTERNAL
;
$TTL 1d
;
@       IN      SOA     masterdns01.example.org. hostmaster.example.org. (
        110223001       ;serial number YYMMDDNNN
        8h              ;refresh
        2h              ;retry
        2d              ;expire
        6h              ;min ttl
)
IN      NS      ns1.example.org.
IN      NS      ns2.example.org.

$ORIGIN example.org.

ns1             IN      A       1.1.1.1
ns2             IN      A       1.1.1.2
www             IN      A       1.1.1.3
masterdns01:/# cat /etc/bind/internal/db.reverse.10.0.0
;BIND db file for 10.0.0 INTERNAL
;
$TTL 1d
;
@       IN      SOA     masterdns01.example.org. hostmaster.example.org. (
        110223001       ;serial number YYMMDDNNN
        8h              ;refresh
        2h              ;retry
        2d              ;expire
        6h              ;min ttl
)
IN      NS      masterdns01.example.org.
IN      NS      slavedns01.example.org.
IN      NS      slavedns02.example.org.

11      IN      PTR     masterdns01.example.org.
12      IN      PTR     slavedns01.example.org.
13      IN      PTR     slavedns02.example.org.
51      IN      PTR     www.example.org.
55      IN      PTR     server.example.org.

So to look a little bit closer at the setup of the view itself…

Below you will see an excerpt of the /etc/bind/named.conf.local

Some important things to note. You might notice that I have excluded some individual IP addresses from the internals acl (10.0.0.1, 10.0.0.14, and 10.0.0.15) by placing an exclamation point before each address. This is a common Unix convention: the exclamation point negates whatever it is combined with, so != reads as "not equal". The reasoning is quite simple. The first address is the firewall; since my external DNS traffic is NAT'd through the firewall, external queries actually arrive from the firewall's internal interface (from the perspective of the DNS server). The other two addresses are the external aliases on the slaves; if you do not exclude these, the slaves will end up transferring your internal zones into all of their views, rendering your views nearly worthless.

acl internals { !10.0.0.1/32; !10.0.0.14/32; !10.0.0.15/32; 10.0.0.0/24; localhost; };
acl externals { 10.0.0.1/32; 10.0.0.14/32; 10.0.0.15/32; any; };

view "internal" {
    match-clients { internals; };
    allow-recursion { any; };
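Before pointing the slaves at the master it is worth validating the configuration and zone files; these checks ship with BIND 9 (the paths match the files above):

masterdns01:/# named-checkconf /etc/bind/named.conf
masterdns01:/# named-checkzone example.org /etc/bind/internal/db.example.org
masterdns01:/# named-checkzone example.org /etc/bind/external/db.example.org
masterdns01:/# rndc reload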

In part 2 we will be configuring the slave servers.

February 12th, 2011

Linux-KVM and Linux Containers both require a bridged interface so that a physical network connection can be shared with the guests. This can also be useful if you want to share a network configuration between a wireless card and a wired card (though I will not be going into that particular configuration here). Configuring a bridge is a pretty straightforward process.

# apt-get install bridge-utils

# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    address 192.168.0.200
    netmask 255.255.255.0
    gateway 192.168.0.1

auto eth1
iface eth1 inet manual

auto br0
iface br0 inet static
    address 192.168.0.201
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports eth1
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0

As you can see above, I have eth0 configured as a standard interface, while eth1 is initialized but not configured. The configuration actually lives on br0, and bridge_ports defines eth1 as its member.
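Once the file is in place you can bring the bridge up without rebooting; either of the following works on Debian/Ubuntu (expect a brief interruption on the affected interfaces):

# ifup br0
# /etc/init.d/networking restart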

You can also use brctl to display information on your bridge(s); once the bridge has been created it will show up in the output.

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.002219c41fc5       no              eth1

Once you start adding machines which use the bridge, you will see additional interfaces show up under the appropriate bridge's interfaces column.

# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.002219c41fc5       no              eth1
                                                        vnet0
                                                        vnet1

It is also important to note that if you try to configure multiple interfaces on the same bridge connected to the same network, you can end up creating loops and taking down your network traffic. This might be fixed by setting bridge_stp on; however, I am not positive and I have not tested it. I have also tried to create a bond and then bridge the bond (in order to increase the throughput available to the guests). I was able to get it working at the host level, but the guests could not see anything on the network. If I succeed in a later attempt I will post an update; if you are able to do so, please post in the comments and I would be glad to update my post with your credited information.
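For reference, this is roughly the shape of the bond-plus-bridge configuration I was attempting. Treat it as an untested sketch: it assumes the ifenslave package, two spare NICs (eth1 and eth2), and round-robin bonding, and option names vary slightly between ifenslave versions (newer ones use bond-slaves/bond-mode).

# apt-get install ifenslave

auto bond0
iface bond0 inet manual
    slaves eth1 eth2
    bond_mode balance-rr
    bond_miimon 100

auto br0
iface br0 inet static
    address 192.168.0.201
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0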


As part of virtualization on Hyper-V, there is the ability to use para-virtualized drivers instead of the built-in drivers for slower, emulated devices. Para-virtualized drivers are simply device drivers written for native virtual devices; in other words, they are not emulating a pre-existing physical device. The Path to Enlightenment is simply the method you use to take an operating system and enable it to use these synthetic devices.

In this article we will describe the different methods of enabling enlightenment on Ubuntu Linux for the following three categories: Ubuntu 9.04 and earlier, Ubuntu 9.10, and Ubuntu 10.04. Each of these versions has differences, sometimes slight and sometimes major, which change the procedure.

Methods of Enlightenment

The first method of enlightenment for Linux on Hyper-V is the release package from Microsoft, the Linux Integration Components (LIC). As of this writing there are versions 1.0, 2.0, and a release candidate of 2.1. If you are using Hyper-V 2008 R2, you will need version 2.0 or greater of the LIC. All versions of the LIC include vmbus, netvsc, storvsc, and blkvsc. In the 2.1 release candidate Microsoft has additionally incorporated time sync, heartbeat, and integrated shutdown into the vmbus; 2.1 also includes code which allows for SMP (multiple processors). Prior to this version you only received the synthetic device drivers for SCSI (storvsc), IDE (blkvsc), and networking (netvsc), plus the underlying vmbus architecture on which all of the previous components depend. The biggest drawback to the LIC is the support statement Microsoft has made: currently the targeted distributions are very narrow (though much wider than when first released), and of course Ubuntu is not one of them (that would just be too easy). Because of this the LIC installer can be a bit tricky on Ubuntu.

The second method is using a Linux kernel version 2.6.32 or newer, compiled with the Hyper-V drivers from the staging tree of the kernel source code. The kernel can either be self-compiled or installed via a kernel package. The primary difference between the Linux Integration Components and the kernel modules is that the kernel modules have been renamed to follow kernel coding standards: all modules that exist in the LIC have been prefixed with hv_, so the modules are named hv_vmbus, hv_netvsc, hv_blkvsc, and hv_storvsc. Please keep in mind that self-compiled kernels are NOT supported by Canonical (the company behind Ubuntu); if your system is production you should not use this option unless you are willing to self-support. Additionally, fixes rolled into version 2.6.32.6 allow the hv_vmbus module to properly handle SMP (multiple processors), so please ensure that you are taking advantage of this. Ultimately the newer-kernel method is preferable in most cases since it bypasses Microsoft's installer, which does not support Ubuntu.
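If you want to confirm whether a given kernel actually ships these staging modules before committing to a method, a quick check (the module names assume the hv_ naming described above):

# uname -r
# find /lib/modules/$(uname -r) -name 'hv_*'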

Mouse Integration

When a user connects to a VM with the Hyper-V Management Console or System Center Virtual Machine Manager and begins to interact with the desktop via the mouse, they find themselves in an interesting situation: they are unable to reclaim the mouse pointer from the VM without the somewhat cryptic keystroke Ctrl + Alt + Left Arrow. Mouse integration provides seamless interaction between the VM and the workstation being used to connect to it. If a VM Connect window is selected AND the mouse pointer is inside the boundary of the VM's desktop, the mouse commands are sent to the guest; if these conditions are not met, the commands are sent to the workstation.

Mouse integration is currently achieved by installing the inputvsc module included in Citrix's Project Satori. This installer has the same support statement as the LIC, meaning it is not supported on Ubuntu (though in some cases it does work). The inputvsc module requires the vmbus module to be installed and operable; the kernel-included modules do not satisfy this requirement, because they provide hv_vmbus as a replacement for vmbus.

It is also important to note that without mouse integration via Citrix's Project Satori, you will be unable to capture and use the mouse when using the SCVMM or Hyper-V Management Consoles through an RDP session. However, if you are using those consoles directly on the physical machine in front of you, you will be able to capture the mouse as expected.

Linux Integration Components – Future Versions

Currently the code for the Linux Integration Components version 2.1 RC is in the linux-next source tree, so unless problems are discovered that warrant their removal, the drivers will be included in 2.6.35. This means we will be able to rely on the actual kernel modules to provide synthetic devices as well as time sync, integrated shutdown, and heartbeat.

Method 1 – Use 2.6.32 Modules from Default Install (Ubuntu 10.04 only)

Easy Install: YES
SMP-Safe: YES
Mouse Integration Compatible: NO
Integrated Shutdown: NO
Pre-Release Software: NO

Configure Modules to Load on Boot

Add the following to /etc/initramfs-tools/modules

hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

# update-initramfs -u

Configure Synthetic Network Interface

Add the following to /etc/network/interfaces

auto seth0
iface seth0 inet dhcp
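After rebooting with the updated initramfs, you can confirm that the synthetic devices came up; a quick sanity check (assuming the module and interface names used above):

# lsmod | grep hv_
# ifconfig seth0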

Method 2 – Install 2.6.32 Kernel Image from Lucid Repository (Ubuntu 9.10 and Earlier)

Easy Install: YES
SMP-Safe: YES
Mouse Integration Compatible: NO
Integrated Shutdown: NO
Pre-Release Software: NO

Download Kernel Image

# wget http://us.archive.ubuntu.com/ubuntu/pool/main/l/linux/linux-image-2.6.32-21-server_2.6.32-21.32_amd64.deb

Install Kernel Image

# dpkg -i linux-image-2.6.32-21-server_2.6.32-21.32_amd64.deb

Configure Modules to Load on Boot

Add the following to /etc/initramfs-tools/modules

hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

# update-initramfs -u -k 2.6.32-21-server

Configure Synthetic Network Interface

Add the following to /etc/network/interfaces

auto seth0
iface seth0 inet dhcp

Method 3 – Install 2.6.18 Kernel Image for Direct Installation of LIC v2.0 (Ubuntu 9.04 and Earlier)

Easy Install: NO
SMP-Safe: NO
Mouse Integration Compatible: YES
Integrated Shutdown: NO
Pre-Release Software: NO

Download Kernel Image and Headers

Add the following to /etc/apt/sources.list (comment these lines out after the installation is complete)

## Repositories which contain older versions of the Linux kernel (needed for the Hyper-V Linux IC)
deb http://ftp.us.debian.org/debian etch main
deb http://security.debian.org/debian-security etch/updates main

Install Kernel Image, Headers, and Prerequisites

# apt-get install build-essential linux-image-2.6.18-6-amd64 linux-headers-2.6.18-6-amd64

Install Linux IC v2.0 (Assumes Files are Pre-staged in /opt/linux_ic-20/)

# cd /opt/linux_ic-20

# ./setup.pl drivers

Install Linux Inputvsc (Assumes Files are Pre-staged in /opt/linux_ic-input/) – Optional for Mouse Integration

# cd /opt/linux_ic-input

# ./setup.pl input

Configure Modules to Load on Boot

Add the following to /etc/initramfs-tools/modules

vmbus
storvsc
blkvsc
netvsc
# Optional for Mouse Integration:
inputvsc

# update-initramfs -u

Configure Synthetic Network Interface

Add the following to /etc/network/interfaces

auto seth0
iface seth0 inet dhcp

Reboot

Method 4 – Install 2.6.18 Kernel Image for Direct Installation of LIC v2.1 (Ubuntu 9.04 and Earlier)

Easy Install: NO
SMP-Safe: YES
Mouse Integration Compatible: YES
Integrated Shutdown: YES
Pre-Release Software: YES

Download Kernel Image and Headers

Add the following to /etc/apt/sources.list (comment these lines out after the installation is complete)

## Repositories which contain older versions of the Linux kernel (needed for the Hyper-V Linux IC)
deb http://ftp.us.debian.org/debian etch main
deb http://security.debian.org/debian-security etch/updates main

Install Kernel Image, Headers, and Prerequisites

# apt-get install build-essential linux-image-2.6.18-6-amd64 linux-headers-2.6.18-6-amd64

Install Linux IC v2.1 (Assumes Files are Pre-staged in /opt/linux_ic-21/)

# cd /opt/linux_ic-21

# ./setup.pl drivers

Install Linux Inputvsc (Assumes Files are Pre-staged in /opt/linux_ic-input/) – Optional for Mouse Integration

# cd /opt/linux_ic-input

# ./setup.pl input

Configure Modules to Load on Boot

Add the following to /etc/initramfs-tools/modules

vmbus
storvsc
blkvsc
netvsc
# Optional for Mouse Integration:
inputvsc

# update-initramfs -u

Configure Synthetic Network Interface

Add the following to /etc/network/interfaces

auto seth0
iface seth0 inet dhcp

Reboot


#1 All major Linux distributions will work out-of-box on Hyper-V

By far one of the most common misconceptions is that Linux does not work on Hyper-V, or that only a small number of distributions do. This could not be farther from the truth: out of the box, most distributions will work. I have personally installed Gentoo, Debian, Ubuntu, CentOS, Fedora, RHEL, OEL, SLES, and OpenSUSE, and even some non-Linux operating systems such as FreeBSD and OpenSolaris.

One thing to keep in mind is that Hyper-V can expose two different kinds of virtual devices to guests: emulated and synthetic. To keep it very simple, emulated devices are actual hardware devices (the S3 Trio video card, Intel 440BX chipset, and Intel 21140 network adapter) recreated in software. Synthetic devices, on the other hand, are purely virtual devices, for which distributions must have specific drivers. The benefit of synthetic devices is that they are much faster, since they do not require the slower process of emulation; because of this it is best to install the Linux Integration Components where possible to get access to the synthetic devices.
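If you are unsure which kind of device a guest is currently using, one quick check is whether the synthetic drivers are loaded. A hedged example (these names match the LIC modules; the in-kernel versions carry an hv_ prefix):

# lsmod | egrep 'vmbus|vsc'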

#2 Linux Integration Components are GPL and are in the 2.6.32+ Linux kernel

In July of 2009 Microsoft released the source code for the Linux Integration Components version 2.0, which in turn allowed these drivers to be put directly into future versions of the Linux kernel. Microsoft has since released a beta of version 2.1 of the Linux Integration Components, and those changes should be integrated back into the kernel source as well (Linux kernel developers willing). Including these drivers in the kernel allows distributions to enable synthetic devices on Hyper-V out of the box and improves performance significantly for Linux guests, which can only increase adoption of Linux on Hyper-V. It also allows Independent Software Vendors to release Linux-based "virtual appliances" for Hyper-V more readily.

It is also important to note that mouse integration is not included in the kernel, and will not work with the kernel-included modules (due to the kernel team prepending "hv_" to the names of all four modules).

#3 SMP Bug is fixed in 2.6.32.6 and Linux Integration Components v2.1 beta

The original version of vmbus.ko (the vmbus driver in the Linux Integration Components) was not SMP-safe. After the source was released under the GPL, Linux kernel developers found and fixed the offending code, and the problem was resolved in 2.6.32.6. This fix was also rolled into the Linux Integration Components v2.1.

#4 Microsoft’s Linux support statement is unnecessarily complicated

People seem to have a really hard time with this, and frankly it is completely understandable. Microsoft has chosen to list specific Linux distributions (and versions) which they will support; I have listed them below.

SUSE Linux Enterprise Server 10 SP1 (x86 and x64)

SUSE Linux Enterprise Server 10 SP2 (x86 and x64)

SUSE Linux Enterprise Server 11 (x86 and x64)

Red Hat Enterprise Linux 5.2, 5.3 and 5.4 (x86 and x64)

The problem here comes from Microsoft and its customers not reading from the same page when it comes to the definition of support. Customers want to know that if anything goes wrong with the hypervisor, Microsoft will fix the problem, regardless of whether they found it through an unsupported guest configuration. Microsoft, on the other hand, wants to know that if a user contacts them with a problem which is traced back to an operating-system-level problem on Linux, there is another company which can step in and provide support at that point. This way Microsoft doesn't find itself in a position where it needs to help users recompile Linux kernels, or inversely has to show customers the door before their problems are resolved. As such, this requires another company to have a reciprocal support agreement with Microsoft, which allows Novell, for example, to pass hypervisor issues to Microsoft and Microsoft to pass distribution issues to Novell. Ultimately this is a good thing, but Microsoft really needs to clear the air and publish a test matrix for all of the major distributions describing what does and does not work (even if it is not supported).

#5 Time skew (clock drift) can be a problem

It is important to note that the problem of clock drift is not specific to Hyper-V or Microsoft; it exists in all guests of any virtualization platform, and even on physical hardware to an extent (though NTP has largely resolved the issue there). Virtualization, however, has to emulate the hardware clock for the guests, and since CPU load varies with workload, the guest's clock drifts as well. So while NTP and Windows Time keep your hosts on the correct time, time also needs to be synced between the host and the guest quite frequently. Hyper-V provides two levels of time sync: the first takes place when a guest starts, and the second is through a vmbus component included in the Integration Components. Linux experienced a much larger problem with time skew because this vmbus component was not included in the Linux Integration Components until the v2.1 beta. If you are not using the new version of the ICs, I recommend a combination of rdate and NTP to bring consistent time to your guests. NTP is mainly focused on making small changes to correct minor (normal) drift; rdate is better suited to making major time changes. So rdate makes the big changes and NTP keeps the time consistent.
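As a sketch of that combination (the time server name is a placeholder; rdate -s sets the system clock from the remote host, while ntpd runs normally and handles the small corrections), an hourly entry in /etc/crontab might look like this:

0 * * * * root /usr/bin/rdate -s time.example.org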

#6 Jumbo Frames are not supported

Jumbo frames are a networking technology which allows us to transmit data inside larger network packets, reducing the overhead needed to send large amounts of data. Think of it like this: if you have a 500-page paper which you need to snail-mail to someone, jumbo frames are the equivalent of a large manila envelope; your only other option is to split the paper into groups and fold away until they fit into a hundred or so standard envelopes. Either way, every envelope needs to be addressed, including a return address, so which would take you less time? Jumbo frames are not for everything; you will mostly want to use them in dedicated storage area networks or dedicated backup networks. In my environments I like to have my virtual machine hosts connected to the storage area network, allowing my guests direct access to the iSCSI storage. I host my VHDs locally with data drives attached from the SAN, which gives me more flexibility; it has saved me a lot of time with P2V and network migrations, and it is a very economical setup for smaller customers with an entry-level SAN. Due to this preference I enable jumbo frames on my guests' storage networks.
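As an illustrative sketch of enabling jumbo frames inside a guest (the interface name and the 9000-byte MTU are assumptions, and every switch and NIC in the path must also support the larger frames):

# ip link set eth1 mtu 9000

To make it persistent, add "mtu 9000" to that interface's stanza in /etc/network/interfaces.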

#7 Integrated shutdown has just been made available

Shutting down a VM is basic management as far as I am concerned, and it is an area in which Microsoft had been lacking since the release of Hyper-V. Lately, however, they have made some inroads: in the Linux Integration Components v2.1 beta they implemented integrated shutdown, which allows you to use the Shutdown button in the Hyper-V Management Console, or even have a graceful shutdown of the host trigger the shutdown of the Linux guest. This is performed by reaching inside the Linux operating system and issuing an init 0. The release candidate of the Linux Integration Components 2.1 also added a heartbeat to the vmbus, which allows integrated shutdown to detect the guest and the ICs inside the guest; this is required to shut down a VM using the SCVMM GUI.

Now, I would not be honest if I told you that the value in this is merely the ability to shut down the guest; the truth is that there is much greater value in this functionality. The ability to reach inside the operating system could eventually allow the host machine to gather key system information from the Linux guest, such as the version of the Linux ICs, kernel version, distribution, computer name, and other details which could be exposed in the Hyper-V Management Console or SCVMM to provide a richer management environment.

#8 System Center Virtual Machine Manager only has basic management of Linux guests (kinda)

My biggest concern with Virtual Machine Manager when it comes to Linux guests is its inability to pull data from the guests. This is not so much a limitation of the SCVMM product as a limitation of the communication channel between vmbus and the hypervisor. For Windows guests, the Integration Components allow both the Hyper-V Management Console and Virtual Machine Manager to pull the computer name and operating system version and display it or insert it into the database (as is the case in VMM). With the release candidate of the Linux Integration Components 2.1, Microsoft has built out this communication path between the hypervisor and the guest. Currently it only carries the heartbeat, but over time it can enable the richer management which will help customers across the board.
