March 28th, 2011

I have been doing research for my next big series, “So You Want to Learn ZFS.”  This will basically be a multi-part series of how-tos which will hopefully give you the ability to build a file server (or even a SAN) based on ZFS if you so choose.  However, there are a few things that I failed to take into account.

  1. Solaris 11 is so different from Solaris 10.
  2. Solaris 11 is so different from EVERYTHING else.

I figured that I’d be able to kind of gloss over the high points of how to get your system up and running and just dive right into the fun ZFS stuff.  So before we get into the good stuff there are some basics that we will need to go over first.  Today we will cover basic networking.

First off, with the acquisition of Sun by Oracle last year, the documentation is kind of scattered.  The most important place to know of is here.  I am sure Oracle will get this under control eventually.

Alright, so what makes network configuration so difficult with Solaris 11?  Some things are much easier than they should be, while others are just ridiculously difficult.  I personally attribute this to a tendency towards over-engineering on the part of Sun engineers: everything is done in the most correct way.  Now this is not to say that Solaris is better than everything else, or that Sun hardware was better than anything else.  My basic point is that the most correct way is not always the best way.  I think that Sun’s over-engineering hurt them in the long run (which ultimately is why Oracle bought them and not the other way around).  However, there is one area where I think the over-engineering paid off and the most correct way was actually the best way, and that is ZFS.  But I digress; that will be for a later article.

If you install Solaris 11 Express then by default network configuration is handled by a service called Network Auto Magic (NWAM), which simplifies the process significantly; however, if you want to do more advanced things such as link aggregation then this won’t work for you.  NWAM is really very much the same as NetworkManager: it can provide location-based networking profiles and manage multiple types of interfaces (wireless and wired) seamlessly, although it may not be the best fit for a server configuration.

Disable Network Auto Magic

# svcadm disable network/physical:nwam
# svcadm enable network/physical:default

Once we have disabled NWAM we will lose all network connectivity and configurations.
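
Before moving on, you can confirm that the default instance is now the one online with a quick look at the SMF service states (output will vary by system):

# svcs network/physical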

View the Datalink Devices

Solaris 11 devices have many layers to their configuration, which makes advanced configurations much simpler but does complicate basic ones.  Basically, the kernel is aware of the physical hardware, and we can see that with the first command.

# dladm show-phys
LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
bge0         Ethernet             unknown    1000   full      bge0

The second command gives us the ability to see the physical interface linked to a logical interface.  After disabling NWAM you will NOT have a logical interface linked to your physical device (in my case bge0); because of this you will see that the state of the datalink device is “unknown”.  Also it is important to note that the device names are based on the driver vendor (bge = Broadcom) and they are incremented based on the number of devices in the machine.

# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
bge0        phys      1500   unknown  --         --

Also, before we move on we will take a quick look at our existing logical interfaces; the only one you should have after disabling NWAM is lo0, which is your loopback interface.

# ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---

Create and Configure a Logical Interface

So the first step is creating a logical interface; then we can apply an IP configuration to it.  This will create a link from the logical interface to the physical interface, and will change the state we saw before from “unknown” to “up”.

# ipadm create-if bge0
# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
bge0        phys      1500   up       --         --
# ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---
bge0       down     bm--------46 -46

Above we have successfully created the logical interface, and we can now apply an IP configuration to it.  This is where it gets a bit tricky.  Notice below that we are going to apply DHCP as the configuration; we will end up deleting this configuration and making it static, so you also get the opportunity to learn how to change a configuration (which is really a delete and an add).  We will go through the specifics of the ipadm create-addr command after we have also covered the static variant, since they are very similar.

# ipadm create-addr -T dhcp bge0/v4
# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
bge0/v4           dhcp     ok           192.168.100.225/24
lo0/v6            static   ok           ::1/128

Now to delete the DHCP configuration from the logical interface so that we can make it static.

# ipadm delete-addr bge0/v4

And to create a static IP configuration on the logical interface.

# ipadm create-addr -T static -a 192.168.100.200/24 bge0/v4
# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
bge0/v4           static   ok           192.168.100.200/24
lo0/v6            static   ok           ::1/128

Alright so as we can see these are the two commands to create the configurations.

# ipadm create-addr -T dhcp bge0/v4
# ipadm create-addr -T static -a 192.168.100.200/24 bge0/v4

Now, the -T option defines the type of configuration; static and dhcp are the most common options, and -a supplies the address for a static configuration.  You will also notice that we are not using the logical interface name (bge0), but instead a variation (bge0/v4).  This is the address object name, and the suffix represents the version of the IP protocol the configuration is using, so you can have both a bge0/v4 and a bge0/v6.
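
As an aside, the same address-object naming works for IPv6.  A minimal sketch (assuming you want an auto-configured IPv6 address, and that the addrconf type is available on your build; check the ipadm man page) would be:

# ipadm create-addr -T addrconf bge0/v6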

Alright, so you have successfully configured your network interfaces; however, NWAM was doing more than just this, so you might not have full network connectivity yet.

Verify Full Network Configuration and Connectivity

Using some of the above commands we can review our configurations.

# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
bge0/v4           static   ok           192.168.100.200/24
lo0/v6            static   ok           ::1/128

Additionally we need to verify name resolution and routing in order to be confident in our configuration.

# netstat -r

Routing Table: IPv4
Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
solaris              solaris              UH        2          0 lo0
192.168.100.0        192.168.100.200      U         3          1 bge0

Routing Table: IPv6
Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------- -----
solaris                     solaris                     UH      2       4 lo0

The above displays the current routing table (which does not have a default route); ensure your default route is defined and correct.  If you need to create it, use the command below.

# route -p add default 192.168.100.1
add net default: gateway 192.168.100.1
add persistent net default: gateway 192.168.100.1

Once it has been corrected it should look something like this, and you should be able to ping off-net.

# netstat -r

Routing Table: IPv4
Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              fw01.allanglesit.net UG        2      10466
solaris              solaris              UH        2         12 lo0
192.168.100.0        192.168.100.200      U         6       1810 bge0

Routing Table: IPv6
Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------- -----
solaris                     solaris                     UH      2     156 lo0

To verify the DNS configuration, check /etc/resolv.conf and then verify functionality with nslookup or dig.

# cat /etc/resolv.conf
domain  allanglesit.net
nameserver  192.168.100.22
nameserver  192.168.100.25
# dig www.google.com

Solaris additionally uses /etc/nsswitch.conf to tell the system what types of name resolution to use for different types of lookups.  When we disable NWAM (which was configuring /etc/nsswitch.conf for us) we are left with a hosts-file-only configuration, which means our system won’t attempt to use DNS on its own (nslookup and dig will work since they know to use DNS themselves, but things like Firefox, wget, Samba, etc. only look to the system for name resolution).

# cat /etc/nsswitch.conf
.
.
hosts:      files dns
ipnodes:    files dns
.
.

I trimmed the above file for brevity.
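
If your hosts and ipnodes lines still only show “files”, the quickest fix on builds that still ship the classic templates (an assumption; later Solaris 11 builds move name service configuration under SMF) is to copy the DNS-enabled template into place and then re-check the file:

# cp /etc/nsswitch.dns /etc/nsswitch.conf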

At this point you should have full network connectivity without using NWAM.  Now just reboot to ensure that your settings persist.

For WAY more information…

http://download.oracle.com/docs/cd/E19963-01/pdf/821-1458.pdf

 

UPDATE
September 16, 2011

In the comments below you will notice that “Kristen” mentioned that the ipadm command has changed in newer builds of Solaris 11.  At the time she was using a newer build than I had available to me, so I could not verify her claim; however, I have now verified this change against the Solaris 11 Early Adopter release (snv_173).  So be prepared to make the following changes.

# ipadm create-if bge0
# ipadm delete-if bge0

Will now be

# ipadm create-ip bge0
# ipadm delete-ip bge0

The following were not changed:

  • ipadm enable-if
  • ipadm disable-if
  • ipadm show-if

UPDATE
February 28, 2012

Another astute user, “j.marcos” (comment below), pointed out another change in the GA version of Solaris 11.

For Solaris 11 GA, instead of disabling network/physical:nwam and enabling network/physical:default, we control NWAM by setting the active network configuration profile (NCP) to DefaultFixed.

# netadm enable -p ncp DefaultFixed

If you want to re-enable NWAM, we can set the active NCP back to Automatic.

# netadm enable -p ncp Automatic
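
To confirm which profile is active after switching either way, netadm can also list the configured network profiles and their states:

# netadm list
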
March 23rd, 2011

I have always been a big fan of Hyper-V, and was actually a Virtualization MVP two years in a row.  However, recently I haven’t been able to get past the really poor support for Linux on the Hyper-V platform.  We run a large number of hypervisors in our international sales offices (15+) and need to run both Linux and Windows on them.  This has necessitated our switch to Linux-KVM, as its Windows support is far better than Hyper-V’s Linux support.  With an upcoming trip planned to our European offices, I have been planning exactly how our migration will look; of course the first iteration of this plan consisted of me deploying replacement VMs on the KVM stack and then migrating to the new VMs individually.  Though this plan was lengthy and cumbersome, it would work.

Thus it was time to set out on a journey of laziness.  I started by investigating physical-to-virtual conversion utilities, and unfortunately KVM doesn’t seem to have a lot to offer in this area.  So I started looking at some more basic methods of accomplishing my task.  Enter kvm-img (also called qemu-img): this is a basic disk management utility that you will use to create image files, monitor their growth, snapshot them, basically anything to do with managing a disk image…  including converting from one format to another.  Wait, what formats?

raw – The simplest file format; it can be mounted locally

qcow2 – The most versatile file format, supporting layered images, encryption, compression, and snapshots, although there seem to be some performance issues

vdi – VirtualBox 1.1 compatible image format

vmdk – VMware 3 and 4 compatible image format

vpc – Virtual PC compatible image format

I left out some formats which aren’t terribly useful; for more information see the man pages.  Now let’s review: Virtual PC and Virtual Server were the precursors to Hyper-V.  However, with the advent of Hyper-V we lost the compatibility of VMs between Hyper-V and Virtual PC.  Or did we?  It was true that you could not take a Hyper-V VM with the Integration Services installed and run it on Virtual PC, though the issue was never the VHD format.  That specification remained the same.  The actual issue was that when you installed the Hyper-V Integration Services it forced the install of an APIC HAL, and since Virtual PC did not support an APIC HAL the VM would fail to boot.  Now, since KVM does support an APIC, this should not be a problem.  Assuming that we can successfully convert it.

Conversion Command from VHD to a Raw Image

# kvm-img convert -f vpc -O raw /kvm/images/disk/disk.vhd /kvm/images/disk/disk.img

Above you will see that the -f option indicates the format of the source image and the -O (capital “o”) option indicates the format of the output image, followed by the path to the source file and the path to the output file.

I was able to successfully convert a Windows Server 2003 amd64 guest as well as a Windows Server 2008 R2 Standard guest.  Once converted I was able to create a VM and boot it straight away.  On the Windows 2003 VM I needed to uninstall the Integration Components; 2008 R2 includes them in the kernel, so it was not necessary to uninstall them there.

So obviously the next thing I thought of is that if kvm-img supports vpc as a file format, perhaps KVM can boot it directly.  Well, it can.  So if you are so inclined you could simply copy the file from the Hyper-V server to the KVM server and start it (and uninstall the Integration Components if applicable).  I plan on investigating the performance implications of this choice, though that is not part of this article.
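
If you go the conversion route, a minimal virt-install sketch for importing the converted image might look like the following (the VM name, memory, bridge name, and OS variant here are assumptions, and the exact options vary by virt-install version; note the IDE disk and e1000 NIC until the VirtIO drivers are installed):

# virt-install --name win2003 --ram 2048 --vcpus 2 --import \
    --disk path=/kvm/images/disk/disk.img,format=raw,bus=ide \
    --network bridge=br0,model=e1000 \
    --os-variant win2k3 --graphics vnc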

Also, please keep in mind that you will need to install the VirtIO drivers into your Windows VMs in order to take advantage of the VirtIO interfaces for storage and network.  If you do this, performance is very fast.  I will most likely document this in a future article, though it is fairly well documented on the Internet currently.

March 22nd, 2011

Windows guests on KVM hypervisors can get a very large kick in the pants when it comes to performance if you install the drivers necessary to leverage the VirtIO bus.  So first, here are the download links for the drivers.

http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-1.1.16.iso

http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/

Inside the ISO we will find four directories: Balloon, NetKVM, vioserial, and viostor.  We are only concerned with viostor and NetKVM, the storage and networking drivers.  Vioserial is a serial interface which essentially lifts the single-serial-device limitation within KVM, and Balloon is a balloon memory driver; I personally do not find it necessary due to the efficacy of Kernel Samepage Merging (KSM), which is essentially memory deduplication.
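
As an aside, if you are curious whether KSM is actually merging pages on your host, the kernel exposes counters under /sys (assuming a KSM-enabled Linux kernel; a 1 in run means KSM is active, and a non-zero pages_sharing means guest pages are being deduplicated):

# cat /sys/kernel/mm/ksm/run
# cat /sys/kernel/mm/ksm/pages_sharing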

Now, when you are going to install the guest you need to install it with IDE disks and the e1000 or rtl8139 network card, since Windows has no VirtIO drivers available at install time.  Once the install is completed, you can switch the NIC to virtio, attach a secondary disk using the VirtIO interface instead of IDE, mount the VirtIO driver ISO image as your CDROM, and then install the device drivers as the new hardware is detected.  Also, I noticed that when I installed the XP guest my video controller did not detect a driver under Windows XP Pro x86; however, if I switched the video model to cirrus it worked fine and detected the device properly.  If you are using virt-install to create the VM you can use the --video=cirrus parameter.  Once you have your devices installed properly, a quick peek at Device Manager inside the guest should reveal the VirtIO devices.
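
As a hedged sketch of that driver-detection step (the guest name winguest and the file paths are assumptions, and exact virsh options vary by libvirt version), you could attach a small throwaway disk on the VirtIO bus along with the driver ISO:

# kvm-img create -f raw /kvm/images/disk/virtio-dummy.img 1G
# virsh attach-disk winguest /kvm/images/disk/virtio-dummy.img vdb
# virsh attach-disk winguest /kvm/images/iso/virtio-win-1.1.16.iso hdc --type cdrom --mode readonly

Once the viostor driver is installed against the dummy disk, you can shut the guest down, switch the primary disk’s bus to virtio, and detach the dummy disk.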

In Device Manager you can see the devices in question: the “Red Hat VirtIO SCSI controller” and the “Red Hat VirtIO Ethernet Adapter.”

 

When I first started using KVM, I was quite disappointed with its graphical management abilities.  Basically, if you want to manage via the command line, KVM is a fantastical dream world where anything is possible and it is all “musical sunshine” and “double rainbows”; but if you wanted to manage your VMs graphically, it was a very one-sided world in which you had to use Linux.  Or so I thought…

Graphical Management of Guests

The standard utility for graphical management of guests is virt-manager, which is comparable to Hyper-V Manager in that it can manage the specifics of a given virtual machine on either a local or remote server.  You can also connect to the console regardless of guest network connectivity, and create and destroy guests.  This is not enterprise management; for that you would need to look at ConVirt or RHEV.  Now, as you will see, there is no Windows version of virt-manager.  So how can we use Windows to graphically manage KVM?  X11 forwarding through SSH.  This requires two bits of software to be installed on the Windows machine: (1) an X Window server, I used Xming which is freely available, and (2) an SSH client; Xming will install PuTTY for you, and if you are using a different X Window server you may or may not need to install PuTTY manually.

When installing Xming, simply accept all defaults.  Then, to initiate the connection, launch Xming, which will run in the background.  Next, set up your PuTTY connection and, under the SSH options, enable X11 forwarding.  Then initiate your connection to the server and launch virt-manager.

On the server side, the only configuration that needs to take place is the installation of the utility you wish to use (virt-manager, virt-viewer, etc.).
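
As a rough sketch (apt-get is a Debian/Ubuntu assumption; use yum on RHEL/Fedora), the server side amounts to installing the tools and confirming that sshd permits X11 forwarding:

# apt-get install virt-manager virt-viewer
# grep -i x11forwarding /etc/ssh/sshd_config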

Figure 1 – Showing the X Window server (Xming) running in the taskbar.

Figure 2 – Showing the X11 forwarding configuration for PuTTY connections.

Figure 3 – Showing the PuTTY session from which we launch the virt-manager utility, which is running in the background.

Figure 4 – Showing the Console view of a VM via virt-manager running on Windows 7 via X11 forwarding.

Figure 5 – Showing the Details view of a VM via virt-manager running on Windows 7 via X11 forwarding.

One small caveat that I have noticed with this configuration: when redirecting X, virt-manager does not close all of the processes it is using when you close it, so when you go to exit the PuTTY session it hangs.  If you close your X Window server on Windows (in my case Xming) it will close your PuTTY session as well (as long as you have already exited from the session).  If you are redirecting X on Linux, a Ctrl+C will kill the process.

March 16th, 2011

Disk images have long been the traditional form of storage for virtualized environments; they are essentially containers in the form of a file on the host’s file system.  These files can be either fully allocated or sparsely allocated at the time of disk creation.  A fully allocated 20GB disk image will take up 20GB of storage on the host file system, while a sparsely allocated 20GB disk image will only take up as much storage on the host file system as has actually been written inside the disk, so a newly created sparse file will be just a few KB in size.

Disk images can be very flexible, especially if they have been sparsely allocated.  This allows you to move VM files between hosts as needed; if they have not been sparsely allocated, it could potentially take too long to do any sort of move from one host to another.  If you are using shared storage, this can reduce some of the issues relating to large disk images.

When I use disk images I prefer to create them via the virt-install script; however, if you’d like to create them manually you can use kvm-img (also known as qemu-img).

View Disk Image Information

# kvm-img info /kvm/images/disk/disk.img

Create a Raw Image (Sparse)

# kvm-img create -f raw /kvm/images/disk/disk.img 20G

Create a QCow2 Image (Sparse)

# kvm-img create -f qcow2 /kvm/images/disk/disk.img 20G

Create Snapshots (qcow2)

# kvm-img snapshot -c snapshot01 /kvm/images/disk/disk.img

List Snapshots (qcow2)

# kvm-img snapshot -l /kvm/images/disk/disk.img

Apply Snapshots (qcow2)

# kvm-img snapshot -a snapshot01 /kvm/images/disk/disk.img

Delete Snapshots (qcow2)

# kvm-img snapshot -d snapshot01 /kvm/images/disk/disk.img

Create Layered Images (qcow2)

# kvm-img create -f qcow2 /kvm/images/disk/base.img 20G
# kvm-img create -f qcow2 -o backing_file=/kvm/images/disk/base.img /kvm/images/disk/disk.img
# kvm-img info /kvm/images/disk/disk.img
image: disk.img
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 136K
cluster_size: 65536
backing file: /kvm/images/disk/base.img (actual path: /kvm/images/disk/base.img)

Commit Image Changes into Backing Image (qcow2)

# kvm-img commit -f qcow2 /kvm/images/disk/disk.img

One of the most important things to keep in mind is that when you are moving sparse images you need to move them in such a way that you honor the “holes” in the file.  This ensures that after the copy you still have a sparse file (a quick way to check is shown after the copy examples below).

Copy Sparse File with cp

# cp --sparse=always /kvm/images/disk/sparse.img /kvm/images/disk/newsparse.img

Copy Sparse File with rsync (Locally)

# rsync -S /kvm/images/disk/sparse.img /kvm/images/disk/newsparse.img

Copy Sparse File with rsync (Remotely)

# rsync -S /kvm/images/disk/sparse.img root@remotehost:/kvm/images/disk/newsparse.img
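
To confirm that the destination file is still sparse after the copy, compare its apparent size with the space actually allocated on disk; du should report far less than ls for a mostly empty image:

# ls -lh /kvm/images/disk/newsparse.img
# du -h /kvm/images/disk/newsparse.img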


In addition to disk images there is a different (read: better) way of managing guest storage.  LVM logical volumes are incredibly flexible: they can be moved from one disk to another, and they can be expanded (up to the limitations of the hardware).  The one inflexible aspect of logical volumes through LVM is that there is no way to provide sparse storage.  This is my preferred method, as administration is fairly consistent between a physical and a virtual machine.  In another article, Linux LVM2: Flexible Local Storage Management, I have detailed how you would go about expanding an LVM logical volume as well as how to perform a snapshot.  In my environment all of my production VMs run off of LVs; however, I keep my base images in the form of base image files which can be written onto an LV when it is time to put them into production (a sketch of that step follows below).  See my article Linux-KVM: Converting Raw Disk Images to LVM Logical Volumes for how you can implement this as well.
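
As a minimal sketch of that base-image-to-LV step (the volume group name vg_kvm, the logical volume name, and the 20GB size are assumptions matching the examples above), you would create a logical volume and write the raw base image onto it:

# lvcreate -L 20G -n vm01_disk0 vg_kvm
# dd if=/kvm/images/disk/base.img of=/dev/vg_kvm/vm01_disk0 bs=1M

The linked conversion article covers this workflow in more detail.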

 
