Enterprise Storage is currently broken.  IT departments big and small spend far too much money on ineffective storage solutions which only partially fulfill the company's storage needs, all the while having to justify massive expenditures to the business.  Essentially IT departments everywhere have become storage sales reps: sure, we work with vendors who provide us with quotes, but when the rubber meets the road it is the IT department which is selling the storage to the business.  If you have ever been in this position, as I have, and literally felt sick to your stomach because you knew it was all wrong, then this article is for you.  This article does not provide an Enterprise Storage cure-all.  What it does provide is a look at a solution which genuinely revolutionises the entire Enterprise Storage model.  More importantly it gives you the flexibility, features, and densities needed to make it far superior to any modern day Enterprise Storage solution.  Also it is pretty darn cheap.

If you have read any of my articles you will know that I generally write very specific articles which address the needs of a particular niche (read: my needs, which surprisingly enough coincide with many others' needs as well).  That is not the case with this series.  These articles are really a plea to get you to think about why storage is purchased and provisioned the way it currently is.  Here is how I plan on breaking the series out.

  • Part One – Basic Overview of ZFS and its features.
  • Part Two – Big differences between Solaris and Linux, and the basics of what you need to know.
  • Part Three – Setting up your first test ZFS machine

Now once we have gone through these I will also do a series of articles on how to perform specific ZFS tasks, such as enabling compression, de-duplication, and encryption.  But this three-part series is simply to get you to acknowledge the problem which none of us want to acknowledge.

Also, to be fair, I am not a fan of Solaris, and I am not a fan of Oracle either.  However, I am a fan of ZFS.  If ZFS were available in Linux or Windows this article would be about how to use ZFS there; however, it is what it is, and regardless of the shortcomings (usability) of Solaris, getting ZFS is worth the inconvenience.

What is ZFS?

ZFS, which originally stood for Zettabyte File System but has now evolved into more of a standalone trademark, is at a very basic level a file system and a volume manager.  However, to be honest, it really cannot be defined that simply.  ZFS is your disk storage, end-to-end.  With ZFS you can take advantage of very advanced features: copy-on-write, snapshots, clones, de-duplication, encryption, caching, and end-to-end checksums.

Data Integrity

ZFS solves silent data corruption.  Enough said?  Probably not.  Silent data corruption has been a largely ignored problem by the IT community as a whole.  Basically, with small and slow disks it statistically takes longer than the realistic life of the drive to write the amount of data at which silent data corruption is expected to occur.  Drives today are neither small nor slow, and RAID setups can make them very, very fast, so the amount of data written can reach that threshold in a relatively short amount of time.  ZFS can detect silent data corruption and, with the redundancy built into your pool, can self-heal the file system from the redundant disks.  ZFS does this by calculating checksums end-to-end.  This was the premise that ZFS was built on, and at this point indications are that they were successful.  Ultimately the best resource I have found to speak to this feature has been here.
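To make end-to-end checksumming concrete, here is a small shell sketch using ordinary files and sha256sum (not ZFS itself): record a checksum at write time, silently flip one byte, and verification catches it, which is essentially what ZFS does per block on every read.

```shell
# Write a "block" of data and record its checksum at write time.
printf 'important data' > /tmp/block.dat
sha256sum /tmp/block.dat > /tmp/block.sum

# Silently corrupt one byte in place, as a failing disk or
# controller might (file size unchanged, no error reported).
printf 'X' | dd of=/tmp/block.dat bs=1 seek=3 count=1 conv=notrunc 2>/dev/null

# Verification against the stored checksum makes the corruption
# visible; ZFS would then repair the block from pool redundancy.
sha256sum -c /tmp/block.sum || echo "corruption detected"
```

Without the stored checksum, nothing in that sequence ever reports an error; that is the "silent" part.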

Integrated Volume Management

Part of ZFS is a pooled storage model which makes managing multiple disks easy.  These pools can be created on files, partitions, or whole-disk devices.  Of course, ZFS prefers exclusive access to the disk hardware, so whole-disk devices are the best way.  Pools can be created with the required level of redundancy (mirror; single-parity raid, like RAID 5; double-parity raid, like RAID 6; or triple-parity raid) and can include hot spares, as well as solid-state disks for the ZIL (write log) or the L2ARC (read cache).
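As a sketch of what pooled storage looks like in practice (the c0tXd0 device names are placeholders; use whatever your system actually reports), a double-parity pool with a hot spare could be created like this:

```shell
# Create a pool named "tank" from four whole disks in a double-parity
# (raidz2) vdev, and attach a hot spare.  Device names are examples.
zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 spare c0t5d0

# Confirm the layout and health of the new pool.
zpool status tank
```

One command gives you the "RAID controller", the volume, and the file system, which is the point of the integrated model.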

Fantastic Performance

ZFS achieves great performance using a copy-on-write transactional model.  Another “fact” that ZFS has overturned is that software RAID 5 is slow.  In reality ZFS can be, and in most cases is, faster than traditional hardware RAID 5.  The biggest reason for this is variable stripe width, which makes every write a full-stripe write.  With RAID 5, if a change needs to be made then whole new stripes need to be written and the parity recalculated and written to the parity disk.  With RAIDZ1 (the ZFS equivalent of RAID 5), when a change is made only the change is written to disk, and the size of the change determines the width of the stripe, which reduces the amount of data that needs to be written.  The fixed stripe width in RAID 5 is also the culprit behind the RAID 5 write hole.

In addition to the variable stripe width that ZFS uses, there are a number of things you can use to speed up disk access within the system.  Every ZFS file system has a ZFS Intent Log, or ZIL, which by default lives on the same pool as the file system (read: same speed); however, you can add a faster disk (read: solid-state) so that synchronous writes are committed to stable storage as fast as possible and more can be accepted, with those writes then written to the file system as I/O permits.  The most common sources of synchronous writes are iSCSI and NFS.  On the read side, ZFS will also cache your most commonly read blocks in the Adaptive Replacement Cache (ARC), which uses essentially all of the free memory in the system, so you can get really fast reads by having extra memory.  The next level of this is the Level 2 ARC (L2ARC), which is basically a fast disk (read: solid-state) that can hold even more cached data than memory.  So for a relatively small amount of money you could have a machine with 64GB of RAM and 200GB of SSD L2ARC and end up with roughly 250GB of read cache, which will speed up your read performance from the zpool.  You can additionally use the ZIL and the L2ARC in combination; you don't have to pick one or the other (though they need to be separate devices, or at a minimum separate slices).
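Both cache tiers can be attached to an existing pool after the fact.  A sketch, assuming a pool named tank and two spare SSDs (the device names are placeholders):

```shell
# Add a solid-state device as a separate intent log (ZIL) to absorb
# synchronous writes...
zpool add tank log c0t6d0

# ...and another as L2ARC to extend the read cache beyond RAM.
zpool add tank cache c0t7d0

# The log and cache vdevs now show up in the pool layout.
zpool status tank
```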

ZFS is Scalable Beyond Any Other File System

ZFS is a 128-bit file system, so it can address far more than other file systems.

  • Any zpool can be 256 Zettabytes.
  • Each zpool can have 2^64 physical disks in it.
  • Each system can have 2^64 zpools in it.
  • Each zpool can have 2^64 file systems.
  • A single file can be 16 Exabytes.

As you can clearly see this is far more scalable than current-generation file systems.  Ultimately these are theoretical limits and may never be reached, but it was only a couple of years ago that 40GB hard drives were the standard on a home PC.  Now, with 3TB drives on the market, who knows how long it will be before that is the norm, and then how long before it too seems unreasonably small (as most of us now look at a 40GB drive with the same disdain).

Deduplication

De-duplication has been a buzzword around the storage community for quite a few years, and frankly, in most cases it is not a reason to buy storage.  Additionally, any sort of storage which has included de-duplication has been prohibitively expensive.  ZFS includes it.  Now, it is expensive in one way: it requires resources.  If you are planning on using de-duplication you will want a large ARC and L2ARC (if applicable), because de-duplicated data will perform very poorly if your de-duplication tables are stored outside of the ARC or L2ARC.  That said, make sure you budget for as much RAM as you can afford and plan on having some SSDs to augment the performance should your data grow.  Bottom line: de-duplication is awesome, if properly implemented.
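Turning de-duplication on is a one-liner, and it is a per-dataset property.  A sketch, assuming a pool named tank with a file system tank/data:

```shell
# Enable de-duplication for one file system only.
zfs set dedup=on tank/data

# The pool-wide dedup ratio shows how much space you are saving.
zpool get dedupratio tank
```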

Encryption

ZFS will also allow you to encrypt the blocks that your data lives on.  Frankly, I don't understand enough about ZFS encryption to explain why you should use it; however, if you have a need for encryption you know it, and if you can get everything that I have described above plus encryption, then it ought to be worth a little investigation.
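For reference, enabling it is simple: encryption has to be chosen when the dataset is created, and by default you are prompted for a passphrase.  A sketch assuming a pool named tank (Solaris 11 Express syntax):

```shell
# Create an encrypted file system; you will be prompted for a
# passphrase unless another keysource is configured.
zfs create -o encryption=on tank/secure

# Verify that encryption is active on the new dataset.
zfs get encryption tank/secure
```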

Snapshots

Really, snapshots are snapshots, and compared to most enterprise snapshot technologies I don't see a large difference.  The key thing to notice is that if you have a snapshot solution which is not copy-on-write, then ZFS snapshots are far superior (an example of a non-copy-on-write solution would be the utility rsnapshot).  The other cool feature with regards to snapshots is that you can destroy snapshots which have other snapshots dependent on them, and the referenced data in that snapshot simply gets merged into the later snapshot (since the later snapshot requires that data to be present).  So, for example, if you had a few snapshots (A, B, and C) and you deleted B, then you really aren't deleting it; you are merging it with C, since C is dependent on B.
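That A/B/C example looks like this on the command line (a sketch, assuming a file system tank/data):

```shell
# Take three snapshots over time.
zfs snapshot tank/data@A
zfs snapshot tank/data@B
zfs snapshot tank/data@C

# Destroy the middle snapshot; any blocks that C still references
# survive, so C remains fully intact.
zfs destroy tank/data@B

# Only @A and @C remain.
zfs list -t snapshot -r tank/data
```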

I could go on and on about why ZFS is great, but ultimately you don’t care.  By now you are already identifying a machine to play with…

Stay tuned for the next article, which will discuss the major differences and pain points of Solaris that you need to know before you start to play with it, to prevent frustration and ultimately failure.

March 30th, 2011

Configure Kerberos

Make a backup of our original, just in case.

# cp /etc/krb5/krb5.conf /etc/krb5/krb5.original.conf

These sections already exist in the file so you will want to replace them with the correct values for your environment.

# vi /etc/krb5/krb5.conf

[libdefaults]
default_realm = ALLANGLESIT.COM

[realms]
allanglesit.com = {
kdc = dc.allanglesit.com
admin_server = dc.allanglesit.com
kpasswd_server = dc.allanglesit.com
kpasswd_protocol = SET_CHANGE
}

[domain_realm]
.allanglesit.com = ALLANGLESIT.COM

Quickly recycle the services, or start them if they aren't running.

# svcadm disable smb/server; svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.

Check Name Resolution Configuration

Your domain and name servers should be configured here.

# cat /etc/resolv.conf
domain  allanglesit.com
nameserver  192.168.100.51
nameserver  192.168.100.52

You need to ensure that dns appears on both of these lines; if it does not, Solaris will not even try DNS for name resolution.

# cat /etc/nsswitch.conf
.
.
hosts:      files dns
ipnodes:    files dns
.
.

Ensure Proper Time Configuration

You will need to make sure that you have consistent time across the domain for authentication to work.  In this case you can run ntpdate against your NTP server, which should be either your domain controller or an NTP source that your domain controller syncs to.

# ntpdate timeserver.allanglesit.com
24 Mar 11:12:52 ntpdate[1444]: adjust time server 192.168.100.2 offset -0.000204 sec

Join the Domain

# smbadm join -u administrator allanglesit.com
After joining allanglesit.com the smb service will be restarted automatically.
Would you like to continue? [no]: yes
Enter domain password:
Joining allanglesit.com ... this may take a minute ...
failed to join allanglesit.com: UNSUCCESSFUL
Please refer to the system log for more information.

I had problems joining the domain at first; I ended up commenting out the following line in /etc/pam.conf (which I had put in to get samba working as part of a workgroup with the local accounts).

# cat /etc/pam.conf
other password required pam_smb_passwd.so.1 nowarn
# smbadm join -u administrator allanglesit.com
After joining allanglesit.com the smb service will be restarted automatically.
Would you like to continue? [no]: yes
Enter domain password:
Joining allanglesit.com ... this may take a minute ...
Successfully joined allanglesit.com

So you should now have a samba system successfully joined to Active Directory.  We will go into much more detail on what needs to be done to make this a practical file-sharing platform in an AD environment.

In my previous post we went over the basics of configuring a network interface in Solaris 11, without using the Network Auto Magic (NWAM) which is enabled by default.  In this article we will go over some of the more advanced features which can be leveraged including VLANs, aggregation groups, and jumbo frames.

Configure VLAN Tagging

Create a VLAN: specify the VLAN id with “-v”, then the underlying interface with “-l”, and finally a name for the tagged interface (in this case user0, since this particular system is on the user VLAN; if you have more descriptive names for your VLANs you can use them here).

# dladm create-vlan -v 20 -l bge0 user0
# dladm show-vlan
LINK            VID      OVER         FLAGS
user0           20       bge0         -----
# ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---
bge0       ok       bm--------46 -46
user0      ok       bm--------46 -46

For simplicity I will delete the IP configuration on the untagged interface.

# ipadm delete-addr bge0/v4

As we did in our previous article, you can now create an IP configuration on top of the new tagged interface (which in this case is DHCP).

# ipadm create-addr -T dhcp user0/v4

Keep in mind that if you change the VLAN you will most likely need to change your default route as well.  In the options “-fp” below, f flushes (deletes) all current routes, while p makes the new settings persistent.  In this case, afterwards we will only have 192.168.100.1 as a default gateway.

# route -fp add default 192.168.100.1

Delete a VLAN

If you have used the VLAN before you will need to “unwind” the configuration before deleting the VLAN.

# ipadm delete-addr user0/v4
# ipadm delete-if user0

Now that this is done you can delete the VLAN.

# dladm delete-vlan user0

Create an Aggregation Group

With this command we create a new aggregation group and assign interfaces to it.  If an interface is already in use then you will need to delete the interface before adding it to an aggregation group.

# dladm create-aggr -l bnx0 -l bnx1 aggr0

Now we can view the details of our aggregation group.

# dladm show-aggr
LINK            POLICY   ADDRPOLICY           LACPACTIVITY  LACPTIMER   FLAGS
aggr0           L4       auto                 off           short       -----
# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
bnx0        phys      1500   up       --         --
bnx1        phys      1500   up       --         --
aggr0       aggr      1500   up       --         bnx0, bnx1

Now we add an IP configuration against the aggregated interface.

# ipadm create-addr -T static -a 192.168.100.172/24 aggr0/v4

Modify an Aggregation Group

If you need to add or remove an interface from an aggregation group then the following commands will allow you to do that.

# dladm add-aggr -l bnx2 aggr0
# dladm remove-aggr -l bnx1 aggr0

You can also adjust the LACP policy using the command below, where the policy (here L4) is L2, L3, L4, or any combination of them, based on the desired behavior.

# dladm modify-aggr -P L4 aggr0

The LACP mode can be configured using the command below, where the mode is auto, active, or passive.  Additionally, if configuring active mode you must also configure a timer value of short or long; this option is not needed for auto or passive.

# dladm modify-aggr -L active -T short aggr0

Delete Aggregation Group

Delete the IP configuration from the aggregation group

# ipadm delete-addr aggr0/v4

Delete the Aggregated Interface

# ipadm delete-if aggr0

Delete the Aggregation Group

# dladm delete-aggr aggr0

Enable Jumbo Frames

Basically, Jumbo Frames reduce network overhead by combining more data into a single Ethernet frame.  This is analogous to renting a box truck when you move into a new house: if you had to use your Prius to move, you would spend much more time waiting to finish the process, as well as expending more resources.  Now, Jumbo Frames won't always help.  If we step back to our analogy, if all of your stuff amounts to a single suitcase then renting a moving truck doesn't do anything to make your trip more efficient.  So if you are not sending large amounts of data then Jumbo Frames will not help you; however, if you are working on a storage network, and even with some file sharing, you will get a bonus.  Also, in order for Jumbo Frames to work, both sides of the communication, as well as all devices along the way, must support the higher MTU, or it will not be used.

To display the current mtu of an interface

# dladm show-linkprop -p mtu bge0
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
bge0         mtu             rw   1500           1500           1500 

To set the mtu to enable jumbo frames

# dladm set-linkprop -p mtu=9000 bge0

To set the mtu to not use jumbo frames

# dladm set-linkprop -p mtu=1500 bge0

 

UPDATE
September 16, 2011

In the comments of my article “Solaris 11: Network Configuration Basics” you will notice “Kristen” mentioned that the ipadm command has changed in newer builds of Solaris 11.  At the time she was using a newer build than I had available to me, so I could not verify her claim, however now I have verified this change against the Solaris 11 Early Adopter release snv_173.  So be prepared to make the following changes.

# ipadm create-if bge0
# ipadm delete-if bge0

Will now be

# ipadm create-ip bge0
# ipadm delete-ip bge0

The following were not changed:

  • ipadm enable-if
  • ipadm disable-if
  • ipadm show-if
March 28th, 2011

I have been doing research for my next big series “So You Want to Learn ZFS.”  This series is basically going to be a multi-part series of How-To’s which hopefully will give you the ability to build a file server (or even a SAN) based on ZFS if you so choose.  However there are a few things that I failed to take into account.

  1. Solaris 11 is so different from Solaris 10.
  2. Solaris 11 is so different from EVERYTHING else.

I figured that I’d be able to kind of gloss over the high points of how to get your system up and running and just dive right into the fun ZFS stuff.  So before we get into the good stuff there are some basics that we will need to go over first.  Today we will cover basic networking.

First off with the acquisition of Sun by Oracle last year the documentation is kind of scattered.  The most important place to know of is here.  I am sure Oracle will get this under control eventually.

Alright, so what makes network configuration so difficult with Solaris 11?  Some things are much easier than they should be, while others are just ridiculously difficult.  I personally attribute this to a tendency towards over-engineering on the part of Sun's engineers: everything is done in the most correct way.  Now, this is not to say that Solaris is better than everything else, or that Sun hardware was better than anything else.  My basic point is that the most correct way is not always the best way, and I think that Sun's over-engineering hurt them in the long run (which ultimately is why Oracle bought them and not the other way around).  However, there is one area where I think the over-engineering paid off and the most correct way was actually the best way: ZFS.  But I digress; that will be for a later article.

If you install Solaris 11 Express then by default a service called Network Auto Magic (NWAM) is enabled, which simplifies the process significantly; however, if you look to do more advanced tasks such as aggregation then this won't work for you.  NWAM is really very much the same as NetworkManager: it can provide location-based networking profiles and manage multiple types of interfaces (wireless and wired) seamlessly, although it may not be the best for a server configuration.

Disable Network Auto Magic

# svcadm disable network/physical:nwam
# svcadm enable network/physical:default

Once we have disabled NWAM we will lose all network connectivity and configurations.

View the Datalink Devices

Solaris 11 devices have many layers to their configuration, which makes advanced configurations much simpler but does complicate basic configurations.  Basically, the kernel is aware of the physical hardware, and we can see this with the first command.

# dladm show-phys
LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
bge0         Ethernet             unknown    1000   full      bge0

The second command gives us the ability to see the physical interface linked to a logical interface.  After disabling NWAM you will NOT have a logical interface linked to your physical device (in my case bge0), and because of this you will see that the state of the data-link device is “unknown”.  It is also important to note that the device names are based on the vendor (bge = Broadcom) and are incremented based on the number of devices in the machine.

# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
bge0        phys      1500   unknown  --         --

Also, before we move on, we will take a look at our existing logical interfaces; the only one you should have after disabling NWAM is lo0, which is your loopback interface.

# ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---

Create and Configure a Logical Interface

So the first step is creating a logical interface; then we can apply an IP configuration against it.  This will create a link from the logical interface to the physical interface, and will change the state from the “unknown” we saw before to “up”.

# ipadm create-if bge0
# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
bge0        phys      1500   up       --         --
# ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---
bge0       down     bm--------46 -46

Above we have successfully created the logical interface, and we can now apply an IP configuration to it.  This is where it gets a bit tricky.  Notice below that we are going to apply DHCP as the configuration; we will end up deleting this configuration and making it static, so you also get the opportunity to learn how to change a configuration (which is really a delete and an add).  We will go through the specifics of the ipadm create-addr command after we go over the static command as well, since the two are very similar.

# ipadm create-addr -T dhcp bge0/v4
# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
bge0/v4           dhcp     ok           192.168.100.225/24
lo0/v6            static   ok           ::1/128

Now to delete the DHCP configuration from the logical interface so that we can make it static.

# ipadm delete-addr bge0/v4

And to create a static IP configuration on the logical interface.

# ipadm create-addr -T static -a 192.168.100.200/24 bge0/v4
# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
bge0/v4           static   ok           192.168.100.200/24
lo0/v6            static   ok           ::1/128

Alright so as we can see these are the two commands to create the configurations.

# ipadm create-addr -T dhcp bge0/v4
# ipadm create-addr -T static -a 192.168.100.200/24 bge0/v4

Now, the -T option defines the type of configuration; static and dhcp are the most common options.  The -a option is for the address in a static configuration.  You will also notice that we are not using the logical interface name (bge0), but instead a variation (bge0/v4).  This represents the version of the IP protocol the configuration is using, so you can have both a bge0/v6 and a bge0/v4.

Alright, so you have successfully configured your network interfaces; however, NWAM was doing more than just this, so you might not have full network connectivity yet.

Verify Full Network Configuration and Connectivity

Using some of the above commands we can review our configurations.

# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
bge0/v4           static   ok           192.168.100.200/24
lo0/v6            static   ok           ::1/128

Additionally we need to verify name resolution and routing in order to be confident in our configuration.

# netstat -r

Routing Table: IPv4
Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
solaris              solaris              UH        2          0 lo0
192.168.100.0        192.168.100.200      U         3          1 bge0

Routing Table: IPv6
Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------- -----
solaris                     solaris                     UH      2       4 lo0

The above will display the current routing table (which does not have a default route); ensure your default route is defined and correct.  If you need to create it, use the command below.

# route -p add default 192.168.100.1
add net default: gateway 192.168.100.1
add persistent net default: gateway 192.168.100.1

Once it has been corrected it should look something like this, and you should be able to ping off-net.

# netstat -r

Routing Table: IPv4
Destination           Gateway           Flags  Ref     Use     Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              fw01.allanglesit.net UG        2      10466
solaris              solaris              UH        2         12 lo0
192.168.100.0        192.168.100.200      U         6       1810 bge0

Routing Table: IPv6
Destination/Mask            Gateway                   Flags Ref   Use    If
--------------------------- --------------------------- ----- --- ------- -----
solaris                     solaris                     UH      2     156 lo0

To verify DNS configuration check the /etc/resolv.conf and then verify the functionality with nslookup or dig.

# cat /etc/resolv.conf
domain  allanglesit.net
nameserver  192.168.100.22
nameserver  192.168.100.25
# dig www.google.com

Solaris additionally uses /etc/nsswitch.conf to tell the system what types of name resolution to use for different types of lookups.  After disabling NWAM (which was configuring /etc/nsswitch.conf for us) we are left with a hosts-file-only configuration, which means our system won't attempt to use DNS on its own (nslookup and dig will work, since they know to use DNS themselves, but things like Firefox, wget, samba, etc. only look to the system for name resolution).  Ensure that the hosts and ipnodes lines include dns, as below.

# cat /etc/nsswitch.conf
.
.
hosts:      files dns
ipnodes:    files dns
.
.

I trimmed the above file for brevity.

At this point you should have full network connectivity without using NWAM.  Now just reboot to ensure that your settings persist.

For WAY more information…

http://download.oracle.com/docs/cd/E19963-01/pdf/821-1458.pdf

 


UPDATE
February 28, 2012

Another astute user, “j.marcos” (comment below), pointed out another change in the GA version of Solaris 11.

For Solaris 11 GA, instead of disabling network/physical:nwam and enabling network/physical:default, we control NWAM by setting the ncp mode to DefaultFixed.

# netadm enable -p ncp DefaultFixed

If you wanted to re-enable NWAM then we can set the ncp mode back to Automatic

# netadm enable -p ncp Automatic
March 23rd, 2011

I have always been a big fan of Hyper-V, and was actually a Microsoft Virtualization MVP two years in a row.  However, recently I haven't been able to get past the really poor support for Linux on the Hyper-V platform.  We run a large number of hypervisors in our international sales offices (15+) and have a need to run both Linux and Windows on them.  This has necessitated our switch to Linux-KVM, as its Windows support is far better than Hyper-V's Linux support.  As such, with an upcoming trip planned to our European offices, I have been planning exactly how our migration will look.  Of course, the first iteration of this plan consisted of me deploying replacement VMs on the KVM stack and then migrating to the new VMs individually.  Though this plan was lengthy and cumbersome, it would work.

Thus it was time to set out on a journey of laziness.  I started by investigating physical-to-virtual conversion utilities, and unfortunately KVM doesn't seem to have a lot of value in this area.  So I started looking at some more basic methods of accomplishing my task.  Enter kvm-img (also called qemu-img): a basic disk management utility that you will use to create image files, monitor their growth, snapshot them, basically anything to do with managing a disk image…  including converting from one format to another.  Wait, what formats?

raw – The most simple file format and can be mounted locally

qcow2 – The most versatile file format supporting layered images, encryption, compression, and snapshots although there seems to be some performance issues

vdi – Virtualbox 1.1 compatible image format

vmdk – VMWare 3 and 4 compatible image format

vpc – VirtualPC compatible image format

I left out some formats which aren't terribly useful; for more information see the man pages.  Now let's review: Virtual PC and Virtual Server were the precursors to Hyper-V.  However, with the advent of Hyper-V we lost the compatibility of VMs between Hyper-V and Virtual PC.  Or did we?  It was true that you could not take a Hyper-V VM with the Integration Services installed and run it on Virtual PC, though the issue was never the VHD format.  That specification remained the same.  The actual issue was that when you installed the Hyper-V Integration Services it forced the install of an APIC HAL, and since Virtual PC did not support an APIC HAL the VM would fail to boot.  Now, since KVM does support an APIC, this should not be a problem, assuming that we can successfully convert the image.

Conversion Command from VHD to a Raw Image

# kvm-img convert -f vpc -O raw /kvm/images/disk/disk.vhd /kvm/images/disk/disk.img

Above you will see the -f option, which indicates the format of the source image, and the -O (capital “o”), which indicates the format of the output image, followed by the path to the source file and the path to the output file.

I was able to successfully convert a Windows Server 2003 amd64 system as well as a Windows 2008 R2 Standard system.  Once converted I was able to create a VM and boot it straight away.  On the Windows 2003 VM I needed to uninstall the Integration Components; 2008 R2 includes them in the kernel, so it was not necessary to uninstall them there.

So obviously the next thing I thought of is that if kvm-img supports vpc as a file format, perhaps KVM can boot it directly.  Well, it can.  So if you are so inclined you could simply copy the file from the Hyper-V server to the KVM server and start it (uninstalling the Integration Components if applicable).  I plan on investigating the performance implications of this choice, though that is not part of this article.
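Booting the untouched VHD might look like this (a sketch only: the memory size, networking flags, and even the kvm binary name vary by distribution, and qemu autodetects the vpc container format from the file):

```shell
# Copy the VHD from the Hyper-V host, then boot it directly; no
# conversion step required.  Paths and sizes are examples only.
kvm -m 2048 -drive file=/kvm/images/disk/disk.vhd -net nic -net user
```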

Also, please keep in mind that you will need to install the VirtIO drivers into your Windows VMs in order to take advantage of the VirtIO interfaces for storage and network.  If you do this, performance is very fast.  I will most likely document this in a future article, though it is fairly well documented on the Internet currently.
