August 10th, 2011

Every once in a while you will need to make multiple ssh calls to remote systems within a bash script.  Normally you would simply have the user enter their password multiple times, or require SSH keys to be configured prior to running the script.  This is widely accepted as the way to do it, however if that were the best way then I would not be writing this article.  The better way is to create ssh keys, upload them as authorized keys to the remote host(s), and then clean up the changes we made once the work is complete.

tempkey="1"
uniqueoption="option"
remotehost="machine.allanglesit.net"
user="root"

if [ "$tempkey" = "1" ]
then
key="/root/.ssh/tempsshkey_id_${uniqueoption}"
publickey="/root/.ssh/tempsshkey_id_${uniqueoption}.pub"
authkeys="/root/.ssh/authorized_keys"
# generate a passphrase-less keypair and lock down its permissions
ssh-keygen -t rsa -q -f ${key} -N ''
chmod 600 ${key}
chmod 600 ${publickey}
# turn the public key into a sed-safe pattern for the cleanup below
pubkeystring=`cat ${publickey} | sed -e 's/\//.*/g'`
ssh-copy-id -i ${publickey} root@${remotehost} > /dev/null 2>&1
sshstring="ssh -i ${key} ${user}@${remotehost}"
else
sshstring="ssh ${user}@${remotehost}"
fi

${sshstring} "uptime"

if [ "$tempkey" = "1" ]
then
echo "Cleaning up temporary ssh keys..."
# sed -i edits in place; redirecting cat back onto the same file
# would truncate authorized_keys before it was read
${sshstring} "sed -i -e '/$pubkeystring/d' ${authkeys}"
rm ${key} ${publickey}
fi

The above code requires the tempkey, uniqueoption, remotehost, and user variables to be set.  In my scripts I set these as parameters, and you will probably want to as well.  The uniqueoption variable should really be renamed to something that makes sense to you; it exists so that if you are running your script multiple times against the same machine, you can differentiate which key belongs to which process.  For example, in my vmmigrate.sh script this variable was vmname, which meant that I had an ssh key for every VM being migrated, so moving more than one VM at a time worked without destroying another key.
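One way to take those four values as parameters is a small getopts wrapper.  This is only a sketch; the flag letters -t/-o/-r/-u are my own invention, not anything the snippet above mandates:

```shell
#!/bin/bash
# Hypothetical argument parsing for the snippet above; flag letters
# are arbitrary.  -r is the only required option.
parse_args() {
local OPTION OPTIND
tempkey="" ; uniqueoption="key" ; remotehost="" ; user="root"
while getopts "to:r:u:" OPTION
do
case $OPTION in
t) tempkey="1";;            # use a temporary ssh key
o) uniqueoption=$OPTARG;;   # unique tag for the key file name
r) remotehost=$OPTARG;;     # remote host to connect to
u) user=$OPTARG;;           # remote user (defaults to root)
esac
done
[ -n "$remotehost" ] || { echo "usage: $0 [-t] [-o tag] -r host [-u user]" >&2; return 1; }
}

# example invocation:
parse_args -t -o vmname -r machine.allanglesit.net
```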

Basically what we are doing in the first part is setting up the keys locally in a unique file (this prevents us from destroying any previously defined keys), setting the permissions appropriately, and copying them out to the remote machine.  The second part is where we invoke the remote command.  Notice we have captured the ssh command in a variable; this allows the script to either use temporary ssh keys or not with a minimal amount of code.  Finally, in the third part we clean up the changes we made, by stripping only the key we added out of the authorized_keys file on the remote host and deleting the local copies of the private and public key.
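To see what the pubkeystring transformation is actually for, here is the substitution in isolation against a made-up key (the key material and the throwaway authorized_keys file below are fabricated for the demo):

```shell
# Every "/" in the public key (base64 may contain slashes) becomes
# ".*", so the result can be used inside /.../ in sed without
# colliding with the "/" delimiter.
samplekey='ssh-rsa AAAA/BBBB/CCCC root@host'
pattern=$(echo "$samplekey" | sed -e 's/\//.*/g')
echo "$pattern"     # ssh-rsa AAAA.*BBBB.*CCCC root@host

# deleting only the matching line from a throwaway authorized_keys
tmp=$(mktemp)
printf '%s\n%s\n' 'ssh-rsa XXXX other@host' "$samplekey" > "$tmp"
sed -i -e "/$pattern/d" "$tmp"
remaining=$(cat "$tmp")
echo "$remaining"   # ssh-rsa XXXX other@host
rm -f "$tmp"
```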

August 9th, 2011

One of the biggest weaknesses of dd is that it has no built-in way to display progress on its actions.  You can send a signal to the process, which will make it print statistics and continue, however this takes up a lot of your terminal screen if you are doing any sort of long-running copy.  Enter pv.  Pv allows us to monitor the progress of data through a pipe.
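For reference, the signal in question is SIGUSR1 on Linux.  A throwaway zero-to-null copy demonstrates the trick; the source and destination here are purely for illustration:

```shell
# Fire off a long copy in the background, then signal it.  On Linux,
# SIGUSR1 makes GNU dd print its running statistics to stderr and
# carry on copying.  (On BSD systems the signal is SIGINFO instead.)
dd if=/dev/zero of=/dev/null bs=1M &
ddpid=$!
sleep 1
kill -USR1 $ddpid      # dd reports "NNN+0 records in ..." and continues
sleep 1
kill $ddpid            # stop the throwaway copy
wait $ddpid 2>/dev/null || true
```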

Create A 1GB Test File

This is the file that we will use to perform the following tests.

# dd if=/dev/zero of=/root/test.file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.88164 s, 183 MB/s

Insert PV into the Pipe (Progress Bar Not Working)

Now in this example we have the basics of a progress bar.  At least we can see that it is working, but the bar will just bounce back and forth as if we were using Windows.

# dd if=/root/test.file | pv -tpreb | dd of=/root/test.file2
666MB 0:00:11 [36.5MB/s] [          <=>                                      ]

Insert PV into the Pipe (Progress Bar Fixed)

In order to get an accurate progress bar, pv needs to know how much data to expect.  The output below shows accurate progress.  You could also use pv in place of the first dd, though I prefer to use pv simply to measure rather than rely on it to move the data.

# dd if=/root/test.file | pv -tpreb -s 1024M | dd of=/root/test.file2
666MB 0:00:05 [ 171MB/s] [=====================>             ] 65% ETA 0:00:02
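Rather than hard-coding -s 1024M, you can measure the source first.  The sketch below assumes GNU stat and the blockdev utility, and uses a throwaway file in place of the real source:

```shell
# Derive pv's -s value from the source itself instead of hard-coding
# it.  A temp file stands in for the source here; for a block device
# (say, an LV) blockdev --getsize64 does the job instead of stat.
src=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=64 2>/dev/null
if [ -b "$src" ]
then
bytes=$(blockdev --getsize64 "$src")   # block devices
else
bytes=$(stat -c %s "$src")             # regular files
fi
dd if="$src" 2>/dev/null | pv -tpreb -s "$bytes" | dd of="${src}.copy" 2>/dev/null
rm -f "$src" "${src}.copy"
```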

Insert PV into the Pipe (Output is not Working)

Upon completion of the above command we will see that the output is not quite right.  Basically this is because dd is overwriting pv’s output upon completion.

# dd if=/root/test.file | pv -tpreb -s 1024M | dd of=/root/test.file2
2097152+0 records inMB/s] [=============================>     ] 88% ETA 0:00:00
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 6.9598 s, 154 MB/s
1GB 0:00:06 [ 147MB/s] [=================================>] 100%
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 6.86994 s, 156 MB/s

Insert PV into the Pipe (Output is Working)

To fix the above we simply need to dump the output of dd, so that it doesn’t overwrite the pv output.

# dd if=/root/test.file 2>/dev/null | pv -tpreb -s 1024M | dd of=/root/test.file2 2>/dev/null
1GB 0:00:06 [ 148MB/s] [=================================>] 100%

Well there you go.  Keep in mind you can also use pv with netcat, cat, or anything else which ends up sending data through a stream.


When you compare Linux-KVM to Hyper-V or VMware, your initial impression will be that Linux-KVM is lacking when it comes to management tools and basic functionality.  You would be correct, however you would also be incorrect.  You see, with Linux-KVM we can leverage the underlying power of the Linux userland, and with this frankly all things are possible.  Here is one of the basic bits of management functionality which can be attained with a little bit of bash scripting knowledge.  I started by writing a VM backup script, then a VM export script, and of course a VM import script.  Eventually I ended up with a full-blown end-to-end VM migration script.

My environment is based on Ubuntu 11.04 amd64 with the latest patches as of August 1, 2011.  This script should work elsewhere unless file locations are different (VG Name) or if certain utilities are not included by default.

Features:

  • Compression available via gzip, will reduce the amount of network traffic transferred, could be helpful for migrations over the WAN.
  • Uses temporary SSH keys to minimize password prompts, and save you the trouble of configuring SSH keys yourself.
  • User specified Volume Group on the remote machine, so that if you are using SAN attached storage you can place it on the appropriate storage.
  • Safety checks to ensure that your data remains intact.
  • Integrates a progress monitor if pv is installed.

Requirements:

  • VMs must use LVM Logical Volumes directly, raw images or any other disks that are encapsulated in a file are not supported (at this time).
  • On the receiving host the LV for the new VM cannot already exist.  This script will create the LV, and will not overwrite an LV that it did not create (this is of course to prevent it from overwriting an existing volume which could be in use elsewhere).
  • Obviously the better your network speed the better your mileage.  I am not using any sort of “magic” here if you have a 1TB LV then it will be transferred over the network in full (unless of course you use the compression option and your data lends itself well to compression).
  • As this is an Offline migration that means that the VM will be down during the entire duration of the migration.
  • Each disk is transferred separately and will not display progress unless you have pv installed (Ubuntu 11.04 amd64 doesn’t include it by default, but it is in the repos).

Known Issues:

  • When using compression the data transfer rate will vary widely.  Because we are compressing and decompressing within the stream, the compression throttles how fast we are able to feed data through the pipe: sometimes data will be fed at a very low rate and sometimes at a very fast one.  There is no real way around this; that is the nature of compression.
  • When using compression the progress bar is inaccurate, because we are measuring progress as the data enters the compression.  When the last byte of data is metered it still needs to be compressed, transferred, uncompressed, and written to disk.  So what you will see is the progress bar showing 100%, stopping its time counter, and not moving on to the next step.  In my tests this tail has never taken longer than 20% of the total time.  I think I can fix this by moving the measurement to the distant end, after the compression and decompression have taken place, but that will take a bit of work, because currently if I do that I lose the progress display through ssh.
  • Cannot copy ISO images, this is a design choice really.  There is too much complexity in this since it is quite possible that you are hosting your ISO images on NFS, or it could be shared locally.  Either way we strip the ISO image out of the configuration leaving you with a Virtual CD/DVD drive with no disk mounted.  Upon completion of migration you can remount your ISO images on the distant end.
  • Cannot handle disk image files.  Disk images are really quite trivial to handle and don’t fit well into the complexity of this script.  I will most likely add this functionality or separate it into a different script.

Data Flow Example

Figure 1 – This diagram illustrates the data movers without compression.

Figure 2 – This diagram illustrates the data movers with compression.

When I first started this script I assumed that compression was a must.  However, as I ran some test scripts I noticed that compression really did not impact the speed of the migration.  Of course, using compression will drastically reduce the amount of data transferred over the network, but when factoring in the processing time on either end for compression and decompression it just didn’t matter.  That said, that was on 1Gb networking, so as long as your moves are local it probably won’t make sense to use compression.  It might make sense if your networking is slower than 1Gb or if you are running over a WAN connection.  Additionally, if you have very large volumes with a large amount of compressible space, you could gain a benefit from the compression option.
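A quick, unscientific way to gauge whether your data would benefit is to push a sample through gzip and compare sizes.  The zero-filled sample below is a stand-in and compresses far better than real VM data will:

```shell
# Rough feel for the tradeoff: compress a sample of the data and
# compare the byte counts before committing to the -c option.
sample=$(mktemp)
dd if=/dev/zero of="$sample" bs=1M count=64 2>/dev/null
original=$(stat -c %s "$sample")
compressed=$(dd if="$sample" bs=1M 2>/dev/null | gzip -1 | wc -c)
echo "original: ${original} bytes, compressed: ${compressed} bytes"
rm -f "$sample"
```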

Please keep in mind this script has been tested thoroughly in my environments, and other logical environments that I can replicate.  However I did not test this in your environment, that responsibility will lie with you.  So please do not use this in a production environment until you (1) have tested it and (2) understand what it is doing.

Now to get to the code…

Name    : vmmigrate.sh
Version : 0.9.3
MD5     : ff842f77db7225478c3b048af43b00c7
SHA256  : 141174ea2dbfb3c8b7c2753c92f1557c34fba2fd7576582676b3fd06d4e58a7e
URL     : http://source.allanglesit.net/pub/vmmigrate.sh

#!/bin/bash
# chkconfig:
# description:
#
#: Script Name    : vmmigrate.sh
#: Version    : 0.9.3
#: Author    : Matthew Mattoon - http://blog.allanglesit.com
#: Date Created    : August 1, 2011
#: Date Updated    : February 20, 2013
#: Description    : Offline Migration script for KVM Virtual Machines
#: Examples    : vmmigrate.sh -n VMNAME -r REMOTEHOST -v REMOTEVGNAME -c COMPRESSIONLEVEL[0-9] -t -s
#:         : vmmigrate.sh -n testvm -r testhost -v testhost -c 0 -t -s

usage()
{
cat << EOF
usage: $0 options

This script will allow you to perform an offline migration of an LVM-backed KVM domain from one host to another using ssh.  To monitor progress of the data moves you must have pv installed.

OPTIONS:
-h      Show this message.
-n      Name of the KVM domain to be migrated (required).
-r      Remote KVM server name or IP address (required).
-v      Volume Group on remote server on which to create the LV for the migrated VM (required).
-c      Compression (gzip) level 0-9.  0 is no compression, 1 is fastest (lowest) compression, 9 is slowest (highest) compression.
-t      Use temporary ssh keys (recommended).  The script will create and deploy keys on the source and destination server to eliminate the need for an ssh password.  Upon completion these keys will be destroyed.
-s      Start migrated VM on completion of migration.
EOF
}

while getopts "hn:r:v:c:ts" OPTION
do
case $OPTION in
h) usage; exit 1;;
n) vmname=$OPTARG;;
r) remotehost=$OPTARG;;
v) remotevgname=$OPTARG;;
c) compressionlevel=$OPTARG;;
t) tempkey=1;;
s) startoncomplete=1;;
?) usage; exit;;
esac
done

if [[ -z "$vmname" ]] || [[ -z "$remotehost" ]] || [[ -z "$remotevgname" ]] || [[ -z "$compressionlevel" ]] || [ "$compressionlevel" -gt "9" ]
then
usage
exit 1
fi

shutoffcheck=`virsh list --all | grep "$vmname " | sed 's/shut off/shutoff/g' | tr -s '[:space:]' | cut -d " " -f 4`

if [ "$shutoffcheck" != "shutoff" ]
then
echo "${vmname} is not shut off.  This script requires that virtual machines be shut off prior to beginning the migration process."
exit
fi

disklist=`virsh dumpxml ${vmname} | sed -n "/<disk type='file' device='disk'>/,/<\/disk>/p" | sed -n '/source file/p' | cut -d "'" -f 2`

if [[ -n "$disklist" ]]
then
echo "Script has detected non-LVM disks in use.  This script cannot safely process non-LVM disks."
exit
fi

cdlist=`virsh dumpxml ${vmname} | sed -n "/<disk type='file' device='cdrom'>/,/<\/disk>/p" | sed -n '/source file/p' | cut -d "'" -f 2`

if [[ -n "$cdlist" ]]
then
echo "The safety checks have detected an ISO file attached to this virtual machine.  Migration will strip the ISO image from the configuration of the virtual machine.  If you would like to resolve this in a different way please do not proceed with the migration when given the opportunity to proceed."
fi

user="root"
basesize="1024"

if [[ -n "$tempkey" ]]
then
key="/root/.ssh/vmmigrate_id_${vmname}"
publickey="/root/.ssh/vmmigrate_id_${vmname}.pub"
authkeys="/root/.ssh/authorized_keys"
ssh-keygen -t rsa -q -f ${key} -N ''
chmod 600 ${key}
chmod 600 ${publickey}
pubkeystring=`cat ${publickey} | sed -e 's/\//.*/g'`
ssh-copy-id -i ${publickey} root@${remotehost} > /dev/null 2>&1

sshstring="ssh -i ${key} ${user}@${remotehost}"
else
sshstring="ssh ${user}@${remotehost}"
fi

xmlfile="/tmp/${vmname}.xml"

lvlist=`virsh dumpxml ${vmname} | sed -n "/<disk type='block' device='disk'>/,/<\/disk>/p" | sed -n '/source dev/p' | cut -d "'" -f 2`
localvgname=`virsh dumpxml ${vmname} | grep "_boot" | cut -d "'" -f 2 | cut -d "/" -f 3`
localvg=`vgs ${localvgname} -o vg_name,vg_extent_size,vg_free_count | grep ${localvgname}`
remotevg=`${sshstring} "vgs ${remotevgname} -o vg_name,vg_extent_size,vg_free_count" | grep ${remotevgname}`
localvgextentsize=`echo ${localvg} | tr -s '[:space:]' | cut -d " " -f 2 | cut -d "." -f 1`
remotevgextentsize=`echo ${remotevg} | tr -s '[:space:]' | cut -d " " -f 2 | cut -d "." -f 1`
remotevgfreeextents=`echo ${remotevg} | tr -s '[:space:]' | cut -d " " -f 3`
remotevgcheck=`expr $remotevgextentsize \* $remotevgfreeextents`
remotevgcheck=`expr $remotevgcheck \/ $basesize`

lvsizetotal=0
for lv in ${lvlist}
do
lvextents=`lvdisplay ${lv} -c | cut -d ":" -f 8`
lvsize=`expr $lvextents \* $localvgextentsize`
lvsizetotal=`expr $lvsizetotal + $lvsize`
done

lvsizetotal=`expr $lvsizetotal \/ $basesize`

if [ "$remotevgcheck" -lt "$lvsizetotal" ]
then
echo "The remote Volume Group does not contain the free space necessary for the creation of all Logical Volumes for this migration, script must exit."
if [[ -n "$tempkey" ]]
then
echo "Cleaning up temporary ssh keys..."
${sshstring} "sed -i -e '/$pubkeystring/d' ${authkeys}"
rm ${key} ${publickey}
fi
exit
fi

read -p "Would you like to begin the migration of ${vmname} to ${remotehost}?  (y/n)  "
if [ "$REPLY" != "y" ]
then
echo "User chose to abort migration."
if [[ -n "$tempkey" ]]
then
echo "Cleaning up temporary ssh keys..."
${sshstring} "sed -i -e '/$pubkeystring/d' ${authkeys}"
rm ${key} ${publickey}
fi
echo "Exiting..."
exit
fi

echo "=============== BEGIN DATA MOVES ==============="
echo ""

for lv in ${lvlist}
do
lvname=`basename ${lv}`
lvextents=`lvdisplay ${lv} -c | cut -d ":" -f 8`
lvsize=`expr $lvextents \* $localvgextentsize`
remotelvcheck=`${sshstring} "lvs /dev/${remotevgname}/${lvname} 2>&1"`
if [ "$remotelvcheck" == "  One or more specified logical volume(s) not found." ]
then
echo "Creating Logical Volume ${lvname} on ${remotehost}."
${sshstring} "lvcreate -L${lvsize}M ${remotevgname} -n ${lvname} 2>&1 > /dev/null"
else
echo "Logical Volume (${lvname}) already exists on the remote host, for the safety of your data and system, this script will not overwrite a Logical Volume which it did not create, script must exit."
exit
fi
localpvtest=`hash pv 2>&- || { echo >&2 "pv not installed"; } 2>&1`
remotepvtest=`${sshstring} "hash pv 2>&- || { echo >&2 "pv not installed"; } 2>&1"`
if [[ -n "$localpvtest" ]] || [[ -n "$remotepvtest" ]]
then
message="Transferring ${lvname} to ${remotehost}."
message2="Progress is not available, please install pv for progress..."
if [ "$compressionlevel" -eq "0" ]
then
echo "${message}  With no compression.  ${message2}"
dd if=${lv} 2>/dev/null | ${sshstring} "dd of=/dev/${remotevgname}/${lvname} 2>/dev/null"
else
echo "${message}  With compression.  ${message2}"
dd if=${lv} 2>/dev/null | gzip -${compressionlevel} | ${sshstring} "gunzip | dd of=/dev/${remotevgname}/${lvname} 2>/dev/null"
fi
else
message="Transferring ${lvname} to ${remotehost}."
if [ "$compressionlevel" -eq "0" ]
then
echo "${message}  With no compression..."
dd if=${lv} 2>/dev/null | pv -tpreb -N ${lvname} -s ${lvsize}M | ${sshstring} "dd of=/dev/${remotevgname}/${lvname} 2>/dev/null"
else
echo "${message}  With compression..."
dd if=${lv} 2>/dev/null | pv -tpreb -N ${lvname} -s ${lvsize}M | gzip -${compressionlevel} | ${sshstring} "gunzip | dd of=/dev/${remotevgname}/${lvname} 2>/dev/null"
fi
fi
done

echo ""
echo "================ END DATA MOVES ================"

virsh dumpxml ${vmname} | sed "/<address type\|<uuid>\|vnet\|<alias name\|<label>\|<imagelabel>\|<source file='.*.iso'/d" | sed "s/<domain type='kvm' id='.*'>/<domain type='kvm'>/g" | sed "s/${localvgname}/${remotevgname}/g" > ${xmlfile}

echo "Transferring configuration file."
if [[ -n "$tempkey" ]]
then
scp -i ${key} ${xmlfile} ${user}@${remotehost}:${xmlfile} #2&>1 > /dev/null
else
scp ${xmlfile} ${user}@${remotehost}:${xmlfile} #2&>1 > /dev/null
fi
rm ${xmlfile}

echo "Defining domain from configuration."
${sshstring} "virsh define ${xmlfile}; rm ${xmlfile}" #2&>1 > /dev/null

if [[ -n "$startoncomplete" ]]
then
${sshstring} "virsh start ${vmname}" #2&>1 > /dev/null
fi

if [[ -n "$tempkey" ]]
then
echo "Cleaning up temporary ssh keys..."
${sshstring} "sed -i -e '/$pubkeystring/d' ${authkeys}"
rm ${key} ${publickey}
fi

UPDATE Feb 2, 2013

I have migrated my scripts to a new location.


July 21st, 2011

There will come a time when you need to figure out which package provides a particular utility so that you can install it.  In my case I knew that I needed kvm-img (which is also called qemu-img), and I assumed that it was in either the libvirt-bin package or the qemu-kvm package.  It turned out to be in qemu-kvm, which is not what I was hoping.  But I digress; here are two ways to get this done for anything.

# dpkg -S kvm-img
qemu-kvm: /usr/bin/kvm-img
qemu-kvm: /usr/share/man/man1/kvm-img.1.gz
# apt-file search kvm-img
qemu-kvm: /usr/bin/kvm-img
qemu-kvm: /usr/share/man/man1/kvm-img.1.gz

Both generate the exact same output here, but apt-file requires that you install it and run “apt-file update” before you can use it, whereas dpkg is built into Debian systems.  Note that dpkg -S can only find files belonging to packages that are already installed; apt-file searches the repository index, so it also works for packages you have not installed yet.  When the package is already on the system, I would recommend the dpkg option.

May 31st, 2011

Currently I have been working on a method of migrating some of our data from Windows file servers to Solaris ZFS CIFS servers, trying to retain as much feature parity as possible.  Permissions are an issue, however I have worked through most of those problems (look for that article in the future); now it is time to migrate data from Windows onto ZFS.  This first crack will be about 1TB of data.  Enter rsync.  The “easiest” way to do this would be to use a Linux box in the middle to mount both the Solaris CIFS share and the Windows CIFS share and perform the rsync from there.  A far more efficient way is to mount the Windows CIFS share on the Solaris box (or vice versa) and perform the rsync directly between the two.  This article will document how to mount a Windows CIFS file share onto the local file system of a Solaris 11 Express box.

Make the Migration Directory

# mkdir /mnt/migration

Enable the SMB Client Service

# svcadm enable svc:/network/smb/client:default

Mount the Remote Windows Share in the Local Directory

# mount -F smbfs "//ALLANGLESIT;administrator@winfileserver/share" /mnt/migration/
Password:

A better way would be to mount the data on your production server as read only in case of any syntax issues.

# mount -F smbfs -o ro "//ALLANGLESIT;administrator@winfileserver/share" /mnt/migration/
Password:

Verify the Mount

# mount | grep winfileserver
/mnt/migration on //ALLANGLESIT;administrator@winfileserver/share remote/read/write/setuid/devices/rstchown/intr/acl/xattr/dev=8d40002 on Wed May 25 11:54:14 2011

OR

# cat /etc/mnttab | grep winfileserver
//ALLANGLESIT;administrator@winfileserver/share      /mnt/migration smbfs   rw,intr,acl,xattr,dev=8d40002   1306349654

View the Remote Data Locally

# ls /mnt/migration/
data1  data2

Rsync the Data

Above I mentioned this is for the purpose of migrating data from Windows CIFS to Solaris CIFS, so I would be remiss not to mention the command I use to perform the rsync.  However, this is not the purpose of this article, so you are on your own for the proper switches for your environment.  The command below performs a recursive sync; no ACLs or anything else are carried over, just the files and their timestamps.  I will be reapplying the appropriate permissions once they land.

# rsync -rt --modify-window=1 /mnt/migration/data1 /tank/cifs/share1/

If you are performing a migration which will require “incremental” rsyncs, you will most likely want to use the --delete-during option so that data which no longer exists at the source will be deleted on the receiving end.  This is most common when a file is deleted or renamed in the original location between your rsyncs.

# rsync --recursive --times --delete-during --modify-window=1 /mnt/migration/data1 /tank/cifs/share1/

Unmount the Mounted File System

# umount /mnt/migration

 
