Customizing cloned Ubuntu VMs

I was playing with creating and cloning Ubuntu virtual machines the other day, and got to the point where I had a nicely set up reference image that I could just copy to fire up additional VMs that would be in a pretty usable state.

There are a few things within a cloned VM that you'd want to change if you were going to keep the new instance around, such as the hostname, SSH host keys, and disk UUIDs. I threw together a simple shell script to take care of these things automatically.

#!/bin/sh
#
# Updates for cloned Ubuntu VM
#

#
# Some initial settings cloned from the master
#
ROOT=/dev/vda1
SWAP=/dev/vdb1
LONG_HOSTNAME=ubuntu.local
SHORT_HOSTNAME=ubuntu

if [ -z "$1" ]
then
    echo "Usage: $0 <new-hostname>"
    exit 1
fi

# 
# Update hostname
#
shorthost=`echo "$1" | cut -d . -f 1`
echo "$1" >/etc/hostname
hostname "$1"
sed -i -e "s/$LONG_HOSTNAME/$1/g" /etc/hosts
sed -i -e "s/$SHORT_HOSTNAME/$shorthost/g" /etc/hosts

#
# Generate new SSH host keys
#
rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server

#
# Change root partition UUID
#
OLD_UUID=`blkid -s UUID -o value $ROOT`
NEW_UUID=`uuidgen`
tune2fs -U $NEW_UUID $ROOT
sed -i -e "s/$OLD_UUID/$NEW_UUID/g" /etc/fstab /boot/grub/grub.cfg

#
# Change swap partition UUID
#
OLD_UUID=`blkid -s UUID -o value $SWAP`
NEW_UUID=`uuidgen`
swapoff $SWAP
mkswap -U $NEW_UUID $SWAP
swapon $SWAP
sed -i -e "s/$OLD_UUID/$NEW_UUID/g" /etc/fstab

#
# Remove udev persistent-net rules so the new MAC address doesn't come up as eth1
#
sed -i -e "/PCI device/d"     /etc/udev/rules.d/70-persistent-net.rules
sed -i -e "/SUBSYSTEM==/d" /etc/udev/rules.d/70-persistent-net.rules

echo "UUID and hostname updated, udev nic lines removed - be sure to reboot"

I'd then run it on the cloned machine with something like

update_clone.sh mynewmachine.foobar.com

This is somewhat particular to my specific master VM, in that it's expecting one disk dedicated to root and one disk dedicated to swap, and the VM was created with ubuntu.local as the hostname. Hopefully though it'll give some ideas about what to look for and how to script those changes.
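If you want to see concretely what the two sed substitutions do to /etc/hosts before running the script for real, here's a throwaway demo against a scratch copy. The sample file contents, the /tmp path, and the mynewmachine.foobar.com name are all just illustrations:

```shell
# Scratch copy standing in for the clone's /etc/hosts
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost
127.0.1.1   ubuntu.local ubuntu
EOF

NEWNAME=mynewmachine.foobar.com
SHORTNAME=`echo $NEWNAME | cut -d . -f 1`

# Same order as the script: long name first, then the short name
sed -i -e "s/ubuntu.local/$NEWNAME/g" /tmp/hosts.sample
sed -i -e "s/ubuntu/$SHORTNAME/g" /tmp/hosts.sample

cat /tmp/hosts.sample
# 127.0.0.1   localhost
# 127.0.1.1   mynewmachine.foobar.com mynewmachine
```

Note the order matters: if the short-name substitution ran first, it would also clobber the "ubuntu" inside "ubuntu.local" and the long-name substitution would then never match.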

Make sure virtualization is enabled in the BIOS

I just wasted a fair amount of time on a RedHat 6.1 box being set up as a KVM hypervisor, trying to figure out why, when I ran virsh version, it was telling me among other things

internal error Cannot find suitable emulator for x86_64

All the appropriate packages such as qemu-kvm were installed, but it just didn't seem to want to work. Finally, as I was about to try reinstalling RHEL, I remoted into the actual console and saw:

kvm: disabled by bios

Doh! Looking back in /var/log/messages, the same thing was buried deep within all the boot noise. While trying to figure this out I had been searching the logs for virt or qemu, and somehow didn't search for kvm. Enabled virtualization in the BIOS and everything's gravy now.

So there you go, if you're Googling that first error message and get lots of other nonsense, look for the message about the BIOS.
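In hindsight, a quick pre-flight check would have saved the trouble. A sketch of what I'd run now, using nothing beyond grep and the kernel log (note that on Intel boxes the vmx flag can still appear in /proc/cpuinfo even when VT-x is locked off in the BIOS, which is exactly why the dmesg check matters):

```shell
# Does this CPU advertise hardware virtualization at all?
# vmx = Intel VT-x, svm = AMD-V
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

# The flags can show up even when the feature is locked off in the
# BIOS - the kernel log is where that actually surfaces:
dmesg | grep -i kvm || true
```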

Setting up a PXE environment for OS installations

If you're fooling around with various OSes, installing them by first burning CDs or DVDs gets to be a drag - and you end up with piles of old discs that just go into a landfill. Sure, there are rewritable discs, but they wear out and get scratched eventually. USB memsticks can be painful too - sometimes difficult to create, and with different BIOSes having different levels of support.

A slick way to go is to set yourself up to do PXE (Preboot eXecution Environment) installations over a network. Most network cards have had PXE support included for many years now. If you have a machine handy that can act as a simple server, you can have an environment where you boot a machine, select the OS you want to install from a menu, and everything will just be pulled over your local network.

There are plenty of writeups on how to PXE install Ubuntu from an Ubuntu server, or FreeBSD from a FreeBSD server - but to make things more interesting and explicit I'll go cross-platform and talk about deploying Ubuntu Server 11.04 from a FreeBSD 8.2 server, and try to make it general enough so that later on we can add other OSes to the menu such as CentOS or OpenBSD.

Requirements

PXE booting a machine requires two basic services be present on your network:

  • DHCP - to assign the booted machine an IP address and tell it what "network bootstrap program" (NBP) to fetch from a TFTP server

  • TFTP (Trivial FTP - not to be confused with regular FTP) serves up the initial boot files

OSes such as Ubuntu or CentOS require a third service:

  • HTTP Server - serves up the bulk of the OS install files.

PXELINUX

For the Network Bootstrap Program, we'll use PXELINUX, which is available as part of the SYSLINUX project. The name SYSLINUX is a bit misleading in that it's not actually Linux, but rather a collection of bootloaders that are often used with Linux, and capable of loading other OSes as well. Think of it as something more along the lines of GRUB than an actual Linux distro.

To start off with, I'll create a /tftpboot directory, download syslinux-4.04.tar.gz from kernel.org, extract it, and copy the two files we want:

mkdir /tftpboot
fetch http://www.kernel.org/pub/linux/utils/boot/syslinux/syslinux-4.04.tar.gz
tar xzvf syslinux-4.04.tar.gz
cp syslinux-4.04/core/pxelinux.0 /tftpboot
cp syslinux-4.04/com32/menu/menu.c32 /tftpboot

We're done with the syslinux download now, so you could clean it up if you want with:

rm -rf syslinux-4.04*

Next, create a configuration directory

mkdir /tftpboot/pxelinux.cfg

and in that directory create a file named default with these initial contents:

DEFAULT menu.c32
PROMPT 0
TIMEOUT 200                           

LABEL local                           
    MENU LABEL Local Boot
    LOCALBOOT 0                     

That should be enough to get us a barebones menu when we PXE boot a machine, with a single option to boot off the local hard disk (we'll get to Ubuntu later).

Enable TFTP

TFTP is already included in FreeBSD; we just need to make sure it's enabled.

In /etc/inetd.conf make sure this line has the default # removed from the front (so it's not commented out)

tftp   dgram   udp     wait    root    /usr/libexec/tftpd      tftpd -l -s /tftpboot

In /etc/rc.conf, make sure inetd is enabled, adding if necessary:

inetd_enable="YES"

Depending on what you had to do above, start, or reload the inetd daemon with:

service inetd start

or

service inetd reload

Check that the machine is now listening on UDP port 69

sockstat | grep :69

See if you can fetch the NBP using the tftp utility (assuming your server's IPv4 address on the network where you'll be doing PXE boots is 10.0.0.1)

cd /tmp
tftp 10.0.0.1
tftp> get /pxelinux.0
tftp> quit
rm pxelinux.0

If it works you should have seen something like:

Received 26443 bytes during 0.1 seconds in 53 blocks

Tweak DHCP Server

For this part I'm assuming you're running an ISC dhcpd server (if not, we'll have to cover that in another post). You basically just need to add two lines to /usr/local/etc/dhcpd.conf telling a client what server to use for TFTP and what NBP to fetch:

next-server 10.0.0.1;
filename "/pxelinux.0";

On my server, I just wanted to do this on one particular subnet, so there's a chunk that looks something like this now:

subnet 10.0.0.0 netmask 255.255.255.0 
    {
    range 10.0.0.127 10.0.0.250;
    option routers 10.0.0.1;

    next-server 10.0.0.1;
    filename "/pxelinux.0";
    }

Restart dhcpd

service isc-dhcpd restart

Give it a try

On your client machine, you may have to poke around in the BIOS to enable PXE booting. You'll have to figure out this part for yourself. If you can select your Network Card as the boot device, and everything else is working right, you should see a simple menu something like this:

Initial success

OK! We're at the "Hello World" stage: we know the client and server are doing the bare minimum necessary for PXE to function at all. Time to move on to the good stuff.

Ubuntu Server 11.04

For this next step, I'll assume you've downloaded an ISO into, say, /foo/ubuntu-11.04-server-amd64.iso. The specific version shouldn't matter too much, so if you want to do 10.04 LTS or something else, it should all be about the same.

Mount the ISO image, so we can copy a couple files into /tftpboot and share the rest with a web server.

mkdir -p /iso_images/ubuntu-11.04-server-amd64
mount -t cd9660 /dev/`mdconfig -f /foo/ubuntu-11.04-server-amd64.iso` /iso_images/ubuntu-11.04-server-amd64
mkdir /tftpboot/ubuntu-11.04-server-amd64
cp /iso_images/ubuntu-11.04-server-amd64/install/netboot/ubuntu-installer/amd64/linux /tftpboot/ubuntu-11.04-server-amd64
cp /iso_images/ubuntu-11.04-server-amd64/install/netboot/ubuntu-installer/amd64/initrd.gz /tftpboot/ubuntu-11.04-server-amd64

So now our /tftpboot directory has these five files underneath it:

pxelinux.0
pxelinux.cfg/default
menu.c32
ubuntu-11.04-server-amd64/linux
ubuntu-11.04-server-amd64/initrd.gz

To the /tftpboot/pxelinux.cfg/default file append

LABEL ubuntu-11.04-server-amd64-install             
    MENU LABEL Ubuntu 11.04 Server AMD64 Install
    kernel ubuntu-11.04-server-amd64/linux
    append vga=788 initrd=ubuntu-11.04-server-amd64/initrd.gz

Try PXE booting your client again, this time you'll have "Ubuntu 11.04 Server AMD64 Install" as one of your choices, select that, cross your fingers, and if all goes well in a few seconds you should see:

Initial success

and you can go through and answer the initial questions about the install.

If you're OK with pulling the bulk of the OS over the internet from the official Ubuntu mirrors, it should work, although it might be slow. Since we have a nice server sitting on our LAN with a copy of the ISO, we should set things up to use that and do a much faster install.

Web Server

For this example, I'll assume nginx has been installed as the webserver (any one will do though, so if you've already got apache installed - that'll work fine too).

The default nginx install uses /usr/local/www/nginx as its docroot; let's put a symlink to our mounted ISO image in there:

ln -s /iso_images/ubuntu-11.04-server-amd64 /usr/local/www/nginx

and also put a minimal Debian Installer "preseed" file in there that'll help things along by telling the installer to use our webserver for the installation packages. Create a text file named /usr/local/www/nginx/ubuntu-11.04-server-amd64.txt with these contents:

d-i mirror/country string manual
d-i mirror/http/hostname string 10.0.0.1
d-i mirror/http/directory string /ubuntu-11.04-server-amd64
d-i mirror/http/proxy string

Check that you can fetch that file with the URL: http://10.0.0.1/ubuntu-11.04-server-amd64.txt

Edit the /tftpboot/pxelinux.cfg/default file and append

url=http://10.0.0.1/ubuntu-11.04-server-amd64.txt

to the end of the append line of our Ubuntu section, so it now looks like:

LABEL ubuntu-11.04-server-amd64-install             
    MENU LABEL Ubuntu 11.04 Server AMD64 Install
    kernel ubuntu-11.04-server-amd64/linux
    append vga=788 initrd=ubuntu-11.04-server-amd64/initrd.gz url=http://10.0.0.1/ubuntu-11.04-server-amd64.txt

Try PXE booting the Ubuntu install again. You'll still get some initial questions about language and keyboard (we can deal with those in another post), but you shouldn't be asked about mirrors - the installer will know to pull files from your local webserver.

Go through the install on the client while watching the /var/log/nginx-access.log file on the server; you'll see the installer fetching all kinds of files, so you'll know it's all working.

You're in business

So at this point you've got yourself a working PXE installation environment and can do a basic Ubuntu server install.

By adding a few more parameters to your seed file and the PXE configuration you can eliminate some of the installer questions. I'll probably write about that in another post, but if you want to figure it out yourself, check out the Ubuntu Installation Guide - Appendix B. Automating the installation using preseeding

There are so many things you can do with the PXE menus, kernel options, and so on that it can't all be covered in one place. But hopefully you've got a good starting point now that you know all the basic services are in place and working.

Rebuilding Hyper-V Linux Integration Components for a kernel upgrade

At work I've been running RedHat Enterprise Linux (RHEL) 5.6 on top of a Windows Server 2008 R2 Hyper-V host, with Linux Integration Components (LinuxIC) installed.

That all worked fine until I did a yum update on RHEL to pick up a new kernel and tried to reboot. The new kernel panicked, saying it couldn't find my LVM volume groups. Fortunately, the old kernel was still on the menu and booted OK.
Turns out the new kernel for whatever reason wouldn't see the virtual disks for that machine.

OK, so I needed to rebuild LinuxIC for the new kernel while running the old kernel, but how to do that? The Makefile and various scripts that come with LinuxIC basically build for and install on the currently running kernel. Fortunately I came across this post showing a trick of replacing /bin/uname with a fake version that reports the kernel version number you want to build for. Tried that and was back in business.

I think this would work too, without messing with the original /bin/uname: create a directory somewhere, say /tmp/fake_uname, and stick this file in it with the name uname, changing the "echo" line to the installed kernel version number you want to build for:

#!/bin/sh
case "$1" in
    -r) echo "2.6.18-238.9.1.el5" ;;
    *)  exec /bin/uname "$@" ;;
esac

Then build and install your Linux IC with the /tmp/fake_uname prepended to your PATH as in

PATH=/tmp/fake_uname:$PATH make
PATH=/tmp/fake_uname:$PATH make install

When the LinuxIC build calls uname it finds the fake version first; if the argument is -r it shows your desired version number, otherwise it falls back to the real uname.
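If you want to convince yourself the wrapper behaves correctly before kicking off a long build, the whole thing can be recreated and exercised in a few lines (the /tmp/fake_uname path and the kernel version string are just the examples from above):

```shell
# Recreate the wrapper script...
mkdir -p /tmp/fake_uname
cat > /tmp/fake_uname/uname <<'EOF'
#!/bin/sh
case "$1" in
    -r) echo "2.6.18-238.9.1.el5" ;;
    *)  exec /bin/uname "$@" ;;
esac
EOF
chmod +x /tmp/fake_uname/uname

# ...and check both paths through the case statement:
PATH=/tmp/fake_uname:$PATH uname -r   # prints 2.6.18-238.9.1.el5
uname -r                              # still the running kernel
```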

KVM Networking

Still playing with KVM (Kernel-based Virtual Machine), this time checking out some networking features. I've been running Ubuntu 8.04 LTS Server (Hardy Heron), both as the host and as a VM on that host. Networking is setup to use a bridge.

KVM offers different emulated NICs, I took a quick look at running iperf between the VM and the host, and got these speeds for a few select NIC models:

  • RTL-8139C+ (the default): ~210 Mb/sec
  • Intel e1000 (somewhat recommended here): ~330 Mb/sec
  • virtio: ~700 Mb/sec

The thing about virtio though is that it doesn't work when the VM's RAM is set to 4GB. So I guess you can have fast networking, or lots of memory, but not both.

Playing with KVM and LVM on Linux

I'm still experimenting with Ubuntu 8.04 Server (Hardy Heron), and have switched from Xen to KVM (Kernel-based Virtual Machine). Xen worked well on a little test machine I had, but when I tried it on a brand-new Supermicro server, it turned out to have a problem with the Intel NIC. Since it seems Ubuntu is recommending KVM over Xen, and the server supports hardware virtualization, I figured I'd give it a try.

One big difference is that KVM does full emulation, which means any disk space you give it from LVM (Logical Volume Manager) will be a full virtual disk, with a partition table. It's a little more complicated to access filesystems within the virtual disk than it was with Xen, so I wanted to jot down some notes here, mostly for myself, on how to do that.

If I've created a logical volume named /dev/myvg/test_vm and installed another Linux on it with a single ext3 filesystem (/dev/sda1 from the point of view of the VM) and some swap space (/dev/sda5), it can be accessed when the VM isn't running with the help of the kpartx utility...

kpartx -av /dev/myvg/test_vm

would read the partition table on the virtual disk and create:

/dev/mapper/myvg-test_vm1 
/dev/mapper/myvg-test_vm2 
/dev/mapper/myvg-test_vm5

Then you can

mount /dev/mapper/myvg-test_vm1 /mnt

to mess with the VM's /dev/sda1. To clean things up when finished, run:

umount /mnt
kpartx -d /dev/myvg/test_vm

Snapshots

If you want to look at the contents of a running VM's disks (perhaps for backing it up) you can use LVM snapshots. For example:

lvcreate --snapshot --size 1G --name test_snap /dev/myvg/test_vm
kpartx -av /dev/myvg/test_snap
mount /dev/mapper/myvg-test_snap1 /mnt
   .
   (play with VM's /dev/sda1 in /mnt)
   .
umount /mnt
kpartx -dv /dev/myvg/test_snap
lvremove /dev/myvg/test_snap

Ubuntu server locale errors

This is mostly a note to myself... After setting up a minimal Ubuntu server install (in Xen), following these instructions using debootstrap I saw lots of errors like this:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

Checking with locale -a would show

C
POSIX

While a full Ubuntu server install (off a CD) would show:

C
en_US.utf8
POSIX

This command seems to have generated the missing locale and made everybody happy.

localedef --no-archive -i en_US -c -f UTF-8 en_US.UTF-8

Xen and UFW on Ubuntu

I've been experimenting with setting up Ubuntu Server 8.04 (Hardy Heron) to run Xen, and had a minor problem with UFW (Uncomplicated Firewall) running in the dom0 blocking network access to a domU running in bridged mode. It seems the fix is just to edit /etc/default/ufw and make this change to enable forwarding:

--- a/default/ufw       Thu Oct 23 10:00:33 2008 -0500
+++ b/default/ufw       Thu Oct 23 10:34:36 2008 -0500
@@ -16,7 +16,7 @@ DEFAULT_OUTPUT_POLICY="ACCEPT"

 # set the default forward policy to ACCEPT or DROP.  Please note that if you
 # change this you will most likely want to adjust your rules
-DEFAULT_FORWARD_POLICY="DROP"
+DEFAULT_FORWARD_POLICY="ACCEPT"

 #
 # IPT backend

and then run ufw disable; ufw enable.

I believe dom0 is now protected, and it'll be up to the domU to protect itself. I can't say I'm entirely comfortable with Linux IPTables; sure wish PF was available as an alternative.