Fooling around a bit more with accessing a VM's serial console from a KVM hypervisor with
virsh console mymachine
I found that one thing that doesn't carry over from the host to the VM is the terminal window size, so if you try to use something like vim through the console connection, it seems to assume an 80x25 or so window, and when vim exits your console is all screwed up.
It looks like a serial connection doesn't have an out-of-band way of passing that info the way telnet or ssh does, so you have to set it manually. You can discover your settings on the host machine with
stty size
which should show something like:
60 142
On the VM, the same command probably shows
0 0
Zero rows and columns; no wonder it's confused. Fix it by setting the VM to have the same rows and columns as the host with something like:
stty rows 60 columns 142
and you're in business.
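If you don't want to look up the size on the host every time, you can ask the terminal itself over the serial line. Here's a quick-and-dirty sketch for the VM's shell profile, assuming an xterm-compatible terminal emulator on the host end (the resize utility shipped with xterm does roughly the same thing; the function name is my own):
resize_serial() {
    old=$(stty -g)
    stty raw -echo min 0 time 5
    # Save the cursor, jump toward the bottom-right corner, ask the
    # terminal where the cursor actually ended up, then restore it
    printf '\0337\033[999;999H\033[6n\0338' > /dev/tty
    IFS='[;R' read -r _ rows cols _ < /dev/tty
    stty "$old"
    stty rows "$rows" columns "$cols"
}
Call it after connecting, and the rows and columns reported by the cursor-position query get fed straight back into stty.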
Enabling VM serial console on stock Ubuntu 10.04 server
So I've been running Ubuntu 10.04 server virtual machines on a host running KVM as the hypervisor, and thought I should take a look at accessing the VM's console from the host, in case there's a problem with the networking on the VM.
The host's libvirt definition for the VM shows a serial port and console defined with something like:
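<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>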
and within the stock Ubuntu 10.04 server VM, dmesg | grep ttyS0 shows:
[ 0.174722] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[ 0.175027] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
So the virtual hardware is all set up on both ends, but ps aux | grep ttyS0 doesn't show anything.
We need a process listening on that port. To do that, create a file named /etc/init/ttyS0.conf with these contents:
# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
respawn
exec /sbin/getty -L 38400 ttyS0 xterm-color
and then run
initctl start ttyS0
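You can confirm the job actually came up with
initctl status ttyS0
which should report it as start/running with a PID.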
Back on the host machine, run virsh list to find the name or ID number of your VM, and then
virsh console <your-vm-name-or-number>
to connect. Hit return and you should see a login prompt. (When you're done, Ctrl+] disconnects from the console.)
Customizing cloned Ubuntu VMs
I was playing with creating and cloning Ubuntu virtual machines the other day, and got to the point where I had a nicely set up reference image that I could just copy to fire up additional VMs that would be in a pretty usable state.
There are a few things within a cloned VM that you'd want to change if you were going to keep the new instance around, such as the hostname, SSH host keys, and disk UUIDs. I threw together a simple shell script to take care of these things automatically.
#!/bin/sh
#
# Updates for cloned Ubuntu VM
#

#
# Some initial settings cloned from the master
#
ROOT=/dev/vda1
SWAP=/dev/vdb1
LONG_HOSTNAME=ubuntu.local
SHORT_HOSTNAME=ubuntu

if [ -z "$1" ]
then
    echo "Usage: $0 <new-hostname>"
    exit 1
fi

#
# Update hostname
#
shorthost=`echo $1 | cut -d . -f 1`
echo $1 > /etc/hostname
hostname $1
sed -i -e "s/$LONG_HOSTNAME/$1/g" /etc/hosts
sed -i -e "s/$SHORT_HOSTNAME/$shorthost/g" /etc/hosts

#
# Generate new SSH host keys
#
rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server

#
# Change root partition UUID
#
OLD_UUID=`blkid -o value $ROOT | head -n 1`
NEW_UUID=`uuidgen`
tune2fs -U $NEW_UUID $ROOT
sed -i -e "s/$OLD_UUID/$NEW_UUID/g" /etc/fstab /boot/grub/grub.cfg

#
# Change swap partition UUID
#
OLD_UUID=`blkid -o value $SWAP | head -n 1`
NEW_UUID=`uuidgen`
swapoff $SWAP
mkswap -U $NEW_UUID $SWAP
swapon $SWAP
sed -i -e "s/$OLD_UUID/$NEW_UUID/g" /etc/fstab

#
# Remove udev lines forcing new MAC address to probably show up as eth1
#
sed -i -e "/PCI device/d" /etc/udev/rules.d/70-persistent-net.rules
sed -i -e "/SUBSYSTEM==/d" /etc/udev/rules.d/70-persistent-net.rules

echo "UUID and hostname updated, udev nic lines removed, be sure to reboot"
I'd then run it on the cloned machine with something like
update_clone.sh mynewmachine.foobar.com
This is somewhat particular to my specific master VM, in that it expects one disk dedicated to root and one disk dedicated to swap, and that the VM was created with ubuntu.local as the hostname. Hopefully though it'll give some ideas about what to look for and how to script those changes.
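As an aside, for the cloning step itself I was just copying disk images by hand, but virt-clone can do the copy and define the new domain in one shot (it also generates a fresh MAC address). A sketch, with placeholder names and paths:
virt-clone --original ubuntu-master --name mynewmachine --file /var/lib/libvirt/images/mynewmachine.img
Either way, the in-guest changes above are still needed, since nothing outside the VM will rewrite the hostname, host keys, or UUIDs for you.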
Make sure virtualization is enabled in the BIOS
I just wasted a fair amount of time on a Red Hat 6.1 box being set up as a hypervisor with KVM, trying to figure out why, when I ran virsh version, it was telling me among other things
internal error Cannot find suitable emulator for x86_64
All the appropriate packages such as qemu-kvm were installed, but it just didn't seem to want to work. Finally as I was about to try reinstalling RHEL, I remoted into the actual console and saw:
kvm: disabled by bios
Doh! Looking back in /var/log/messages, the same thing was buried deep within all the boot noise. While trying to figure this out I had only been searching the logs for virt or qemu, and somehow didn't search for kvm. Enabling virtualization in the BIOS fixed it, and everything's gravy now.
So there you go, if you're Googling that first error message and get lots of other nonsense, look for the message about the BIOS.
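For what it's worth, the grep that would have saved me the time, run on the hypervisor:
dmesg | grep -i kvm
If virtualization is switched off, the kvm: disabled by bios line shows up right there. Note that the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo only tell you the CPU supports virtualization, not that the BIOS actually has it enabled.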
KVM Networking
Still playing with KVM (Kernel-based Virtual Machine), this time checking out some networking features. I've been running Ubuntu 8.04 LTS Server (Hardy Heron), both as the host and as a VM on that host. Networking is set up to use a bridge.
KVM offers different emulated NICs. I took a quick look at running iperf between the VM and the host to compare speeds for a few select NIC models, and virtio came out well ahead of the emulated hardware.
The thing about virtio though is that it doesn't work when the VM's RAM is set to 4GB. So I guess you can have fast networking, or lots of memory, but not both.
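For reference, the emulated model is chosen per-interface in the libvirt domain XML; a typical bridged definition looks something like this (br0 being whatever bridge your host uses):
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>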
Playing with KVM and LVM on Linux
I'm still experimenting with Ubuntu 8.04 Server (Hardy Heron), and have switched from Xen to KVM (Kernel-based Virtual Machine). Xen worked well on a little test machine I had, but when I tried it on a brand-new Supermicro server, it turned out to have a problem with the Intel NIC. Since it seems Ubuntu is recommending KVM over Xen, and the server supports hardware virtualization, I figured I'd give it a try.
One big difference is that KVM does full emulation, which means any disk space you give it from LVM (Logical Volume Manager) will be a full virtual disk, with a partition table. It's a little more complicated to access filesystems within the virtual disk than it was with Xen, so I wanted to jot down some notes here, mostly for myself, on how to do that.
If I've created a logical volume named /dev/myvg/test_vm and installed another Linux on it with a single ext3 filesystem (/dev/sda1 from the point of view of the VM) and some swap space (/dev/sda5), it can be accessed when the VM isn't running with the help of the kpartx utility...
kpartx -av /dev/myvg/test_vm
would read the partition table on the virtual disk and create: