Make sure virtualization is enabled in the BIOS

I just wasted a fair amount of time on a RHEL 6.1 box being set up as a KVM hypervisor, trying to figure out why, when I ran virsh version, it was telling me (among other things):

internal error Cannot find suitable emulator for x86_64

All the appropriate packages, such as qemu-kvm, were installed, but it just didn't seem to want to work. Finally, as I was about to try reinstalling RHEL, I remoted into the actual console and saw:

kvm: disabled by bios

Doh! Looking back in /var/log/messages, the same message was buried deep within all the boot noise. While trying to figure this out I'd been searching the logs for virt and qemu, and somehow never searched for kvm. Enabled virtualization in the BIOS and everything's gravy now.
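
For the record, here are the quick checks that would have caught this sooner (standard commands, nothing RHEL-specific):

# look for the kvm message in the kernel log and in syslog
dmesg | grep -i kvm
grep -i kvm /var/log/messages

# confirm the CPU supports hardware virtualization at all
# (vmx = Intel VT-x, svm = AMD-V; note the flag can still show up even
# when the BIOS has the feature disabled -- which is exactly the trap here)
egrep '(vmx|svm)' /proc/cpuinfo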

So there you go: if you're Googling that first error message and getting lots of other nonsense, look for the message about the BIOS.

KVM Networking

Still playing with KVM (Kernel-based Virtual Machine), this time checking out some networking features. I've been running Ubuntu 8.04 LTS Server (Hardy Heron), both as the host and as a VM on that host. Networking is set up to use a bridge.
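
A minimal sketch of the relevant /etc/network/interfaces stanza, assuming the stock bridge-utils setup with eth0 as the physical NIC:

# bridge the VMs onto the LAN through eth0
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0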

KVM offers a choice of emulated NICs. I took a quick look at running iperf between the VM and the host, and got these speeds for a few select NIC models (a sketch of how to pick the model follows the list):

  • RTL-8139C+ (the default): ~210 Mb/sec
  • Intel e1000 (somewhat recommended here): ~330 Mb/sec
  • virtio: ~700 Mb/sec

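The model is picked with the -net nic,model=... flag when starting the VM (the disk path and guest address below are just examples), and the numbers above came from plain iperf:

# start the VM with a specific NIC model (virtio shown);
# "kvm -net nic,model=?" lists what your build supports
kvm -m 1024 -hda /dev/myvg/test_vm -net nic,model=virtio -net tap

# run "iperf -s" on the host, then in the VM:
iperf -c 192.168.1.10
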
The thing about virtio, though, is that it doesn't work when the VM's RAM is set to 4 GB. So I guess you can have fast networking or lots of memory, but not both.

Playing with KVM and LVM on Linux

I'm still experimenting with Ubuntu 8.04 Server (Hardy Heron), and have switched from Xen to KVM (Kernel-based Virtual Machine). Xen worked well on a little test machine I had, but when I tried it on a brand-new Supermicro server, Xen turned out to have a problem with the server's Intel NIC. Since Ubuntu seems to be recommending KVM over Xen, and the server supports hardware virtualization, I figured I'd give KVM a try.

One big difference is that KVM does full emulation, which means any disk space you give it from LVM (Logical Volume Manager) becomes a full virtual disk, complete with a partition table. Accessing filesystems inside the virtual disk is a little more complicated than it was with Xen, so I wanted to jot down some notes here, mostly for myself, on how to do that.
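
For completeness, creating the volume in the first place is an ordinary lvcreate (10G is just an example size):

lvcreate --size 10G --name test_vm myvg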

If I've created a logical volume named /dev/myvg/test_vm and installed another Linux on it, with a single ext3 filesystem (/dev/sda1 from the point of view of the VM) and some swap space (/dev/sda5), it can be accessed when the VM isn't running with the help of the kpartx utility...

kpartx -av /dev/myvg/test_vm

would read the partition table on the virtual disk and create:

/dev/mapper/myvg-test_vm1
/dev/mapper/myvg-test_vm2
/dev/mapper/myvg-test_vm5

(test_vm1 is the VM's /dev/sda1, test_vm2 the extended partition holding the logical partitions, and test_vm5 the swap on /dev/sda5)

Then you can

mount /dev/mapper/myvg-test_vm1 /mnt

to mess with the VM's /dev/sda1. To clean things up when finished, run:

umount /mnt
kpartx -d /dev/myvg/test_vm

Snapshots

If you want to look at the contents of a running VM's disks (perhaps for backing it up), you can use LVM snapshots. For example:

lvcreate --snapshot --size 1G --name test_snap /dev/myvg/test_vm
kpartx -av /dev/myvg/test_snap
mount /dev/mapper/myvg-test_snap1 /mnt
   .
   (play with VM's /dev/sda1 in /mnt)
   .
umount /mnt
kpartx -dv /dev/myvg/test_snap
lvremove /dev/myvg/test_snap
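
While the snapshot is mounted, the backup itself is just an ordinary copy out of /mnt; for example (destination path assumed):

tar -czf /backup/test_vm-sda1.tar.gz -C /mnt .

Keep in mind that a snapshot of a running VM captures the disk as if the power had been pulled at that moment, so the filesystem on it is only crash-consistent.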