Some thoughts on VM techniques for home use

Many of us are geeks who like to play with technology "because it is there".  We might want to try out a new OS, or a new piece of software.  Maybe install a beta version of something, or be able to test a client-server setup.  Historically that has meant having one (or more) test machines, configured as multi-boot.  In 2002 I spent $600 on a Celeron 1200MHz machine with 256MB RAM and a 40GB disk for precisely this purpose; it multi-booted into XP, NetBSD, Solaris x86, Fedora... at that point I ran out of primary boot partitions.  *sigh*

Today, however, machines are a lot more powerful.  Since I had a spare case, power supply and 3x500GB disks sitting around I spent $500 getting a new motherboard ($90), an i5-750 CPU ($200) and 8GB of RAM ($210).  I picked this setup because the i5-750 is a VT-x compatible CPU and so could do hardware virtualization for Windows guests.  Also, in the worst case, it could quickly be used as a standby machine if my primary server (Q6600, 4GB RAM) died.  Redundancy :-)

Then I had to decide on what virtualization technology to use on this machine.

Types of hypervisor

Hypervisors come labelled in two types: "type 1" and "type 2".  It's really a little bit kludgy to have this separation but...

Type 1 hypervisors are typically small kernels that essentially act as a message bus between guest OSes and a "control domain".  The guest may think it's talking to a SCSI controller but really this talks to the hypervisor, which passes the request to the control domain, which then does the work of talking to the hardware.  Hypervisors built on this model can be quite small and are potentially suitable for embedding into server firmware.  Because of this they're commonly known as "bare metal" hypervisors.  This is what Sun have done with some of their newer hardware; the sun4v platform is a hypervisor'd virtual SPARC chip; LDOMs are guest domains.  In the Intel (read "Intel, AMD and similar") space Xen is the common OpenSource type 1 hypervisor; VMware ESX is a common commercial one.  Microsoft's Hyper-V is also a Type 1 solution.

Type 2 hypervisors run a step away from the hardware; you have your primary OS install and the hypervisor runs inside that; this is called "hosted".  As with type 1, guests request access to their SCSI controller, the hypervisor traps this and then makes a request of the primary kernel.  This sounds inefficient and would be, except that common type 2 hypervisors have kernel loadable modules and so essentially run in the kernel space of the primary OS.  This may make them as efficient as Type 1.  VMware Server, VMware Player, and VirtualBox are examples of Type 2 hypervisors.

What about full machine virtualisation?  Where the whole machine is "virtual".  I consider this an extreme example of a type 2 setup.  An example of this is 'User Mode Linux', where the Linux kernel itself runs as an application instance inside another install.

Essentially, with a type 1 hypervisor the hypervisor is loaded first and all the OS instances (control domain, guests) load afterwards.  With a type 2 hypervisor an OS instance loads first, and the hypervisor afterwards (or, with kernel modules to confuse things, at the same time).

Paravirtualization?  Huh?

In order for the guest to access things like network or disks it needs to have drivers.  So, typically, your VM solution will emulate some common hardware (maybe a RealTek or Intel ethernet card; an IDE or BusLogic SCSI disk).  The guest will have drivers for this already, so they'll "just work".  But as you can guess this may not be very efficient and may result in many context switches as the guest's requests bounce around the system before they get to the physical hardware.  To help shortcut this the hypervisor may expose an easier-to-emulate interface to the guest which can be processed a lot quicker.  To the guest these devices will look like a different piece of hardware entirely and so will require new drivers.  This technique is called paravirtualization and can result in massively improved performance in the guest.
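If you're not sure which you ended up with, a quick look from inside a Linux guest usually tells you; emulated hardware appears as an ordinary PCI device while paravirt devices need their own front-end drivers.  A rough sketch (module names vary by hypervisor and kernel version, so treat them as examples only):

  lspci | grep -i ethernet                                         # emulated hardware shows up as a normal PCI NIC
  lsmod | egrep 'xennet|xenblk|xen_netfront|xen_blkfront|virtio'   # paravirt front-end drivers (Xen or KVM/virtio)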

Hardware assisted virtualization

A hard part of virtualization is emulating the CPU.  Some of the instructions are, apparently, complicated and expensive to virtualize and this causes the hypervisor to do lots of work.  Intel created something known as "VT" (aka 'Vanderpool'); AMD have AMD-V ('Pacifica').  This allows the CPU, itself, to virtualize some parts of the instruction set taking the load off the hypervisor, and speeding up virtualization.
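On a Linux host it's easy to check whether the CPU advertises these extensions (bearing in mind that many BIOSes ship with them disabled, so a zero here may just mean "turned off in the BIOS"):

  egrep -c 'vmx|svm' /proc/cpuinfo   # non-zero means the CPU advertises VT-x (vmx) or AMD-V (svm)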

Related to this is the ability to virtualize hardware devices, such as PCIe plugin cards.  Intel call this VT-d (Virtualization Technology for Directed I/O).  With this it's possible for a guest OS to have direct exclusive access to hardware devices (maybe one of the USB controllers, or a disk controller).  This is, really, a layer violation but it can be a performance gain or allow VMs to access hardware that the hypervisor can not emulate.  My machine, apparently, supports VT-d in the BIOS but either the chipset (H55) doesn't support it or something else is wrong; the capability isn't available to the hypervisor.
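The closest thing to a quick check I know of on a Linux host is to look for DMAR/IOMMU messages in the kernel log; the exact wording varies by kernel, so treat it as a rough indicator only:

  dmesg | egrep -i 'dmar|iommu'   # DMAR table / IOMMU lines suggest VT-d is enabled and visible to the kernel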

What are my requirements?

I wanted a virtualization platform that would let me use the computer to host different OS images and run them.  I wanted it to "just work"; even though I'm a geek and enjoy playing with technology this was one area where I wanted to play with the VMs, not with the VM technology.  I didn't want to delay messing around with <x> because I had to fight my VM system of choice in order to deploy new images.

Obviously stability was a requirement; if I had to keep rebooting the host because of the platform then I'd get annoyed.

Got to be able to run Windows.  If I want to test-drive Windows 2008 (well, I might, on a trial license) or test W2K3 as an Active Directory controller or whatever...  Hmm, could this be used to work around the expiration of the Windows trial license?  Whenever the license is due to expire just destroy the VM and rebuild it.  For something like AD have two servers, promote the other to master, bring up a new replica.  Heh.  Not that I'd ever advocate working around license agreements like this!

No artificial limitation on the number of guest OSes; I don't want to be stuck just running 4 VMs.

CLI to manage it would be nice; GUI to manage it almost essential (see ease of use, my primary requirement).

You'll notice that speed wasn't a primary goal; full virtualization or paravirtualization wasn't too important, since these weren't high throughput, high demand systems.  So what I'm testing for isn't performance.  This also isn't a long term test.  I'm primarily focusing on ease of use, ease of setup, ease of administration.  That is, after all, my primary goal.

So what did I test?

My primary server OS of choice is CentOS 5 (currently 5.4).  Partly because I've been using RedHat variants for... 14 years?  I started with RedHat 4.0 (no, not Enterprise Linux 4, just standard "RedHat") around 1996, from an InfoMagic disk.  Previously I'd used home-grown setups, Yggdrasil, SLS, MCC Interim...  Not sure when I first got hold of a Debian disk but by that point it was too late for me to change.  It also helps that my employer's primary Linux OS is RedHat so what I use at home is very close to what I use at work.  RedHat/CentOS comes with two different virtualization technologies (Xen and KVM) so I tested both of them.  I also tested the commercial version of Xen (Citrix XenServer).  No comparison would be complete without VMware ESXi.  I've previously used VMware Server and VirtualBox.  And I daily use User Mode Linux.

So that's the list:
  1. RedHat (ahem, sorry, CentOS!) 5.4 64bit Xen
  2. CentOS 5.4 64bit KVM
  3. Citrix XenServer 5.5
  4. VMware ESXi 4.0
  5. VirtualBox 2.2
  6. VMware Server (version unknown)
  7. User Mode Linux (2.6.20.7 based kernel)
For the CentOS/Citrix/VMware solutions I let my existing machine act as a DHCP server, so the management interface was picked up from a static DHCP config.

CentOS 5.4 64bit Xen

Stick the DVD in the machine, boot off it, install as if you're doing a normal Linux install, and select the "virtualization" group.  Then let it go.  20 minutes later your machine will be ready with a CentOS Dom0 control domain and the Xen 3.1.2 hypervisor.  Simple!  Being Xen, there are plenty of command line tools available for the installation, creation and destruction of guest domains (DomUs).  Some of the jargon is a little quirky (for example "destroy" doesn't mean destroy the VM image, just the running VM instance).  But I didn't want to learn the ins and outs of the command line.  Fortunately CentOS comes with an X based GUI, "virt-manager".  This is pretty minimal, but sufficient to build an OS, manage resources, connect to the console...  basic requirements.
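For reference, the basic DomU lifecycle from the command line maps onto xm subcommands roughly like this ("guest1" is just a made-up domain name):

  xm list                     # running domains: Dom0 plus any DomUs
  xm create /etc/xen/guest1   # start the DomU described by that config file
  xm shutdown guest1          # clean shutdown of the running instance
  xm destroy guest1           # kill the running instance; the disk image is left alone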

First up, install a CentOS guest image.  Simple process... I told it to bridge its ethernet device to my main network (so it looks like it's on the LAN).  Heh, the Dom0 is Linux so I NFS mounted my ISO images to it and now they're available to act as boot CDs/DVDs for the build process.  This was simple; in under 10 minutes I had a fully working CentOS guest image... and it was running paravirtualized as well!  Neat!  It found my DHCP server, got an address, and did everything I expected of a newly built CentOS machine.
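The same sort of build can also be scripted rather than clicked through; virt-install does the job, with something along these lines (the name, image path, size and install URL are all just examples):

  virt-install --paravirt --name centos-guest --ram 512 \
      --file /var/lib/xen/images/centos-guest.img --file-size 8 \
      --network bridge=xenbr0 --vnc \
      --location http://mirror.centos.org/centos/5/os/x86_64/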

This is what I wanted from a virtual server; building a complete server in minutes without getting off my arse.

So, next, Windows.  Follow a similar install path from the virt-manager GUI and... now things didn't quite go according to plan.  Just creating the raw disk image took a long time, then booting for the first time caused a hang.  The guest instance just didn't boot cleanly for installation.  Hmm.  Indeed it looked like the hypervisor started to slowly freeze up.  The DomU became unresponsive.  I couldn't login on the console... needed a physical reboot.  Ugh.  But, OK, after the reboot I started the guest and the install carried on.  The Windows setup process ran, copied data to the hard disk, then rebooted... and here the Xen guest stopped.  It needed to be started again, and the OS install then carried on as normal and completed.  To be fair, this step is documented.  It just makes unattended installs harder.  Not that the standard Windows install is anywhere near unattended anyway (asking questions 5 minutes into the process *sigh*) so it's not a big loss.  But, anyway, I had a Windows instance running.  However I had stability concerns (would I need to reboot every time I created a new Windows instance?).  So I tried again... yup, reboot needed!  Oh dear.  I think this fails one of my primary requirements.  It seems a fully patched CentOS 5 64bit Xen install isn't sufficiently stable using full virtualization.  This could easily be because it's an old, old version of Xen (3.1.2; current version is 3.4.5).

Data is stored in /var/lib, including the disk images.  So either the relevant areas need to be mounted from a separate partition or a big root disk created.  It is possible to store data elsewhere but SELinux may get in the way and need reconfiguring or turning off.  Personally I turn it off.
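For the record, the blunt-instrument version of turning it off:

  setenforce 0     # off for the running system
  # ...and set SELINUX=disabled (or permissive) in /etc/selinux/config to make it survive a reboot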

CentOS 5.4 64bit KVM

Nicely enough, it's possible to add KVM to my previous install.  "yum install kvm" adds a new kernel and three other RPMs.  Reboot into the non-Xen kernel and we have a KVM system.  Apparently this can also be done at initial install time by selecting the right options in the "virtualization" options menu.  So it's an equally simple install compared to the Xen variant.
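After the reboot it's worth a quick sanity check that you really are on the KVM side rather than Xen:

  uname -r            # should no longer be the xen-flavoured kernel
  lsmod | grep kvm    # expect kvm plus kvm_intel (or kvm_amd)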

I'm not sure if this is considered a Type 1 or Type 2 virtualization.  Not that it really matters.  RedHat nicely hides the differences between Xen and KVM behind the same interfaces, so virt-manager is also used to manage KVM instances.  This time I didn't bother with a CentOS guest, I went straight to building Windows.  And this went off without a hitch.  A fully virtualized Windows machine.

Well, I say without a hitch...  there was no option to select a bridge onto my LAN; it could only use a NAT'd network.  Unlike the Xen install (which creates this bridge out of the box) the KVM install doesn't.  The RedHat documentation does explain how to build a bridge but it's not automatic.  The guide also describes how to convert this instance into a paravirtualized one.  Except it's not quite so easy and requires manual hacking of config files.  Yes, it does work.  It's definitely a step up from the Xen instance (it's stable!), but I wasn't feeling the "friendliness".
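For the record, what the RedHat docs walk you through boils down to a pair of ifcfg files plus a network restart; roughly like this (device names and DHCP vs static addressing are examples, adjust to taste):

  # /etc/sysconfig/network-scripts/ifcfg-br0
  DEVICE=br0
  TYPE=Bridge
  BOOTPROTO=dhcp
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  BRIDGE=br0
  ONBOOT=yes

  # then: service network restart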

Citrix XenServer 5.5

My hope, here, was to be able to install this onto the 2nd hard disk so it could co-exist with my physical Windows instance and the CentOS instance I've just installed.  This'd give me a multi-boot test platform so I could switch and compare.  Fortunately it allowed me to do exactly that!  At install time a list of disks is presented, so I selected the 2nd one.  A few minutes later we had a XenServer install.

The boot sequence looks familiar... huh, under the covers it's actually CentOS 5.2 (as reported by "rpm -q centos-release") with Xen 3.3.1.  The console comes up with a text menu; it lets you view various statistics, start/stop VMs, reboot etc.  It doesn't let you create new VM instances.  You can ssh into the dom0 and access the same menu remotely, so you don't need to be at the console.

Out of the box you get a limited time license; to fully activate the service you need a free license from Citrix.  A little annoying.  More annoying is the fact that this license only lasts 1 year, so it has to be renewed.  Let's hope Citrix don't change their policies and stop giving out free licenses, or use this to force people onto the upgrade treadmill.

To create VM instances requires a piece of software that only runs on Windows.  A little annoying, since it just makes https webservices requests.  Well, I guess it should also be possible to do the same via the dom0 command line.  But see "ease of use".  For my purposes a Windows client it is.  It's pretty small and installs quickly.  From here you can license your server (and also see non-subtle adverts for "Xen Essentials", a paid-for enhanced interface) and perform various management functions, including create/delete/start/stop VMs.  You can also attach to the console of the server and each VM.
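For completeness, the dom0 does ship with the xe CLI, so the same operations can be driven from a shell if that's more your thing; a rough sketch (the template and VM names here are just examples):

  xe template-list                                            # see which guest templates exist
  xe vm-install template="CentOS 5.3" new-name-label=test1    # prints the new VM's uuid
  xe vm-start vm=test1
  xe vm-list                                                  # power state, resident host, etc.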

Template support for guest OSes is limited (RedHat and variants - CentOS/Oracle/XenApp, Debian 4/5, SUSE server, Windows variants).  Installation of guests using the ISO library I've got was simple.  My CentOS guest came up properly virtualised.  I could then attach the xs-tools CD image (directly from the console tab, nice) and install the guest tools, which allow the guest to better use the paravirt interfaces and report statistics to the manager.  A Windows install was just as easy; install the OS, attach the CD.  And the tools come with Windows paravirt drivers (at least for XP Pro) which just installed and worked.  I like that.

But that's pretty much where paravirt support ends; I test-installed OpenSolaris 2009.06.  I know this has paravirt support... but under XenServer it runs fully virtualized.  I guess I could delve into the internals and work out the right kernel and stuff but... see ease of use.  Similarly an Ubuntu Server 9.10 install came up fully virtualized.  XenServer really needs to support more OS platforms, both for paravirt and for xs-tools.

Stability appears to be rock solid.

A nice touch in the "guest installation" manual is a description of how to convert a Windows build into a template (including using sysprep) so more instances can be cloned quickly.  Given how slow a standard Windows build is, this is nice information.
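The xe side of that is pleasantly short once you have a sysprep'd VM to work from; roughly (the uuid and names are whatever vm-list reports for your build):

  xe vm-param-set uuid=<vm-uuid> is-a-template=true                  # freeze the sysprep'd build as a template
  xe vm-install template=<template-name> new-name-label=win-test2    # stamp out a fresh instance from it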

It's possible to create virtual networks totally internal to the server (no connection to the outside world) so you can build connections between servers.  I couldn't see an easy way of making the XenServer act as DHCP server for that network so either you'd need to run your own DHCP server or make sure all the machines were statically configured on that LAN.

For the Linux minded, the install takes the whole disk; partition 1 is the dom0 (CentOS 5.2) and takes 4GB; partition 2 is the same size (I'm not sure what that is used for).  The rest of the disk is assigned to a VG.  Guest instances are LVs carved out of the VG.  This is flexible (add a disk, extend the VG, more guest capacity) and simple.  You don't need to know any of this, though, to use XenServer.
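Growing it later is just ordinary LVM from the dom0; roughly like this (the VG name is a placeholder; XenServer calls its volume group something like VG_XenStorage-<uuid>):

  pvcreate /dev/sdd1
  vgextend VG_XenStorage-<uuid> /dev/sdd1
  vgs    # confirm the extra free space is there for new guest LVs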

On the plus side I can also make this work from my primary grub partition with chainloader, so I can have a physical Windows instance for when I actually need one (game playing?) and boot into XenServer for virtualization and my test instances very easily.
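The grub stanza for that is the standard chainloader trick, roughly as below (the disk numbering assumes XenServer went onto the second disk):

  title XenServer 5.5
      rootnoverify (hd1,0)
      chainloader +1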

Ease of install, ease of use, the simple Windows management GUI are all pluses.  Yearly license and limited paravirt/xs-tools support are minuses.

VMware ESXi

After my experiences with XenServer I was hoping for something similar here.  Obviously that didn't happen.  My hardware was considered "consumer grade" and VMware ESXi only supports server grade hardware.  Essentially, check the hardware compatibility list carefully before installing ESX.  Fortunately other people had worked out how to get ESXi to work on similar hardware, and along with some guesses of my own I managed to get it working.  As with XenServer I could select a disk, so disk 3 went to ESXi.  Install, reboot (more hacking needed here) and I had a running ESXi instance.

Not exactly a friendly out-of-box experience, but now I've documented it (and kept a copy of the required files) I could rebuild this pretty easily.  I won't hold it too badly against VMware :-)

ESXi comes with a 90-day trial license.  You can get a free indefinite license by registering at VMware.  Nice.  Except the functionality in the free version is very limited.  Here's a comparison:
Trial license Product Features:
  - Up to 8-way virtual SMP
  - vCenter agent for ESX Server
  - vStorage APIs
  - VMsafe
  - dvFilter
  - VMware HA
  - Hot-Pluggable virtual HW
  - VMotion
  - VMware FT
  - Data Recovery
  - vShield Zones
  - VMware DRS
  - Storage VMotion
  - MPIO / Third-Party Multi-Pathing
  - Distributed Virtual Switch
  - Host profiles

The indefinite license has:
  Product: ESXi 4 Single Server Licensed for 1 physical CPU (1-6 cores per CPU)
  Expires: Never

  Product Features:
  - Up to 256 GB of memory
  - Up to 4-way virtual SMP

Pretty minimal set of features!

One thing VMware ESXi does support is "pass-through PCI".  Except my hardware doesn't seem to support it.

ESXi normally hides the control domain; you can't access it.  Normally you're meant to use their CLI emulation (perl wrappers calling webservices).  However if you switch to console 1 (ALT-F1), type "unsupported" and then enter the root password you created, you get access to the command line.  It's possible, from here, to enable ssh.  Although it's a Linux based OS the software looks more like an embedded Linux system (busybox, ash, dropbear etc).  Not that this has any impact on the usage of VMware.
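From that shell, enabling ssh is (if memory serves) just a matter of un-commenting the ssh line in inetd's config and then restarting inetd:

  vi /etc/inetd.conf   # remove the leading '#' from the ssh line
  # then restart inetd (kill -HUP its pid, found via ps), or just reboot the host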
As with XenServer, proper management of the server requires a Windows client.  In fact XenServer is a very clear copy of VMware, from how the console looks to the layout of the Windows GUI.  Except the VMware version is massively heavier: 330MB for the VMware client vs 15MB for XenServer's.  The VMware GUI looks more "next generation" but this can mean "harder to find stuff".  But, yes, all the functionality you need is there to create and manage VMs.

All in all the experience was very similar to XenServer; not surprising if XenServer was designed to copy VMware's look'n'feel.  I found the VMware version slightly harder to use; some of the options were not in obvious places.  A few times I remember thinking "I know I've seen this option... where was it?!".

Again, virtual networks can be built; again I couldn't see how to make ESXi act as a DHCP server to the network.

Neat idea; effectively there's an app store available from the client where you can download and install prebuilt images into your server.  Want to test Zimbra?  Click-click-deploy... there's a Zimbra instance on your network.  The implementation wasn't the best in the world, but the idea is good.

Windows paravirtualization just worked, though this time I kinda expected it.  But the slowness of the GUI and lack of direct access to the control domain were beginning to annoy me.  I understand VMware has more guest tools availability than XenServer, but I never tested them.

One downside to the Linux guest tools: they don't appear to dynamically handle kernel changes, so you might find yourself running fully virtualised network/disk devices by mistake.

VirtualBox 2.2

I didn't test this recently; this is from earlier usage (Jan-Jul 2009).

This is a type 2 hypervisor.  I loaded it onto an existing CentOS 5 install.  Because it requires kernel modules to work efficiently it needs to be aware of primary OS kernel updates; to handle this it uses DKMS.  It works well and is, essentially, transparent.  You can upgrade your kernel and on reboot the vbox drivers are recompiled to match.
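If the modules ever do get out of step (say, on a box without the DKMS packages installed) the fix is a one-liner:

  /etc/init.d/vboxdrv setup   # rebuild and reload the VirtualBox kernel modules
  dkms status                 # see what DKMS has built for the running kernel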

This comes in two editions: VirtualBox and VirtualBox Open Source Edition (OSE).  Since I wasn't overly concerned about OpenSource purity (I wanted something that worked) I went for the closed source enhanced version.  This adds built-in RDP servers, the ability to pass USB ports from the host to the guest, and the ability to use USB ports over the RDP protocol (with a suitably enhanced rdesktop client, which they provide).  Given that one idea was to replace my sole Windows desktop with a Linux machine while still needing Windows for ActiveSync to my cellphone, this looked like a nice feature...

Since I was running this on a Linux machine, the management GUIs were all X based.  I understand it can also be run on a Windows host with Windows GUIs, but I never tested that.

Client installation was simple enough; Windows installed, ran, worked as expected.  At the time I tested there were no paravirt drivers; today it seems there are some.  The wonders of OpenSource; one person creates them, many projects take advantage. Similarly OpenSolaris loaded and installed just fine.

The main problem I had with VirtualBox was that it felt like a desktop virtualization environment.  By default all the images lived in $HOME/.VirtualBox.  This is great if you have many users who want to manage their own virtual environments; not so great if you want simple automation ('start these VMs at boot time').  I wrote some simple rc scripts to work around this, but the resulting solution felt like a kludge.  It also felt too dependent on keeping the host OS stable; if I ever update from CentOS 5 to CentOS 6 (it'll happen, eventually) or even switch to another Linux distro (maybe not) then I wouldn't feel comfortable that VBox would easily migrate.
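For what it's worth, those rc scripts mostly boiled down to wrapping the headless front-end per VM, something like this (the user and VM names are examples, and the syntax is from the 2.x era, so check it against your version):

  su - vmuser -c 'nohup VBoxHeadless -startvm "centos-test" >/dev/null 2>&1 &'   # start a VM with no GUI
  su - vmuser -c 'VBoxManage controlvm "centos-test" acpipowerbutton'            # polite shutdown on the way down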

VMware Server (version unknown)

Another type 2 hypervisor; I use this (infrequently) at work to load up an older OS base install that I use to compile software; this install is minimally patched so I can be relatively sure that the software I build is compatible with all the deployed instances in the company; there shouldn't be anything with a lower revision than this!

This doesn't use DKMS, so if I change the kernel then I need to re-run a script to ensure the driver interfaces are recompiled and loaded.  It warns you at boot time, but who reads boot messages?  Really, VMware should look into DKMS for their tools (both server and guest).  Otherwise there's little to say about this... "it works".  It's less desktop oriented (to my feeling) than VirtualBox, but pretty much requires root level access to do anything.
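For reference, the script in question is vmware-config.pl, so after a kernel update it's a case of:

  vmware-config.pl   # rebuilds vmmon/vmnet against the new kernel headers; answer the prompts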

As with VirtualBox, there's a similar worry about major OS upgrades.

User Mode Linux (2.6.20.7 based kernel)

This is the odd one of the bunch.  It's not a virtual machine in the above context.  What we have here is a copy of the Linux kernel where the devices are emulated.  The whole Linux instance runs as an application inside the host.  As you can imagine, this is hardly efficient.  But it works.  There's no pretty GUI, there are no automated deployment tools... there's nothing that I said I wanted.  So why am I looking at User Mode Linux (UML)?

6 years ago I got a Virtual Private Server (VPS) at a company called Linode.  At the time they were using UML.  Also at the time I was using an old Pentium Pro 200 as my firewall, running Vserver to provide security-separated instances; I ran one for email (UUCP over SSL), and one for ssh and http.  They were effectively bastion hosts.  The core OS ran PPPoE and so was my internet gateway onto DSL.  Time moved on and the PPPoE gateway got replaced with a Linksys WRT54G, but the PPro still lived on for the bastion hosts.  This was lunacy.  So I sat down and built out my own toolset to build and deploy UML instances.  And, mostly, they work!  I can deploy a new Linux instance in 3 minutes.  Efficiency isn't as bad as I expected either.  I had a P4 3GHz HT; I loaded a Postgres database and wrote code that took 2 hours to run (heavy transformation of tables).  I created an equivalent UML instance on the Q6600 (2.4GHz) and the same code ran in 1.5 hours.  Despite all the software layers behind UML it ran faster than the physical machine.  Maybe the host OS disk caching helped (1GB RAM on the P4 vs 4GB RAM on the Q6600), but the Q6600 was also busy doing other work at the time.

The neat thing about UML is that its footprint is so minimal; you can assign 128MB of RAM and 300MB of disk to an instance and you know that's exactly what is taken up.  This made it perfect for my bastion hosts.
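That assignment really is just the kernel command line; starting an instance looks something like this (the root filesystem image and the preconfigured tap device are just examples):

  ./linux mem=128M ubd0=root_fs.img eth0=tuntap,tap0 con=pty con0=fd:0,fd:1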

Indeed I ported this technology to work and one of my DR instances is actually a UML instance running on my desktop.  Sssh!

I'm a fan of UML, but it's really not a general purpose virtualization solution.  Apart from the fact it's Linux only, you do need to be a heavy geek to use it.

Conclusion

RedHat Xen... too unstable.  I can't use it.  Potentially I could use the latest OpenSource Xen drivers instead of the RedHat provided ones, but then I'd need to make sure the virt-manager stuff all worked properly and I'd be worried about OS updates blowing away customization.  I don't want to use a different Linux distro, because that's yet another variation to support and manage.

RedHat KVM... technologically it may be there.  User friendliness, umm.  Maybe if I sat down and built out a toolkit for myself so I could quickly and easily deploy instances (shouldn't really take more than a weekend of trial'n'error) then this may be doable.

Citrix XenServer...  best of the bunch so far; just wish it supported more platforms for paravirt and tools.  And that one year license....

VMware ESXi 4...  I wanted this to win because it'd give me an idea of how we do stuff at work.  But it was just a headache from start to finish.

VirtualBox... if you want virtualization on a desktop OS then this is a really strong contender.  The USB enhancements make it stand out.  I didn't feel the love as a server based solution, though.

VMware Server.  Strong basic virtualization.  Again, good for your desktop OS, but if you want to use it as a server then you have the additional host OS patching requirements.

User Mode Linux.  Not in the same class, but really good for small footprint minimal overhead low-power installs.

What am I going to use?  I'm not sure!  I'm tending towards Citrix XenServer.  But not decided, yet.

Resources
RedHat Virtualization Guide (Xen, KVM)
Citrix XenServer
VMware ESXi
VirtualBox
VMware Server
User Mode Linux

