From PrgmrWiki

All right. So by now you're some kind of Xen expert, we imagine. [note: Or at least, anyone who hasn't thrown this book out the window in disgust must be extremely good at filling in vague directions.] As such, now we'd like to devote a chapter to the more esoteric bits of working with Xen.

Here are some things that just didn't seem to fit anywhere else -- stuff like the framebuffer, or forwarding PCI devices, or building added functionality into the XenStore.

Some of the stuff in the troubleshooting chapter might also come in handy when working through our examples here -- some bits are a bit more bleeding-edge than the rest of Xen, which is itself some kind of /heavenly sword/, ravening and incarnadine. What we're trying to get at is that some of this might not work straight off.

Compiling Xen

Although we've relied, for the most part, on Xen packages provided by distro maintainers, we feel that it's generally worthwhile to compile Xen from scratch.

The easiest way to compile is to check out the latest source from the Mercurial repository.

[FIXME etc.]
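
For the record, a from-scratch build looks something like this (the repository name here reflects the 3.1 testing branch, current as of this writing; substitute whichever branch you want):

# hg clone http://xenbits.xensource.com/xen-3.1-testing.hg
# cd xen-3.1-testing.hg
# make world
# make install

Then rebuild your initrd and update GRUB as usual.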

Compile-time tuning

But the basic compilation with "make world" is just the beginning. Compilation represents the first opportunity we have to configure Xen, and there's a lot more that we can do with it now that we've had some practice.

Most of the compile-time tuning can be done by twiddling variables in Config.mk, at the top level of the Xen source tree. The file's fairly extensively commented and amenable to editing -- take a look. You'll find a brief section where you can decide which optional Xen bits to build.

We usually turn on all of the optional components except for the VTPM tools (since we don't use the TPM at all), leading to a section like this:

VTPM_TOOLS         ?= n
XENFB_TOOLS        ?= y
PYTHON_TOOLS       ?= y

If you're having trouble (and, trust us, you probably will at some point), it would be a good idea to make a debug build. To do that, set the DEBUG variable at the top of the file:

DEBUG              ?= y

(Don't worry: xend will not run in debug mode unless you specifically instruct it to do so, by setting the XEND_DEBUG variable at runtime.)
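
When you do want the debug output, set the variable in xend's environment when you start the daemon:

# XEND_DEBUG=1 xend start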

After which you can build Xen in the normal fashion.

Note that these optional Xen components have a bunch of undocumented dependencies, some of which aren't checked for by the Makefiles. In particular, the libxenapi bindings require libxml2 and curl (or the -devel versions of these packages, if you're using a Red Hat derivative).

Also, if something doesn't work when building the tools, it would probably be a good idea to avoid running make world again, since that's impressively time-consuming. Most likely, you can get by with just make tools.

Configuring the Xen Linux kernel

alternate kernels (dom0 and domU)

The default Xen makefile will build a single kernel that can be used in both the dom0 and domU. If saving memory is a high priority, you can build a separate kernel for each:

# make KERNELS="linux-2.6-dom0 linux-2.6-domU"

Xen's build system builds each kernel in a separate directory, rather than in the kernel source directory.

The default configurations are stored under buildconfigs/ .

These kernels will each have a reasonable set of configuration options -- minimal for the domU, modular for the dom0.

You can also customize these kernels by editing those configuration files. Take a look.
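
To reconfigure one of these kernels interactively, the build system provides per-kernel config targets; we believe the incantation looks something like this, though the target names vary between Xen versions (check the top-level Makefile for the ones your tree uses):

# make linux-2.6-domU-config CONFIGMODE=menuconfig
# make linux-2.6-domU-build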

The primary reason to do this, of course, is so that you can strip all the non-Xen device drivers out of the domU kernel. This saves memory and -- if you happen to be testing a lot of kernels -- compile time.

The Xen API: The way of the future

The Xen API is an XML-RPC interface to Xen that replaces the old interface used for communication with the hypervisor. It promises to provide a standard, stable interface, so that people can build Xen frontends without worrying about the interface changing out from under them. It also extends the previous Xen command set, so that more of Xen's functionality can be harnessed in a standardized tool.

In current versions of Xen, the API is an optional component, but that shouldn't deter you from using it -- the most recent XenSource product, for example, relies on the API exclusively for communication between the administration frontend and the virtualization host.

The on-the-wire format for the Xen API is based on XML-RPC (Extensible Markup Language Remote Procedure Call). The goal is to provide a stable interface that third-party control software can build on.
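
To make the wire format concrete, here's a sketch that encodes a Xen API login request as XML-RPC using only the Python standard library. The method name session.login_with_password is part of the Xen API; the credentials are, of course, made up, and a real client would POST this document to the port given in the xen-api-server directive:

```python
# Encode a Xen API call the way a client binding would, before
# POSTing it to xend's xen-api-server port.
from xmlrpc.client import dumps

# session.login_with_password is the first call any Xen API client
# makes; the credentials here are placeholders.
request = dumps(("user", "password"),
                methodname="session.login_with_password")
print(request)
```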

Ordinarily, even when developing a Xen client, you won't need to interact with the Xen API at a low level -- bindings exist for most of the popular languages, including C and (of course) Python.

Use of the Xen API is controlled by the (xen-api-server) directive in /etc/xen/xend-config.sxp.

(xen-api-server ((9367 pam  /etc/xen/xen-api-key /etc/xen/xen-api.crt)))

Odd subdirectories under /tools



Moving on from compile-time and installation issues to the serious day-to-day business of running Xen, we encounter the problem of memory. As we've mentioned, most Xen installations are limited in practice by physical memory.

Xen expends a great deal of effort on virtualizing memory -- its approach is one of the defining features of paravirtualization, and it usually "just works," on a level low enough to ignore completely. However, it sometimes can benefit from a bit of attention by the administrator.


The config file auto-generation scheme

You can get a giant (and surprisingly useful) full configuration dump in SXP format using the xm list command with the -l option:

# xm list -l ophelia
[ a lot of information ]

Let's look at this in some detail.

First, it gives us the obvious information -- the domain's name, its [FIXME etc.]

However, it leaves out some information in the config file, particularly the bootloader.
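
If you want to consume such a dump programmatically, SXP is easy to parse. Here's a self-contained sketch (in Python, since that's what the rest of the Xen tools use); the sample dump is abbreviated by hand, and the fields shown are only a small subset of the real output:

```python
# A minimal S-expression reader -- just enough to pull fields out of
# the sort of SXP that xm list -l emits.
def parse_sxp(text):
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def read(pos):
        items = []
        while pos < len(tokens):
            tok = tokens[pos]
            if tok == "(":
                sub, pos = read(pos + 1)
                items.append(sub)
            elif tok == ")":
                return items, pos + 1
            else:
                items.append(tok)
                pos += 1
        return items, pos
    return read(0)[0]

sample = "(domain (name ophelia) (memory 256) (vcpus 1))"
domain = parse_sxp(sample)[0]
fields = dict((entry[0], entry[1]) for entry in domain[1:])
print(fields["name"])    # ophelia
```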

PCI forwarding

You can allow a domU to access arbitrary PCI devices and use them with full privileges.

Of course, there's no such thing as a free lunch; Xen can't miraculously duplicate PCI hardware. For a domU to use a PCI device, it has to be hidden from the dom0. (And not forwarded to any other domUs.)

[diagram: pciback/pcifront]

As the diagram shows, PCI forwarding uses a sort of client-server model, in which the pcifront driver runs in the domU and communicates directly with the pciback driver, which binds to the PCI device and hides it from the dom0.

First, consider the device that you want to forward to the domU. The test machine that I'm sitting in front of appears to have seven (!) USB controllers, so I'll just take a couple of those. Use lspci to determine bus IDs:

# lspci
00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB
UHCI #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB
UHCI #5 (rev 02)
00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2
EHCI #2 (rev 02)

I'll forward 00:1a.1 and 00:1a.7, the second of the USB controllers listed and the USB2 controller.

If pciback is compiled into the kernel, you can boot the dom0 with a pciback.hide option on the kernel command line. For these two devices, the option would look like:

pciback.hide=(00:1a.1)(00:1a.7)

(Since pciback lives in the Linux kernel, this goes on the module line that loads the dom0 kernel, not on the hypervisor's kernel line.)

Now put these devices into the domU config file:

pci = [ '00:1a.1', '00:1a.7' ]

Note that the Xen authors include a warning that hardware devices on platforms without an IOMMU can DMA to arbitrary memory regions. The moral is to treat all domains with access to the PCI bus as privileged. Make sure you can trust them.

Xen, time, and the ztdummy module

One popular application for Xen is the open-source telephony program Asterisk. With an Asterisk VM, you can set up a self-contained virtual appliance devoted to managing phone calls, with the full power of an enterprise PBX.

Asterisk, however, is also an example of a real-time application that can call for some tuning on the Xen side.

Independent wall clock

Some time-sensitive applications may have trouble dealing with the Xen-provided emulated hardware clock.

Another issue with the emulated clock is that it causes trouble with software that expects to set the hardware clock.

You can set a sysctl from within the domU to enable it to sync its own clock, normally using NTP. Note that this setting doesn't require any intervention from the dom0 administrator.

In /etc/sysctl.conf:

xen.independent_wallclock = 1
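
To apply the setting to a running domU without waiting for a reboot, you can poke the same knob directly (assuming your domU kernel exposes it):

# sysctl -w xen.independent_wallclock=1

or

# echo 1 > /proc/sys/xen/independent_wallclock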

GRUB configuration

Of course, we've dealt with GRUB in passing, since it's one of the basic prerequisites for Xen. However, there are a few more aspects of GRUB that are worth mentioning in-depth.

A fair number of Xen's behavior knobs can be tweaked in GRUB at boot time, by adjusting the command-line parameters passed to the hypervisor.

For example, the already-mentioned dom0_mem parameter adjusts the amount of memory that Xen allows the dom0 to see:

kernel /boot/xen.gz dom0_mem=131072

To keep the system from rebooting if you have a kernel panic (which happens . . . more often than we would like, especially when trying to get machines initially set up), add "noreboot" to the kernel line:

kernel /boot/xen.gz dom0_mem=131072 noreboot

We've already discussed the serial console, of course.

This is, of course, in addition to the plethora of options supported by the Linux kernel, which you can then add to vmlinuz's module line as you see fit.
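
Putting the pieces together, a complete GRUB entry might look something like this -- hypervisor options on the kernel line, Linux options on the module line. (The kernel and initrd names here are examples; use whatever your install produced.)

title Xen
       root (hd0,0)
       kernel /boot/xen.gz dom0_mem=131072 noreboot
       module /boot/vmlinuz-2.6-xen ro root=/dev/sda1
       module /boot/initrd-2.6-xen.img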

Xen and LILO

This section only applies for the real dinosaurs out there. But we sympathize.

If you're dead set on using LILO rather than GRUB, you will be pleased to learn that it is possible. Although it's generally thought that LILO's lack of an equivalent to GRUB's "module" directive makes it impossible for it to boot Xen, it's possible to get around that by combining the hypervisor, dom0 kernel, and initrd into one file using mbootpack.

Consider the following entry in grub.conf:

title slack-xen

       root (hd0,0)
       kernel /boot/xen-3.0.gz
       module /boot/vmlinuz-2.6-xen0 ro root=/dev/hda1
       module /boot/initrd.gz

It loads the hypervisor, xen-3.0.gz, as the kernel, then unpacks vmlinuz-2.6-xen0 and initrd.gz into memory. To combine these files, first decompress:

# cd /boot
# gzcat xen-3.0.gz > xen-3.0
# gzcat vmlinuz-2.6-xen0 > vmlinux-2.6-xen0
# gzcat initrd.gz > initrd.img

(Note the change from vmlinuz to vmlinux. It's not important, except insofar as it keeps you from overwriting the kernel at the beginning of the gzcat process.)

Then combine the three files using mbootpack:

# mbootpack -o vmlinux-2.6-xen.mpack -m vmlinux-2.6-xen0 \
  -m initrd.img xen-3.0

The grub.conf entry then becomes a lilo.conf entry:

image = /boot/vmlinux-2.6-xen.mpack
       label = slack-xen
       root = /dev/hda1
       read-only

Finally, run the lilo command.

Virtual framebuffer

Much as purists would like to claim that all administration should be done via serial port, there's something to be said for all this newfangled graphical technology that we've been using for, oh, around the last 25 years. Xen makes a concession to these forward-thinking beliefs by including a facility for a /virtual framebuffer/.

You will need to edit Config.mk to build the VFB:

XENFB_TOOLS        ?= y

At this point you'll also need libvncserver and libsdl-dev. Install them in your chosen way. (Come on, we're almost through the book. At this point you don't need handholding.) We installed CentOS's SDL-devel package and installed libvncserver from source. Then build Xen and install it in the usual way.

To actually use the framebuffer within a domain, you'll need to specify it in the config file. Recent versions of Xen have improved the syntax somewhat. The vfb= option controls all aspects of the virtual framebuffer, just as the vif= and disk= lines control virtual interfaces and virtual block devices. (One supposes that the Xen team wished to demonstrate, with these names, that they were not slaves to consistency.) For example:

vfb = [ 'type=vnc, vncunused=1' ]

Or, if you're feeling /adventurous/, the sdl version:

vfb = [ 'type=sdl' ]
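
The vfb line takes further comma-separated options. For example, to listen on all addresses, pin the display number, and require a password (option names as we recall them from recent 3.x versions -- check your version's documentation if these don't take):

vfb = [ 'type=vnc, vnclisten=0.0.0.0, vncdisplay=1, vncpasswd=sekrit' ]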


Automatically connecting to the VNC console on domain boot

One neat feature of the Xen liveCD is that Xen domains, when started, will automatically pop up a VNC window once they've finished booting. The infrastructure that makes this possible is a script in the domU, a listener in the dom0, and the Xenbus between them.

The script in the domU, vnc-advertiser, fires off from the domU startup scripts, and waits for an Xvnc session to start. Once it finds one, it writes to the Xenstore:

xenstore-write /tool/vncwatch/${domid} ${local_addr}${screen}

In the dom0, a corresponding script watches for writes to the Xenstore. This script is a useful example of general-purpose uses for the Xenstore, so we've copied it wholesale here, with verbose annotations.

#!/usr/bin/env python
# VNC watch utility
# Copyright (C) 2005 XenSource Ltd
# This file is subject to the terms and conditions of the GNU General
# Public License.  See the file "COPYING" in the main directory of
# this archive for more details.
# Watches for VNC appearing in guests and fires up a local VNC
# viewer to that guest.
# Import libraries necessary to interact with the Xenstore.  Xswatch
# watches a Xenstore node and activates a script-defined function
# when the node changes, while xstransact supports standard read and
# write operations.
from xen.xend.xenstore import xswatch
from xen.xend.xenstore.xstransact import xstransact
from os import system
def main():
    # First make the node, and open it up so that any domain can
    # advertise VNC sessions under it:
    xstransact.Mkdir("/tool/vncwatch")
    xstransact.SetPermissions("/tool/vncwatch",
                              { "dom" : 0,
                                "read" : True,
                                "write" : True })
    active_connections = {}
# The watchFired method does the actual work of the script.  When the
# watcher notes changes to the path "/tool/vncwatch/", it calls
# watchFired with the path (and arguments, which are unused in this
# script.)
   def watchFired(path, *args, **nargs):
       if path == "/tool/vncwatch": 
           # not interested:
           return 1
# If we reach this point, something's changed under our path of
# interest.  Let's read the value at the path.
       vncaddr = xstransact.Read(path)
       print vncaddr
# When the vnc-advertiser notices that Xvnc's shut down in the domU,
# it removes the value from the Xenstore.  If that happens, the
# watcher than removes the connection from its internal list (since
# presumably the VNC session no longer exists.)
        if vncaddr == None:
            # server terminated, remove from connection list:
            if path in active_connections:
                del active_connections[path]
        else:
            # server started or changed, find out what happened:
            if (not active_connections.has_key(path)) or \
               active_connections[path] != vncaddr:

# Recall that the vnc-advertiser script writes ${local_addr}${screen}
# to the path /tool/vncwatch/${domid}.  The watcher takes that
# information and uses it to execute the vncviewer command with
# appropriate arguments.

                active_connections[path] = vncaddr
                system("vncviewer -truecolour " + vncaddr + " &")
        return 1

# Associate the watchFired event with a watcher on the path
# "tool/vncwatch"
   mywatch = xswatch.xswatch("/tool/vncwatch", watchFired)

if __name__ == "__main__":
    main()

Use of the Xenstore for fun and profit

The Xenstore is the configuration database in which Xen stores information on the running domUs.

Although Xen uses the Xenstore internally for vital matters like setting up virtual devices, you can also write arbitrary data to it, from domUs as well as from dom0. Think of it as some sort of inter-domain socket.

This opens up all sorts of possibilities -- for example, domains could in theory negotiate among themselves for access to shared resources. Or you could have something like the "talk" system on the shared UNIX machines of yore -- multi-user chat between people running on the same host. You could use it to propagate host-specific messages, for example, warning people of impending backups or migration.
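
For example, here's what a rudimentary message-of-the-day might look like (the key name /tool/motd is our own invention). One domain writes:

# xenstore-write /tool/motd "backups start at 02:00"

and the others read:

# xenstore-read /tool/motd
backups start at 02:00

(For unprivileged domUs to read or write outside their own subtrees, you'll need to open up permissions on the node with xenstore-chmod first.)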

To look at the XenStore, you can use the xenstore-ls command, included with recent versions of Xen. Here's a shell script that does the same thing using the xenstore-list command (taken from the XenSource wiki):


function dumpkey() {
   local param=${1}
   local key
   local result
   result=$(xenstore-list ${param})
   if [ "${result}" != "" ] ; then
     for key in ${result} ; do dumpkey ${param}/${key} ; done
   else
     echo -n ${param}'='
     xenstore-read ${param}
   fi
}

for key in /vm /local/domain /tool ; do dumpkey ${key} ; done

3D in domUs

[FIXME make this work or take it out]

Although we've mentioned, in the Windows chapter, that it's not possible to use hardware 3d capabilities in a Windows domU, it is possible to forward a graphics card to a domU and use its hardware capabilities. It's just not possible under /Windows/ -- under Linux it's reasonably easy.

All that you need is a recent version of Xen, a working DRI environment, and a passthrough 3d driver.

Xen Hypervisor Console

As mentioned, we consider the serial console the "gold standard" for console access to any sort of server. It's much simpler than any sort of graphical interface, easy to access with a variety of devices, and is the output most likely to provide useful information when the machine is crashing. Furthermore, because of the "client-server" architecture inherent in the system, anything that a crashing machine manages to print goes to another, physically separate machine, where it can be analyzed at leisure.

Xen adds another layer to the serial console by using it to access extra hypervisor features: press CTRL-A three times on the serial console to switch between the dom0 console and the hypervisor console.

DomU and serial ports

It may be inferred that we have a bit of a 'thing' for serial ports. They're just so useful!

The most common source of pitfalls regarding the serial port -- that we've seen, anyway -- is that each serial port can only be grabbed by one driver. If dom0 is using the serial port, then domUs won't be able to, for example. If the Xen console is on ttyS0, you won't be able to use it to administer other machines.

[FIXME more]

Xen comes with a minimal serial client for this sort of thing, in case you don't have access to a serial client. (This is unlikely, but the client is tiny.)

Miniterm is in the tools/misc/miniterm subdirectory of the Xen source tree. If you've built all the tools with Xen, it'll already be built and possibly even installed; if not, you can simply type "make" in that directory and run the resulting executable.

Telling domains to start automatically on boot

[FIXME isn't this kind of basic?]

By default, Xen ships with a script installed as /etc/init.d/xendomains , and creates symlinks so that it starts in runlevels 3, 4, and 5.

The xendomains script will iterate through the /etc/xen/auto directory, and start each domain with a config file in that directory. When the machine shuts down, it calls xendomains with a stop argument, which shuts down all running domains. (Not just the ones in /etc/xen/auto.)

We usually use symlinks -- simply link each domain that you want started at boot into that directory.
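
For example, to make the domain ophelia start at boot (assuming its config file lives at /etc/xen/ophelia):

# ln -s /etc/xen/ophelia /etc/xen/auto/ophelia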


The Trusted Platform Module has been a subject of heavy development. It's got some interesting implications for signed code, and for the looming spectre of DRM.

Shoot yourself in the foot

[FIXME put in storage]

Xen will try to protect you, but, like most good unix tools, it'll let you do terrible and dangerous things if you really want to.

If you're *certain* that you want to mount a block device read/write in multiple domains, you can add an exclamation point to its disk specifier -- that is, specify the mode as w! rather than w. The device and volume names here are just examples:

disk = [ 'phy:/dev/volumes/shared,sdb1,w!' ]

And Xen will follow instructions. Don't do this. It is a Bad Idea.

Security Policies

Xen supports a robust security architecture.

However, only a couple of sample policies exist, and the entire thing seems to us to be a bit over-engineered.

But then, we are lazy. If the tradeoff's between security and convenience, we'll tend to pick convenience.

Vmcasting and the future

Remember, Xen's power is in its creative use of pre-existing tools. Present implementations barely scratch the surface. Looking to the future, we've got the intersection between Xen and the Web 2.0 people, with services like Amazon's EC2. There's the potential to make Xen (or some other virtualization software) ubiquitous.
