New Portishead video.

Mar 19, 2008

New Portishead video: Machine Gun. Sounds like a ghost DJing dubstep.

Inbox

Mar 19, 2008

My musical inbox is getting a little out of hand:

The pile of CDs in the background is especially daunting.

How To Create A Running Gag

Mar 18, 2008

How To Create A Running Gag, courtesy of one of the few webcomics I like,
Basic Instructions. I feel like I know the characters in it personally.

When Things Explode

Mar 18, 2008

When Things Explode is a new site about Manchester music, although it’s currently only a blog.
Looks like it’ll be good.

And I'm lost in confusion...

Mar 17, 2008

I’ve moved my blog to blogspot.com, so I can add pictures and the like without having to bugger about too much as I did on Advogato.

Just back from the US. Somehow I spent 3 weeks there last time without noticing that my hotel was practically next to Rasputin Music. Anyway, I finally noticed this time, on Saturday morning, and thankfully had time to do some shopping. At the exchange rate, the 2-disc 12” of Underworld’s Jumbo was a particularly good buy, though I was most excited over this:

That’s the 12” of Trash. Yes, I went all the way to the US to get a record by a local Manchester band. I missed it first time around, and copies are all bought up over here, which is why I didn’t own it until now. But it’s a great tune.

DTrace on xenstored

Feb 1, 2008
DTrace support for xenstored has just been merged into the upstream community version of Xen. Why is it useful?

The daemon xenstored runs in dom0 userspace and implements a simple 'store' of configuration information, holding the parameters used by running guest domains. It interacts with dom0, guest domains, qemu, xend, and others, so these interactions can easily get pretty complicated, and it's often far from obvious how requests and responses are connected.
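To get a feel for what lives in the store, you can poke at it with the standard xenstore client tools (assuming they're on your PATH; the path below mirrors the ones that show up in the trace output later on):

# Read a single value for domain 6, e.g. the state of a virtual block device:
xenstore-read /local/domain/6/device/vbd/0/state

# Or dump that domain's whole subtree:
xenstore-ls /local/domain/6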

The existing community solution was a 'trace' option to xenstored: you could restart the daemon and it would record every operation performed. This worked reasonably well, but was very awkward: restarting xenstored currently means rebooting dom0, which is extremely inconvenient, and by the time you've set up tracing, you may well not be able to reproduce whatever you were looking at any more.

It was obvious that we needed to make this dynamic, and DTrace USDT (Userspace Statically Defined Tracing) was the obvious choice. The patch adds a couple of simple probes for tracking requests and responses; as usual, they're activated dynamically, so they have (next to) zero impact when not in use. On top of these probes I wrote a simple script called xenstore-snoop. Here are a couple of extracts of the output I get when I start a guest domain:

# /usr/lib/xen/bin/xenstore-snoop 
DOM  PID      TX     OP
0    100313   0      XS_GET_DOMAIN_PATH: 6 -> /local/domain/6
0    100313   0      XS_TRANSACTION_START:  -> 930
0    100313   930    XS_RM: /local/domain/6 -> OK
0    100313   930    XS_MKDIR: /local/domain/6 -> OK
...
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/state -> 4
6    0        0      XS_READ: device/vbd/0/state -> 3
0    0        -      XS_WATCH_EVENT: /local/domain/6/device/vbd/0/state FFFFFF0177B8F048
6    0        -      XS_WATCH_EVENT: device/vbd/0/state FFFFFF00C8A3A550
6    0        0      XS_WRITE: device/vbd/0/state 4 -> OK
0    0        0      XS_READ: /local/domain/6/device/vbd/0/state -> 4
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/feature-barrier -> 1
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/sectors -> 16777216
6    0        0      XS_READ: /local/domain/0/backend/vbd/6/0/info -> 0
6    0        0      XS_READ: device/vbd/0/device-type -> disk
6    0        0      XS_WATCH: cpu FFFFFFFFFBC2BE80 -> OK
6    0        -      XS_WATCH_EVENT: cpu FFFFFFFFFBC2BE80
6    0        0      XS_READ: device/vif/0/state -> 1
6    0        0      [ERROR] XS_READ: device/vif/0/type -> ENOENT
...

This makes the interactions immediately obvious. We can observe the Xen domain making the request, the PID of the process (this only applies to dom0 control tools), the transaction ID, and the actual operations performed. This has already proven useful in several investigations.
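If you'd rather use the probes directly, a throwaway one-liner is enough. Note that the probe name below is a placeholder of mine, not necessarily what the patch defines - check the patch for the real provider and probe names:

# Rough sketch: print a per-second count of store requests.
# 'xenstore*:::request' is a hypothetical probe name.
dtrace -n 'xenstore*:::request { @reqs = count(); }
    tick-1sec { printa(@reqs); clear(@reqs); }'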

Of course, this being DTrace, this is only part of the story. We can use these probes to correlate system behaviour: for example, xenstored transactions are currently rather heavyweight, as they involve copying a large file; these probes can help demonstrate this. Using Python's DTrace support, we can look at which stack traces in xend correspond to which requests to the store; and so on.
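As a sketch of that last idea, here's a one-liner using the function-entry probe from the Python provider (replace <pid> with xend's process ID); hooking its output up to the xenstored probes is left as an exercise:

# Count which Python functions are running in xend; in the python
# provider, arg1 is the function name.
dtrace -n 'python$target:::function-entry { @[copyinstr(arg1)] = count(); }' -p <pid>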

This feature, whilst relatively minor, is part of an ongoing plan to improve the observability and RAS of Xen and the solutions Sun are building on top of it. It's very important to us to bring Solaris's excellent observability features to the virtualization space: you've seen the work with zones in this area, and you can expect a lot more improvements for the Xen case too.

IRC

I meant to say: after my previous post, I resurrected #opensolaris-dev: if you'd like to talk about OpenSolaris development in a non-hostile environment, please join!

#opensolaris

Dec 18, 2007
When OpenSolaris got started, #solaris was a channel filled with pointless rants about GNU-this and Linux-that. Besides the complete wrong-headedness, it was a total waste of time and extremely hostile to new people. #opensolaris, in contrast, was actually pretty nice (for IRC!) - sure, it had the usual pointless discussions, but it certainly wasn't hateful.

Recently, I'm sad to say, #opensolaris has become a really hostile, unpleasant place. I've seen new people arrive and be bullied by a small number of poisonous people until they went away (nice own goal, people!). So if anyone's looking for me for xVM stuff or whatever, I'll be in #onnv-scm or #solaris-xen as usual. And if you do drop by, please try to keep a civil tongue in your head - it's not hard.

Xen compatibility with Solaris

Dec 6, 2007
Maintaining the compatibility of hardware virtualization solutions can be tricky. Below I'll talk about two bugs that needed fixes in the Xen hypervisor. Both of them have unfortunate implications for compatibility, but thankfully, the scope was limited.

6616864 amd64 syscall handler needs fixing for xen 3.1.1

Shortly after the release of 3.1.1, we discovered that all 64-bit processes in a Solaris domain would segfault immediately. After much debugging and head-scratching, I eventually found the problem. On AMD64, 64-bit processes trap into the kernel via the syscall instruction. Under Xen, this will obviously trap to the hypervisor. Xen then 'bounces' this back to the relevant OS kernel.

On real hardware, %rcx and %r11 have specific meanings on this path: syscall stashes the return %rip in %rcx and the old %rflags in %r11. Prior to 3.1.1, Xen happened to maintain these values correctly, even though the layout of its stack is very different from real hardware. The 3.1.1 release broke this: as a result, the %rflags of each process was corrupted, and every 64-bit process segfaulted almost immediately. We fixed the bug in Solaris, so we would still work with 3.1.1. This was also fixed (restoring the original semantics) in Xen itself in time for the 3.1.2 release. So there's a small window (early Solaris xVM releases and community versions of Xen 3.1.1) where we're broken, but thankfully we caught this pretty early. The lesson to be drawn? Clear documentation of the hypervisor ABI would have helped, I think.

6618391 64-bit xVM lets processes fiddle with kernelspace, but Xen bug saves us

Around the same time, I noticed during code inspection that we were still setting PT_USER in PTE entries on 64-bit. This had some nasty implications, but first, some background.

On 32-bit x86, Xen protects itself via segmentation: it carves out the top 64MB of the address space, and refuses to let any of the domains load a segment selector that allows read or write access to that region. Each domain kernel runs in ring 1, so it can't get around this. On 64-bit, this hack doesn't work, as AMD64 does not provide full support for segmentation (given what a legacy technique it is). Instead, and somewhat unfortunately, we have to use page-based permissions via the VM system. Since page table entries have only a single bit ("user/supervisor") instead of being able to say "ring 1 can read, but ring 3 cannot", the OS kernel is forced into ring 3, the ring normally used for userspace code. So every time we switch between the OS kernel and userspace, we have to switch page tables entirely - otherwise, a process could use the kernel page tables to write to kernel address space.

Unfortunately, this means that we have to flush the TLB every time, which has a nasty performance cost. To help mitigate this problem, an incompatible change was made in Xen 3.0.3. Previously, so that the kernel (running in ring 3, remember) could access its address space, it had to set PT_USER in its kernel page table entries (PTEs). With 3.0.3, this changed: now, the hypervisor would do that automatically. Furthermore, if Xen did see a PTE with PT_USER set, it assumed this was a userspace mapping and also set PT_GLOBAL, a hardware feature - if that bit is set, the corresponding TLB entry is not flushed when the page tables are switched. This meant that switching between userspace and the OS kernel was much faster, as the TLB entries for userspace were no longer flushed.

Unfortunately, in our kernel, we missed this change in some crucial places, and until we fixed the bug above, we were setting PT_USER even on kernel mappings. This was fairly obviously A Bad Thing: if you caught things just right, a kernel mapping would still be present in the TLB when a user-space program was running, allowing userspace to read from the kernel! And indeed, some simple testing showed this:

dtrace -qn 'fbt:genunix::entry /arg0 > `kernelbase/ { printf("%p ", arg0); }' | \
    xargs -n 1 ~johnlev/bin/i386/readkern | while read ln; do echo $ln::whatis | mdb -k ; done

With the above use of DTrace, MDB, and a little program that attempts to read addresses, we can see output such as:

ffffff01d6f09c00 is ffffff01d6f09c00+0, allocated as a thread structure
ffffff01c8c98438 is ffffff01c8c983e8+50, bufctl ffffff01c8ebf8d0 allocated from as_cache
ffffff01d6f09c00 is ffffff01d6f09c00+0, allocated as a thread structure
ffffff01d44d7e80 is ffffff01d44d7e80+0, bufctl ffffff01d3a2b388 allocated from kmem_alloc_40
ffffff01d44d7e80 is ffffff01d44d7e80+0, bufctl ffffff01d3a2b388 allocated from kmem_alloc_40

Thankfully, the fix was simple: just stop adding PT_USER to our kernel PTE entries. Or so I thought. When I did that, I noticed during testing that the userspace mappings weren't getting PT_GLOBAL set after all (big thanks to MDB's ::vatopfn, which made this easy to see).
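(If you want to inspect a mapping yourself, it's a one-liner from the shell - the address here is purely illustrative:)

# Translate a virtual address to its page mapping on a live kernel:
echo 'fffffffffb840000::vatopfn' | mdb -k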

Yet more investigation revealed the problem to be in the hypervisor. Unlike certain other popular OSes used with Xen, we set PTE entries in page tables using atomic compare and swap operations. Remember that under Xen, page tables are read-only to ensure safety. When an OS kernel tries to write a PTE, a page fault happens in Xen. Xen recognises the write as an attempt to update a PTE and emulates it. However, since it hadn't been tested, this emulation path was broken: it wasn't doing the correct mangling of the PTE entry to set PT_GLOBAL. Once again, the actual fix was simple.

By the way, that same putback also had the implementation of:

6612324 ::threadlist could identify taskq threads

I'd been doing an awful lot of paging through ::threadlist output recently, and always having to skip over all the (usually irrelevant) taskq threads was driving me insane. So now you can just specify ::threadlist -t and get a much, much shorter list.
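That is, from the shell:

# List threads, minus the taskq noise, on a live kernel:
echo '::threadlist -t' | mdb -k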

OpenSolaris xVM now available in SX:CE

Oct 23, 2007
Build 75 of Solaris Express Community Edition is now out, and it includes our bits. So go ahead, install build 75, select the xVM entry in grub and play around! We're still working on updating the documentation on our community page; in the meantime, you have manpages - start at xVM(5) (and note that the forthcoming build 76 has much improved versions of those docs).

You might be wondering if your machine is capable of running Windows or other operating systems under HVM. Joe Bonasera has a simple program you can run that will tell you. Alternatively, if you're already running with our bits, running 'virt-install' will tell you - if it asks you about creating a fully-virtualized domain, then it should work, and you can end up with a desktop like Russell Blaine's.

Nils, meanwhile, describes how we've improved the RAS of the hypervisor by integrating it with Solaris crash dumps here. This feature has saved our lives numerous times during development, as those of us who've done the "hex dump" debugging thing know very well.

Of course, we're not done yet - we have bugs to fix and rough edges to smooth out, and we have significant features to implement. One of the major items we're working on in the near future is the upgrade to Xen 3.1.1 (or possibly 3.1.2, depending on timelines!). This will give us the ability to do live migration of HVM domains, along with a host of other features and improvements.

Automatic start/stop of Xen domains

Aug 1, 2007
After answering a query, I said I'd write a blog entry describing what changes we've made to support clean shutdown and start of Xen domains.

Bernd refers to an older method of auto-starting Xen domains used on Linux. In fact, this method has been replaced with the configuration parameters on_xend_start and on_xend_stop. Setting these can ensure that a Xen domain is cleanly shut down when the host (dom0) is shut down, and started automatically as needed. For somewhat obvious reasons, we'd like to have the same semantics as used with zones, if not quite the same implementation (yet, at least).

When I started looking at this, I realised that the community solution had some problems:

Clean shutdown wasn't the default

It seems obvious that by default I'd like my operating systems to shut down cleanly. Only in unusual circumstances would I be happy with an OS being unceremoniously destroyed. We modified our Xen gate to default to on_xend_stop=shutdown.
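In a domain's configuration file, that looks something like this (on_xend_stop=shutdown is now our default, so you'd only write it to be explicit; the "start" value for on_xend_start is my recollection of the spelling, so double-check it):

# Fragment of a domain config file:
name = "mydomain"
on_xend_start = "start"      # start this domain when the host comes up
on_xend_stop = "shutdown"    # cleanly shut it down when the host goes down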

Suspend on shutdown was dangerous

It is possible to specify on_xend_stop=suspend; this will save the running state to an image file and then destroy the domain (like xm save). However, there is no corresponding on_xend_start setting, nor any logic to ensure that the values match. This seems useless at best and dangerous at worst, since starting a fresh domain on file-system state left behind by a suspended domain could be problematic. We've disabled this functionality.

Actions are tied into xend

This was the biggest problem for us: as modelled, if somebody stops xend, then all the domains would be shut down. Similarly, if xend restarts for whatever reason (say, a hardware error), it would start domains again. We've modified this on Solaris. Instead of xend operating on these values, we introduce a new SMF service, system/xctl/domains, that auto-starts/stops domains as necessary. This service is pretty similar to system/zones. We've set up the dependencies such that a restart of the Xen daemons won't cause any running domains to be restarted. For this to work properly within the SMF framework, we also had to modify xend to wait for all domains to finish their state transitions.
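Since it's just SMF, the usual commands apply (using the service name above):

# Check the service and what it depends on:
svcs system/xctl/domains
svcs -d system/xctl/domains

# Temporarily disable it (-t doesn't persist across reboot); by analogy
# with system/zones, this should cleanly shut down the auto-started domains:
svcadm disable -t system/xctl/domains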

You can find our changes here. And yes, we still need to take system/xctl/domains to PSARC.

Clean shutdown implementation

You might be wondering how dom0 even asks the guest domains to shut down cleanly. This is done via a xenstore entry, control/shutdown. The control tools write a string into this entry, which the domain is "watching". The kernel reads the value and responds appropriately (xen_shutdown()), triggering a user-space script via the sysevent framework. If nothing happens for a while - perhaps the script couldn't run for some reason - we time out and force a "dirty" shutdown from within the kernel.
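You can trigger this path by hand with the xenstore client tools. Domain ID 6 is just an example, and I'm using "poweroff", which is one of the usual Xen control strings:

# Ask domain 6 to shut down cleanly; the guest clears the entry
# once it has acted on it:
xenstore-write /local/domain/6/control/shutdown poweroff
xenstore-read /local/domain/6/control/shutdown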
