Old web content

Mar 15, 2011
I think it's important that everyone endeavour to maintain existing web content, even if it's not currently relevant.

Enabling xVM on OpenSolaris

Oct 29, 2009
Another significant usability improvement that landed in build 126 is Gary and Bill's work on enabling Xen. Now, running xVM should be as simple as:

# pkg install xvm-gui
# echo 'set zfs:zfs_arc_max = 0x10000000' >>/etc/system # yes, you still need this, sadly
# svcadm enable -r milestone/xvm
# reboot
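
Once the machine is back up, a couple of quick sanity checks (these are just illustrative; virsh list should show Domain-0 if the hypervisor actually booted):

# svcs -a | grep xvm
# virsh list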

There's also a new Visual Panel for doing this if you prefer a graphical method. More in the flag day message.

Dry-run migration

Oct 29, 2009
As part of our ongoing work on improving the ease of use of xVM, the newly available build 126 of OpenSolaris has my putback for:

6878952 Would like dry-run migration

This feature is useful for doing a simple check as to whether a guest can successfully migrate to another dom0 host. For example, domu-221 here is using a disk path that doesn't exist on the remote host hiss:

# virsh migrate --dryrun domu-221 xen:/// hiss    
error: POST operation failed: xend_post: error from xen daemon:
(xend.err 'Remote server error: Access to vbd:768 failed: error: "/iscsi/nevada-hvm" is not a valid block device.')

This works with both running and shut-down guests. Currently, the checks are fairly limited: are disks with the same path available on the remote host (note there is no checking of GUIDs or whatever to verify they really are the same piece of shared storage); is there enough memory on the remote host; and does the remote host have the same CPU vendor. We expect these checks to improve both in scope and in reliability in the future.

xVM and COMSTAR iSCSI

Oct 15, 2009
I recently had cause to try out COMSTAR for the first time, and I thought I'd write up the steps needed. Unfortunately, it's considerably more complex than the fall-over-easy shareiscsi=on ZFS feature.

Configuring the COMSTAR server

First install the storage-server packages and enable the services:

# svcadm enable -r stmf
# svcadm enable -r iscsi/target
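
If the storage packages aren't already on the system, they'll need installing first. A minimal sketch, assuming the storage-server group package name (it may differ between builds):

# pkg install storage-server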

We want to create a target group for each of our xVM guests, each of which will have one LUN in it. After creating the LUN, we define a "view" that allows that LUN to be visible to that target group:

# stmfadm create-tg domu-226
# zfs create -V 15G export/domu-226
# stmfadm create-lu /dev/zvol/rdsk/export/domu-226
Logical unit created: 600144F0C73ABF0F00004AD75DF2001A
# stmfadm add-view -t domu-226 600144F0C73ABF0F00004AD75DF2001A

Now we need to create the iSCSI target for this target group, that has our single LUN in it.

# itadm create-target -l domu-226
Target iqn.1986-03.com.sun:02:b8596bb9-9bb9-40e9-8cda-add6073ece46 successfully created

Here (finally) is the iSCSI alias we can use on the clients. But we're not done yet. By default, this target will be able to see all LUNs not in a target group. So we need to make it a member of our domu-226 target group:

# stmfadm add-tg-member -g domu-226 iqn.1986-03.com.sun:02:b8596bb9-9bb9-40e9-8cda-add6073ece46
# stmfadm list-tg -v
Target Group: domu-226
        Member: iqn.1986-03.com.sun:02:b8596bb9-9bb9-40e9-8cda-add6073ece46

Configuring the iSCSI initiator (client)

We do this in the usual manner:

# svcadm enable -r svc:/network/iscsi/initiator:default
# iscsiadm add discovery-address 10.6.70.43:3260
# iscsiadm modify discovery --sendtargets enable
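
At this point the target should be visible from the initiator; a quick sanity check (the alias set above should appear in the listing):

# iscsiadm list target -v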

Installing a guest onto the LUN

We went through the above gymnastics so we can have a human-readable Alias for each of the domu's root LUNs. So now we can do:

# virt-install --paravirt --name domu-226 --ram 1024 --os-type solaris --os-variant opensolaris \
  --location nfs:10.5.235.28:/export/nv/x/latest --network bridge,mac=00:14:4f:0f:b5:3e \
  --disk path=/alias/domu-226,driver=phy,subdriver=iscsi \
  --nographics

OpenSolaris 2009.06 guest domain on a Linux dom0

Jun 2, 2009
Just a quick note: you can follow the instructions I provided for the 2008.11 release, with one change. On a 64-bit machine, replace any instances of /boot/x86.microroot with /boot/amd64/x86.microroot. As of 2009.06, the boot archive is split into 32-bit and 64-bit variants. If you get a message like this:

krtld: failed to open '/platform/i86xpv/kernel/amd64/unix'

Then you've probably given the wrong combination of unix and microroot.
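
For example, on a 64-bit machine the guest configuration from the 2008.11 Linux dom0 entry would use the matching 64-bit pair, roughly:

<bootloader_args>--kernel=/platform/i86xpv/kernel/amd64/unix --ramdisk=/boot/amd64/x86.microroot</bootloader_args>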

By the way, in my previous entry, I mentioned we were working on upstreaming our virt-install changes. During the Xen 3.3 work (more on which soon), I updated to the latest versions and got the needed parts into the upstream version. We've still some ZFS changes to push, but if you're running a recent enough version of Xen on Linux, you may well be able to use virt-install and skip all this horrible hacking!

Begone, trailing spaces!

Feb 3, 2009
I read my work email with mutt on a Solaris 9 box. For a while it's been irritating me that when you attempt to cut and paste, it will include trailing spaces on each line instead of stopping at the last "real" character. Some Googling suggested this was because of the lack of the BCE attribute in my xterm-color terminfo definition. Rather than learn how to compile terminfo entries (I've done it before, but I don't want to learn again!), I took the lazier approach: copy /usr/share/terminfo/s/screen-256color-bce from a Fedora 8 box into /home/johnlev/.terminfo/s/, and start mutt with TERM and TERMINFO set appropriately. Now I can cut and paste sanely again.
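
In concrete terms, the lazy route boils down to something like this (the Fedora hostname is just a placeholder):

$ mkdir -p ~/.terminfo/s
$ scp fedora8:/usr/share/terminfo/s/screen-256color-bce ~/.terminfo/s/
$ env TERM=screen-256color-bce TERMINFO=$HOME/.terminfo mutt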

OpenSolaris 2008.11 as a dom0

Jan 26, 2009
UPDATE: the canonical location for this information is now here - please check there, as it will be updated as necessary, unlike this blog entry.

As a final part to my entries on OpenSolaris and Xen, let's go through the steps needed to turn OpenSolaris into a dom0. Thanks to Trevor O for documenting this for 2008.05. And as before, expect this process to get much, much, easier soon!

I'm going to do the work in a separate BE, so if we mess up, we shouldn't have broken anything. So, first we create our BE:

$ pfexec beadm create -a -d xvm xvm

Next, let's install the packages. If you've updated to the development version, a simple pkg install xvm-gui will work, but let's assume you haven't:

$ pfexec beadm mount xvm /tmp/xvm-be
$ pfexec pkg -R /tmp/xvm-be install SUNWvirt-manager SUNWxvm SUNWvdisk SUNWvncviewer
$ pfexec beadm umount xvm

Now we need to actually reboot into Xen. Unfortunately beadm is not yet aware of how to do this, so we'll have to hack it up. We're going to run some awk over the menu.lst file which controls grub:

$ awk '
# Track whether we are inside the grub entry for the "xvm" BE.
/^title/ { xvm=0; }
/^title.xvm$/ { xvm=1; }
# Drop the graphical splash lines from the xvm entry.
/^(splashimage|foreground|background)/ {
    if (xvm == 1) next
}
# Boot xen.gz as the kernel, and demote the Solaris kernel to a module$
# line pointing at the i86xpv (paravirtualized) kernel with a text console.
/^kernel\$/ {
    if (xvm == 1) {
       print("kernel\$ /boot/\$ISADIR/xen.gz")
       sub("^kernel\\$", "module$")
       gsub("console=graphics", "console=text")
       gsub("i86pc", "i86xpv")
       $2=$2 " " $2
    }
}
{ print }' /rpool/boot/grub/menu.lst >/var/tmp/menu.lst.xvm

Let's check that the awk script (my apologies) worked properly:

$ tail /var/tmp/menu.lst.xvm 
...
#============ End of LIBBE entry =============
title xvm
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/xvm
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=text
module$ /platform/i86pc/$ISADIR/boot_archive
#============ End of LIBBE entry =============

Looks good. We'll move it into place, and reboot:

$ pfexec cp /rpool/boot/grub/menu.lst /rpool/boot/grub/menu.lst.saved
$ pfexec mv /var/tmp/menu.lst.xvm /rpool/boot/grub/menu.lst
$ pfexec reboot

This should boot you into xVM. If everything worked OK, let's enable the services:

$ pfexec svcadm enable -r xvm/virtd ; pfexec svcadm enable -r xvm/domains

At this point, you should be able to merrily go ahead and install domains!
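
For instance, a first paravirtual guest install might look much like the virt-install example from the 2008.11 guest entry below; the names and paths here are only placeholders:

$ pfexec zfs create rpool/zvol
$ pfexec zfs create -V 10G rpool/zvol/domu-test-root
$ pfexec virt-install --nographics --paravirt --ram 1024 --name domu-test \
  -f /dev/zvol/dsk/rpool/zvol/domu-test-root -l /isos/osol-2008.11.iso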

Update: Todd Clayton pointed out the issue I've filed here: SUNWxvm needs to depend on SUNWvdisk. I've updated the instructions above with the workaround.

Update update: Rich Burridge has fixed it. Nice!

OpenSolaris 2008.11 guest domain on a Linux dom0

Dec 11, 2008
My previous blog post described how to install OpenSolaris 2008.11 on a Solaris dom0 under Xen. This also works with a Linux dom0. However, since upstream is missing some of our dom0 fixes, it's unfortunately more complicated. In particular, we can't use virt-install, as it doesn't know about Solaris ISOs, and later on, we can't use pygrub to boot from ZFS, since it doesn't know how to read such a filesystem. Bear with me, this gets a little awkward.

This example is using a 32-bit Fedora 8 installation. Your mileage is likely to vary if you're using a different version, or another Linux distribution. First, some of the configuration parameters you might want to change:

export name="domu-224"
export iso="/isos/osol-2008.11.iso"
export dompath="/export/guests/2008.11"
export rootdisk="$dompath/root.img"
export unixfile="/platform/i86xpv/kernel/unix"

If you're on 64-bit Linux, set unixfile="/platform/i86xpv/kernel/amd64/unix" instead. We need to create ourselves a 10GB root disk:

mkdir -p $dompath
dd if=/dev/zero count=1 bs=$((1024 * 1024)) seek=10230 of=$rootdisk

Now let's use the configuration we need to install OpenSolaris:

cat >/tmp/domain-$name.xml <<EOF
<domain type='xen'>
 <name>$name</name>
 <bootloader>/usr/bin/pygrub</bootloader>
 <bootloader_args>--kernel=/platform/i86xpv/kernel/unix --ramdisk=/boot/x86.microroot</bootloader_args>
 <memory>1048576</memory>
 <on_reboot>destroy</on_reboot>
 <devices>
  <interface type='bridge'>
   <source bridge='eth0' />
   <!--
       If you have a static DHCP setup, add the domain's MAC address here
       <mac address='00:16:3e:1b:e8:18' />
   -->
  </interface>
  <disk type='file' device='cdrom'>
   <driver name='file' />
   <source file='$iso' />
   <target dev='xvdc:cdrom' />
  </disk>
  <disk type='file' device='disk'>
   <driver name='file' />
   <source file='$rootdisk' />
   <target dev='xvda' />
  </disk>
 </devices>
</domain>
EOF

And start up the domain:

virsh create /tmp/domain-$name.xml
virsh console $name

Now you're dropped into the domain's console, and you can use the VNC trick I described to do the install. Answer the questions, wait for the domain to DHCP, then:

domid=`virsh domid $name`
ip=`/usr/bin/xenstore-read /local/domain/$domid/ipaddr/0`
port=`/usr/bin/xenstore-read /local/domain/$domid/guest/vnc/port`
/usr/bin/xenstore-read /local/domain/$domid/guest/vnc/passwd
vncviewer $ip:$port

At this point, you can proceed with the installation as normal. Before you reboot though, we need to do some tricks, due to the lack of ZFS support mentioned above. Whilst still in the live CD environment, bring up a terminal. We need to copy the new kernel and ramdisk to the Linux dom0. We can automate this via a handy script:

#!/bin/bash

dom0=$1
dompath=$2
unixfile=/platform/i86xpv/kernel/$3/unix

# Find the boot environment that will be active on reboot.
root=`pfexec beadm list -H |  grep ';N*R;' | cut -d \; -f 1`
mkdir /tmp/root
pfexec beadm mount $root /tmp/root 2>/dev/null
mount=`pfexec beadm list -H $root | cut -d \; -f 4`
# Refresh the boot archive, then copy the kernel and ramdisk out to the dom0.
pfexec bootadm update-archive -R $mount
scp $mount/$unixfile root@$dom0:$dompath/kernel.$root
scp $mount/platform/i86pc/$3/boot_archive root@$dom0:$dompath/ramdisk.$root
pfexec beadm umount $root 2>/dev/null
echo "Kernel and ramdisk for $root copied to $dom0:$dompath"
echo "Kernel cmdline should be:"
echo "$unixfile -B zfs-bootfs=rpool/ROOT/$root,bootpath=/xpvd/[email protected]:a"

For example, we might do:

/tmp/update_dom0 linux-dom0 /export/guests/2008.11

or on 64-bit:

/tmp/update_dom0 linux-dom0 /export/guests/2008.11 amd64

Now, you can finish the installation by clicking the reboot button. This will shut down the domain, ready to run. But first we need the configuration file for running the domain:

cat >/$dompath/$name.xml <<EOF
<domain type='xen'>
 <name>$name</name>
 <os>
  <kernel>$dompath/kernel.opensolaris</kernel>
  <initrd>$dompath/ramdisk.opensolaris</initrd>
  <cmdline>$unixfile -B zfs-bootfs=rpool/ROOT/opensolaris,bootpath=/xpvd/xdf@51712:a</cmdline>
 </os>
 <memory>1048576</memory>
 <devices>
  <interface type='bridge'>
   <source bridge='eth0'/>
  </interface>
  <disk type='file' device='disk'>
   <driver name='file' />
   <source file='$rootdisk' />
   <target dev='xvda' />
  </disk>
 </devices>
</domain>
EOF

virsh define $dompath/$name.xml
virsh start $name
virsh console $name

It should be booting, and you're (finally) done!

Updating the guest

Unfortunately we're not quite out of the woods yet. What we have works fine, but if we update the guest via pkg image-update, we'll need to make changes in dom0 to boot the new boot environment. The update_dom0 script above will do a fine job of copying out the new kernel and ramdisk for the BE that's active on reboot, but you also need to edit the config file. For example, if I wanted to boot into the new BE called opensolaris-1, I'd replace these lines:

<kernel>$dompath/kernel.opensolaris</kernel>
<initrd>$dompath/ramdisk.opensolaris</initrd>
<cmdline>$unixfile -B zfs-bootfs=rpool/ROOT/opensolaris,bootpath=/xpvd/xdf@51712:a</cmdline>

with these:

<kernel>$dompath/kernel.opensolaris-1</kernel>
<initrd>$dompath/ramdisk.opensolaris-1</initrd>
<cmdline>$unixfile -B zfs-bootfs=rpool/ROOT/opensolaris-1,bootpath=/xpvd/xdf@51712:a</cmdline>

then re-configure the domain (whilst it's shut down) via virsh undefine $name ; virsh define $dompath/$name.xml.
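
Putting the update procedure together, it looks roughly like this (using the opensolaris-1 example above; names are placeholders):

# in the guest, after pkg image-update has created and activated the new BE:
/tmp/update_dom0 linux-dom0 /export/guests/2008.11

# in dom0, with the guest shut down and $dompath/$name.xml edited as shown:
virsh undefine $name
virsh define $dompath/$name.xml
virsh start $name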

Yes, we're aware this is rather over-complicated. We're trying to find the time to send our changes to virt-install upstream, as well as ZFS support. Eventually this will make it much easier to use a Linux dom0.

OpenSolaris 2008.11 as a para-virtual Xen guest

Dec 10, 2008
UPDATE: the canonical location for this information is now here - please check there, as it will be updated as necessary, unlike this blog entry.

As well as obviously working with VirtualBox, OpenSolaris can also run as a guest domain under Xen. The installation CD ships with the paravirtual extensions, so you can run it as a fully para-virtualized guest. This provides a significant advantage over fully-virtualized guests, or even guests with para-virtual drivers like Solaris 10 Update 6. Of course, if you choose to, you can still run OpenSolaris fully-virtualized (a.k.a. HVM mode), but there's little advantage to doing so.

One slight wrinkle is that Solaris guests don't yet implement the virtual framebuffer that the Xen infrastructure supports. Since OpenSolaris doesn't yet have a text-mode install, this means that to install such a PV guest, we need a way to bring up a graphical console.

With 2008.11, this is considerably easier. Presuming we're running a Solaris dom0 (either Nevada or OpenSolaris, of course), let's start an install of 2008.11:

# zfs create rpool/zvol
# zfs create -V 10G rpool/zvol/domu-220-root
# virt-install --nographics --paravirt --ram 1024 --name domu-220 -f /dev/zvol/dsk/rpool/zvol/domu-220-root -l /isos/osol-2008.11.iso

This will drop you into the console for the guest to ask you the two initial questions. Since they're not really important in this circumstance, you can just choose the defaults. This example presumes that you have a DHCP server set up to give out dynamic addresses. If you only hand out addresses statically based on MAC address, you can also specify the --mac option. As OpenSolaris more-or-less assumes DHCP, it's recommended to set one up.
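
If you do need to pin the MAC address, the invocation might look roughly like this (the address below is just a placeholder, borrowed from elsewhere in these posts):

# virt-install --nographics --paravirt --ram 1024 --name domu-220 --mac 00:14:4f:0f:b5:3e \
  -f /dev/zvol/dsk/rpool/zvol/domu-220-root -l /isos/osol-2008.11.iso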

Now we need a graphical console in order to interact with the OpenSolaris installer. If the guest domain successfully finished booting the live CD, a VNC server should be running. It has recorded the details of this server in XenStore. This is essentially a name/value config database used for communicating between guest domains and the control domain (dom0). We can start a VNC session as follows:

# domid=`virsh domid domu-220`
# ip=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/ipaddr/0`
# port=`/usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/port`
# /usr/lib/xen/bin/xenstore-read /local/domain/$domid/guest/vnc/passwd
DJP9tYDZ
# vncviewer $ip:$port

At the VNC password prompt, enter the given password, and this should bring up a VNC session, and you can merrily install away.

Implementation

The live CD runs a transient SMF service system/xvm/vnc-config. If it finds itself running on a live CD, it will generate a random VNC password, configure application/x11/x11-server to start Xvnc, and write the values above to XenStore. When application/graphical-login/gdm starts, it will read these service properties and start up the VNC server. The service system/xvm/ipagent tracks the IPv4 address given to the first running interface and writes it to XenStore.

By default, the VNC server is configured not to run post-installation due to security concerns. This can be changed though, as follows:

# svccfg -s x11-server
setprop options/xvm_vnc = "true"

Please remember that VNC is not secure. Since you need elevated privileges to read the VNC password from XenStore, that's sufficiently protected, as long as you always run the VNC viewer locally on the dom0, or via SSH tunnelling or some other secure method.
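
For the remote case, an SSH tunnel is one option. A sketch, with $ip and $port as read from XenStore above, and the dom0 hostname and user as placeholders:

$ ssh -L 5900:$ip:$port admin@dom0
$ vncviewer localhost:0    # display :0 is local port 5900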

Note that this works even with a Linux dom0, although you can't yet use virt-install, as the upstream version doesn't yet "know about" OpenSolaris (more on this later).

Building OpenSolaris ISOs

Oct 22, 2008
I've recently been figuring out how to build OpenSolaris ISOs (from SVR4 packages). It's surprisingly easy, but at least the IPS part is not well documented, so I thought I'd write up how I do it.

There are three main things you're most likely to want to do: build IPS itself, populate an IPS repository, and build an install ISO based on that repository. First, you'll want a copy of the IPS gate:

hg clone ssh://anon@hg.opensolaris.org/hg/pkg/gate pkg-gate

For some of my testing, I wanted to test some changed packages. So I mounted a Nevada DVD on /mnt/, then, using mount -F lofs, replaced some of the package directories with ones I'd built previously with my fixes. This effectively gave me a full Nevada DVD with my fixes in, avoiding the horrors of making one. I then cd into pkg-gate and run something like this:

$ cat build-ips
export WS=$1
export REPO=http://localhost:$2
unset http_proxy || true
set -e
echo "START `date`"
cd $WS/src
make install packages
cd $WS/src/util/distro-import
export NONWOS_PKGS="/net/paradise/export/integrate_dock/nv/nv_osol0811/all \
/net/paradise/export/integrate_dock/nv/nv_osol0811/i386"
export WOS_PKGS="/mnt/Solaris_11/Product/"
export PYTHONPATH=$WS/proto/root_i386/usr/lib/python2.4/vendor-packages/
export PATH=$WS/proto/root_i386/usr/bin/:$WS/proto/root_i386/usr/lib:$PATH
nohup pkg.depotd -p $2 -d /var/tmp/$USER/repo &
sleep 5
make -e 99/slim_import
echo "END `date`"
$ ./build-ips `pwd` 10023

In fact, since I was running on an older version of Nevada (build 89, precisely), I had to stop after the make install and change src/pyOpenSSL-0.7/setup.py to pick up OpenSSL from /usr/sfw:

IncludeDirs =  [ '/usr/sfw/include' ]
LibraryDirs =  [ '/usr/sfw/lib' ]

(If /usr/bin/openssl exists, you don't need this.) So, after this step, which builds the IPS tools (and an SVR4 package for them), we move into the "distro-import" directory. This is really a completely different thing from IPS itself, but for convenience it lives in the IPS gate. Its job is to take a set of SVR4 packages (that is, the old Solaris package format) and upload them to a given IPS network repository: in this case, http://localhost:10023.

So, making sure we use the IPS tools we just built, we point a couple of environment variables to the package locations. "WOS" stands for, charmingly, "Wad Of Stuff", and in this context means "packages delivered to Solaris Nevada". There's also some extra packages used for OpenSolaris, listed here as NONWOS_PKGS. I'm not sure where external people can get them from, though.

The core of distro-import is the solaris.py script, which does the job of transliterating from SVR4-speak into pkgsend(1)-speak. As well as a straight translation, though, a small number of customisations to the existing packages are also made to account for OpenSolaris differences. These are done by dropping the original file contents and picking them up from an ad-hoc SUNWfixes SVR4 package built in the same directory.

Of course, each build has its differences, so they're separated out into sub-directories. As you can see above, to run the import, we make a 99/slim_import target. This basically runs solaris.py for every package listed in the file 99/slim_cluster. This list is more or less what makes up the contents of the live CD. Also of interest is the redist_import target, which builds every package available (see http://pkg.opensolaris.org). By the way, watch out for distro-import/README: it's not quite up to date.

Another super useful environment variable is JUST_THESE_PKGS: this will only build and import the packages listed. Very useful if you're tweaking a package and don't want to re-import the whole cluster!
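
For example, reusing the build-ips wrapper from above (the package name is just a placeholder):

$ JUST_THESE_PKGS="SUNWmynewpackage" ./build-ips `pwd` 10023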

At the end of this build, we now have a populated IPS repository living at http://localhost:10023. If we already have an installed OpenSolaris, we could easily use this to install individual new packages, or do an image update (where ipshost is the remote name of your build machine):

# pkg set-authority -P -O http://ipshost:10023 myipsrepo
# pkg install SUNWmynewpackage # or...
# pkg image-update

If we want to test installer or live CD changes, though, we'll need to build an ISO. I did this for the first time today, and it's fall-over easy. First you need an OpenSolaris build machine, and type:

# pkg install SUNWdistro-const

Modify slim_cd.xml to point to your repository, as described here. It's not immediately obvious, but you can specify your URL as http://ipshost:10023 if you're not using the standard port, like me. Then:

# distro_const build ./slim_cd.xml

And that's it: you'll have a fully-working OpenSolaris ISO in /export/dc_output/ (I understand it's a different location after build 99, though). I never knew building an install ISO could be so simple!
