Welcome back!
In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain. We saw how resources are set aside for guest systems and what infrastructure we need for them. With that, we are now ready to create a first, very simple guest domain. We'll keep this first example deliberately simple; later on, we'll take a detailed look at topics like sizing, IO redundancy, other types of IO, and security.
For now, let's start with this very simple guest. It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port. CPU and RAM are easy. The network port we'll create by attaching a virtual network device to the vswitch we created in the primary domain. This is very much like plugging a cable into a computer system on one end and a network switch on the other. For the boot disk, we'll need two things: a physical piece of storage to hold the data - this is called the backend device in LDoms speak - and a mapping between that storage and the guest domain, giving it access to that virtual disk. For this example, we'll use a ZFS volume for the backend. We'll discuss what other options there are for this and how to choose the right one in a later article. Here we go:
root@sun # ldm create mars
root@sun # ldm set-vcpu 8 mars
root@sun # ldm set-mau 1 mars
root@sun # ldm set-memory 8g mars
root@sun # zfs create rpool/guests
root@sun # zfs create -V 32g rpool/guests/mars.bootdisk
root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \
mars.root@primary-vds
root@sun # ldm add-vdisk root mars.root@primary-vds mars
root@sun # ldm add-vnet net0 switch-primary mars
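Before going any further, it can be reassuring to review what we've just configured. A check along these lines (the exact output format varies a little between LDoms versions) lists the constraints of the still-inactive domain:

```shell
# Show the full configuration of mars: VCPU, MAU, memory,
# plus the vdisk and vnet we just attached.
root@sun # ldm list -l mars
```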
That's all - mars is now ready to power on. There are just three commands between us and the ok prompt of mars: we have to "bind" the domain, start it, and connect to its console. Binding is the process where the hypervisor actually puts together all the pieces that we've configured. If we made a mistake, binding is where we'll be told about it (starting in version 2.1, a lot of sanity checking has been moved into the config commands themselves, but binding will catch everything else).
Once bound, we can start (and of course later stop) the domain, which will bring up OBP. By default, the domain will then try to boot right away. If we don't want that, we can set "auto-boot?" to false. Finally, we'll use telnet to connect to the console of our newly created guest. The output of "ldm list" shows us which port has been assigned to mars. By default, the console service only listens on the loopback interface, so using telnet is not a big security concern here.
root@sun # ldm set-variable auto-boot\?=false mars
root@sun # ldm bind mars
root@sun # ldm start mars
root@sun # ldm list
NAME     STATE   FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-cv-  UART   8     7680M   0.5%  1d 4h 30m
mars     active  -t----  5000   8     8G      12%   1s
root@sun # telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connecting to console "mars" in group "mars" ....
Press ~? for control options ..
{0} ok banner
SPARC T3-4, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131.
Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50.
{0} ok
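One small but important detail: leaving the console again. The console is served by vntsd, which intercepts tilde sequences at the start of a line ("~?" lists them all, as the banner above says). To detach from the console without disturbing the running domain, something like this should work (consult the vntsd manpage for the exact sequences on your version):

```shell
# At the start of a line on the guest console, type:
~.          # vntsd sequence to disconnect from the console session
# Alternatively, use the telnet escape: press Ctrl-] and type "quit"
```

Either way, the guest keeps running; only our console session ends.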
We're done - mars is ready for a Solaris installation, preferably using AI (the Automated Installer), of course ;-) But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here:
{0} ok printenv auto-boot?
auto-boot? = false
{0} ok printenv boot-device
boot-device = disk net
{0} ok devalias
root             /virtual-devices@100/channel-devices@200/disk@0
net0             /virtual-devices@100/channel-devices@200/network@0
net              /virtual-devices@100/channel-devices@200/network@0
disk             /virtual-devices@100/channel-devices@200/disk@0
virtual-console  /virtual-devices/console@1
name             aliases
We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked. Normally, of course, we'd leave this set to "true" so that Solaris boots right away once the LDom guest is started. The setting for "boot-device" is the default "disk net", which means OBP will try to boot from the devices behind the aliases "disk" and "net", in that order - which usually means "disk" once Solaris is installed on the disk image.

The actual devices these aliases point to are shown by the command "devalias". Here, we have one line each for "disk" and "net", and the device paths speak for themselves. Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device. These are the very same names we gave these devices in the control domain with "ldm add-vnet" and "ldm add-vdisk". Remember this - it is very useful once you have several dozen disk devices...
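These aliases are also what we'd use at the ok prompt. For example, to boot explicitly from the virtual disk we named "root", or to make that alias the default boot device from the control domain, something along these lines should do (hedging slightly - adapt the alias name to whatever you chose with "ldm add-vdisk"):

```shell
# Boot from the vdisk we called "root", using its OBP alias:
{0} ok boot root

# Or set it persistently from the control domain, so OBP skips
# the "disk net" probing order entirely:
root@sun # ldm set-variable boot-device=root mars
```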
To wrap this up: in this part, we've created a simple guest domain, complete with CPU, memory, a boot disk and network connectivity. This should be enough to get you going. I will cover the more advanced features and a little more of the theoretical background in several follow-up articles. For some background reading, I'd recommend the following links:
LDoms 2.2 Admin Guide: Setting up Guest Domains
Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session.
OpenBoot 4.x command reference - All the things you can do at the ok prompt