
How to Create I/O Domain - LDOM

Raj Kumar

In an earlier post we covered the Control, Service and Guest domains. Now we are going to discuss I/O domain functions and configuration methods. An I/O domain is a domain that has direct access to physical I/O devices. The maximum number of I/O domains we can create depends on the number of PCI buses available on the server. For example, a Sun SPARC Enterprise T5440 server has 4 PCI-E buses, so we can create 4 I/O domains; a Sun SPARC Enterprise T5140 server has 2 PCI-E buses, so we can create 2 I/O domains. Let us start with the high-level plan for creating an I/O domain.
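
For example, the number of PCI buses (root complexes) on a box can be counted from the control domain before planning. A quick sketch using ldm list-io (available in recent Logical Domains Manager releases); the bus names, count and output below are illustrative and vary by platform:

# ldm list-io | grep BUS
pci_0    BUS    pci_0    primary
pci_1    BUS    pci_1    primary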
List of Topics :

Basic of LDOM

Install Prerequisites Packages for LDM.

Install LDM Agents. (LDM download link)

Allocate Resource to Control Domain.

Adding/Deleting Service Domain

Adding/Deleting Logical Domain.

Export ISO Image to Logical Domain

Adding I/O Domain.

Migrate the Logical Domain to another Primary Domain.

LDOM Quick Reference Guide

High Level Plan :

Identify PCI-E buses which are not currently used by the Control Domain.

Remove the identified PCI-E buses from Control Domain.

Save the configuration to the Service Processor (spconfig).

Reboot the Primary (Control) Domain for the change to take effect.

Stop the Logical Domain to which you are going to add the PCI bus.

Add the PCI-E bus to the Logical Domain.

Start the Logical Domain and verify the PCI bus.

Checking which PCI buses are available in the Primary domain

[root@unixrock /]# ldm list-devices primary | grep -i pci
pci@400    pci_0    yes
pci@500    pci_1    yes
[root@unixrock /]#

Identify the PCI buses which are not currently used by the Control Domain. In the example below, we can confirm that "pci@500" is not in use, while "pci@400" is already in use (it hosts the boot disks and the onboard network interfaces).
[root@unixrock /]# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0  solaris
          /pci@400/pci@0/pci@8/scsi@0/sd@0,0
       1. c1t1d0
          /pci@400/pci@0/pci@8/scsi@0/sd@1,0
[root@unixrock /]#

[root@unixrock /]# cat /etc/path_to_inst | grep -i e1000g


"/pci@400/pci@0/pci@9/network@0" 0 "e1000g"
"/pci@400/pci@0/pci@9/network@0,1" 1 "e1000g"
"/pci@400/pci@0/pci@9/network@0,2" 2 "e1000g"
"/pci@400/pci@0/pci@9/network@0,3" 3 "e1000g"
[root@unixrock /]#

Remove the identified PCI bus from the Control Domain.


[root@unixrock /]# ldm remove-io pci@500 primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.
[root@unixrock /]#

Save the configuration to the Service Processor (spconfig) and reboot the Control Domain for the changes to take effect.
[root@unixrock /]# ldm add-spconfig config_pci
[root@unixrock /]#
[root@unixrock /]# shutdown -i6 -g0 -y
[root@unixrock /]#

After the reboot, verify that "pci@500" has been removed from the Control Domain.
[root@unixrock /]# ldm list-devices primary | grep -i pci
pci@400    pci_0    yes
[root@unixrock /]#
[root@unixrock /]# ldm list-spconfig
factory-default
config_initial
config_pci [current]
[root@unixrock /]#

Stop the Logical Domain to which we are going to add the PCI bus.
[root@unixrock /]# ldm list-domain
NAME          STATE    FLAGS    CONS    VCPU    MEMORY    UTIL    UPTIME
primary       active   -n-cv-   SP      8       2G        0.4%    55m
testldom01    active   -t----   5001    8       2G        0.1%    25m
[root@unixrock /]#
[root@unixrock /]# ldm stop-domain testldom01
LDom testldom01 stopped
[root@unixrock /]#
Add the PCI-E bus to the Logical Domain and start.


[root@unixrock /]# ldm add-io pci@500 testldom01
[root@unixrock /]#
[root@unixrock /]# ldm list-devices testldom01 | grep -i pci
pci@500    pci_0    yes
[root@unixrock /]#
[root@unixrock /]#
[root@unixrock /]# ldm start-domain testldom01
LDom testldom01 started
[root@unixrock /]#
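
As a quick cross-check from the control domain, the physical I/O now assigned to the guest can also be listed (a verification sketch, not part of the original output):

[root@unixrock /]# ldm list -o physio testldom01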

Perfect! We have successfully configured our I/O domain. Thanks for reading this post. Please leave your comments or queries and I will respond as soon as possible.

LDOM setup with IO domain step by step


First I will remove the existing configuration on the LDOM so that we get enough resources for the guest LDOMs.

-bash-3.2# ldm start-reconf primary

Initiating a delayed reconfiguration operation on the primary domain.


All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.
-bash-3.2# ldm rm-io pci_1 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Once we have removed the I/O bus from the primary domain, ldm ls-io will no longer show the bus as assigned to it:
bash-3.2# ldm ls-io
NAME             TYPE  BUS    DOMAIN   STATUS
----             ----  ---    ------   ------
pci_0            BUS   pci_0  primary
niu_0            NIU   niu_0  primary
pci_1            BUS   pci_1
niu_1            NIU   niu_1
/SYS/MB/PCIE0    PCIE  pci_0  primary  OCC
/SYS/MB/PCIE2    PCIE  pci_0  primary  OCC
/SYS/MB/PCIE4    PCIE  pci_0  primary  EMP
/SYS/MB/PCIE6    PCIE  pci_0  primary  EMP
/SYS/MB/PCIE8    PCIE  pci_0  primary  OCC
/SYS/MB/SASHBA   PCIE  pci_0  primary  OCC
/SYS/MB/NET0     PCIE  pci_0  primary  OCC
/SYS/MB/PCIE1    PCIE  pci_1           UNK
/SYS/MB/PCIE3    PCIE  pci_1           UNK
/SYS/MB/PCIE5    PCIE  pci_1           UNK
/SYS/MB/PCIE7    PCIE  pci_1           UNK
/SYS/MB/PCIE9    PCIE  pci_1           UNK
/SYS/MB/NET2     PCIE  pci_1           UNK
bash-3.2#

First Create a Console for LDOM


-----------------------------------------

-bash-3.2# ldm add-vcc port-range=5000-5020 primary-vcc primary

Once we have removed the bus from the primary domain, the same bus needs to be added to the I/O domain. First save the LDOM configuration, then reboot, then assign the bus to the I/O domain.
# ldm add-config test
# init 6
------------------------------------------------------------------------------
# ldm add-io pci_1 iodR083

Now the PCI devices attached to that bus (pci_1) will be added to the I/O domain. In my setup I got 3 PCI devices when adding the bus to the I/O domain (2 network cards and 1 HBA card).
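
A quick way to confirm the reassignment is to filter the I/O listing by bus name; a sketch, not from the original output (after the reboot, the DOMAIN column for pci_1 and its slots should show iodR083):

-bash-3.2# ldm ls-io | grep pci_1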

Now install the iodomain


*******************************************************************
Adding the CD-ROM ISO as a virtual disk to the guest domains so that we can install all servers at the same time
-----------------------------------------------------------------------------------------------------------------------------------

-bash-3.2# ldm add-vds primary-vds0 primary


-bash-3.2# ldm ls-services
VCC
    NAME         LDOM     PORT-RANGE
    primary-vcc  primary  5000-5020

VDS
    NAME          LDOM     VOLUME    OPTIONS    MPGROUP    DEVICE
    primary-vds0  primary
-bash-3.2#

Then add the ISO CD-ROM image to the I/O domain and start the OS installation in the I/O domain.
-bash-3.2# ldm add-vdisk cdrom iso0@primary-vds0 iodR080
-bash-3.2# ldm start iodR080
LDom iodR080 started
-bash-3.2# telnet localhost 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connecting to console "iodR080" in group "iodR080" ....
Press ~? for control options ..

{0} ok
{0} ok
{0} ok boot cdrom
Boot device: /virtual-devices@100/channel-devices@200/disk@0 File and args:
SunOS Release 5.10 Version Generic_147147-26 64-bit
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Skipped interface igb1
Attempting to configure interface igb0...
Skipped interface igb0
Attempting to configure interface nxge7...

**************************************************************
Once that is completed, let us create some LDOMs. Here I am creating 7 LDOMs.
Create the logical Domain using ldm create

ldm create Tivoli-ldap1


ldm create Int-cooking-app1
ldm create Int-active-app1
ldm create Int-UGC-app1
ldm create KMS-app1
ldm create HR-intranet-app
ldm create Alfresco-wcm-dms1

Now add the CPU and Memory to the LDOMs


--------------------------------------------------------
Add VCPU (threads)
-------------------------------------

ldm set-vcpu 32 Tivoli-ldap1


ldm set-vcpu 16 Int-cooking-app1
ldm set-vcpu 16 Int-active-app1
ldm set-vcpu 16 Int-UGC-app1
ldm set-vcpu 32 KMS-app1
ldm set-vcpu 32 HR-intranet-app
ldm set-vcpu 32 Alfresco-wcm-dms1

Add memory to the LDOMs


------------------------------------------
ldm set-memory 8G primary
ldm set-memory 8G iodR083
ldm set-memory 16G Tivoli-ldap1
ldm set-memory 8G Int-cooking-app1
ldm set-memory 8G Int-active-app1
ldm set-memory 8G Int-UGC-app1
ldm set-memory 16G KMS-app1
ldm set-memory 16G HR-intranet-app
ldm set-memory 16G Alfresco-wcm-dms1

********************************************************************************
************
Adding network card to the LDOMs
-------------------------

First create the virtual switches and assign them to the primary domain


-bash-3.2# ldm add-vsw net-dev=nxge0 linkprop=phys-state primary-vsw0 primary

-bash-3.2# ldm add-vsw net-dev=nxge1 linkprop=phys-state primary-vsw1 primary


-bash-3.2# ldm add-vsw net-dev=nxge2 linkprop=phys-state primary-vsw2 primary
-bash-3.2# ldm add-vsw net-dev=nxge3 linkprop=phys-state primary-vsw3 primary
-bash-3.2#

Another set of virtual switches will be added to the I/O domain:
-----------------------------------------------------------------------------------
-bash-3.2#
-bash-3.2# ldm add-vsw net-dev=nxge0 linkprop=phys-state iodR083-vsw0 iodR083
-bash-3.2# ldm add-vsw net-dev=nxge1 linkprop=phys-state iodR083-vsw1 iodR083
-bash-3.2# ldm add-vsw net-dev=nxge2 linkprop=phys-state iodR083-vsw2 iodR083
-bash-3.2# ldm add-vsw net-dev=nxge3 linkprop=phys-state iodR083-vsw3 iodR083
-bash-3.2#

Once that is completed, add the vnets to the LDOMs. I am taking one connection from the primary domain and another from the I/O domain and will configure IPMP inside each guest, so that we can ensure redundancy (a guest-side IPMP sketch follows the list of commands below).
----------------------------------------------------
ldm add-vnet linkprop=phys-state net0 primary-vsw0 Alfresco-wcm-dms1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw0 Alfresco-wcm-dms1
ldm add-vnet linkprop=phys-state net0 iodR083-vsw0 HR-intranet-app
ldm add-vnet linkprop=phys-state net1 primary-vsw0 HR-intranet-app
ldm add-vnet linkprop=phys-state net0 primary-vsw1 Int-UGC-app1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw1 Int-UGC-app1
ldm add-vnet linkprop=phys-state net0 iodR083-vsw1
ldm add-vnet linkprop=phys-state net0 primary-vsw2 Int-cooking-app1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw2 Int-cooking-app1
ldm add-vnet linkprop=phys-state net0 iodR083-vsw2 KMS-app1
ldm add-vnet linkprop=phys-state net1 primary-vsw2 KMS-app1
ldm add-vnet linkprop=phys-state net0 primary-vsw3 Tivoli-ldap1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw3 Tivoli-ldap1
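
Inside each guest the two vnets can then be grouped with IPMP. A minimal link-based IPMP sketch for a Solaris 10 guest follows; the interface names vnet0/vnet1 and the address are assumptions, not taken from the original setup:

# cat /etc/hostname.vnet0
192.168.10.21 netmask + broadcast + group ipmp0 up
# cat /etc/hostname.vnet1
group ipmp0 up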
*******************************************************************
Adding disk to the LDOMs
---------------------

Now we will export the disks as block devices. Here I am giving the full disk (slice *s2) and adding each one to a multipathing group (the disks come from a storage array). Members of the same multipathing group serve the same backend, so the guest gets an alternate path to its disk.
--------------------------------------------------------------------------------------------------
-bash-3.2# ldm add-vdsdev mpgroup=group1 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E6d0s2 disk1@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group2 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E7d0s2 disk2@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group3 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E8d0s2 disk3@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group4 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E9d0s2 disk4@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group5 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003EAd0s2 disk5@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group6 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003EBd0s2 disk6@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group7 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003ECd0s2 disk7@primary-vds0
-bash-3.2#

Add the I/O domain path to each multipathing group.
--------------------------------------------------------------
-bash-3.2# ldm add-vds iodr083-vds0 iodR083
-bash-3.2# ldm add-vdsdev mpgroup=group1 /dev/rdsk/c3t60060E8006D3E3000000D3E3000003E6d0s2 disk1@iodr083-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group2 /dev/rdsk/c3t60060E8006D3E3000000D3E3000003E7d0s2 disk2@iodr083-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group3 /dev/rdsk/c3t60060E8006D3E3000000D3E3000003E8d0s2 disk3@iodr083-vds0

-bash-3.2# ldm add-vdsdev mpgroup=group4 /dev/rdsk/c3t60060E8006D3E3000000D3E3000003E9d0s2 disk4@iodr083-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group5 /dev/rdsk/c3t60060E8006D3E3000000D3E3000003EAd0s2 disk5@iodr083-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group6 /dev/rdsk/c3t60060E8006D3E3000000D3E3000003EBd0s2 disk6@iodr083-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group7 /dev/rdsk/c3t60060E8006D3E3000000D3E3000003ECd0s2 disk7@iodr083-vds0
-bash-3.2#
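
Each volume now has two backends in its mpgroup, one served by the primary domain and one by the I/O domain, so the guest keeps its disk if either service domain goes down. A quick way to eyeball this, as a sketch (not part of the original output), is to filter the service listing by group name:

-bash-3.2# ldm ls-services | grep group1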

Assign the virtual disks we have just created to the guest domains.
--------------------------------------------------------------
-bash-3.2# ldm add-vdisk disk disk1@primary-vds0 Alfresco-wcm-dms1
-bash-3.2# ldm add-vdisk disk disk2@primary-vds0 HR-intranet-app
-bash-3.2# ldm add-vdisk disk disk3@primary-vds0 Int-UGC-app1
-bash-3.2# ldm add-vdisk disk disk4@primary-vds0 Int-active-app1
-bash-3.2# ldm add-vdisk disk disk5@primary-vds0 Int-cooking-app1
-bash-3.2# ldm add-vdisk disk disk6@primary-vds0 KMS-app1
-bash-3.2# ldm add-vdisk disk disk7@primary-vds0 Tivoli-ldap1

*********************************************************************
Now we have assigned the disks and network interfaces to the logical domains. Next, bind the domains, start them, and then install the OS.

ldm bind Alfresco-wcm-dms1
ldm bind HR-intranet-app
ldm bind Int-UGC-app1
ldm bind Int-active-app1
ldm bind Int-cooking-app1
ldm bind KMS-app1
ldm bind Tivoli-ldap1

ldm start Alfresco-wcm-dms1
ldm start HR-intranet-app
ldm start Int-UGC-app1
ldm start Int-active-app1
ldm start Int-cooking-app1
ldm start KMS-app1
ldm start Tivoli-ldap1
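
To install the OS on each guest, find its console port with ldm list and connect to it from the control domain with telnet, as shown earlier for the I/O domain. A sketch (substitute the port number reported for the domain you want to install):

-bash-3.2# ldm list
-bash-3.2# telnet localhost <console-port>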

SPARC Logical Domains: Alternate Service Domains Part 1

How To, Solaris, SPARC Logical Domains
matthew.mattoon
December 15, 2014
In this series we will be going over configuring alternate I/O and Service domains, with the goal of increasing the serviceability of SPARC T-Series servers without impacting other domains on the hypervisor, essentially enabling rolling maintenance without having to rely on live migration or downtime. It is important to note that this is not a cure-all; for example, base firmware updates would still be disruptive, but minor firmware such as disk and I/O card firmware should be able to be rolled.
In Part One we will go through the initial Logical Domain configuration, as well as mapping out
the devices we have and if they will belong in the primary or the alternate domain.
In Part Two we will go through the process of creating the alternate domain and assigning the
devices to it, thus making it independent of the primary domain.
In Part Three we will create redundant services to support our Logical Domains as well as create
a test Logical Domain to utilize these services.
Initial Logical Domain Configuration
I am going to assume that your configuration is currently at the factory default, and that you, like me, are using Solaris 11.2 on the hypervisor.
# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 256 511G 0.4% 0.3% 6h 24m

The first thing we need to do is remove some of the resources from the primary domain so that we are able to assign them to other domains. Since the primary domain is currently active and using these resources, we will enable delayed reconfiguration mode; this will accept all changes, and then on a reboot of that domain (in this case the primary, which is the control domain of the physical machine) the new configuration takes effect.
# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we can start reclaiming some of those resources, I will assign 2 cores to the primary domain
and 16GB of RAM.
# ldm set-vcpu 16 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm set-memory 16G primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

I like to save a named configuration often when we are making a lot of changes.


# ldm add-config reduced-resources

Next we will need some services to allow us to provision disks to domains and to connect to the
console of domains for the purposes of installation or administration.
# ldm add-vdiskserver primary-vds0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let's add another configuration to bookmark our progress.


# ldm add-config initial-services

We need to enable the Virtual Network Terminal Server service; this allows us to telnet from the control domain into the other domains.
# svcadm enable vntsd

Finally a reboot will put everything into action.

# reboot

When the system comes back up we should see a drastically different LDM configuration.
Identify PCI Root Complexes
All the T5-2s that I have looked at have been laid out the same, with one SAS HBA and onboard NIC pair on pci_0 and the other on pci_3, and the PCI slots spread across the root complexes. So to split everything evenly, pci_0 and pci_2 stay with the primary, while pci_1 and pci_3 go to the alternate. However, so that you understand how we know this, I will walk you through identifying the root complexes as well as the discrete types of devices.
# ldm ls -l -o physio primary
NAME
primary

IO
    DEVICE                     PSEUDONYM        OPTIONS
    pci@340                    pci_1
    pci@300                    pci_0
    pci@3c0                    pci_3
    pci@380                    pci_2
    pci@340/pci@1/pci@0/pci@4  /SYS/MB/PCIE5
    pci@340/pci@1/pci@0/pci@5  /SYS/MB/PCIE6
    pci@340/pci@1/pci@0/pci@6  /SYS/MB/PCIE7
    pci@300/pci@1/pci@0/pci@4  /SYS/MB/PCIE1
    pci@300/pci@1/pci@0/pci@2  /SYS/MB/SASHBA0
    pci@300/pci@1/pci@0/pci@1  /SYS/MB/NET0
    pci@3c0/pci@1/pci@0/pci@7  /SYS/MB/PCIE8
    pci@3c0/pci@1/pci@0/pci@2  /SYS/MB/SASHBA1
    pci@3c0/pci@1/pci@0/pci@1  /SYS/MB/NET2
    pci@380/pci@1/pci@0/pci@5  /SYS/MB/PCIE2
    pci@380/pci@1/pci@0/pci@6  /SYS/MB/PCIE3
    pci@380/pci@1/pci@0/pci@7  /SYS/MB/PCIE4

This shows us that pci@300 = pci_0, pci@340 = pci_1, pci@380 = pci_2, and pci@3c0 = pci_3.
Map Local Disk Devices To PCI Root
Map Local Disk Devices To PCI Root
First we need to determine which disk devices are in the rpool, so that we know which ones cannot be removed from the primary domain.
# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 70.3G in 0h8m with 0 errors on Fri Feb 21 05:56:34 2014
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0

c0t5000CCA04385ED60d0 ONLINE 0 0 0
c0t5000CCA0438568F0d0 ONLINE 0 0 0
errors: No known data errors

Next we must use mpathadm to find the Initiator Port Name. To do that we must look at slice 0
of c0t5000CCA04385ED60d0.
# mpathadm show lu /dev/rdsk/c0t5000CCA04385ED60d0s0
Logical Unit: /dev/rdsk/c0t5000CCA04385ED60d0s2
mpath-support: libmpscsi_vhci.so
Vendor: HITACHI
Product: H109060SESUN600G
Revision: A606
Name Type: unknown type
Name: 5000cca04385ed60
Asymmetric: no
Current Load Balance: round-robin
Logical Unit Group ID: NA
Auto Failback: on
Auto Probing: NA
Paths:
Initiator Port Name: w5080020001940698
Target Port Name: w5000cca04385ed61
Override Path: NA
Path State: OK
Disabled: no
Target Ports:
Name: w5000cca04385ed61
Relative ID: 0

Our output shows us that the initiator port is w5080020001940698.


# mpathadm show initiator-port w5080020001940698
Initiator Port: w5080020001940698
    Transport Type: unknown
    OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@1
Initiator Port: w5080020001940698
    Transport Type: unknown
    OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@2
Initiator Port: w5080020001940698
    Transport Type: unknown
    OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@8
Initiator Port: w5080020001940698
    Transport Type: unknown
    OS Device File: /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@4

So we can see that this particular disk is on pci@300, which is pci_0.
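
If the system has more than one HBA, it can also help to list every initiator port first and then show each one in turn; a small sketch using the port from the example above:

# mpathadm list initiator-port
Initiator Port: w5080020001940698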


Map Ethernet Cards To PCI Root

First we must determine the underlying device for each of our network interfaces.
# dladm show-phys net0
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 10000 full ixgbe0

In this case the device is ixgbe0; we can then look at the device tree to see where it points and find which PCI root this device is connected to.
# ls -l /dev/ixgbe0
lrwxrwxrwx 1 root root 53 Feb 12 2014 /dev/ixgbe0 ->
../devices/pci@300/pci@1/pci@0/pci@1/network@0:ixgbe0

Now we can see that it is using pci@300, which translates into pci_0.
Map Infiniband Cards to PCI Root
Again, let's determine the underlying device name of our InfiniBand interfaces. On my machine they defaulted to net2 and net3, but I had previously renamed the links to ib0 and ib1 for simplicity. This procedure is very similar to the one for Ethernet cards.
# dladm show-phys ib0
LINK MEDIA STATE SPEED DUPLEX DEVICE
ib0 Infiniband up 32000 unknown ibp0

In this case our device is ibp0. So now we just check the device tree.
# ls -l /dev/ibp0
lrwxrwxrwx 1 root root 83 Nov 26 07:17 /dev/ibp0 ->
../devices/pci@380/pci@1/pci@0/pci@5/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib
:ibp0

We can see by the path, that this is using pci@380 which is pci_2.
Map Fibre Channel Cards to PCI Root
Perhaps we need to split up some Fibre Channel HBAs as well; the first thing we must do is look at the cards themselves.
# luxadm -e port
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl NOT CONNECTED
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0,1/fp@0,0:devctl NOT CONNECTED

We can see here that these use pci@300 which is pci_0.


The Plan

Basically we are going to split our PCI devices by even and odd, with even staying with the primary and odd going with the alternate. On the T5-2, this results in the PCI-E cards on the left side being for the primary, and the cards on the right being for the alternate.
Here is a diagram of how the physical devices are mapped to PCI Root Complexes.

FIGURE 1.1 Oracle SPARC T5-2 Front View

FIGURE 1.2 Oracle SPARC T5-2 Rear View


References

SPARC T5-2 I/O Root Complex Connections


https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.z40005601508415.html
SPARC T5-2 Front Panel Connections
https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgcddce.html#scrolltoc
SPARC T5-2 Rear Panel Connections
https://docs.oracle.com/cd/E28853_01/html/E28854/pftsm.bbgdeaei.html#scrolltoc
Related posts:
1. SPARC Logical Domains: Live Migration
2. Solaris Virtualization: Using Logical Domains on Solaris 11 Part Two
3. Solaris Virtualization: Using Logical Domains on Solaris 11 Part One
4. Solaris Virtualization: Using Logical Domains on Solaris 11 Part Three
5. Solaris 11 on SPARC: What the Piss is the OK Prompt
SPARC Logical Domains: Live Migration

How To, Solaris, SPARC Logical Domains
matthew.mattoon
December 9, 2014
One of the ways that we are able to accomplish regularly scheduled maintenance is by utilizing Live Migration; with this we can migrate workloads from one physical machine to another without service interruption. The way it is done with Logical Domains is much more flexible than with most other hypervisor solutions: it doesn't require any complicated cluster setup or management layer, so you could literally utilize any compatible hardware at the drop of a hat.
This live migration article also touches on a technology that I have written about but not yet published (it should be published within the next week): Alternate Service Domains. If you are using it then Live Migration is still possible, and if you are not using it then Live Migration is actually easier (the underlying devices are simpler, so it is simpler to match them).
Caveats to Migration
Caveats to Migration

Virtual Devices must be accessible on both servers, via the same service name (though
the underlying paths may be different).

IO Domains cannot be live migrated.

Migrations can be either online (live) or offline (cold); the state of the domain determines which.

When doing a cold migration, virtual devices are not checked to ensure they exist on the receiving end; you will need to check this manually.
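
For a cold migration that manual check can be as simple as comparing the virtual services and backends on both machines; a sketch, run on each side and compare the results:

source# ldm list-services
target# ldm list-services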

Live Migration Dry Run


I recommend performing a dry run of any migration prior to performing the actual migration.
This will highlight any configuration problems prior to the migration happening.
# ldm migrate-domain -n ldom1 root@server
Target Password:

This will surface any errors that would occur in an actual migration, but without actually causing you problems.
Live Migration
When you are ready to perform the migration, remove the dry-run flag. This process will also do the appropriate safety checks to ensure that everything is good on the receiving end.
# ldm migrate-domain ldom1 root@server
Target Password:

Now the migration will proceed and unless something happens it will come up on the other
system.
Live Migration With Rename
We can also rename the logical domain as part of the migration; we simply specify the new name.
# ldm migrate-domain ldom1 root@server:ldom2
Target Password:

In this case the original name was ldom1 and the new name is ldom2.
Common Errors
Here are some common errors.
Bad Password or No LDM on Target
# ldm migrate-domain ldom1 root@server
Target Password:
Failed to establish connection with ldmd(1m) on target: server
Check that the 'ldmd' service is enabled on the target machine and

that the version supports Domain Migration. Check that the 'xmpp_enabled'
and 'incoming_migration_enabled' properties of the 'ldmd' service on
the target machine are set to 'true' using svccfg(1M).

Probable fixes: Ensure you are attempting to migrate to the correct hypervisor, that you have the username/password combination correct, and that the user has the appropriate level of access to ldmd and that ldmd is running.
Missing Virtual Disk Server Devices
# ldm migrate-domain ldom1 root@server
Target Password:
The number of volumes in mpgroup 'zfs-ib-nfs' on the target (1) differs
from the number on the source (2)
Domain Migration of LDom ldom1 failed

Probable fixes: Ensure that the underlying virtual disk devices match; if you are using mpgroups, then the entire mpgroup must match on both sides.
Missing Virtual Switch Device
# ldm migrate-domain ldom1 root@server
Target Password:
Failed to find required vsw alternate-vsw0 on target machine
Domain Migration of LDom logdom1 failed

Probable fixes: Ensure that the underlying virtual switch devices match in both locations.
Check Migration Progress
One thing to keep in mind is that during the migration process the hypervisor that is being evacuated is the authoritative one in terms of controlling the process, so status should be checked there.
source# ldm list -o status ldom1
NAME
logdom1

STATUS
    OPERATION    PROGRESS    TARGET
    migration    20%         172.16.24.101:logdom1

It can however be checked on the receiving end, though it will look a little bit different.
target# ldm list -o status logdom1
NAME
logdom1

STATUS
    OPERATION    PROGRESS    SOURCE
    migration    30%         ak00176306-primary

The big thing to notice is that it shows the source on this side; also, if we changed the name as part of the migration, it will show the new name here.
Cancel Migration
Of course if you need to cancel a migration, this would be done on the hypervisor that is being
evacuated, since it is authoritative.
# ldm cancel-operation migration ldom1
Domain Migration of ldom1 has been cancelled

This will allow you to cancel any accidentally started migrations; however, anything that you needed to cancel would likely generate an error before you get to this point.
Cross CPU Considerations
By default, logical domains are created to use very specific CPU features based on the hardware they run on; as such, live migration only works by default between the exact same CPU type and generation. However, we can change the CPU architecture (cpu-arch) of a domain to one of the following:
Native - Allows migration between the same CPU type and generation.
Generic - Allows the most generic processor feature set, for the widest live migration capabilities.
Migration Class 1 - Allows migration between T4, T5 and M5 server classes (also supports M10 depending on firmware version).
SPARC64 Class 1 - Allows migration between Fujitsu M10 servers.
Here is an example of how you would change the CPU architecture of a domain. I personally
recommend using this sparingly and building your hardware infrastructure in a way where you
have the capacity on the same generation of hardware, however in certain circumstances this can
make a lot of sense if the performance implications are not too great.
# ldm set-domain cpu-arch=migration-class1 ldom1

I personally wouldn't count on the cross-CPU functionality, but in some cases it might make sense for your situation. Either way, Live Migration of Logical Domains is done in a very effective manner and adds a lot of value.
Related posts:
1. SPARC Logical Domains: Alternate Service Domains Part 3
2. SPARC Logical Domains: Alternate Service Domains Part 1

3. SPARC Logical Domains: Alternate Service Domains Part 2


4. Solaris Virtualization: Using Logical Domains on Solaris 11 Part Two
5. Solaris Virtualization: Using Logical Domains on Solaris 11 Part One

SPARC Logical Domains: Alternate Service Domains Part 2

How To, Solaris, SPARC Logical Domains
matthew.mattoon
December 16, 2014
In Part One of this series we went through the initial configuration of our Logical Domain
hypervisor, and took some time to explain the process of mapping out the PCI Root Complexes,
so that we would be able to effectively split them between the primary and an alternate domain.
In Part Two (this article) we are going to take that information and split out our PCI Root
Complexes and configure and install an alternate domain. At the end of this article you will be
able to reboot the primary domain without impacting the operation of the alternate domain.
In Part Three we will be creating redundant virtual services as well as some guests that will use
the redundant services that we created, and will go through some testing to see the capabilities of
this architecture.
Remove PCI Roots From Primary
The changes that we need to make will require that we put the primary domain into delayed reconfiguration mode, which will require a reboot to implement the changes. This mode also prevents changes to other domains in the meantime.
# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Now we remove the unneeded PCI Roots from the primary domain, this will allow us to assign
them to the alternate domain.
# ldm remove-io pci_1 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm remove-io pci_3 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

Let's save our configuration.


# ldm add-config reduced-io

Now a reboot to make the configuration active.


# reboot

When it comes back up we should see the PCI Roots unassigned.


Create Alternate Domain
Now we can create our alternate domain and assign it some resources.
# ldm add-domain alternate
# ldm set-vcpu 16 alternate
# ldm set-memory 16G alternate

We have set this with 2 cores and 16GB of RAM. Your sizing will depend on your use case.
Add PCI Devices to Alternate Domain
We are assigning pci_1 and pci_3 to the alternate domain; it will have direct access to two of the on-board NICs, two of the disks, and half of the PCI slots. It will also inherit the CD-ROM as well as the USB controller.
One quick point worth calling out: the disks are not split evenly; pci_0 has four disks, while pci_3 only has two. That said, if your configuration includes six disks then I would recommend using the third and fourth in the primary as a non-redundant storage pool, perhaps to stage firmware and such for patching. The bottom line is that you need to purchase the hardware with a minimum of four drives.
# ldm add-io pci_1 alternate
# ldm add-io pci_3 alternate

Here we have NICs and disks on our alternate domain; now we just need something to boot from and we can get the install going.
Let's save our config before moving on.
# ldm add-config alternate-domain

With the config saved we can move on to the next steps.

Install Alternate Domain


We should still have our CD in from the install of the primary domain. After switching the PCI
Root Complexes the CD drive will be presented to the alternate domain (as it is attached to
pci_3).
First thing to do is bind our domain.
# ldm bind alternate

Then we need to start the domain.


# ldm start alternate

Next we need to determine what port telnet is listening on for this particular domain. In our case we can see it is 5000.
# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 16 16G 0.2% 0.2% 17h 32m
alternate active -n--v- 5000 16 16G 0.0% 0.0% 17h 45m

When using these various consoles you always need to be attentive to the escape sequence, which in the case of telnet is ^] (CTRL + ]). Once we have determined where we can telnet to, we can start the connection. Also important to note: you will see "::1: Connection refused". This is because the connection to localhost tries IPv6 first; if you don't want to see that error, connect to 127.0.0.1 (the IPv4 loopback address) instead.
# telnet localhost 5000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to AK00176306.
Escape character is '^]'.
Connecting to console "alternate" in group "alternate" ....
Press ~? for control options ..
telnet> quit
Connection to AK00176306 closed.

I will let you go through the install on your own; I am assuming that you know how to install the OS itself.
Now let's save our config, so that we don't lose our progress.
# ldm add-config alternate-domain-config

At this point, if we have done everything correctly, we can reboot the primary domain without disrupting service to the alternate domain. Running pings during a reboot will illustrate where we are in the build. Of course you would have to have networking configured on the alternate domain, and don't forget the simple stuff like mirroring your rpool; it would be a pity to go to all this trouble and not have a basic level of redundancy such as mirrored disks.
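
For example, attaching a mirror to the alternate domain's rpool is a single command once the second disk is labeled; a sketch with placeholder device names (depending on the Solaris release you may also need to install the boot block on the new disk):

# zpool attach rpool c0t<existing-disk>d0s0 c0t<second-disk>d0s0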
Test Redundancy
At this point the alternate and the primary domain are completely independent. To validate this I recommend setting up a ping to both the primary and the alternate domain and rebooting the primary. If done correctly you will not lose any pings to the alternate domain. Keep in mind that while the primary is down you will not be able to use the control domain functions; in other words, it is the only domain which can configure and start/stop other domains.
Related posts:
1. SPARC Logical Domains: Alternate Service Domains Part 1
2. SPARC Logical Domains: Live Migration
3. Solaris Virtualization: Using Logical Domains on Solaris 11 Part One
4. Solaris Virtualization: Using Logical Domains on Solaris 11 Part Three
5. Solaris Virtualization: Using Logical Domains on Solaris 11 Part Two
LDOM: Adding a virtual disk to a LDOM guest domain
# Tested on Solaris 10 (both Service Domain and Guest Domain) with ZFS
volumes acting as
# virtual disks

# I'm using a SAN LUN presented to service domain like following one:
echo | format
[...]
33. c4t60060481234290101102103030343635d0 <EMC-SYMMETRIX-5771
cyl 16378 alt 2 hd 15 sec 128>
/scsi_vhci/ssd@g60060481234290101102103030343635
[...]

# Being this my guest domain:


ldm list | grep MYGUEST05
MYGUEST05    active    -n----    5002    16    8G    0.1%    181d 22h

# and these are its dedicated zpools and zfs datasets:
zpool list | grep P05
P05-0    119G   95,5G   23,5G   80%   ONLINE
P05-2    149G   122G    26,6G   82%   ONLINE

zfs list | grep P05
P05-0              115G   2,11G     21K   /P05-0
P05-0/VD_DATA-0     85G   5,25G   81,9G   -
P05-0/VD_OS-0       30G   18,5G   13,6G   -
P05-0/admin         18K    482K     18K   /P05-0/admin
P05-0/export       126K    374K    126K   /P05-0/export
P05-2              147G    176M     21K   /P05-2
P05-2/VD_DATA-2    146G   24,2G    122G   -

# I'll take "P05-3" as name for the new zpool and "VD_DATA-3" for
virtual disk name

# Create the new zpool on selected disk


zpool create P05-3 c4t60060481234290101102103030343635d0

zpool list | grep P05
P05-0    119G    95,5G   23,5G   80%   ONLINE
P05-2    149G    122G    26,6G   82%   ONLINE
P05-3    14,9G   110K    14,9G    0%   ONLINE

zpool get all P05-3
NAME   PROPERTY       VALUE                  SOURCE
P05-3  size           14,9G                  -
P05-3  capacity       0%                     -
P05-3  altroot        -                      default
P05-3  health         ONLINE                 -
P05-3  guid           6696447729916926011    -
P05-3  version        29                     default
P05-3  bootfs         -                      default
P05-3  delegation     on                     default
P05-3  autoreplace    off                    default
P05-3  cachefile      -                      default
P05-3  failmode       wait                   default
P05-3  listsnapshots  on                     default
P05-3  autoexpand     off                    default
P05-3  free           14,9G                  -
P05-3  allocated      92,5K                  -
P05-3  readonly       off                    -

# and a new volume (in order to follow the standard being used in my
# platform) taking the whole size of the new zpool

zfs list P05-3
NAME    USED   AVAIL  REFER  MOUNTPOINT
P05-3   14,4G  206M   31K    /P05-3

zfs create -V 14G P05-3/VD_DATA-3

zfs list | grep P05-3
P05-3             14,4G  206M   31K  /P05-3
P05-3/VD_DATA-3   14,4G  14,6G  16K  -

# This has created a link to the new volume under /dev/zvol/dsk

ll /dev/zvol/dsk/P05-3/*
lrwxrwxrwx   1 root     root          36 avr  9 12:35 /dev/zvol/dsk/P05-3/VD_DATA-3 -> ../../../../devices/pseudo/zfs@0:25c

# Our volume is ready, let's add it as a virtual disk to the guest domain;
# first, check the current virtual disks assigned to the guest domain

ldm list -o disk MYGUEST05
NAME
MYGUEST05

DISK
    NAME              VOLUME                        ID  DEVICE  SERVER   MPGROUP
    myguest05-vdos-0  P050-VDOS-0@primary-vds00     0   disk@0  primary
    myguest05-data-0  P050-VDATA-0@primary-vds00    1   disk@1  primary
    myguest05-data-2  P052-VDATA-2@primary-vds00    3   disk@3  primary

# Export the virtual disk backend from the service domain

ldm add-vdsdev /dev/zvol/dsk/P05-3/VD_DATA-3 P053-VDATA-3@primary-vds00

# and assign the backend to my guest domain
ldm add-vdisk myguest05-data-3 P053-VDATA-3@primary-vds00 MYGUEST05

ldm list -o disk MYGUEST05
NAME
MYGUEST05

DISK
    NAME              VOLUME                        ID  DEVICE  SERVER   MPGROUP
    myguest05-vdos-0  P050-VDOS-0@primary-vds00     0   disk@0  primary
    myguest05-data-0  P050-VDATA-0@primary-vds00    1   disk@1  primary
    myguest05-data-2  P052-VDATA-2@primary-vds00    3   disk@3  primary
    myguest05-data-3  P053-VDATA-3@primary-vds00    2   disk@2  primary

# We are done and the new disk should be visible on the guest domain
# (it may be necessary to rescan devices):
# Before:

myguest05:/#> echo | format


Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <SUN-DiskImage-30GB cyl 65533 alt 2 hd 1 sec 960>
/virtual-devices@100/channel-devices@200/disk@0
1. c0d1 <SUN-DiskImage-85GB cyl 65533 alt 2 hd 1 sec 2720>
/virtual-devices@100/channel-devices@200/disk@1
2. c0d3 <SUN-DiskImage-142GB cyl 4037 alt 2 hd 96 sec 768>
/virtual-devices@100/channel-devices@200/disk@3
# After:
myguest05:/#> echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <SUN-DiskImage-30GB cyl 65533 alt 2 hd 1 sec 960>
/virtual-devices@100/channel-devices@200/disk@0
1. c0d1 <SUN-DiskImage-85GB cyl 65533 alt 2 hd 1 sec 2720>
/virtual-devices@100/channel-devices@200/disk@1
2. c0d2 <SUN-DiskImage-14GB cyl 396 alt 2 hd 96 sec 768>
/virtual-devices@100/channel-devices@200/disk@2
3. c0d3 <SUN-DiskImage-142GB cyl 4037 alt 2 hd 96 sec 768>
/virtual-devices@100/channel-devices@200/disk@3
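
# If the new disk does not show up right away, a device rescan on the guest
# usually makes it appear (a small sketch, run inside the guest domain):
myguest05:/#> devfsadm -c disk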

# Do not forget to update the configuration before quitting the service
# domain

DATE=`date +"%Y-%m-%d"_"%Hh%Mmn%S"`
ldm add-spconfig Ldom-primary_${DATE}
ldm ls-spconfig
factory-default
Ldom-primary_2014-10-13_12h52mn01
Ldom-primary_2014-11-17_10h25mn45
Ldom-primary_2014-11-17_11h07mn25
Ldom-primary_2014-12-08_15h04mn27
Ldom-primary_2015-01-12_14h27mn25
Ldom-primary_2015-04-09_13h16mn10 [current]
xvmconfig

# ----------------------------------------------------------------------------------------
# If we use physical devices as virtual disks instead, we'll do something
# like this

root@my_prim_ldom:/> echo | format


[...]
113. c0t600601606DA0300012493AE91BD5E311d0 <DGC-VRAID-0532250.00GB>
/scsi_vhci/ssd@g600601606da0300012493ae91bd5e311
[...]

root@my_prim_ldom:/> ldm list -o disk guest01
NAME
guest01

DISK
    NAME          VOLUME                        MPGROUP          DEVICE   SERVER   ID
    guest01_sys0  guest01_sys0vol@primary-vds0  guest01_sys0map  disk@0   primary  0
    guest01_d0    guest01_d0vol@primary-vds0    guest01_d0map    disk@1   primary  1
    guest01_d01   guest01_d01vol@primary-vds0   guest01_d01map   disk@2   primary  2
    guest01_d02   guest01_d02vol@primary-vds0   guest01_d02map   disk@3   primary  3
    guest01_d03   guest01_d03vol@primary-vds0   guest01_d03map   disk@4   primary  4
    guest01_d04   guest01_d04vol@primary-vds0   guest01_d04map   disk@5   primary  5
    guest01_d05   guest01_d05vol@primary-vds0   guest01_d05map   disk@6   primary  6
    guest01_d06   guest01_d06vol@primary-vds0   guest01_d06map   disk@7   primary  7
    guest01_d07   guest01_d07vol@primary-vds0   guest01_d07map   disk@8   primary  8
    guest01_d08   guest01_d08vol@primary-vds0   guest01_d08map   disk@9   primary  9
    guest01_d09   guest01_d09vol@primary-vds0   guest01_d09map   disk@10  primary  10
    guest01_d10   guest01_d10vol@primary-vds0   guest01_d10map   disk@11  primary  11
    guest01_d11   guest01_d11vol@primary-vds0   guest01_d11map   disk@12  primary  12
    guest01_d12   guest01_d12vol@primary-vds0   guest01_d12map   disk@13  primary  13
    guest01_d13   guest01_d13vol@primary-vds0   guest01_d13map   disk@14  primary  14
    guest01_d14   guest01_d14vol@primary-vds0   guest01_d14map   disk@15  primary  15
    guest01_d15   guest01_d15vol@primary-vds0   guest01_d15map   disk@16  primary  16
    guest01_d16   guest01_d16vol@primary-vds0   guest01_d16map   disk@17  primary  17
    guest01_d17   guest01_d17vol@primary-vds0   guest01_d17map   disk@18  primary  18
    guest01_d18   guest01_d18vol@primary-vds0   guest01_d18map   disk@19  primary  19
    guest01_d19   guest01_d19vol@primary-vds0   guest01_d19map   disk@20  primary  20

root@my_prim_ldom:/> ldm add-vdsdev mpgroup=guest01_d20map /dev/dsk/c0t600601606DA0300012493AE91BD5E311d0s2 guest01_d20vol@primary-vds0
root@my_prim_ldom:/> ldm add-vdisk guest01_d20 guest01_d20vol@primary-vds0 guest01

root@my_prim_ldom:/> ldm list -o disk guest01
NAME
guest01

DISK
    NAME          VOLUME                        MPGROUP          DEVICE   SERVER   ID
    guest01_sys0  guest01_sys0vol@primary-vds0  guest01_sys0map  disk@0   primary  0
    guest01_d0    guest01_d0vol@primary-vds0    guest01_d0map    disk@1   primary  1
    guest01_d01   guest01_d01vol@primary-vds0   guest01_d01map   disk@2   primary  2
    guest01_d02   guest01_d02vol@primary-vds0   guest01_d02map   disk@3   primary  3
    guest01_d03   guest01_d03vol@primary-vds0   guest01_d03map   disk@4   primary  4
    guest01_d04   guest01_d04vol@primary-vds0   guest01_d04map   disk@5   primary  5
    guest01_d05   guest01_d05vol@primary-vds0   guest01_d05map   disk@6   primary  6
    guest01_d06   guest01_d06vol@primary-vds0   guest01_d06map   disk@7   primary  7
    guest01_d07   guest01_d07vol@primary-vds0   guest01_d07map   disk@8   primary  8
    guest01_d08   guest01_d08vol@primary-vds0   guest01_d08map   disk@9   primary  9
    guest01_d09   guest01_d09vol@primary-vds0   guest01_d09map   disk@10  primary  10
    guest01_d10   guest01_d10vol@primary-vds0   guest01_d10map   disk@11  primary  11
    guest01_d11   guest01_d11vol@primary-vds0   guest01_d11map   disk@12  primary  12
    guest01_d12   guest01_d12vol@primary-vds0   guest01_d12map   disk@13  primary  13
    guest01_d13   guest01_d13vol@primary-vds0   guest01_d13map   disk@14  primary  14
    guest01_d14   guest01_d14vol@primary-vds0   guest01_d14map   disk@15  primary  15
    guest01_d15   guest01_d15vol@primary-vds0   guest01_d15map   disk@16  primary  16
    guest01_d16   guest01_d16vol@primary-vds0   guest01_d16map   disk@17  primary  17
    guest01_d17   guest01_d17vol@primary-vds0   guest01_d17map   disk@18  primary  18
    guest01_d18   guest01_d18vol@primary-vds0   guest01_d18map   disk@19  primary  19
    guest01_d19   guest01_d19vol@primary-vds0   guest01_d19map   disk@20  primary  20
    guest01_d20   guest01_d20vol@primary-vds0   guest01_d20map   disk@21  primary  21

Check Operating System Version


This is the released version of Solaris 11 for SPARC, which of course means it can
only be used on the SPARC processor architecture.
root@labsrv:~# uname -a
SunOS t4 5.11 11.0 sun4v sparc sun4v

View Logical Domains Software Version


The release version of Solaris 11 for SPARC comes with Logical Domains 2.1. If you
have purchased support and configured access to the repositories then pkg update
will give you access to Logical Domains 2.2.
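
A quick way to check (and, with a support repository configured, update) the Logical Domains Manager package from IPS; a sketch assuming the standard Oracle Solaris 11 package name ldomsmanager:

root@labsrv:~# pkg info ldomsmanager | grep Version
root@labsrv:~# pkg update ldomsmanager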

root@labsrv:~# ldm -V

Logical Domains Manager (v 2.1.0.4)
        Hypervisor control protocol v 1.7
        Using Hypervisor MD v 1.3

System PROM:
        Hostconfig      v. 1.1.1        @(#)Hostconfig 1.1.1 2011/08/03
        Hypervisor      v. 1.10.1.      @(#)Hypervisor 1.10.1.b 2011/09/12
        OpenBoot        v. 4.33.1       @(#)OpenBoot 4.33.1 2011/08/03

View Existing Logical Domains


Here we see the specifics of our physical machine. The primary domain has all of the resources, and before we are able to allocate resources to other logical domains we will need to remove some from the primary domain.
root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-c--  UART  64    32256M  0.0%  1d 10h 15m

Create Services for Control Domain

Here we are creating the Virtual Console Concentrator service. This is what allows
us to connect to the console of the domains for installation and reconfiguration.

root@labsrv:~# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Virtual Disk Service


root@labsrv:~# ldm add-vds primary-vds0 primary

Virtual Switch Service

root@labsrv:~# ldm add-vsw net-dev=net0 primary-vsw0 primary

Verify Changes
root@labsrv:~# ldm list-services primary
VCC
    NAME          LDOM     PORT-RANGE
    primary-vcc0  primary  5000-5100

VSW
    NAME          LDOM     MAC                NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID  MTU   MODE  INTER-VNET-LINK
    primary-vsw0  primary  00:14:4f:fb:bd:bd  net0     0   switch@0            1                1          1500        on

VDS
    NAME          LDOM     VOLUME    OPTIONS    MPGROUP    DEVICE
    primary-vds0  primary

Configure the Control Domain

By default we have the primary domain, which is the host operating system.

root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-c--  UART  64    32256M  0.0%  1d 30m

We have to reduce the number of vCPUs and the amount of memory in the control domain to make them available to the logical domains.

root@labsrv:~# ldm set-vcpu 4 primary


root@labsrv:~# ldm set-memory 4G primary

Reconfigure the primary domain to apply the changes.

root@labsrv:~# ldm start-reconf primary

Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.

Create an alternate configuration file.

root@labsrv:~# ldm add-config primary-config


root@labsrv:~# ldm list-config
factory-default
primary-config [current]

Enable the Virtual Network Terminal Server Daemon.


root@labsrv:~# svcadm enable vntsd
root@labsrv:~# svcs | grep vntsd
online         13:10:01 svc:/ldoms/vntsd:default

Reboot now.
root@labsrv:~# reboot

After the reboot, make sure the primary domain reflects the new memory and vCPU configuration.

root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-cv-  UART  4     4G      0.4%  00h 15m

Create Install Media Repository


Create a ZFS file system to store ISO images of Solaris operating system.
root@labsrv:~# zfs create -p rpool/ldoms/OSImages

Add the Solaris ISO images to the Virtual Disk Server service.
root@labsrv:~# ldm add-vdsdev /rpool/ldoms/OSImages/sol-10-u10-ga2-sparc-dvd.iso solaris10_media@primary-vds0

root@labsrv:~# ldm add-vdsdev /rpool/ldoms/OSImages/sol-11-11.11-sparc.iso solaris11_media@primary-vds0

Create the Logical Domain


root@labsrv:~# ldm add-domain newdom1
root@labsrv:~# ldm add-vcpu 4 newdom1
root@labsrv:~# ldm add-memory 4G newdom1
root@labsrv:~# ldm add-vnet vnet1 primary-vsw0 newdom1

Add the installation media to the newly created logical domain


root@labsrv:~# ldm add-vdisk sol-11-sparc.iso solaris11_media@primary-vds0 newdom1

Configure Virtual Disks


root@labsrv:~# zfs create -p -V 30G rpool/ldoms/newdom1/disk0
root@labsrv:~# zfs create -p -V 30G rpool/ldoms/newdom1/disk1

Add the vdisk volumes to the Virtual Disk Server service (pvdisk0).


root@labsrv:~# ldm add-vdsdev /dev/zvol/dsk/rpool/ldoms/newdom1/disk0 newdom1-disk0@primary-vds0

root@labsrv:~# ldm add-vdsdev /dev/zvol/dsk/rpool/ldoms/newdom1/disk1 newdom1-disk1@primary-vds0

Assign the virtual disks to the newly created logical domain


root@labsrv:~# ldm add-vdisk disk0 newdom1-disk0@primary-vds0 newdom1
root@labsrv:~# ldm add-vdisk disk1 newdom1-disk1@primary-vds0 newdom1

Configure Boot Devices (to autoboot from primary disk0)


root@labsrv:~# ldm set-var auto-boot\?=true newdom1
root@labsrv:~# ldm set-var boot-device=disk0 newdom1

Set the Host ID


root@labsrv:~# ldm set-domain hostid=58e92820 newdom1

Before we start the logical domain, we need to bind the associated resources to the
logical domain. This will also show us the port that the console is running on, so
that we can connect to it.

root@labsrv:~# ldm list
NAME     STATE     FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active    -n-cv-  UART  4     4G      0.8%  4h 20m
newdom1  inactive  ------        4     4G

root@labsrv:~# ldm bind-domain newdom1

root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-cv-  UART  4     4G      3.0%  4h 21m
newdom1  bound   ------  5000  4     4G

Connect to newdom1 prior to starting the domain, because we prefer to watch the boot process so that we know everything is working properly. This means we will use a separate ssh session for the telnet connection.

root@labsrv:~# telnet localhost 5000

Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to t4.
Escape character is '^]'.

Connecting to console "newdom1" in group "newdom1" ....


Press ~? for control options ..

Start the domain.


root@labsrv:~# ldm start-domain newdom1
LDom newdom1 started

SPARC T4-1, No Keyboard


Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.1, 4096 MB memory available, Serial #58496027.
Ethernet address 0:14:4f:fb:e:4a, Host ID: 58fb0e4a.

Boot device: disk0

File and args:

Bad magic number in disk label


ERROR: /virtual-devices@100/channel-devices@200/disk@0: Can't open disk label
package

ERROR: boot-read fail

Evaluating:

Can't open boot device

{0} ok

The "Can't open boot device" error is expected because disk0 is still blank, so now boot newdom1 from the Solaris 11 installation media (the virtual disk we named sol-11-sparc.iso).

{0} ok boot sol-11-sparc.iso
