Basics of LDOM
Identify PCI-E buses which are not currently used by the Control Domain.
Stop the Logical Domain to which you are going to add the PCI bus.
Identify PCI buses which are not currently used by the Control Domain. In the example below,
we can confirm that "pci@500" is not in use, while "pci@400" is already in use.
[root@unixrock /]# echo | format
Searching for disks...done
Save the configuration to the Service Processor (spconfig) and reboot the Control Domain for
the changes to take effect. The sequence is along these lines (removing the unused bus, saving
a new spconfig, and rebooting):
[root@unixrock /]# ldm remove-io pci@500 primary
[root@unixrock /]# ldm add-config config_pci
[root@unixrock /]# ldm list-spconfig
[root@unixrock /]# init 6
After the reboot, verify that "pci@500" has been removed from the Control Domain.
[root@unixrock /]# ldm list-devices primary |grep -i pci
pci@400    pci_0    yes
[root@unixrock /]#
[root@unixrock /]# ldm list-spconfig
factory-default
config_initial
config_pci [current]
[root@unixrock /]#
Stopping the Logical Domain to which we are going to add the PCI bus.
[root@unixrock /]# ldm list-domain
NAME          STATE    FLAGS    CONS    VCPU   MEMORY   UTIL   UPTIME
primary       active   -n-cv-   SP      8      2G       0.4%   55m
testldom01    active   -t----   5001    8      2G       0.1%   25m
[root@unixrock /]#
[root@unixrock /]# ldm stop-domain testldom01
LDom testldom01 stopped
[root@unixrock /]#
Perfect! We have successfully configured our I/O Domain. Thanks for reading this post;
please leave your valuable comments or queries, and I will respond at the earliest.
Once we have removed the bus from the primary domain, the same bus needs to be added
to the I/O domain. First save the LDOM configuration, then reboot, then assign the
same bus to the I/O domain.
# ldm add-config test
# init 6
----------------------------------------------------------------------------------------------------------------
# ldm add-io pci_1 iodR083
Now the PCI devices attached to that bus (pci_1) will be added to the I/O domain.
In my setup, adding the bus to the I/O domain brought in 3 PCI devices (2 network cards
and 1 HBA card).
Then add the ISO CD-ROM image to the I/O domain and start the OS installation there.
-bash-3.2# ldm add-vdisk cdrom iso0@primary-vds0 iodR080
-bash-3.2# ldm start iodR080
LDom iodR080 started
-bash-3.2# telnet localhost 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connecting to console "iodR080" in group "iodR080" ....
Press ~? for control options ..
{0} ok
{0} ok
{0} ok boot cdrom
Boot device: /virtual-devices@100/channel-devices@200/disk@0 File and args:
SunOS Release 5.10 Version Generic_147147-26 64-bit
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.
Skipped interface igb1
Attempting to configure interface igb0...
Skipped interface igb0
Attempting to configure interface nxge7...
**************************************************************
Once that is complete, let us create some LDOMs. Here I am creating 7 LDOMs.
Create the logical domains using ldm create, as sketched below.
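A minimal sketch of that step, repeated for each of the 7 domains (the vCPU and memory sizes here are assumptions, not values from this setup):
# create the domain, then size its virtual CPUs and memory
ldm create Alfresco-wcm-dms1
ldm set-vcpu 8 Alfresco-wcm-dms1
ldm set-memory 8G Alfresco-wcm-dms1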
********************************************************************************
Adding network cards to the LDOMs
-------------------------
Once completed, add the vnets to the LDOMs. (I am taking one connection from the
primary domain and another from the I/O domain and will configure IPMP, so that we can
ensure redundancy; a guest-side IPMP sketch follows the commands below.)
----------------------------------------------------
ldm add-vnet linkprop=phys-state net0 primary-vsw0 Alfresco-wcm-dms1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw0 Alfresco-wcm-dms1
ldm add-vnet linkprop=phys-state net0 iodR083-vsw0 HR-intranet-app
ldm add-vnet linkprop=phys-state net1 primary-vsw0 HR-intranet-app
ldm add-vnet linkprop=phys-state net0 primary-vsw1 Int-UGC-app1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw1 Int-UGC-app1
ldm add-vnet linkprop=phys-state net0 iodR083-vsw1 Int-active-app1
ldm add-vnet linkprop=phys-state net0 primary-vsw2 Int-cooking-app1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw2 Int-cooking-app1
ldm add-vnet linkprop=phys-state net0 iodR083-vsw2 KMS-app1
ldm add-vnet linkprop=phys-state net1 primary-vsw2 KMS-app1
ldm add-vnet linkprop=phys-state net0 primary-vsw3 Tivoli-ldap1
ldm add-vnet linkprop=phys-state net1 iodR083-vsw3 Tivoli-ldap1
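Inside each guest, the two vnets are then grouped into an IPMP group. A minimal sketch, assuming the guests run Solaris 11 and the vnets show up as net0 and net1 (the address is illustrative; on Solaris 10 you would configure IPMP via /etc/hostname.* files instead):
# group the two vnets into one IPMP interface and put the address on the group
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a 192.168.10.21/24 ipmp0/v4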
*******************************************************************
Adding disks to the LDOMs
---------------------
Now we will export the disks as block devices. Here I am giving each guest the full disk,
that is, slice 2 (*s2), and adding each device to a multipathing group (the disks come
from a storage array). Volumes in the same multipathing group act as failover paths to
the same virtual disk.
-------------------------------------------------------------------------------------------------
-bash-3.2# ldm add-vdsdev mpgroup=group1 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E6d0s2 disk1@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group2 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E7d0s2 disk2@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group3 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E8d0s2 disk3@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group4 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003E9d0s2 disk4@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group5 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003EAd0s2 disk5@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group6 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003EBd0s2 disk6@primary-vds0
-bash-3.2# ldm add-vdsdev mpgroup=group7 /dev/rdsk/c0t60060E8006D3E3000000D3E3000003ECd0s2 disk7@primary-vds0
-bash-3.2#
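Each exported volume still has to be attached to its guest with ldm add-vdisk. A sketch for the first domain (the vdisk name vdisk1 is illustrative):
-bash-3.2# ldm add-vdisk vdisk1 disk1@primary-vds0 Alfresco-wcm-dms1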
*********************************************************************
Now we have assigned the disks and network cards to the logical domains. Next, bind
the domains, start them, and then install the OS.
ldm bind Alfresco-wcm-dms1
ldm bind HR-intranet-app
ldm bind Int-UGC-app1
ldm bind Int-active-app1
ldm bind Int-cooking-app1
ldm bind KMS-app1
ldm bind Tivoli-ldap1
ldm start Alfresco-wcm-dms1
ldm start HR-intranet-app
ldm start Int-UGC-app1
ldm start Int-active-app1
ldm start Int-cooking-app1
ldm start KMS-app1
ldm start Tivoli-ldap1
The first thing we need to do is remove some of the resources from the primary domain, so that
we are able to assign them to other domains. Since the primary domain is currently active and
using these resources, we will enable delayed reconfiguration mode; this will accept all changes,
and then on a reboot of that domain (in this case primary, which is the control domain of the
physical machine) the new configuration will take effect.
# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.
Now we can start reclaiming some of those resources. I will assign 2 cores (16 vCPUs) to the
primary domain and 16GB of RAM.
# ldm set-vcpu 16 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm set-memory 16G primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
Next we will need some services to allow us to provision disks to domains and to connect to the
console of domains for the purposes of installation or administration.
# ldm add-vdiskserver primary-vds0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
We need to enable the Virtual Network Terminal Server service; this allows us to telnet from the
control domain into the other domains.
# svcadm enable vntsd
# reboot
When the system comes back up we should see a drastically different LDM configuration.
Identify PCI Root Complexes
All the T5-2s that I have looked at have been laid out the same, with a SAS HBA and onboard
NICs on pci_0 and pci_3, and the PCI slots spread across all four root complexes. So, to split
everything evenly, pci_0 and pci_2 stay with the primary, while pci_1 and pci_3 go to the
alternate. However, so that you understand how we know this, I will walk you through identifying
the complexes as well as the discrete types of devices.
# ldm ls -l -o physio primary
NAME
primary

IO
    DEVICE                     PSEUDONYM        OPTIONS
    pci@340                    pci_1
    pci@300                    pci_0
    pci@3c0                    pci_3
    pci@380                    pci_2
    pci@340/pci@1/pci@0/pci@4  /SYS/MB/PCIE5
    pci@340/pci@1/pci@0/pci@5  /SYS/MB/PCIE6
    pci@340/pci@1/pci@0/pci@6  /SYS/MB/PCIE7
    pci@300/pci@1/pci@0/pci@4  /SYS/MB/PCIE1
    pci@300/pci@1/pci@0/pci@2  /SYS/MB/SASHBA0
    pci@300/pci@1/pci@0/pci@1  /SYS/MB/NET0
    pci@3c0/pci@1/pci@0/pci@7  /SYS/MB/PCIE8
    pci@3c0/pci@1/pci@0/pci@2  /SYS/MB/SASHBA1
    pci@3c0/pci@1/pci@0/pci@1  /SYS/MB/NET2
    pci@380/pci@1/pci@0/pci@5  /SYS/MB/PCIE2
    pci@380/pci@1/pci@0/pci@6  /SYS/MB/PCIE3
    pci@380/pci@1/pci@0/pci@7  /SYS/MB/PCIE4
This shows us that pci@300 = pci_0, pci@340 = pci_1, pci@380 = pci_2, and pci@3c0 = pci_3.
Map Local Disk Devices To PCI Root
First we need to determine which disk devices are in the root zpool, so that we know which ones
cannot be removed.
# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 70.3G in 0h8m with 0 errors on Fri Feb 21 05:56:34 2014
config:
        NAME                       STATE     READ WRITE CKSUM
        rpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000CCA04385ED60d0  ONLINE       0     0     0
            c0t5000CCA0438568F0d0  ONLINE       0     0     0
errors: No known data errors
Next we must use mpathadm to find the Initiator Port Name. To do that we must look at slice 0
of c0t5000CCA04385ED60d0.
# mpathadm show lu /dev/rdsk/c0t5000CCA04385ED60d0s0
Logical Unit:  /dev/rdsk/c0t5000CCA04385ED60d0s2
        mpath-support:  libmpscsi_vhci.so
        Vendor:  HITACHI
        Product:  H109060SESUN600G
        Revision:  A606
        Name Type:  unknown type
        Name:  5000cca04385ed60
        Asymmetric:  no
        Current Load Balance:  round-robin
        Logical Unit Group ID:  NA
        Auto Failback:  on
        Auto Probing:  NA

        Paths:
                Initiator Port Name:  w5080020001940698
                Target Port Name:  w5000cca04385ed61
                Override Path:  NA
                Path State:  OK
                Disabled:  no

        Target Ports:
                Name:  w5000cca04385ed61
                Relative ID:  0

# mpathadm show initiator-port w5080020001940698
Initiator Port:  w5080020001940698
        Transport Type:  unknown
        OS Device File:  /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@1
Initiator Port:  w5080020001940698
        Transport Type:  unknown
        OS Device File:  /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@2
Initiator Port:  w5080020001940698
        Transport Type:  unknown
        OS Device File:  /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@8
Initiator Port:  w5080020001940698
        Transport Type:  unknown
        OS Device File:  /devices/pci@300/pci@1/pci@0/pci@2/scsi@0/iport@4
All four paths run through pci@300/pci@1/pci@0/pci@2, which is SASHBA0 on pci_0, so these
internal disks stay with whichever domain owns pci_0.
Map Network Devices To PCI Root
First we must determine the underlying device for each of our network interfaces.
# dladm show-phys net0
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 10000 full ixgbe0
In this case it is ixgbe0; we can then look at the device tree to see where it points, to find
which PCI Root this device is connected to.
# ls -l /dev/ixgbe0
lrwxrwxrwx 1 root root 53 Feb 12 2014 /dev/ixgbe0 ->
../devices/pci@300/pci@1/pci@0/pci@1/network@0:ixgbe0
Now we can see that it is using pci@300, which translates into pci_0.
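If you want to map every interface in one pass, a small loop over the parsable dladm output works (this assumes Solaris 11 dladm; the awk just prints the symlink target for each driver instance):
# for dev in $(dladm show-phys -p -o device); do
>   echo "$dev -> $(ls -l /dev/$dev | awk '{print $NF}')"
> done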
Map Infiniband Cards to PCI Root
Again lets determine the underlying device name of our infiniband interfaces, on my machine
they were defaulted at net2 and net3, however I had previously renamed the link to ib0 and ib1
for simplicity. This procedure is very similar to Ethernet cards.
# dladm show-phys ib0
LINK MEDIA STATE SPEED DUPLEX DEVICE
ib0 Infiniband up 32000 unknown ibp0
In this case our device is ibp0. So now we just check the device tree.
# ls -l /dev/ibp0
lrwxrwxrwx 1 root root 83 Nov 26 07:17 /dev/ibp0 ->
../devices/pci@380/pci@1/pci@0/pci@5/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib
:ibp0
We can see by the path, that this is using pci@380 which is pci_2.
Map Fibre Channel Cards to PCI Root
Now perhaps we need to have some Fibre Channel HBAs split up as well. The first thing we must do
is look at the cards themselves.
# luxadm -e port
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0/fp@0,0:devctl NOT CONNECTED
/devices/pci@300/pci@1/pci@0/pci@4/SUNW,qlc@0,1/fp@0,0:devctl NOT CONNECTED
Basically we are going to split our PCI devices by even and odd, with even staying with the primary
and odd going with the alternate. On the T5-2, this will result in the PCI-E cards on the left side
being for the primary, and the cards on the right for the alternate.
Here is a diagram of how the physical devices are mapped to PCI Root Complexes.
Virtual devices must be accessible on both servers, via the same service name (though
the underlying paths may be different).
Migrations can be either online (live) or offline (cold); the state of the domain determines
which one you get.
When doing a cold migration, virtual devices are not checked to ensure they exist on the
receiving end; you will need to check this manually.
Running the migration with the dry run flag (-n) will generate any errors that would be generated
in an actual migration, but without actually making any changes.
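For example, using the same source and target as in the examples below:
# ldm migrate-domain -n ldom1 root@server
Target Password:
If the dry run returns silently, the real migration should pass the same checks.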
Live Migration
When you are ready to perform the migration, remove the dry run flag. This process will
also do the appropriate safety checks to ensure that everything is good on the receiving end.
# ldm migrate-domain ldom1 root@server
Target Password:
Now the migration will proceed and, unless something goes wrong, the domain will come up on the
other system.
Live Migration With Rename
We can also rename the logical domain as part of the migration; we simply specify the new
name.
# ldm migrate-domain ldom1 root@server:ldom2
Target Password:
In this case the original name was ldom1 and the new name is ldom2.
Common Errors
Here are some common errors.
Bad Password or No LDM on Target
# ldm migrate-domain ldom1 root@server
Target Password:
Failed to establish connection with ldmd(1m) on target: server
Check that the 'ldmd' service is enabled on the target machine and
that the version supports Domain Migration. Check that the 'xmpp_enabled'
and 'incoming_migration_enabled' properties of the 'ldmd' service on
the target machine are set to 'true' using svccfg(1M).
Probable Fixes: Ensure you are attempting to migrate to the correct hypervisor, that you have the
username/password combination correct, that the user has the appropriate level of access to
ldmd, and that ldmd is running.
Missing Virtual Disk Server Devices
# ldm migrate-domain ldom1 root@server
Target Password:
The number of volumes in mpgroup 'zfs-ib-nfs' on the target (1) differs
from the number on the source (2)
Domain Migration of LDom ldom1 failed
Probable Fixes: Ensure that the underlying virtual disk devices match; if you are using
mpgroups, then the entire mpgroup must match on both sides.
Missing Virtual Switch Device
# ldm migrate-domain ldom1 root@server
Target Password:
Failed to find required vsw alternate-vsw0 on target machine
Domain Migration of LDom logdom1 failed
Probable Fixes: Ensure that the underlying virtual switch devices match in both locations.
Check Migration Progress
One thing to keep in mind is that during the migration process, the hypervisor that is being
evacuated is the authoritative one in terms of controlling the process, so status should be
checked there.
source# ldm list -o status ldom1
NAME
logdom1
STATUS
OPERATION PROGRESS TARGET
migration 20% 172.16.24.101:logdom1
It can however be checked on the receiving end, though it will look a little bit different.
target# ldm list -o status logdom1
NAME
logdom1
STATUS
OPERATION PROGRESS SOURCE
migration 30% ak00176306-primary
The big thing to notice is that this side shows the source; also, if we changed the name as
part of the migration, it will show the domain under its new name.
Cancel Migration
Of course, if you need to cancel a migration, this is done on the hypervisor that is being
evacuated, since it is authoritative.
# ldm cancel-operation migration ldom1
Domain Migration of ldom1 has been cancelled
This will allow you to cancel any accidentally started migrations; however, anything that you
actually needed to cancel would likely have generated an error before you got this far.
Cross CPU Considerations
By default, logical domains are created to use very specific CPU features based on the hardware
they run on; as such, live migration only works by default between the exact same CPU type and
generation. However, if we change the CPU architecture (cpu-arch) property of the domain, we can
relax this restriction. The available values are:
Native: Allows migration between the same CPU type and generation.
Generic: Uses the most generic processor feature set, allowing the widest live migration
compatibility.
Migration Class 1: Allows migration between T4, T5, and M5 server classes (also supports M10,
depending on firmware version).
SPARC64 Class 1: Allows migration between Fujitsu M10 servers.
Here is an example of how you would change the CPU architecture of a domain. I personally
recommend using this sparingly and building your hardware infrastructure so that you have
capacity on the same generation of hardware; however, in certain circumstances this can
make a lot of sense, provided the performance implications are not too great.
# ldm set-domain cpu-arch=migration-class1 ldom1
I personally wouldn't count on the cross-CPU functionality; however, in some cases it might
make sense for your situation. Either way, live migration of logical domains is done in a very
effective manner and adds a lot of value.
Now we remove the unneeded PCI Roots from the primary domain; this will allow us to assign
them to the alternate domain.
# ldm remove-io pci_1 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm remove-io pci_3 primary
We have set this with 2 cores and 16GB of RAM. Your sizing will depend on your use case.
Add PCI Devices to Alternate Domain
We are assigning pci_1 and pci_3 to the alternate domain; it will have direct access to two of
the on-board NICs, two of the disks, and half of the PCI slots. It will also inherit the CDROM as
well as the USB controller.
Also, one quick note: the disks are not split evenly; pci_0 has four disks, while pci_3 has only
two. That said, if your configuration includes six disks, then I would recommend using the third
and fourth in the primary as a non-redundant storage pool, perhaps to be used to stage firmware
and such for patching. But the bottom line is that you need to purchase the hardware with a
minimum of four drives.
# ldm add-io pci_1 alternate
# ldm add-io pci_3 alternate
Here we have NICs and disks on our alternate domain; now we just need something to boot from
and we can get the install going.
Let's save our config before moving on.
# ldm add-config alternate-domain
Next we need to determine what port telnet is listening on for this particular domain. In our case
we can see it is 5000.
# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 16 16G 0.2% 0.2% 17h 32m
alternate active -n--v- 5000 16 16G 0.0% 0.0% 17h 45m
When using these various consoles you always need to be attentive to the escape sequence, which
in the case of telnet is ^] (CTRL + ]). Once we have determined where we can telnet to, we can
start the connection. Also important to note: you will see "::1: Connection refused". This is
because we are connecting to localhost, which is tried over IPv6 first; if you don't want to see
that error, connect to 127.0.0.1 (the IPv4 loopback address) instead.
# telnet localhost 5000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to AK00176306.
Escape character is '^]'.
Connecting to console "alternate" in group "alternate" ....
Press ~? for control options ..
telnet> quit
Connection to AK00176306 closed.
I will let you go through the install on your own, but I am assuming that you know how to install
the OS itself.
Now let's save our config, so that we don't lose our progress.
# ldm add-config alternate-domain-config
At this point, if we have done everything correctly, we can reboot the primary domain without
disrupting service to the alternate domain. Doing pings during a reboot will illustrate
where we are in the build. Of course, you would have to have networking configured on the
alternate domain; and don't forget the simple stuff like mirroring your rpool, as it would
be a pity to go to all this trouble and not have a basic level of redundancy such as mirrored disks.
Test Redundancy
At this point the alternate and the primary domain are completely independent. To validate this, I
recommend setting up a ping to both the primary and the alternate domain and rebooting the
primary. If done correctly, you will not lose any pings to the alternate domain. Keep in
mind that while the primary is down you will not be able to utilize the control domain, in other
words the only domain which can configure and start/stop other domains.
LDOM: Adding a virtual disk to a LDOM guest domain
# Tested on Solaris 10 (both Service Domain and Guest Domain) with ZFS volumes acting as
# virtual disks.
# I'm using a SAN LUN presented to the service domain like the following one:
echo | format
[...]
33. c4t60060481234290101102103030343635d0 <EMC-SYMMETRIX-5771
cyl 16378 alt 2 hd 15 sec 128>
/scsi_vhci/ssd@g60060481234290101102103030343635
[...]
# and its dedicated zpools and zfs datasets are these:
zpool list | grep P05
P05-0    119G   95,5G   23,5G   80%   ONLINE   -
P05-2    149G   122G    26,6G   82%   ONLINE   -
zfs list | grep P05
P05-0             115G   2,11G   21K     /P05-0
P05-0/VD_DATA-0   85G    5,25G   81,9G   -
P05-0/VD_OS-0     30G    18,5G   13,6G   -
P05-0/admin       18K    482K    18K     /P05-0/admin
P05-0/export      126K   374K    126K    /P05-0/export
P05-2             147G   176M    21K     /P05-2
P05-2/VD_DATA-2   146G   24,2G   122G    -
# I'll take "P05-3" as the name for the new zpool and "VD_DATA-3" as the
# virtual disk name
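# The pool and volume creation commands would look roughly like this
# (a sketch; the LUN is the one shown above, and the volume size is
# inferred from the listings that follow):
zpool create P05-3 c4t60060481234290101102103030343635d0
zfs create -V 14g P05-3/VD_DATA-3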
# After creating the pool, the new layout and the pool properties look like this:
zpool list | grep P05
P05-0    119G    95,5G   23,5G   80%   ONLINE   -
P05-2    149G    122G    26,6G   82%   ONLINE   -
P05-3    14,9G   92,5K   14,9G   0%    ONLINE   -
zpool get all P05-3
NAME   PROPERTY       VALUE                 SOURCE
P05-3  size           14,9G                 -
P05-3  capacity       0%                    -
P05-3  altroot        -                     default
P05-3  health         ONLINE                -
P05-3  guid           6696447729916926011   default
P05-3  version        29                    default
P05-3  bootfs         -                     default
P05-3  delegation     on                    default
P05-3  autoreplace    off                   default
P05-3  cachefile      -                     default
P05-3  failmode       wait                  default
P05-3  listsnapshots  on                    default
P05-3  autoexpand     off                   default
P05-3  free           14,9G                 -
P05-3  allocated      92,5K                 -
P05-3  readonly       off                   -
zfs list -r P05-3
NAME              USED    AVAIL   REFER   MOUNTPOINT
P05-3             14,4G   206M    31K     /P05-3
P05-3/VD_DATA-3   14,4G   14,6G   16K     -
# Current vdisks assigned to the guest domain:
NAME              VOLUME                       TOUT  ID  DEVICE  SERVER   MPGROUP
myguest05-vdos-0  P050-VDOS-0@primary-vds00          0   disk@0  primary
myguest05-data-0  P050-VDATA-0@primary-vds00         1   disk@1  primary
myguest05-data-2  P052-VDATA-2@primary-vds00         3   disk@3  primary
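# Export the new volume through the virtual disk server and attach it to the
# guest. A sketch (the guest domain name myguest05 is assumed from the vdisk
# names above):
ldm add-vdsdev /dev/zvol/rdsk/P05-3/VD_DATA-3 P053-VDATA-3@primary-vds00
ldm add-vdisk myguest05-data-3 P053-VDATA-3@primary-vds00 myguest05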
# After adding the new volume, the guest shows the extra vdisk:
NAME              VOLUME                       TOUT  ID  DEVICE  SERVER   MPGROUP
myguest05-vdos-0  P050-VDOS-0@primary-vds00          0   disk@0  primary
myguest05-data-0  P050-VDATA-0@primary-vds00         1   disk@1  primary
myguest05-data-2  P052-VDATA-2@primary-vds00         3   disk@3  primary
myguest05-data-3  P053-VDATA-3@primary-vds00         2   disk@2  primary
# We are done, and the new disk should be visible on the guest domain (it may
# be necessary to rescan devices, e.g. with devfsadm).
-----------------------------------------------------------------------------------------
# If we use physical devices as virtual disks instead, we'll do something like this:
NAME           VOLUME                        TOUT  ID  DEVICE   SERVER   MPGROUP
guest01_sys0   guest01_sys0vol@primary-vds0        0   disk@0   primary  guest01_sys0map
guest01_d0     guest01_d0vol@primary-vds0          1   disk@1   primary  guest01_d0map
guest01_d01    guest01_d01vol@primary-vds0         2   disk@2   primary  guest01_d01map
guest01_d02    guest01_d02vol@primary-vds0         3   disk@3   primary  guest01_d02map
guest01_d03    guest01_d03vol@primary-vds0         4   disk@4   primary  guest01_d03map
guest01_d04    guest01_d04vol@primary-vds0         5   disk@5   primary  guest01_d04map
guest01_d05    guest01_d05vol@primary-vds0         6   disk@6   primary  guest01_d05map
guest01_d06    guest01_d06vol@primary-vds0         7   disk@7   primary  guest01_d06map
guest01_d07    guest01_d07vol@primary-vds0         8   disk@8   primary  guest01_d07map
guest01_d08    guest01_d08vol@primary-vds0         9   disk@9   primary  guest01_d08map
guest01_d09    guest01_d09vol@primary-vds0        10   disk@10  primary  guest01_d09map
guest01_d10    guest01_d10vol@primary-vds0        11   disk@11  primary  guest01_d10map
guest01_d11    guest01_d11vol@primary-vds0        12   disk@12  primary  guest01_d11map
guest01_d12    guest01_d12vol@primary-vds0        13   disk@13  primary  guest01_d12map
guest01_d13    guest01_d13vol@primary-vds0        14   disk@14  primary  guest01_d13map
guest01_d14    guest01_d14vol@primary-vds0        15   disk@15  primary  guest01_d14map
guest01_d15    guest01_d15vol@primary-vds0        16   disk@16  primary  guest01_d15map
guest01_d16    guest01_d16vol@primary-vds0        17   disk@17  primary  guest01_d16map
guest01_d17    guest01_d17vol@primary-vds0        18   disk@18  primary  guest01_d17map
guest01_d18    guest01_d18vol@primary-vds0        19   disk@19  primary  guest01_d18map
guest01_d19    guest01_d19vol@primary-vds0        20   disk@20  primary  guest01_d19map
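# One more physical disk is exported and attached in the same way. A sketch
# (the device path here is illustrative, and the guest name guest01 is
# assumed from the volume names):
-bash-3.2# ldm add-vdsdev mpgroup=guest01_d20map /dev/rdsk/cXtYYYYdZs2 guest01_d20vol@primary-vds0
-bash-3.2# ldm add-vdisk guest01_d20 guest01_d20vol@primary-vds0 guest01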
NAME           VOLUME                        TOUT  ID  DEVICE   SERVER   MPGROUP
guest01_sys0   guest01_sys0vol@primary-vds0        0   disk@0   primary  guest01_sys0map
guest01_d0     guest01_d0vol@primary-vds0          1   disk@1   primary  guest01_d0map
guest01_d01    guest01_d01vol@primary-vds0         2   disk@2   primary  guest01_d01map
guest01_d02    guest01_d02vol@primary-vds0         3   disk@3   primary  guest01_d02map
guest01_d03    guest01_d03vol@primary-vds0         4   disk@4   primary  guest01_d03map
guest01_d04    guest01_d04vol@primary-vds0         5   disk@5   primary  guest01_d04map
guest01_d05    guest01_d05vol@primary-vds0         6   disk@6   primary  guest01_d05map
guest01_d06    guest01_d06vol@primary-vds0         7   disk@7   primary  guest01_d06map
guest01_d07    guest01_d07vol@primary-vds0         8   disk@8   primary  guest01_d07map
guest01_d08    guest01_d08vol@primary-vds0         9   disk@9   primary  guest01_d08map
guest01_d09    guest01_d09vol@primary-vds0        10   disk@10  primary  guest01_d09map
guest01_d10    guest01_d10vol@primary-vds0        11   disk@11  primary  guest01_d10map
guest01_d11    guest01_d11vol@primary-vds0        12   disk@12  primary  guest01_d11map
guest01_d12    guest01_d12vol@primary-vds0        13   disk@13  primary  guest01_d12map
guest01_d13    guest01_d13vol@primary-vds0        14   disk@14  primary  guest01_d13map
guest01_d14    guest01_d14vol@primary-vds0        15   disk@15  primary  guest01_d14map
guest01_d15    guest01_d15vol@primary-vds0        16   disk@16  primary  guest01_d15map
guest01_d16    guest01_d16vol@primary-vds0        17   disk@17  primary  guest01_d16map
guest01_d17    guest01_d17vol@primary-vds0        18   disk@18  primary  guest01_d17map
guest01_d18    guest01_d18vol@primary-vds0        19   disk@19  primary  guest01_d18map
guest01_d19    guest01_d19vol@primary-vds0        20   disk@20  primary  guest01_d19map
guest01_d20    guest01_d20vol@primary-vds0        21   disk@21  primary  guest01_d20map
root@labsrv:~# ldm -V
System PROM:
 Hostconfig   v. 1.1.1
 Hypervisor   v. 1.10.1.
 OpenBoot     v. 4.33.1

root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-c--  UART  64    32256M  0.0%  1d 10h 15m
Here we are creating the Virtual Console Concentrator service. This is what allows
us to connect to the consoles of the domains for installation and reconfiguration.
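The command is the same one used earlier in this document:
root@labsrv:~# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary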
Verify Changes
root@labsrv:~# ldm list-services primary
VCC
    NAME          LDOM     PORT-RANGE
    primary-vcc0  primary  5000-5100

VSW
    NAME          LDOM     MAC                NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID  MTU   MODE  INTER-VNET-LINK
    primary-vsw0  primary  00:14:4f:fb:bd:bd  net0     0   switch@0            1                1          1500        on

VDS
    NAME          LDOM     VOLUME  OPTIONS  MPGROUP  DEVICE
    primary-vds0  primary
By default we have only the primary domain, which means the host operating system itself.
root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-c--  UART  64    32256M  0.0%  1d 30m
We have to reduce the number of vCPUs and the amount of memory in the control domain to
make resources available to the logical domains.
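A sketch of the commands that trigger the notice below (the 4G memory value matches the later listing; the vCPU count is an assumption):
root@labsrv:~# ldm start-reconf primary
root@labsrv:~# ldm set-memory 4G primary
root@labsrv:~# ldm set-vcpu 8 primary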
Initiating a delayed reconfiguration operation on the primary domain. All configuration
changes for other domains are disabled until the primary domain reboots, at which time
the new configuration for the primary domain will also take effect.
root@labsrv:~# svcs vntsd
STATE          STIME    FMRI
online         13:10:01 svc:/ldoms/vntsd:default
Reboot now.
root@labsrv:~# reboot
After the reboot, make sure the primary domain reflects the new memory and
vCPU configuration.
root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-cv-  UART        4G      0.4%  00h 15m
Add the Solaris ISO image to the Virtual Disk Server service.
root@labsrv:~# ldm add-vdsdev /rpool/ldoms/OSImages/sol-10-u10-ga2-sparc-dvd.iso solaris10_media@primary-vds0
Before we start the logical domain, we need to bind the associated resources to it.
This will also show us the port that the console is running on, so
that we can connect to it.
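A minimal sketch of creating newdom1 first (the 4G of memory matches the listings below; the vCPU count and the vnet/vdisk names are assumptions):
root@labsrv:~# ldm create newdom1
root@labsrv:~# ldm set-vcpu 8 newdom1
root@labsrv:~# ldm set-memory 4G newdom1
root@labsrv:~# ldm add-vnet vnet1 primary-vsw0 newdom1
root@labsrv:~# ldm add-vdisk solaris10_media solaris10_media@primary-vds0 newdom1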
root@labsrv:~# ldm list
NAME     STATE     FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active    -n-cv-  UART        4G      0.8%  4h 20m
newdom1  inactive  ------              4G
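Binding attaches the resources and assigns the console port, which the next listing shows as 5000:
root@labsrv:~# ldm bind newdom1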
root@labsrv:~# ldm list
NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-cv-  UART        4G      3.0%  4h 21m
newdom1  bound   ------  5000        4G
Connect to newdom1's console prior to starting the domain, because we prefer to watch the
boot-up process so that we know everything is working properly; this means we will
have a separate SSH session which we use for telnet.
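Using the console port from the listing above:
root@labsrv:~# telnet localhost 5000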
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to t4.
Escape character is '^]'.
Evaluating:
{0} ok