


Transition to Oracle Solaris 11.x


Module 1 - Introducing the Oracle Solaris 11 New Features and Enhancements
Module 2 - Managing Software Packages in Oracle Solaris 11 IPS
Module 3 - Installing the Oracle Solaris 11 Operating System
Module 4 - Automatic Installer
Module 5 - Oracle Solaris 11 Network Administration
Module 6 - Installing and Administering Oracle Solaris 11 Zones
Module 7 - Oracle Solaris 11 ZFS Enhancements
Module 8 - Oracle Solaris 11 Security Enhancements





Module 1
Introducing Oracle Solaris 11
New Features and Enhancements







New operating system installation features

New software package update features
Oracle Solaris 11 zone features
New networking features and enhancements
Storage enhancements
System security enhancements


Image Packaging System (IPS)
Completely redesigned software packaging system

Comprehensive delivery framework for software life cycle:
Software installation
Software updates
Operating system upgrades
Removal of software packages
Intelligent package management







Operating System Installation
Unattended installation
Oracle Solaris 11 Automated Installer (AI)
Network installation
Installation manifest
Client profiles
Interactive installation
Oracle Solaris 11 LiveCD installation
Suited for desktops and notebooks
Graphical user interface (GUI)
Interactive text install
Suited for server deployments
Text-based interface



Oracle Solaris 11 Zones
Support for Oracle Solaris 10 Zones
New boot environment for zones
Zone resource monitoring
Delegated administration

Networking Features and Enhancements
Network virtualization
Network Auto-Magic (NWAM)
Improved IP multipathing (IPMP)
New sockets architecture
Load balancing
Bridging and tunneling
The ipadm command


Storage Enhancements
ZFS enhancements
Default file system
Deduplication
ZFS snapshot differences (zfs diff)
ZFS shadow migration
COMSTAR
CIFS support
System Security Enhancements
Secure by default
Root treated as a role
Robust data encryption
Driver support for Trusted Platform Module (TPM)
Trusted Extensions enhancements

Comparing Key Features









Module 2
Managing Software Packages in Oracle Solaris 11
(IPS)




Design Goals of New Packaging System

No distinction between patching and packaging: a single update stream
All required data included in packages: no cluster definition files or
external metadata
Repository-based
Dependencies fully specified and managed
Easy to recover from errors
Changes take place safely on a live system
Package management across different environments



Image Packaging System (IPS)
No distinction between patching and packaging: a single update stream

IPS Naming - packages specified by an FMRI
pkg://{publisher}/{package name}@{version}

Version specified as
{component version},{build version}-{branch version}:{time}

Example:
pkg://solaris/package/pkg@0.5.11,5.11-0.151:20101027T054323Z
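As an aside, the FMRI fields can be split apart with plain POSIX shell string operations; this is only an illustration of the naming scheme, not an IPS tool:

```shell
# Decompose the example IPS FMRI into publisher, package name, and version.
# Pure POSIX parameter expansion; runs in any sh-compatible shell.
fmri='pkg://solaris/package/pkg@0.5.11,5.11-0.151:20101027T054323Z'

rest=${fmri#pkg://}      # drop the scheme -> solaris/package/pkg@...
publisher=${rest%%/*}    # text before the first '/' -> solaris
name_ver=${rest#*/}      # package/pkg@0.5.11,5.11-0.151:...
name=${name_ver%%@*}     # package/pkg
version=${name_ver#*@}   # 0.5.11,5.11-0.151:20101027T054323Z

echo "publisher=$publisher"
echo "name=$name"
echo "version=$version"
```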

Oracle Solaris 11 2010_11 or later
SPARC and x86 architectures
Web-based or local package repository
Repository mirroring
Client access to IPS server

IPS Package Contents

Contents defined by a manifest
Manifest contains actions, which might have attributes

Actions include
Files, directories, symlinks, hard links
Devices, users, groups
Set generic key=value package metadata
Legacy SVR4 compatibility information
Dependencies
Signatures
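For orientation, a minimal, hypothetical manifest fragment showing several of these action types (the package name, paths, and values are invented for illustration; pkg(5) documents the authoritative syntax):

```
set name=pkg.fmri value=pkg://example/mytool@1.0,5.11-0.175
set name=pkg.summary value="Example tool"
dir path=usr/share/mytool owner=root group=sys mode=0755
file path=usr/bin/mytool owner=root group=bin mode=0555
link path=usr/bin/mt target=mytool
depend type=require fmri=pkg:/system/library
```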

Installation Bundles

solaris-large-server
Pretty much the whole Solaris bundle, including the desktop.
Similar to the SUNWCall cluster in Oracle Solaris 10

solaris-small-server
Installation bundle appropriate for a smaller server

solaris-desktop
Installation bundle appropriate for a desktop

Image Packaging System (IPS)
Delivery framework for software life cycle:

Typical Deployment


Package Repository

Create Local IPS Repository
From an ISO

Sol11# zfs create -p -o mountpoint=/export/repo/solaris11 \
rpool/export/repo/solaris11
Sol11# mount -F hsfs /var/tmp/sol-11-repo-full.iso /mnt
Sol11# rsync -aqP /mnt/repo /export/repo/solaris11
Sol11# pkgrepo refresh -s /export/repo/solaris11/repo

Replicating Another Network Repository

Sol11# zfs create -p -o mountpoint=/export/repo/solaris11 \
rpool/export/repo/solaris11
Sol11# pkgrepo create /export/repo/solaris11
Sol11# pkgrecv -s http://pkg.oracle.com/solaris/release \
-d /export/repo/solaris11 '*'
Sol11# pkgrepo refresh -s /export/repo/solaris11

Configuring IPS Repository Services

Sol11# svccfg -s application/pkg/server \
setprop pkg/inst_root=/export/repo/solaris11
Sol11# svccfg -s application/pkg/server setprop pkg/readonly=true
Sol11# svccfg -s application/pkg/server setprop pkg/port=portnumber
Sol11# svcadm refresh application/pkg/server
Sol11# svcadm enable application/pkg/server

Package Repository
I. Default package repository: http://pkg.oracle.com/solaris/release


II. Creating a Local Repository:
Download the ISO image or copy from the default package repository.

1. Obtain software packages:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html






2. Create a ZFS file system for the repository
A good practice is to store the repository in a separate ZFS file system.


Package Repository (cont.)
3. Copy the packages to the repository.
If you copy from an ISO image, use the rsync command. If you copy directly from
another repository, use the pkgrecv command. When copying from another repository,
you should have already obtained a key and certificate and installed them on your system.

# zpool create zasoby cxtxdx ; zfs set mountpoint=none zasoby
# zfs create -o mountpoint=/IPS zasoby/IPS
# lofiadm -a /../sol-11-xxx-xxx-repo-full.iso
# mount -F hsfs /dev/lofi/1 /mnt
# rsync -aP /mnt/repo /IPS

4. Set the appropriate pkg.depotd properties.
Make sure pkg/inst_root and pkg/readonly are set up appropriately.
# svccfg -s application/pkg/server setprop \
pkg/inst_root=/IPS/repo
# svccfg -s application/pkg/server setprop pkg/readonly=true
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server
# pkgrepo refresh -s /IPS/repo

Package Repository (cont.)
5. Set the preferred publisher.
The default preferred publisher for Oracle Solaris 11.1 systems is solaris, and the default origin
for that publisher is http://pkg.oracle.com/solaris/release. If you want your clients to get
packages from your local repository, you must reset the origin for the solaris publisher.

# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://pkg.oracle.com/solaris/release/

# pkg set-publisher -G '*' -g http://Solaris11.1-Server/ solaris
# pkg set-publisher -m file:///IPS/repo solaris
# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://Solaris11.1-Server/
solaris mirror online F file:///IPS/repo/

6. Refresh the repository catalog.
Be sure to use the pkgrepo refresh command to update the repository catalogs with
any new packages found in the repository.
# pkgrepo refresh -s /IPS/repo

Configuring the IPS Clients

# pkg publisher
PUBLISHER TYPE STATUS URI
solaris (preferred) origin online http://pkg.oracle.com/solaris/release/

# pkg set-publisher -G '*' -g http://servername.example.com/ solaris

# pkg publisher
PUBLISHER TYPE STATUS URI
solaris (preferred) origin online http://servername.example.com/



zone1# pkg publisher
PUBLISHER TYPE STATUS URI
solaris (syspub) origin online proxy://http://solaris/






Package Management: pkg


Example New Package Searching

Package Installation


Package Installation (cont.)



Package Contents


Package Contents (cont.)



Repairing Packages

Sol11# rm /kernel/drv/nxge.conf OOPS!
Sol11# pkg search -l -Ho pkg.name /kernel/drv/nxge.conf
driver/network/ethernet/nxge

Sol11# pkg verify -v driver/network/ethernet/nxge
PACKAGE STATUS
pkg://solaris/driver/network/ethernet/nxge ERROR
file: kernel/drv/nxge.conf
Missing: regular file does not exist

Sol11# pkg fix driver/network/ethernet/nxge
Verifying: pkg://solaris/driver/network/ethernet/nxge
ERROR
file: kernel/drv/nxge.conf
Missing: regular file does not exist
Created ZFS snapshot: 2012-08-28-05:34:02

Upgrade = pkg update

Sol11# pkg update
Packages to update: 266
Create boot environment: Yes
DOWNLOAD PKGS FILES XFER (MB)
Completed 266/266 4496/4496 179.2/179.2
PHASE ACTION
Removal Phase 983/983
Install Phase 1116/1116
Update Phase 6677/6677
PHASE ITEMS
Package State Update Phase 532/532
Package Cache Update Phase 266/266
Image State Update Phase 2/2

A clone of solaris exists and has been updated and activated.
On the next boot the Boot Environment solaris-1 will be mounted on '/'.
Reboot when ready to switch to this updated BE.


Boot Environments
Sol11# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
b-140 - - 11.51M static 2012-05-26 12:47
b-141 - - 11.98M static 2012-06-10 15:40
b-142 - - 10.14M static 2012-06-24 08:05
b-143 - - 13.85M static 2012-07-12 09:47
b-144 - - 1.48G static 2012-07-22 12:09
b-145 - - 14.64M static 2012-08-03 22:23
b-146 - - 10.43M static 2012-08-20 15:31
b-147 - - 12.29M static 2012-09-06 19:28
b-148 - - 13.11M static 2012-09-23 17:05
b-149 - - 14.49M static 2012-09-30 18:53
b-150 - - 11.83M static 2012-10-15 10:32
b-151 - - 130.94M static 2012-11-15 10:10
b-152 NR / 56.03G static 2012-11-17 16:32



Boot Environments (cont.)

Sol11# beadm activate b-151
Sol11# beadm mount b-151 /tmp/mnt
Sol11# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
b-140 - - 11.51M static 2012-05-26 12:47
b-141 - - 11.98M static 2012-06-10 15:40
b-142 - - 10.14M static 2012-06-24 08:05
b-143 - - 13.85M static 2012-07-12 09:47
b-144 - - 1.48G static 2012-07-22 12:09
b-145 - - 14.64M static 2012-08-03 22:23
b-146 - - 10.43M static 2012-08-20 15:31
b-147 - - 12.29M static 2012-09-06 19:28
b-148 - - 13.11M static 2012-09-23 17:05
b-149 - - 14.49M static 2012-09-30 18:53
b-150 - - 11.83M static 2012-10-15 10:32
b-151 R /tmp/mnt 53.82G static 2012-11-15 10:10
b-152 N / 1.71G static 2012-11-17 16:32

Boot Environments (cont.)

Sol11# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris11-b149 N / 81.66M static 2011-10-13 14:07
solaris11-b160 R - 27.74G static 2012-03-11 10:14

Sol11# beadm destroy solaris11-b160
Are you sure you want to destroy solaris11-b160?
This action cannot be undone(y/[n]): y

Sol11# beadm list
BE Active Mountpoint Space Policy Created
-- ------ ---------- ----- ------ -------
solaris11-b149 R / 20.15G static 2011-10-13 14:07





Module 3
Installing the Oracle Solaris 11
Operating System


Oracle Solaris 11 Installation Options

Oracle Solaris 11 LiveCD installation

Oracle Solaris 11 Text installation

Oracle Solaris 11 Automated installation


Installation images can be downloaded from:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads




Oracle Solaris 11 LiveCD installation

Oracle Solaris 11 LiveCD installation

Oracle Solaris 11 Text installation



Oracle Solaris 11 Text installation

Oracle Solaris 11 Text installation

Oracle Solaris 11 Automated installation

SMF-Based System and Network Configuration

System and network configuration files moved from /etc
to the SMF repository
System and network configuration changes:
File system sharing
Network configuration commands: ipadm, dladm, svccfg, svcprop
The system host name: config/nodename
Power management: poweradm command
Time zone: system/timezone
Naming services: system/identity
Domain name: system/identity/domain
Environment variables: system/environment


Configuring an Oracle Solaris 11 Image

The sysconfig utility
Replaces sys-unconfig and sysidtool
Unconfigure the system
sysconfig unconfigure
The unconfigure operation
Configure the system
sysconfig configure
System configuration (SC) profile creation
sysconfig create-profile





Module 4
Oracle Solaris 11 Automated Installation
(AI)




Using AI
ok> boot cdrom - install
Enter the URL for the AI manifest [HTTP, default]:


Automated Installation

Basic Flow of Solaris Automated Installation

Configure AI install service


Associate Clients with Install Services



Example

Sol11# installadm create-client -e 00:14:4f:fc:00:02 -n basic_ai
Warning: Service svc:/network/dns/multicast:default is not online.
Installation services will not be advertised via multicast DNS.

Sol11# svcadm enable network/dns/multicast
root@solaris:/# svcs network/dns/multicast
STATE STIME FMRI
online 20:38:32 svc:/network/dns/multicast:default

Sol11# installadm delete-client 00:14:4f:fc:00:02
Sol11# installadm create-client -e 00:14:4f:fc:00:02 -n basic_ai
Sol11# installadm create-client -e 00:14:4f:fc:00:03 -n basic_ai
Sol11# installadm list -c
Service Name Client Address Arch Image Path
------------ -------------- ---- ----------
basic_ai 00:14:4F:FC:00:03 sparc /AI/basic_ai
00:14:4F:FC:00:02 sparc /AI/basic_ai

Sol11# installadm list -m
Service Name Manifest Status
------------ -------- ------
basic_ai orig_default Default
default-sparc orig_default Default

Minimum Requirements for AI Use
Make sure the install server has a static IP address and default route.
Install the installation tools package, install/installadm.
Run the installadm create-service command.
Make sure the clients can access a DHCP server.
Make sure the necessary information is available in the DHCP configuration.
Make sure the clients can access an IPS software package repository.

The default service is used for all installations on clients of that architecture that are not
explicitly associated with a different install service with the create-client subcommand.

Customize Installation Instructions
Create a custom AI manifest
Run the installadm create-manifest command to add a new manifest to the default-arch
install service. Specify criteria for clients to select this manifest.

Static Manifests - default manifest
Installs the solaris-large-server package set from Oracle's
Solaris repository to the firmware-designated boot disk. Sysconfig is
invoked automatically at first boot to interactively configure the basic
system.

Package repositories and lists; major group packages:
solaris-small-server, solaris-large-server,
solaris-desktop
Target disk: choose by device path, volume id, type,
vendor, size, or container/receptacle/occupant (CRO)
label; ZFS configuration
Locales are installed/removed using package facets; all locales are
installed by default

Derived Manifests

Dynamically generate the manifest in a script
Scales AI management by reducing the number of manifests
maintained by administrators
The most effective model is to load a template manifest and modify
specific elements
The script uses the aimanifest command as the interface
to generate the AI manifest
The generated manifest is located on the client at:
/system/volatile/manifest.xml

Criteria for client to select manifest

Criteria for client to select manifest

Sol11# vi /manifests/criteria_basic_ai.xml

Sol11# installadm create-manifest -n basic_ai -f
/manifests/serverA_manifest.xml -C /manifests/criteria_basic_ai.xml
<ai_criteria name="mac">
<value>0:14:4F:20:53:97</value>
</ai_criteria>

<ai_criteria
name="mac">
<range>
0:14:4F:20:53:94
0:14:4F:20:53:A0
</range>
</ai_criteria>

<ai_criteria name="ipv4">
<value>10.6.68.127</value>
</ai_criteria>

<ai_criteria name="ipv4">
<range>
10.6.68.1
10.6.68.200
</range>
</ai_criteria>

<ai_criteria name="platform">
<value>
SUNW,Sun-Fire-T200
</value>
</ai_criteria>

<ai_criteria name="cpu">
<value>sparc</value>
</ai_criteria>

<ai_criteria name="network">
<value>10.0.0.0</value>
</ai_criteria>
<ai_criteria name="network">
<range>
11.0.0.0
12.0.0.0
</range>
</ai_criteria>

<ai_criteria name="mem">
<value>4096</value>
</ai_criteria>
<ai_criteria name="mem">
<range>
2048
unbounded
</range>
</ai_criteria>

<ai_criteria name="hostname">
<value>host1 host2 </value>
</ai_criteria>

<ai_criteria name="zonename">
<value> zoneA zoneB </value>
</ai_criteria>
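The MAC range criterion above selects any client whose address falls between the two bounds. As an illustration only (plain POSIX shell, not an installadm interface), a range check can be sketched like this:

```shell
# Convert a colon-separated MAC address to one integer so that range
# membership can be tested numerically.  Plain POSIX sh arithmetic.
mac_to_num() (
    IFS=:
    set -- $1
    n=$(( 0x$1 << 40 | 0x$2 << 32 | 0x$3 << 24 ))
    echo $(( n | 0x$4 << 16 | 0x$5 << 8 | 0x$6 ))
)

lo=$(mac_to_num 0:14:4F:20:53:94)      # lower bound from the example
hi=$(mac_to_num 0:14:4F:20:53:A0)      # upper bound from the example
client=$(mac_to_num 0:14:4F:20:53:97)  # candidate client MAC

if [ "$client" -ge "$lo" ] && [ "$client" -le "$hi" ]; then
    echo "client is in range"
fi
```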


Deploying Zones with AI

Zones can be specified in the AI manifest
<configuration type="zone" name="zone1"
source="http://server/zone1/config"/>
<configuration type="zone" name="zone2"
source="file:///net/server/zone2/config"/>

The config file is the zone's configuration file, as output by
zonecfg export

Automatically installed on first boot of the global zone
svc:/system/zones-install:default

Customize Installation
Sol11# ls /usr/share/auto_install/manifest/
ai_manifest.xml default.xml zone_default.xml
Sol11# ls /AI/basic_ai/auto_install/manifest/
ai_manifest.xml default.xml zone_default.xml

Sol11# cp /AI/basic_ai/auto_install/manifest/default.xml \
/manifests/serverA_manifest.xml
Sol11# vi /manifests/serverA_manifest.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
<ai_instance name="serverA_ai_instance"> <!-- changed from "default" -->
<target>
<logical>
<zpool name="zasoby" is_root="true"> <!-- changed from "rpool" -->
<filesystem name="export" mountpoint="/export"/>
<filesystem name="export/home"/>
<filesystem name="soft" mountpoint="/soft"/>
<be name="be_systemA"/> <!-- changed from "solaris" -->
</zpool>
</logical>
</target>
<software type="IPS">
<destination>
<image>

Customize Installation (cont.)
<!-- Specify locales to install -->
<facet set="false">facet.locale.*</facet>
. . .
<facet set="true">facet.locale.zh_CN</facet>
<facet set="true">facet.locale.zh_TW</facet>
</image>
</destination>
<configuration type="zone" name="zone1" source="http://server/zone1/config"/>
<configuration type="zone" name="zone2" source="file:///net/server/zone2/config"/>
<source>
<publisher name="solaris">
<origin name="http://solaris/"/>
<origin name="http://pkg.oracle.com/solaris/release"/>
</publisher>
</source>
<!--
By default the latest build available, in the specified IPS repository, is installed.
If another build is required, the build number has to be appended to the 'entire'
package in the following form: <name>pkg:/entire@0.5.11-0.build#</name>
-->
<software_data action="install">
<name>pkg:/entire@latest</name>
<name>pkg:/group/system/solaris-large-server</name>
</software_data>
</software>
</ai_instance>
</auto_install>

Customize Installation (cont.)

Sol11# installadm create-manifest -n basic_ai \
-f /manifests/serverA_manifest.xml -c mac="0:14:4f:fc:0:2"

Sol11# installadm list -m -n basic_ai
Manifest Status Criteria
-------- ------ --------
serverA_ai_instance mac = 00:14:4F:FC:00:02
orig_default Default None

Sol11# installadm export -n basic_ai -m serverA_ai_instance

<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
<ai_instance name="serverA_ai_instance">
<target>
<logical>
<zpool name="zasoby" is_root="true">
<filesystem name="export" mountpoint="/export"/>
<filesystem name="export/home"/>
<filesystem name="soft" mountpoint="/soft"/>
<be name="be_systemA"/>

System Configuration Profiles
To specify system configuration parameters such as time zone, user accounts, and networking,
provide an SMF system configuration profile file.
Create a system configuration profile
The installadm create-profile command validates and adds the profile to the default-arch
install service. Specify criteria to select which clients should use this system configuration
profile. If no criteria are specified, the profile is used by all clients of the service.










Sol11# installadm list -p
There are no profiles configured for local services.

System Configuration Profiles

Common parameters available in Oracle Solaris 11:
User account, including RBAC roles, profiles and sudo
Root user: password, role/normal
Timezone, locale
Hostname
Console terminal type, keyboard layout
IPv4 and/or IPv6 interface, default route
DNS, NIS, LDAP clients
Name service switch

System Configuration Profile
Run the interactive configuration tool and save the output to a file.
Sol11# sysconfig create-profile -o /profiles/serverA_profile.xml


Specifying System Configuration Profile
Sol11# sysconfig create-profile -g users -o /profiles/serverA_users.xml
Sol11# sysconfig create-profile -g identity -o /profiles/serverA_identity.xml
Sol11# sysconfig create-profile -g location -o /profiles/serverA_location.xml
Sol11# sysconfig create-profile -g kdb_layout -o /profiles/serverA_kdb.xml
Sol11# sysconfig create-profile -g network -o /profiles/serverA_network.xml
Sol11# sysconfig create-profile -g naming_services -o /profiles/serverA_ns.xml

Sol11# ls /usr/share/auto_install/sc_profiles/
enable_sci.xml sc_sample.xml static_network.xml
Sol11# ls /AI/basic_ai/auto_install/sc_profiles/
enable_sci.xml sc_sample.xml static_network.xml

Sol11# cp /AI/basic_ai/auto_install/sc_profiles/static_network.xml \
/profiles/serverA_profile.xml
Sol11# vi /profiles/serverA_profile.xml

<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="serverA_profile"> <!-- changed from "system configuration" -->
<service name="system/config-user" version="1">
<instance name="default" enabled="true">
<property_group name="user_account">
<propval name="login" value="jack"/>
<propval name="password" value="9Nd/cwBcNWFZg"/>
<propval name="description" value="default_user"/>
<propval name="shell" value="/usr/bin/bash"/>
<propval name="gid" value='10'/>

Specifying System Configuration Profile (cont.)

<propval name="type" value="normal"/>
<propval name="roles" value="root"/>
<propval name="profiles" value="System Administrator"/>
</property_group>
<property_group name="root_account">
<propval name="password"
value="$5$dnRfcZse$Hx4aBQ161Uvn9ZxJFKMdRiy8tCf4gMT2s2rtkFba2y4"/>
<propval name="type" value="role"/>
</property_group>
</instance>
</service>

<service version="1" name="system/identity">
<instance enabled="true" name="node">
<property_group name="config">
<propval name="nodename" value="serverA"/> <!-- changed from "solaris" -->
</property_group>
</instance>
</service>

<service name="system/console-login" version="1">
<instance name='default' enabled='true'>
<property_group name="ttymon">
<propval name="terminal_type" value="vt100"/> <!-- changed from "sun" -->
</property_group>
</instance>
</service>

Specifying System Configuration Profile (cont.)


<service name='system/keymap' version='1'>
<instance name='default' enabled='true'>
<property_group name='keymap'>
<propval name='layout' value='US-English'/>
</property_group>
</instance>
</service>

<service name='system/timezone' version='1'>
<instance name='default' enabled='true'>
<property_group name='timezone'>
<propval name='localtime' value='UTC'/>
</property_group>
</instance>
</service>

<service name='system/environment' version='1'>
<instance name='init' enabled='true'>
<property_group name='environment'>
<propval name='LANG' value='en_US.UTF-8'/>
</property_group>
</instance>
</service>




Specifying System Configuration Profile (cont.)

<service name="network/physical" version="1">
<instance name="default" enabled="true">
<property_group name='netcfg' type='application'>
<propval name='active_ncp' type='astring' value='DefaultFixed'/>
</property_group>
</instance>
</service>

<service name='network/install' version='1' type='service'>
<instance name='default' enabled='true'>
<property_group name='install_ipv4_interface' type='application'>
<propval name='name' type='astring' value='net0/v4'/>
<propval name='address_type' type='astring' value='static'/>
<propval name='static_address' type='net_address_v4' value='x.x.x.x/n'/> <!-- e.g. 192.168.1.110/24 -->
<propval name='default_route' type='net_address_v4' value='x.x.x.x'/> <!-- e.g. 192.168.1.1 -->
</property_group>

<property_group name='install_ipv6_interface' type='application'>
<propval name='name' type='astring' value='net0/v6'/>
<propval name='address_type' type='astring' value='addrconf'/>
<propval name='stateless' type='astring' value='yes'/>
<propval name='stateful' type='astring' value='yes'/>
</property_group>
</instance>
</service>



Specifying System Configuration Profile (cont.)
<service name='network/dns/client' version='1'>
<property_group name='config'>
<property name='nameserver'>
<net_address_list>
<value_node value='x.x.x.x'/> <!-- e.g. 192.168.1.1 -->
</net_address_list>
</property>
<property name='search'>
<astring_list>
<value_node value='example.com'/>
</astring_list>
</property>
</property_group>
<instance name='default' enabled='true'/>
</service>
<service version="1" name="system/name-service/switch">
<property_group name="config">
<propval name="default" value="files"/>
<propval name="host" value="files dns mdns"/>
<propval name="printer" value="user files"/>
</property_group>
<instance enabled="true" name="default"/>
</service>

<service version="1" name="system/name-service/cache">
<instance enabled="true" name="default"/>
</service>
</service_bundle>

Specifying System Configuration Profile (cont.)
Sol11# installadm create-profile -n basic_ai -f /profiles/serverA_profile.xml
Profile serverA_profile.xml added to database.

Sol11# installadm list -p
Service Name Profile
------------ -------
basic_ai serverA_profile.xml

Sol11# installadm set-criteria -n basic_ai -p serverA_profile.xml \
-m serverA_ai_instance -c mac="00:14:4F:FC:00:02"
Criteria updated for manifest serverA_ai_instance.
Criteria updated for profile serverA_profile.xml.

Sol11# installadm list -cpm -n basic_ai
Service Name Client Address Arch Image Path
------------ -------------- ---- ----------
basic_ai 00:14:4F:FC:00:03 sparc /AI/basic_ai
00:14:4F:FC:00:02 sparc /AI/basic_ai

Manifest Status Criteria
-------- ------ --------
serverA_ai_instance mac = 00:14:4F:FC:00:02
orig_default Default None

Profile Criteria
------- --------
serverA_profile.xml mac = 00:14:4F:FC:00:02

JumpStart to AI Mapping

js2ai - JumpStart to AI translation tool
Automatically converts existing JumpStart rules, profiles, and sysidcfg files
to AI equivalents

Distribution Constructor (DC)
Install the Distribution Constructor:
pkg install distribution-constructor
Copy the base AI image manifest and customize it. The basic SPARC manifest is at:
/usr/share/distro_const/auto_install/ai_sparc_image.xml
Build the image:
distro_const build my_ai_image.xml
Deploy to an AI service:
installadm create-service ...







Module 5
Oracle Solaris 11 Network Administration




Solaris 10 Network Stack

Solaris 11 Network Stack

Bridges in the Network Stack


Configuring Network in Oracle Solaris 11
Sol11# svcs -a | grep physical
disabled Jul_18 svc:/network/physical:nwam
online Jul_18 svc:/network/physical:upgrade
online 14:01:36 svc:/network/physical:default

Activate automatic network configuration - NCP (Network Configuration Profiles)

Sol11# netadm enable -p ncp Automatic
Sol11# netadm list
TYPE PROFILE STATE
ncp Automatic online
ncu:phys net0 online
ncu:ip net0 online
ncu:phys net1 online
ncu:ip net1 online
loc Automatic online
loc NoNet offline
loc User disabled

Activate manual network configuration:

Sol11# netadm enable -p ncp DefaultFixed
Sol11# netadm list
netadm: DefaultFixed NCP is enabled; automatic network management is not available.
'netadm list' is only supported when automatic network management is active.

Manual Mode - Configuring Network
Persistent network configuration is now managed through SMF, not by editing the
following files: /etc/defaultdomain, /etc/dhcp.*, /etc/hostname.*,
/etc/hostname.ip*.tun*, /etc/nodename, /etc/nsswitch.conf
Sol11# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net1 Ethernet down 0 unknown vnet1
net0 Ethernet up 0 unknown vnet0

Sol11# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 down --
net0 phys 1500 up --
zoneA/net0 vnic 1500 up net0


Sol11# dladm show-phys -L net0
LINK DEVICE LOC
net0 vnet0 --

Sol11# cat /etc/path_to_inst | grep net
"/virtual-devices@100/channel-devices@200/network@0" 0 "vnet"
"/virtual-devices@100/channel-devices@200/network@1" 1 "vnet"



Manual Mode - Configuring Network
Sol11# ipadm create-ip net0
Sol11# ipadm create-addr -T static -a local=192.168.1.137/24 net0/addr

The -T option can be used to specify three address types: static, dhcp, and
addrconf (for auto-configured IPv6 addresses)
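The /24 suffix on the address above is a CIDR prefix length. As a side illustration (plain shell arithmetic, not an ipadm feature), a prefix length maps to the equivalent dotted netmask like this:

```shell
# Convert a CIDR prefix length (e.g. 24) to a dotted-decimal netmask.
prefix_to_mask() {
    p=$1
    # 32-bit mask with the top $p bits set
    m=$(( 0xFFFFFFFF << (32 - p) & 0xFFFFFFFF ))
    echo "$(( m >> 24 & 255 )).$(( m >> 16 & 255 )).$(( m >> 8 & 255 )).$(( m & 255 ))"
}

prefix_to_mask 24   # prints 255.255.255.0
prefix_to_mask 16   # prints 255.255.0.0
```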


If the net0 interface in this example was created, and you then wanted to change the IP address that
was provided for this interface, you would need to first remove the interface and then re-add it:

Sol11# ipadm delete-ip net0
Sol11# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 down --
net0 phys 1500 up --
zoneA/net0 vnic 1500 up net0
Sol11# dladm rename-link net0 eth0
Sol11# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 down --
eth0 phys 1500 up --
zoneA/net0 vnic 1500 up eth0
Sol11# ipadm create-ip eth0
Sol11# ipadm create-addr -T static -a local=192.168.1.137/24 eth0/addr



Manual Mode - Configuring Network
Sol11# dladm show-ether
LINK PTYPE STATE AUTO SPEED-DUPLEX PAUSE
net1 current down no 0M none
net0 current up no 0M none
Sol11# dladm show-linkprop
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
net0 duplex r- unknown unknown half,full
net0 adv_10gfdx_cap r- -- 0 1,0
. . .
Sol11# dladm show-linkprop -p adv_1000fdx_cap net0
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
net0 adv_1000fdx_cap r- -- 0 1,0
. . .
Sol11# ipadm show-addrprop
net0/addr broadcast r- 192.168.1.255 -- 192.168.1.255 --
net0/addr deprecated rw off -- off on,off
net0/addr prefixlen rw 24 24 24 1-30,32
net0/addr transmit rw on -- on on,off
net0/addr zone rw global -- global --
. . .
Sol11# ipadm delete-if net0
Sol11# dladm set-linkprop -p _tx_bcopy_threshold=1024 net0
Sol11# dladm set-linkprop -p _intr_adaptive=0 net0
Sol11# dladm set-linkprop -p _intr-throttling_rate=1024 net0
Sol11# ipadm create-addr -T static -a 192.168.1.137/24 net0/v4addr
Sol11# dladm show-linkprop -p _tx_bcopy_threshold net0

Manual Mode - Configuring Naming Services

Sol11# vi /etc/resolv.conf
Sol11# /usr/sbin/nscfg import -f dns/client

Sol11# cp /etc/nsswitch.dns /etc/nsswitch.conf
Sol11# /usr/sbin/nscfg import -f name-service/switch
Sol11# svcadm enable dns/client
Sol11# svcadm refresh name-service/switch





Configuring Naming Services (cont.)


# svccfg
svc:> select dns/client
svc:/network/dns/client> setprop config/search = astring: ("example.com")
svc:/network/dns/client> setprop config/nameserver = net_address:(192.168.1.1)
svc:/network/dns/client> select dns/client:default
svc:/network/dns/client:default> refresh
svc:/network/dns/client:default> validate
svc:/network/dns/client:default> select name-service/switch
svc:/system/name-service/switch> setprop config/host = astring: "files dns"
svc:/system/name-service/switch> select system/name-service/switch:default
svc:/system/name-service/switch:default> refresh
svc:/system/name-service/switch:default> validate
svc:/system/name-service/switch:default>
# svcadm enable dns/client
# svcadm refresh name-service/switch








Automatic Mode - Configuring Network
Sol11# netadm list
netadm: DefaultFixed NCP is enabled; automatic network management is not available.
'netadm list' is only supported when automatic network management is active.

Sol11# netcfg
netcfg> list
NCPs:
Automatic
Locations:
Automatic
NoNet
User

netcfg> select ncp Automatic
netcfg:ncp:Automatic> list
NCUs:
phys net0
ip net0
phys net1
ip net1

netcfg:ncp:Automatic> select ncu phys net0





netcfg:ncp:Automatic:ncu:net0> list
ncu:net0
type link
class phys
parent "Automatic"
activation-mode prioritized
enabled true
priority-group 0
priority-mode shared
netcfg:ncp:Automatic:ncu:net0> cancel
netcfg:ncp:Automatic> select ncu ip net0
netcfg:ncp:Automatic:ncu:net0> list
ncu:net0
type interface
class ip
parent "Automatic"
enabled true
ip-version ipv4,ipv6
ipv4-addrsrc dhcp
ipv6-addrsrc dhcp,autoconf
netcfg:ncp:Automatic:ncu:net0> exit



Zone Network Interfaces
Two IP types available for non-global zones: shared-IP and exclusive-IP (default)


A shared-IP zone shares a network interface with the global zone. Configuration in the
global zone must be done with the ipadm utility to use shared-IP zones.

An exclusive-IP zone is configured using the anet resource; a dedicated VNIC is
automatically created and assigned to that zone.

Oracle Solaris 11 introduces a new network stack architecture, previously known as
Crossbow. This new architecture provides highly flexible network virtualization
through the addition of Virtual NICs, which are tightly integrated with zones. In
addition, the new architecture introduces the ability to perform resource
management via bandwidth and flow control.
str. 85

Exclusive-IP Data-Link Interfaces
- IP Filter in Exclusive-IP Zones
- IP Network Multipathing in Exclusive-IP Zones

str. 86

Exclusive-IP Data-Link Interfaces
Create a virtual NIC, limit the bandwidth of the VNIC, create an address for it, and then assign it to a zone.
Sol11# dladm create-vnic -l net0 -p maxbw=600 vnic0
Sol11# ipadm create-addr -T static -a local=x.x.x.x/yy vnic0/v4static


zonecfg:s11zone> set ip-type=exclusive
zonecfg:s11zone> add net
zonecfg:s11zone:net> set physical=vnic0
zonecfg:s11zone:net> end


zonecfg:zone1> select anet linkname=net0
zonecfg:zone1:anet> set allowed-address=192.168.1.138/24
zonecfg:zone1:anet> set defrouter=192.168.1.1
zonecfg:zone1:anet> set configure-allowed-address=true
zonecfg:zone1:anet> end
zonecfg:zone1> exit
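The manual VNIC sequence above follows a fixed pattern. A sketch that emits the complete command set for example values (link, bandwidth cap, VNIC name, address, and zone name are all placeholders):

```shell
#!/bin/sh
# Sketch: emit the commands that dedicate a bandwidth-capped VNIC to an
# exclusive-IP zone. All values below are examples.
LINK=net0
CAP=600                      # maxbw, in Mbps
VNIC=vnic0
ADDR="192.168.1.150/24"
ZONE=s11zone

gen_zone_vnic() {
    echo "dladm create-vnic -l $LINK -p maxbw=$CAP $VNIC"
    echo "ipadm create-addr -T static -a local=$ADDR $VNIC/v4static"
    echo "zonecfg -z $ZONE \"set ip-type=exclusive; add net; set physical=$VNIC; end\""
}

# On Solaris 11 the printed lines would be executed as root.
gen_zone_vnic
```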



str. 87

Bridging
Sol11# dladm create-bridge bridge_one
Sol11# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 down --
net0 phys 1500 up --
zoneA/net0 vnic 1500 up net0
bridge_one0 bridge 1500 unknown --

Sol11# dladm add-bridge -l net0 bridge_one
Sol11# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 down --
net0 phys 1500 up --
zoneA/net0 vnic 1500 up net0
bridge_one0 bridge 1500 up net0

Sol11# dladm show-bridge
BRIDGE PROTECT ADDRESS PRIORITY DESROOT
bridge_one stp 32768/0:14:4f:fc:0:1 32768 32768/0:14:4f:fc:0:1

Sol11# svcs -a | grep bridge
online 15:23:15 svc:/network/bridge:bridge_one

Sol11# dladm remove-bridge -l net0 bridge_one
Sol11# dladm delete-bridge bridge_one
str. 88

Configuring VLANs
Sol11# dladm create-vlan -l net0 -v 111 app1
Sol11# dladm create-vlan -l net0 -v 112 app2
Sol11# dladm create-vlan -l net0 -v 113 app3
Sol11# dladm delete-vlan app3
Sol11# dladm show-vlan
LINK VID OVER FLAGS
app1 111 net0 -----
app2 112 net0 -----
Sol11# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 down --
net0 phys 1500 up --
zoneA/net0 vnic 1500 up net0
app1 vlan 1500 up net0
app2 vlan 1500 up net0


Sol11# zonecfg -z zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=app1
zonecfg:zone1:net> end


Sol11# zonecfg -z zone2
zonecfg:zone2> add net
zonecfg:zone2:net> set physical=app2
zonecfg:zone2:net> end

zone1# ipadm create-ip app1
zone1# ipadm create-addr -T static -a 192.168.1.111/24 app1/v4
zone2# ipadm create-ip app2
zone2# ipadm create-addr -T static -a 192.168.1.112/24 app2/v4
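dladm rejects VLAN IDs outside the valid 1-4094 range. A small guard function (valid_vid is our own hypothetical helper, not a Solaris command) can catch bad IDs before any configuration is attempted:

```shell
#!/bin/sh
# Sketch: validate a VLAN ID before passing it to dladm create-vlan.
# valid_vid is a hypothetical helper, not part of dladm.
valid_vid() {
    case "$1" in
        *[!0-9]*|'') return 1 ;;   # reject non-numeric or empty input
    esac
    [ "$1" -ge 1 ] && [ "$1" -le 4094 ]
}

for vid in 111 112 0 4095 12ab; do
    if valid_vid "$vid"; then
        echo "vid $vid: ok   (dladm create-vlan -l net0 -v $vid app$vid)"
    else
        echo "vid $vid: rejected"
    fi
done
```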

str 89



Private Virtual Network on a Single System



str 90



Private Virtual Network on a Single System
Sol11# dladm show-link
LINK CLASS MTU STATE OVER
net1 phys 1500 down --
net0 phys 1500 up --
Sol11# ipadm show-if
IFNAME CLASS STATE ACTIVE OVER
lo0 loopback ok yes --
net0 ip ok yes --
Sol11# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.1.137/24
Sol11# dladm create-vnic -l net0 vnic1
Sol11# dladm create-vnic -l net0 vnic2
Sol11# dladm show-vnic
LINK OVER SPEED MACADDRESS MACADDRTYPE VID
vnic1 net0 0 2:8:20:b1:73:5e random 0
vnic2 net0 0 2:8:20:d6:53:47 random 0
Sol11# ipadm create-ip vnic1
Sol11# ipadm create-addr -T static -a 192.168.5.10/24 vnic1/v4address
Sol11# ipadm create-ip vnic2
Sol11# ipadm create-addr -T static -a 192.168.5.20/24 vnic2/v4address
Sol11# ipadm show-addr
ADDROBJ TYPE STATE ADDR
net0/v4 static ok 192.168.1.137/24
vnic1/v4address static ok 192.168.5.10/24
vnic2/v4address static ok 192.168.5.20/24
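The etherstub and VNIC steps can be generated mechanically. A simplified sketch (it puts every VNIC on the etherstub and numbers addresses on the slide's example 192.168.5.0/24 subnet, unlike the slide, which mixes VNICs over net0 and stub0):

```shell
#!/bin/sh
# Sketch: emit the commands that build a private virtual switch
# (etherstub) with N VNICs on an example subnet.
gen_private_net() {
    stub=$1
    count=$2
    echo "dladm create-etherstub $stub"
    i=1
    while [ "$i" -le "$count" ]; do
        echo "dladm create-vnic -l $stub vnic$i"
        echo "ipadm create-ip vnic$i"
        echo "ipadm create-addr -T static -a 192.168.5.$((i * 10))/24 vnic$i/v4address"
        i=$((i + 1))
    done
}

# Print the sequence for a two-VNIC private network on stub0.
gen_private_net stub0 2
```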

str 91



Private Virtual Network on a Single System (cont.)
Sol11# vi /etc/hosts
192.168.1.80 vnic1
192.168.1.90 vnic2
Sol11# dladm create-etherstub stub0
Sol11# dladm show-vnic
LINK OVER SPEED MACADDRESS MACADDRTYPE VID
vnic1 net0 0 2:8:20:b1:73:5e random 0
vnic2 net0 0 2:8:20:d6:53:47 random 0
Sol11# dladm create-vnic -l stub0 vnic3
Sol11# ipadm create-ip vnic3
Sol11# ipadm create-addr -T static -a 192.168.1.100 vnic3/privaddr
Sol11# dladm show-vnic
LINK OVER SPEED MACADDRESS MACADDRTYPE VID
vnic1 net0 0 2:8:20:b1:73:5e random 0
vnic2 net0 0 2:8:20:d6:53:47 random 0
vnic3 stub0 0 2:8:20:f4:cb:f2 random 0
Sol11# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
net0/v4 static ok 192.168.1.137/24
vnic1/v4address static ok 192.168.5.10/24
vnic2/v4address static ok 192.168.5.20/24
vnic3/privaddr static ok 192.168.1.100/24
Sol11# vi /etc/hosts
192.168.5.10 vnic1
192.168.5.20 vnic2
192.168.1.100 vnic3

str 92



Working With VNICs and Zones

Sol11# routeadm -u -e ipv4-forwarding
Sol11# routeadm
Configuration Current Current
Option Configuration System State
---------------------------------------------------------------
IPv4 routing enabled enabled
IPv6 routing disabled disabled
IPv4 forwarding enabled enabled
IPv6 forwarding disabled disabled

root@zone1:~# dladm show-link
LINK CLASS MTU STATE OVER
vnic1 vnic 1500 up ?
net0 vnic 1500 up ?
root@zone1:~# ipadm create-ip vnic1
root@zone1:~# ipadm create-addr -T static -a 192.168.5.10/24 vnic1/v4address


root@zone2:~# dladm show-link
LINK CLASS MTU STATE OVER
vnic2 vnic 1500 up ?
net0 vnic 1500 up ?
root@zone2:~# ipadm create-ip vnic2
root@zone2:~# ipadm create-addr -T static -a 192.168.5.20/24 vnic2/v4address

str 93



Configure a CPU Pool for a Datalink

Sol11# dladm create-vnic -l net0 -p cpus=2,3 vnic1
Sol11# dladm create-vnic -l net0 -p pool=pool99 vnic1






















str 94



Configuring Flows on the Network
Sol11# flowadm add-flow -l vnet0 -a transport=udp udpflow
Sol11# flowadm set-flowprop -p maxbw=80,priority=low udpflow
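The two flowadm steps above form a reusable pair. A sketch that emits them for given values (the maxbw value is interpreted in Mbps by default):

```shell
#!/bin/sh
# Sketch: emit a flowadm sequence that caps UDP traffic on a link.
# Link, bandwidth, and flow name are supplied by the caller.
gen_udp_cap() {
    link=$1
    bw=$2
    flow=$3
    echo "flowadm add-flow -l $link -a transport=udp $flow"
    echo "flowadm set-flowprop -p maxbw=$bw,priority=low $flow"
}

# Reproduce the slide's example (vnet0, 80 Mbps, udpflow).
gen_udp_cap vnet0 80 udpflow
```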

str 95



Network Statistics

Sol11# dlstat
Sol11# dlstat show-phys
Sol11# dlstat show-link
Sol11# dlstat show-aggr
Sol11# dlstat -i 1
Sol11# dlstat
LINK IPKTS RBYTES OPKTS OBYTES
net1 0 0 0 0
net0 5.93K 499.47K 488 48.36K
app1 0 0 0 0
app2 0 0 0 0
vnic1 4.63K 365.44K 115 9.06K
vnic2 4.62K 365.38K 142 10.33K
zone2/vnic2 4.62K 365.38K 142 10.33K
stub0 0 0 0 0
vnic3 0 0 133 8.29K
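dlstat output is plain columnar text and can be post-processed. A sketch using sample numbers taken from the slide (the awk filter is our own, not a dlstat option) that lists links which received no packets:

```shell
#!/bin/sh
# Sketch: filter dlstat-style output for links with zero inbound packets.
# The sample data mirrors the slide; on a live system the input would be
# `dlstat` itself.
sample() {
    cat <<'EOF'
LINK IPKTS RBYTES OPKTS OBYTES
net1 0 0 0 0
net0 5.93K 499.47K 488 48.36K
app1 0 0 0 0
vnic3 0 0 133 8.29K
EOF
}

idle_links() {
    # Skip the header, print link names whose IPKTS column is 0.
    sample | awk 'NR > 1 && $2 == "0" { print $1 }'
}

idle_links
```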







str 96



Network Statistics (cont.)

Sol11# flowstat -i 1
FLOW IPKTS RBYTES IERRS OPKTS OBYTES OERRS
flow1 528.45K 787.39M 0 179.39K 11.85M 0
flow2 742.81K 1.10G 0 0 0 0
flow3 0 0 0 0 0 0
flow1 67.73K 101.02M 0 21.04K 1.39M 0
flow2 0 0 0 0 0 0
flow3 0 0 0 0 0 0
...
^C

Sol11# flowstat -t
FLOW OPKTS OBYTES OERRS
flow1 24.37M 1.61G 0
flow2 0 0 0
flow1 4 216 0




str 97



IP Multipathing (IPMP)

str 98



Configuring IPMP: Active-Active

# dladm rename-link net0 link0_ipmp0
# dladm rename-link net1 link1_ipmp0
# ipadm create-ip link0_ipmp0
# ipadm create-ip link1_ipmp0
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i link0_ipmp0 -i link1_ipmp0 ipmp0
# ipadm create-addr -T static \
-a 192.168.0.112/24 ipmp0/v4add1
# ipadm create-addr -T static \
-a 192.168.0.113/24 ipmp0/v4add2
# ipadm create-addr -T static \
-a 192.168.0.142/24 link0_ipmp0/test
# ipadm create-addr -T static \
-a 192.168.0.143/24 link1_ipmp0/test
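Each underlying interface in the active-active setup gets the same rename-link/create-ip treatment, so the sequence can be generated for any list of links. A sketch (it interleaves the rename and create-ip steps per link, whereas the slide groups them; link names are the slide's examples):

```shell
#!/bin/sh
# Sketch: emit the dladm/ipadm commands for an active-active IPMP group
# over an arbitrary list of links.
gen_ipmp() {
    group=$1
    shift
    n=0
    for link in "$@"; do
        echo "dladm rename-link $link link${n}_$group"
        echo "ipadm create-ip link${n}_$group"
        n=$((n + 1))
    done
    echo "ipadm create-ipmp $group"
    args=""
    n=0
    for link in "$@"; do
        args="$args -i link${n}_$group"
        n=$((n + 1))
    done
    echo "ipadm add-ipmp$args $group"
}

# Reproduce the slide's two-link group (data and test addresses would
# still be added with ipadm create-addr afterwards).
gen_ipmp ipmp0 net0 net1
```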



str 99



Configuring IPMP: Active-Standby
# dladm rename-link net0 link0_ipmp0
# dladm rename-link net1 link1_ipmp0
# dladm rename-link net2 link2_ipmp0
# ipadm create-ip link0_ipmp0
# ipadm create-ip link1_ipmp0
# ipadm create-ip link2_ipmp0
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i link0_ipmp0 \
-i link1_ipmp0 -i link2_ipmp0 ipmp0
# ipadm set-ifprop -p standby=on -m ip link2_ipmp0
# ipadm create-addr -T static \
-a 192.168.0.112/24 ipmp0/v4add1
# ipadm create-addr -T static \
-a 192.168.0.113/24 ipmp0/v4add2
# ipadm create-addr -T static \
-a 192.168.0.142/24 link0_ipmp0/test
# ipadm create-addr -T static \
-a 192.168.0.143/24 link1_ipmp0/test
# ipadm create-addr -T static \
-a 192.168.0.144/24 link2_ipmp0/test

str 100



Monitoring IPMP

# ipmpstat -g | -i | -an | -pn

The interface flags are defined as follows:

i Unusable due to being INACTIVE
s Masked STANDBY
m Nominated to send/receive IPv4 multicast for its IPMP group
b Nominated to send/receive IPv4 broadcast for its IPMP group
M Nominated to send/receive IPv6 multicast for its IPMP group
d Unusable due to being down
H Unusable due to being brought OFFLINE by in.mpathd (the IPMP daemon)
because of a duplicate hardware address


str 101







Module 6
Installing and Administering
Oracle Solaris 11 Zones





str 102




Oracle Solaris 10 vs. Oracle Solaris 11

str 103



Solaris 11 Containers Concept


















Consequently, processes executing within a zone experience little or no overhead
(a high estimate is 5% of total execution time) and thus come close to achieving
bare-metal performance.

Zone resource monitoring - zonestat


Integration with the new Oracle Solaris 11
network stack architecture


str 104



Solaris 11 Containers Concept

The following brands of non-global zones are no longer offered in Oracle Solaris 11:
Oracle Solaris Containers for Linux Applications (lx)
Oracle Solaris 8 Containers brand (solaris8)
Oracle Solaris 9 Containers brand (solaris9)

The zone root must be a ZFS dataset, which means it is either a ZFS volume or a ZFS
file system. In particular, UFS is no longer supported.

Only the whole-root zone model is available in Oracle Solaris 11.

Oracle Solaris 11 Zones are delivered using the new Image Packaging System (IPS)
and the system software packages within a non-global zone are managed by IPS.
Only minimal system software is installed in the zone when it is created. Any
additional packages the zone requires must be added with IPS commands after the
zone is first booted.

Delegated administration - RBAC

str 105



Services which can now be run inside a zone:
- DHCP (client and server)
- Routing daemon
- IPsec and IPfilter
- IP Multipathing (IPMP)
- ndd commands
- ifconfig with set or modify capabilities (using dladm and ipadm is recommended)
- Oracle Solaris 10 Zones on Oracle Solaris 11 (Oracle Solaris 10 9/10 or later)
- Physical to Virtual (P2V) migration


str 106



Configuring Non-Global Zone Solaris 10 vs. 11

str 107



New zone anet resource
When a non-global zone is created, the default networking configuration sets
ip-type to exclusive with an anet resource. The anet resource creates a VNIC for
the non-global zone; the VNIC is created when the non-global zone is booted and
destroyed when the non-global zone is shut down.

lower-link: auto
Defines the link in the global zone that will be used for the
VNIC. The property can be set to any existing link as shown by
dladm. When set to auto, the link selection order is: first a
configured link aggregation in the up state, then an Ethernet
link in the up state chosen by alphabetic sort, and finally the
net0 link if available.
mac-address: random
Can be set to factory, random, or auto. auto attempts to use a
factory MAC address; if no factory address is available, random
is used. A random address is preserved across reboots to
support DHCP.
mac-prefix
Sets a prefix for the random MAC address if required.
mac-slot
A slot location for a specific factory MAC address.

str 108



New net resource properties
allowed-address

Used with exclusive-IP zones only. If used, this
property constrains the IP address(es) that can be
used to configure the interface in the non-global zone. When set, the
allowed-address property also sets the
configure-allowed-address property to
true.
configure-allowed-address

When this property is set to true, the address
defined by the allowed-address property
will be configured on the interface when the
non-global zone boots.
defrouter
The property is optional and should only be set
to an address on a different subnet than is
configured for the global zone.

zonecfg:zoneA:net> set
set address= set configure-allowed-address= set physical=
set allowed-address= set defrouter=

str 109



New device resource properties
allow-partition - allows a disk to be labeled with the format command.
allow-raw-io - allows user SCSI interface commands (uscsi) to execute.
These resource properties are set to either true or false; the default is false.

zonecfg:zoneA> add device
zonecfg:zoneA:device> set
set allow-partition= set allow-raw-io= set match=


New zone max-processes property
sets the maximum number of process table slots simultaneously available to this zone.
This property is the preferred way to set the zone.max-processes resource control.

zonecfg:zoneA> set max-processes=100
zonecfg:zoneA> info
. . .
rctl:
name: zone.max-processes
value: (priv=privileged,limit=100,action=deny)



str 110



New zone zone.max-lofi property
This resource control defines the maximum number of lofi devices available to a zone.

zonecfg:zoneA> add rctl
zonecfg:zoneA:rctl> set name=zone.max-lofi
zonecfg:zoneA:rctl> set value=(priv=privileged,limit=10,action=deny)
zonecfg:zoneA:rctl> end



New zone admin property
allows delegation of administrative tasks for a zone to a non-root user or a role.

The user property defines a user or role, which must exist in the global zone.

The auths property defines authorizations. Possible values are login (authenticated
login to this zone), manage (allows management of this zone using zoneadm), and
copyfrom (allows cloning of the zone).


zonecfg:zoneA> add admin
zonecfg:zoneA:admin> set
set auths= set user=


str 111



The file-mac-profile Property

zonecfg:zoneA> set file-mac-profile=


none - setting this value is equivalent to not setting the file-mac-profile property.


fixed-configuration - setting this value allows the zone to write to files in and below
/var, except directories containing configuration files:
- /var/ld
- /var/lib/postrun
- /var/pkg
- /var/spool/cron
- /var/spool/postrun
- /var/svc/manifest
- /var/svc/profiles


flexible-configuration
Permits modification of files in /etc/* directories, changes to root's home directory, and updates to
/var/* directories. Logging and auditing configuration files can be local. syslog and audit
configuration can be changed. Functionality is similar to the sparse-root zone model in Solaris 10.


str 112



The file-mac-profile Property (cont.)

strict - this value allows no exceptions to the read-only policy.
- IPS packages cannot be installed.
- SMF services are fixed.
- Logging and auditing configuration files are fixed. Data can only be logged remotely.

Zone booted, not configured:
Sol11# zoneadm -z zoneA list -p
1:readonly:running:/zoneA/readonly:8a079b62-bb36-6a1a-f08a-
b68f4a7e7d2a:solaris:shared:W:strict


Zone configured and booted read-only:
Sol11# zoneadm -z readonly list -p
2:readonly:running:/zones/readonly:8a079b62-bb36-6a1a-f08a-
b68f4a7e7d2a:solaris:shared:R:strict

Zone configured and booted writable:
Sol11# zoneadm -z zoneA reboot -w
3:readonly:running:/zoneA/readonly:8a079b62-bb36-6a1a-f08a-
b68f4a7e7d2a:solaris:shared:W:strict

str 113




The fs-allowed Property
Setting this property gives the zone administrator the ability to mount any
file system of that type, either created by the zone administrator or imported
by using NFS, and administer that file system. File system mounting
permissions within a running zone are also restricted by the fs-allowed
property.

By default, only mounts of hsfs file systems and NFS are allowed.

The property can also be used with a block device or ZFS volume (zvol) delegated into the zone.


zonecfg:zone1> set fs-allowed=ufs,pcfs



str 114



SC Profile and AI Manifest used to install the zone.
An Oracle Solaris 11 zone install first verifies access to an IPS repository and creates a
plan; the packages are then downloaded to the non-global zone and installed.

The AI Manifest describes the software and other configuration information used to install
the zone. There is a default zone AI manifest. A custom manifest can be created and
used to define what software and other configuration information will be used for the
zone. The custom manifest is passed as an option to the zoneadm command
when the zone is installed.

The SC Profile is a System Configuration profile; in the default instance this points to the
/usr/share/auto_install/sc_profiles/enable_sci.xml profile
(SCI - System Configuration Interactive), which starts interactive system
configuration when the zone is booted. For hands-free configuration, a profile XML file
is provided as an option to the zoneadm command when the zone is installed. The profile
is applied after the zone is installed and is used to configure the zone.

zoneadm -z zone1 install -m /zone1_manifest.xml \
-c /zone1_profile.xml

str 115



Configuring Non-Global Zones by Using AI
Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name
attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location
of the config file for the zone. The source location can be an http:// or file:// location that the client can access.

The default zone AI manifest is used if you do not provide a custom AI manifest for a zone.
Sol11# ls /usr/share/auto_install/manifest/
ai_manifest.xml default.xml zone_default.xml
Sol11# ls /AI/basic_ai/auto_install/manifest/
ai_manifest.xml default.xml zone_default.xml

Sol11# cp /AI/basic_ai/auto_install/manifest/zone_default.xml \
/manifests/zoneA_manifest.xml
Sol11# vi /manifests/zoneA_manifest.xml

<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
<ai_instance name="zone_default"> "zoneA_ai_instance"
<target>
<logical>
<zpool name="rpool"> "zasoby"
<filesystem name="export" mountpoint="/export"/>
<filesystem name="export/home"/>
<filesystem name="soft" mountpoint="/soft"/>


str 116



Configuring Non-Global Zones by Using AI

<be name="solaris"/> "be_zoneA"
<options>
<option name="compression" value="on"/>
</options>
</be>
</zpool>
</logical>
</target>
<software type="IPS">
<destination>
<image>
<!-- Specify locales to install -->
<facet set="false">facet.locale.*</facet>
. . .
<facet set="true">facet.locale.zh_TW</facet>
</image>
</destination>
<software_data action="install">
<name>pkg:/group/system/solaris-small-server</name>
</software_data>
</software>
</ai_instance>
</auto_install>


str 117



Configuring Non-Global Zones by Using AI


Sol11# installadm list -cpm -n basic_ai
Service Name Client Address Arch Image Path
------------ -------------- ---- ----------
basic_ai 00:14:4F:FC:00:03 sparc /AI/basic_ai
00:14:4F:FC:00:02 sparc /AI/basic_ai

Manifest Status Criteria
-------- ------ --------
serverA_ai_instance mac = 00:14:4F:FC:00:02
orig_default Default None

Profile Criteria
------- --------
serverA_profile.xml mac = 00:14:4F:FC:00:02

Sol11# installadm create-manifest -n basic_ai \
-f /manifests/zoneA_manifest.xml \
-c zonename="zoneA"



str 118



Configuring Non-Global Zones by Using AI

Sol11# installadm list -cpm -n basic_ai

Service Name Client Address Arch Image Path
------------ -------------- ---- ----------
basic_ai 00:14:4F:FC:00:03 sparc /AI/basic_ai
00:14:4F:FC:00:02 sparc /AI/basic_ai

Manifest Status Criteria
-------- ------ --------
serverA_ai_instance mac = 00:14:4F:FC:00:02
zoneA_ai_instance zonename = zoneA
orig_default Default None











str 119



Configuring Non-Global Zones by Using AI (cont.)
Use a zone configuration profile to configure zone parameters such as language, locale, time zone,
terminal, users, and the root password. You can configure the time zone, but you cannot set the time.
Sample profiles are located in /usr/share/auto_install/sc_profiles

Sol11# cp /AI/basic_ai/sc_profiles/sc_sample.xml /profiles/zoneA_profile.xml
Sol11# vi /profiles/zoneA_profile.xml

<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name=" system configuration"> zoneA_profile
<service name="system/config-user" version="1">
<instance name="default" enabled="true">
<property_group name="user_account">
<propval name="login" value="jack"/> leon
<propval name="password" value="9Nd/cwBcNWFZg"/>
<propval name="description" value="default_user"/>
<propval name="shell" value="/usr/bin/bash"/>
<propval name="gid" value='10'/>
<propval name="type" value="normal"/>
<propval name="roles" value="root"/>
<propval name="profiles" value="System Administrator"/>
</property_group>
<property_group name="root_account">
<propval name="password"
value="$5$dnRfcZse$Hx4aBQ161Uvn9ZxJFKMdRiy8tCf4gMT2s2rtkFba2y4"/>
<propval name="type" value="role"/>
</property_group>
</instance>
</service>

str 120



Configuring Non-Global Zones by Using AI (cont.)

<service version="1" name="system/identity">
<instance enabled="true" name="node">
<property_group name="config">
<propval name="nodename" value="solaris"/> zoneA
</property_group>
</instance>
</service>

<service name="system/console-login" version="1">
<instance name='default' enabled='true'>
<property_group name="ttymon">
<propval name="terminal_type" value="sun"/> vt100
</property_group>
</instance>
</service>

<service name='system/keymap' version='1'>
<instance name='default' enabled='true'>
<property_group name='keymap'>
<propval name='layout' value='US-English'/>
</property_group>
</instance>
</service>

<service name='system/timezone' version='1'>
<instance name='default' enabled='true'>
<property_group name='timezone'>
<propval name='localtime' value='UTC'/>

str 121



Configuring Non-Global Zones by Using AI (cont.)

</property_group>
</instance>
</service>

<service name='system/environment' version='1'>
<instance name='init' enabled='true'>
<property_group name='environment'>
<propval name='LANG' value='en_US.UTF-8'/>
</property_group>
</instance>
</service>

<service name="network/physical" version="1">
<instance name="default" enabled="true">
<property_group name='netcfg' type='application'>
<propval name='active_ncp' type='astring' value='Automatic'/>
</property_group>
</instance>
</service>
</service_bundle>


Sol11# installadm create-profile -n basic_ai -f \
/profiles/zoneA_profile.xml -c zonename= "zoneA"
Profile zoneA_profile.xml added to database.


str 122



Configuring Non-Global Zones by Using AI (cont.)

Sol11# installadm list -cmp -n basic_ai
Service Name Client Address Arch Image Path
------------ -------------- ---- ----------
basic_ai 00:14:4F:FC:00:03 sparc /AI/basic_ai
00:14:4F:FC:00:02 sparc /AI/basic_ai
Manifest Status Criteria
-------- ------ --------
serverA_ai_instance mac = 00:14:4F:FC:00:02
zoneA_ai_instance zonename = zoneA
orig_default Default None

Profile Criteria
------- --------
serverA_profile.xml mac = 00:14:4F:FC:00:02
zoneA_profile.xml zonename = zoneA







str 123



Installing Zone
Install the zone:
Sol11# zoneadm -z zoneA install

Install the zone from the repository:
Sol11# zoneadm -z zoneA install -c /profiles/zoneA_profile.xml
Progress being logged to /var/log/zones/zoneadm.20120717T200129Z.zoneA.install
Image: Preparing at /zoneA/root.

Install Log: /system/volatile/install.8371/install_log
AI Manifest: /tmp/manifest.xml.kYaivq
SC Profile: /profiles/zoneA_profile.xml
Zonename: zoneA
Installation: Starting ...

Creating IPS image
Installing packages from:
solaris
origin: http://solaris/

Install the zone from an image:
Sol11# zoneadm -z zoneA install -a archive -s -u

Install the zone from a directory:
Sol11# zoneadm -z zoneA install -d path -p -v

str 124



Installing Zone

Sol11# zoneadm -z zone1 install

Progress being logged to /var/log/zones/zoneadm.20120715T090014Z.zone1.install
Image: Preparing at /zone1/root.

Install Log: /system/volatile/install.1807/install_log
AI Manifest: /tmp/manifest.xml.NuaOGd
SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Zonename: zone1
Installation: Starting ...

Creating IPS image
Installing packages from:
solaris
origin: http://pkg.oracle.com/solaris/release/
mirror: http://pkg-cdn1.oracle.com/solaris/release/

str 125



Installing Zone
Sol11# zfs create -o mountpoint=/zoneA zasoby/zoneA
Sol11# chmod 700 /zoneA
Sol11# df -h /zoneA
Filesystem Size Used Available Capacity Mounted on
zasoby/zoneA 49G 31K 41G 1% /zoneA

Sol11# zonecfg -z zoneA
zoneA: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zoneA> create
create: Using system default template 'SYSdefault'
zonecfg:zoneA> set zonename=zoneA
zonecfg:zoneA> set zonepath=/zoneA
zonecfg:zoneA> exit

Sol11# zoneadm -z zoneA install -m /manifests/zoneA_manifest.xml \
-c /profiles/zoneA_profile.xml

Progress being logged to /var/log/zones/zoneadm.20120718T105043Z.zoneA.install
Image: Preparing at /zoneA/root.
Install Log: /system/volatile/install.13959/install_log
AI Manifest: /tmp/manifest.xml.1saOpB
SC Profile: /profiles/zoneA_profile.xml
Zonename: zoneA
Installation: Starting ...
Creating IPS image
Installing packages from:
solaris
origin: http://solaris/

str 126



Commands to Administer and Monitor Zones

str 127



Zone Commands for Use

str 128



Zone Commands for Use




Module 8
Managing Packages Within Zones




str 129



Transitioning an Oracle Solaris 10 Instance to Oracle Solaris 11

1. Install the Oracle Solaris 10 zone package on your Oracle Solaris 11 system

s11# pkg install system/zones/brand/brand-solaris10

2. Copy the zonep2vchk script from your Oracle Solaris 11 system to the Oracle Solaris 10
instance or system to identify any issues that might prevent the instance from running as
a solaris10 zone.

Sol11# scp /usr/sbin/zonep2vchk Sol10:/
Sol10# /zonep2vchk



NOTE:
To use the Oracle Solaris 10 package and patch tools in your Oracle Solaris 10 zones, install
the following patches on your source Oracle Solaris 10 system before the image is created.
119254-75, 119534-24, 140914-02 (SPARC platforms)
119255-75, 119535-24 and 140915-02 (x86 platforms)

str 130



System Migrations Using zonep2vchk Tool


str 131



Using zonep2vchk
Sol10# /zonep2vchk -b

--Executing Version: 1.0.5-11-16135
- Source System: T1000
Solaris Version: Solaris 10 10/09 s10s_u8wos_08a SPARC
Solaris Kernel: 5.10 Generic_141444-09
Platform: sun4v SUNW,Sun-Fire-T1000
- Target System:
Solaris_Version: Solaris 10
Zone Brand: native (default)
IP type: shared

--Executing basic checks
- The following /etc/system tunables exist. These tunables will not function inside a
zone. The /etc/system tunable may be transferred to the target global zone, but it will
affect the entire system, including all zones and the global zone. If there is an
alternate tunable that can be configured from within the zone, this tunable is described:
set zfs:zfs_arc_max = 0x40000000

- The system has the following lofi devices configured. Lofi devices cannot be configured
in the destination zone. Lofi devices must be created in the global zone and added to the
zone using "zonecfg add device". See lofiadm(1M) and zonecfg(1M) for details:
Device File
/dev/lofi/1 /zasoby/Sol11iso/sol-11-1111-repo-full.iso




str 132



Using zonep2vchk (cont.)

- The following SMF services will not work in a zone:
svc:/ldoms/ldmd:default
svc:/network/iscsi/initiator:default
svc:/network/nfs/server:default
svc:/system/iscsitgt:default
svc:/system/pools/dynamic:default

- The following SMF services require ip-type "exclusive" to work in a zone. If they are
needed to support communication after migrating to a shared-IP zone, configure them in the
destination system's global zone instead:
svc:/network/ipsec/ipsecalgs:default
svc:/network/ipsec/policy:default
svc:/network/ipv4-forwarding:default
svc:/network/routing-setup:default

- When migrating to an exclusive-IP zone, the target system must have an available
physical interface for each of the following source system interfaces:
vsw0

- When migrating to an exclusive-IP zone, interface name changes may impact the following
configuration files:
/etc/hostname.vsw0
/etc/hostname.vsw0:1
/etc/ipf/ipnat.conf




str 133



Using zonep2vchk and generate a template
Sol10# /zonep2vchk -c
create -b
set zonepath=/zones/T1000
add attr
set name="zonep2vchk-info"
set type=string
set value="p2v of host T1000"
end
set ip-type=shared
# Uncomment the following to retain original host hostid:
# set hostid=84218a08
# Max lwps based on max_uproc/v_proc
set max-lwps=40000
add attr
set name=num-cpus
set type=string
set value="original system had 8 cpus"
end
# Only one of dedicated or capped cpu can be used.
# Uncomment the following to use cpu caps:
# add capped-cpu
# set ncpus=8.0
# end
# Uncomment the following to use dedicated cpu:
# add dedicated-cpu
# set ncpus=8
# end
# Uncomment the following to use memory caps.

str 134



Using zonep2vchk and generate a template (cont.)

# Values based on physical memory plus swap devices:
# add capped-memory
# set physical=4096M
# set swap=8191M
# end
# Original vsw0 interface configuration:
# Statically defined 192.168.1.170 (T1000)
# Statically defined T1000_servers/24
# Factory assigned MAC address 0:14:4f:fb:fd:88
add net
set address=T1000
set physical=change-me
end
add net
set address=T1000_servers/24
set physical=change-me
end
exit


str 135



Transitioning an Oracle Solaris 10 Instance to Oracle Solaris 11

1. Create and share a ZFS file system
Sol11# zfs create zasoby/s10archive
Sol11# zfs set share=name=s10share,path=/zasoby/s10archive,prot=nfs,\
root=s10 zasoby/s10archive
Sol11# zfs set sharenfs=on zasoby/s10archive

2. Create an archive of the Oracle Solaris 10 instance:
a) a global system instance that you would like to migrate to a non-global zone on a Solaris 11 system

Sol10# flarcreate -S -n s10sysA -L cpio \
/net/Sol11/zasoby/s10archive/s10.flar

b) a non-global zone instance that you would like to migrate to a non-global zone on Solaris 11
Sol10:zoneS10# find zoneS10 -print | cpio -oP@/ | gzip > \
zoneS10.cpio.gz

3. Create a ZFS file system for the Oracle Solaris 10 zone.

Sol11# zfs create -o mountpoint=/zones/s10zone zasoby/zones/s10zone
Sol11# chmod 700 /zones/s10zone
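Before handing the archive to zoneadm install in step 5, a quick existence and size check avoids a failed install. A sketch (check_archive is our own helper, not a Solaris command; it is demonstrated on a throwaway file rather than the real flash archive):

```shell
#!/bin/sh
# Sketch: sanity-check an archive before running
# `zoneadm -z s10zone install -u -a <archive>`.
check_archive() {
    f=$1
    [ -f "$f" ] || { echo "missing: $f"; return 1; }
    [ -s "$f" ] || { echo "empty: $f"; return 1; }
    echo "ok: $f"
}

# Demonstrate on a temporary file; on the real host the argument would
# be the flash archive created in step 2.
tmp=$(mktemp)
printf 'dummy archive contents' > "$tmp"
check_archive "$tmp"
rm -f "$tmp"
```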

str 136



Transitioning an Oracle Solaris 10 Instance to Oracle Solaris 11

4. Create the non-global zone for the Oracle Solaris 10 instance.

Sol11# zonecfg -z s10zone
s10zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:s10zone> create -t SYSsolaris10
zonecfg:s10zone> set zonepath=/zones/s10zone
zonecfg:s10zone> set ip-type=exclusive
zonecfg:s10zone> add anet
zonecfg:s10zone:anet> set lower-link=auto
zonecfg:s10zone:anet> end
zonecfg:s10zone> set hostid=8439b629
zonecfg:s10zone> verify
zonecfg:s10zone> commit
zonecfg:s10zone> exit

5. Install the Oracle Solaris 10 non-global zone.

Sol11# zoneadm -z s10zone install -u -a /zasoby/s10archive/s10.flar
A ZFS file system has been created for this zone.
Progress being logged to /var/log/zones/zoneadm.20110921T135935Z.s10zone.install
Installing: This may take several minutes...
Postprocess: Updating the image to run within a zone
Postprocess: Migrating data
from: zasoby/zones/s10zone/rpool/ROOT/zbe-0
to: zasoby/zones/s10zone/rpool/export

str 137



Transitioning an Oracle Solaris 10 Instance to Oracle Solaris 11

6. Boot the Oracle Solaris 10 zone.

Sol11# zoneadm -z s10zone boot

7. Configure the Oracle Solaris 10 non-global zone.

Sol11# zlogin -C s10zone
[Connected to zone 's10zone' console]
. . .
s10zone console login: root
Password: xxxxxxxx

s10zone# cat /etc/release
Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Assembled 23 August 2011

s10zone# uname -a
SunOS supernova 5.10 Generic_Virtual sun4v sparc SUNW,Sun-Fire-T1000

s10zone# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 4.53G 52.2G 106K /rpool
rpool/ROOT 4.53G 52.2G 31K legacy
rpool/ROOT/zbe-0 4.53G 52.2G 4.53G /
rpool/export 63K 52.2G 32K /export
rpool/export/home 31K 52.2G 31K /export/home

str 138



Monitoring Zone Resource Consumption

The zonestat utility monitors zone resources:
CPU consumption
Memory consumption
Resource control utilization

The utility can print:
A series of reports at specified intervals
One or more summary reports

The utility runs as a service in the global zone.

str 139



Monitoring Zone Resource Consumption
Sol11# zonestat 1
zonestat: Error: Zones monitoring service "svc:/system/zones-monitoring:default"
not enabled or responding.

Sol11# svcadm enable zones-monitoring
Sol11# zonestat 1
Interval: 7, Duration: 0:00:07
SUMMARY Cpus/Online: 6/6 PhysMem: 2048M VirtMem: 3071M
---CPU---- --PhysMem-- --VirtMem-- --PhysNet--
ZONE USED %PART USED %USED USED %USED PBYTE %PUSE
[total] 0.19 3.31% 780M 38.1% 1326M 43.1% 1006 0.00%
[system] 0.01 0.23% 561M 27.4% 1138M 37.0% - -
global 0.18 3.01% 151M 7.38% 132M 4.30% 1006 0.00%
zone1 0.00 0.06% 67.7M 3.30% 56.1M 1.82% 0 0.00%

str 140



Monitoring Zone Memory Consumption

# zonestat -z global -r physical-memory 5

Collecting data for first interval...
Interval: 1, Duration: 0:00:05
PHYSICAL-MEMORY SYSTEM MEMORY
mem_default 2048M
ZONE USED %USED CAP %CAP
[total] 851M 41.5% - -
[system] 550M 26.8% - -
global 151M 7.37% - -

Interval: 2, Duration: 0:00:10
PHYSICAL-MEMORY SYSTEM MEMORY
mem_default 2048M
ZONE USED %USED CAP %CAP
[total] 855M 41.7% - -
[system] 550M 26.8% - -
global 151M 7.38% - -

str 141



Monitoring Zone CPU Consumption

# zonestat -r default-pset 1 1m
Interval: 8, Duration: 0:00:08

PROCESSOR_SET TYPE ONLINE/CPUS MIN/MAX
pset_default default-pset 1/1 1/1
ZONE USED PCT CAP %CAP SHRS %SHR %SHRU
[total] 0.11 11.0% - - - - -
[system] 0.03 3.11% - - - - -
global 0.06 6.01% - - - - -
zone1 0.01 1.11% - - - - -
zone2 0.00 0.82% - - - - -

str 142



Total and High Zone Resource Consumption
# zonestat -q -R total,high 10s 1m 1m
Report: Total Usage
Start: Sun Jul 15 12:21:24 CEST 2012
End: Sun Jul 15 12:21:44 CEST 2012
Intervals: 3, Duration: 0:00:20
SUMMARY Cpus/Online: 6/6 PhysMem: 2048M VirtMem: 3071M
---CPU---- --PhysMem-- --VirtMem-- --PhysNet--
ZONE USED %PART USED %USED USED %USED PBYTE %PUSE
[total] 0.03 0.64% 770M 37.6% 1316M 42.8% 6 0.00%
[system] 0.00 0.13% 551M 26.9% 1128M 36.7% - -
global 0.03 0.50% 151M 7.38% 132M 4.32% 42 0.00%
zone1 0.00 0.00% 67.5M 3.29% 54.9M 1.78% 0 0.00%

Report: High Usage
Start: Sun Jul 15 12:21:24 CEST 2012
End: Sun Jul 15 12:21:44 CEST 2012
Intervals: 3, Duration: 0:00:20
SUMMARY Cpus/Online: 6/6 PhysMem: 2048M VirtMem: 3071M
---CPU---- --PhysMem-- --VirtMem-- --PhysNet--
ZONE USED %PART USED %USED USED %USED PBYTE %PUSE
[total] 0.03 0.65% 770M 37.6% 1316M 42.8% 86 0.00%
[system] 0.00 0.12% 551M 26.9% 1128M 36.7% - -
global 0.03 0.57% 151M 7.38% 132M 4.31% 86 0.00%
zone1 0.00 0.01% 67.5M 3.29% 54.9M 1.78% 0 0.00%

str 143








Module 7
Oracle Solaris 11 ZFS Enhancements


str 144



Oracle Solaris 11 new ZFS features

ZFS default root file system:
ZFS is the default root file system for the Oracle Solaris 11 operating system. With a ZFS root
pool, you do not have to worry about calculating slice sizes for /, /var, /export, and so on.

Migrating UFS and ZFS file systems
You can use the ZFS Shadow Migration feature to migrate data from old UFS and ZFS file
systems to new file systems, while simultaneously allowing access to and modification of
the new file systems during the migration process.

Splitting mirrored ZFS storage pools
A mirrored ZFS storage pool can be quickly cloned as a backup pool.



str 145



Oracle Solaris 11 new ZFS features

ZFS deduplication
Deduplication is the process of eliminating duplicate copies of data. ZFS deduplication saves
space and unnecessary I/O, which can lower storage costs and improve performance. ZFS
deduplication automatically avoids writing the same data twice on your drive by detecting
duplicate data blocks and keeping track of the multiple places where the same block is
needed.

Greater Microsoft interoperability with fully integrated CIFS
Oracle Solaris 11 includes fully integrated CIFS. The Common Internet File System (CIFS), also
known as SMB, is the standard for Microsoft file-sharing services. The Oracle Solaris CIFS
service provides the file sharing and MS-RPC administration services required for
Windows-like behavior and interoperability with CIFS clients. It includes many new features,
such as host-based access control, which allows a CIFS server to restrict access to specific
clients by IP address; ACLs (access control lists) on shares; and synchronization of
client-side offline file caching during reconnection. Microsoft ACLs are also supported in ZFS.

str 146



Oracle Solaris 11 new ZFS features

COMSTAR targets for iSER, SRP, and FCoE
COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables the
ability to turn any Oracle Solaris host into a target device that can be accessed over a storage
network. The COMSTAR framework makes it possible for all SCSI device types (tape, disk,
and the like) to connect to a transport (such as Fibre Channel) with concurrent access to all
logical unit numbers (LUNs) and a single point of management. Support for a number of
protocols has been added: iSCSI Extensions for RDMA (iSER) and SCSI RDMA Protocol (SRP)
for hosts that include an InfiniBand Host Channel Adapter, iSCSI, and Fibre Channel over
Ethernet (FCoE). Oracle Solaris DTrace probes have also been added to COMSTAR in the SCSI
Target Mode Framework (STMF) and SCSI Block Device (SBD).

ZFS snapshot differences
The zfs diff command, new in Oracle Solaris 11, allows you to list all file changes between
two snapshots of a ZFS file system.


str 147



ZFS Shadow Data Migration

Supported file system types:
- A local or remote ZFS file system to a target ZFS file system
- A local or remote UFS file system to a target ZFS file system

Shadow migration method:
- Create an empty ZFS file system.
- Set shadow property on an empty ZFS file system to point to file system to be
migrated.
- Data from the source file system is copied to the shadow file system.






str 148



Shadow Migration Considerations

Source file system must be set to read-only.
The target file system must be completely empty.
Migration continues across reboots.
Determine whether UID, GID, and ACL information is to be migrated.
Use the shadowstat command to monitor shadow migration activity.








str 149



Configuring ZFS Shadow Data Migration

root@s11-source:~# share -F nfs -o ro /export/UFS_data
root@s11-source:~# share -F nfs -o ro /export/ZFS_data
root@s11-target:~# pkg install shadow-migration
root@s11-target:~# svcadm enable shadowd
root@s11-target:~# zfs create -o \
shadow=nfs://s11-source/export/UFS_data \
rpool/export/shadow_UFS_data
root@s11-target:~# zfs create -o \
shadow=nfs://s11-source/export/ZFS_data \
rpool/export/shadow_ZFS_data

root@s11-target:~# shadowstat


str 150



Splitting a ZFS Mirrored Pool: Example

# zpool create newpool mirror c7t2d0 c7t3d0
# zpool split -n newpool newpool1
would create 'newpool1' with the following layout:
newpool1
c7t3d0
# zpool split newpool newpool1
# zpool import newpool1
# zpool status
pool: newpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
newpool ONLINE 0 0 0
c7t2d0 ONLINE 0 0 0


str 151



Identifying ZFS Snapshot Differences

Determine ZFS snapshot differences by using zfs diff command.
The zfs diff command gives a high-level description of the differences
between a snapshot and a descendent dataset.
The type of change is described along with the name of the file:

+ indicates that the file was added in the later dataset.
- indicates that file was removed in later dataset.
M indicates that the file was modified in the later dataset.
R indicates that the file was renamed in the later dataset.





str 152



Identifying ZFS Snapshot Differences: Example

# zfs snapshot newpool/mydata@before
# touch /newpool/mydata/newfile
# zfs snapshot newpool/mydata@after
# zfs list -r -t snapshot -o name,creation
NAME CREATION
newpool/mydata@before Mon Apr 6 14:54 2011
newpool/mydata@after Mon Apr 6 14:59 2011
rpool/ROOT/solaris@install Fri Mar 4 22:33 2011

# zfs diff newpool/mydata@before newpool/mydata@after
M /newpool/mydata/
+ /newpool/mydata/newfile






str 153



ZFS Deduplication Properties

One new ZFS file system property: dedup

Two new ZFS pool properties
dedupratio
dedupditto

str 154



ZFS Deduplication: Example

# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
newpool 1.07G 169K 1.07G 0% 1.00x ONLINE
newpool1 1.07G 130K 1.07G 0% 1.00x ONLINE
rpool 15.9G 4.12G 11.8G 25% 1.00x ONLINE

# zpool get all newpool|grep dedup
newpool dedupditto 0 default
newpool dedupratio 1.00x

# zfs get all newpool/mydata|grep dedup
newpool/mydata dedup off default

# zfs set dedup=on newpool/mydata
# zfs get all newpool/mydata|grep dedup
newpool/mydata dedup on local

str 155



ZFS Deduplication: Example

# cp /opt/ora/course_files/bigfile.zip /newpool/mydata/dir1
# cp /opt/ora/course_files/bigfile.zip /newpool/mydata/dir2
# cp /opt/ora/course_files/bigfile.zip /newpool/mydata/dir3

# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
newpool 1.07G 302M 794M 27% 3.00x ONLINE
newpool1 1.07G 130K 1.07G 0% 1.00x ONLINE
rpool 15.9G 4.12G 11.8G 25% 1.00x ONLINE

# zpool get all newpool|grep dedup
newpool dedupditto 0 default
newpool dedupratio 3.00x -

str 156



Common Multiprotocol SCSI Target (COMSTAR)


str 157



Configuring COMSTAR

Install the storage-server software package.

Create an iSCSI LUN.
Enable the stmf service.
Identify a disk volume to serve as the SCSI target.
Run the stmfadm utility to create a LUN.
Make the LUN viewable to the initiators.

Create the iSCSI target.
Enable the target service.
Run the itadm utility to create an iSCSI target.
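The LUN and target steps above can be sketched as a command sequence. This is a minimal sketch under assumptions: the backing volume name rpool/iscsi/lun0 and the truncated GUID 600144F0... are illustrative, not part of the course environment.

```shell
# Install the storage server software and enable the STMF framework.
Sol11# pkg install group/feature/storage-server
Sol11# svcadm enable stmf

# Back the LUN with a ZFS volume (name is an assumption), then register it.
Sol11# zfs create -V 10g rpool/iscsi/lun0
Sol11# stmfadm create-lu /dev/zvol/rdsk/rpool/iscsi/lun0

# Make the new LUN viewable to all initiators (no host restrictions).
Sol11# stmfadm add-view 600144F0...

# Enable the target service and create an iSCSI target with default settings.
Sol11# svcadm enable -r svc:/network/iscsi/target:default
Sol11# itadm create-target
```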


str 158



Configuring COMSTAR

Configure an iSCSI initiator.
Enable initiator service.
Configure target device discovery method.
Reconfigure /dev namespace to recognize iSCSI disk.

Access the iSCSI disk.
Use the format utility to identify the iSCSI LUN information.
Create a ZFS file system on the iSCSI LUN.
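A hypothetical initiator-side session for the steps above might look like this; the discovery address 192.168.1.10 and the discovered device name are assumptions for illustration.

```shell
# Enable the initiator service and SendTargets discovery against the target host.
Sol11# svcadm enable network/iscsi/initiator
Sol11# iscsiadm add discovery-address 192.168.1.10
Sol11# iscsiadm modify discovery --sendtargets enable

# Rebuild the /dev namespace so the iSCSI disk is visible, then inspect it.
Sol11# devfsadm -i iscsi
Sol11# format

# Put a ZFS pool and file system on the discovered LUN (device name assumed).
Sol11# zpool create sanpool c0t600144F0000000000000000000000000d0
Sol11# zfs create sanpool/data
```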





str 159



ZFS dataset encryption: Example

# zpool create -O encryption=on encryptedpool \
c7t4d0 c7t5d0
Enter passphrase for 'encryptedpool': password
Enter again: password
# zfs create encryptedpool/mysecrets
# zfs get encryption encryptedpool/mysecrets
NAME PROPERTY VALUE SOURCE
encryptedpool/mysecrets encryption on local

# zfs get keysource encryptedpool/mysecrets
NAME PROPERTY VALUE SOURCE
encryptedpool/mysecrets keysource passphrase,prompt
inherited from encryptedpool

str 160



ZFS dataset encryption: Example

# pktool genkey keystore=file \
outkey=/myzfskey keytype=aes keylen=256
Enter PIN for Sun Software PKCS#11 softtoken: password
# zfs create -o encryption=aes-256-ccm \
-o keysource=raw,file:///myzfskey newpool/mysecretdata

# zfs get encryption newpool/mysecretdata
NAME PROPERTY VALUE SOURCE
newpool/mysecretdata encryption aes-256-ccm local

# zfs get keysource newpool/mysecretdata
NAME PROPERTY VALUE SOURCE
newpool/mysecretdata keysource raw,file:///myzfskey local


str 161






Module 8
Oracle Solaris 11
Security Enhancements



str 162



RBAC Elements and Basic Concepts


str 163



RBAC Databases and the Naming Services

The /etc/security/policy.conf database contains authorizations, privileges, and rights
profiles that are applied to all users.


Extended user attributes database
(/etc/user_attr, /etc/user_attr.d)
Associates users and roles with authorizations, privileges, keywords, and rights profiles

Sol11# getent user_attr | more
root::::type=role;auths=solaris.*;profiles=All;audit_flags=lo\:no;lock_after_retries
=no;min_label=admin_low;clearance=admin_high
euler::::type=normal;audit_flags=^+pf,fw,lo\:^-no;auths=solaris.zone.manage/zoneA,
solaris.zone.login/zoneA,solaris.zone.clonefrom/zoneA;profiles=Zone Management,
System Administrator;roles=root;lock_after_retries=no
oracle::::type=normal;roles=root;audit_flags=^pf,fw,lo\:^-no



str 164



RBAC Databases and the Naming Services
Rights profile attributes database
(/etc/security/prof_attr, /etc/security/prof_attr.d)
Defines rights profiles, lists the profiles' assigned authorizations, privileges, and keywords,
and identifies the associated help file.

Sol11# getent prof_attr | more
Audited System Administrator:::Can perform most non-security administrative
tasks:profiles=Audit Review,Printer Management,Cron Management,Device Management,
File System Management,Mail Management,Maintenance and Repair,Media Backup,
Media Restore,Name Service Management,Network Management,Object Access Management,
Process Management,Shadow Migration Monitor,Software Installation,System Configuration,
User Management,Project Management,LDoms
Management;help=RtSysAdmin.html;audit_flags=fw,as\:no
Audited System User:::Audited User with login Oracle:audit_flags=^pf,fw,lo\:no
oracle:::User with login Oracle:audit_flags=^pf,fw,lo\:-no





str 165



Rights Profiles
Sol11# profiles -a
TPM Administration
NTP Management
All
Audit Configuration
Audit Control
Audit Review
. . .
Sol11# profiles -p "Zone Security" info
Found profile in files repository.
name=Zone Security
desc=Zones Virtual Application Environment Security
auths=solaris.zone.*,solaris.auth.delegate
help=RtZoneSecurity.html
cmd=/usr/sbin/txzonemgr
cmd=/usr/sbin/zonecfg

Sol11# profiles -p "Zone Management" info
Found profile in files repository.
name=Zone Management
desc=Zones Virtual Application Environment Administration
help=RtZoneMngmnt.html
cmd=/usr/sbin/zoneadm
cmd=/usr/sbin/zlogin

str 166



RBAC Databases and the Naming Services

Authorization attributes database
(/etc/security/auth_attr,/etc/security/auth_attr.d)
Defines authorizations and their attributes, and identifies the associated help file

Sol11# getent auth_attr | more
solaris.smf.read.pkg-server:::Read permissions for protected pkg(5) Server
Service Properties::
solaris.smf.value.pkg-sysrepo:::Change pkg(5) System Repository Service
values::

Execution attributes database
(/etc/security/exec_attr, /etc/security/exec_attr.d)
Identifies the commands with security attributes that are assigned to specific rights profiles

Sol11# getent exec_attr | more
Basic Solaris User:solaris:cmd:RO::/usr/bin/cdrecord.bin:privs=file_dac_read,
sys_devices,proc_lock_memory,proc_priocntl,net_privaddr
Desktop Configuration:solaris:cmd:RO::/usr/bin/scanpci:euid=0;privs=sys_config

str 167



Privileges

Sol11# ppriv -lv | more
contract_event
Allows a process to request critical events without
limitation.
Allows a process to request reliable delivery of all events on
any event queue.
contract_identity
Allows a process to set the service FMRI value of a process
contract template.
contract_observer
Allows a process to observe contract events generated by
contracts created and owned by users other than the process's
effective user ID.
Allows a process to open contract event endpoints belonging to
contracts created and owned by users other than the process's
effective user ID.
. . .

str 168



Status of Privileges in Zones







str 169



User Privileges

Sol11# profiles oracle
oracle:
Basic Solaris User
All
Sol11# roles oracle
No roles
oracle@solaris:~$ ppriv $$
24851: -bash
flags = <none>
E: basic
I: basic
P: basic
L: all
oracle@solaris:~$ ppriv -lv basic
file_link_any
Allows a process to create hardlinks to files owned by a uid
different from the process' effective uid.
file_read
Allows a process to read objects in the filesystem.


str 170



User Privileges

Sol11# roleadd -c "User Administrator role, local" -s /usr/bin/pfbash \
> -m -K profiles="User Security,User Management" useradm
80 blocks
Sol11# passwd useradm
New Password:
Re-enter new Password:
passwd: password successfully changed for useradm

Sol11# usermod -R +useradm oracle
Found user in files repository.

Sol11# su - oracle
Oracle Corporation SunOS 5.11 11.0 November 2011
oracle@solaris:~$ su - useradm
Password:
Oracle Corporation SunOS 5.11 11.0 November 2011
useradm@solaris:~$ id
uid=60007(useradm) gid=10(staff)
useradm@solaris:~$ /usr/sbin/useradd -md /export/home/user1 user1
80 blocks

str 171



Delegating Zone Administration

The auth property:

login (solaris.zone.login)
manage (solaris.zone.manage)
clone (solaris.zone.clonefrom)

The admin zone property

. . .
zonecfg:zone1> add admin
zonecfg:zone1:admin> set user=oracle
zonecfg:zone1:admin> set auths=login,manage,clonefrom
zonecfg:zone1:admin> end
. . .
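After the admin property is committed, the delegated user runs the authorized zone commands through a profile shell; a hypothetical session, reusing the zone and user from the example above, might be:

```shell
# The manage authorization lets oracle administer the zone via pfexec.
oracle@solaris:~$ pfexec zoneadm -z zone1 boot
# The login authorization permits zlogin access to the zone.
oracle@solaris:~$ pfexec zlogin zone1
```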

str 172



Auditing and Audit Events
Audit events represent auditable actions on a system. Audit events are listed in the
/etc/security/audit_event file.
# System Administrators: Do NOT modify or add events with an event number less than 32768.
# These are reserved by the system.
#
# 0 Reserved as an invalid event number.
# 1 - 2047 Reserved for the Solaris Kernel events.
# 2048 - 32767 Reserved for the Solaris TCB programs.
# 32768 - 65535 Available for third party TCB applications.
#
# Allocation of reserved kernel events:
# 1 - 511 allocated for Solaris
# 512 - 2047 (reserved but not allocated)
#
# Allocation of user level audit events:
# 2048 - 5999 (reserved but not allocated)
# 6000 - 9999 allocated for Solaris
# 10000 - 32767 (reserved but not allocated)
# 32768 - 65535 (Available for third party TCB applications)



0:AUE_NULL:indir system call:no
1:AUE_EXIT:exit(2):ps
2:AUE_FORK:fork(2):ps
3:AUE_OPEN:open(2) - place holder:no
4:AUE_CREAT:creat(2):fc
5:AUE_LINK:link(2):fc
6:AUE_UNLINK:unlink(2):fd
7:AUE_EXEC:exec(2):ps,ex
8:AUE_CHDIR:chdir(2):pm
9:AUE_MKNOD:mknod(2):fc
10:AUE_CHMOD:chmod(2):fm
11:AUE_CHOWN:chown(2):fm
. . .

str 173



Audit Events (cont.)

Sol11# cat /etc/security/audit_event
116:AUE_PFEXEC:execve(2) with pfexec enabled:ps,ex,ua,as,pf
. . .
6153:AUE_logout:logout:lo,ea
6154:AUE_telnet:login - telnet:lo
6155:AUE_rlogin:login - rlogin:lo
6158:AUE_rshd:rsh access:lo
6159:AUE_su:su:lo
6162:AUE_rexecd:rexecd:lo
6163:AUE_passwd:passwd:lo
6164:AUE_rexd:rexd:lo

Each audit event is connected to a system call or user command.

Sol11# auditrecord -e login
terminal login
program /usr/sbin/login See login(1)
/usr/dt/bin/dtlogin See dtlogin
event ID 6152 AUE_login
class lo (0x0000000000001000)
header
subject
[text] error message
Return


str 174



Audit Classes and Preselection
Each audit event belongs to one or more audit classes. Audit classes are containers for large
numbers of audit events. When we preselect a class to be audited, all events in that class are
recorded in the audit queue. Audit classes are defined in the /etc/security/audit_class file.

0x0000000000000000:no:invalid class
0x0000000000000001:fr:file read
0x0000000000000002:fw:file write
0x0000000000000004:fa:file attribute access
0x0000000000000008:fm:file attribute modify
0x0000000000000010:fc:file create
0x0000000000000020:fd:file delete
0x0000000000000040:cl:file close
0x0000000000000080:ft:file transfer
0x0000000000000100:nt:network
0x0000000000000200:ip:ipc
0x0000000000000400:na:non-attributed
0x0000000000000800:frcp:forced preselection
0x0000000000001000:lo:login or logout
0x0000000000004000:ap:application
0x0000000000008000:cy:cryptographic
0x0000000000010000:ss:change system state
0x0000000000020000:as:system-wide administration
0x0000000000040000:ua:user administration
0x0000000000070000:am:administrative (meta-class)
0x0000000000080000:aa:audit utilization
0x00000000000f0000:ad:old administrative (meta-class)
0x0000000000100000:ps:process start/stop
0x0000000000200000:pm:process modify
0x0000000000300000:pc:process (meta-class)
0x0000000000400000:xa:X - server access
0x0000000000800000:xp:X - privileged/administrative operations
0x0000000001000000:xc:X - object create/destroy
0x0000000002000000:xs:X - operations that always silently fail, if bad
0x0000000003c00000:xx:X - all X events (meta-class)
0x0000000040000000:io:ioctl
0x0000000080000000:ex:exec
0x0000000100000000:ot:other
0xffffffffffffffff:all:all classes (meta-class)
0x0100000000000000:pf:profile command


str 175



Audit policy
Audit policy determines the auditing options that you can enable or disable at your site.
These options include whether to record certain kinds of audit data and whether to suspend
auditable actions when the audit queue is full.

Display the audit policy:

Sol11# auditconfig -getpolicy
configured audit policies = cnt
active audit policies = cnt


cnt
When disabled, this policy blocks a user or application from running. The blocking happens
when audit records cannot be added to the audit trail because the audit queue is full.

When enabled, this policy allows the event to complete without an audit record being
generated.
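Policies are toggled with + and - prefixes; for example, to trade the default cnt behavior for halting when an audit record cannot be written, you could run the following in the global zone (a sketch, not part of the course lab):

```shell
# Disable cnt, then enable its complementary policy ahlt.
Sol11# auditconfig -setpolicy -cnt
Sol11# auditconfig -setpolicy +ahlt
# Verify the configured and active policies.
Sol11# auditconfig -getpolicy
```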


str 176



Audit policy (cont.)
perzone
When disabled, the policy maintains a single audit configuration for the system. One audit
service runs in the global zone. Audit events in specific zones can be located in the audit
records if the zonename audit token was preselected. The disabled option is useful when we
have no special reason to maintain a separate audit log, queue, and daemon for each zone.

When enabled, the policy maintains a separate audit configuration, audit queue, and audit
logs for each zone. An audit service runs in each zone. This policy can be enabled in the
global zone only. No policies can be set from a local zone unless the perzone policy is first
set from the global zone. The enabled option is useful when we cannot monitor our system
effectively by simply examining audit records with the zonename audit token.

zonename
When disabled, this policy does not include a zonename token in audit records. The disabled
option is useful when we do not need to track audit behavior per zone.

When enabled, this policy includes a zonename token in every audit record. The enabled
option is useful when we want to isolate and compare audit behavior across zones by
post-selecting records according to zone.

str 177



Managing Audit Policy
Sol11# auditconfig -lspolicy
policy string description:
ahlt halt machine if it can not record an async event
all all policies
arge include exec environment args in audit recs
argv include exec command line args in audit recs
cnt when no more space, drop recs and keep a cnt
group include supplementary groups in audit recs
none no policies
path allow multiple paths per event
perzone use a separate queue and auditd per zone
public audit public files
seq include a sequence number in audit recs
trail include trailer token in audit recs
windata_down include downgraded window information in audit recs
windata_up include upgraded window information in audit recs
zonename include zonename token in audit recs


No policies can be set from a local zone unless the perzone policy is first set from the global zone.
Do not configure the system-wide audit settings perzone or ahlt in a non-global zone.
Note - We are not required to enable the audit service in the global zone.

Sol11# auditconfig -setpolicy +perzone
Sol11# auditconfig -getpolicy
configured audit policies = cnt,perzone
active audit policies = cnt,perzone

str 178



Plugins
An audit plugin is a module that transfers the audit records in the queue to a specified location.
The audit_binfile plugin creates binary audit files.
The audit_remote plugin sends binary audit records to a remote repository.
The audit_syslog plugin summarizes selected audit records in the syslog logs.

Sol11# auditconfig -getplugin
Plugin: audit_binfile (active)
Attributes: p_dir=/var/audit;p_fsize=0;p_minfree=1;
Plugin: audit_syslog (inactive)
Attributes: p_flags=;
Plugin: audit_remote (inactive)
Attributes: p_hosts=;p_retries=3;p_timeout=5;

p_minfree indicates the percentage of free space required on the target p_dir. If free space
falls below this threshold, the audit daemon auditd invokes the shell script
/etc/security/audit_warn. If no threshold is specified, the default is 1%.
p_dir specifies the list of directories where the audit files will be created.
p_fsize defines the maximum size that an audit file can reach before it is automatically
closed and a new audit file is opened. By default, the size is not limited. The value
specified must be higher than 500 KB and lower than 16 exabytes (EB).
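These attributes are changed with auditconfig -setplugin; a brief sketch (the size and directory values are illustrative assumptions):

```shell
# Cap each binary audit file at 4 MB; a new file is opened when the cap is hit.
Sol11# auditconfig -setplugin audit_binfile active p_fsize=4M
# Write audit files to a dedicated directory (path is an assumption).
Sol11# auditconfig -setplugin audit_binfile active p_dir=/var/audit/host1
# Verify the plugin attributes.
Sol11# auditconfig -getplugin audit_binfile
```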

str 179



Managing Audit Queue
Sol11# auditconfig -getqctrl
no configured audit queue hiwater mark
no configured audit queue lowater mark
no configured audit queue buffer size
no configured audit queue delay
active audit queue hiwater mark (records) = 100
active audit queue lowater mark (records) = 10
active audit queue buffer size (bytes) = 8192
active audit queue delay (ticks) = 20
Sol11# auditconfig -setqbufsz 8192
Sol11# auditconfig -t -setqbufsz 12288
Sol11# auditconfig -setqdelay 20
Sol11# auditconfig -t -setqdelay 25
Sol11# auditconfig -getqctrl
no configured audit queue hiwater mark
no configured audit queue lowater mark
configured audit queue buffer size (bytes) = 8192
configured audit queue delay (ticks) = 20
active audit queue hiwater mark (records) = 100
active audit queue lowater mark (records) = 10
active audit queue buffer size (bytes) = 12288
active audit queue delay (ticks) = 25


auditconfig [ -t ] -setqctrl hiwater lowater bufsz interval


str 180



System Audit Characteristics
Preselected classes for attributable events:
Sol11# auditconfig -getflags
active user default audit flags = lo(0x1000,0x1000)
configured user default audit flags = lo(0x1000,0x1000)

Sol11# auditconfig -setflags pf,lo
user default audit flags = pf,lo(0x100000000001000,0x100000000001000)
Sol11# auditconfig -getflags
active user default audit flags = pf,lo(0x100000000001000,0x100000000001000)
configured user default audit flags = pf,lo(0x100000000001000,0x100000000001000)

Preselected classes for non-attributable events:
Sol11# auditconfig -getnaflags
active non-attributable audit flags = lo(0x1000,0x1000)
configured non-attributable audit flags = lo(0x1000,0x1000)

Sol11# auditconfig -setnaflags pf,na
non-attributable audit flags = pf,na(0x100000000000400,0x100000000000400)
Sol11# auditconfig -getnaflags
active non-attributable audit flags = pf,na(0x100000000000400,0x100000000000400)
configured non-attributable audit flags = pf,na(0x100000000000400,0x100000000000400)


str 181



Audit flags use the format always-audit:never-audit. A ^+ prefix indicates that success is
not to be audited; a ^- prefix indicates that a failure is not to be audited.

User's Audit Characteristics
Display the audit classes that are preselected for existing users:
Sol11# useradd -md /export/home/oracle oracle
Sol11# passwd oracle
Sol11# userattr audit_flags root
lo:no
Sol11# userattr audit_flags oracle

Preselect the attributable classes:
Sol11# usermod -K audit_flags=^pf,fw,lo:^-no oracle
Found user in files repository.
Sol11# userattr audit_flags oracle
^+pf,fw,lo:^-no

Sol11# auditconfig -getpinfo 23946      (23946 is the PID of oracle's login shell)
audit id = oracle(60005)
process preselection mask = pf,lo,fw(0x100000000001002,0x100000000001002)
terminal id (maj,min,host) = 13644,131094,unknown(192.168.1.180)
audit session id = 231343543

Sol11# cat /etc/user_attr | grep oracle
oracle::::type=normal;audit_flags=^pf,fw,lo\:^-no

str 182



User's Audit Characteristics
To set audit flags for a rights profile, use the profiles command.

Sol11# profiles -p oracle
profiles:oracle> set name="Audited System User"
profiles:Audited System User> set always_audit=^pf,fw,lo
profiles:Audited System User> set never_audit=-no
profiles:Audited System User> set desc=" User with login Oracle"
profiles:oracle> info
name=oracle
desc=User with login Oracle
always_audit=^pf,fw,lo
never_audit=-no
profiles:oracle> set
set always_audit= set defaultpriv= set help= set name=" set privs=
set auths= set desc=" set limitpriv= set never_audit= set profiles="
profiles:oracle> verify
profiles:oracle> commit
profiles:oracle> exit

Sol11# profiles -p oracle -S ldap
ERROR:ldap client not configured. Unable to access the ldap repository.



str 183



Managing Audit
Sol11# svcs auditd
STATE STIME FMRI
online 18:23:20 svc:/system/auditd:default
Sol11# auditconfig -getcond
audit condition = auditing
Sol11# svcadm disable auditd
Sol11# auditconfig -getcond
audit condition = noaudit
Sol11# ls /var/audit/
20120715075726.20120715080037.solaris 20120718161511.20120721161926.solaris
20120715080654.20120718154956.solaris 20120721162320.20120721163310.solaris

Sol11# svcadm enable auditd
Sol11# auditconfig -getcond
audit condition = auditing
Sol11# ls /var/audit/
20120715075726.20120715080037.solaris 20120718161511.20120721161926.solaris
20120721163629.not_terminated.solaris
20120715080654.20120718154956.solaris 20120721162320.20120721163310.solaris

str 184



Managing Audit

oracle@solaris:~$ touch /plik
touch: cannot create /plik: Permission denied
oracle@solaris:~$ touch /tmp/cos

Sol11# auditreduce -d 20120721 -u oracle -c fw | praudit -x | more

<record version="2" event="open(2) - write,creat,trunc" modifier="fp:fe"
host="solaris" iso8601="2012-07-21 21:16:23.982 +02:00">
<path>/plik</path><subject audit-uid="oracle" uid="oracle" gid="staff"
ruid="oracle" rgid="staff" pid="24568" sid="120761579" tid="13655 22
192.168.1.180"/>
<use_of_privilege result="failed use of priv">ALL</use_of_privilege>
<return errval="failure: Permission denied" retval="-1"/></record>

<record version="2" event="open(2) - write,creat,trunc" host="solaris"
iso8601="2012-07-21 21:16:28.595 +02:00">
<path>/tmp/cos</path><attribute mode="100644" uid="oracle" gid="staff"
fsid="594" nodeid="115885168" device="18446744073709551615"/>
<subject audit-uid="oracle" uid="oracle" gid="staff" ruid="oracle"rgid="staff"
pid="24569" sid="120761579" tid="13655 22 192.168.1.180"/>
<return errval="success" retval="3"/>
</record>
