
Basic Commands

The basic commands for managing services under SMF (Service Management Facility) control are svcs, svccfg and svcadm. The man pages for these commands are a good source of detailed information. inetadm can be used to monitor services under inetd control. Many commands require referencing the service identifier, also known as an FMRI.

svcs
svcs -a: Lists all services currently installed, including their state.
svcs -d FMRI: Lists dependencies for FMRI.
svcs -D FMRI: Lists dependents for FMRI.
svcs -l FMRI: Provides a long listing of information about FMRI, including dependency information.
svcs -p FMRI: Shows relationships between services and processes.
svcs -x: Explains why a service is not available.
svcs -xv: Verbose debugging information.

svcadm
svcadm clear FMRI: Clear faults for FMRI.
svcadm disable FMRI: Disable FMRI.
svcadm enable FMRI: Enable FMRI.
svcadm refresh FMRI: Force FMRI to re-read its configuration file.
svcadm restart FMRI: Restart FMRI.
svcadm enable -t / disable -t FMRI: The change is temporary (does not persist past a boot).
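For example, a minimal sketch of these commands against a concrete service (assuming the standard ssh instance, svc:/network/ssh:default):

# svcs -l svc:/network/ssh:default (long listing, including dependencies and the log file)
# svcadm restart svc:/network/ssh:default (restart the service)
# svcs -xv ssh (explain why the service is not running, with verbose detail)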

To make configuration changes to a non-inetd service, edit the configuration file, then run svcadm restart.

svccfg
svccfg: Enter interactive mode.
svccfg -s FMRI setenv ENV_VARIABLE value: Set an environment variable for FMRI. Follow with svcadm refresh and svcadm restart.
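A minimal sketch of the setenv sequence above, using the sendmail instance and TZ purely as an illustrative variable:

# svccfg -s svc:/network/smtp:sendmail setenv TZ UTC (set the variable in the service's environment)
# svcadm refresh svc:/network/smtp:sendmail
# svcadm restart svc:/network/smtp:sendmail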

inetadm
inetadm -l FMRI: Displays properties for FMRI.
inetadm -m FMRI property_name=value: Sets a property for FMRI.

In particular, the "exec" value for an inetd-controlled service is the command line executed for that service by SMF. It may be desirable, for example, to change this value to add logging or other command-line flags. To convert an inetd.conf file to SMF format, run:

inetconv -i /etc/inet/inetd.conf
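For instance, a hedged sketch of the exec change described above (the ftp service and the added -l logging flag are illustrative assumptions):

# inetadm -l svc:/network/ftp:default | grep exec (view the current exec value)
# inetadm -m svc:/network/ftp:default exec="/usr/sbin/in.ftpd -l" (add a logging flag)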

Service Identifiers

Services are identified by their FMRI. (This stands for Fault Managed Resource Identifier.) An example is:

svc:/system/system-log:default

Some commands do not require the full FMRI if there is no ambiguity. Legacy init scripts have FMRIs starting with lrc, for example:

lrc:/etc/rcS_d/S35cacheos_sh

Converted inetd services have a syntax like one of the following, depending on whether or not they are rpc services:

svc:/network/service-name/protocol
svc:/network/rpc-service-name/rpc_protocol

SMF Service Starts

The svc.startd daemon is the master process starter and restarter for SMF. It tracks service state and manages dependencies. Services that are still managed through legacy init scripts appear in SMF (as the lrc entries above), but they are only monitored for status; other SMF facilities may not work for them.

Maintenance

If a service is in the maintenance state, first make sure that all associated processes have died:

svcs -p FMRI

Next, for each process displayed by the above:

pkill -9 PID

Consult the appropriate logs in /var/svc/log to check for errors and perform any needed maintenance. Then restore the service:

svcadm clear FMRI

Scripts

The scripts that implement the startups and shutdowns are located in their usual place in /etc/init.d for the lrc services, or in /lib/svc/method for most of the other services. Other locations may be specified for a particular service. To track down the script locations for a particular service, do something like the following:

# svccfg -s smtp
svc:/network/smtp> list
:properties
sendmail
svc:/network/smtp> select sendmail
svc:/network/smtp:sendmail> list
:properties
svc:/network/smtp:sendmail> listprop *exec
start/exec    astring    "/lib/svc/method/smtp-sendmail start"
stop/exec     astring    "/lib/svc/method/smtp-sendmail stop %{restarter/contract}"
refresh/exec  astring    "/lib/svc/method/smtp-sendmail refresh"

Boot Messages

Boot messages are much less verbose than previously. To get verbose output, boot with boot -v or boot -m verbose.

svcadm can be used to change the run levels. The FMRIs associated with the different run levels are:

S: milestone/single-user:default
2: milestone/multi-user:default
3: milestone/multi-user-server:default
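For example (a minimal sketch), to move the system to the multi-user milestone:

# svcadm milestone milestone/multi-user:default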

Run levels can be displayed with who -r.

SMF Profiles

SMF profiles are XML files in /var/svc/profile which list sets of service instances to be enabled or disabled. Different SMF profiles can be used; they are all stored in /var/svc/profile. To apply a different one, run:

svccfg apply /var/svc/profile/desired_profile.xml

The local profile /var/svc/profile/site.xml allows local customizations. This profile is applied after the standard profiles. To make a copy of the current profile for editing, run:

svccfg extract > profile-file.xml

Service Configuration Repository

The repository stores persistent configuration information and SMF runtime data for services. Each service's manifest is an XML-formatted text file located in /var/svc/manifest. The information from the manifests is imported into the repository through svccfg import or during a reboot. This is covered in the svccfg, svcprop, service_bundle and svc.configd man pages. If the repository is corrupted, it can be restored from an automatic backup using the /lib/svc/bin/restore_repository command.

The svcadm refresh and svcadm restart commands make a snapshot active. Automatic snapshots are taken for initial (import of the manifest), running (when service methods are executed) and start (last successful start).

Revert to a Snapshot

The procedure to revert to a snapshot is the following:

Run svccfg in interactive mode: svccfg
At the svc:> prompt, select the desired service with a full FMRI: select FMRI
List the available snapshots: listsnap
Revert to the desired snapshot:

revert desired_snapshot_label
Quit out of the svccfg interactive mode: quit
Update the service configuration repository information:

svcadm refresh FMRI
svcadm restart FMRI

Boot Troubleshooting

To step through the SMF portion of the boot process, start with:

boot -m milestone=none

Then step through the milestones for the different boot levels:

svcadm milestone svc:/milestone/single-user:default
svcadm milestone svc:/milestone/multi-user:default
svcadm milestone svc:/milestone/multi-user-server:default

Several things should be examined if a service fails to start:

Is the service in maintenance mode? (svcs -l FMRI) If so, why? Check the log file specified in the svcs -l FMRI | grep logfile output, and run svcs -xv FMRI. If the problem has been resolved, clear the fault with svcadm clear FMRI.
Check for service dependencies with svcs -d FMRI. The output from svcs -l distinguishes between optional and mandatory dependencies.
Check the startup properties with svcprop -p start FMRI.

The startup for the process can be trussed to get some visibility into where it is failing by inserting a truss into the start or exec statement for the service. To do this, add truss -f -a -o /path/service-truss.out to the beginning of the start or exec statement with an svccfg -s statement, as sketched below.
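A hedged sketch of that truss insertion (the FMRI, method path and output file are illustrative; the quoting follows the svccfg setprop form):

# svccfg -s svc:/network/smtp:sendmail setprop start/exec = astring: '"truss -f -a -o /var/tmp/sendmail-truss.out /lib/svc/method/smtp-sendmail start"'
# svcadm refresh svc:/network/smtp:sendmail
# svcadm restart svc:/network/smtp:sendmail (the truss output then appears in /var/tmp/sendmail-truss.out)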

Solaris Fault Management

The Solaris Fault Management Facility is designed to be integrated with the Service Management Facility to provide a self-healing capability for Solaris 10 systems. The fmd daemon is responsible for monitoring several aspects of system health. The fmadm config command shows the current configuration for fmd. The Fault Manager logs can be viewed with fmdump -v and fmdump -e -v. fmadm faulty lists any devices flagged as faulty. fmstat shows statistics gathered by fmd.

Fault Management

With Solaris 10, Sun has implemented a daemon, fmd, to track and react to fault management events. In addition to sending traditional syslog messages, the system sends binary telemetry events to fmd for correlation and analysis. Solaris 10 implements default fault management operations for several pieces of hardware in SPARC systems, including CPU, memory, and I/O bus events. Similar capabilities are being implemented for x64 systems.

Once the problem is defined, failing components may be offlined automatically without a system crash, or other corrective action may be taken by fmd. If a service dies as a result of the fault, the Service Management Facility (SMF) will attempt to restart it and any dependent processes.

The Fault Management Facility reports error messages in a well-defined and explicit format. Each error code is uniquely specified by a Universal Unique Identifier (UUID) tied to a document on the Sun web site at http://www.sun.com/msg/. Resources are uniquely identified by a Fault Managed Resource Identifier (FMRI), and each Field Replaceable Unit (FRU) has its own FMRI. FMRIs are associated with one of the following conditions:

ok: Present and available for use.
unknown: Not present or not usable, perhaps because it has been offlined or unconfigured.
degraded: Present and usable, but one or more problems have been identified.
faulted: Present but not usable; unrecoverable problems have been diagnosed and the resource has been disabled to prevent damage to the system.

The fmdump -V -u eventid command can be used to pull information on the type and location of the event. (The eventid is included in the text of the error message provided to syslog.) The -e option can be used to pull error log information rather than fault log information.

Statistical information on the performance of fmd can be viewed via the fmstat command. In particular, fmstat -m modulename provides information for a given module.

The fmadm command provides administrative support for the Fault Management Facility. It allows us to load and unload modules and to view and update the resource cache. The most useful capabilities of fmadm are provided through the following subcommands:

config: Display the configuration of component modules.
faulty: Display faulted resources. With the -a option, list cached resource information. With the -i option, list persistent cache identifier information instead of the most recent state and UUID.
load /path/module: Load the module.
unload module: Unload the module; the module name is the same as reported by fmadm config.
rotate logfile: Schedule rotation for the specified log file. Used with the logadm configuration file.
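A hedged sketch of putting these commands together when investigating a reported fault (the UUID is illustrative; it would come from the syslog message or from fmadm faulty):

# fmadm faulty (list resources currently flagged as faulty, with their event UUIDs)
# fmdump -V -u 7b2d35e2-7a1c-4c2f-8bfc-90a1d3b7c2aa (detailed fault-log record for that event)
# fmdump -e -v (underlying error-log telemetry)
# fmstat (per-module statistics gathered by fmd)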

The first step to setting up mirroring using DiskSuite is to install the DiskSuite packages and any necessary patches on systems prior to Solaris 9. SVM is part of the base system in Solaris 9. The latest recommended version of DiskSuite is 4.2 for systems running Solaris 2.6 and Solaris 7, and 4.2.1 for Solaris 8.

There are currently three packages and one patch necessary to install DiskSuite 4.2. They are:

SUNWmd (Required)
SUNWmdg (Optional GUI)
SUNWmdn (Optional SNMP log daemon)
106627-19 (obtain latest revision)

The packages should be installed in the same order as listed above. Note that a reboot is necessary after the install, as new drivers will be added to the Solaris kernel.

For DiskSuite 4.2.1, install the following packages:

SUNWmdu (Commands)
SUNWmdr (Drivers)
SUNWmdx (64-Bit Drivers)

SUNWmdg (Optional GUI)
SUNWmdnr (Optional log daemon configs)
SUNWmdnu (Optional log daemon)

For Solaris 2.6 and 7, to make life easier, be sure to update your PATH and MANPATH variables to add DiskSuite's directories. Executables reside in /usr/opt/SUNWmd/sbin and man pages in /usr/opt/SUNWmd/man. In Solaris 8, DiskSuite files were moved to "normal" system locations (/usr/sbin), so path updates are not necessary.

The Environment

In this example we will be mirroring two disks, both on the same controller. The first disk will be the primary disk and the second will be the mirror. The disks are:

Disk 1: c0t0d0
Disk 2: c0t1d0

The partitions on the disks are presented below. There are a few items of note here. Each disk is partitioned exactly the same; this is necessary to properly implement the mirrors. Slice 2, commonly referred to as the 'backup' slice, which represents the entire disk, must not be mirrored. There are situations where slice 2 is used as a normal slice; however, this author would not recommend doing so. The three unassigned partitions on each disk are configured to be 10MB each. These 10MB slices will hold the DiskSuite State Database Replicas, or metadbs. More information on the state database replicas is presented below. In DiskSuite 4.2 and 4.2.1, a metadb occupies only 1034 blocks (517KB) of space. In SVM, they occupy 8192 blocks (4MB). This can lead to many problems during an upgrade if the slices used for the metadb replicas are not large enough to support the new, larger databases.

Disk 1:
c0t0d0s0: /
c0t0d0s1: swap
c0t0d0s2: backup
c0t0d0s3: unassigned
c0t0d0s4: /var
c0t0d0s5: unassigned
c0t0d0s6: unassigned
c0t0d0s7: /export

Disk 2:
c0t1d0s0: /
c0t1d0s1: swap
c0t1d0s2: backup
c0t1d0s3: unassigned
c0t1d0s4: /var
c0t1d0s5: unassigned
c0t1d0s6: unassigned
c0t1d0s7: /export

The State Database Replicas

The state database replicas serve a very important function in DiskSuite. They are the repositories of information on the state and configuration of each metadevice (a logical device created through DiskSuite is known as a metadevice). Having multiple replicas is critical to the proper operation of DiskSuite. There must be a minimum of three replicas. DiskSuite requires at least half of the replicas to be present in order to continue to operate.

51% of the replicas must be present in order to reboot. Replicas should be spread across disks and controllers where possible. In a three-drive configuration, at least one replica should be on each disk, thus allowing for a one-disk failure. In a two-drive configuration, such as the one we present here, there must be at least two replicas per disk. If there were only three and the disk which held two of them failed, there would not be enough information for DiskSuite to function and the system would panic.

Here we create our state replicas using the metadb command:

# metadb -a -f /dev/dsk/c0t0d0s3
# metadb -a /dev/dsk/c0t0d0s5
# metadb -a /dev/dsk/c0t0d0s6
# metadb -a /dev/dsk/c0t1d0s3
# metadb -a /dev/dsk/c0t1d0s5
# metadb -a /dev/dsk/c0t1d0s6

The -a and -f options used together create the initial replica. The -a option attaches a new database device and automatically edits the appropriate files.

Initializing Submirrors

Each mirrored metadevice contains two or more submirrors. The metadevice gets mounted by the operating system rather than the original physical device. Below we walk through the steps involved in creating metadevices for our primary filesystems. Here we create the two submirrors for the / (root) filesystem, as well as a one-way mirror between the metadevice and its first submirror:

# metainit -f d10 1 1 c0t0d0s0
# metainit -f d20 1 1 c0t1d0s0
# metainit d0 -m d10

The first two commands create the two submirrors. The -f option forces the creation of the submirror even though the specified slice is a mounted filesystem. The next two arguments, 1 1, specify the number of stripes in the metadevice and the number of slices that make up the stripe; in a mirroring situation, this should always be 1 1. Finally, we specify the slice that makes up the submirror. The third command creates the mirror metadevice d0 with d10 as its only submirror.

After mirroring the root partition, we need to run the metaroot command. This command will update the root entry in /etc/vfstab with the new metadevice, as well as add the appropriate configuration information to /etc/system. Omitting this step is one of the most common mistakes made by those unfamiliar with DiskSuite. If you do not run the metaroot command before you reboot, you will not be able to boot the system!

# metaroot d0

Next, we continue to create the submirrors and initial one-way mirrors for the metadevices which will replace the swap, /var and /export partitions:

# metainit -f d11 1 1 c0t0d0s1
# metainit -f d21 1 1 c0t1d0s1
# metainit d1 -m d11
# metainit -f d14 1 1 c0t0d0s4
# metainit -f d24 1 1 c0t1d0s4
# metainit d4 -m d14
# metainit -f d17 1 1 c0t0d0s7
# metainit -f d27 1 1 c0t1d0s7

# metainit d7 -m d17

Updating /etc/vfstab

The /etc/vfstab file must be updated at this point to reflect the changes made to the system. The / partition will have already been updated through the metaroot command run earlier, but the system needs to know about the new devices for swap, /var and /export. The entries in the file will look something like the following:

/dev/md/dsk/d1 -               -       swap -  no  -
/dev/md/dsk/d4 /dev/md/rdsk/d4 /var    ufs  1  yes -
/dev/md/dsk/d7 /dev/md/rdsk/d7 /export ufs  1  yes -

Notice that the device paths for the disks have changed from the normal style /dev/dsk/c#t#d#s# and /dev/rdsk/c#t#d#s# to the new metadevice paths, /dev/md/dsk/d# and /dev/md/rdsk/d#.

The system can now be rebooted. When it comes back up it will be running off of the new metadevices; use the df command to verify this. In the next step we will attach the second half of the mirrors and allow the two drives to synchronize.

Attaching the Mirrors

Now we must attach the second half of the mirrors. Once the mirrors are attached, an automatic synchronization process begins to ensure that both halves of each mirror are identical. The progress of the synchronization can be monitored using the metastat command. To attach the submirrors, issue the following commands:

# metattach d0 d20
# metattach d1 d21
# metattach d4 d24
# metattach d7 d27
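As a quick sketch of monitoring the resync mentioned above:

# metastat d0 (the mirror state shows a resync-in-progress percentage until synchronization completes)
# metastat | grep -i resync (quick check across all mirrors)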

Final Thoughts

With an eye towards recovery in case of a future disaster, it may be a good idea to find out the physical device path of the root partition on the second disk in order to create an Open Boot PROM (OBP) device alias to ease booting the system if the primary disk fails. To find the physical device path, do the following:

# ls -l /dev/dsk/c0t1d0s0

This should return something similar to the following:

/sbus@3,0/SUNW,fas@3,8800000/sd@1,0:a

Using this information, create a device alias with an easy-to-remember name such as altboot. To create this alias, do the following in the Open Boot PROM:

ok nvalias altboot /sbus@3,0/SUNW,fas@3,8800000/sd@1,0:a

It is now possible to boot off of the secondary device in case of failure using boot altboot from the OBP.

Important Notes for Installing VERITAS Volume Manager (VxVM)

* Check what VERITAS packages are currently installed:

# pkginfo | grep -i VRTS

* Make sure the boot disk has at least two free partitions with 2048 contiguous sectors (512 bytes each) available.
# prtvtoc /dev/rdsk/c0t0d0

* Make sure to save the boot disk information by using the prtvtoc command.
# prtvtoc /dev/rdsk/c0t0d0 > /etc/my_boot_disk_information

* Make sure to have a backup copy of the /etc/system and /etc/vfstab files.

* Add the packages to your system.
# cd location_of_your_packages
# pkgadd -d . VRTSvxvm VRTSvmman VRTSvmdoc

* Add the license key by using vxlicinst.
# vxlicinst

* Then run the Volume Manager installation program.
# vxinstall

* Check the .profile file to ensure the following paths:
# PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSobgui/bin:/usr/sbin:/opt/VRTSob/bin
# MANPATH=$MANPATH:/opt/VRTS/man
# export PATH MANPATH

The VERITAS Enterprise Administrator (VEA) provides a Java-based graphical user interface for managing Veritas Volume Manager (VxVM). Important notes for how to set up VEA:

* Install the VEA software.
# cd location_of_your_packages
# pkgadd -a ../scripts/VRTSobadmin -d . VRTSob VRTSobgui VRTSvmpro VRTSfspro

* Start the VEA server if it is not running.
# vxsvc -m (check whether the VEA server is running)
# vxsvc (start the VEA server)

* Start the Volume Manager user interface.
# vea &

The most handy Volume Manager commands:

* # vxdiskadm
* # vxdctl enable (force the VxVM configuration daemon to rescan for disks; see devfsadm)
* # vxassist (create a VxVM volume)
* # vxdisk list rootdisk (displays information about the header contents of the root disk)
* # vxdg list rootdg (displays information about the content of the rootdg disk group)
* # vxprint -g rootdg -thf | more (displays information about volumes in rootdg)

To create VERITAS Volume Manager objects, you may use any of the following three methods (this article emphasizes the CLI method):

* VEA

* Command Line Interface (CLI)
* vxdiskadm

Steps to create a disk group:
* # vxdg init accountingdg disk01=c1t12d0

Steps to add a disk to a disk group:
* View the status of the disk.
# vxdisk list --or-- # vxdisk -s list
* Add one uninitialized disk to the free disk pool.
# vxdisksetup -i c1t8d0
* Add the disk to a disk group called accountingdg.
# vxdg init accountingdg disk01=c1t8d0
# vxdg -g accountingdg adddisk disk02=c2t8d0

Steps to split objects between disk groups:
* # vxdg split sourcedg targetdg object

Steps to join disk groups:
* # vxdg join sourcedg targetdg

Steps to remove a disk from a disk group:
* Remove the disk01 disk from the accountingdg disk group.
# vxdg -g accountingdg rmdisk disk01

Steps to remove a device from the free disk pool:
* Remove the c1t8d0 device from the free disk pool.
# vxdiskunsetup c2t8d0

Steps to manage a disk group:
* To deport and import the accountingdg disk group:
# vxdg deport accountingdg
# vxdg -C import accountingdg
# vxdg -h other_hostname deport accountingdg
* To destroy the accountingdg disk group:
# vxdg destroy accountingdg

Steps to create a VOLUME:
* # vxassist -g accountingdg make payroll_vol 500m
* # vxassist -g accountingdg make gl_vol 1500m

Steps to mount a VOLUME:
If using ufs:
* # newfs /dev/vx/rdsk/accountingdg/payroll_vol

* # mkdir /payroll
* # mount -F ufs /dev/vx/dsk/accountingdg/payroll_vol /payroll

If using VxFS:
* # mkfs -F vxfs /dev/vx/rdsk/accountingdg/payroll_vol
* # mkdir /payroll
* # mount -F vxfs /dev/vx/dsk/accountingdg/payroll_vol /payroll

Steps to resize a VOLUME:
* # vxresize -g accountingdg payroll_vol 700m

Steps to remove a VOLUME:
* # vxedit -g accountingdg -rf rm payroll_vol

Steps to create a striped and mirrored VOLUME:
* # vxassist -g accounting make ac_vol 500m layout=stripe,mirror

Steps to create a raid5 VOLUME:
* # vxassist -g accounting make ac_vol 500m layout=raid5 ncol=5 disk01

Display the VOLUME layout:
* # vxprint -rth

Add or remove a mirror on an existing VOLUME:
* # vxassist -g accountingdg mirror payroll_vol
* # vxplex -g accountingdg -o rm dis payroll_plex01

Add a dirty region log to an existing VOLUME and specify the disk to use for the DRL:
* # vxassist -g accountingdg addlog payroll_vol logtype=drl disk04

Move an existing VOLUME from its disk group to another disk group:
* # vxdg move accountingdg new_accountingdg payroll_vol

To start a VOLUME:
* # vxvol start volume_name

Steps to encapsulate and mirror the root disk:
* Use vxdiskadm to place another disk in rootdg with the same size or greater.
* Set the eeprom variable to enable VxVM to create a device alias in the OpenBoot PROM.
# eeprom use-nvramrc?=true
* Use vxdiskadm to mirror the root volumes. (Option 6)
* Test that you can boot from the mirror disk.
# vxmend off rootvol-01 (disable the boot disk plex)
# init 6
ok devalias (check available boot disk aliases)
ok boot vx-disk01

Write a script that uses the for statement to do some work:

# for i in 0 1 2 3 4

> do
> cp -r /usr/sbin /mydir${i}
> mkfile 5m /mydir${i}/file${i}
> dd if=/mydir/my_input_file of=/myother_dir/my_output_file &
> done

Tuesday, May 20, 2008

Veritas Volume Manager - Quick Start Command Reference

Setting Up Your File System
Make a VxFS file system - mkfs -F vxfs [generic_options] [-o vxfs_options] char_device [size]
Mount a file system - mount -F vxfs [generic_options] [-o vxfs_options] block_device mount_point
Unmount a file system - umount mount_point
Determine file system type - fstyp [-v] block_device
Report free blocks/inodes - df -F vxfs [generic_options] [-o s] mount_point
Check/repair a file system - fsck -F vxfs [generic_options] [-y|-Y] [-n|-N] character_device

Online Administration
Resize a file system - fsadm [-b newsize] [-r raw_device] mount_point
Dump a file system - vxdump [options] mount_point
Restore a file system - vxrestore [options] mount_point
Create a snapshot file system - mount -F vxfs -o snapof=source_block_device[,snapsize=size] destination_block_device snap_mount_point
Create a storage checkpoint - fsckptadm [-nruv] create ckpt_name mount_point
List storage checkpoints - fsckptadm [-clv] list mount_point
Remove a checkpoint - fsckptadm [-sv] remove ckpt_name mount_point
Mount a checkpoint - mount -F vxfs -o ckpt=ckpt_name pseudo_device mount_point
Unmount a checkpoint - umount mount_point
Change checkpoint attributes - fsckptadm [-sv] set [nodata|nomount|remove] ckpt_name
Upgrade the VxFS layout - vxupgrade [-n new_version] [-r raw_device] mount_point
Display layout version - vxupgrade mount_point

Defragmenting a file system
Report on directory fragmentation - fsadm -D mount_point
Report on extent fragmentation - fsadm -E [-l largesize] mount_point
Defragment directories - fsadm -d mount_point
Defragment extents - fsadm -e mount_point
Reorganize a file system to support files > 2GB - fsadm -o largefiles mount_point

Intent Logging, I/O Types, and Cache Advisories
Change default logging behavior - mount -F vxfs [generic_options] -o delaylog|tmplog|nodatainlog|blkclear block_device mount_point
Change how VxFS handles buffered I/O operations - mount -F vxfs [generic_options] -o mincache=closesync|direct|dsync|unbuffered|tmpcache block_device mount_point
Change how VxFS handles I/O requests for files opened with O_SYNC and O_DSYNC - mount -F vxfs [generic_options] -o convosync=closesync|direct|dsync|unbuffered|delay block_device mount_point

Quick I/O
Enable Quick I/O at mount - mount -F vxfs -o qio mount_point

Disable Quick I/O - mount -F vxfs -o noqio mount_point
Treat a file as a raw character device - filename::cdev:vxfs:
Create a Quick I/O file through a symbolic link - qiomkfile [-h header_size] [-a] [-s size] [-e|-r size] file
Get Quick I/O statistics - qiostat [-i interval] [-c count] [-l] [-r] file
Enable cached Quick I/O for all files in a file system - vxtunefs -s -o qio_cache_enable=1 mount_point
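For instance, a hedged sketch of the snapshot mount listed above (the volume names, snapsize and mount point are illustrative; snap_vol is a separate volume that holds the snapshot data):

# mkdir /payroll_snap
# mount -F vxfs -o snapof=/dev/vx/dsk/accountingdg/payroll_vol,snapsize=50000 /dev/vx/dsk/accountingdg/snap_vol /payroll_snap
(back up or inspect the frozen image under /payroll_snap, then unmount it)
# umount /payroll_snap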
