
SuSE Linux Path Failover

MDADM
HDLM
RAIDTOOLS

• HDS Support
• Well tested
• Free of charge
• Stable
• Updates from Internet
• Standard interface
• SAN Boot
• Many algorithms
• Kernel-independent
• Path Health Checking
• Distribution-independent
• Auto Failback
• Processor-independent
• Recognises 9500V Default-Ctlr
• Supports Emulex and Qlogic
• Autoscan

• No SAN Boot
• Only Active/Passive
• Costs money
• Manual configuration
• Very kernel-dependent (dot-level)
• No Auto Failback
• Distribution-dependent
• No Path Health Checking
• Processor-dependent
• LVM filter as of Kernel 2.6
• No source code
• Loss of Linux support
• Not up-to-date with manual driver updates

• SuSE currently favored
• Support in SLES 9 SP 1 (2.6.5-7.139), GA end of July
• Later no kernel dot-level dependence
• mdadm and raidtools use the MD device driver
• mdadm offers the better interface
• Auto Failback is scriptable
MULTIPATH TOOLS
Qlogic Driver Failover

MULTIPATH TOOLS:
• SuSE/NOVELL Support as of SLES 9 SP2 for HDS Storage
• Free of charge
• Updates from Internet
• SAN Boot
• Several Algorithms
• Kernel-independent
• Distribution-independent
• Processor-independent
• Autoscan
• Supports Emulex and Qlogic
• Only Active/Passive
• SuSE Support only as of SLES 9 SP 2
• Badly documented
• Errors in documentation
• Reboot needed after partitioning
• Problems with partitioned devices
• Stable
• OS transparent
• SuSE Support

Qlogic Driver Failover:
• SAN Boot
• Several Algorithms
• Kernel-independent
• Distribution-independent
• Processor-independent
• Autoscan
• Auto Failback
• LVM transparence
• Path priority only with SANSurfer
• Only Qlogic HBA
• Static load-balancing parameters do not work
• Stable path recognition only possible after reboot
• Very stable
• No experience
Basic Qlogic Setup

SLES 9
• Qlogic driver is installed and started automatically
• Module parameters:
# vi /etc/modprobe.conf.local
add: options qla2xxx qlport_down_retry=1 ql2xfailover=0 ql2xretrycount=5 ql2xplogiabsentdevice=1
# modinfo -p qla2300
-> show possible parameters
# cat /proc/scsi/qla2300/1
-> show active parameters
• In YAST / Hardware / Harddisk-Controller / Qlogic, deactivate "Load module in initrd"
• Start module from RAMdisk automatically at boot:
# vi /etc/sysconfig/kernel
INITRD_MODULES="mptscsih reiserfs qla2xxx qla2300"
# mkinitrd

SLES 8
• Qlogic module is installed, but not started automatically
• Module parameters:
# vi /etc/modules.conf
add: options qla2300 qlport_down_retry=1
• Manual start (check modules.conf):
# depmod -a
# modprobe qla2300
• Automatic start (check modules.conf):
# vi /etc/init.d/boot.local
add: modprobe qla2300 qlport_down_retry=1
• Automatic start from RAMdisk (check modules.conf):
# vi /etc/sysconfig/kernel
INITRD_MODULES="mptscsih reiserfs qla2300"
# mk_initrd
# reboot

Scanning
# echo "scsi-qlascan" > /proc/scsi/driver-name/adapter-id
-> check "/var/log/messages" after the rescan
# cat /proc/scsi/scsi
# echo "scsi add-single-device 0 1 2 3" > /proc/scsi/scsi
-> SCSI mid layer re-scans
-> "0 1 2 3" = "HOST CHANNEL ID LUN"
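The "scsi add-single-device" line follows a fixed "HOST CHANNEL ID LUN" order. As a small sketch (the `scan_lun` helper is hypothetical, not part of the original setup), the string can be built by a function so the four positions are never mixed up:

```shell
# Hypothetical helper: builds the "scsi add-single-device" command string
# in the fixed "HOST CHANNEL ID LUN" order expected by /proc/scsi/scsi.
scan_lun() {
  host=$1; channel=$2; id=$3; lun=$4
  printf 'scsi add-single-device %s %s %s %s\n' "$host" "$channel" "$id" "$lun"
}

# On a live system this would be redirected into the proc file:
#   scan_lun 0 1 2 3 > /proc/scsi/scsi
scan_lun 0 1 2 3
```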
LVM
• In SLES 8 and 9, MDs and LVM VGs are only started automatically at reboot
if the Qlogic driver is loaded from the RAMDisk. Otherwise the MDs
and VGs have to be started manually.
HDLM Installation/Configuration (HDLM for Kernel 2.4)

Check HDLM Release Notes for supported Kernel versions:


# uname -a
# rpm -q k_deflt   or   # rpm -q k_smp

# mkdir /etc/opt/DynamicLinkManager
# mount /media/cdrom (License CD)

# cp /media/cdrom/*.plk /var/tmp/hdlm_license
OR
# echo "A8GPQRS3CDEIJK012C73" > /etc/opt/DynamicLinkManager/dlm.lic_key

# umount /media/cdrom

# mount /media/cdrom (Program CD)


# cd /media/cdrom
# ./installhdlm

a) # insmod sddlmadrv
b) # insmod sddlmfdrv
c) # /etc/init.d/DLMManager start
d) # /opt/DynamicLinkManager/bin/dlmcfgmgr -r

Instead of steps a) to d), a reboot is better.

# /opt/DynamicLinkManager/bin/dlnkmgr set -afb on -intvl 5
-> Auto Failback on, interval 5 min
# /opt/DynamicLinkManager/bin/dlnkmgr set -pchk on -intvl 5
-> Path Health Check on, interval 5 min
# /opt/DynamicLinkManager/bin/dlnkmgr set -ellv 2
-> Set the error log level to 2, otherwise there are too many entries
HDLM LVM Setup

# fdisk /dev/sddlmad
-> Set Linux Partition ID to 0x83

# vi /etc/raidtab
raiddev /dev/md0
raid-level linear
chunk-size 32
nr-raid-disks 1
persistent-superblock 1
device /dev/sddlmad1
raid-disk 0

# mkraid -R /dev/md0
# vgscan
# pvcreate /dev/md0
# vgcreate vg01 /dev/md0
# vgchange -an vg01
# raidstop /dev/md0
# raidstart /dev/md0
# vgchange -ay vg01
# lvcreate -L 1G -n lvol1 vg01
# mkfs -t ext3 /dev/vg01/lvol1
# mount /dev/vg01/lvol1 /mnt/fs1

Do this after a SLES 8 reboot if the Qlogic module is not in the RAMDisk:
# raidstart /dev/md0
# vgscan
# vgchange -ay vg01
# mount /dev/vg01/lvol1 /mnt/fs1
HDLM Administration

# dlmcfgmgr -v
HDevName Management Device Host Channel Target Lun
/dev/sddlmaa configured /dev/sdc 0 0 0 2
/dev/sdl 1 0 1 2
/dev/sddlmac configured /dev/sdk 1 0 1 1
/dev/sdb 0 0 0 1
# dlmcfgmgr -r
-> Reconfigure after LUN add
# dlmcfgmgr -u all
-> Check after LUN delete
# dlmcfgmgr -o <device> | all
-> exclude
# dlmcfgmgr -i <device> | all
-> include

# cd /opt/DynamicLinkManager/bin
# ./dlnkmgr view -drv
PathID HDevName Device LDEV
000000 sddlmaa /dev/sdc 9970/9980.50118.0D0F
000001 sddlmab /dev/sdd 9970/9980.50118.0D10
000002 sddlmaa /dev/sde 9970/9980.50118.0D0F
000003 sddlmab /dev/sdf 9970/9980.50118.0D10

# cat /proc/mdstat
-> Check that the MD devices for the LVM are active
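The `dlnkmgr view -drv` listing above lends itself to simple post-processing. As a sketch (using the sample output from this page as a here-document; on a live system you would pipe `dlnkmgr view -drv` directly into awk), counting the paths per HDevName shows whether every device still has both of its paths:

```shell
# Save the sample "dlnkmgr view -drv" listing shown above.
cat <<'EOF' > /tmp/drv.sample
PathID HDevName Device LDEV
000000 sddlmaa /dev/sdc 9970/9980.50118.0D0F
000001 sddlmab /dev/sdd 9970/9980.50118.0D10
000002 sddlmaa /dev/sde 9970/9980.50118.0D0F
000003 sddlmab /dev/sdf 9970/9980.50118.0D10
EOF

# NR>1 skips the header line; column 2 is the HDevName.
# Each HDevName should report two paths if both HBAs see the LUN.
awk 'NR>1 { paths[$2]++ } END { for (d in paths) print d, paths[d] }' /tmp/drv.sample | sort
```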
HDLM Deinstallation

# umount X
# vgchange -an vgX
# raidstop /dev/mdX
# rpm -e HDLM
# rpm -e HDLMhelp-en
# reboot
MDADM Installation

Contained in the RedHat and SuSE distributions. Updates on the Internet:

- http://www.cse.unsw.edu.au/~neilb/source/mdadm/
- download mdadm-1.9.0.tgz
# gzip -d mdadm-1.9.0.tgz
# tar xvf mdadm-1.9.0.tar
# cd mdadm-1.9.0
# make
# make install
# mdadm -V
mdadm - v1.9.0 - 04 February 2005

LVM2 (Kernel 2.6) needs the following filter setting in "/etc/lvm/lvm.conf":

filter = [ "a|/dev/md.*|", "r/.*/" ]


MDADM Configuration

Display the paths by using the HORCM inqraid command:


# ls /dev/sd* | inqraid -CLI | grep CL
sdc CL1-A 266 124 - s/s/ss 0000 5:00-00 DF600F
sdd CL1-A 266 125 - s/s/ss 0000 5:01-00 DF600F
sde CL1-A 266 126 - s/s/ss 0000 5:01-00 DF600F
sdf CL1-A 266 127 - s/s/ss 0000 5:01-00 DF600F
sdh CL1-A 50118 769 - s/s/ss 9973 5:01-03 OPEN-9
sdi CL1-A 50118 770 - s/s/ss 9973 5:01-03 OPEN-9
sdj CL1-A 50118 771 - s/s/ss 9973 5:01-03 OPEN-9
sdl CL2-A 50118 769 - s/s/ss 9973 5:01-03 OPEN-9
sdm CL2-A 50118 770 - s/s/ss 9973 5:01-03 OPEN-9
sdn CL2-A 50118 771 - s/s/ss 9973 5:01-03 OPEN-9
sdo CL2-A 266 124 - s/s/ss 0000 5:00-00 DF600F
sdp CL2-A 266 125 - s/s/ss 0000 5:01-00 DF600F
sdq CL2-A 266 126 - s/s/ss 0000 5:01-00 DF600F
sdr CL2-A 266 127 - s/s/ss 0000 5:01-00 DF600F

# fdisk /dev/sdc
-> Set up partition 1 with ID 0xfd
# fdisk /dev/sdo
-> reading and saving the partition table is enough, since /dev/sdo is the alternate path of /dev/sdc

# mdadm --create --verbose /dev/md0 --level=multipath --raid-devices=2 /dev/sdc1 /dev/sdo1


-> RAID create

# echo 'DEVICE /dev/sd*1' > /etc/mdadm.conf


# mdadm --detail --scan | grep UUID >> /etc/mdadm.conf
-> creates /etc/mdadm.conf; scan filter on partition 1
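Taken together, the two commands above produce an /etc/mdadm.conf of roughly the following shape (the ARRAY line comes from the `--detail --scan` output; the UUID shown here is a placeholder, not a real value):

```
DEVICE /dev/sd*1
ARRAY /dev/md0 level=multipath num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```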
MDADM Administration

# mdadm --stop --scan
-> Stop all software RAIDs
# mdadm --assemble /dev/md0
-> Start /dev/md0
-> /etc/mdadm.conf entry has to exist
# mdadm --assemble /dev/md0 /dev/sdg1 /dev/sdk1
-> Start without /etc/mdadm.conf entry
# mdadm --zero-superblock /dev/sdg1
-> Erase Superblock = Erase RAID
# mdadm /dev/md0 --remove /dev/sdc1
-> Delete faulty Path
# mdadm /dev/md0 --add /dev/sdc1
-> Reactivate deleted path
# cat /proc/mdstat
-> Status
# mdadm --detail /dev/md0
-> Show Status
# detect_multipath
-> Tool to discover the paths; only works for Lightning/USP
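The /proc/mdstat check above can also be scripted. A sketch, assuming the usual kernel 2.6 /proc/mdstat layout (the sample content below is illustrative, not taken from a real system):

```shell
# Illustrative /proc/mdstat content for one MD multipath device;
# on a live system read /proc/mdstat directly instead.
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [multipath]
md0 : active multipath sdc1[0] sdo1[1]
      10484608 blocks [2/2] [UU]
unused devices: <none>
EOF

# Print each md device and its state ("active" when the MD is running).
awk '/^md[0-9]/ { print $1, $3 }' /tmp/mdstat.sample
```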
MULTIPATH TOOLS General

• Supplied with RedHat and SuSE Distributions

• Tested with SuSE Enterprise Server SP 2

• Updates and information on the Internet: http://christophe.varoqui.free.fr

• SUSE Support for HDS DF400, DF500 and DF600 as of SLES 9 SP 2

• Setup /etc/multipath.conf to control Lightning and USP Active/Active

• Partitions NOT supported; only recognised after reboot


MULTIPATH TOOLS Activation 1

• For Qlogic 2xxx adapters, set the following parameters in "/etc/modprobe.conf.local":


• # vi /etc/modprobe.conf.local
• Add: options qla2xxx qlport_down_retry=1 ql2xfailover=0 ql2xretrycount=5
ql2xplogiabsentdevice=1

• The Qlogic driver must be loaded in RamDisk :


• # vi /etc/sysconfig/kernel
• Add: INITRD_MODULES=„mptscsih reiserfs qla2xxx qla2300“
• # mk_initrd
• Lilo needs to be recreated if you use it:
• # lilo

• For LVM2, the following filter settings are necessary:


• # vi /etc/lvm/lvm.conf
• Change to: filter = [ "a|/dev/disk/by-name/.*|", "r|.*|" ]
• Change to: types = [ "device-mapper", 253 ]

• In HotPlug these changes need to be made:


• # vi /etc/sysconfig/hotplug
• Change to: HOTPLUG_USE_SUBFS=no
MULTIPATH TOOLS Activation 2

• You need to update /etc/multipath.conf if you need support for new LUN types (e.g. USP OPEN-V or
Command Devices):
devnode_blacklist {
devnode cciss
devnode fd
devnode hd
devnode md
devnode sr
devnode scd
devnode st
devnode ram
devnode raw
devnode loop
devnode sda # internal Bootdisk
}

devices {
device {
vendor "HITACHI "
product "DF600F "
path_grouping_policy failover
path_checker tur
}
device {
vendor "HITACHI "
product "OPEN-9 "
path_grouping_policy multibus
path_checker tur
}
}
MULTIPATH TOOLS Activation 3

• Start Multipath:
• # /etc/init.d/boot.multipath start
• # /etc/init.d/multipathd start

• Activate automatically during boot:


• # insserv boot.multipath multipathd
• Note: You may have to activate other things with the RunLevel Editor
• (boot.scsidev, boot.udev, boot.device-mapper, boot.lvm, …)

• Create virtual devices


• # multipath -v2 -d
• -> shows all paths, without activating them
• # multipath
• -> creates the virtual devices in /dev/disk/by-name

• There is a bug (bugzilla.novell.com #102937) that leaves partitioned devices inaccessible after reboot. The
reason is that boot.multipath is started earlier than the hotplug manager. As a workaround, move the hotplug manager to
RunLevel B:

YAST – System – Runlevel Editor – Expert Mode – Hotplug only to Runlevel „B“
MULTIPATH TOOLS Administration

• Show Path Status:


• # multipath -l

• Delete all paths and virtual devices (do not do this online!):
• # multipath -F

• Check that the multipath daemon is still running:


• # /etc/init.d/multipathd status

• Switch the daemon on and off:


• # chkconfig multipathd on/off

• Show Device Mapper Devices :


• # dmsetup ls

• Show UDEV information:


• # udevinfo -d
MULTIPATH TOOLS Prioritizer for Thunder

With an active/passive system like the Thunder, usually only the first path is used.
The second HBA is standby.

If you want static load balancing, you can use the Matthias prioritizer.

Copy the Linux HORCM command "inqraid" and the shell script "pp_HDS_ODD_EVEN.sh" to "/sbin/" and set
the file permissions.

This prioritizer must be added to "/etc/multipath.conf" under "prio_callout".

After this change, and after deleting and recreating the paths with "multipath -F" and "multipath",
you can use "multipath -l" to see which paths will be used (indicated by "best").
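The rule the prioritizer implements can be sketched in isolation before reading the full script (assumption: a higher prio_callout value marks the preferred path): even LDEV numbers prefer controller CL1, odd ones prefer CL2, so the LUNs alternate between the two controllers.

```shell
# Sketch of the odd/even priority rule: prio 1 = preferred path,
# prio 0 = standby. Even LDEV numbers land on CL1, odd ones on CL2.
prio() {
  ctrl=$1; ldev=$2
  if [ "$ctrl" = "CL1" ]; then
    if [ $((ldev % 2)) -eq 0 ]; then echo 1; else echo 0; fi
  else
    if [ $((ldev % 2)) -eq 0 ]; then echo 0; else echo 1; fi
  fi
}

prio CL1 124   # even LDEV, controller 1 -> preferred (1)
prio CL2 124   # even LDEV, controller 2 -> standby   (0)
prio CL1 125   # odd LDEV,  controller 1 -> standby   (0)
prio CL2 125   # odd LDEV,  controller 2 -> preferred (1)
```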
MULTIPATH TOOLS Prioritizer Shell Script for Thunder
#! /bin/sh
#
# pp_HDS_ODD_EVEN.sh - prio_callout script for multipath-tools.
# Called with the major:minor pair of a path device (the %d argument
# from /etc/multipath.conf). Prints 1 for the preferred controller
# (even LDEVs on CL1, odd LDEVs on CL2) and 0 for the standby one.

PATH=/bin:/usr/bin:/sbin:/usr/sbin

MAJOR_MINOR=$1
MAJOR=$(echo $MAJOR_MINOR | awk -F : '{print $1}')
MINOR=$(echo $MAJOR_MINOR | awk -F : '{print $2}')

# Find the /dev/sd* node that carries this major/minor number.
ls -l /dev/sd* | grep $MAJOR | grep $MINOR | {

while read LINE
do
	MIN=$(echo $LINE | awk '{print $6}')
	MAJ=$(echo $LINE | awk '{print $5}' | awk -F , '{print $1}')
	if [ "$MINOR" = "$MIN" ] && [ "$MAJOR" = "$MAJ" ]
	then
		DEVICE=$(echo $LINE | awk '{print $10}')
		# Some date formats shift the device name one field to the left.
		BOOTSHIFT=$(echo $LINE | awk '{print $9}' | awk -F / '{print $2}')
		if [ "$BOOTSHIFT" = "dev" ]
		then
			DEVICE=$(echo $LINE | awk '{print $9}')
		fi
		break
	fi
done

# Ask inqraid (HORCM) for the controller (CL1/CL2) and the LDEV number.
CTRL=$(inqraid -CLI $DEVICE | sed -n '2,$p' | awk '{print $2}' | awk -F - '{print $1}')
LDEV=$(inqraid -CLI $DEVICE | sed -n '2,$p' | awk '{print $4}')

# Odd LDEV on CL1 -> standby.
if [ "$CTRL" = "CL1" ] && [ "$(($LDEV%2))" = "1" ]
then
	echo 0
	exit 0
fi

# Even LDEV on CL1 -> preferred.
if [ "$CTRL" = "CL1" ] && [ "$(($LDEV%2))" = "0" ]
then
	echo 1
	exit 0
fi

# Odd LDEV on CL2 -> preferred.
if [ "$CTRL" = "CL2" ] && [ "$(($LDEV%2))" = "1" ]
then
	echo 1
	exit 0
fi

# Even LDEV on CL2 -> standby.
if [ "$CTRL" = "CL2" ] && [ "$(($LDEV%2))" = "0" ]
then
	echo 0
	exit 0
fi

exit 1; }
MULTIPATH TOOLS Prioritizer for Thunder: /etc/multipath.conf changes
# cat /etc/multipath.conf

devnode_blacklist {
devnode cciss
devnode fd
devnode hd
devnode md
devnode sr
devnode scd
devnode st
devnode ram
devnode raw
devnode loop
devnode sda # internal boot disk
}

devices {
device {
vendor "HITACHI "
product "DF600F "
path_grouping_policy failover
prio_callout "/sbin/pp_HDS_ODD_EVEN.sh %d"
path_checker tur
}
device {
vendor "HITACHI "
product "OPEN-9 "
path_grouping_policy multibus
path_checker tur
}
}
MULTIPATH TOOLS Deactivation

• Delete all paths and virtual devices (do not do this online!):
• # multipath -F

• Remove from boot sequence:

• # insserv -r boot.multipath multipathd

• Stop daemon:
• # chkconfig multipathd off
Qlogic Driver Failover: Storage Configuration for SLES 9 SP 2

Lightning / USP: - Host Mode „Standard“


- „dmesg“ shows this as „XP device“ (HP sponsored?)
- The driver recognises multipathing even with different WWNNs

Thunder 9500V: - Host Connection Mode 1 "Standard"

- Host Connection Mode 2 "Same Node Name Mode"
- The driver recognises multipathing only if each storage port has the same WWNN
Qlogic Driver Failover: Watch it!

• Multipath Tools must be deactivated.


• In YAST / Hardware / Hard-Disk-Controller / Qlogic, deactivate “Module load to initrd” and delete Module
parameters.
• Remove Qlogic driver parameters from „/etc/modprobe.conf“.
• The parameter "ql2xlbType=1" should activate static load balancing for all LUNs. Unfortunately it only works
from the SANSurfer GUI.
• Only loading the modules (qla2xxx, qla2300) from the RAMDisk is fast enough at boot for automatic
VolumeGroup activation and file system check to take place.
• SANSurfer GUI configurations are not passed on to RAMDisk.
• The SANSurfer CLI cannot change multipath settings.
Qlogic Driver Failover: Server Configuration for SLES 9 SP 2 without SANSurfer GUI/CLI

Set the Module parameters :


# vi /etc/modprobe.conf.local
-> Add: options qla2xxx qlport_down_retry=1 ql2xretrycount=5 ql2xfailover=1 ql2xlbType=1
ql2xplogiabsentdevice=1

Stop modules:
# modprobe -r qla2300
# modprobe -r qla2xxx

Create module dependencies:

# depmod -a

Start Module after boot :


# modprobe qla2300

Start Module automatically after boot:


# vi /etc/sysconfig/kernel
-> Edit: MODULES_LOADED_ON_BOOT=„qla2xxx qla2300“

Start Module automatically with boot from RAMdisk :


# vi /etc/sysconfig/kernel
-> Edit: INITRD_MODULES=„mptscsih reiserfs qla2xxx qla2300“
# mkinitrd

Check and mount LVM Filesystem at Boot (RAMDisk only):


# vi /etc/fstab
-> Add: /dev/vg01/lv01 /mnt/fs01 ext3 defaults 0 2
Qlogic Driver Failover: Driver Check

# ls /proc/scsi/qla2xxx
# cat /proc/scsi/qla2xxx/<Adapter-ID>
-> Now also shows "Driver version 8.00.02-fo"

# tail -f /var/log/messages
-> Shows activation and deactivation of paths

# modinfo qla2300
# modinfo qla2xxx

# modinfo -p qla2xxx
-> Shows all possible settings with explanations
Qlogic Driver Failover: SANSurfer GUI Setup

• Download "sansurfer2.0.30b17_linux_install.bin" from www.qlogic.com

# chmod 777 sansurfer2.0.30b17_linux_install.bin


-> turn on execute rights

• In X, click on the binary to start the installation


• Choose „ALL GUIs and ALL Agents“
• Choose „Enable QLogic Failover Configuration“

• Start SANSurfer Client with Click on „/opt/Qlogic_Corporation/SANsurfer/SANsurfer“


• Connect to "localhost"
• Password is "config"
• Configure both HBAs (Point-to-Point, 2 Gbit/sec, Failover, ...)
• Menu "Configure – LUNs – Load Balance – All LUNs" activates static load balancing
• Reboot

• SANSurfer adds the line "ConfigRequired=1" to "/etc/modprobe.conf.local"

• All settings can then be found in the file "/etc/qla2xxx.conf"

NOTE: The settings do not go to the RAMdisk!

TIP: You can administer Linux from a Windows SANSurfer Client via LAN.
