Dell Training Tool | MD3000i
Introduction

Welcome

This course is designed to teach the basics of the MD3000i iSCSI direct and network attached storage enclosure. The course is to be used in conjunction with the…

RTS Dates: Americas - 08/31/2007
Departments: GSS L&D
Authors: John Ingle, Ed Boehm, David Spencer
Contributing Sources: Dell GSS L&D, PG Engineering, Remote Installation teams

This image illustrates the MD3000i.

Contacting Dell: To contact Dell regarding issues with this training material, click the following link: Feedback.

Information in this document is subject to change without notice.

©2007 Dell Inc. All rights reserved. Rev. A21

Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.

Trademarks used in this text: Dell, the DELL logo, and Dimension are trademarks of Dell Inc.; Intel, Pentium, and Celeron are registered trademarks of Intel Corporation.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
Getting Started

Using this Material

The following sections provide information to help you effectively use this training material.

Navigating the Material

To navigate through this course, select topics using either the left navigation menu or the Previous/Next buttons at the top right corner of each page.

This course is designed to be completed in the order in which the topics are presented. However, refresher training can be accomplished in any desired order.

Important Symbols

The following symbols are used to emphasize important notations in this material:

A NOTE indicates important information that helps you make better use of your computer.

A WARNING indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

A CAUTION indicates a potential for property damage, personal injury, or death.

A Customer Experience (CE) Tip indicates important information…

Browser Requirements

Dell's online courses are designed to work with Internet Explorer® 5.x and later, Netscape® versions 6.x and later, and Mozilla® 1.0.1. If you experience problems…

Additional Required Software

Adobe® Acrobat® (.pdf) files require Acrobat Reader®. You can download Acrobat Reader and get additional information from Adobe's website: http://www.adobe.com
Course Introduction

The following sections provide general information about the course goal, objectives, delivery method, and prerequisites.

Goal

The goal of this course is to prepare you to effectively and efficiently troubleshoot technical issues with the PowerVault MD3000i.

Objectives

Second Generation PowerVault MD3000i

The second generation objectives focus on what is new and different about the second generation PowerVault MD3000i RAID controller firmware update and hardware.

Given the information in this course and available tools, you will be able to:

• identify the new features of the second generation PowerVault MD3000i firmware and software.
• explain how to successfully migrate an MD3000 with all previous versions of firmware to the second generation firmware version without losing data.
• identify what features have been enhanced with the release of second generation PowerVault MD3000i RAID controller firmware.
• explain how to successfully install and configure a PowerVault MD3000i in either a full or mixed IPv6 environment as well as an IPv4 environment.
• explain how to successfully configure the PowerVault MD3000i with RAID 6 LUNs.
• explain how to successfully migrate a RAID 5 LUN to a RAID 6 LUN without losing data.
• identify the requirements for successfully replacing a Generation 1 or Generation 2 controller on the PowerVault MD3000i.
• identify the differences between a first and second generation controller.
• explain what happens when swapping a second generation PowerVault MD3000i RAID controller with a first generation PowerVault MD3000i controller.
• explain what happens when replacing a first generation PowerVault MD3000i controller with a second generation PowerVault MD3000i RAID controller.

NOTE:
You will find all of the training information for the second generation of the PowerVault MD3000i RAID controller in the Generation 2 section of this online material:

• New Features of the firmware update
• Changes with the firmware update
• Troubleshooting information specific to the firmware update
• Firmware Update and Downrev Process

First Generation PowerVault MD3000i

Given the information in this course and available tools, you will be able to:

• List the type and specifications of physical disks supported in the MD3000i Storage Array.
• List the Field-Replaceable Units (FRUs) used in the MD3000i Storage Array.
• Locate the removal/replacement procedures for all available FRUs.
• Demonstrate the proper cabling to be used.
• Name the functions and features of the MD3000i Storage Array and its components.
• Describe the system architecture of the MD3000i Storage Array.
• Describe the steps necessary to set up the iSCSI initiator.
• Describe the steps necessary to initiate an iSCSI session.
• Describe the steps necessary to set up CHAP on the host and storage array.
• Describe a host group.
• List the steps necessary to create a host group.
• List the steps necessary to create a disk group.
• List the steps necessary to create virtual disks within a disk group.
• List the steps necessary to perform host-to-virtual disk mapping.
• Describe the activation of the snapshot premium feature software.
• Describe the activation of the virtual disk copy software.
• Describe the recovery procedure for the premium feature key.
• Describe the recovery procedure for a lost or corrupted configuration.
• Describe the recovery procedure for an accidentally deleted virtual disk.
• Use the command line to accomplish routine as well as advanced configuration of the disk groups, virtual disks, and storage array.
• Describe the steps necessary to reset the administrator password from the serial interface.
• Describe the steps necessary to collect and save the support files for troubleshooting.
• Describe the steps necessary for the replacement of a RAID controller module.

Delivery Method

This original first generation curriculum was designed to be delivered by an instructor in a classroom setting. However, since the second generation curriculum…

Prerequisites

• SAS Technology
• MD3000 Basics
System Overview

The MD3000i RAID Enclosure is a 3U external RAID storage array that is leveraged from the MD1000 chassis and shares the same internal and back-end architecture, operating in either a single controller (simplex) or an active/active (duplex) redundant controller configuration. The MD3000i can be daisy-chained with up to two MD1000 expansion enclosures.

The MD3000i enclosure supports up to fifteen 3.5" SAS physical disks and has redundant power supplies and fans in the base chassis. The MD3000i is also able to use redundant ports to provide true redundancy of the virtual disk paths. Either controller is able to take ownership of a virtual disk as changes in path availability dictate.

The MD3000i is positioned as the next evolution in iSCSI attached external storage, because of its dual port external RAID controllers and ability to switch-connect…

The MD3000i allows multiple servers access to a common, shared pool of storage without the cost and complexities of dealing with a storage infrastructure network and fabrics.

MD3000i Support Matrix

Click the link below for the latest systems support matrix information:

MD3000i Support Matrix

NOTE:
The support matrix at this link is updated once every 3 months only. This page will be updated with each new supported operating system (OS) release…

MD3000i RAID Enclosure Description

• iSCSI connected support for up to sixteen Windows®, Red Hat Linux, and SUSE Linux host servers via Fast or Gigabit Ethernet cables
• Support for redundant or non-redundant configuration
• Two redundant, hot-pluggable active/active RAID controller modules
• Two redundant, hot-pluggable power supply/fan modules
• 512 MB of mirrored cache on each RAID controller module
• Battery backup in each RAID controller module that protects against cache data loss for up to 72 hours
• Online firmware updates for the RAID controller modules, NVSRAM, and physical disks
• Optional snapshot software
• Optional virtual disk copy
• Multi-path failover for redundant configurations, which automatically reroutes I/O activity from a failed, offline, or removed RAID controller module to its peer
• Support for RAID levels 0, 1, 5, and 10
• Support for 255 virtual disks with a maximum capacity of 2 TB each
• Stripe element sizes of 8, 16, 32, 64, 128, 256, or 512 KB (default is 128 KB)
• 3U enclosure with 15 3.5" SAS physical disk connectors
• Physical disk support for 10K and 15K 3 Gbps SAS physical disks
• Integrated power supply/fan module
• Activity LEDs for each slot
• Bi-color status LEDs for each slot
• Support for up to two cascaded MD1000 enclosures for a total of 45 physical disks
Operating System Support for iSCSI Host Servers

Generation 2 - Firmware Version 07.xx.xx.xx

• Red Hat® Linux 5.2 Enterprise and Advanced Server, 32-bit and 64-bit
• Red Hat Linux 5.3 Enterprise and Advanced Server, 32-bit and 64-bit
• Solaris 10.6

NOTE:
All of the operating systems below are supported with Generation 2 firmware, but the operating systems above are only supported by Generation 2.

Generation 1 - Firmware Version 06.xx.xx.xx

• Windows Server 2008 Web Edition, 32-bit and 64-bit version
• Windows Server 2008 Standard Edition, 32-bit and 64-bit version
• Windows Server 2008 Enterprise Edition, 32-bit and 64-bit version
• Windows Server 2008 Datacenter Edition, 64-bit version
• Windows Server 2008 Small Business Edition, 64-bit version
• Windows Server 2008 Core Edition, 32-bit and 64-bit version
• Windows 2003 32-bit Standard Server and Enterprise Edition (SP1)
• Windows 2003 32-bit Standard Server and Enterprise Edition R2
• Windows 2003 64-bit Edition EM64T (SP1) and R2
• Red Hat® Linux 3.0 Update 6 Enterprise and Advanced Server, 32-bit and 64-bit
• Red Hat® Linux 4 Update 6 Enterprise and Advanced Server, 32-bit and 64-bit
• Red Hat® Linux 5 Update 1 (5.1) Enterprise and Advanced Server, 32-bit and 64-bit
• SUSE® Linux Enterprise Server 9 Service Pack 4 (x86-64)
• SUSE® Linux Enterprise Server 10 Service Pack 2 (x86-64)
• VMware® ESX 3.5 and 3.5i

NOTE:
Windows Server 2008 support requires controller firmware rev 06.70.15.60 and NVSRAM rev 67.08.90 or later.

• Windows Vista
• Windows XP Professional SP2

NOTE:
These operating systems are supported as a management station only. They are not supported for iSCSI I/O attachment to the storage array.

Clustering Support

• Support for up to 16 node clusters with Windows 2008
• Support for up to 8 node clusters on Windows 2003 32-bit and R2
• Support for up to 8 node clusters on Windows 2003 EM64T and R2

MDSM also contains an optional event monitoring service that is used to send alerts when a critical problem with the storage array occurs, and a command line interface.
FW 07.35.38.60

This firmware adds the following features to the MD3000/i:

• Support for the coexistence of MD3000/i and MD32XX/i systems connected to the same host.
• Support and certification for ESX 4.1.
• Support and certification for Red Hat Enterprise Linux releases 4.9 and 5.5, and SUSE Linux Enterprise 11 SP1.

NOTE:
Instructions on updating second generation firmware can be found here.

NOTE:
The latest resource DVD for the MD32xxi has all the drivers necessary for coexistence for both SAS and iSCSI MDs. This version of the RDVD will be…

1. Insert and install the MD32xxi RDVD. The Host component is required for coexistence support and must be selected as a part of the installation procedure.
2. Open a command (shell) prompt.
3. Navigate to the linux/coexistence directory on the resource media.
4. Run the following command to install:
   ./md_coexistence_util.sh install
5. Follow the on-screen instructions and answer the prompts according to the configuration you intend to deploy on the current host.
6. Reboot if requested.
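For clarity, steps 3 and 4 look like the following shell session; the mount point for the resource media is illustrative and will vary by distribution:

   # /media/cdrom is an example mount point; use wherever the RDVD is mounted.
   cd /media/cdrom/linux/coexistence
   ./md_coexistence_util.sh install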
Firmware Upgrade

The release 2.1 firmware is displayed as version 07.35.31.60 within the storage array profile.

NOTE:
It is possible to downgrade to a previous firmware as long as it is still in the same generation. You cannot downgrade to a previous generation.

• Upgrading to the 07.35.31.60 firmware from a previous generation (i.e., 06.xx.xx.xx) is performed using the process documented HERE.

NOTE:
You can upgrade directly to 07.35.31.60 from a previous generation, but you cannot downgrade back to a previous generation.
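One way to confirm the running firmware version is to pull the storage array profile from the command line. This is a minimal sketch; the management IP address is the factory default for controller 0 and is only an example:

   # Query the array profile via the out-of-band management port (example IP).
   SMcli 192.168.128.101 -c "show storageArray profile;"

Look for the RAID controller firmware version (for release 2.1, 07.35.31.60) in the controller section of the output.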
Controller Replacement

Always replace the MD3000 RAID Controller hot, while the system is up and operational, to ensure that the firmware remains as established. Failure to do so could result in a firmware mismatch between the replacement and the native controller.
Configuration Enhancements

This page lists the new configuration enhancements that are part of release 2.1.

WARNING:
These configurations are only supported with second generation firmware on the MD3000 storage arrays.

A battery learn cycle is initiated:

• on the first upgrade from 1st generation firmware to 2nd generation firmware.
• every 13 weeks.

The next scheduled learn cycle can be seen in the MD Storage Manager. It can be postponed by up to 7 days at a time through the GUI. The following SMcli command can also be used…
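The SMcli example that followed is not reproduced in this copy; the sketch below shows the SANtricity-style syntax for scheduling the learn cycle. Treat the parameter names as assumptions and confirm them against the CLI guide for your firmware release:

   # Example: schedule the next learn cycle 7 days out at 02:00 (syntax assumed).
   SMcli 192.168.128.101 -c "set storageArray learnCycleDate daysToNextLearnCycle=7 time=02:00;"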
Virtual disk expansion can be achieved through the CLI. The command to expand a virtual disk is as follows: set virtualDisk [VDname] addCapacity=sizetobeadded;

During disk group and virtual disk expansion operations, there is complete access to the virtual disks, and disk group redundancy is maintained at all times. However…

Refer to the User's Guide for additional details on Disk Group and Virtual Disk expansion.
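As a concrete sketch of the expansion command above — the array IP, virtual disk name, and capacity are illustrative only:

   # Add 100 GB to a virtual disk named "Data1" (name and size are examples).
   SMcli 192.168.128.101 -c 'set virtualDisk ["Data1"] addCapacity=100 GB;'

The expansion runs in the background; the virtual disk stays online while it completes.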
Windows 2008 R2

Release 2.1 adds support for Windows 2008 R2 and is required when running it.

Options like "Failover only", "Round Robin" and "Least Blocks" may be visible but are not supported.

Load balancing with the MD3000/MD3000i is only supported by using the settings under the MPIO tab, not under the "Session Connections" tab.

NOTE:
The settings located within the MPIO tab are for each individual LUN.

Navigating to the MPIO tab

1. Open the iSCSI initiator properties from the Control Panel in Windows 2008.
2. Click the Targets tab.
3. Select the Target. Click the Details button.
4. From the Devices tab, select a Device. Click the Advanced button.
5. Select the MPIO tab. Click OK when finished making any changes.

To perform target discovery from the iSCSI initiator:

2. Click the Discovery tab.
3. Under Target Portals, click Discover Portal and enter the IP address or DNS name of the iSCSI port on the storage array.
4. If the iSCSI storage array uses a custom TCP port, change the Port number. The default is 3260.
5. Click Advanced and set the following values on the General tab:
6. Click OK to exit the Advanced menu, and OK again to exit the Add Target Portals screen. To exit the Discovery tab, click OK.

If you plan to configure CHAP authentication, do not perform discovery on more than one iSCSI port at this point. Stop here and go to the next step, Step 4: Configure CHAP…

If you do not plan to configure CHAP authentication, repeat step 1 through step 6 (above) for all iSCSI ports on the storage array.
To configure CHAP authentication on the initiator:

2. If you are NOT using mutual CHAP authentication, skip to step 4 below.
3. If you are using mutual CHAP authentication:
4. Click the Discovery tab.
5. Under Target Portals, select the IP address of the iSCSI port on the storage array and click Remove. The iSCSI port you configured on the storage array d…
6. Under Target Portals, click Discover Portal and re-enter the IP address or DNS name of the iSCSI port on the storage array (removed above).
7. Click Advanced and set the following values on the General tab:

NOTE:
IPSec is not supported.

8. Click OK. If discovery session failover is desired, repeat step 5 and step 6 (in this step) for all iSCSI ports on the storage array. Otherwise, single-host por…

NOTE:
If the connection fails, make sure that all IP addresses are entered correctly. Mistyped IP addresses are a common cause of connection problems.
To connect to the target storage array:

2. Click the Targets tab. If previous target discovery was successful, the IQN of the storage array should be displayed under Targets.
3. Click Connect.
4. Select Automatically restore this connection when the system boots.
5. Select Enable multi-path.
6. Click Advanced and configure the following settings under the General tab:

NOTE:
IPSec is not supported.

7. To support storage array controller failover, the host server must be connected to at least one iSCSI port on each controller. Repeat step 3 through step 8 f…

NOTE:
To enable the higher throughput of multipathing I/O, the host server must connect to both iSCSI ports on each controller, ideally from separate host-side ports, balancing the sessions between the controllers.

The Status field on the Targets tab should now display as Connected.

8. Click OK to close the Microsoft iSCSI initiator.
SLES 11

Release 2.1 adds support for SLES 11.

• Linux Installation
• Installing the iSCSI Initiator on a SLES 11 System
• Supported File Systems in SLES 11

Linux Installation

During Linux OS installation, the installer will list all volumes that are discovered from the MD3000/MD3000i, including virtual disks that are mapped to the host, and the correct volume must be selected. The Universal Xport disk(s) cannot be selected for installation; doing so will result in non-responsiveness from the array.

NOTE:
iSCSI boot from a LUN through a target IPv6 address needs a workaround from Novell. For details, refer to document 7004216 on the Novell website.
• explain the new features of the second generation PowerVault MD3000i when compared to the first generation.
• explain what has changed with the second generation PowerVault MD3000i.
• explain how to troubleshoot issues with the second generation of the PowerVault MD3000i.

NOTE:
To see a complete listing of the objectives for the PowerVault MD3000i training documentation, please refer to the course introduction page of this document.

NOTE:
Any existing first generation controllers in stock will not be upgraded.

• The size of the LUN is limited by the number of disks in the RAID array.
• Up to 45 disks can now be used in a RAID 0 and 44 disks in a RAID 10. RAID 5 and RAID 6 are limited to a maximum of 30 disks. The table below better illustrates these limits.
RAID 6 Support

With the second generation RAID controller firmware, Dell Engineering has enabled support for RAID 6 with the PowerVault MD3000i.

What is RAID 6?

RAID 6 (Striping With Dual Distributed Parity) provides data redundancy by using data striping in combination with distributed parity information. Similar to RAID 5, RAID 6 stripes data across the physical disks, but it maintains two disk blocks with parity information. The additional parity provides data protection in the event of two disk failures.

NOTE:
Although the industry standard permits creation of RAID 6 with four drives, the MD3000/MD3000i requires five. Requiring a fifth drive raises the ratio of data drives to redundant drives, and thus increases the efficiency: with four drives, two of the four hold parity (50% usable capacity), while with five drives only two of five do (60% usable capacity).

• Virtual Disks with RAID 6 can be created, migrated, and adjusted the same way as other RAID types.

NOTE:
Disk groups cannot be migrated to PowerVault MD3000i arrays with first generation RAID controllers.

• A RAID 6 virtual disk is limited to a maximum of 30 physical disks.

The figure below illustrates RAID 6. In this figure the second set of parity drives is denoted by Q. The P drives follow the RAID 5 parity scheme. Because the parity blocks need to be generated for each write operation, there is a performance hit during writes. Due to dual data protection, a RAID 6 VD can survive the loss of two drives.

This image illustrates RAID 6, with five drives using a data stripe and two parity stripes. Since RAID 6 has two parity stripes, up to two drives can fail without data loss.
Creating a RAID 6 Virtual Disk

1. From the Summary tab of MDSM, click the Configure tab.
2. On the Configure tab of MDSM, click the Create Disk Groups and Virtual Disks option.
3. An error message may be displayed stating that "No Hosts Have Been Configured." You can configure hosts later. Click OK to continue.
4. You will now create a Disk Group for the RAID 6 array. Select the option Disk Group: Create a new disk group using the unconfigured capacity in the storage array.
5. Now you need to specify the name of the new Disk Group. Be specific, so that you will remember the name of the group in the future. You may use up to…
6. Choose the option "Automatic: Choose from a list of automatically generated physical disks and capacity options" and then click Next to continue.
7. On this screen you will select the RAID type and overall capacity for the newly specified RAID group. RAID 5 is the default option; use the pull-down menu to select RAID 6.
8. Once you have selected RAID 6, notice that the options for capacity have changed from a minimum of 3 drives to a minimum of 5 drives. Choose the 5 drive option, or any other option, to continue.
9. Now that you have selected the RAID type and the capacity (number of disks) for the RAID group, click Finish. This will create the RAID group.
10. MDSM will now report that the RAID group has been successfully created. Now you need to create a virtual disk or disks within your newly created RAID group. Select the option for…
11. On this next screen you will set the new virtual disk's capacity, name, and I/O characteristics. Use the defaults here unless you want a specific configuration for the virtual disk, then click Next to continue.
12. On this screen you will map the virtual disk to a host. You have two options: specify the host now (this can be iSCSI hosts with the PowerVault MD3000i and SAS hosts with the PowerVault MD3000), or map the host later…
13. MDSM will now state that the virtual disk has been successfully completed and will ask if you want to create another virtual disk. For our purposes we will select No.
14. MDSM will now return to the default screen of the Configure tab; click the Summary tab to return to it.
15. On the Summary tab of MDSM you will notice that there is currently one Operation in Progress under the status section. The process currently in progress is the background initialization of the new virtual disk, shown in the Disk Groups and Virtual Disks section of the Summary tab. Click the Disk Groups and Virtual Disks link.
16. On the Blink Disk Groups window, you will see the total unconfigured disk capacity (if physical disks are unassigned and available), and the disk groups curr…
17. After expanding the disk group you will notice that the virtual disk is still being created; this is because the background initialization is still taking place.
18. Once the background initialization is complete, the virtual disk will return to the optimal state. From here the RAID 6 virtual disk is ready for use; if you have not done so already, you should map hosts…
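The same disk group and virtual disk can also be created from the command line. The following is a minimal sketch only — the enclosure,slot IDs, label, and capacity are made-up examples, and the exact parameter set should be verified against the CLI guide covered later in this course:

   # Create a RAID 6 virtual disk from five physical disks (IDs and names are examples).
   SMcli 192.168.128.101 -c 'create virtualDisk physicalDisks=(0,0 0,1 0,2 0,3 0,4) raidLevel=6 userLabel="R6_VD1" capacity=500 GB;'

As in the GUI procedure, RAID 6 on the MD3000/MD3000i requires at least five physical disks and second generation firmware.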
IPv6 is now supported on Linux. However, this is only supported on open-iSCSI initiators (SLES 10 and RHEL 5). In order to prevent login failures, IPv6 should…
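For reference, open-iSCSI discovery and login against the array's iSCSI ports look like the following. The IPv4 address is the array's default for iSCSI port 0,0; the IPv6 literal is a placeholder and must be bracketed:

   # Discover targets on an iSCSI port, then log in to all discovered nodes.
   iscsiadm -m discovery -t sendtargets -p 192.168.130.101:3260
   iscsiadm -m discovery -t sendtargets -p [2001:db8::101]:3260   # example IPv6 literal
   iscsiadm -m node --login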
NOTE:
With the Smart Battery technology, the age of the battery is no longer used to determine battery replacement. The battery uses a "learn cycle" to recalibrate…

A learn cycle is initiated:

• the first time the controller is powered up, because a learn cycle has not run earlier.
• when a replacement battery module is installed.
• every 91 days (13 weeks), when the controllers initiate a learn cycle.

Actions

A battery learn cycle consists of the following actions: …

Events

The following events occur during a learn cycle:

1. Approximately one hour prior to the next scheduled battery learn cycle start, a MEL (Major Event Log) message is generated.
2. An hour later the learn cycle starts and cache is disabled; both of these events are logged in the MEL.
3. When the 24 hour threshold is crossed, a low battery capacity message is logged in the MEL.
4. After a period of approximately 2 hours, the battery will hold enough charge to cross the 24 hour threshold again, and a "battery capacity is sufficient" message is logged.
5. With the learn cycle for the battery completed, a MEL event signals the end of the learn cycle.
6. Finally, the controller cache is re-enabled once the learn cycle is complete.

NOTE:
During a learn cycle, the system cache is turned off. Thus, if I/O is taking place on the PowerVault MD3000i, the disabled cache can impact performance.

Previous versions of controller firmware relied solely on battery age and the ability of the battery to charge to guarantee the 72 hour cache offload requirement.

Because the battery needs to be discharged, it will eventually cross the threshold where cache offload can only be guaranteed for a period of 24 hours. After approximately two hours of charging, the battery will hold enough charge to cross the 24 hour threshold again, registering the following event in the MEL: 0x730E Battery capacity is sufficient.

From the Change Battery Settings window, you can set the approximate day and time of the week that the next battery learn cycle starts.

NOTE:
This time is not exact, but correct to within 1 hour.
Changes Overview

What has Changed With the Second Generation Firmware?

With the PowerVault MD3000i second generation RAID controller firmware release, Dell Engineering has made the following changes:

Customer Documentation

Customer documentation for first and second generation PowerVault MD3000i releases will be marginally different. Both generations' customer documentation…

First Generation

The first generation multi-path driver uses a single round robin load balancing policy, which:

• selects the path based on the round robin policy and returns to the caller for sending I/O to the LUN.
• treats all paths equally.
• is not optimal for mixed host support, since hosts may have different bandwidth or transfer speeds.

Second Generation

The second generation firmware release adds new load-balancing policies, specifically:

Features of the New Load Balancing Policies With Second Generation Firmware

Least Queue Depth

• selects a path based on the least number of outstanding commands on the initiator port.
• treats large block requests and small block requests equally.
• turns into round robin if the number of outstanding commands is the same on each initiator port.
• can be used in mixed host port configurations.

Operating System    | Policy Applied Per Device or Per Host                                     | Policies Supported for MD3000i
Windows Server 2008 | Per device; allowed for both MD3000 and MD3000i                           | Round Robin, Least Queue Depth, Least Path Weight
Windows Server 2003 | Per device; allowed for iSCSI initiator associated devices only (MD3000i) | Round Robin, Least Queue Depth, Least Path Weight
Linux RHEL 4        | Policy applied per host only (all devices for that host)                  | Round Robin, Least Queue Depth
Linux RHEL 5        | Policy applied per host only (all devices for that host)                  | Round Robin, Least Queue Depth
Linux SLES 9        | Policy applied per host only (all devices for that host)                  | Round Robin, Least Queue Depth
Linux SLES 10       | Policy applied per host only (all devices for that host)                  | Round Robin, Least Queue Depth

You can save the settings for persistence by using the SaveSettings parameter as follows: …

This change is lost after reboot unless the setting is saved and the RAM disk is rebuilt using mkinitrd.
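On Linux hosts using the RDAC/MPP multi-path driver, persisting the policy typically means editing the driver configuration and regenerating the RAM disk. This is a sketch under assumptions — the parameter name in /etc/mpp.conf and the use of mppUpdate should be verified against the RDAC README for your driver version:

   # Edit the MPP driver configuration (parameter name assumed; check the RDAC README).
   vi /etc/mpp.conf
   # Rebuild the RAM disk so the setting survives a reboot; mppUpdate wraps mkinitrd.
   mppUpdate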
Using the iSCSI Initiator Menu Options to Set Load Balancing Policy

2. The iSCSI Initiator Properties window appears; select the Targets tab.
3. Select the Target from the list, and click the Details button.
4. The Target Properties window is displayed, with the Sessions tab shown. Select the Devices tab.
5. Select the Dell MD3000i SCSI Disk Device, and click the Advanced button.
6. The Device Details window is now revealed with the General tab displayed; select the MPIO tab.
7. On the MPIO tab you will be able to set the Load Balance Policy using the pull-down menu.

NOTE:
Remember that Least Queue Depth is set as the default option in Windows.
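On Windows Server 2008 R2 with the MPIO feature installed, the same per-disk policy can also be set from the command line with mpclaim. A brief sketch — the disk number is an example, and 4 is mpclaim's code for Least Queue Depth:

   # List MPIO disks, then set disk 1's load balance policy to Least Queue Depth (4).
   mpclaim -s -d
   mpclaim -l -d 1 4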
Using the Device Manager Menu Options to Set Load Balancing Policy

3. Right-click the Dell MD3000i Multi-Path Disk Device, and select Properties.
4. The Dell MD3000i Multi-Path Disk Device Properties window is displayed. Select the MPIO tab.
5. On the MPIO tab you will be able to set the Load Balance Policy using the pull-down menu.

Using the Disk Management Menu Options to Set Load Balancing Policy

1. From the Disk Management snap-in of Server Manager, select the MD3000i virtual disk volume.
2. Right-click the MD3000/MD3000i virtual disk volume, and select Properties from the menu.
3. The General tab of the Dell MD3000i Multi-Path Disk Device Properties window is displayed. Select the MPIO tab.
4. On the MPIO tab you will be able to set the Load Balance Policy using the pull-down menu.

NOTE:
It is important to keep in mind the following when using Load Balancing with Multi-Path Drivers in Windows: …
• Enabling 16 to 32 Storage Partitions - This increases the number of partitions on an MD3000i from 16 to 32.
• 8 Snapshots
• 8 Virtual Disk Copies

This page provides more information about these Premium Feature Keys.

• The key can be used when running the iSCSI initiator in the guest OS to enable up to 32 virtual machines, but running the iSCSI initiator in the guest OS consumes a storage partition per virtual machine; for example, with four guests each running its own iSCSI initiator, 4 of the 32 storage partitions will be consumed.
• A better option may be to run the iSCSI initiator at the hypervisor layer. There is no need for additional partitions, because running the iSCSI initiator at the hypervisor layer lets all virtual machines use the same single initiator. However, one problem with this type of solution is that hypervisors at this time do not officially support Multi-Path with iSCSI connections.
• This key does not allow for support of greater than 16 physical servers per MD3000i.

8 Snapshots (PFK)

Up to 8 Snapshots are now supported through a Premium Feature Key (PFK).

This image illustrates the View/Enable Premium Features screen of MDSM, where you can enable Premium Features.
Second generation firmware can be identified by a version number of this form: 07.xx.xx.xx.

You can determine the customer's MDSM version by invoking the About Modular Disk Storage Manager link on the Support tab.

• The MD Firmware Upgrade Utility is only used to upgrade from 1st generation firmware to 2nd generation firmware; once the MD3000/MD3000i is running 2nd generation firmware, it is no longer needed.
• This upgrade must be performed as an offline task without any I/O.
• All volume configuration, premium features, and host mappings will be preserved through the update process.
• This is a one-way upgrade. It is not possible to return to 1st generation firmware after starting this upgrade procedure without losing all data and configuration.
• The MD Firmware Upgrade Utility is installed along with the latest version of the MD Storage Manager. Both applications can be installed using the M…
• MD Storage Manager can be used to perform all other firmware updates, with the exclusion of the 1st generation to 2nd generation upgrade.
• The latest version of the MD Storage Manager can still be used to manage any MD3000/MD3000i running 1st generation firmware. However, the 2nd generation host software is different. In fact, a single host should NOT be connected to both a 1st gen and a 2nd gen array.

The utility helps ensure that any storage array you select for upgrade:

1. has a supported RAID controller module model and RAID controller module firmware version.
2. has no pre-existing condition that may prevent the upgrade from completing successfully.
3. has its configuration and event log saved prior to upgrade for later use, if required.
4. is offline for the minimum amount of time required.

Icon                         | Status
Not upgradeable              | The storage array cannot be upgraded for one or more reasons (see storage array conditions that prevent firmware upgrade).
Upgradeable: Optimal         | There were no problems detected and you can upgrade the storage array.
Upgradeable: Needs Attention | One or more problems were detected, but the storage array can still be upgraded.
Progress bar                 | Downloading RAID controller module firmware.
Firmware pending             | The storage array has pending firmware that is ready for activation.
Firmware activating          | The new RAID controller module firmware is activating (i.e., replacing the current firmware).

In some cases, the array may be non-optimal but an upgrade will still be possible. It is recommended, but not required, to resolve the non-optimal condition before upgrading.

The image below illustrates how the Create Disk Groups and Virtual Disks window has split out the tasks of creating a Disk Group and a Virtual Disk.

NOTE:
With this change in the procedure to create a virtual disk, you must create a disk group before you can create the virtual disk.

For more information about the procedure to create a virtual disk within a new disk group, please refer to the RAID 6 Support page of this training document.

The MD Firmware Upgrade Utility updates the metadata on the disks to the new format during the upgrade. Once this upgrade is started, the process cannot be reversed.
• The customer owns an older PowerVault MD3000/MD3000i using first generation firmware, and the customer needs to have the RAID controller replaced while retaining first generation firmware on the replacement controller.
• The customer owns a newer PowerVault MD3000/MD3000i using second generation firmware; are there any considerations that must be made when replacing a controller?

To address these two considerations and questions, you must be aware of a change that is occurring to service stock.

All service stock replacement controllers will be at the second generation firmware level and will need to be downgraded to first generation firmware b…

Customer Owns Older PowerVault MD3000/MD3000i, and Wants to Retain First Generation Firmware

The procedure to replace a failed controller in an MD3000i running first generation firmware in Simplex mode has changed. There are two methods to do this:

• The customer can download and install the latest 2nd generation version of the MD Storage Manager on a management station. Once they connect to the controller, they can download the firmware they need using the standard firmware download procedure in MD Storage Manager. At this time, the array may appear to be disconnected, but it is not.
• If the customer does not want to install the latest version of the MD Storage Manager, a separate standalone utility will be available on support.dell.com that…

NOTE:
The user will not lose any data or configuration information in the process of the downgrade.

Customer Owns Newer Second Generation PowerVault MD3000/MD3000i with Simplex RAID Controller

In the case of a simplex configured second generation system, replace the controller and check the Dell Support site for any firmware updates. If a newer firmware…

2. Replace the failed controller with the PowerVault powered on (do not power down the system). If the PowerVault is powered down, power it up and wait for POST…

NOTE:
Replacement controllers can be at first (version 06.xx.xx.xx) or second (version 07.xx.xx.xx) generation firmware levels.

3. To verify that Automatic Code Synchronization (ACS) was successful in flashing the new replacement controller to the native controller's firmware levels and configuration:
   1. Use MDSM to check the firmware version for both controllers in the storage array profile. If both controllers' firmware version is as expected, then ACS completed properly.
   2. If the firmware versions are different or not as expected, ACS did not complete properly. In this case, the system will need to be upgraded or downgraded so that both controllers match… the second generation firmware after the firmware update is complete.

1. If both controllers in a system have failed, the customer will have to have an on-site Dell Service Provider perform the replacement.
2. The replacement of both controllers is not officially supported. Therefore, when you encounter this issue with a customer, you need to escalate the issue to…
Accessing the Serial Shell

1. Start a terminal emulation program such as PuTTY, Tera Term, Minicom, or HyperTerminal using these terminal settings: 115200-8-N-1.

NOTE:
Windows Server 2008 does not ship with hypertrm.exe and hypertrm.dll. Customers wanting to use HyperTerminal in Server 2008 must copy these two files from an earlier version of Windows.

2. Send a break from the terminal shell. This is accomplished by <Ctrl><PAUSE/BREAK> within HyperTerminal or PuTTY; use <Ctrl><A><F> from within Minicom.
3. When prompted for input within 5 seconds, press <S> for the service interface menu.

NOTE:
Only a capital "S" works here; a lowercase "s" will not be recognized for this command.

4. The user will then be prompted for a password. The password is: supportDell
5. The menu options available are:

• 1) Display IP Configuration
• 2) Change IP Configuration
• 3) Reset Storage Array (SYMbol) Password
• Q) Quit Menu
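On a Linux management station, connecting at these settings might look like the following; the serial device node is an example and depends on the adapter in use:

   # Connect to the controller's serial port at 115200-8-N-1 (device node is an example).
   minicom -D /dev/ttyS0 -b 115200
   # Then send a break with <Ctrl><A><F> and press S within 5 seconds, as described above.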
Display IP Configuration

Display IP Configuration displays the current IP configuration of the Ethernet maintenance port for the RAID controller currently attached to the Serial Shell.

Change IP Configuration

Change IP Configuration will ask the user a series of questions about the IP configuration; the responses set the IP configuration schema for the maintenance port.

This image illustrates the series of questions asked during the serial shell Change IP Configuration function.

Other good occasions to use the serial shell for troubleshooting mainly involve significant events that occur with a RAID controller:

• In a duplex configuration, when one of the controllers is being replaced, you should attach the serial cable to the good controller. In doing this, you can watch the progress when the replacement controller is inserted. ACS is the ability of a native duplex controller to flash or synchronize its firmware with a new replacement controller.
• When performing a firmware update during normal conditions, it may be helpful to view the serial shell information as the update progresses.

NOTE:
Even though the logs of the updates above can be pulled from the support bundle, the advantage of the serial shell in this case is realtime diagnosis…
Chassis 360

Click here to open

Chassis Teardown

Click here to open

The iSCSI input connectors (iSCSI In) are adjacent to each other and are labeled 0 and 1. These ports are further identified by the RAID controller in which they reside. Each RAID controller module connects to the enclosure midplane via the two midplane connectors on its internal (rear) panel.

The default IP addresses of the Ethernet connectors on the RAID Controllers are as follows:

• Controller 0
  ◦ Management Port - 192.168.128.101
  ◦ iSCSI Port 0,0 - 192.168.130.101
  ◦ iSCSI Port 0,1 - 192.168.131.101
• Controller 1
  ◦ Management Port - 192.168.128.102
  ◦ iSCSI Port 1,0 - 192.168.130.102
  ◦ iSCSI Port 1,1 - 192.168.131.102

Once the default IP addresses have been set manually, they will remain at the new manually determined settings. If DHCP is used to provide IP addresses to the…
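With the defaults above, an out-of-band management station can reach the array on the two management ports. As a sketch (the SMcli add/show form is standard; adjust the IPs if they have been changed):

   # Add the array to the management domain using both controllers' management IPs.
   SMcli -A 192.168.128.101 192.168.128.102
   # Then, for example, pull the array profile through controller 0.
   SMcli 192.168.128.101 -c "show storageArray profile;"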
The following image shows the back view of the dual port RAID Controller, the connections, and the LEDs.

This is a rear view of the RAID Controller for the MD3000i, showing its connectors and LED indicators, and their purpose.

This table lists the engineering descriptions for the RAID Controller ports and indicators as shown in the previous image.

The following image shows the placement of the battery backup module within the RAID controller module.

This image shows the placement of the battery backup module in the RAID controller module.

At RTS there is no battery learn/test cycle active on the controller. The battery is maintained with a float charge, which means the battery always appears fully charged… up on that error.

Enclosure Modes

The MD3000i operates with both controllers having access to their respective physical disk ports on all the physical disks in the enclosure at all times through the… status.
Control Panel

The MD3000i control panel provides LEDs to assist with troubleshooting; they are accessible through the front bezel. The control panel contains the Operational Mode select switch, although on the MD3000i the Operational Mode select switch has no effect.

NOTE:
MD1000s attached to the MD3000i must be set to Unified Mode.

WARNING:
The split mode LED and enclosure mode switch are not functional. However, if additional expansion enclosures are daisy-chained to your system, the enclosure mode switch on those MD1000s must be set to Unified Mode; a wrong setting will cause the unit to not detect all drives and cause sporadic issues with dropping disks.

The following image shows the features of the MD3000i Control Panel module.

Image showing the Unified/Split Mode switch and the LEDs on the front of the MD3000i Control Panel module.

MD3000i Control Panel LEDs

LED Type         | LED Color      | Description
Split Mode       | Green          | No indication in the MD3000i RAID Enclosure
Power            | Green          | When lit, at least one power supply is supplying power
Enclosure status | Steady amber   | The power is on, and the enclosure is in reset state
                 | Steady blue    | The power is on, and the enclosure status is OK
                 | Flashing blue  | The host server is identifying the enclosure
                 | Flashing amber | The enclosure is in fault state

Bezel LEDs

The MD3000i has an optional lockable front bezel. The following image shows the features of the front bezel.

MD3000i Control Panel LEDs

LED Type         | LED Color      | Description
Split Mode       | Green          | Unused in the MD3000i RAID enclosure
Power            | Green          | When lit, at least one power supply is supplying power
Enclosure status | Steady amber   | The power is on, and the enclosure is in reset state
                 | Steady blue    | The power is on, and the enclosure status is OK
                 | Flashing blue  | The host server is identifying the enclosure
                 | Flashing amber | The enclosure is in fault state
Physical Disks

The MD3000i enclosure supports up to fifteen 3 Gb/s serial-attached SCSI (SAS) physical disks. Each physical disk is contained in its individual physical disk carrier… the enclosure. The factory configuration for the MD3000i requires a minimum of two physical disks for proper operation.

Image showing the sequential numbering used for each physical disk slot in the MD3000i enclosure.

SMART Support

Self-Monitoring Analysis and Reporting Technology (SMART) allows physical disks to report errors and failures. SMART monitors the internal performance of the physical disk components. If a failure is predicted on a physical disk, you can replace or repair the physical disk without losing data. SMART-compliant physical disks have attributes for which data (values) can…

Many mechanical failures and some electrical failures display some degradation in performance before failure. Numerous factors relate to predictable physical disk failures, such as seek error rate and excessive bad sectors.

Firmware on the RAID controller module uses SMART logic to evaluate the SAS physical disks in your array to predict or detect disk failure. The status LED on the physical disk indicates a predicted failure.

The following image shows the power supply/cooling fan module LEDs.

Power Supply/Cooling Fan Module LEDs

LED Type | LED Color | Description
DC power | Green     | On: DC output voltages are within specification. Off: No power or voltages not within specification.
AC power | Green     | On: AC input voltage is within specification. Off: No power or voltages not within specification.

The following image shows the power supply/cooling fan modules installed in the MD3000i enclosure.

Image showing the hot-swap power/cooling modules installed in the MD3000i enclosure.
Racking

This procedure outlines the steps for racking the MD3000i.

1. Align the rail assembly to the appropriate mounting position in the rack.
2. Line up the slide assembly mounting hooks with the square holes on the vertical rail.
3. Push down on the rail until the locking buttons pop out and click.
4. Repeat the procedure for the back side of the rail assembly.
5. Install the second rail assembly.
6. Place the back of the MD3000i on the rails, and slide the enclosure back until it locks into the rails.
7. Slide the enclosure the rest of the way into the rack.
8. Tighten the two thumbscrews to secure the MD3000i to the rack.
9. Place the front bezel on the MD3000i.
An Ethernet cable connects either directly to one of the iSCSI In connectors of one of the RAID Controllers or is connected through an Ethernet switch. Additionally, the SAS Out port allows enclosures to be cascaded. The following images and descriptions demonstrate supported cabling methods for various configurations of both redundant and non-redundant arrays.

NOTE:
Cabling methods are not critical for specific RAID Controller Module iSCSI Ethernet inputs; i.e., a host may be connected to either of the iSCSI Ethernet inputs. However, if the host will only be connecting to a single controller, the preferred path for the volumes assigned to the host must be the same controller to which it is connected.

Best Practice

As a best practice, the user should isolate the iSCSI network from the public LAN. If that is not possible, then increased security should be used, such as CHAP…
One Direct Attached Server - With One Gigabit Ethernet (GbE) Data Path to a Simplex Controller

The following image shows a single server with one GbE card cabled to one controller on a simplex MD3000i in a single path configuration. Loss of the single path means loss of access to the storage.

This is an image showing one server connected to the MD3000i with a single, non-redundant data path, and a management path.

Two Direct Attached Servers - Each With One GbE Data Path to a Simplex Controller

The following image shows two servers, each with one GbE card, cabled to one simplex controller. There are no secondary data inputs to the controllers from either server… clustering.

This image shows two servers, each with one GbE data path, connected to a simplex controller, with no redundant access to virtual disks.

Four Direct Attached Servers - Each with One GbE Data Path to Duplex Controllers

The following image shows four servers, each with one HBA, attached to the four inputs available on the two dual port controllers. In this instance, no server has a redundant pathway.

This image shows four servers, each with one HBA, attached to the four inputs available when using the dual port controllers. There is no redundancy of pathways for any server.

This image shows two direct attached servers with redundant connections to a duplex controller array.

The following network attached cabling configurations are supported with the MD3000i.

This image shows one to 16 hosts with non-redundant gigabit ethernet connections to a switch, which has redundant connections to a simplex controller array.

This is an image of a configuration with as many as sixteen hosts, each with dual data paths on separate subnets, to a simplex controller array.

This is an image of a configuration with as many as sixteen blade hosts on dual switches, with each host having dual data ports on separate subnets. This configuration is supported for clustering.

This is an image showing up to sixteen hosts connected to duplex controllers through redundant gigabit ethernet networks.
WARNING:
This is a data-destructive operation for any data on the MD1000s that was created with a PERC5/E. The metadata format (DACstore) of configured physic
MD1000 enclosure from a PERC5/E to an MD3000i. The data must be backed up, confirmed, and restored to newly formed disk groups and virtual disks o
must be cleared before attaching to the MD3000i. This can be done by clearing the configuration, or deleting all logical drives and hot spares.
1. If the MD1000 expansion enclosures are being migrated from use with a PERC5/E RAID Controller, the preferred method of update is to flash the EMMs
updated.
2. If a PERC5/E is not available for the update process, then the next most desirable course of action is to perform a field replacement of the EMMs.
3. If neither of the first two options are available, the firmware may be flashed using MDSM to download the firmware to the MD1000 EMMs.
1. Power down the MD3000i and cable only one MD1000 to the output of the MD3000i.
2. Each EMM will be individually flashed. Only insert one EMM at a time into the added MD1000 enclosure.
3. Add the MD1000 to the MD3000i, cabling from the output of the MD3000i to the one installed EMM on the MD1000. Power up the MD1000 encl
4. Power up the MD3000i and wait until the blue LED is lit indicating the enclosure is fully initialized.
5. In MDSM choose Download Firmware on the Support tab. Click Download Environmental (EMM) Card Firmware and choose the attached M
6. After the flash has completed, power down the MD3000i, followed by the MD1000.
7. Repeat this process until each EMM has been individually flashed.
8. After all EMMs have been flashed, power down all enclosures and cable the MD3000i and MD1000(s) as shown in the graphic below.
9. Power up the MD1000 expansion enclosures first and wait until the units are fully initialized.
10. Power up the MD3000i and, when the unit is fully initialized, use MDSM to manage the new enclosures and physical disks.
WARNING:
Do not make any configuration changes to the storage array while you are downloading the EMM firmware. Doing so could cause the firmware dow
1. Turn on the MD1000 expansion enclosure(s). Wait for the expansion status LED to light blue on each enclosure.
2. Turn on the MD3000i and wait for the status LED to indicate that the unit is ready:
◦ If the status LEDs light a solid amber, the MD3000i is still coming online.
◦ If the status LEDs are blinking amber, there is an error that can be viewed using MDSM.
◦ If the status LEDs light a solid blue, the MD3000i is ready.
3. After the MD3000i is online and ready, turn on any attached host systems.
The power-down sequence should done in the reverse order, with emphasis on a clean shutdown of the hosts and at least a ten second interval thereafter to insur
Cabling Method for Cascading the Simplex MD3000i to MD1000 Expansion Enclosures
This is an image showing the maximum cascading configuration available with the MD3000i in simplex configuration.
Cabling Method for Cascading the Duplex MD3000i to MD1000 Expansion Enclosures
This is an image showing the maximum cascading configuration available with the MD3000i in duplex configuration.
Support Matrix
The link below contains the PowerVault MD3XXX Support Matrix, including information on supported software and hardware.
NOTE:
For single (simplex) RAID controller module replacements, refer to the Single Controller Replacement (DSP and CRU) procedure in the SATA Support Matrix.
This includes vital information on firmware upgrade requirements before starting a single RAID controller module replacement.
Removing the RAID Controller Module
1.
Gently lift up the orange handle spring lock on the controller to release the locking lever.
This image shows the action of lifting up the orange handled spring lock to release the locking
lever.
2.
Rotate the cam lever lock away from the unit.
This image shows the direction of rotation of the locking lever that releases the RAID Controller
module.
3.
Using the lever lock, pull the RAID Controller module out of the chassis.
This image shows the RAID Controller Module being pulled from the bay.
4.
Reverse the previous steps to replace the RAID Controller module.
This procedure outlines the steps for removing and replacing the RAID Controller module battery assembly.
Removing the Battery Assembly
1.
Using a Phillips screwdriver, remove the back screw retaining the steel cover over the battery compartment.
This image shows the removal of the rear retaining screw from the battery compartment.
2.
Remove the side screw retaining the steel cover over the battery compartment.
This image shows the removal of the side retaining screw from the battery compartment.
3.
Lift the battery cover up and away from the controller.
This is an image showing the battery cover being removed.
4.
Loosen the blue captive screw.
5.
Slide the battery assembly out and away from the controller.
6.
Reverse the previous steps to replace the battery assembly.
Bezel
This procedure outlines the steps for removing and replacing the bezel.
1.
Push inward on the release tab on the left side of the bezel.
This image shows the release tab on the left side of the bezel being pushed inward.
2.
Pull the bezel away from the enclosure front.
Physical Disks
This procedure outlines the steps for removing and replacing the physical disks.
WARNING:
If a physical disk is accidentally removed while it is online/active, wait at least 30 seconds before reinserting the disk. Failure to wait at least 30 seconds may cause the disk to be marked as
failed. If a physical disk has not failed, and there is no I/O to the array, it must be failed using SMcli at the command line before removal.
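A sketch of that SMcli call, assuming the controller management ports are at their default addresses and the disk is in enclosure 0, slot 4 (all three values are illustrative; consult the CLI guide for your configuration):
SMcli 192.168.128.101 192.168.128.102 -c "set physicalDisk [0,4] operationalState=failed;"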
1.
Before removing the physical disks, remove the front bezel.
2.
Squeeze the release mechanism on the front of the physical disk carrier, and rotate it downward to open the carrier handle.
This image shows the release mechanism on the front of the physical disk carrier being squeezed, and the carrier handle being rotated downward to open it.
3.
Carefully pull the physical disk carrier from its slot.
4.
Repeat steps 2 and 3 to remove the remaining physical disks.
2.
Push the tab on the bottom of the physical disk blank, and then pull the blank from the slot.
This image shows the tab on the bottom of the physical disk blank being pushed, and the blank being pulled from its slot.
3.
Remove the remaining physical disk blanks.
Control Panel
This procedure outlines the steps for removing and replacing the control panel.
1.
Before removing the control panel:
2.
Using a Torx T10 driver, remove all 16 screws from the front faceplate of the system.
This image shows the 16 screws being removed from the front faceplate with a Torx T10 driver.
3.
Remove the front faceplate from the system.
4.
Slide the control panel assembly straight out from its connector on the backplane.
This image shows the control panel assembly being slid straight out from its connector on the backplane.
Enclosure Midplane
This procedure outlines the steps for removing and replacing the enclosure midplane.
1.
Before removing the enclosure midplane:
2.
Remove the four Phillips head screws securing the EMM/power supply cage.
This image shows the four Phillips head screws holding the EMM/power supply cage in the enclosure being removed.
3.
Slide the EMM/power supply cage out of the enclosure.
This image shows the EMM/power supply cage being slid out of the enclosure and placed aside.
4.
From the back of the enclosure chassis, carefully remove the midplane from the enclosure.
This image shows reaching into the enclosure chassis from the back, carefully disconnecting the enclosure midplane from the control panel, and pulling it out of the enclosure.
5. Reverse the previous steps to replace the enclosure midplane.
Power Supply/Cooling Fan Module
This procedure outlines the steps for removing and replacing the power supply/cooling fan modules.
1.
With a Phillips head screwdriver, loosen the two captive screws securing the power supply/cooling fan module in the bay.
2.
Grasp the handle on the power supply/cooling fan module, and pull the module away from the bay.
The image shows the module being removed from the bay.
3.
Repeat the previous steps to remove the second power supply/cooling fan module.
4. Reverse the previous steps to replace the power supply/cooling fan modules.
System Architecture
The MD3000i RAID Enclosure is leveraged from the MD1000 chassis, using all the MD1000 components except for the EMMs, which are replaced by the iSCSI RAID
Controllers.
The MD3000i RAID controller is capable of free-standing operation and will continue to operate in the event that the partner controller goes offline or is removed. Note in
the MD3000i Block Diagram above that each RAID controller module has an alternate SAS pathway from its SAS I/O Controller to the opposite controller's SAS expander.
In the event that a path to the peer controller has failed, the remaining controller will process the I/O to the physical disks owned by the peer over the same SAS connection
formerly used by the peer controller.
NOTE:
Clustering requires simultaneous access from cluster nodes to shared storage. If you have clustering software installed on the host, automatic failback should be
disabled to prevent "ping-pong" with single-path failure.
NOTE:
If setting up a cluster host, the MD3000i Stand Alone to Cluster.reg file entry must be merged into the registry of each node. If re-configuring a cluster node to a
standalone host, the MD3000i Cluster to Stand Alone.reg file must be merged into the host registry. These registry files, which set up the host for correct failback
operation, are in the windows\utility directory of the MD3000i Resource CD.
Host-based failover uses a multi-path driver installed on the host server to access the storage array. If required, the multi-path driver will issue an explicit command to transfer
ownership from the RAID controller module that owns the virtual disk to its peer RAID controller module.
AVT is used only in single-port cluster solutions. I/O access to the Logical Unit Number (LUN) of a virtual disk causes failover. In AVT mode, firmware transfers
ownership of a virtual disk to the online RAID controller module. The alternate or backup node in a cluster takes over and issues I/O to the peer controller and moves the
virtual disk to itself.
To do this, the failover driver (an application or kernel-resident driver) positions itself between the SCSI driver and the rest of the operating system. This lets it monitor all
I/O traffic so it can quickly reroute it to the redundant path in the event of controller or path failure.
WARNING:
Do not remove or modify the Access Virtual Disk. If the Access Virtual Disk is altered, the storage management station will lose management access to the storage
array until the Access Virtual Disk has been restored at the command line.
Mapping Functions
Function GUI CLI
Configure host access (for hosts discovered via host context agent) Yes Yes
Manually define host group/host Yes Yes
Add a host port to an existing host Yes (see note below) Yes
Assign a host to virtual disk mapping Yes Yes
Remove/change a host to virtual disk mapping Yes Yes
Change individual host type Yes Yes
Remove host group, host, or host port Yes Yes
Rename host group or host Yes Yes
NOTE:
Add a host port to an existing host:
When applicable, a partition will be created as part of the mapping screen of the individual virtual disk creation wizard or if a user maps a virtual disk to a host in the
"assign a host-virtual disk" mapping interface. A storage partition is used when the first virtual disk is mapped to a particular host.
Display snapshot virtual disk and snapshot repository properties Yes - only via profile Yes
Controller Functions
Function GUI CLI
Locate Storage Array Yes Yes
Change controller network configuration Yes Yes
Display controller properties Yes - only via profile Yes
Download controller firmware (no staged feature) Yes Yes
Download controller firmware (staged feature) No Yes
Download controller NVSRAM Yes Yes
Activate controller firmware (for "pending/staged" configuration) No Yes
Run controller diagnostics Yes - only via support bundle Yes
Reset controller No Yes
Place controller online/offline Yes Yes
Place controller in service mode No Yes
Other Functions
Function GUI CLI
Redistribute virtual disks back to original controller owners Yes Yes
Collect all support data (support bundle) Yes Yes
Download EMM Firmware Yes Yes
View persistent reservations Yes - only via support bundle Yes
Capture state information Yes - only via support bundle Yes
View unreadable sectors Yes - only via support bundle Yes
SAS-related diagnostics and platform support Yes - only via support bundle Yes
View event log Yes Yes
MDSM Installation
Complete the following steps to install Modular Disk Storage Management (MDSM) on a host running a supported Windows operating system.
Installation of MDSM
1.
You must have administrative privileges to install MDSM on a Windows host.
2. The setup program will check for the existence of the Microsoft iSCSI Initiator and the correct Storport hot-fix revision.
NOTE:
A minimum version of the Storport driver must be installed on the host server before installing the MD Storage Manager software. A hotfix with the
minimum supported version of the Storport driver is located in the \windows\Windows_2003_Server\hotfixes directory on the MD3000i Resource CD.
The MD Storage Manager installation will test for the minimum Storport version and will require you to install it before proceeding.
This is an image showing the installation screen advising the user to install the Storport hotfix if the server is to be used as an array host.
3.
The Microsoft iSCSI Initiator software should also be installed on a host before MDSM is installed. If the iSCSI initiator is not installed, at the end of the
installation the user will see a warning screen advising them to uninstall MDSM, install the iSCSI Initiator software, and then reinstall the MDSM host software.
This is an image of the warning popup window seen at the reboot phase of the installation if the user has chosen to install the host software without the iSCSI Initiator software installed.
4.
When all the prerequisites are met, the user will be able to proceed normally with the software installation.
This is an image of the installation proceeding after all prerequisites have been met.
5.
The user will receive a popup window that requests that changes be made to the registry before proceeding. Click OK.
This is an image of the popup window requesting the user to acknowledge with okay to install registry
changes for failover to work correctly.
6.
At the opening screen of the installation, choose the language from the list box and click OK.
This image shows that at the opening screen of installation the user is
leaving the default selection of English and is clicking okay.
7.
At the Introduction screen click Next.
This image shows that at the Introduction screen the user should click
Next.
8.
In the License Agreement screen choose to accept the license agreement and click Next.
This image shows that at the license screen accept the license agreement
and click Next.
9.
Leave the default path for installation of the software and click Next.
This image shows the user leaving the default installation path selected
and clicking next.
10.
Select Typical (Full Installation) and then click Next.
This image shows the screen where the user should select Typical
Installation.
11.
Select the Automatically Start Monitor radio button if you require the event monitor to be running on this host.
Otherwise, select the Do Not Automatically Start Monitor radio button.
NOTE:
You should configure only one system to automatically restart the event monitor. Running the event monitor on multiple systems can cause multiple alert
notifications to be sent for the same error condition. Enabling event monitoring on only one system avoids this issue.
This image shows the screen where the user selects automatically start
monitor.
12.
Click Install.
This image shows the screen where the user will click install.
13.
The user will see the following two screens during installation.
NOTE:
Note the lines in the second screen which show the installation and loading of the MPIO driver. This is done in every installation of the MDSM on a Windows
server. It is loaded regardless of whether the system has redundant pathways, and is not optional.
This is an image of a screen displayed during the installation.
14.
Click Done to restart the system.
• Controller 0
◦ Management Port - 192.168.128.101
◦ iSCSI Port 0 - 192.168.130.101
◦ iSCSI Port 1 - 192.168.131.101
• Controller 1
◦ Management Port - 192.168.128.102
◦ iSCSI Port 0 - 192.168.130.102
◦ iSCSI Port 1 - 192.168.131.102
NOTE:
By default, the MD3000i controller management ports will request a DHCP-assigned IP address. If these dynamically assigned addresses are not in the same
subnet as the management host, the automatic discovery process will not find the array. The discovery of arrays is only on the host's subnet through port 2463.
If a DHCP response is not acknowledged, the controller management ports will default to addresses shown above.
If the controller management ports have had IP addresses manually set, the units will no longer request DHCP addresses and will continue to operate at the
manually set addresses.
If you are trying to discover and manage an MD3000i array with unknown manually assigned IP addresses, follow the procedure towards the end of this page.
If the array management IP addresses have already been determined, you may select manual and insert the two controller IP addresses.
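If you prefer the command line, the same manual add can be sketched with SMcli, assuming here that the controllers are still at the default management addresses listed above:
SMcli -A 192.168.128.101 192.168.128.102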
2.
The discovery screen will show a progress bar while performing discovery of out-of-band storage arrays. In-band storage management is not available until an iSCSI
session is set up and operating.
3.
At the end of the discovery process the user will need to close the discovery window. Click Close to close the window.
4.
When the MDSM interface is focused on an MD3000i, it will add a new management tab in the window, labelled "iSCSI."
5.
The newly discovered array can now be named for quicker and easier identification.
Click OK to continue
3.
Select the host COM port to which the serial cable is attached.
Then click OK.
4.
Configure the port settings for the HyperTerminal session, as follows:
5.
In the HyperTerminal session, press Control-B. You will then see the following screen.
6.
Press Q and then Enter.
At the screen prompt, type netCfgShow and then press Enter.
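For reference, the command as it is entered at the controller shell prompt (the prompt style is the one shown later in this material):
-> netCfgShow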
7.
The management port settings for this controller will be shown. These include the IP Address settings for the port.
Now connect the serial port to the management port on the other MD3000i controller.
Repeat these steps to discover the IP Address settings for the remaining controller.
NOTE:
Installation of the Microsoft iSCSI software initiator is integrated into Windows Server 2008. There is therefore no need to install a separate iSCSI software
initiator when using Windows Server 2008 iSCSI attached hosts.
Complete the following steps to install the Microsoft iSCSI Initiator software.
2.
The option boxes for the Initiator Service and the Software Initiator should already be checked. Both of these are needed.
WARNING:
The option box for Microsoft MPIO Multipathing Support for iSCSI must not be checked. MPIO will be installed with the MDSM software. The Device
Specific Module (DSM) has a specific interface needed for proper path failover of the MD3000i by MPIO. Installation of MPIO from this step will cause the
failure of the path failover software.
This is an image showing the proper selection of option boxes for the installation.
3.
Click the radio button for I Agree to accept the software licensing agreement. Click Next.
4.
The installation of the initiator proceeds without user intervention.
5.
Click Finish.
This is an image of the last screen of the installation where the user will click finish.
This image shows a part of the iSCSI configuration worksheet, to be filled out before you begin.
2.
Set the iSCSI port IP addresses on the array.
From the iSCSI tab, click on Configure iSCSI Host Ports.
In this example, the server is directly connected to the array, so the iSCSI ports are being left at the default addresses.
iSCSI port 0 on Controller 0 defaults to 192.168.130.101. iSCSI port 0 on Controller 1 defaults to 192.168.130.102.
3.
The advanced settings are available for each port. Ethernet priority is based on IEEE standard 802.1p. The TCP listening port for iSCSI defaults to 3260. The default
listening port for an iSNS server is 3205. The Jumbo frames setting can be set for oversized Ethernet frames, up to 9000 bytes/frame (default is 1500). Ensure that
every component in the chain will properly support the size of jumbo frame chosen.
NOTE:
When jumbo frames are enabled, and the ICMP response is also enabled, the array can be pinged and appear reachable, while intervening incompatible
network equipment may drop the jumbo data frames.
4.
Enable ICMP Ping response for setup purposes. It can be turned off afterwards, if preferred. Changing the status of this option box will reset all the iSCSI ports on
the array. If data transfer is in progress, it will be interrupted until an iSCSI session has been restored.
5.
Open a command line session and use the Ping command to check connectivity between the host and iSCSI ports configured on the storage array.
This is an image of the ping command being used to test connection between the host and the array's iSCSI ports.
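As a minimal illustration, assuming the host is testing the default address of iSCSI port 0 on Controller 0:
ping 192.168.130.101
A reply indicates connectivity between the host NIC and the array iSCSI port; a timeout points to cabling, addressing, or (if enabled) jumbo frame mismatches in the path.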
6.
On the Programs menu launch Microsoft iSCSI Initiator. Click the Discovery tab. Under Target Portals click Add.
This is an image of the user on the Discovery tab, clicking Add under Target Portals.
7.
Enter the IP address or DNS name of the iSCSI port on the storage array. If the port number was changed from the default of 3260 in a previous step, set the new
port number here. Click Advanced to continue.
This is an image of the user entering the IP address of the iSCSI port on
the storage array, and, if necessary, also the port number if it has been
altered from the default of 3260. Click advanced to continue.
8.
In the Local Adapter dropdown box select Microsoft iSCSI Initiator.
9.
Enter the IP address of the host Ethernet NIC (source) that will be accessing this target port.
10.
Do not set the CHAP logon information.
If CHAP is to be used, the CHAP secret should be configured on the array before setting it here.
Setting up CHAP authentication is covered later in this section.
Click OK to continue.
This is an image of the user selecting okay without setting the CHAP
secret.
11.
Click OK to continue.
12.
A successful connection between the host iSCSI port and the storage array is now shown in the Target Portals section.
Additional connections from network adapters to target portals can now be added if required.
13.
Next, you will need to log onto the new connection.
In the Targets tab, click to select the target. Click Log On.
14.
Click the Automatically restore this connection when the system boots check box if required.
Click the Enable Multi-Path check box if you are setting up connections for multiple Ethernet host ports.
Click Advanced.
NOTE:
The correct iSCSI multi-path software is installed along with the necessary host agent software, when the MDSM software is installed on an iSCSI host.
Installation of the MDSM host files is required for every iSCSI host attached to an MD3000i storage array.
15.
In the Local adapter field, use the drop down menu to select Microsoft iSCSI Initiator
In the Source IP field, use the drop down menu to select the IP address of the network adaptor you wish to connect from.
In the Target Portal field, use the drop down menu to select the IP address of the iSCSI port on the storage array controller that you wish to connect to.
Click OK to continue.
16.
Click OK to continue.
17.
The information in the Targets tab will now show your iSCSI connection to the storage array.
18.
This screen shows all sessions are logged into the storage array.
In the example shown here, there are two sessions logged into the storage array.
Click OK to continue
19.
Repeat steps 6 to 18 for all other iSCSI connections and sessions required between your host and the MD3000i.
20.
On the Configure tab in MDSM, click Configure Host Access (Manual).
21.
Enter the hostname and the type of operating system. Click Next.
NOTE:
Although Windows 2000 Server is listed as an operating system type in the drop down list, the OS is not supported on the MD3000i.
This is an image showing the user entering the hostname and operating system type. Click
next.
22.
Select the hostname in the list of known iSCSI initiators. Click Add to move the host into the Selected iSCSI initiators list on the right.
NOTE:
In Windows, the iSCSI initiator name can be found on the General tab of the iSCSI Initiator Properties window.
23.
Click Next to continue.
24.
Determine whether this host will share access to the same virtual disks as another host, for clustering purposes. Click Next.
This is an image showing the user selecting whether this host will share disks with another
host. Clustering software must be present to arbitrate between the hosts owning the disk
resources. Click next.
25.
Click Finish.
3.
Select the level of CHAP authentication that will be used, by checking one or more option boxes. The levels of authentication are as follows:
• None only - no authentication is required for a host to connect to the storage array.
• Both None and CHAP - authentication is optional (both authenticated and non-authenticated hosts can connect to the storage array).
• CHAP only - authentication is required by all hosts.
4.
Enter the CHAP secret into both text boxes.
This is an image of the user entering the CHAP secret in both text boxes and then clicking okay.
5.
Click OK.
6.
Click OK to exit the Change Target Authentication window.
7.
If a CHAP secret was installed on the storage array, the secret must now also be installed on the hosts. If a Mutual CHAP secret was installed, the initiators' secret
must be installed on the array. For the mutual secret to be set on the initiator, on the General tab in the iSCSI Initiator interface, click Secret. If a mutual secret was
not set up, skip this step.
8.
In the CHAP Secret Setup window, enter the Mutual CHAP secret that was set on the array in MDSM. Click OK.
This is an image showing the user entering the CHAP secret and
clicking okay.
9.
Highlight the installed initiator-target portals that have been set up. Each one will now be reinstalled with the CHAP secret. Click Remove.
11.
Click OK to close the window. If discovery session failover is desired, repeat the previous step for all iSCSI ports on the storage array. Discovery session is the
timed re-validation of the CHAP authentication.
12.
On the iSCSI Initiator Properties window, click on the Targets tab. If the target discovery was successful, the iqn of the storage array should be listed under
Targets. The status will show Inactive. Click Log On.
This is an image of the user clicking on the targets tab and seeing the
iqn of the storage array with a status of inactive.
13.
Select Automatically restore this connection when the system reboots. Also, select Enable multi-path. Click Advanced.
14.
Local Adapter must be set to Microsoft iSCSI Initiator; Source IP is the source IP address of the network adaptor you want to connect from; Target Portal is the
iSCSI port on the storage array controller you want to connect to; Data Digest and Header Digest are only for troubleshooting purposes; if CHAP is set, select
CHAP Logon Information and enter the CHAP secret; select Perform mutual authentication if mutual authentication is set on the array. Click OK.
15.
Click OK to close this window.
16.
Repeat steps 26 to 31 for any remaining network adaptors to be configured for this host.
17.
On the Targets tab, click on Details.
18.
The two iSCSI sessions to the array can be seen in the Identifier window on the Sessions tab.
This is an image of the user clicking on the sessions tab to see the two iSCSI sessions to the storage array.
Configuration of the iSCSI Initiator - Dual Path Data, Dual Controllers (Duplex)
The redundant network configuration with dual, independent Ethernet switches offers considerable flexibility in the physical placement of equipment, while maintaining
high availability and data path redundancy. As many as 16 hosts can be configured with dual path, redundant data paths to the storage array. In a redundant network
configuration, each iSCSI initiator on the host will be mapped to one portal on each of the storage array's controllers. In the direct attached scenario the default gateway is
not important. However, in the redundant SAN-attached configuration, if a host passes through a router the default gateway must be configured on each iSCSI portal on the
RAID Controllers.
The methods of configuring the IP addresses of the controllers are the same as in direct attached, and the configuration of the iSCSI initiators are the same. The differences
are that each host is cabled to redundant switches to provide the data pathways to each of the array controllers. Automatic discovery is limited to the local subnet when
installing hosts, so that manual discovery is the method of choice.
The Microsoft iSCSI software initiator is integrated into Windows Server 2008. There is therefore no need to install a separate iSCSI software initiator.
The following procedure shows the steps necessary to configure IPv6 iSCSI host connectivity.
2.
You will need to configure the MD3000i storage array iSCSI ports with the correct IPv6 addresses.
From the iSCSI menu tab in the Modular Disk Storage Manager (MDSM), click on the Configure iSCSI Host Ports option
This image shows the selection of the configure iSCSI host port option in MDSM.
3.
From the iSCSI host port menu, select the first iSCSI port you wish to configure from the drop down menu.
This image show the selection of the iSCSI host port to be configured.
4.
Click the Enable IPv6 check box. Ensure that the Enable IPv4 check box is de-selected.
Next, click the IPv6 Settings menu tab.
5.
IPv6 addresses can be set manually or automatically.
As the host in this configuration is directly connected to the MD3000i storage array, the Obtain configuration automatically option is selected. A link-local IPv6
address will be assigned to the port. Link-local addresses will be unique for each interface on a LAN segment.
To configure a specific IPv6 address, select the Specify configuration option and enter the address.
This is an image of the MDSM IPv6 iSCSI port configuration screen. Obtain configuration automatically is
being selected.
6.
Advanced IPv6 settings are available for each port.
Click the Advanced IPv6 Settings button to configure the Ethernet priority or to enable VLAN support.
7.
Advanced host port settings are available for each port.
Click the Advanced Host Port Settings button to set up a custom listening port or to enable jumbo Ethernet frames.
The TCP listening port for iSCSI defaults to 3260. The default listening port for an iSNS server is 3205.
The Jumbo frames setting can be set for oversized Ethernet frames, up to 9000 bytes/frame (default is 1500). Ensure that every component in the chain will properly
support the size of jumbo frame chosen.
8.
Repeat steps 3-7 above for any remaining iSCSI host ports that need configuring.
9.
Click the check box to Enable ICMP Ping responses for setup purposes. It can be turned off after the array setup is completed, if preferred. Changing the status of
this option box will reset all the iSCSI ports on the array. If data transfer is in progress, it will be interrupted until an iSCSI session has been restored.
NOTE:
Make note of the IP address assigned to each port. These will be needed for the following steps.
Click OK to continue.
10.
You will now see a warning message stating that the iSCSI ports will be reset if you continue.
11.
On your iSCSI host, open a command line session and type in the command netsh interface ipv6 show interface
A list of available interfaces is shown. The Idx column shows the ZoneID for each interface.
Make a note of each Local Area Connection (LAN) and the associated ZoneID.
This image shows the netsh interface ipv6 show interface CLI command.
12.
From the command line, type in the command netsh interface ipv6 show address
Make a note of the IPv6 address assigned to each Local Area Connection.
This image shows the netsh interface ipv6 show address CLI command.
13.
From the command line, type in the command ping [IPv6 address]%[ZoneID]
Replace [IPv6 address] with the IPv6 address for a connected LAN Ethernet port on your host and [ZoneID] with the relevant ZoneID for that port
A reply to the Ping command indicates correct connectivity between host and storage array.
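A short sketch of the three commands together; the link-local address and the ZoneID of 4 are illustrative values only:
netsh interface ipv6 show interface
netsh interface ipv6 show address
ping fe80::2e0:81ff:fe2d:2c5e%4
The %ZoneID suffix comes from the Idx column and tells Windows which interface to send the link-local traffic from.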
14.
The iSCSI software initiator now needs to be configured for each iSCSI host to be attached to the MD3000i storage array.
From the Start menu, click Administrative Tools, and launch the Microsoft iSCSI Initiator.
Click the Discovery tab.
Under Target Portals click Add.
15.
If the Microsoft iSCSI Service is not running, you will see a warning message.
Click Yes to start the iSCSI Service.
16.
The iSCSI Initiator will now start.
NOTE:
The Microsoft iSCSI Initiator included in Windows Server 2008 includes a tab for RADIUS (Remote Authentication Dial In User Service) support. This is
an AAA (Authentication, Authorization and Accounting) protocol used for controlling access to network resources.
Note that the MD3000i storage array does not support RADIUS authentication, hence this tab is not used.
17.
Click the Discovery tab.
Under Target Portals click Add Portal.
18.
Enter the IPv6 address of the first MD3000i iSCSI port that you wish to connect to.
Click Advanced
19.
In the Local adapter field, use the drop down menu to select Microsoft iSCSI Initiator
20.
In the Source IP field, use the drop down menu to select the IPv6 address of your first connected Ethernet port
21.
Click OK to continue
22.
Click OK to continue
23.
A successful connection between the host iSCSI port and the storage array is now shown in the Target Portals section.
24.
In the Targets tab, click on the target shown. Click Log On.
25.
Click the Automatically restore this connection when the system boots check box if required.
Click the Enable Multi-Path check box if you are setting up connections for multiple Ethernet host ports.
Click Advanced.
NOTE:
The correct iSCSI multi-path software is installed, along with the necessary host agent software, when the MDSM software is installed on an iSCSI host.
Installation of the MDSM host files is required for every iSCSI host attached to an MD3000i storage array.
26.
In the Local adapter field, use the drop down menu to select Microsoft iSCSI Initiator
27.
In the Source IP field, use the drop down menu to select the IPv6 address of your connected Ethernet port
28.
In the Target Portal field, use the drop down menu to select the IPv6 address of the MD3000i iSCSI port to which you are connecting.
29.
Click OK to continue.
30.
Click OK to continue.
31.
The information in the Targets tab will now show your iSCSI connection to the storage array.
32.
Repeat steps 14 to 28 for all other iSCSI connections between your host and the MD3000i.
33.
On the Configure tab in MDSM, click the Configure Host Access (Manual) option.
34.
Enter the hostname in the appropriate field and select the host type from the drop down menu.
Click Next.
This image shows entering the host name and selecting host type, then clicking next.
35.
Select the host from the list of known iSCSI initiators.
Click Add to move the host into the Selected iSCSI initiators list on the right.
36.
Click Next to continue.
37.
Determine whether this host will share access to the same virtual disks with another host (for clustering purposes).
Click Yes or No as appropriate.
Click Next to continue.
38.
Click Finish.
39.
At the Configure Host Access completion window, click No if you have finished configuring host access.
If you have other hosts to configure, click Yes and repeat the steps above for additional hosts.
40.
From the iSCSI tab of MDSM, choose the View/End iSCSI Sessions option.
41.
You will now see the iSCSI sessions from your configured host to the storage array.
Click Close to exit
42.
Take a moment to look at the iSNS target discovery configuration page.
From the iSCSI tab, click on the Change Target Discovery option.
43.
The Change Target Discovery page shows the setup options for using iSNS.
Click the Use iSNS Server check box to allow the MD3000i to register with an iSNS server for target discovery.
Choose appropriate settings for either IPv4 or IPv6 configuration.
If you are not setting up iSNS, click Cancel to remove iSNS discovery options.
2.
Click the CHAP Secret button
3.
Enter the CHAP secret into the text boxes. A random secret can also be chosen.
NOTE:
Securely store the CHAP secret in a text file, as this secret must be entered during the setup of CHAP on the hosts. If the secret is lost and another host needs to
be added later, the secret will have to be reconfigured for all existing hosts.
Click OK to continue.
4.
Next, ensure the correct level of CHAP authentication is chosen by checking one or more option boxes.
The levels of authentication are as follows:
• None only - no authentication is required for a host to connect to the storage array.
• Both None and CHAP - authentication is optional (both authenticated and non-authenticated hosts can connect to the storage array).
• CHAP only - authentication is required by all hosts.
Click OK to continue
This image shows selection of authentication method and clicking OK to continue.
6.
Mutual CHAP authentication provides a slightly higher degree of security.
This requires the Initiator to authenticate the Target and the Target to authenticate the Initiator.
If Mutual CHAP is to be configured, from the iSCSI tab in MDSM, choose the Enter Mutual Authentication Permissions option.
This image shows selecting the Enter Mutual Authentication Permissions option.
7.
Select the host that requires mutual authentication from the Select an Initiator field.
Click the CHAP Secret button.
This image shows selecting the Initiator and clicking the CHAP Secret button.
8.
Enter and confirm the CHAP secret for the selected host. CHAP secrets must be a minimum of 12 characters in length.
Click OK to continue.
10.
Next, you will need to configure the host for CHAP or Mutual CHAP authentication.
If Mutual CHAP is required, a secret must be installed on the host's iSCSI initiator.
From the General tab of the Microsoft iSCSI Initiator, click Secret.
11.
Enter the iSCSI initiator's CHAP secret. This must be a minimum of 12 characters long. Click OK to continue.
12.
From the Discovery tab, select each target portal and click Remove.
13.
From the Targets tab, click the Details button.
This image shows clicking the Details button in the Targets tab.
14.
Select each target by clicking the check box beside the identifier.
Click Log off.
15.
Click OK to continue.
16.
In the Favorite Targets tab, select each target shown and click Remove.
17.
In the Discovery tab, click Add Portal.
18.
Enter the IPv6 address of the first MD3000i iSCSI port that you wish to connect to.
Click Advanced.
19.
In the Local adapter field, select Microsoft iSCSI Initiator
In the Source IP field, select the IP address for the host Ethernet port you are configuring.
Click OK to continue.
20.
Click OK to continue.
21.
You will now see the target portal to which you have connected
22.
From the Targets tab, select the target and click Log on.
23.
Click the Automatically restore this connection when the system boots check box if required.
Click the Enable Multi-Path check box if you are setting up connections for multiple Ethernet host ports.
Click Advanced.
24.
In the Local adapter field, select Microsoft iSCSI Initiator
In the Source IP field, select the IP address for the host Ethernet port you are configuring.
In the Target Portal field, select the target iSCSI port of the MD3000i storage array that you are connecting to.
Click OK to continue.
25.
Click OK to continue.
26.
You will now see that you are logged onto and connected to the MD3000i target iSCSI port.
27.
Repeat steps 18 to 27 for all other iSCSI connections between your host and the MD3000i.
28.
In the Favorite Targets tab, you will now see all the connections that you have configured from your host to your MD3000i storage array that will automatically
restore when you reboot your host.
RHEL Hosts
Installation of the Red Hat Enterprise Linux (RHEL) iSCSI Initiator and the Modular Disk Storage
Manager Software
This procedure shows the steps necessary to install the RHEL iSCSI Initiator software and the Modular Disk Storage Management (MDSM) software and discover the
storage array.
This is an image showing the user replacing the /etc/iscsi.conf file with the one from the installation CD.
2.
Using vi, open the file /etc/iscsi.conf for editing.
3.
Enter the IP addresses of the MD3000i RAID Controller ports as the DiscoveryAddress.
This is an image showing the insertion of the controller IP addresses as DiscoveryAddress= entries.
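As an illustration, assuming the array iSCSI ports are left at the defaults used in this course, the entries in /etc/iscsi.conf would resemble:
DiscoveryAddress=192.168.130.101
DiscoveryAddress=192.168.130.102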
4.
Start the iSCSI service, with service iscsi start.
This is an image showing the user starting the iSCSI service at the command line with service iscsi start.
5.
Set the iSCSI service to start automatically at boot. Also, verify that it will start in run levels 3 and 5.
This is an image showing the verification that the iSCSI service will run under levels 3 and 5.
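One way to do this on RHEL is with the standard chkconfig utility; a minimal sketch:
chkconfig iscsi on
chkconfig --list iscsi
The --list output should show the iscsi service turned on for run levels 3 and 5.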
Installation of MDSM
1.
Configure the NIC ports on the server to maintain the same IP address on reboot. Edit the settings with vi.
This is an image showing the user opening the configuration file for Ethernet port eth5.
2.
The edited file will resemble the following screenshot.
This is an image showing the approximate configuration appearance of the ethernet port.
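A minimal sketch of such a configuration file; the device name eth5 is taken from this example, and the addresses are illustrative, not requirements:
# /etc/sysconfig/network-scripts/ifcfg-eth5
DEVICE=eth5
BOOTPROTO=static
IPADDR=192.168.130.10
NETMASK=255.255.255.0
ONBOOT=yes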
3.
Perform the following steps to verify configuration.
4.
Insert the MD3000i Resource CD and run the install.sh script from the Linux folder.
This is an image showing the user launching the install.sh script on the CD from the command line.
5.
Choose option 4 - Install MD3000i Documentation and select the location for installation.
This is an image showing the user choosing option four to install the documentation.
6.
Return to the main menu and choose option 2 - Install MD3000i Storage Manager.
This is an image showing the user returning to the main menu and choosing option two, to install the
storage manager software.
7.
The installer will launch the GUI.
8.
Choose the language and click OK.
This is an image showing the GUI launch where the user chooses the language and selects okay.
9.
Click Next.
This is an image showing the introduction screen where the user chooses next.
10.
Accept the license agreement and click Next.
This is an image showing the user accepting the license agreement and choosing next.
11.
Choose the installation directory and click Next.
This is an image showing the user selecting the installation directory and choosing next.
12.
Select Typical (Full Installation) and click Next.
This is an image showing the user selecting a typical installation and choosing next.
13.
A warning message will appear instructing the user to install the RDAC for multipath support. With this release the RDAC is now a DKMS package. It will be
installed using option 3 from the install script.
This is an image showing the user being warned to install RDAC; however, it will be installed later.
14.
Click Install.
This is an image showing the user confirming the choices made and selecting install.
15.
Installation continues.
This is an image showing the installation continuing.
16.
Click Done. Do not install the multi-path driver at this time. MPP will be installed later in this procedure.
This is an image showing the user clicking done at the end of the installation.
17.
NOTE:
The array will default to requesting DHCP-assigned IP addresses for the management ports on the controllers. If the dynamically assigned addresses are not in
the same subnet with the management host, the discovery process will not find the array. The discovery of arrays is only on the host's subnet through port
2463. If a DHCP response is not acknowledged, the controllers will default to 192.168.128.101 for controller 0, and 192.168.128.102 for controller 1. Once the
units have had IP addresses manually set, the units will no longer request dynamic IP assignment and will continue to operate at the manually set addresses. If
these set addresses are not known, the IP addresses of the controllers will need to be retrieved with the password reset cable.
If the host is remotely connecting to the array, then a default gateway will be needed. If one is not present when this command is run, then the array should be
directly connected to the host and set on the same subnet as the storage to proceed with setup. The IP addresses can then be changed to suit the installation, and then
the host can be connected remotely. Using the same parameters for Hyperterminal that are used for password reset, issue the following command at the shell
interface.
-> netCfgShow
18.
Launch the MDSM GUI. A window should immediately appear asking the user to choose between Automatic discovery or Manual.
This is an image showing the launch of the GUI.
19.
If the array is on the same subnet as the host, you may select the option button for Automatic and click OK. In this scenario, we will choose Manual.
20.
Select Out-of-band management, and enter the IP addresses of the RAID Controllers' management ports. In-band storage management is not enabled until an
iSCSI session is set up and operating. Click Add.
NOTE:
After discovery, it is recommended that you reconfigure the out-of-band management IP addresses of the RAID controllers to be on your management
network. After changing them, you will need to rediscover the array at the new IP addresses.
This is an image showing the user entering the controller IP
addresses so the array can be discovered.
21.
Click No.
This is an image showing the screen where the user can select
whether to discover another array.
22.
On the iSCSI tab in MDSM, click on Configure iSCSI Host Ports.
This is an image showing the user clicking on Configure iSCSI Host Ports on the iSCSI tab of MDSM.
23.
Select each port on the RAID controllers and enter the appropriate IP address for each to match the configuration set in the iscsi.conf file. Select Enable ICMP PING
responses for testing. It may be turned off later if desired. Click OK when complete.
This is an image of the user setting the IP addresses to match the addresses chosen for the controllers during the iSCSI initiator installation.
24.
Start the iSCSI service at the command line with the command service iscsi start.
This is an image showing the user starting the iSCSI service from the command line.
25.
Restart the MDSM agent at the command line. Make sure to wait a few seconds for the start to complete. It should look similar to the screenshot below.
This is an image showing the user restarting the MDSM agent at the command line.
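A sketch of the restart, assuming the host agent was installed with the init script name SMagent (verify the script name on your installation):
/etc/init.d/SMagent restart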
26.
Verify the iSCSI connections have been established using the command iscsi-ls.
This is an image showing the user verifying the iSCSI connections with the command iscsi-ls.
27.
On the Configure tab in MDSM, select Configure Host Access (Automatic)
This is an image of the user selecting Configure Host Access (Automatic) on the Configure tab.
28.
Select the host name and move it to the right. Click OK to complete.
This is an image showing the user selecting the host name and moving it to the right hand column, and
then clicking okay to complete.
29.
Install the RDAC software. Ensure the kernel-devel or kernel-smp-devel package is installed (rpm -qa | grep kernel). Insert the MD3000i Resource CD and run the
install.sh script from the Linux folder.
30.
Choose option 3 - Install Multi-pathing driver. This will install the following packages:
31.
The dkms install of the RDAC driver installs the drivers, makes a new initrd, and modifies the grub.conf file to boot to the MPP kernel. Here is an example
grub.conf after the RDAC is installed.
This is an image showing an example grub.conf file after RDAC has been installed.
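A hedged sketch of what the added stanza can look like; the title, kernel version, and volume paths are illustrative, and the initrd name follows the mpp-<kernel-version>.img pattern created by the DKMS install:
title Red Hat Enterprise Linux Server (2.6.18-8.el5) with MPP support
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00
    initrd /mpp-2.6.18-8.el5.img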
32.
Reboot the host.
33.
To test the installation, configure two Virtual Disks in MDSM and map them to the host. Use the hot_add utility to rescan for new devices. Verify the LUNs are
seen on one path using the SMdevices command.
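For illustration, the verification from the host shell looks like this:
hot_add
SMdevices
SMdevices should list each mapped virtual disk exactly once, confirming that the multi-path driver is presenting a single device per LUN.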
MDSM Usage
This is detailed information about the MDSM user interface and methods used to manage the MD3000i Enclosure System for data storage.
• The Title Bar at the top of the screen displays the name of the application and the Dell logo.
• Beneath the Title Bar is the Array Selector, listing the MD3000i Storage Array that is currently selected. The icon next to the array's name indicates its condition.
You can choose another array by clicking the down-arrow next to the array's name and highlighting a different array in the drop-down list. Links to the right of the
array name let you add or remove arrays from the list of managed arrays. Links to the far right provide access to online help or close the Storage Manager.
• Beneath the Array Selector is the Content Area. Six tabs appear in this area to group the tasks you can perform on the selected array. When you click on a tab, the
Content Area displays links for the tasks you can perform. The following sections list some of the tasks you can perform under each tab.
• Out-of-band management - data is separate from commands and events, utilizing the Internet Protocol over the Ethernet management connection. For automatic array
discovery, the management software on the host will utilize port 2463 to locate arrays on the local subnet. Both discovery and out-of-band management utilize
software port 2463. Firewall ports should be enabled for port 2463 to allow remote administration of the array.
• In-band management - commands, events, and data are transferred across the iSCSI sessions after initiation.
NOTE:
Dell recommends using out-of-band management but in-band management may be optionally configured. To use automatic discovery for in-band management, the
management station must initially be on the same subnet as the MD3000i management ports. After initial setup the management station needs only to be able to reach
the in-band subnet.
• To add a storage array that uses out-of-band management, specify the host name or IP address of each controller in the storage array.
• To add a storage array that uses in-band management, specify the host name or IP address of the host after an iSCSI session has been initiated.
NOTE:
These settings apply to all storage arrays currently managed by the management station.
1. Click the Tools tab, and then click the Set up SNMP Alerts link.
2. Enter the Community name. The community name is an ASCII string that identifies a known set of management consoles and is set by the network administrator in
the management console. The default community name is public.
3. Enter the Trap destination. The trap destination is the IP address or the host name of a management console that runs an SNMP service.
4. Click Add to add the management console to the Configured SNMP addresses list.
5. Repeat steps 2 through 4 until you have added all management consoles that should receive SNMP alerts.
6. Click OK.
NOTE:
You must install an SNMP service on every system included in the list of addresses configured to receive SNMP alerts. You do not have to install MDSM on a system
in order to display SNMP alerts. You need only install an appropriate SNMP service and application (such as the Dell IT Assistant).
Event Monitor
When enabled, the event monitor runs continuously in the background and monitors activity on the managed storage arrays. If the event monitor detects any critical
problems, it can notify a host or remote system using e-mail, SNMP trap messages, or both. For the most timely and continuous notification of events, enable the event
monitor on a management station that runs 24 hours a day. Enabling the event monitor on multiple systems or having a combination of an event monitor and MDSM active
can result in duplication of events.
Microsoft Windows
Linux OSes
At the command prompt, type SMmonitor start and press <Enter>. When the program startup begins, the system displays the following message: SMmonitor started. To
stop the service type SMmonitor stop and press <Enter>. The system displays the following message: Stopping Monitor process.
• Snapshot Copy - A snapshot virtual disk is a point-in-time image of a virtual disk in a storage array. It is not an actual virtual disk containing data; rather, it is a
reference to the data that was contained on a virtual disk at a specific time. A snapshot virtual disk is the logical equivalent of a complete physical copy. However,
you can create a snapshot virtual disk much faster than a physical copy, using less disk space.
• Virtual Disk Copy - When you create a virtual disk copy, you create a copy pair that has a source virtual disk and a target virtual disk on the same storage array.
When you start a virtual disk copy, all data is copied to the target virtual disk, and the source virtual disk permissions are set to read-only until the virtual disk copy is
complete.
Here is a comparison of the Snapshot and Virtual Disk Copy Premium Features:
This is a thumbnail image of a sample Snapshot.
Snapshot
The Snapshot feature provides a point in time (PiT) copy of a virtual disk. Its creation is near instantaneous and requires only a small amount of disk space. The snapshot
virtual disk appears and functions as a standard virtual disk. The virtual disk on which the snapshot is based, called the source virtual disk, must be a standard virtual disk in
the storage array. No I/O requests are permitted on the source virtual disk while the virtual disk snapshot is being created. Deleting a snapshot does not affect data on the
source virtual disk.
A snapshot repository virtual disk containing metadata and copy-on-write data is automatically created when a snapshot virtual disk is created. The only data stored in the
snapshot repository virtual disk is that which has changed since the time of the snapshot.
After the snapshot repository virtual disk is created, I/O write requests to the source virtual disk resume. Before a data block on the source virtual disk is modified,
however, the contents of the block to be modified are copied to the snapshot repository virtual disk for safekeeping. Because the snapshot repository virtual disk stores
copies of the original data in those data blocks, further changes to those data blocks write only to the source virtual disk. The snapshot repository uses less disk space than a
full physical copy, because the only data blocks that are stored in the snapshot repository virtual disk are those that have changed since the time of the snapshot.
When you create a snapshot virtual disk, you specify where to create the snapshot repository virtual disk, its capacity, and other parameters. You can disable or delete the
snapshot virtual disk when you no longer need it, such as when using the snapshot as a means of capturing a backup, and the backup is now complete. If you disable a
snapshot virtual disk, you can re-create and reuse it the next time you perform a backup. If you delete a snapshot virtual disk, you also delete the associated snapshot
repository virtual disk.
NOTE:
You can create concurrent snapshots of a source virtual disk on both the source disk group and on another disk group.
• These types of virtual disks are not valid source virtual disks: snapshot repository virtual disks, snapshot virtual disks, target virtual disks that are participating in a
virtual disk copy.
• You cannot create a snapshot of a virtual disk that contains unreadable sectors.
• You must satisfy the requirements of your host operating system for creating snapshot virtual disks. Failure to meet the requirements of your host operating system
results in an inaccurate point-in-time image of the source virtual disk or the target virtual disk in a virtual disk copy.
If you want to use a snapshot regularly, such as for backups, use the Disable Snapshot and Re-create Snapshot options to reuse the snapshot. Disabling and re-creating
snapshots preserves the existing virtual disk-to-host mappings to the snapshot virtual disk.
Using the simple path, you can specify these parameters for your snapshot virtual disk:
• Snapshot Virtual Disk Name - A user-specified name that helps you associate the snapshot virtual disk to its corresponding snapshot repository virtual disk and
source virtual disk.
• Snapshot Repository Virtual Disk Name - A user-specified name that helps you associate the snapshot repository virtual disk to its corresponding snapshot virtual
disk and source virtual disk.
Using the simple path, these defaults are used for the other parameters of a snapshot virtual disk:
• Capacity Allocation - The snapshot repository virtual disk is created using free capacity on the same disk group where the source virtual disk resides.
• Host-to-Virtual Disk Mapping - The default setting is Map now.
• Percent Full - When the snapshot repository virtual disk reaches the specified repository full percentage level, the event is logged in the Main Event Log (MEL). The
default snapshot repository full percentage level is 50 percent of the source virtual disk.
• Snapshot Repository Virtual Disk Full Conditions - When the snapshot repository virtual disk becomes full, you are given a choice of failing write activity to the
source virtual disk or failing the snapshot virtual disk.
NOTE:
Removing the drive letter of the associated virtual disk in Windows or unmounting the virtual disk in Linux will help to guarantee a stable copy of the virtual disk for
the Snapshot.
1. Stop all data access (I/O) activity to the source virtual disk before creating a snapshot to ensure that you capture an accurate point-in-time image of the source virtual
disk.
2. If you are running Windows, run the SMrepassist (replication assistance) utility from the command line in a DOS window. The SMrepassist utility is installed in
the /util/ directory of your host. It flushes the source virtual disk's cache before creating a snapshot. Use this syntax: SMrepassist -f <filesystem-identifier> (an example follows this procedure).
3. In MDSM, click the Configure tab, and then click the Create Snapshot Virtual Disks link.
4. The Additional Instructions dialog appears; click the Close button in this dialog to continue.
5. Click the plus sign (+) to the left of the disk group to expand it, then click the virtual disk from which you want to create a snapshot. Click the Next button. A No
Capacity Exists warning appears if there is not enough space in the disk group of the source virtual disk to create the snapshot.
6. On the Create Snapshot Virtual Disks - Select Path screen, select the Simple path.
NOTE:
A snapshot repository virtual disk requires 8 MB of free space. If the required free space is not available in the disk group of the source virtual disk, the Create
Snapshot Virtual Disks feature defaults to the Advanced path.
7. Click the Next button.
8. Type a name for the snapshot in the Snapshot virtual disk name text box.
9. Type a name for the snapshot repository virtual disk in the Snapshot repository virtual disk name text box.
10. Click the Next button.
11. Choose whether to map the virtual disk to a host (or host group) now or later:
To map now, click the Map now radio button, select a host or host group by clicking it, then assign a LUN.
To map later, click the Map later radio button.
12. Click the Finish button to create the snapshot virtual disk and the associated snapshot repository virtual disk.
13. After you have created one or more snapshot virtual disks, mount or reassign a drive letter of the source virtual disk.
14. If needed, assign host-to-virtual disk mapping between the snapshot virtual disk and the host operating system that accesses it.
NOTE:
In some cases, depending on the host type and any virtual disk manager software in use, the software prevents you from mapping the same host to both a source
virtual disk and its associated snapshot virtual disk.
15. If you are using a Linux-based system, run the hot_add utility to register the snapshot virtual disk with the host operating system.
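As an example of the cache flush in step 2, assuming the source virtual disk is mounted in Windows as drive E: (a hypothetical drive letter), the command would be:

SMrepassist -f E: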
Use the advanced path to choose whether to place the snapshot repository virtual disk on free capacity or unconfigured capacity and to change the snapshot repository
virtual disk parameters. You can select the advanced path regardless of whether you use free capacity or unconfigured capacity for the snapshot virtual disk.
Using the advanced path, you can specify these parameters for your snapshot virtual disk:
• Snapshot Virtual Disk Name-A user-specified name that helps you associate the snapshot virtual disk to its corresponding snapshot repository virtual disk and source
virtual disk.
• Snapshot Repository Virtual Disk Name -A user-specified name that helps you associate the snapshot repository virtual disk to its corresponding snapshot virtual disk
and source virtual disk.
• Capacity Allocation - This parameter allows you to choose where to create the snapshot repository virtual disk. You can allocate capacity by using one of these
methods:
◦ Use free capacity on the same disk group where the source virtual disk resides.
◦ Use free capacity on another disk group.
◦ Use unconfigured capacity and create a new disk group for the snapshot repository virtual disk.
Dell recommends placing the snapshot repository virtual disk within the disk group of the source virtual disk. This ensures that if physical disks associated with
the disk group are moved to another storage array, all the virtual disks associated with the snapshot virtual disk remain in the same group.
• Percent Full - When the snapshot repository virtual disk reaches the user-specified repository full percentage level, the event is logged in the Major Event Log
(MEL). The default snapshot repository full percentage level is 50% of the source virtual disk.
• Snapshot Repository Virtual Disk Full Conditions - You can choose whether to fail writes to the source virtual disk or fail the snapshot virtual disk when the snapshot
repository virtual disk becomes full.
• Host-to-Virtual Disk Mapping - Choose whether to map the snapshot virtual disk to a host or host group now or to map the snapshot virtual disk later. The default
setting is Map later.
NOTE:
Removing the drive letter of the associated virtual disk in Windows or unmounting the virtual disk in Linux will help to guarantee a stable copy of the virtual disk for
the Snapshot.
1. Stop all data access (I/O) activity to the source virtual disk before creating a snapshot to ensure that you capture an accurate point-in-time image of the source virtual
disk.
2. If you are running Windows, run the SMrepassist (replication assistance) utility from the command line in a DOS window to flush the source virtual disk's cache.
Use this syntax: SMrepassist -f <filesystem-identifier>
3. In MDSM, click the Configure tab, and then click the Create Snapshot Virtual Disks link.
4. The Additional Instructions dialog appears; click the Close button in this dialog to continue.
5. Click the plus sign (+) to the left of the disk group to expand it, then click the virtual disk from which you want to create a snapshot. Click the Next button.
A No Capacity Exists warning appears if there is not enough space in the disk group of the source virtual disk to create the snapshot.
6. On the Create Snapshot Virtual Disks - Select Path screen, select the Advanced path.
7. Click the Next button.
8. Type a name for the snapshot in the Snapshot virtual disk name text box.
9. Type a name for the snapshot repository virtual disk in the Snapshot repository virtual disk name text box.
10. Click the Next button.
11. Choose whether to create the snapshot virtual disk from unconfigured capacity or free capacity.
To create the snapshot virtual disk from unconfigured capacity:
1. Click the Unconfigured capacity radio button, then click the Next button.
2. On the Create Snapshot Virtual Disks - Specify Capacity screen, choose a RAID level, then click the Next button.
Choose a name that helps you associate the snapshot virtual disk and snapshot repository virtual disk with its corresponding source virtual disk.
By default, the snapshot name is shown in the Snapshot virtual disk name field as:
<source-virtual disk-name>-<sequence-number>
where sequence-number is the chronological number of the snapshot relative to the source virtual disk.
The default name for the associated snapshot repository virtual disk that is shown in the Snapshot repository virtual disk field is:
<source-virtual disk-name>-R<sequence-number>
For example, if you are creating the first snapshot virtual disk for a source virtual disk called Accounting, the default snapshot virtual disk is Accounting-1, and the
associated snapshot repository virtual disk default name is Accounting-R1. The default name of the next snapshot virtual disk you create based on Accounting is
Accounting-2, with the corresponding snapshot repository virtual disk named as Accounting-R2 by default.
Whether you use the software-supplied sequence number that (by default) populates the Snapshot virtual disk name or the Snapshot repository virtual disk name field, the
next default name for a snapshot or snapshot repository virtual disk still uses the sequence number determined by the software. For example, if you give the first snapshot
of source virtual disk Accounting the name Accounting-8, and do not use the software-supplied sequence number of 1, the default name for the next snapshot of
Accounting is still Accounting-2. The next available sequence number is based on the number of existing snapshots of a source virtual disk. If you delete a snapshot virtual
disk, its sequence number becomes available again.
You must choose unique names for the snapshot virtual disk and the snapshot repository virtual disk, or an error message is displayed. Names are limited to 30 characters.
After you reach this limit in either the Snapshot virtual disk name or the Snapshot repository virtual disk name field, the source virtual disk name is truncated enough to
append the sequence string.
You can provide the additional capacity for a snapshot repository virtual disk in two ways:
• Use the free capacity available on the disk group of the snapshot repository virtual disk.
• Add unconfigured capacity to the disk group of the snapshot repository virtual disk. Use this option when no free capacity exists on the disk group.
You cannot increase the storage capacity of a snapshot repository virtual disk if the snapshot repository virtual disk has any one of these conditions:
1. Click the Modify tab, then click the Modify snapshot virtual disks link.
2. Click the Expand Snapshot Repository link.
3. Click the snapshot repository virtual disk you want to expand.
4. If necessary, you can add free capacity to the volume group by adding an unassigned physical disk. To add an unassigned physical disk:
1. Click the Add Drives button.
2. Select the capacity to add from the drop-down menu.
3. Click the Add button.
5. Enter the amount by which you want to expand the snapshot repository virtual disk in the Increase capacity by field.
6. Click the Finish button to expand the capacity of the snapshot repository virtual disk.
NOTE:
Before you create a new point-in-time image of a source virtual disk, stop any data access (I/O) activity and remove the drive letter of the associated virtual disk.
The SMdevices utility displays the snapshot virtual disk in its output, even after the snapshot virtual disk is disabled.
1. Click the Modify tab, then click the Modify snapshot virtual disks link.
2. Click the Disable Snapshot Virtual Disks link.
3. Highlight the snapshot virtual disk to be disabled and click the Disable button beneath the list.
4. In the Confirm Disable Snapshot Virtual Disk dialog box, type yes and then click the OK button.
The snapshot virtual disk is disabled. The associated snapshot repository virtual disk does not change status, but copy-on-write activity to the disabled snapshot virtual disk
stops until the snapshot virtual disk is re-created.
NOTE:
This action invalidates the current snapshot.
1. Click the Summary tab, then click the Disk Groups & Virtual Disks link to ensure that the snapshot virtual disk is in Optimal or Disabled status.
2. Follow any additional instructions needed for your operating system. Failure to follow these additional instructions can create unusable snapshot virtual disks.
3. Click the Modify tab, then click the Modify snapshot virtual disks link.
4. Click the Re-create Snapshot Virtual Disks link.
5. Highlight the snapshot virtual disk to re-create and click the Re-Create button beneath the list.
6. In the Confirm Snapshot Virtual Disk Re-Creation dialog box, type yes and then click the OK button.
Re-creating a snapshot repository virtual disk uses the previously configured snapshot name and parameters.
This is a thumbnail image of a sample Virtual Disk.
The source virtual disk can be a standard virtual disk, a snapshot virtual disk, or the source virtual disk of a snapshot virtual disk. When you start a virtual disk copy, all
data is copied to the target virtual disk, and the source virtual disk permissions are set to read-only until the virtual disk copy is complete.
NOTE:
The preferred method for creating a virtual disk copy is to copy from a snapshot virtual disk. This allows the original virtual disk used in the snapshot operation to
remain fully available for read/write activity while the snapshot is used as the source for the virtual disk copy operation.
NOTE:
The target virtual disk capacity must be equal to or greater than the source virtual disk capacity.
When you begin the disk copy process, you must define the rate at which the copy is completed. Giving the copy process top priority will slightly impact I/O performance,
while giving it lowest priority will make the copy process take longer to complete. You can modify the copy priority while the disk copy is in progress.
When creating a snapshot virtual disk, map the snapshot virtual disk to only one node in the cluster. Mapping the snapshot virtual disk to the host group or both nodes in the
cluster may cause data corruption by allowing both nodes to concurrently access data.
Preserve the data on the target virtual disk by keeping its write protection set to Read-Only in these cases:
• If you are using the target virtual disk for backup purposes.
• If you are using the data on the target virtual disk to copy back to the source virtual disk of a disabled or failed snapshot virtual disk.
If you decide not to preserve the data on the target virtual disk after the virtual disk copy is complete, change the write protection setting for the target virtual disk to
Read/Write.
Follow these steps to set the target virtual disk read/write permissions:
1. Click the Modify tab, and then click the Manage Virtual Disk Copies link.
2. Select one or more copy pairs in the table and click the Permissions button to the right of the table.
The Set Target Virtual Disk Permissions dialog box appears.
3. In the Set Target Virtual Disk Permissions dialog box select either Read-Only or Read/Write.
4. Click the OK button in the dialog box.
If you select Read-Only, write requests to the target virtual disk will be rejected. If you select Read/Write, the host can read and write to the target virtual disk after the
virtual disk copy is complete.
• While a virtual disk copy has a status of In Progress, Pending, or Failed, the source virtual disk is available for read I/O activity only. After the virtual disk copy is
complete, read and write I/O activity to the source virtual disk is permitted.
• A virtual disk can be selected as a target virtual disk for only one virtual disk copy at a time.
• A virtual disk with a Failed status cannot be used as a source virtual disk or target virtual disk.
• A virtual disk with a Degraded status cannot be used as a target virtual disk.
• A virtual disk participating in a modification operation cannot be selected as a source virtual disk or target virtual disk. Modification operations include:
◦ Capacity expansion
◦ RAID-level migration
◦ Segment sizing
◦ Virtual disk expansion
◦ Defragmenting a virtual disk
1. Before you create a full copy of a source virtual disk, stop any data access (I/O) to the source virtual disk and the target virtual disk so that the source virtual disk has
a stable version of the data to be copied.
2. If the host to which the source virtual disk is mapped is running Windows:
1. Remove the drive letter from the target virtual disk (assuming the target virtual disk has previously been assigned a drive letter).
2. Run the SMrepassist utility on the host where the snapshot virtual disk is mounted to flush all the write buffers from the new physical disk. At the host prompt,
type SMrepassist -f <filename-identifier> and press <Enter>.
The write buffers for the physical disk are flushed.
3. If the host to which the source virtual disk is mapped is running Linux and the target virtual disk is mounted, dismount the target virtual disk.
4. Click the Configure tab, then click the Create Virtual Disk Copies link.
5. On the Select Source Virtual Disk page, select the virtual disk to copy (source virtual disk), and click Next.
NOTE:
If the virtual disk you select is not valid, an information dialog box appears explaining the types of virtual disks you can use as the source for a virtual disk
copy. Click the OK button to close this dialog box and select a different source virtual disk.
NOTE:
If you select a target virtual disk with a capacity similar to the source virtual disk, you reduce the risk of having unusable space on the target virtual disk
after the virtual disk copy is completed.
6. On the Select Target Virtual Disk page, either select an existing virtual disk to use as the target, or click the Create a new virtual disk radio button and type a name for the new target virtual disk in the text box.
7. Click the Next button at the bottom of the page.
8. Set the copy priority for the virtual disk copy and click the Next button. The source virtual disk, the target virtual disk, and the copy priority setting that you selected
appear on the Create virtual disk copies-Confirm Copy Settings dialog. The higher priorities allocate more resources to the virtual disk copy at the expense of the
storage array's performance.
9. If you approve of the parameters, type yes in the text box and click Finish to confirm the copy settings and start the virtual disk copy.
The Copy Started page appears, verifying that the virtual disk copy has started. This dialog also enables you to exit the Create virtual disk copies feature or create
another new virtual disk copy.
10. Based on whether you want to create another virtual disk copy or modify the one you just created, choose:
◦ Yes - Create a new virtual disk copy.
◦ No - Exit the Create virtual disk copies dialog.
◦ Manage Virtual Disk Copies - Recopy, stop the copy process, set permissions or priority, or remove virtual disk copies.
You can view the progress of a virtual disk copy in the Manage virtual disk copies page. For each copy operation in progress, the list displays a sliding scale in the Status
field showing the percentage of the operation that is complete. Once the virtual disk copy is complete, do this:
1. In Linux, if you created the target virtual disk with unconfigured capacity, run the hot_add utility.
2. If you created the target virtual disk with unconfigured capacity, map the virtual disk to a host in order to use it.
3. Register the target virtual disk with the operating system before using the new virtual disk:
1. Enable write permission on the target virtual disk by either removing the virtual disk copy pair or explicitly setting write permission.
2. In Windows, assign a drive letter to the virtual disk.
3. In Linux, mount the virtual disk.
4. Enable I/O activity to the source virtual disk and the target virtual disk.
You can change the copy priority for a virtual disk copy before the copy begins, while the copy is in progress, or when re-creating a virtual disk copy. Follow these steps to stop a virtual disk copy that is in progress:
1. Click the Modify tab, and then click the Manage virtual disk copies link.
2. Select the copy operation you wish to stop by clicking it and click the Stop button.
You can only select one copy operation at a time to be stopped.
3. Click Yes to stop the virtual disk copy.
1. Click the Modify tab, and then click the Manage virtual disk copies link.
2. Select the copy operation in the list displayed by the Manage Virtual Disk Copies page, and then click the Recopy button at the right of the list.
You can only select one copy operation at a time to be recopied.
3. The Recopy dialog box appears. Set the copy priority.
4. Type yes, and then click OK.
When you remove a virtual disk copy from the storage array, the target write attribute for the target virtual disk is also removed. If the virtual disk copy is in "In Progress"
status, you must stop the virtual disk copy before you can remove the copy pair.
1. Click the Modify tab, and then click the Manage virtual disk copies link.
2. Select one or more copy pairs in the table, and click Remove.
The Remove Copy Pairs dialog appears.
3. Click Yes to remove the copy pair.
You will need the Feature Activation Code from the Product Feature Key Booklet, the Service tag number, and the Feature Enable Identifier. You can find these items in
the Modular Disk Storage Manager console for the array on which you wish to install the Virtual Disk Copy feature. They can be reached from the View/enable premium
features link in the Tools tab.
Follow these steps to recover the license key for premium features:
1. Verify that your Modular Disk Storage Array is connected to the server and powered up, and that MDSM software is installed on your management server.
2. Go to http://www.md-storage.com/ (a website owned by LSI, Inc.) and follow the website instructions to download the premium feature key file to your system.
NOTE:
Dell recommends that you enter your e-mail address so that a backup copy of your Premium Feature Key file is sent to you for your records.
3. Enable Snapshot and Virtual Disk Copy in the Modular Disk Storage Manager console.
1. Click the Enable a feature link.
2. Enter the key file as prompted.
3. Ensure that the screen is updated to indicate that Snapshot and Virtual Disk Copy are now enabled.
If Snapshot or Virtual Disk Copy ever becomes disabled, an alert will be generated and you can access the web site and repeat this process.
The script engine processes the commands that configure and monitor a storage array, one at a time. You can access the script engine using the system
management command line interface (SMcli) utility at an operating system prompt.
NOTE:
The SMcli command is installed under the client directory of the selected path during a management station install of the MDSM software.
You can use the script language commands to define and manage all aspects of a storage array, such as host topology, virtual disk configuration, and controller
configuration. The command set is extensive.
NOTE:
If an array password is set, commands that request information from the array can be successfully issued without the password, but changes to the configuration must
be accompanied by the password.
You can use the command line interface to perform the following functions:
• Directly access the script engine and run commands in interactive mode or using a script file.
• Create script command batch files to be run on multiple storage arrays when you need to install the same configuration on different storage arrays.
• Run script commands on a storage array directly connected to a host, a storage array connected to a host over an Ethernet network, or a combination of both.
• Display configuration information about the storage arrays.
• Add storage arrays to and remove storage arrays from the management domain.
• Perform automatic discovery of all storage arrays attached to the local subnet.
• Add or delete Simple Network Management Protocol (SNMP) trap destinations and email alert notifications.
• Specify the mail server and sender email address or SMTP (Simple Mail Transport Protocol) server for alert notifications.
• Direct the output to a standard command line display or to a named file.
Interactive Access
The scripting engine may be accessed interactively from the command prompt. If you enter SMcli and a storage array name but do not specify CLI parameters, script
commands, or a script file, the command line interface begins running in interactive mode. Interactive mode enables you to run individual commands without prefixing the
commands with SMcli. You can enter a single command, view the results, and enter the next command without typing the complete SMcli string. Interactive mode is useful
for determining configuration errors and quickly testing configuration changes.
NOTE:
Several caveats apply to interactive access. The most important are that it gives the user no acknowledgement of the completion of an
operation, and there is no error checking or error handling. For this reason, routine use of the interactive method is discouraged and the scripted method should be
used instead.
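For illustration, an interactive session might look like the following sketch, assuming hypothetical controller IP addresses and the show storageArray profile command; typing exit ends the session:

SMcli 192.168.128.101 192.168.128.102
show storageArray profile;
exit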
Scripted Access
The table below lists the general form of some script language commands and provides a definition of each command.
• If the configuration of the MD3000 storage array is cleared using the clear storageArray configuration command, restart all attached hosts prior to reconfiguring
the storage array.
• When using the command-line interface to create host ports, ensure that the host type is specified in the command. If no host type is specified, the default is Windows
2000/Server 2003 Non-Clustered.
• The SMcli -d parameter shows all storage arrays that are currently discovered.
• Each time a double quote is used within the command string, the escape character (\) must accompany it.
This example shows how to change the name of a storage array. The original name of the storage array is Payroll_Array. The new name is Finance_Array. The array is
addressed by its name with in-band management, indicated by the -n parameter. In-band management requires that the management station issuing the command have
a data-path connection to the array (SAS for the MD3000, iSCSI for the MD3000i). Otherwise, the command would be issued out-of-band, using the IP address of one of the RAID Controller Modules.
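The command listing for each platform is sketched below, assuming the set storageArray userLabel statement; note the escaped double quotes required on Windows and the single-quote alternative available on Linux:

Windows:
SMcli -n Payroll_Array -c "set storageArray userLabel=\"Finance_Array\";"

Linux:
SMcli -n Payroll_Array -c 'set storageArray userLabel="Finance_Array";'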
This example shows how to display all storage arrays in the current configuration. The command in this example returns the host name of each storage array. If you also
want to know the IP address of each storage array in the configuration, add the -i parameter to the command.
SMcli -d -i
The next example shows the command to reset the service tag on an MD3000 enclosure. The name of the array is TRNG01A, the password on the array is password, and
the new service tag is 6MNL42P. Again, the array name requires double quotes, so the escape character (\) must be used immediately before each double quote.
In-Band
When addressing an in-band command to the array with the -n prefix, the command is not controller specific as to which controller might own an addressed object. For
example, the script statement might be:
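SMcli -n TRNG05A -c "show storageArray profile;"

(This is a representative sketch; the show storageArray profile statement is assumed here, but any script command takes the same general form.)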
Where TRNG05A is the storage array's name as created in the management software. The command will be processed without regard to the controllers' ownership of
objects or their IP addresses.
Out-of-Band - IP Address
For example, to address a create virtualdisk command to a specific controller by its IP address (out-of-band), the owner= statement refers to the controller being
addressed by that IP address. If owner=0 is used in the create statement and the command is being addressed to controller 1's IP address, then the command will fail with
the following error.
Error 1011 - A management connection to the RAID controller module in slot 0 must be defined to
complete this operation.
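A sketch of this failing case, assuming controller 1's IP address is 192.168.130.102 and a create virtualDisk statement of this general form (the disk group number, capacity, and label are hypothetical):

SMcli 192.168.130.102 -c "create virtualDisk diskGroup=1 capacity=10GB userLabel=\"VD_Test\" owner=0;"

Because the command is addressed to controller 1 but requests owner=0, it fails with the Error 1011 message shown above.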
Out-of-Band - Hostname
The array may also be accessed by addressing the storage array by an array name that is referenced in DNS or in the hosts file. In that instance, the entry will refer to one of
the controllers, but not to both. If an entry is made in the DNS table that references the array name (hostname) to the primary controller, then commands addressed to that
storage array using the DNS identity will be directed towards the primary controller. This will create problems when addressing a command that refers to objects owned by
the other controller.
NOTE:
The previous command shows the command being addressed to Controller 0, so the assigned owner will be Controller 0. If the command is directed at Controller 0
and the virtual disk is being created with Controller 1 as the owner, the command will fail.
WARNING:
This method of performance enhancement should be avoided in any situation wherein the data cannot be quickly and easily recovered from another source. Turning
off cache mirroring will allow conditions that can result in data loss.
WARNING:
All I/O to the storage array should be halted and the hosts taken offline for this procedure.
At the SMcli command prompt, use the following command structure to disable or enable the cache mirror: set (allVirtualDisks | virtualDisk ["virtualDiskName"]
| virtualDisks ["virtualDiskName1" ... "virtualDiskNameN"] | virtualDisk <wwid>) mirrorCacheEnabled=(TRUE | FALSE)
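For example, a sketch that disables cache mirroring on every virtual disk of an array named TRNG05A (a hypothetical array name and password):

SMcli -n TRNG05A -p password -c "set allVirtualDisks mirrorCacheEnabled=FALSE;"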
NOTE:
When making a disk query, the preferred method is to query the storage array name in-band. If the command fails when addressed to a specific RAID Controller
Module, it may not own the virtual disk. Repeat the command, directing the command to the other controller.
To show the properties of a disk group, use the following command syntax:
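A sketch, assuming the show diskGroup statement (the exact statement may differ by firmware version) and a disk group numbered 1:

SMcli -n TRNG05A -c "show diskGroup [1];"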
After the premium feature has been disabled, the Enable a Feature link will appear, allowing the user to reload the feature key file and restore normal operation.
Clustering
Unless stated otherwise, reference to the Windows OS on this page implies either Windows® Server 2003 Enterprise, Windows® Server 2003 Enterprise x64 Edition, or
Windows® Server 2003 R2 Enterprise Edition.
NOTE:
Dell does not support upgrades from a non-redundant cluster configuration to a redundant configuration.
NOTE:
Majority Node Set (MNS) Quorum resource type is not supported on the MD3000i.
MDSM Client
The software runs on the management station to centrally manage the Dell™ PowerVault MD3000i RAID enclosures. You can use Dell™ PowerVault Modular Disk
Storage Manager (MDSM) to perform tasks such as creating or managing RAID arrays, binding virtual disks, and downloading firmware.
MDSM Agent
This software resides on each cluster node to collect server-based topology data that can be managed by the MDSM Client.
Multi-Path Software
Multi-path software (the failover driver) runs on each cluster node and manages the redundant data path between the server and the RAID enclosure.
For the multi-path software to correctly manage a redundant path, the configuration must provide for redundant data path NICs and redundant cabling. Multi-path software
identifies the existence of multiple paths to a virtual disk and establishes a preferred path to that disk. If any component in the preferred path fails, the multi-path software
automatically re-routes I/O requests to an alternate path so that the storage array continues to operate without interruption. In a topology with redundant Ethernet switches
and redundant pathways to each switch from the host, and redundant paths from each switch to both controllers, the MPIO driver will be sending I/O through both NICs in
round-robin fashion. Failure of one Ethernet path at the host will cause MPIO to send all the data through the other NIC. Failure of both of the pathways to a single
controller will cause MPIO to reroute the data through the other controller.
Failback Mode
To set the correct failback mode on each cluster node, you must merge the MD3000i Stand Alone to Cluster.reg file located in the windows\utility directory of the Dell
PowerVault MD3000i Resource CD into the registry of each node. These registry files enable correct failback operation on the host.
NOTE:
If you uninstall and reinstall the multi-path I/O software or MDSM, you must merge the MD3000i Stand Alone to Cluster.reg file into the registry again.
NOTE:
If you are reconfiguring a cluster node into a stand-alone host, you must merge the MD3000i Cluster to Stand Alone.reg file located in the windows\utility directory
of the Dell PowerVault MD3000i Resource CD into the host registry.
Advanced Features
• Snapshot Virtual Disk - captures point-in-time images of a virtual disk for backups or testing without affecting the contents of the source virtual disk.
NOTE:
When you are creating a snapshot virtual disk, map the snapshot virtual disk to only one node in the cluster. Mapping the snapshot virtual disk to the host group
or both nodes in the cluster may allow both nodes to access data concurrently, causing data corruption.
• Virtual Disk Copy - generates a full copy of data from the source virtual disk to the target virtual disk in a storage array. You can use this feature to back up data,
copy data from disk groups that use smaller capacity physical disks to disk groups using greater capacity physical disks, or restore snapshot virtual disk data to the
source virtual disk.
NOTE:
When you attempt to create a virtual disk copy of an MSCS cluster shared disk directly, the operation fails and displays this error message: The operation
cannot complete because the selected virtual disk is not a source virtual disk candidate. To create a virtual disk copy for an MSCS cluster shared disk, create a
snapshot of the disk, and then use the snapshot virtual disk as the source for the virtual disk copy. When you create the snapshot virtual disk, do not map the
snapshot virtual disk to a cluster node.
When you use these advanced features in a redundant cluster configuration, the automatic failback feature is disabled by default. Therefore, when a failed component is
repaired or replaced, the virtual disk(s) do not automatically transfer to the preferred controller. You can manually initiate a failback using MDSM Client or the Storage
Management Command Line Interface (SMcli).
If the cluster shared disk fails, restore it from the target virtual disk using one of these methods:
• Use Virtual Disk Copy to transfer the data from the target virtual disk back to the cluster shared disk.
• Un-assign the clustered shared disk from the host group and map the target virtual disk to the host group.
NIC Teaming
Network Interface Card (NIC) teaming combines two or more NICs to provide load balancing and/or fault tolerance. Dell supports cluster NIC teaming, but only in a public
network. NIC teaming is not supported in a private network. Use the same brand of NICs in a team, and do not mix brands of teaming drivers. NIC teaming is not supported
on host iSCSI initiators connecting to the MD3000i.
1. Click the Configure tab and then click the Configure Host Access link.
2. Select both cluster nodes, either individually or by clicking the Select All check box beneath the list.
3. Set the Host Type on each cluster node. Click the View Details button (next to the list) and choose the appropriate host type. For a non-redundant cluster host
configuration, select Windows MSCS Cluster - Single Path. For a redundant configuration with dual iSCSI initiators, select Windows 2000/Server 2003
Clustered.
4. Click OK to configure access to the array for the hosts you selected.
Overview
The MD3000i Configuration Utility provides a consolidated approach for configuring MD3000i storage arrays and iSCSI host servers via a single wizard-driven interface.
The MD3000i Configuration Utility is available from the MD3000i Resource CD or as a standalone application that can be downloaded from http://www.support.dell.com/.
NOTE:
The primary means of launching the utility is from the menu presented from the Resource CD. However, there is also an MDconfig.bat file (Windows) and an
MDconfig.sh file (Linux), in the root of the Resource CD, that can launch the utility directly.
2. Choose the MD3000i Configuration Utility option.
3. The initial configuration screen is displayed.
4. Once the wizard is launched, the MD3000i storage array and associated host servers can be configured.
Array Configuration
Storage Array Configuration Overview
The overall process for storage array configuration using the MD3000i Configuration Utility is outlined in the following steps.
NOTE:
In order to configure the storage array, the configuration utility requires network access to the management ports of the MD3000i. This means you must have a
properly functioning network infrastructure before you attempt to configure your storage array.
• Array Configuration
• Host Configuration
• Update Configuration File
2. If you use another utility such as Modular Disk Storage Manager to modify the iSCSI connectivity settings of your storage array after the initial configuration
process, you can update the configuration files this utility generated with the most recent settings by using the Update Configuration File option.
The Update Configuration File option is a manual method to synchronize the MD3000i configuration file with the physical MD3000i configuration, after the
physical MD3000i configuration has been updated by another tool such as Modular Disk Storage Manager.
4. Automatic discovery will query the local sub-network for all MD3000i storage arrays and may take several minutes to complete.
Manual discovery allows you to locate MD3000i storage arrays that are outside of the local sub-network. Manual discovery requires selecting whether your storage
array has a single controller (simplex) or dual controllers (duplex).
To perform an automatic discovery of storage arrays within the local subnet, choose Automatic.
5. The Storage Array Configuration screen presents a list of all MD3000i storage arrays that were obtained from the discovery process. The Add and Remove buttons
can be used to modify the contents of the list. This is particularly useful to add arrays from multiple configuration files or to remove arrays you are not interested in
configuring.
6. The Storage Array Name and Password screen prompts you to provide a name for your MD3000i storage array and provides the opportunity to specify an array
password. If you are setting up the storage array for the first time, there will not be an existing password.
An uninitialized storage array will have a default name of "Unnamed." Each storage array should be given a unique name; the maximum name length is 30
characters, and names can consist of letters, numbers, and the special characters underscore (_), minus (-), and pound sign (#). No other special characters are permitted.
MD3000i storage arrays can be configured with a password that will be required to modify their settings. For security reasons, a storage array will enter a "lockout"
state for 10 minutes if you fail to provide the correct password for ten attempts. The maximum password length is 30 characters.
7. The iSCSI ports on your storage array support two protocols: IPv4 and IPv6. The management ports support only IPv4. Each protocol may be configured
automatically or manually.
8. There must be a DHCP server available to receive the request and assign IP addresses. If there is not a DHCP server on your network, you will need to configure the
IP addresses manually.
Unlike IPv4, IPv6 has a built-in mechanism for devices to automatically configure themselves. Due to this capability, your storage array will automatically have a
local IP address assigned regardless of your choice for automatic configuration. This local IP address is not routable. Autoconfiguration is a process that allows your
storage array to communicate with the routers on your network to automatically configure routable IP addresses. It is typical for your storage array to
have multiple IPv6 addresses.
9. Each management port that needs to be configured with an IP address will display the IPv4 Port Configuration screen. In addition to a text prompt with the name of
the port being configured, a diagram illustrating the management port is also shown.
You will only be prompted to enter an IP address if you previously chose to manually configure the management port.
10. As you progress through the management ports, the diagram will update to reflect the IP addresses for the ports you have already entered.
11. Each iSCSI port can be configured with an automatic or manual IP address.
12. Controller 0 IN 0 is the first port to be configured.
13. Controller 0 IN 1 is the second port to be configured.
14. Controller 1 IN 0 is the third port to be configured.
15. Controller 1 IN 1 is the fourth port to be configured.
16. The iSCSI ports can be manually configured with IPv6 addresses.
17. If the iSCSI ports are configured with manual IPv6 addresses, the iSCSI Port Configuration screen is displayed twice for every iSCSI port: once for configuring an
IPv4 address and a second time for configuring an IPv6 address.
18. Each port that needs to be configured with an IPv6 address will present the IPv6 Port Configuration screen. The Local IP address field may not be changed. You will
only be prompted to enter an IPv6 address if you previously chose to manually configure the port.
19. CHAP (Challenge Handshake Authentication Protocol) is an optional iSCSI authentication method in which the storage array (target) authenticates iSCSI initiators on
the host servers. Two types of CHAP are supported: target CHAP and mutual CHAP.
In target CHAP, the storage array authenticates all requests for access issued by the iSCSI initiator(s) on the host server(s) via a CHAP secret. To set up target
CHAP authentication, you enter a CHAP secret on the storage array, then configure each iSCSI initiator on the host server(s) to send that secret each time it attempts
to access the storage array. The target CHAP method will be used if you choose to configure CHAP with this utility. The secret must be a minimum of 12 characters
and a maximum of 16 characters, and must use ASCII characters with a decimal value between 32 and 126.
20. Mutual CHAP requires the storage array to send a secret that is different from the target secret back to the iSCSI initiator. This is an advanced configuration option
and will require further setup outside the scope of the MD3000i Configuration Utility.
21. The summary screen provides you with an opportunity to review the settings you specified for the storage array in a consolidated manner before applying them.
After applying the configuration, you will no longer be able to go back and make changes for this array without restarting the process from the Select Storage Array
screen.
22. The Save Configuration screen provides the opportunity to save the settings you specified for the storage array.
This option saves the settings you specified for an MD3000i storage array into a configuration file of your choice. These configuration files are primarily used to
expedite the host configuration process, but they may also be used to restore your storage array's iSCSI configuration.
23. In order to have the configuration for all storage arrays easily available when you configure your hosts, Dell recommends that you save all storage arrays into a
single file by using the Append Configuration to Existing File option on the Save Configuration screen.
Dell also recommends that you save the configuration files generated during this process to a location that can be accessed from all hosts, such as a network share or
removable storage, so the MD3000i Configuration Utility can use the information to assist in the host configuration process.
25. The Configure Host Connectivity screen provides the opportunity to configure host server access to the MD3000i storage array.
26. Once the array configuration has been completed, the Configuration Utility displays the Finished screen.
NOTE:
If the MD3000i Configuration Utility fails to complete the required configuration, the Modular Disk Storage Manager tool can be used to configure the
MD3000i storage array.
Host Configuration
After you have completed configuring the MD3000i storage array, the next task is to run the MD3000i Configuration Utility on all hosts that will need to access the storage
arrays. It is not necessary to have a configuration file generated from the MD3000i configuration process, but having one can simplify the host configuration process.
NOTE:
The option to configure a host will be disabled if the machine the utility is running on does not have an iSCSI initiator or the required driver components installed.
2. The Load Saved Configuration option allows you to load MD3000i configuration information that has been previously saved using this utility.
The Discover New Arrays option allows you to discover storage arrays by connecting to the MD3000i management ports.
The Enter iSCSI Port IP Address option allows you to enter one of the IP addresses of an MD3000i storage array's iSCSI ports, if the host has access to neither
the management ports nor a configuration file.
The configuration file that was saved during the MD3000i array configuration can now be used: choose Load Saved Configuration and point the wizard to the
location of the previously saved .xml configuration file.
3. The Storage Array Connection screen provides the opportunity to select a storage array for the host initiator to connect to.
5. The Storage Array Login screen provides the opportunity to specify which ports on the MD3000i the initiator should log in to.
6. The Connect to Additional Arrays screen provides the opportunity to configure connections to additional MD3000i storage arrays.
7. Once the host configuration has been completed, the Configuration Utility displays the Finished screen.
NOTE:
If the MD3000i Configuration Utility fails to complete the required configuration, the host operating system can be used to configure the host initiator
connection.
Foreground Initialization
The RAID controller module firmware supports full foreground initialization for virtual disks. All access to the virtual disk is blocked during the initialization
process. During initialization, zeros (0x00) are written to every sector of the virtual disk. The virtual disk is available after the initialization is completed without
requiring a RAID controller module restart.
Background Initialization
The RAID controller module executes a background initialization when the virtual disk is created to establish parity, while allowing full host access to the virtual
disks. Background initialization does not run on RAID 0 virtual disks. The background initialization rate is controlled by MD Storage Manager. You must stop an
ongoing background initialization before you change the rate, or the rate change will not take effect. After you stop background initialization and change the rate, the
rate change will take effect when the background initialization restarts automatically.
NOTE:
Unlike initialization of virtual disks, background initialization does not clear data from the physical disks.
Consistency Check
A consistency check verifies the correctness of data in a redundant array (RAID levels 1, 5, and 10). For example, in a system with parity, checking consistency
means computing the parity of the data on the physical disks and comparing the results to the contents of the parity physical disk. A consistency check is similar to a background
initialization. The difference is that background initialization cannot be started or stopped manually, while a consistency check can.
NOTE:
Dell recommends that you run data consistency checks on a redundant array at least once a month. This allows detection and automatic replacement of
unreadable sectors. Finding an unreadable sector during a rebuild of a failed physical disk is a serious problem, since the system does not have the redundancy
to recover the data.
• Unrecovered media error - Data could not be read on the first attempt or on any subsequent attempts. For virtual disks with redundancy protection, data is
reconstructed, rewritten to the physical disk, and verified, and the error is reported to the event log. For virtual disks without redundancy protection (RAID 0
virtual disks and degraded RAID 1 and RAID 5 virtual disks), the error is not corrected but is reported to the event log.
• Recovered media error - Data could not be read by the physical disk on the first attempt but was successfully read on a subsequent attempt. Data is rewritten
to the physical disk and verified, and the error is reported to the event log.
• Redundancy mismatches error - The first ten redundancy mismatches found on the virtual disk are reported to the event log.
• Unfixable error - Data could not be read and parity or redundancy information could not be used to regenerate the data. For example, redundancy information
cannot be used to reconstruct the data on a degraded virtual disk. The error is reported to the event log.
Cycle Time
The media verification operation runs only on selected disk groups, independent of other disk groups. Cycle time is how long it takes to complete verification of the
metadata region of the disk group and all virtual disks in the disk group for which media verification is configured. The next cycle for a disk group starts
automatically when the current cycle completes. You can set the cycle time for a media verification operation between 1 and 30 days. The firmware throttles the
media verification I/O accesses to disks based on the cycle time.
The RAID controller module tracks the cycle for each disk group independent of other disk groups on the controller and creates a checkpoint. If the media
verification operation on a disk group is preempted or blocked by another operation on the disk group, the firmware resumes after the current cycle. If the media
verification process on a disk group is stopped due to a RAID controller module restart, the firmware resumes the process from the last checkpoint.
1. Click the Tools tab, then click the Change Media Scan Settings link.
2. Select the number of days allowed for the media scan to complete in the Scan duration (days) box.
NOTE:
Performing the media scan frequently may negatively impact the performance of other operations. Adjust scan duration based on the performance needs
of your storage array.
3. In the Select virtual disks to scan box, click the virtual disk you want to include in the media scan.
NOTE:
Press <Ctrl> and click to add more than one virtual disk to the media scan. Click the Select All button to include all virtual disks in the media scan.
4. Check the Scan selected virtual disks option box to enable scanning, then choose either the With consistency check or Without consistency check radio
button.
Consistency check enables parity data to be checked during the media scan.
5. Click OK to accept the updated media scan settings.
You cannot perform a media scan while performing another long-running operation on the disk drive such as reconstruction, copy-back, reconfiguration, volume
initialization, or immediate availability formatting. If you want to perform another long-running operation, you should suspend the media scan.
NOTE:
A background media scan is the lowest priority of the long-running operations.
1. Click the Tools tab, then click the Change Media Scan Settings link.
2. Check the Suspend media scan option box.
3. Click OK to suspend media scanning.
When considering a segment-size change, two scenarios illustrate different approaches to the limitations:
• If I/O activity stretches beyond the segment size, you can increase it to reduce the number of disks required to satisfy a single I/O. Using a single physical disk
for a single request frees other disks to service other requests, especially when you have multiple users accessing a database or storage environment.
• If you are using the virtual disk in a single-user, large I/O environment (such as for multimedia application storage), performance can be optimized when a
single I/O request is serviced with a single data stripe (the segment size multiplied by the number of physical disks in the disk group used for data storage). In
this case, multiple disks are used for the same request, but each disk is only accessed once.
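As a sketch, the segment-size change itself can be issued from the command line, assuming the set virtualDisk segmentSize statement and a hypothetical virtual disk named VD1 (the value is the new segment size in KB):

SMcli -n TRNG05A -c "set virtualDisk [\"VD1\"] segmentSize=128;"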
NOTE:
If you try to start a disk group process on a controller that does not have an existing active process, the start attempt will fail if the first virtual disk in the disk
group is owned by the other controller and there is an active process on the other controller.
NOTE:
Setting a high priority level will impact storage array performance. It is not advisable to set priority levels at the maximum level. Priority should also be
assessed in terms of impact to host access and time to complete an operation. For example, the longer a rebuild of a degraded virtual disk takes, the greater the
risk for potential secondary disk failure.
Disk Migration
You can move virtual disks from one array to another without taking the target array offline. However, the disk group being migrated must be offline prior to
performing the disk migration. If the disk group is not offline prior to migration, the source array holding the physical and virtual disks within the disk group will
mark them as missing. However, the disk groups themselves will still be migrated to the target array.
An array can import a virtual disk only if it is in an optimal state. You can move virtual disks that are part of a disk group only if all members of the disk group are
being migrated. The virtual disks automatically become available after the target array has finished importing all the disks in the disk group.
When you migrate a physical disk or a disk group from one MD3000 / MD3000i array to another, the array you migrate to will recognize any data structures and/or
metadata you had in place on the migration source array. However, if you are migrating from another storage array, the MD3000 / MD3000i array will not recognize
the migrating metadata. In this case, the RAID controller will initialize the physical disks and mark them as unconfigured capacity.
• Hot virtual disk migration - Disk migration with the destination storage array power turned on.
• Cold virtual disk migration - Disk migration with the destination storage array power turned off.
NOTE:
To ensure that the migrating disk groups and virtual disks are correctly recognized when the target storage array has an existing physical disk, use hot virtual
disk migration.
NOTE:
Without the delay between physical disk insertions, the storage array can become unstable and manageability is temporarily lost.
Migrating Virtual Disks from Multiple Storage Arrays into a Single Storage Array
When migrating virtual disks from multiple, different storage arrays into a single destination storage array, move all of the physical disks from the same storage
array as a set into the new destination storage array. Ensure that all of the physical disks from a storage array are migrated to the destination storage array before
starting migration from the next storage array.
NOTE:
If the physical disks are not moved as a set to the destination storage array, the newly relocated disk groups might not be accessible.
NOTE:
Disk groups from multiple storage arrays should not be migrated at the same time to a storage array that has no existing physical disks. Use cold virtual disk
migration for the disk groups from one storage array.
Disk Roaming
Moving physical disks within an array is called disk roaming. The RAID controller module automatically recognizes the relocated physical disks and logically
places them in the proper virtual disks that are part of the disk group. Disk roaming is permitted whether the RAID controller module is online or powered off.
NOTE:
The disk group must be offline before moving the physical disks.
Unreadable sectors are detected and reported in these cases:
• A media error is encountered when trying to access a physical disk that is a member of a non-redundant disk group (RAID 0, or degraded RAID 1, RAID 5, or
RAID 10).
• An error is encountered on source disks during rebuild.
NOTE:
Valid data on an unreadable sector is no longer accessible.
Configuration Maintenance
The MD3000 / MD3000i RAID Storage Arrays store system configuration on the physical disks installed in the system. These physical disks are designated for
maintaining a consistent description of the array and all its disk groups and virtual disks, as well as device names and IP addresses.
It is important to understand that saving a configuration is not destructive; however, restoring a configuration can be, so
appropriate care must be taken when restoring a configuration.
MDSM uses a primary (emwdata_v0X.bin) and backup (emwback_v0X.bin) configuration file to store:
• A list of storage arrays and hosts included in the management domain. This list is automatically updated when Automatic Discovery, Add Device, Rescan, or
Remove Device option is performed.
• The name of the mail server you set to forward email alerts to configured destinations.
• The sender's email address you set, which appears on every mail message sent to configured email alert destinations.
• Alert notification destination addresses you set for email and SNMP trap messages about individual storage systems in the management domain.
If these files are deleted, the system will automatically recreate them. When the command is issued to list the current configuration, the command line utility is
reading the emwdata_v0X.bin file. The command line can be used to see the parameters listed above, using the "-d" (display) instruction along with the desired
parameter.
This command will display the array name and the IP addresses of the two controllers: SMcli -d -i. The -i parameter refers to the internet (IP) address. The command
runs in-band against an attached storage array. The output from the command reads the emwdata_v0X.bin file for the requested parameter and displays it in
the screen response.
This is an image of a screen shot showing the command query for the current configuration.
While this configuration information does provide several details of interest regarding the storage array, it is by no means as detailed as the level available in an
actual array configuration file. The configuration saved at the command line contains all the details needed to duplicate the entire structure of the array, except for
the user data stored on the virtual disks.
After you have created a new configuration or if you want to copy an existing configuration for use on other storage arrays, you can save the configuration to a file.
To save the configuration, use the save storageArray configuration command. Saving the configuration creates a script file that you can run on the command line.
The following syntax is the general form of the command:
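Reconstructed from the parameters described below (the file name is a placeholder; verify the exact syntax against the CLI guide for your firmware version), the command resembles:

save storageArray configuration file="filename.cfg" [allConfig | globalSettings=(TRUE | FALSE) virtualDiskConfigAndSettings=(TRUE | FALSE) hostTopology=(TRUE | FALSE) lunMappings=(TRUE | FALSE)];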
Where:
• file - Name of the file that contains the configuration values. You must put quotation marks (" ") around the file name.
• allConfig - Saves all of the configuration values to the file. (If you choose this parameter, all of the configuration parameters are set to TRUE.)
• globalSettings - Saves the global settings to the file. To save the global settings, set this parameter to TRUE. To prevent saving the global settings, set this
parameter to FALSE. The default value is TRUE.
• virtualDiskConfigAndSettings - Saves the virtual disk configuration settings and all of the global settings to the file. To save the virtual disk configuration
and global settings, set this parameter to TRUE. To prevent saving the virtual disk configuration and global settings, set this parameter to FALSE. The default
value is TRUE.
• hostTopology - Saves the host topology to the file. To save the host topology, set this parameter to TRUE. To prevent saving the host topology, set this
parameter to FALSE. The default value is FALSE.
• lunMappings - Saves the LUN mapping to the file. To save the LUN mapping, set this parameter to TRUE. To prevent saving the LUN mapping, set this
parameter to FALSE. The default value is FALSE.
NOTE:
In Windows, a .scr file is assumed to be a screen saver file.
WARNING:
The script engine does not check for the presence of an existing file with the same name as the one specified in the command, and it will overwrite that file without warning.
WARNING:
Because the write zeros flag is on by default, any pre-existing data on the storage array will not be preserved. The recover command can be used in place of the create command; it turns off the write zeros flag, which, in theory, allows the structure to be reloaded without destroying the data. The use of the recover command can be seen in the Procedures area of the site.
The configuration can be loaded onto the array at the command line using the "-f" switch, which tells the system to use the specified file to set the parameters listed in the file. When the system is being instructed to read from a file, the filename is not enclosed in quotes as it is when writing to the file.
NOTE:
When instructing the array to load the configuration file, the file should be directed towards the array name rather than the IP address. When the file is directed
to the IP address of a controller and the controller is being instructed within the file to create a disk group or virtual disk to be owned by the other controller,
the command will fail. When the configuration file is being directed towards the array name the command to build disk groups and virtual disks on both
controllers will succeed. If the configuration must be loaded remotely, change the owner= statement to give ownership of all virtual disk resources to the
RAID Controller that will receive the command. The virtual disks can be reassigned after the configuration has been created.
The command is issued with the following syntax:
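A representative invocation is sketched below; the array name TRNG01A and the file name arrayconfig.scr are placeholders. Per the note above, the command is directed at the array name with -n rather than at a controller IP address:

C:\>SMcli -n TRNG01A -f arrayconfig.scr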
NOTE:
The path to the configuration file is assumed to be the root directory.
Firmware is the operating system that runs on the RAID Controller. It can be upgraded to correct defects or to modify and add features. The NVSRAM is a more static set of values that determines the operating parameters and behavior of the controller. The NVSRAM is expected to be updated less frequently than the firmware.
During the installation, the multi-path driver maintains data access through one RAID Controller module while the other RAID Controller module firmware is
upgraded.
NOTE:
Use MDSM to verify that it lists both RAID Controller modules as optimal. Downloading firmware when either or both controllers are non-optimal will result
in unsynchronized firmware, and the installation will have to be repeated after restoring the RAID controller module(s) to optimal condition.
The firmware installation is available on the Support tab in MDSM. The software displays only those firmware files that are compatible. If one controller is replaced, the assumption is made that the firmware should not be revised without customer intent. Therefore, a replacement controller will be automatically flashed to match the firmware of the remaining controller, regardless of which version is newer. If both controllers are replaced, the firmware of both is set to the newest version available on the two controllers. The RAID Controller module holds dual redundant copies of the firmware in flash memory along with recovery code. The firmware can run from either image, and the images are synchronized except during firmware download.
The firmware can be flashed using the following methods:
• MDSM
• Command Line Interface (SMcli)
• Serial Port (advanced method)
The following table lists the steps required to flash the RAID Controller firmware and NVSRAM through MDSM.
NOTE:
Before you download the RAID controller module firmware or NVSRAM files, ensure that the appropriate multi-path driver is running on the host.
Flashing Firmware and NVSRAM on the Storage Array Through MDSM
1.
The current version of firmware can be seen by clicking on Storage Array Profile on the Summary tab in MDSM.
This is an image of the user selecting the storage array profile link on the summary tab.
2.
Both the firmware version and the NVSRAM version can be seen in the Summary window of the Storage Array Profile.
3.
On the Support tab in MDSM, click Download Firmware.
This is an image of the MDSM Support tab showing the Download Firmware link.
4.
Click Download RAID Controller Module Firmware.
This is an image showing the link to download the RAID controller module firmware.
NOTE:
The application launches a window where the user can browse to the location of the firmware file.
5.
Click the file name, and then click OK.
This is an image of the select file window where the user can navigate to the firmware
file.
6.
Click Transfer to begin the download process.
7.
Click Yes to verify the download.
NOTE:
The firmware installation may take up to 15 minutes.
8.
If the download fails, verify that the RAID controller modules' firmware matches in the RAID Controller Modules tab in the Storage Array Profile.
9.
Click Close to complete the firmware installation procedure.
Only Dell-supported 3.0 Gbps SAS physical disks are supported in the storage enclosure. The RAID firmware looks for specific identifiers in all attached physical
disks to ensure they are valid. If the RAID controller module detects physical disks that are not validated by Dell in the SAS solution, such as an unsupported SAS disk
or a SATA disk, the RAID controller module marks the disk as unsupported and places it in a Not Ready state.
1.
On the Support tab in MDSM, click Download Firmware.
This is an image of the user clicking on the Download Firmware link in the Support screen.
2.
Click Download Physical Disk Firmware.
This is an image of the user clicking on the download physical disk firmware link.
NOTE:
The interface queries the physical disks and shows the versions of physical disk firmware that are present on the system.
This is an image of the package selection screen for physical disk firmware.
NOTE:
Click an individual file to determine whether the selected firmware file is compatible with a physical disk that is present. If the highlighted firmware is not compatible with any installed physical disk, the interface will not let you proceed with that file.
This is an image of the user selecting an inappropriate firmware package in the package
selection window.
NOTE:
After you select an appropriate file, the firmware is installed on all physical disks in the system that require that firmware upgrade. You can then select
another firmware file that is compatible with other physical disk types present until the process is complete.
From the command line, type: C:\>SMcli -n TRNG01A -c "set enclosure [0] serviceTag=\"6MNL42P\";"
Where:
NOTE:
If a password is set on the array, the password switch and password will prepend the command string, as in the following example.
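A sketch of such a command, where MyPassword is a placeholder for the array password:

C:\>SMcli -n TRNG01A -p MyPassword -c "set enclosure [0] serviceTag=\"6MNL42P\";"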
Verify that the command completes successfully. In MDSM, click Tools > Set or Change Enclosure Tags and verify that the correct service tag is shown. If the window is already open, close and reopen it; the new service tag should be seen immediately.
This is an image of a service tag replacement procedure being done using the command line interface.
Diagnosis of a Controller
The diagnose controller command provides three tests that enable you to verify that a RAID controller module is functioning correctly:
• Read test
• Data loopback test
• Write test
The read test initiates a read command as it would be sent over an I/O data path. The read test compares data with a known, specific data pattern, checking for data
integrity and errors. If the read command is unsuccessful or the data compared is not correct, the RAID controller module is considered to be in error and is placed
offline.
Run the data loopback test only on RAID controller modules that have connections between the RAID controller module and the physical disks. The test passes data
through each RAID controller module physical disk-side channel out onto the loop and back again. Enough data is transferred to determine error conditions on the
channel. If the test fails on any channel, this status is saved so that it can be returned if all other tests pass.
The write test initiates a write command as it would be sent over an I/O data path to the diagnostics region on a specified physical disk. This diagnostics region is
then read and compared to a specific data pattern. If the write fails or the data compared is not correct, the RAID controller module is considered to be in error, and
it is failed and placed offline.
For best results, run all three tests at initial installation. Also, run the tests any time you make changes to the storage array or to components connected to the storage
array (such as hubs, switches, and host adapters).
The test results contain a generic, overall status message and a set of specific test results. Each test result contains the following information:
Events are written to the MEL when diagnostics are started and when testing is completed. These events help you to evaluate whether diagnostics testing was
successful or failed and the reason for the failure.
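The general form of the command is sketched below; the controller index, channel options, and test IDs should be verified against the CLI guide for your firmware version, and the pattern file is optional:

diagnose controller [(0 | 1)] loopbackDriveChannel=(allchannels | (1 | 2)) testID=(1 | 2 | 3) [patternFile="filename"];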
Where:
This is an image of a screen shot showing the controller diagnosis using testID=2.
This is an image of a screen shot showing the controller diagnosis using testID=3.
Since host-based multi-path software is configured for automatic failback, once the RAID controller module is replaced, its data paths and virtual disks are restored
to their original RAID controller module.
NOTE:
Do not power off the array enclosure to replace the RAID controller module. The existing controller will update the new RAID controller module's firmware to
match the existing RAID controller module during a hot replacement. If the RAID controller module is replaced with the array powered off, the new controller
could update the existing controller, which potentially will not have the ideal firmware for your environment.
NOTE:
If attached hosts have non-redundant connections to the storage array, offlining a controller will make the disks accessed through that controller unavailable for
those hosts that are attached to that controller. Those hosts should be downed or I/O should be quiesced during this operation so that any write operations
attempted while the controller is offline do not time out.
Complete the following steps to place a RAID controller module offline or online.
1.
On the Support tab in MDSM, click Manage RAID Controller Modules.
2.
In the Manage RAID Controller Modules window, click Place RAID Controller Module Online or Offline.
3.
In the RAID Controller Module drop-down list, select the appropriate RAID Controller to bring offline, and then click OK.
NOTE:
You must select the proper RAID controller in this step. If you take the remaining healthy controller offline in a system where the other RAID Controller is malfunctioning but still online, you may disrupt client access to the system.
This is an image showing the selection of a RAID controller to be brought offline.
4.
Click Yes to confirm the process.
5.
To verify that the controller has been placed offline, return to the Summary tab, and then click Storage Array Needs Attention.
This is an image showing the Status of Storage Array Needs Attention in the Summary window.
NOTE:
The Recovery Guru window opens. Read the additional information here if needed. In the Details window the Service Action (removal) allowed
option indicates Yes.
This is an image of the Recovery Guru window that is activated when clicking on the Storage
Array Needs Attention link.
6.
Disconnect each of the Ethernet cables and the SAS cable (if using enclosure expansion) from the controller, making sure to identify the original position of
each cable.
7.
Remove the RAID Controller Module and insert the replacement controller.
8.
In the Place RAID Controller Module Online or Offline window, click OK to place the controller online.
This is an image of the Place RAID Controller Online or Offline window, where the user can see
the default option of placing the offline controller back online.
9.
Click Yes to continue through the warning that displays the message that placing the controller online makes it available for I/O operations.
10.
Click the Recheck button in the Recovery Guru.
NOTE:
The failure should no longer appear in the Summary area. It may take several minutes for the Storage array status on the Summary tab to return to
normal.
This is an image showing the Storage array status having returned to optimal after the RAID controller has
been placed back online.
11. Stop SMAgent. The net stop smagent command can be used at the command line. After the service has stopped, restart SMAgent with the net start smagent command.
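For example, from a Windows command prompt:

C:\>net stop smagent
C:\>net start smagent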
When the SMcli command completes, the array status will change to Needs attention. Disregard and proceed with these steps.
3.
Insert the redundant RAID Controller into slot 1. Allow the controller to boot, stabilize and show a green status LED.
4. Add the new management connection to MDSM with the following command:
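The exact command is not preserved here; one plausible form, assuming the 07.xx firmware CLI syntax for setting a management port address (the controller and port indexes may need to be adjusted for your configuration), is:

C:\>SMcli 192.168.128.102 -c "set controller [1] ethernetPort [1] ipv4Address=10.37.121.54;"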
Where IP Address 192.168.128.102 is the current IP address of the new controller, and 10.37.121.54 is the desired address.
NOTE:
The controller will boot looking for a DHCP server. If one is not available, then the controller will default to 192.168.128.102 (in the secondary
position). If the IP address of the second controller was set by DHCP but is not known, the password reset (serial) cable can be used to determine the
current IP address, with the netCfgShow command.
5.
Start the Modular Disk Storage Manager (MDSM). Click Add New Storage Array.
6.
Select Manual.
7.
Select Out-of-band management. Enter the IP addresses of the management ports of the two RAID controllers and click Add.
This is an image of the user selecting out-of-band management and entering the IP addresses of the two RAID controllers.
8. NOTE:
The simplex controller does not have cache mirroring enabled, so the user must now enable this attribute for each of the existing Virtual Disks. Any new Virtual Disks created will automatically have this cache attribute enabled.
Using SMcli issue the following command to modify cache mirroring:
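A sketch for a single virtual disk, assuming a virtual disk named VD1 (a placeholder) and the mirrorEnabled parameter name; verify the exact parameter against the CLI guide for your firmware version:

C:\>SMcli -n TRNG01A -c "set virtualDisk [\"VD1\"] mirrorEnabled=TRUE;"

The show allVirtualDisks command can then be used to view the resulting cache attributes.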
The output will look similar to the screen shot. The red arrow notes the name of one of the virtual disks. Also shown, in the red circle, are the read and write
cache attributes. The write cache with mirroring will be enabled.
This is a partial image of the output generated for the show all virtual disks command.
9.
This command must be completed for every virtual disk. After the command completes successfully for each disk, re-run the show allVirtualDisks
command to verify that the cache attributes are enabled for each Virtual Disk. Write cache without batteries will remain disabled.
NOTE:
When the upgrade is complete, the non-volatile static RAM (NVSRAM) version of RAID controller 0 will continue to be reported in simplex mode
even though the RAID controller is actually running in duplex mode. Also, all the existing virtual disks will continue to be owned by controller 0. The
virtual disks will need to be redistributed, with a possible dependency on the type of host connectivity.
MD3000i Utilities
Windows Utilities
• Uninstall MD Storage Manager.exe - The main install executable for removing the application. It is called as part of the standard Add/Remove procedure.
This file is not for direct customer execution.
• Modular Disk Storage Manager Client.exe - The executable that starts the client application. It is linked via the application icon.
• SMclient.bat - The batch file to start the client application. SMclient calls into the .jar file and typically is not for direct customer execution.
• SMcli.exe - The Command Line (CLI) Script utility, which is used to configure and monitor storage arrays by issuing commands from Windows command
prompts or a Linux operating system path.
• asyncRVMUtil.bat - A Command Line utility used to synchronize Asynchronous RVM mirrors. It is not applicable to the MD3000 / MD3000i and is included only for potential future use.
• SMmonitor.exe - The Persistent Monitor Application, which is registered as a Windows Service. It is not for direct customer execution.
• SMsnapassist.exe - The utility used to map a Snapshot to a host. This utility may be the same as SMrepassist.
• SMrepassist.exe - A Windows-only utility used to flush cache prior to a Virtual Disk Copy/Snapshot to a host.
SMrepassist (replication assistance) is a host-based utility. Use this utility before and after you create a virtual disk copy on the Windows operating system.
Running SMrepassist ensures that all the memory-resident data for file systems on the target virtual disk is flushed and that the driver recognizes signatures
and file system partitions.
You also can use this utility to resolve duplicate signature problems for snapshot virtual disks. If you are running a Windows operating system, use the
SMrepassist utility to flush the source virtual disk's cache prior to creating a Snapshot.
SMrepassist is installed in the "util" directory of your host server installation. Run the utility from the command line in a DOS window on a host server
running Windows.
SMrepassist -f <filesystem-identifier>
Where -f flushes all the memory-resident data for the file system indicated by <filesystem-identifier>, and <filesystem-identifier> specifies a unique file
system in this syntax: drive-letter: <mount-point-path>
The file system might include only a drive letter or a drive letter and a mount point.
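For example, to flush the memory-resident data for a file system mounted as drive E (the drive letter and mount point are illustrative):

SMrepassist -f E:
SMrepassist -f E:\mount1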
An error message appears in the command line when the utility cannot distinguish between:
◦ Source virtual disk and snapshot virtual disk (for example, if the snapshot virtual disk has been removed)
◦ Standard virtual disk and virtual disk copy (for example, if the virtual disk copy has been removed)
• SMdevices.bat - Utility that displays the physical devices mapped to the host from the array. Includes information such as virtual disk names and RAID
controller module ownership information. This file calls dsmUtil.exe.
• NTSnapshot.bat - This is similar to SMsnapassist and SMrepassist and should be deprecated for the MD3000.
• SMagent.exe - Agent application that allows in-band management access to the Storage Array. It includes another thread that runs the Host Context Agent
service for topology discovery. SMagent is registered as a Windows Service and is not for direct customer execution.
• dsmUtil.exe - DSM utility to provide debugging and support for the DSM driver, which is part of MPIO. A README will be included with the utility.
• rdacInstall.exe - Utility used to install and remove the DSM, used by the installer to install the DSM. It is called as part of the standard Add/Remove
procedure. This file is not for direct customer execution.
Linux Utilities
• SMagent - Agent application that allows in-band management access to the Storage Array. It is registered as a daemon. It is not for direct customer execution.
• SMmonitor - The Persistent Monitor Application, which is registered as a daemon. It is not for direct customer execution.
• Uninstall_dellmdstoragemanager - Install script for removing the application. It is used by Add/Remove Programs.
• genfileattributes - Script used by the MPP driver during install. It is not for customer execution.
• genuniqueid - Script used by the MPP driver during install. It is not for customer execution.
• hbacheck - Script used by the MPP driver during install. It is not for customer execution.
• mppSupport - Utility used by Support to gather information about the host and the configuration.
• mppUtil - Utility to provide debugging and support for the MPP driver. It is documented by a man page.
• mppBusRescan26 - Utility to rescan for new devices. It is documented by a man page.
• mppMkInitrd - Script used by the MPP driver during install. It is not for customer execution.
Product Positioning
This page identifies the key feature changes in the "SATA Support on the MD3000x" program. All of these changes will be described in more detail later in this
Tech sheet.
Hardware Changes
• Enclosures can now contain SAS, SATA II or a mix of these physical disk types
• Disk groups can contain physical disks of one interface type only
• Hot Spare physical disks will not cross architectures
• Both interface types of physical disks are hot pluggable (SATA II requires interposers)
• Supports redundant path single HBA in Windows for MD3000
• Support for VMware ESX 3.5 & ESX 3.5i from 1st December 2007
• SATA II physical disks are available in 250GB, 500GB, 750GB and 1TB capacities
NOTE:
A full list of supported PowerEdge servers can be found in the Support Matrix at www.dell.com under Products--->PowerVault Storage--->MD3000x
Software Supported
Supported operating systems are as follows:
NOTE:
For exact service pack and patch requirements check the support matrix found at www.dell.com under Products--->PowerVault Storage--->MD3000x
Global Differences
The procedure for a Simplex Controller replacement may be slightly different depending on region.
EMEA/APAC
A customer replacement/upgrade procedure is in place, allowing the customer to run the Resource CD and perform the necessary steps to replace the controller.
Americas
There is no customer replacement procedure in place, and this operation must be performed by a DSP.
Configuration Changes
MD3000 Support of a single SAS HBA with redundant data paths.
In previous versions of the MD3000x implementation, redundant data paths were only available using servers with two SAS HBAs. This latest implementation allows single SAS HBA servers to have redundant data paths, as shown in the diagram below. This is also the case for a single server.
In fact, SATA physical drives are plug-compatible with SAS physical drives, and SAS is able to tunnel SATA control data. It is therefore relatively easy to construct
a backplane which can support both types of physical disks natively.
It is with this in mind that this update provides the MD3000x enclosures with support for two physical disk types: Serial Attached SCSI (SAS) and Serial Advanced
Technology Attachment II (SATA).
To enable this cross-drive architecture in the MD3000x enclosures, a new physical drive caddy is used for the SATA physical disks, with an interposer card situated
between the physical disk and the backplane. This interposer card ensures that the SATA physical disks achieve the correct “porting” to communicate with the SAS
backplane.
NOTE:
Certain physical drive failures may require a replacement of the interposer card only. Detailed instructions on removal of the interposer card can be found
in the MD1000 training material.
As already mentioned, enclosures can now contain a mixture of SAS and SATA II physical disks; however, the user must be aware that a disk group must consist of
only one type. Furthermore, in the case of a failed disk in a disk group, the failed disk must be replaced with a disk of the same type. The management software
provided with the array will not allow creation of RAID storage using mixed drive types. This also means that any enclosures fitted with mixed physical disk types,
and using Hot Spares, require hot spares of both types to provide hot spare coverage for disk groups of either type.
All components in the connectivity chain from the SATA physical disks to the controllers will require a minimum firmware level discussed later in this Tech sheet.
As both SAS and SATA II physical disks are hot-pluggable, you may remove and insert disks without shutting down your enclosure, remembering to follow the rules below.
To ensure that physical disks are safely removed from and inserted into the MD3000x RAID storage arrays, follow these guidelines:
• Wait at least 60 seconds between removing a physical disk and inserting a replacement.
• When you pull a physical disk from a storage array to move it to a different slot, wait 60 seconds before inserting the physical disk into the new slot.
• Wait at least 60 seconds between the removal of physical disks from a storage array.
• Wait at least 60 seconds between the insertion of physical disks into a storage array.
In a large configuration, storage management software may take up to 10 seconds to detect hardware changes.
Firmware Update
To enable SATA support on your RAID Controller Module you must be running a minimum firmware revision on the controller, NVSRAM and the SATA disk
itself. The following steps explain how to update the firmware from both a management station and from a host connecting to a RAID Controller Module.
NOTE:
The storage matrix found at support.dell.com under Products--->PowerVault Storage--->MD3000i in the Manuals section lists minimum supported
firmware revisions and supported SATA physical disks.
The following items are required:
1. The latest Resource CD, found at support.dell.com under Products--->PowerVault Storage--->MD3000x in the Drivers and Downloads section.
2. Latest firmware revisions as required
NOTE:
Firmware supporting the SATA physical disks must be ABOVE:
Storage Array: MD3000x
Controller Module firmware: 06.50.32.60
MD Storage Manager version: 02.50.G6.02
NOTE:
For non-RBOD enclosures, SATA physical disk firmware can be upgraded using the SATA Disk Firmware Utility available from http://support.dell.com.
For RBOD enclosures the upgrade takes place through the MD Storage Manager utility, with the appropriate firmware wrapped for RBOD flashing.
1. A notebook, workstation, or server is required that can be connected to the array via a serial cable.
2. The customer should be made aware of the length of time required to flash and reboot the array if a firmware upgrade or downgrade is required on the new replacement controller (20 minutes).
3. If the unit for service is in an unmanned data centre (where security guards open the doors for the DSP and lock them into the data centre for the duration of the repair):
◦ Tech support should advise the customer of what the DSP is required to do to service the simplex controller.
◦ The tech should have the authority, granted by the customer, to connect a notebook and serial cable and run a CD to perform service.
◦ If the customer does not allow the tech to perform these activities, then the customer needs to have an employee there who is authorized to perform service.
◦ If the customer will not allow the Dell DSP to run the CD in their environment (virus concerns), then the customer should download the ISO image from our website so their systems can scan it for viruses. This must be done in advance of the DSP's arrival.
◦ If the customer refuses to allow outside software or outside notebooks to be connected to their array, the alternative would be CLI with serial shell access. The customer must provide a host or notebook to attach to the array to perform this task. The DSP would escalate to the ORC (Onsite Resource Centre) for step-by-step instructions for CLI commands.
NOTE:
Because of these requirements, tech support agents need to be aware that in their dispatch notes the DSP must be told that access to a notebook is required when servicing a simplex controller.
Connect the serial cable between the controller and COM1 of the computer that will be used to perform the update.
(WARNING!!!!! This controller is not running the correct firmware release. SOD is now suspended. In order to prevent any configuration loss, you must
download x6…. Release)
• Serial cable
• Appropriate Firmware and NVSRAM files:
◦ For MD3000i without SATA support:
■ Firmware version 06.50.32.60 (RC_6503260_1532.dlp)
■ NVSRAM version 650890-906 (N1532-650890-906.dlp)
◦ For MD3000i with SATA support:
■ Firmware version 06.70.10.60 (RC_6701060_1532.dlp)
■ NVSRAM version 670890-901 (N1532-670890-901.dlp)
Connect the serial cable between the controller and COM1 of the computer that will be used to perform the update.
1. Open a Terminal window and type “minicom -s” to open minicom to the configuration menu.
2. Ensure that minicom is set correctly using the following steps:
◦ Use the arrow key to scroll down to “Serial Port Setup”.
◦ Ensure that the serial device is set to the correct COM port (usually ttyS0), BPS/Par/Bits is set to 115200 8N1 and Hardware Flow Control is set to Yes.
◦ Press the return key once you are done making changes.
◦ Save the setup and exit.
◦ Press Ctrl+A T to open the terminal settings.
◦ Ensure that “Terminal Emulation” is set to ANSI.
◦ Press the return key to return to the main screen.
3. Power cycle the MD3000i enclosure.
4. Wait until you see a warning message indicating that the firmware does not match and giving the firmware version to be updated to:
(WARNING!!!!! This controller is not running the correct firmware release. SOD is now suspended. In order to prevent any configuration loss, you must download x6…. Release)
The following menu options are available:
• 1) Display IP Configuration
• 2) Change IP Configuration
• 3) Reset Storage Array (SYMbol) Password
Display IP Configuration
Display IP Configuration displays the current IP configuration of the Ethernet maintenance port for the RAID controller currently attached to the Serial Shell terminal window, as shown in the image below.
Change IP Configuration
Change IP Configuration asks the user a series of questions about the IP configuration; the responses set the IP configuration schema for the maintenance port of the controller currently attached to the serial shell terminal. The image below illustrates the command and its question series.
This image illustrates the series of questions asked during the serial shell change IP
configuration function.
Other situations in which it is a good idea to use the serial shell for troubleshooting mainly involve significant events that occur with a RAID controller. These include, but are not limited to:
• In a duplex configuration, when one of the controllers is being replaced, you should attach the serial cable to the good controller. In doing this, when the replacement controller is installed, you will be able to determine whether ACS (Automatic Code Synchronization) has taken place and successfully completed. Remember, ACS is the ability of a native duplex controller to flash or synchronize its firmware with a new replacement controller.
• When performing a firmware update during normal conditions, it may be helpful to view the serial shell information as the update progresses.
NOTE:
Even though the logs of the updates above can be pulled from the support bundle, the advantage of the serial shell in this case is real-time diagnosis. Also, the logs stored in the support bundle are very large, and it may take some time to identify the section of each log that provides the information you need.
Critical Events
RAID Controller Module Thermal Shutdown
Enclosure management provides a feature which automatically shuts down the operating system, the server, and the enclosure when the temperature within the
storage enclosure reaches dangerous extremes. Thermal shutdown protects the data on the physical disks from corruption in the event of cooling system failure. The
temperature at which shutdown occurs is determined by the enclosure temperature probe's Minimum Failure Threshold and the Maximum Failure Threshold. These
thresholds are default settings that cannot be changed. If the temperature sensors on the backplane or RAID controller module detect a temperature exceeding these
thresholds, the controller issues a thermal shutdown command that shuts down the power supplies for the storage enclosure within five seconds. When the sensors
no longer detect an over-temperature condition, the storage enclosure restarts automatically.
NOTE:
Event reports describe the temperature probes as follows:
NOTE:
While the MD3000 / MD3000i uses the Recovery Guru popup whenever certain classes of events happen, the Major Event Log (MEL) contains a listing
of each alert.
These events can be further researched using DSN to find additional, updated information that may not be available in the Recovery Guru.
Critical Conditions
The storage array will generate a critical event if the RAID controller module detects a critical condition that could cause immediate failure of the enclosure and/or
loss of data. The storage array is in a critical condition if:
When the enclosure is under critical condition, its enclosure status LED blinks amber.
NOTE:
If both RAID controller modules fail simultaneously, the enclosure cannot issue critical or noncritical event alarms for any enclosure component.
Noncritical Conditions
A noncritical condition is an event or status that will not cause immediate failure, but must be corrected to ensure continued reliability of the storage array. Examples
of noncritical events include:
When the enclosure is under non-critical condition, its enclosure status LED shows steady amber.
From MDSM
To add a management console to the list of addresses configured to receive SNMP alerts:
NOTE:
The Management Information Base (MIB) for the storage array is copied to the client directory as part of a Full or Management Station installation selection.
DellMDStorageArray.mib can be compiled on an SNMP Management Console using the interface provided by the console.
1. Click the Tools tab, then click the Set up SNMP Alerts link.
2. Enter the Community name.
3. Enter the trap destination (the hostname of the management console).
4. Click Add to add the management console to the Configured SNMP addresses list.
5. Repeat steps 2 through 4 until you have added all management consoles that should receive SNMP alerts.
6. Click OK.
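An SNMP trap destination can also be added from the command line; a sketch, where the community name, console hostname, and array name are placeholders, and the exact form should be verified against the SMcli usage help:

C:\>SMcli -n TRNG01A -a trap:public,consolehost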
Recovery Guru
The Recovery Guru is a component of MDSM that diagnoses critical events on the storage array and recommends step-by-step recovery procedures for problem
resolution. You can access the Recovery Guru by clicking the Storage Array Needs Attention link on the Summary page or by clicking the Recover from failure
link on the Support page.
Here is a list of links to all of the Recovery Guru screens and the problems each addresses:
Offline Disk Group: The disk group was placed Offline. This could be caused by:
• You manually placed the disk group Offline in preparation for moving the physical disks to another storage array (disk group relocation). If this is the reason, do not follow these recovery steps. Instead, follow the instructions in the appropriate documentation for moving physical disks.
• You placed the disk group offline from the Command Prompt.
• You were performing a disk group relocation operation and inserted all of the physical disks in the destination storage array but the disk group is still showing an Offline status.
Offline RAID Controller Module: The RAID controller module was placed Offline. This could be caused by:
• The RAID controller module failed a diagnostic test and was automatically placed Offline. The diagnostics are initiated internally by the RAID controller module.
• The RAID controller module was manually placed Offline.
Physical Disk - Loss of Path Redundancy: A communication path with a physical disk has been lost.
Power Supply/Cooling Fan Module - No Power Input: Power to one of the power supply/cooling fan modules has been lost.
RAID Controller Module Board Identifier Cannot Be Determined: The RAID controller module listed in the Details area cannot determine the board identifier for the alternate RAID controller module in this storage array.
RAID Controller Module In Service Mode: The RAID controller module was manually placed in Service mode for diagnostic or recovery reasons at the instruction of a technician.
RAID Controller Module Memory Parity Error: A memory parity error was detected on a RAID controller module in this storage array.
RAID Controller Module Mismatch: One of the RAID controller modules in this storage array is incompatible with the other RAID controller module.
RAID Controller Module Miswire: A RAID controller module detected an unsupported cabling configuration.
RAID Controller Submodel Not Supported: The submodel number associated with this RAID controller module was not set at the factory or it is not supported by the RAID controller module firmware.
Removed Battery: The battery in the RAID controller module is not present, probably because a new RAID controller module was inserted without a battery or the battery was not properly connected.
Removed EMM Module: An EMM module has either been removed from the expansion enclosure or is present but is not seated properly.
Removed Fan Module: A fan has been removed from the enclosure.
Removed Power Supply/Cooling Fan Module: A power supply/cooling fan module has been removed from an enclosure in the storage array.
SAS Device Limit Exceeded: The storage array contains more than the allowed number of total expansion enclosures.
SAS Device Miswire: The storage array contains an improper connection from the source enclosure to the expansion enclosure(s).
Snapshot Repository Virtual Disk Capacity - Full: All of the repository virtual disk's available capacity has been used.
Snapshot Repository Virtual Disk Capacity - Threshold Exceeded: A snapshot repository virtual disk capacity exceeded a warning threshold level. If the repository virtual disk's capacity becomes full, its associated snapshot virtual disk can fail.
Snapshot Virtual Disk Feature - Out Of Compliance: The Snapshot Virtual Disk feature is out of compliance. This normally occurs if:
• You migrated physical disks with Snapshot Virtual Disk feature data into a storage array and the feature is not enabled.
• You migrated physical disks into a storage array already containing snapshot virtual disks and the addition of these snapshot virtual disks now exceeds the total number of snapshot virtual disks allowed.
• You disabled the Snapshot Virtual Disk feature on a storage array where the feature is supported and you have existing snapshot virtual disks present.
Storage Array Component - Loss of Communication: One or both of the RAID controller modules in this storage array cannot communicate with the storage array component listed in the Details area.
Uncertified Physical Disk: An unsupported physical disk has been inserted into the storage array.
Uncertified EMM: The storage array currently contains an expansion enclosure in which one or both of the EMMs are uncertified.
Unknown Failure Type: The failure type cannot be determined.
Unreadable Sectors Detected: Unreadable sectors have been detected on one or more virtual disks.
Unreadable Sectors Log Full: The Unreadable Sectors log has been filled to its maximum capacity.
Unrecovered Interrupted Write: The process of recovering from an interrupted write has failed. An interrupted write occurs when power fails or a RAID controller module resets while there is data still in cache.
Unsupported Expansion Enclosure: Your storage array contains one or more unsupported expansion enclosures.
Virtual Disk Copy Operation Failed: There was either a read error from this copy pair's source virtual disk or a write error to this copy pair's target virtual disk that caused the copy operation to fail.
Virtual Disk Not On Preferred Path: There is a problem accessing the RAID controller module listed in the Recovery Guru Details Area. Any virtual disks that have this RAID controller module assigned as their preferred path will be moved to the non-preferred path (alternate RAID controller module). This procedure will help you pinpoint the problem along the data path. Because the virtual disks will be moved to the alternate RAID controller module, they should still be accessible; therefore, no action is required on the individual virtual disks. Possible causes include:
• The RAID controller module failed a manually initiated diagnostic test and was placed Offline.
• The RAID controller module was manually placed Offline or is in Service Mode.
• There are disconnected or faulty cables.
• A host adapter failed.
• The storage array contains a defective RAID controller module.
Controller Troubleshooting
This document is meant to be used as a guide to troubleshooting the most common controller issues. The ideal way to do this would be to learn this document thoroughly from beginning to end. However, that scenario will probably not pan out. The second-best method would be to use this document in a two-pronged approach:
1. Read and understand the “How to” sections of the document. These sections will help you understand some useful troubleshooting skills.
2. A quick read through of the remaining document will give you an understanding of the most common issues and the troubleshooting techniques for them.
The document can then be used as a quick reference guide for troubleshooting. Searching for specific Recovery Guru Events should lead you to the appropriate
sections. Engineering has provided some additional details where appropriate and these are shown in RED.
Power Issues
Watch for possible power issues based on the indications described in the following table.
Power Issues
Problem:
• Enclosure-status indicators show a problem
• Power-supply fault indicators are lit
Action:
1. Confirm that at least two drives are present in the enclosure; a minimum of two drives must be installed.
2. Check the fault LEDs on the rear of the power supply to locate the faulty power supply.
3. Turn off the enclosure and attached peripherals, and disconnect the enclosure from the electrical outlet.
4. Ensure that the power supply is properly installed by removing and reinstalling it.
5. If the problem is resolved, skip the rest of this procedure. If the problem persists, remove the faulty power supply.
6. Install a new power supply.
CAUTION:
Power supply/cooling fan modules are hot-pluggable. The enclosure can operate on a single functioning power supply/cooling fan module; however, both
modules must be installed to ensure proper cooling. The fans in both supplies are powered by a common 5 volt bus. The fans in a failed power supply will
continue to operate as long as the power supply remains seated in the enclosure. A single power supply/cooling fan module can be removed, if necessary, from
the enclosure for up to five minutes, provided that the other module is functioning properly. After five minutes, the enclosure will overheat and may cause an
automatic thermal shutdown.
NOTE:
After installing a power supply/cooling fan module, allow several seconds for the enclosure to recognize the power supply and determine whether it is working
properly.
Cooling Issues
Watch for possible cooling issues based on the indications described in the following table.
Cooling Issues
Problem:
• MDSM issues a fan-related error message
Action:
Ensure that none of the following conditions exist:
Watch for possible fan issues based on the indications described in the following table.
Troubleshooting a Fan
Problem:
• Enclosure-status indicator is amber
• MDSM issues a fan-related error message
Action:
• Check event logs for fan-related issues
• Locate the malfunctioning fan
Log Reading
On this page you will find a presentation on MD3000/MD3000i log reading. Click on this link to download the zip file for the presentation. Once the file is
downloaded, extract the executable file and run it to view the presentation.
Recovery Guru
The text found on the Recovery Guru pages is meant to provide a point of reference and to help the support technician know what the embedded MDSM Recovery Guru information will report to the customer.
The Recovery Guru pages are meant simply as a reference to assist in knowing what the Recovery Guru will report and are not meant as a replacement for troubleshooting (although they will vastly help). To view them, navigate to the MD3000 / MD3000i Recovery Guru training pages.
1. Solution ID #74083 - MD3000 faulty DIMM on controller caused slow performance, controller panic and loss of management.
2. Solution ID #74371 - How to troubleshoot an MD3000 or MD3000i RAID controller that is in a reboot loop or panic condition.
3. Solution ID #74401 - How to change the management port settings on an MD3000/MD3000i using a serial connection.
To access the information in these Delta solutions, please navigate to kcs.dell.com, and search for the specific solution ID number.
1. Procedure 1 Installation of the Management Software, installation and configuration of the iSCSI initiator, along with management and discovery
of storage -
The following procedures show the steps necessary to install the Modular Disk Storage Manager (MDSM) software, install and configure the iSCSI initiator,
discover the MD3000i storage array and add hosts with Windows Server, Red Hat Linux, and SUSE Linux.
◦ MDSM installation and storage discovery Reference
◦ iSCSI host configuration for Windows (using IPv4) Reference
◦ iSCSI host configuration for Windows (using IPv6 on Windows 2008) Reference
◦ iSCSI host configuration for Red Hat Enterprise Linux Reference
2. Procedure 2 Reference Name the array, assign controller IP addresses, and synchronize the system clocks - This procedure shows the steps necessary
to name the array, assign IP addresses to the controllers, and synchronize the system clocks. The steps are also shown for adding SMcli.exe to the path
statement for ease of use at the command line.
3. Procedure 3 Reference Creation of Disk Groups and Virtual Disks - These procedures show the steps necessary to create Disk Groups and several types
of Virtual Disks. The procedures are identical for both the MD3000 and the MD3000i. Also shown are the steps for creating virtual disk slices on a disk group.
If the simulator does not start, be sure that the Demo Configuration Editor is not running and that no other simulator windows are running.
After the simulator has started, the user may wish to interact with it through the Command Line Interface. To start the CLI, run SMcliDell-
SimBP_Demo__DualPortpkg.bat in a terminal window after the Simulator has loaded. To connect, the command syntax would follow this structure: SMcliDell-
SimBP_Demo__DualPortpkg.bat localhost.
• Simulator Zipfiles
Expand the files into a new directory folder and choose the appropriate simulator by selecting a .bat file.
Dell Engineering has recorded some Microsoft LiveMeeting sessions focusing on the following tools:
• PowerVault MD Utility
• PowerVault MD Analyser
NOTE:
The Knowledge Check for the PowerVault MD3000i is the same as the Knowledge Check for the PowerVault MD3000. If you have completed and passed the
Knowledge Check for the PowerVault MD3000, the Dell Training tool will report that you have successfully taken this Knowledge Check and prevent you
from taking it again here.
NOTE:
The Knowledge Check for the PowerVault MD3000 is the same as the Knowledge Check for the PowerVault MD3000i. If you have completed and passed the
Knowledge Check for the PowerVault MD3000i, the Dell Training tool will report that you have successfully taken this Knowledge Check and prevent you
from taking it again here.
Pre-requisites
This video assumes that you have met the following prerequisites:
If you need to refresh your knowledge on these prerequisites, you may wish to visit the following training links:
• iSCSI Fundamentals
• Networking Fundamentals
Video Content
This video contains the following information covering installation procedures on the following topics/items:
Use the following link to download the Adobe™ Captivate™ training module video: MD3000i_Windows.zip
To view the video, download the file, uncompress it on a local hard drive, and run the executable (i.e., MD3000i_Windows.exe).
Pre-requisites
This video assumes that you have met the following prerequisites:
If you need to refresh your knowledge on these prerequisites, you may wish to visit the following training links:
• iSCSI Fundamentals
• Networking Fundamentals
Video Content
Click the image below to start the video in a new web browser window.
Use the following link to download the Adobe™ Captivate™ training module video: MD3000i_RHEL5.zip
To view the video, download the file, uncompress it on a local hard drive, and run the executable (i.e., MD3000i_RHEL5.exe).
Pre-requisites
This video assumes that you have met the following prerequisites:
If you need to refresh your knowledge on these prerequisites, you may wish to visit the following training links:
• iSCSI Fundamentals
• Networking Fundamentals
Video Content
Click the image below to start the video in a new web browser window.
Use the following link to download the Adobe™ Captivate™ training module video: MD3000i_VMware.zip
To view the video, download the file, uncompress it on a local hard drive, and run the executable (i.e., MD3000i_VMware.exe).
PowerPoint Presentations
The following documentation is used in a classroom environment.
• Presentation 1 - MD3000i Overview - This presentation shows the marketing position and features of the MD3000i.
• Presentation 2 - MD3000i - iSCSI - This presentation reviews essential information regarding the use and deployment of iSCSI.
• Presentation 3 - MD3000i Hardware - This presentation shows the hardware FRUs and describes their properties.
• Presentation 4 - MD3000i and MDSM Highlights and Deltas - This presentation describes the highlights and deltas between the MD3000 and the MD3000i.
Classroom Setup
Introduction
The MD3000i classroom training is designed to give the student a general understanding of how to setup, configure, and troubleshoot an MD3000i Storage Array.
Classroom Schedule
The training can be performed in any classroom that will accommodate the number of racked servers and storage arrays needed to allow a maximum of two students per array.
Total time for this class setup is 6 to 8 hours: the servers need to be imaged, Ethernet cabling needs to be organized, additional Ethernet switches installed as needed, and any remaining configuration on the arrays cleared.
Objectives
The following is a list of terminal objectives for the class. This list can be printed out and handed to the students as an overall list of objectives for the class:
Prerequisites
• Working knowledge of Microsoft Windows Server 2003®.
• Working knowledge of iSCSI
Materials Required
The following materials are required for each two-person station:
Course Preparation
Software Configuration
Nodes have the Windows 2003 operating system installed. The students will load the software as the labs progress.
Cabling Diagram
The cabling shall be accomplished in accordance with the supported cabling configurations outlined and specified in the Training pages.
IP Addressing Scheme
IP addresses may be hard coded or DHCP may be used for all Ethernet ports other than the one used for RDP access. Hosts and storage arrays should be configured
to be on the same subnet for out-of-band discovery by management hosts.
Total time for the basics class is 16 hours.
Class Overview
Introductions
Total time for introductions is about 30 minutes.
Introduce yourself and then go around the class having the students introduce themselves.
You may want to have the students state their level of enterprise experience. You may also ask the students to state what they would like to get out of the class.
This information will help you as the instructor understand the class expectations as well as the students' level of experience. Additionally, this will break some of the tension in the classroom and allow the students to get to know one another.
Class Overview
Total time for the class overview is about 10 minutes.
Provide the class with a quick overview of what the one-day class will cover.
Class Overview
• Instructor and class participant introductions
• MD3000 Advancement Overview
• MD3000 Advancement presentation with labs.
Ask the students to gather around a lab station and explain what hardware makes up each station and how it is connected together.
List of items at each station (hosts and storage are remotely mounted):
• 2 PE1950 servers.
• One MD3000 array.
Class Outline
All of the times listed are the total amount of time needed for each module.
Day One
Start with the overview PowerPoint.
Module One
Presentation (4 hours):
PowerPoint Presentations:
• MD3000-01-Overview.ppt
• MD3000-02-Hardware.ppt
• MD3000-03-Architecture.ppt
• MD3000-04-MDSM.ppt
Lab(s) (4 hours):
• Install Microsoft STORport hotfix
• Load driver for SAS5/E
• Install MDSM software
• Discover the array out of band using netCfgShow
• Name the Array
• Change IP addresses if needed
• Create Disk Groups and Virtual Disks
• Assign VD ownership to host
• Scan for new LUNs on host and format as drives
Day Two
Module Two
Presentation: None (N/A)
Lab(s) (8 hours):
• Reset a lost administrator password
• Gather the support files
• Flash the controller Firmware and NVSRAM
• Save the array configuration to a file
• Edit config file, creating an additional LUN
Useful Documents
iSCSI SAN Best Practice Documentation
Click on the link below:
IP SAN Best Practice White Paper.
Date: 2008-11-27 Owner: Mark Smith Page: iSCSI best practice ref doc added
Requested By: David Spencer Reviewed By: Approved By: David Spencer
Changes: Added Useful Documents page
Date: 2008-11-12 Owner: Brian Urbanek Page: Second Generation Section and Virtualization Sections added
Requested By: David Spencer Reviewed By: Approved By: David Spencer
Changes: Added several pages in the Second Generation Section, and the Virtualization Section was added.
Date: 2008-02-28 Owner: Mark Smith Page: IPV6 support added. Windows 2008 and revised Linux support added
Requested By: Becky Wright Reviewed By: Jason Groce Approved By: David Spencer
Changes: Added IPV6 iSCSI setup and configuration. Added Windows 2008, revised Linux and revised clustering support.