
White Paper: Enterprise Solutions

Veritas Storage Foundation 5.0 for Windows: Best Practices for Storage Management

Contents

Overview
Introduction
Storage Design Considerations
    Disk Basics
    Track Alignment
    Allocation Unit Size
    Format Type
    Striping
    Mirroring
    Dirty Region Logging (DRL)
    Veritas FastResync
    Dynamic Multi-pathing
Windows Storage Management Best Practices
    Disk Groups
    Track Alignment
    Disk Formatting
    Striping
    Mirroring
    Dirty Region Logging (DRL)
    Veritas FastResync
    Veritas Dynamic Multi-pathing
    Clustering
Summary

Overview
This document discusses storage design best practices for Microsoft Windows based servers that utilize the Veritas Storage Foundation for Windows (SFW) solution for storage management. This paper will explain basic storage design principles for the Windows environment, as well as recommended storage practices for Veritas Storage Foundation for Windows.

Introduction
Veritas Storage Foundation for Windows by Symantec dramatically increases the amount of time that users have access to data by reducing both planned and unplanned downtime. Traditional disk storage management is labor intensive, often requiring systems to be taken offline for hours at a time, preventing users from accessing data and requiring tedious manual intervention by system administrators. Veritas Storage Foundation for Windows overcomes these obstacles by providing easy-to-use online disk storage management for mission-critical Windows environments in the enterprise. Veritas Storage Foundation enables high availability of data, optimizes storage I/O performance, and protects current storage investments while allowing freedom of choice for hardware in the future. Although Veritas Storage Foundation for Windows is easy to install and run, some features common to enterprise environments require careful consideration. This document discusses those features and brings best practices to the deployment of the product.

Storage Design Considerations


Disk Basics
An understanding of the physical disk level will help to clarify the design considerations.
Figure 1. Tracks, sectors, platters, and a cluster of four sectors on a hard disk.

A hard disk is divided into areas called tracks, sectors, and cylinders. A track is a circular ring on one side of a disk. Sections within each track are called sectors. A sector is the smallest physical unit on a disk, typically holding 512 bytes of data. A track sector is the area of intersection of a track and a sector, while a disk sector is a wedge-shaped piece of the disk. A cylinder is the set of all matching tracks at a given radius on a disk with multiple recording surfaces.

A cluster is a set of track sectors, its size depending on the formatting scheme in use. One cluster is the minimum space used by any read or write: the cluster (or allocation unit) size represents the smallest amount of disk space allocated to hold a file. All file systems used by Windows organize the hard disk based upon cluster size. When the file size does not come out to an even multiple of the cluster size, extra space must be used to hold the file (up to the next multiple of the cluster size). If no cluster size is specified during format, the file system picks a default based upon the size of the partition.

Track Alignment
If basic partition(s) or dynamic volume(s) are created on a disk that is not track aligned, an I/O operation may cross, or straddle, disk track boundaries. When an I/O operation straddles a track boundary, it can consume extra resources or cause additional work in the storage array, leading to performance loss. Microsoft's diskpar or diskpart utilities should be used to create track-aligned basic partition(s) to improve performance. However, these utilities do not work for dynamic volumes. Veritas Storage Foundation for Windows includes automated track alignment capabilities, which are explained in a later section.

The most important data structure on the disk is the Master Boot Record (MBR), which resides on the first sector of the disk. The MBR contains the boot loader and the partition table. The partition table maintains starting and ending sector values, which in Windows are only 6 bits in length. Their maximum value is therefore 63, due to this limited number of bits and the fact that sector enumeration begins at 1, not 0. It may be easier to think of a disk as a sequence of blocks (rather than sectors) starting from address zero and incrementing until the end of the disk; note that block enumeration begins at 0, not 1.

Windows creates partitions on cylinder boundaries and, by default, allocates the first 63 sectors as hidden sectors. On a physical disk that maintains 64 sectors per track, Windows therefore always creates the partition starting at the 64th sector (block address 63), which misaligns it with the underlying physical disk and results in serious performance degradation. With the Windows default partition location (block 63), an I/O of 4,096 bytes (8 sectors/blocks) starting at the beginning of the partition writes one block to the last block of the first track and seven blocks to the start of the second track. The I/O therefore straddles the first and second tracks, which requires the storage array to reserve two cache slots for the data and to perform two flush I/O operations to the disk, impacting performance.
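For basic disks, an aligned partition can be created from the command line. The following is a minimal sketch that assumes the 64-sectors-per-track (32-KB) geometry used in this example and a diskpart build that supports the align parameter (available beginning with Windows Server 2003 SP1); the disk number is purely illustrative:

    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=32

The align value is specified in kilobytes, so align=32 (or a multiple of it, such as 64 or 1024) starts the partition on a track boundary for this geometry. Remember that neither diskpar nor diskpart aligns dynamic volumes; those rely on the automated track alignment feature of SFW described later.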

Figure 2. I/O straddling with a misaligned partition: the MBR and hidden sectors push the start of Partition #1 off the track boundary, shown across three 64-sector tracks.

For illustration purposes, in Figure 2 the partition is broken into 4-KB blocks and arranged in sequence with two different colors. It clearly shows that one of every 8 blocks would straddle a track boundary. Suppose a partition is created along the track boundary of the underlying disk. The partition layout in the physical disk would be as depicted in Figure 3. In this case, there are no 4-KB blocks straddling a track boundary.

Figure 3. Avoiding I/O straddling with an aligned partition: Partition #1 begins on a track boundary across the same three 64-sector tracks.

According to performance analysis information, I/O to a misaligned partition in a storage area network (SAN) with 64 sectors per track would result in the following:
- Any I/O of 32 KB or larger will always cause a track boundary crossing.
- Any random I/O of 16 KB will cause a boundary crossing 50 percent of the time.
- Any random I/O of 8 KB will cause a boundary crossing 25 percent of the time.
- Any random I/O of 4 KB will cause a boundary crossing 12.5 percent of the time.
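As a check on the 4-KB figure: with 64 sectors per track, a 4-KB (8-sector) request that is aligned to its own size can begin at 64 / 8 = 8 distinct positions within a track, and exactly one of those positions spills over into the next track, giving 1/8 = 12.5 percent. The same reasoning gives 1/4 = 25 percent for 8-KB requests, 1/2 = 50 percent for 16-KB requests, and a guaranteed crossing once the request reaches the full 32-KB track size.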

Allocation Unit Size


NTFS uses clusters as its fundamental unit of disk allocation. A cluster consists of a fixed number of disk sectors. When you use the Format command or Disk Administrator, clusters are referred to as allocation units. In NTFS, the default allocation unit size depends on the volume size. A variety of allocation unit sizes can be set for a given volume when formatting it with the Format command from the command line or through the SFW wizard.

Before you set up a RAID array or new standalone disks, determine the size of the average disk transfer on your disk subsystem and set the allocation unit size to match it as closely as possible. By matching the allocation unit size with the amount of data that you typically transfer to and from the disk, you incur lower disk subsystem overhead and gain better overall performance. To determine the size of your average disk transfer, use Performance Monitor to review two counters under the LogicalDisk object: Avg. Disk Bytes/Read, which measures the average number of bytes transferred from the disk during read operations, and Avg. Disk Bytes/Write, which measures the average number of bytes transferred to the disk during write operations.
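As a sketch of how these counters might be captured from the command line (the drive letter, sampling interval, and sample count below are purely illustrative), the built-in typeperf utility can log both LogicalDisk counters to a file for later review:

    typeperf "\LogicalDisk(E:)\Avg. Disk Bytes/Read" "\LogicalDisk(E:)\Avg. Disk Bytes/Write" -si 15 -sc 240 -o transfer_sizes.csv

This samples both counters every 15 seconds for one hour and writes the results to transfer_sizes.csv; the same counters can also be watched interactively in Performance Monitor.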

Default Cluster Size


As noted earlier, all file systems used by Windows organize the hard disk based upon cluster (or allocation unit) size, which represents the smallest amount of disk space that can be allocated to hold a file. When file sizes do not come out to an even multiple of the cluster size, extra space must be used to hold the file (up to the next multiple of the cluster size). On a typical partition, this means that roughly (cluster size / 2) x (number of files) worth of space is lost. If no cluster size is specified during format, NTFS chooses a default based upon the size of the partition. These defaults have been selected to reduce the amount of space lost and to reduce the amount of fragmentation on the partition.

Table 1 lists the default values used by Windows 2000 and Windows Server 2003 when a volume is formatted to NTFS with the Format command from the command line without specifying a cluster size, or when formatting a volume from Windows Explorer with the Allocation Unit box in the Format dialog set to Default Allocation Size.
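For example, under this approximation, a volume holding 100,000 files and formatted with 4-KB clusters loses roughly (4 KB / 2) x 100,000, or about 200 MB, to partially filled final clusters; the same files on 64-KB clusters would lose roughly 3.2 GB.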

Table 1. Default values.

Volume size            Default NTFS cluster size (Windows 2000 and 2003)
7 MB - 512 MB          512 bytes
513 MB - 1,024 MB      1 KB
1,025 MB - 2 GB        2 KB
2 GB - 2 TB            4 KB
Please see the following Microsoft Knowledge Base articles for more information: Windows Server 2003: KB314878 (http://support.microsoft.com/default.aspx/kb/314878/) Windows 2000: KB140365 (http://support.microsoft.com/default.aspx/kb/140365/)

Format Type
When a regular format of a volume occurs, files are removed from the volume that you are formatting and the hard disk is scanned for bad sectors. The scan for bad sectors is responsible for the majority of the time that it takes to format a volume. The Quick format option removes files from the partition, but does not scan the disk for bad sectors. Only use this option if your hard disk has been formatted before and you are sure that it is not damaged. If the volume has been Quick formatted, chkdsk /r can be used to validate the volume.
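As an illustration (the drive letter and volume label are placeholders), a volume can be quick formatted to NTFS from the command line and later validated with chkdsk:

    format E: /FS:NTFS /V:AppData /Q
    chkdsk E: /r

The /Q switch skips the bad-sector scan during the format; chkdsk /r locates bad sectors and recovers readable information, which is why it is the appropriate follow-up check for a quick-formatted volume.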

Striping
Striping is a method for increasing performance, although care is required. Striping alone does not produce redundancy; redundancy is the job of mirroring or parity (RAID 1, RAID 5, and so on). Disks are normally protected by hardware parity in the SAN; therefore, it is important to consider the RAID level within the SAN. The golden rule in using stripes to increase performance is never stripe on a stripe. This means that if the LUNs in the SAN are being carved from a RAID5 stripe, there is no performance gain from striping at the level of Veritas Storage Foundation for Windows. The exception is when each LUN in the SFW RAID0 stripe comes from a different RAID5 group. See Figure 4.

Figure 4. LUN56, LUN112, and LUN168 are used to create a SFW striped 27GB volume. Each of the three RAID5 groups consists of eight 72-GB physical disks presented as a 504-GB RAID5 disk, which the SAN software carves into 56 x 9-GB LUNs; one 9-GB LUN from each group is then striped by SFW into a three-column RAID0 volume.

Striping offers no performance gain if the LUNs come from the same RAID5 group. In fact, striping on a stripe will cause degradation of performance. In this case, using a concatenated volume in Veritas Storage Foundation for Windows is recommended.

Figure 5. Stripe on stripe: three 9-GB LUNs carved from the same 504-GB RAID5 group (eight 72-GB physical disks) are striped into a 27-GB RAID0 SFW volume.

Performance I/O
I/O intensive applications tend to be limited by how fast a disk can execute I/O requests. For example, if a disk takes 10 milliseconds to rotate, find, and transfer a request (in the case of Exchange, 4,096 bytes) we are talking about 100 requests per second per spindle or column. The subtlety of striped volumes for I/O intensive applications is that striping does not improve the execution time of any single request. It improves the average response time of a larger number of concurrent requests by increasing the disk resource utilization, thereby reducing the average time that a request waits for the previous one to finish executing. Data striping only improves performance if requests overlap in time. As multiple requests are made for Exchange, for example, there is an overlap.

The graph in Figure 6 represents typical disk response time versus I/O requests per second. (This has probably increased somewhat as technology improves.)
Figure 6. Typical disk response time (in milliseconds) versus request rate (I/Os per second).

As the number of requests to the disk increases, the response time also increases, climbing steeply as the disk approaches saturation. Disk queuing causes this behavior, and it can only be mitigated, not avoided: any disk can service only a limited number of I/Os, and I/O queues accumulate after a disk reaches that limit. Also, larger disks are not proportionally faster; for example, it is unrealistic to expect a 50-GB disk to process more than 70 I/O requests per second. Over time, disks might spin faster, get denser, and hold more data, but they can still only serve I/O at a set rate, and that rate is not increasing at present.

Stripe Unit Size


Data is written to the volume in volume blocks of 512 bytes. Microsoft Exchange Server 2003 issues I/O in units of 4,096 bytes (8 blocks); Microsoft SQL Server 2005 uses 8,192 bytes (16 blocks). Each subdisk making up the volume is called a column. The number of consecutive volume blocks written to a column before moving to the next is the stripe unit, which is constant for a given striped volume; typical stripe unit sizes are between 50 and 200 blocks. The stripe unit size multiplied by the number of columns (that is, the disks in the volume) is the stripe size, also called a row. With these two figures, Veritas Storage Foundation for Windows can translate a volume block number into its physical block location.

The stripe unit size, the run of consecutive volume blocks written to one column (the column being the physical LUN within the volume), is variable in SFW and can be optimized for the server application. Microsoft Exchange Server 2003 is I/O intensive, transferring relatively small amounts of data (4,096 bytes); the I/O request time is therefore dominated by disk motion (seek and rotational latency) rather than data transfer. Such highly intensive I/O applications usually have multiple I/O requests outstanding simultaneously, so it is preferable that each request be satisfied completely by one disk, leaving as many other disks as possible free to serve other requests.

Database applications allocate blocks in volume address space, and SFW maps these volume blocks to disk blocks. The overlay of these two mappings makes it difficult to guarantee that a request will never be split across two disks, but if the stripe unit size is sufficiently large, the probability of a split will be small.
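In general, for a request of B blocks issued against a stripe unit of U blocks (B no larger than U), the request straddles two columns whenever it begins in the last B - 1 blocks of a stripe unit, so with uniformly distributed starting blocks the probability of a split is (B - 1) / U. Table 2 tabulates this for the Exchange (8-block) and SQL Server (16-block) request sizes, as well as for 128-block (64-KB) requests, across a range of stripe unit sizes.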

Table 2. Stripe size statistics.

Stripe unit size   Stripe unit size   I/O produced   I/O produced   Starting blocks    Probability of     Probability served by
(sectors)          (bytes)            (bytes)        (sectors)      causing a split    split (%)          a single disk (%)
128                64 K               4,096          8              7                  7/128 = 5.5        94.5
256                128 K              4,096          8              7                  7/256 = 2.7        97.3
512                256 K              4,096          8              7                  7/512 = 1.4        98.6
1,024              512 K              4,096          8              7                  7/1,024 = 0.7      99.3
128                64 K               8,192          16             15                 15/128 = 11.7      88.3
256                128 K              8,192          16             15                 15/256 = 5.9       94.1
512                256 K              8,192          16             15                 15/512 = 2.9       97.1
1,024              512 K              65,536         128            127                127/1,024 = 12.4   87.6
2,048              1 MB               65,536         128            127                127/2,048 = 6.2    93.8
4,096              2 MB               65,536         128            127                127/4,096 = 3      97
8,192              4 MB               65,536         128            127                127/8,192 = 1.55   98.5

Note: 512 bytes = 1 sector.

The Microsoft Exchange Server 2003 application produces data in 4,096 bytes (8 blocks on the disk). With a stripe unit size of 128K bytes (256 blocks on the disk), this will mean there are 256 possible starting blocks for the I/O request. Seven of these possible starting points will lead to the splitting of the 8-block data request over two disks. Therefore, the percentage of split is 7/256, or 2.7 percent. This means 97.3 percent of the write operations are serviced by a single disk. The Microsoft SQL Server 2005 application produces data in 8,192 bytes (16 blocks on the disk). With a stripe unit size of 256K bytes (512 blocks on the disk), this will mean there are 512 possible starting blocks for the I/O request. Fifteen of these possible starting points will lead to the splitting of the 16-block data request over two disks. Therefore, the percentage of split is 15/512, or 2.9 percent. This means 97.1 percent of the write operations are serviced by a single disk.

The stripe unit size should be chosen to minimize the average response time while maximizing throughput. A small stripe unit size results in a uniform load distribution among the disks in the array and decreases the variance in response times, but it increases the overhead of disk seeks and rotational latencies, thereby decreasing throughput. Large stripe unit sizes increase array throughput at the expense of increased load imbalance and variance in response times. To maximize the number of clients that can be serviced simultaneously, the server should select a stripe unit size that balances these tradeoffs. For I/O intensive applications, a good compromise is a stripe unit size that results in a 3 to 5 percent probability of splitting a data request across two disks.

Column Size
Data transfer performance is increased when multiple disks transfer data in parallel to satisfy a single application request. A further consideration when using a greater number of disks is the aggregate rotational latency: the expected latency of N non-synchronized disks accessed at the same time is N / (N+1) times the revolution time. This latency therefore approaches one full revolution asymptotically rather than growing without bound; once past four disks, the aggregate rotational latency already reaches 80 percent of the revolution time, and adding further disks costs little additional latency. The greater the number of disks, the better the transfer performance.
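For example, the expected aggregate rotational latency is 1/2 of a revolution for a single disk, 2/3 for two disks, 4/5 (80 percent) for four, 8/9 (roughly 89 percent) for eight, and 16/17 (roughly 94 percent) for sixteen, so the latency penalty flattens out quickly once more than a few disks are involved.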
Figure 7. Rotational latency, as a percentage of the revolution time, increases with a greater number of disks.

The latest version of Veritas Storage Foundation for Windows allows the stripe layout to be changed dynamically with the SFW Dynamic Relayout capability. When a striped volume is increased in size, the minimum number of disks that can be added to that stripe is the same as the number of columns. For example, if the volume contained sixteen 9-GB LUNs (a 144-GB volume with 16 columns), a minimum of sixteen disks would be needed to increase the volume. In later versions of SFW, a command line feature will allow the number of columns to be altered. With this current limitation in mind, it is important to size volumes appropriately now to enable efficient growth in the future.

Failure rating
As stated previously, the more columns in a striped volume, the better for I/O intensive applications. A greater number of disks means more concurrent requests can be serviced, leading to shorter I/O queuing times. However, utilizing more disks poses a greater risk of disk failure. The average life of a disk is 500,000 hours, or 57 years. This is an average; some disks may last longer, and some only a few months. Because this is a per-disk value, on a 32-way stripe the expected time between disk failures within the volume drops to about 1.8 years (57 years / 32), to about 3.5 years with a 16-way stripe, and to about 7 years with an 8-way stripe.

Mirroring
By default, data in a mirrored volume is read from each plex in a round-robin fashion. This is appropriate when the plexes are located in the same array; if one of the plexes is on a remote site, the read policy should be changed to prefer the local plex. This will increase performance, even if only slightly, and the gain grows with the distance between the plexes.

To set the volume read policy, use the following steps:
1. Right-click the volume for which you wish to set the read policy.
2. Select Set Volume Usage on the context menu.
3. When the Set Volume Usage dialog appears, select the local plex.

Figure 8. Setting the volume read policy in Windows.

Dirty Region Logging


Dirty Region Logging (DRL) is used to resynchronize all copies of a mirror quickly when a system is restarted following a crash. If DRL is not used, all copies of the mirror must be synchronized by copying the full contents of the volume to each plex, an intensive and lengthy operation. A DRL log can be added during creation of the mirrored volume, or it can be added later. Multiple logs can be associated with a single mirrored volume for fault tolerance; however, a large number of logs can have an impact on performance.

The following should be noted concerning DRL:
- DRLs track changes to volumes via bits that are dirtied by writes. Each bit represents a region in the data volume. A region's size, measured in KB, is determined by the overall size of the volume and is coded to have a maximum value.
- The DRL is composed of two parts, an active part and a recovery part, each being half the DRL's size. For example, in a 1-KB DRL, each part occupies 512 bytes.
- DRL size varies depending on volume size. If the volume is large enough that all bits have been allocated and the maximum region size has been reached, the DRL's size varies accordingly. Note: Veritas Storage Foundation for Windows uses a block size of 512 bytes. The DRL is at least two blocks in size, one each for the active and recovery parts, and therefore has a minimum size of 1 KB (1,024 bytes). Smaller volumes may not utilize the entire DRL.
- Before a write is committed to disk, it updates the corresponding bit(s) in the DRL. If a bit is already dirty, the write is committed to disk with no change to the DRL. If the bit is clean, it is dirtied first and then the write is committed.
- After a system crash, the active part's contents are copied to the recovery part of the DRL. Mirrors resynchronize from the recovery part, while the active part continues to be updated by new writes, so the volume remains protected even if the system crashes again during the resync. During a resync-after-crash operation, all dirty bits in the recovery part are used in the resync (except the last 128 bits).
- A lazy write algorithm is used to clean dirty bits. The lazy write thread wakes up periodically (for example, every five minutes; the exact value is set in the code) and writes to the DRL, using a least recently used (LRU) algorithm to determine which bits are cleaned. The DRL is also coded to have a maximum (and a minimum) number of dirty bits, which also influences bit cleaning.
- During VM volume transactions (such as format), the whole DRL is cleaned, because transactions either fail or complete as a whole; if the transaction completes, the mirrors are already in sync.
- An overhead of about 5 percent is associated with DRL use.

Veritas FastResync
Veritas FastResync is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FastResync works by copying only changes to the newly reattached volume using logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.

FastResync can be used with a standard mirrored volume, or it can be used with Veritas FlashSnap. FlashSnap enables the creation of independently addressable multipurpose volumes, which are mirrors of volumes on a server. Multipurpose volumes can be detached from the local server and moved to another server for backup or other activities, and can then be reattached to the original volume on the local server and quickly resynchronized using FastResync. Symantec recommends that FastResync be used when each plex of the mirror is on a different site.

Data Change Objects (DCOs) are used by the FastResync capabilities of Veritas Storage Foundation for Windows. DCO and DRL both keep track of regions on a volume where the mirrors are not synchronized; however, they perform different functions. DRL determines whether a write to a mirrored volume has been completed on all mirrors and is used to resynchronize mirrors following a system crash. DCO retains a record of updates that have been missed by a detached mirror. As part of the FlashSnap process, FastResync logs are added to mirrored volumes to track changes for resynchronization after a mirror has been broken and then reassociated with the original.

The DCO is a bitmap that tracks changes to regions in a volume via dirty bits. Each bit in the DCO represents a region in the volume, and writes to the volume mark the corresponding bits as dirty. When a SnapStart command is issued, a mirror is added to the volume and a mirrored DCO volume is associated with it. When a SnapShot command is issued, the mirror is broken and a drive letter is assigned to the resulting new volume. The DCO mirror is also broken, so each data volume (original and snapshot) has an associated DCO volume. When a SnapBack command is issued, the volumes are reassociated and resynchronized based on a combination of the dirty bits in each DCO volume.

The following should be noted concerning the DCO:
- Minimum DCO volume size: 64 KB
- Maximum DCO volume size: 2 MB
- Default region size: 32 KB
- The DCO contains a 64-byte header.
- If the volume is grown after the DCO has been added, the region size grows; the DCO itself never grows, and there is no size limit for regions. Hence, if a volume's growth is excessive, it is recommended that the DCO be removed and re-added, after which its size will again be in proportion to the size of the volume (up to the maximum DCO size).
- Bits are cleaned as part of the resync process.

Dynamic Multi-pathing
Dynamic Multi-pathing software provides the intelligence necessary to manage multiple I/O paths between a server and a SAN-based storage subsystem. Without multipathing software, the server operating system presents applications with multiple images of a disk or LUN (one for each I/O path discovered), which can result in data corruption. At its most basic, multipathing software has two main modes of operation. When configured for redundancy, a single path is dedicated to I/O transfer, while other paths are in standby mode. The software manages failover between the I/O paths, thus eliminating the potential for a single point of failure. If connectivity along one path to a storage device is interrupted, the multipathing software dynamically switches I/Os to a surviving path, allowing application access to continue unimpeded. The other mode of operation allows for all paths to be utilized for I/O transfer. This can improve performance by leveraging the presence of these multiple paths, increasing the available bandwidth for I/O traffic.

Figure 9. Dynamic Multi-pathing offers multiple paths (Path A and Path B) from the server through the SAN to disk arrays from different vendors.

Windows Storage Management Best Practices


These recommendations and best practices are for Windows servers running Veritas Storage Foundation 5.0 for Windows. Although they can be applied to the majority of customer deployments, there will be circumstances where they do not apply. Symantec recommends validating all Veritas Storage Foundation for Windows designs by consulting with Symantec Professional Services or Presales Engineering.

Disk Groups
A Disk Group is a container for administration purposes. Symantec recommends one Disk Group per application.

Track Alignment
Track aligning basic disks and dynamic volumes can improve disk performance. For basic disks, Microsoft's diskpar or diskpart utilities can be used to create track-aligned basic partitions. Veritas Storage Foundation for Windows provides automated track alignment for most leading array families from EMC, HP, HDS, IBM, and Network Appliance. When Veritas Storage Foundation for Windows is installed and first configured, track alignment is enabled for specific array families. Once track alignment is enabled for an array family, any dynamic volume created using disk resources on that array family is automatically track aligned. This eliminates the need to run special commands or processes on newly created volumes: the administrator can set it and forget it.

Figure 10. Choose Track Alignment from the Administrator screen.

Figure 11. Choose the Track Alignment settings.

Figure 12. Harddisk Properties

Disk Formatting
Symantec recommends any new volume be formatted using Quick format and NTFS, with allocation sizes as noted in Table 3.
Table 3. Allocation sizes by application.

Application    Allocation size (bytes)
Exchange       4,096
SQL            8,192 (some DBAs prefer 65,536)
Oracle         Varies depending on the Oracle I/O configuration
Other          4,096
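As a sketch of how these values map onto the format command (the drive letter and label are placeholders), a volume intended for SQL Server data files could be quick formatted with an 8,192-byte allocation unit:

    format F: /FS:NTFS /A:8192 /V:SQLData /Q

The /A switch sets the allocation unit size; substituting /A:4096 covers the Exchange and general-purpose cases, and /A:64K gives the 65,536-byte size that some DBAs prefer.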

Striping

Striped volume column size

The more columns in a striped volume, the better the performance: it is beneficial for a large I/O to cross disks so that more disks can be used to complete the request simultaneously. A striped volume can only be extended by adding at least as many disks as it has columns; therefore, if the volume has 4 columns, the minimum number of disks required to extend it is 4. Symantec recommends a column count, where possible, of 8.
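As a concrete illustration using the 9-GB LUN size assumed elsewhere in this paper: an 8-column stripe built from 9-GB LUNs yields a 72-GB volume, and extending it requires adding at least eight more 9-GB LUNs, so the volume grows in 72-GB increments.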

Striped volume unit size


Table 4 shows the I/O produced by varying stripe sizes.
Table 4. I/O produced by stripe size.

Typical I/O produced (bytes)    Stripe unit size (sectors)    Stripe unit size (bytes)    Probability of I/O crossing two disks
4,096                           256                           128 K                       2.7%
8,192                           512                           256 K                       2.9%
65,536                          4,096                         2 MB                        3%

Mirroring
Veritas Storage Foundation for Windows is used to mirror LUNs from different arrays to protect against site and/or array failure. Mirroring for disk redundancy is usually provided at the hardware level. These recommendations are based on a concatenated mirror between two separate arrays on different sites.

Initial Mirror Synchronization


How long does it take to mirror a 1-TB volume? This is difficult to estimate, because many factors affect the time required. There are a few things you can do to increase the synchronization speed:
- Enable O/S disk write caching, via Device Manager, on all disks being mirrored for the duration of the synchronization. The write cache should only be enabled if the array has battery backup.
- Ensure the disks are track aligned.

Dirty Region Logging


- One Dirty Region Log is used per mirrored volume and is located on the local plex. There is no performance benefit in mirroring the DRL; mirroring it is useful only to protect the DRL itself against disk failure.
- Add the DRL after the mirror has been synchronized.
- Technically, placing the DRL on a separate volume away from the mirrored plexes gives maximum performance, but that volume would then need to be highly available. Because the size of the DRL is usually 32 K, practicality outweighs performance.
- The overhead of DRL can be as high as 60 to 70 percent if it resides on the same disk as the data.

Veritas FastResync
Use one Veritas FastResync DCO per site. Snap volumes will also require a DCO. Figure 13 shows a typical mirrored volume with a snap volume.

Figure 13. Mirrored volume with a snap volume.

Preferred Read Policy


The Read Policy should be set for the local plex.

Veritas Dynamic Multi-pathing


The Veritas Storage Foundation for Windows Dynamic Multi-pathing Option is the industrys leading SAN storage multipathing solution for mission-critical Windows servers. Veritas Dynamic Multi-pathing is fully compliant with the Microsoft Windows MPIO Framework and is in its third generation of MPIO integration. Veritas Dynamic Multi-pathing offers MPIO Device Specific Module (DSM) support for most leading array families from EMC, HP, HDS, IBM, and Network Appliance, as well as a feature-rich solution unsurpassed in the industry. Whether you are looking for an array-independent multipathing solution for your Windows SAN builds or a feature-rich solution to improve SAN storage performance or management, Veritas Dynamic Multi-pathing is the ideal choice for your Windows servers.

Symantec recommends that customers implement Veritas Dynamic Multi-pathing using the included DMP Device Specific Modules (DSMs), which are Microsoft MPIO compliant. These DMP DSMs should be used with Windows Server 2003 x86, x64, and IA-64 operating systems with Fibre Channel SAN StorPort Miniport HBA drivers. Additionally, Symantec recommends the use of SCSI-3 with hardware arrays that support SCSI-3, which allows the use of active/active load balancing in Microsoft Cluster Server (MSCS) and Veritas Cluster Server environments. For specific load-balancing recommendations, please see the Veritas Dynamic Multi-pathing Load Balancing Performance white paper.

Clustering
The vxclus utility makes it possible to bring an MSCS cluster disk group online on a node that holds only a minority of the disks in the disk group. It does this by creating an entry in the Windows registry that enables the cluster resource for forced import; once vxclus enable has been executed, the resource can be brought online in Cluster Administrator.

vxclus enable -g<DynamicDiskGroupName> [-p]

This command enables the designated cluster disk group for forced import so that it can be brought online when only a minority of the disks in the disk group are available. By default, the vxclus force import functionality is disabled again after the resource has been brought online. If -p is specified, the registry entry is made persistent, so forced import remains enabled and the designated cluster disk group can always be brought online with Cluster Administrator.

VXCLUS with MSCS


If MSCS is being used with mirrored volume(s) for the application, with each plex at a different site, vxclus must be enabled for the disk group to ensure automatic import of the volume:

vxclus enable -gDGNAME -p

Please note that vxclus should be run on all cluster nodes, since the results of the command are stored in the Windows registry of each cluster node.
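Once vxclus enable has been run on every cluster node, the disk group resource can be brought online from Cluster Administrator or, as a sketch using a placeholder resource name, from the cluster.exe command line:

    cluster resource "DGNAME Disk Group" /online

Because -p was specified, forced import remains enabled for subsequent failovers rather than being disabled after the resource first comes online.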

Summary
By using Veritas Storage Foundation for Windows and the storage design best practices discussed in this document, it is possible to optimize the storage performance of Windows servers. Veritas Storage Foundation for Windows is a leading storage management application for Windows servers; it overcomes the obstacles of traditional disk management by providing easy-to-use online disk storage management for mission-critical Windows environments in the enterprise. Veritas Storage Foundation for Windows enables high availability of data, optimizes storage I/O performance, and protects current storage investments while allowing freedom of choice for hardware in the future.

Acknowledgments
Thanks to Paul Barrington, Lead Technical Architect, Symantec EMEA Consulting Group, who contributed to this white paper.

About Symantec Symantec is a global leader in infrastructure software, enabling businesses and consumers to have confidence in a connected world. The company helps customers protect their infrastructure, information, and interactions by delivering software and services that address risks to security, availability, compliance, and performance. Headquartered in Cupertino, Calif., Symantec has operations in 40 countries. More information is available at www.symantec.com.

For specific country offices and contact numbers, please visit our Web site. For product information in the U.S., call toll-free 1 (800) 745 6054.

Symantec Corporation World Headquarters 20330 Stevens Creek Boulevard Cupertino, CA 95014 USA +1 (408) 517 8000 1 (800) 721 3934 www.symantec.com

Copyright 2007 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, FlashSnap, Veritas, and Veritas Storage Foundation are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Microsoft, Internet Explorer, and Windows are registered trademarks of Microsoft Corporation in the United States and other countries. Other names may be trademarks of their respective owners. Printed in the U.S.A. 02/07 11859264
