
NETAPP TECHNICAL REPORT

Predictive Cache Statistics


Best Practices Guide
Paul Updike, NetApp
June 2008 | TR-3681

This guide investigates the use of the Predictive Cache Statistics (PCS) functionality in Data ONTAP®.

TABLE OF CONTENTS

1   INTRODUCTION TO PREDICTIVE CACHE STATISTICS
    1.1   ENABLING PREDICTIVE CACHE STATISTICS
2   COLLECTING PREDICTIVE CACHE STATISTICS
    2.1   DATA COLLECTION FOR HIGH-PRECISION ANALYSIS
    2.2   DATA COLLECTION FOR REAL-TIME ANALYSIS
3   ANALYZING RESULTS
    3.1   DECODING COUNTERS
    3.2   EXAMPLE OF A BASIC COUNTERS-BASED ANALYSIS
    3.3   WORKING WITH THE REAL-TIME FLEXSCALE-PCS DATA
    3.4   EXAMPLE OF REAL-TIME DATA ANALYSIS
    3.5   ANALYSIS TOOLS
4   PREDICTIVE CACHE STATISTICS MODES OF OPERATION
    4.1   METADATA CACHING
    4.2   NORMAL USER DATA CACHING (DEFAULT)
    4.3   LOW-PRIORITY DATA CACHING
    4.4   CHOOSING THE BEST MODE
5   APPENDIX
    5.1   APPENDIX 1: SAMPLE OF EXT_CACHE_OBJ STATISTICS
    5.2   APPENDIX 2: FLEXSCALE-PCS.XML FILE CONTENTS

CONFIDENTIAL
Restricted to NetApp employees and channel partners, each of whom is under NDA
obligations. This document may not be shared with customers without prior written
permission from NetApp.

1 INTRODUCTION TO PREDICTIVE CACHE STATISTICS

Predictive Cache Statistics (PCS) is a way of simulating the effects of additional extended cache memory in
a system. PCS simulates caches at three memory points above system memory: 2x, 4x, and 8x of the base
system memory. The three caches are named EC0 (2x), EC1 (4x), and EC2 (8x)
For example, on a FAS3070 with 8GB of system memory, PCS simulates EC0 = 16GB. EC1 is represented
as an additional 16GB, bringing the total to 32GB simulated at that point. Finally, EC2 is 32GB and brings
the total to 64GB, or 8 times the base memory.
This technique allows you to collect cache statistics as if an actual cache were installed at each of these
memory points. From those statistics, you can model the behavior of real caches and predict the
effectiveness of purchasing and placing one or more Performance Acceleration Modules on the system to
improve the workload.
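To make the sizing arithmetic concrete, the following short Python sketch computes the cumulative simulated cache size at each instance from the base system memory. It is illustrative only; the function name is not part of Data ONTAP.

def simulated_cache_sizes(base_memory_gb):
    """Return the cumulative simulated cache size (in GB) at EC0, EC1, and EC2.

    EC0 simulates 2x base memory; EC1 and EC2 extend the cumulative total
    to 4x and 8x of base memory.
    """
    return {
        "EC0": 2 * base_memory_gb,   # e.g. 16GB on a FAS3070 with 8GB of memory
        "EC1": 4 * base_memory_gb,   # cumulative total simulated at EC1
        "EC2": 8 * base_memory_gb,   # cumulative total simulated at EC2
    }

print(simulated_cache_sizes(8))   # {'EC0': 16, 'EC1': 32, 'EC2': 64}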

1.1 ENABLING PREDICTIVE CACHE STATISTICS


You can enable PCS with a simple command-line option on storage controllers with more than 2GB of system memory. To enable PCS you must be running Data ONTAP 7.2.6.1 or later:

Data ONTAP 7.2.6.1 and later 7.2.x releases:
options ext_cache.enable on

Data ONTAP 7.3 and later:
options flexscale.enable pcs

NOTE: For the remainder of this document, we will use the Data ONTAP 7.3 syntax. You may replace flexscale with ext_cache for PCS on Data ONTAP 7.2.6.x releases.
Before enabling Predictive Cache Statistics, observe the following precautions:

•  Monitor CPU utilization with the sysstat command. If the CPU busy column stays at or above 80%, there may be a noticeable difference in the performance of the system while PCS is running: a 10% to 20% increase in protocol latency as the CPU approaches 100% utilization. In this scenario, either:
   o  Run PCS and closely monitor system performance; or
   o  Do not run PCS on this system.
•  If you choose to run PCS and monitor system performance, and you observe a negative performance impact, you can disable the statistics with flexscale.enable off.
•  When a collection period is complete, disable the PCS functionality. PCS should not be left enabled all the time.

2 COLLECTING PREDICTIVE CACHE STATISTICS


2.1 DATA COLLECTION FOR HIGH-PRECISION ANALYSIS

Predictive Cache Statistics are implemented as Data ONTAP Counter Manager objects. They can be
started, shown, stopped, and reset just like any other counters by using the stats command. The counters
associated with PCS are held in two counter manager objects, ext_cache and ext_cache_obj.


The ext_cache_obj object contains the performance counters, such as the hits and misses. The
individual counters are too numerous to list here, but an example of the counters is listed in Appendix 1.
Each of the simulated caches, EC0, EC1, and EC2, has the same set of counters.
The simplest and recommended method of collecting PCS is through the perfstat tool, available in the tool chest at NOW (NetApp on the Web). This tool collects all of the statistics at the intervals given on the command line. For example, to collect a 10-minute sample five times, the command would be:
perfstat.sh -f <storage controller> -t 10 -i 5
The perfstat tool is available here:
http://now.netapp.com/NOW/download/tools/perfstat/
Note: For proper data collection, be sure to use the latest version of the perfstat tool. Earlier versions of perfstat on Windows won't collect PCS stats. Also, do not use the -F option to perfstat when collecting PCS. Some analysis tools don't work with perfstats collected in this way.
COLLECTION PROCESS
To collect Predictive Cache Statistics, follow these steps:
1. Observe the precautions just described and then enable PCS.
2. Allow the simulated caches to warm up. This might take a day or so. One way to tell when the cache has stabilized is to view the Usage column described in section 2.2. When the percentage has stabilized across the three instances, you're probably ready to go.
3. At a time of interest in the workload, run perfstat as described earlier. Five iterations of 10 minutes each (-t 10 -i 5) tends to work well.
4. Save the perfstat output.
5. Disable PCS by setting the option to off.
6. Analyze the results.
7. If necessary, based on the results of the analysis, repeat the process.

2.2 DATA COLLECTION FOR REAL-TIME ANALYSIS

Sometimes it may not be possible to collect a perfstat, or you may not have access to the analysis tools
to process it. In these scenarios, there is still a way to understand the performance changes of adding
Performance Acceleration Modules to a system.
In the same manner described earlier, collect the perfstat data for the workload in question. Section 3,
Analyzing Results, looks at some of the counters to understand how the cache is working.
In addition to the perfstat, you can also collect real-time data from the cache by following these steps:
1. Copy the text from Appendix 2 into a plain text file and save the file with the name flexscale-pcs.xml.
2. Move the file to the storage system in the <root volume>/etc/stats/preset/ folder.
3. At step 3 in the collection process just described, start the real-time counters with
   > stats show -p flexscale-pcs
This lists output to the screen in the following manner:

Instance      Blocks  Usage    Hit   Miss  Hit  Evict  Invalidate  Insert
                          %     /s     /s    %     /s          /s      /s
ec0         16777216     99   3451  15091   18   5189           0    5298
ec1         16777216     99      0    348    0   4002        1112    5189
ec2         33554432     99      0      0    0   3095         806    4002

ec0         16777216     99   4152  16519   20      0         198       0
ec1         16777216     99      0    390    0      0          51       0
ec2         33554432     99      0      0    0      0           8       0

ec0         16777216     99   4196  16049   20   5124          67    5233
ec1         16777216     99      0    410    0   3561        1547    5124
ec2         33554432     99      0      0    0   2756         765     356

The columns are defined as follows:


Instance: Instance of the virtual cache.
Blocks: The number of 4KB blocks in the cache.
Usage: The percentage of the instance that is currently filled with data.
Hit: The hits per second of that instance.
Miss: The misses per second of that instance.
Hit %: The percentage of hits to total accesses.
Evict: The blocks evicted per second of data from the instance.
Invalidate: The number of blocks invalidated due to data updates.
Insert: The number of insertions per second into the instance.
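For quick calculations on a single row of this output, the following Python sketch may help. It is purely illustrative: the field names simply mirror the column headings above, and the 4KB block size is taken from the Blocks definition.

from dataclasses import dataclass

@dataclass
class PcsRow:
    instance: str        # ec0, ec1, or ec2
    blocks: int          # number of 4KB blocks in the simulated cache
    usage_pct: float     # how full the instance is
    hit_per_s: float
    miss_per_s: float
    hit_pct: float
    evict_per_s: float
    invalidate_per_s: float
    insert_per_s: float

    def size_gb(self) -> float:
        # 4KB blocks converted to GB (1048576 KB per GB)
        return self.blocks * 4 / 1048576

    def served_kb_per_s(self) -> float:
        # approximate data served from the simulated cache
        return self.hit_per_s * 4

row = PcsRow("ec0", 16777216, 99, 3451, 15091, 18, 5189, 0, 5298)
print(row.size_gb(), row.served_kb_per_s())   # 64.0 13804.0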

3 ANALYZING RESULTS

Perfstat collects a lot of data, and many of its data points apply to PCS. This paper doesn't cover in-depth analysis of all perfstat data, so familiarity with NetApp storage system performance is assumed. To find the PCS data in a raw perfstat file, search for perfstat_ext_cache_obj, which is in the header of the PCS object as it is displayed in the perfstat output file. See Appendix 1 for an example of such a listing.
Additionally, for all analysis it is important to remember that the Performance Acceleration Module is available at 16GB per module, so the instances of the simulated caches may not line up exactly with module sizes. When doing an analysis, it is safe to assume linear relationships. For example, if the virtual cache is 64GB, you can divide by four 16GB modules to understand the per-module benefit. Similarly, you can divide the hits/s and so on and interpolate the results in the same way.
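The linear scaling can be written out directly. The following Python sketch is only a restatement of that assumption: it divides a simulated cache's size and hit rate evenly across hypothetical 16GB modules, which is an approximation rather than a measured result.

def per_module_share(simulated_cache_gb, hits_per_s, module_gb=16):
    """Approximate the per-module benefit by assuming a linear relationship."""
    modules = simulated_cache_gb / module_gb
    return {
        "modules": modules,
        "hits_per_s_per_module": hits_per_s / modules,
    }

# A 64GB simulated cache maps onto four 16GB modules.
print(per_module_share(64, 3451))   # {'modules': 4.0, 'hits_per_s_per_module': 862.75}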

3.1 DECODING COUNTERS

There are numerous counters in the ext_cache_obj object. When looking at them in their bare form, it is useful to group them by their respective modes of operation, as detailed in section 4. This document does not define the individual counters; you can investigate them by using the Data ONTAP stats explain command.
Because the simulated caches are being used to help determine the effect of real caches at similar memory points, the most important data is the number of hits and misses in each cache. This helps you understand how much of the workload each cache could serve.
NORMAL
ext_cache_obj:ec0:hit_normal_lev0:248/s


ext_cache_obj:ec0:miss_normal_lev0:50/s
METADATA
ext_cache_obj:ec0:hit_metadata_file:0/s
ext_cache_obj:ec0:hit_directory:0/s
ext_cache_obj:ec0:hit_indirect:0/s
ext_cache_obj:ec0:miss_metadata_file:1/s
ext_cache_obj:ec0:miss_directory:0/s
ext_cache_obj:ec0:miss_indirect:0/s
LOW PRIORITY
ext_cache_obj:ec0:hit_flushq:246/s
ext_cache_obj:ec0:hit_once:214/s
ext_cache_obj:ec0:hit_age:0/s
ext_cache_obj:ec0:miss_flushq:42/s
ext_cache_obj:ec0:miss_once:16/s
ext_cache_obj:ec0:miss_age:0/s
For basic analysis purposes, it is only necessary to understand which mode the counters are associated
with, not their individual definitions. The counters can be summed to get total hits/misses per mode. You can
use this information to determine the best run-time mode for a simulated cache and/or Performance
Acceleration Module.
TOTALS
ext_cache_obj:ec0:hit:261/s
ext_cache_obj:ec0:miss:129/s
These are the summary counters for each of the simulated caches. Don't expect these numbers to match the sums of the individual modes exactly; it is expected and normal for them to be a little off.
Finally, the amount of cache used in each simulated instance is available in the following object.
CACHE USAGE
ext_cache_obj:ec0:usage:55%
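A minimal Python sketch of the grouping and summing described above, assuming the counter values have already been extracted (for example, from a perfstat file) into a dictionary of name-to-rate pairs. The groupings follow the lists shown earlier; the helper name is made up for illustration.

# Counter names grouped by the mode of operation they belong to.
MODE_COUNTERS = {
    "normal": ["hit_normal_lev0", "miss_normal_lev0"],
    "metadata": ["hit_metadata_file", "hit_directory", "hit_indirect",
                 "miss_metadata_file", "miss_directory", "miss_indirect"],
    "low_priority": ["hit_flushq", "hit_once", "hit_age",
                     "miss_flushq", "miss_once", "miss_age"],
}

def hits_misses_per_mode(counters):
    """Sum hit and miss rates per mode for one simulated cache instance.

    counters: dict such as {"hit_normal_lev0": 248, "miss_normal_lev0": 50, ...}
    """
    totals = {}
    for mode, names in MODE_COUNTERS.items():
        hits = sum(counters.get(n, 0) for n in names if n.startswith("hit"))
        misses = sum(counters.get(n, 0) for n in names if n.startswith("miss"))
        totals[mode] = {"hits_per_s": hits, "misses_per_s": misses}
    return totals

# Values taken from the ec0 listing above.
ec0 = {"hit_normal_lev0": 248, "miss_normal_lev0": 50,
       "hit_flushq": 246, "hit_once": 214, "miss_flushq": 42, "miss_once": 16,
       "miss_metadata_file": 1}
print(hits_misses_per_mode(ec0))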

3.2 EXAMPLE OF A BASIC COUNTERS-BASED ANALYSIS

The percentage that the simulated caches are filled helps determine the number of Performance Acceleration Modules for this workload. Assume that the system is a FAS6080 with 32GB of base memory. For the data just given, the simulated cache at EC0 is 2x the base, or 64GB. This cache is filled to only 55%, and the other two (EC1, EC2) are at 0%. Note that the cache has stabilized at this number and is no longer in the process of warming up.
You now know that the total amount of space required to cover this workload in cache is 55% of 64GB, or a little more than two 16GB Performance Acceleration Modules.
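The arithmetic behind that estimate is simple enough to restate as a short Python sketch. The function and its argument names are illustrative only; the 16GB module size is the figure given in section 3.

import math

def modules_needed(simulated_cache_gb, usage_pct, module_gb=16):
    """Estimate the working set held in the simulated cache and how many
    16GB modules would be needed to fully cover it."""
    working_set_gb = simulated_cache_gb * usage_pct / 100
    return working_set_gb, math.ceil(working_set_gb / module_gb)

# 55% of a 64GB simulated cache: about 35GB, a little more than two modules'
# worth of capacity (three modules would fully cover it).
print(modules_needed(64, 55))   # (35.2, 3)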
Looking back at the individual modes of operation, note that the hits are in the normal and low-priority counters, along with a fair number of misses. There are essentially no hits or misses in the metadata counters, so you can conclude that the metadata is fitting into system memory well. The two modes of operation that make the most sense here are normal mode, which does not include low-priority blocks, and low-priority mode.


Through experimentation, you can turn on the low-priority mode and allow another warm-up period, or you can leave the cache in normal mode. Try to find the mode that gives the most hits and displaces the most I/O from disk.

3.3 WORKING WITH THE REAL-TIME FLEXSCALE-PCS DATA
You can use the output of the flexscale-pcs stats to determine several things.
Instance      Blocks  Usage    Hit   Miss  Hit  Evict  Invalidate  Insert
                          %     /s     /s    %     /s          /s      /s
ec0         16777216     99   3451  15091   18   5189           0    5298
ec1         16777216     99      0    348    0   4002        1112    5189
ec2         33554432     99      0      0    0   3095         806    4002

•  The usage percentage tells you how full the cache at each instance is.
•  If the hit/(invalidate+evict) ratio is small, the caching point at that instance is possibly too small. This means that much more data is being discarded than ever had a chance to be used.
•  If the (hit+miss)/invalidate ratio is small, it might indicate a workload with a large amount of updates; switching to metadata mode and checking the hit % again is advisable.
•  If the usage is stable and there are few invalidates and evictions, the working set fits well.
•  The KB/s that the cache serves is approximately equal to hits/s * 4KB per block.
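These rules of thumb can be tabulated with a few lines of Python. The sketch below is illustrative only: it assumes 4KB blocks, and it leaves the judgment of what counts as "small" to the reader.

def cache_health(hit, miss, evict, invalidate):
    """Compute the rule-of-thumb ratios described above for one instance."""
    churn = evict + invalidate
    return {
        # hits relative to data being discarded; a small value suggests the
        # caching point may be too small
        "hit_to_churn": hit / churn if churn else float("inf"),
        # accesses relative to invalidations; a small value suggests an
        # update-heavy workload (consider metadata mode)
        "access_to_invalidate": (hit + miss) / invalidate if invalidate else float("inf"),
        # approximate data served out of the cache
        "served_kb_per_s": hit * 4,
    }

print(cache_health(hit=3451, miss=15091, evict=5189, invalidate=0))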

Understanding this information allows you to estimate the amount of work that could be replaced with a
Performance Acceleration Module. In addition to this data, you may want to observe the output of sysstat
over a similar interval to understand the workload and the amount of data that is going to disk. Combining
the two gives a picture of the effectiveness of adding Performance Acceleration Modules.

3.4 EXAMPLE OF REAL-TIME DATA ANALYSIS

This section analyzes the data above to see what kind of predictions you can make about the workload.
Instance      Blocks  Usage    Hit   Miss  Hit  Evict  Invalidate  Insert
                          %     /s     /s    %     /s          /s      /s
ec0         16777216     99   3451  15091   18   5189           0    5298
ec1         16777216     99      0    348    0   4002        1112    5189
ec2         33554432     99      0      0    0   3095         806    4002

ec0         16777216     99   4152  16519   20      0         198       0
ec1         16777216     99      0    390    0      0          51       0
ec2         33554432     99      0      0    0      0           8       0

ec0         16777216     99   4196  16049   20   5124          67    5233
ec1         16777216     99      0    410    0   3561        1547    5124
ec2         33554432     99      0      0    0   2756         765     356

First, note that all three caches are 99% full: the cache is at a stasis point, and the data is valid.

Second, check to see how big each of the caches is:


•  ec0 = 16777216 blocks * 4KB/block / 1048576 KB/GB = 64GB
•  ec1 = 16777216 blocks * 4KB/block / 1048576 KB/GB = 64GB
•  ec2 = 33554432 blocks * 4KB/block / 1048576 KB/GB = 128GB

•  Check the hits versus the amount of data churning through the cache. It looks like ec0 is the only cache with any hits, so concentrate there:
   3451 / (5189 + 0) = 0.66
   That's a pretty good ratio, so the cache is stable and seems effective in the first 64GB.
•  Looking at the hit percentage, you see about 18% hits at 3451 hits/s. That's not bad.

•  Finally, look at the amount of data being fed out of the cache:
   3451 blocks/s * 4KB/block = 13804 KB/s
   So on this system, if you added four Performance Acceleration Modules (64GB in ec0 / 16GB per module), you might expect about 13.5 MB/s of disk reads to be replaced. In a small-block, random-read-intensive workload, 3451 blocks/s would represent the IOPS of almost two shelves of disks.
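Under the same assumptions (4KB blocks, 16GB modules), the calculation in this example can be reproduced with a few lines of Python; this is simply the arithmetic above restated.

blocks = 16777216                   # ec0
size_gb = blocks * 4 / 1048576      # 64GB simulated at ec0
hit, evict, invalidate = 3451, 5189, 0

ratio = hit / (evict + invalidate)  # roughly 0.66 hits per discarded block
served_mb_per_s = hit * 4 / 1024    # roughly 13.5 MB/s served from cache
modules = size_gb / 16              # four 16GB modules to cover ec0

print(size_gb, round(ratio, 2), round(served_mb_per_s, 1), modules)
# 64.0 0.67 13.5 4.0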

3.5 ANALYSIS TOOLS

It can be complicated to take the many counters that are available from PCS and turn them into data that
can influence a purchase and implementation decision. To that end, the following tools have been
developed to assist the analysis of PCS data.
PERFSYS
Perfsys is a tool that can take a perfstat and turn it into a set of concise analytics. It requires Perl to run
and is available here:
http://wikid.netapp.com/w/Perfsys#Location
It can be used as a standalone tool that is run against a perfstat file. The output of perfsys is an HTML
file that can be opened with a Web browser. The output has multiple data points. The PCS section takes the
following form:
Flex Scale Table Key
EC0 cumulative simulated cache: 64 gigabytes
EC1 cumulative simulated cache: 128 gigabytes
EC2 cumulative simulated cache: 256 gigabytes

Total IOs that would have gone to disk: 5846

Caching  Cache   Chain   Disk IOs  Hits in  Raw   Raw     Meta  Meta    Lo-Pri  Lo-Pri
Point    % Used  Length  Replaced  MB/s     Hits  Misses  Hits  Misses  Hits    Misses
64GB     99      1.510   1991      11.750   3008  15405   2873  1919    0       0
128GB    99      1.510   442       2        669   390     659   1257    0       0
256GB    99      1.510   261       1        395   0       379   878     0       0

Total IOs that would have gone to disk: 5897

Caching  Cache   Chain   Disk IOs  Hits in  Raw   Raw     Meta  Meta    Lo-Pri  Lo-Pri
Point    % Used  Length  Replaced  MB/s     Hits  Misses  Hits  Misses  Hits    Misses
64GB     99      1.518   2044      12.125   3104  15660   2969  1846    0       0
128GB    99      1.518   387       2        588   382     579   1266    0       0
256GB    99      1.518   254       1        386   0       370   895     0       0

The output collects the data from the hits and misses counters, sums the data, groups it by mode, and
displays iteration by iteration for each simulated cache. It also calculates the disk I/Os that would be
replaced by using the cache at that caching point. This makes the analysis process much faster and more
obvious than the counters-based example in section 3.2.


This analysis also provides a key factor to observe: the number of disk I/Os being replaced at the caching
points.
PROCESS FOR ANALYSIS USING PERFSYS
1. Collect PCS by following the steps in section 2.1.

2. Run perfsys against the file with:
   perfsys.pl perfstat <perfstat file>

3. Open the HTML output file with a Web browser.
4. Compare hits and misses for the modes of operation to determine the best mode.
5. Rerun the tests if necessary.

LATX
Latx is a tool that is being developed by the NetApp technical support organization. It is currently in its
infancy, but it adds a lot of capability that is not available in other performance analysis tools. Latx includes
in its output a perfsys report, which provides the output described earlier. For NetApp internal use, Latx
allows you to upload a file via the Web browser and get the perfsys output without needing to have Perl or
the script installed on your local machine.
Latx is available at http://latx/ on the NetApp internal network.

4 PREDICTIVE CACHE STATISTICS MODES OF OPERATION

The PCS modes of operation match the three modes of operation for the Performance Acceleration Module, which provide the ability to tune the caching behavior to match the storage system's workload. As described in this section, each mode in the simulated cache allows a broader amount of data to be stored in the module than the previous one.

4.1 METADATA CACHING

Metadata mode allows only metadata into the extended cache area. In many random workloads, the actual
application data is seldom reused in a timely fashion that would benefit from a caching technology. However,
these workloads tend to reuse metadata, and as a result, gain can often be realized by filtering out other
types of data and allowing only metadata into the module.
Metadata is an often misunderstood concept in Data ONTAP. Metadata can be defined in two contexts. For
file services protocols such as NFS and CIFS, metadata generally means the data required to maintain the
file and directory structure. When applied to SAN, the term means the small number of blocks (fewer than 1
in 400) that are used for the bookkeeping of the data in a LUN. On NetApp storage systems, LUNs do not
have a file and directory structure associated with them.
Metadata-only caching is implemented by restricting user data from entering the module. Normal user data
and low-priority user data are kept from the module by setting their options to Off.



Figure 1) Metadata only; normal and low-priority user data are not allowed in the module.

To configure this mode, set the following options to off:

flexscale.lopri_blocks off
flexscale.normal_data_blocks off

4.2 NORMAL USER DATA CACHING (DEFAULT)

The default mode caches all normal data, just as would be cached by Data ONTAP in main memory. This
mode includes user and application data as well as metadata. It does not include low-priority data, which is
discussed in the next section.


Figure 2) Metadata and normal user data are cached; low-priority user data is not allowed in the module.

This is the default mode of operation. The following options set the module for normal user data caching:

flexscale.lopri_blocks off
flexscale.normal_data_blocks on


4.3 LOW-PRIORITY DATA CACHING

When low-priority mode is enabled, metadata, normal, and low-priority data are all cached. Normally, low-priority data isn't kept for long in system memory; it's the first thing to go when more space is needed.
Low-priority data is data that has a high chance of overrunning other cached data and/or has less likelihood of being reused. Data of this nature is less beneficial to keep, and is retained at a relatively low priority. The majority of low-priority user data falls into two categories:

Recent user or application writes

Large sequential reads

In the case of recent writes, the inbound write workload can be high enough that the writes overflow the cache and cause other data to be ejected. At the same time, writes can come in so fast that they aren't kept long enough before they have to be ejected to make room for more writes. In addition, heavy write workloads tend not to be read after writing, so they're not necessarily good candidates for caching at all.
Large sequential reads have a similar effect. Examples of this type of workload are large file copies, backups, database dumps, and so on. A large amount of data is brought through the cache, overwhelming the other data in it while itself remaining resident only a short time. Like writes, large sequential reads are seldom reread and also tend to be poor candidates.
These scenarios apply to the confines of main memory. With ample space to store recent writes or large sequential reads, there can be benefits to keeping the data resident in a large, fast-access location. The extended cache provided by the Performance Acceleration Module has the potential to absorb the low-priority data and keep it resident long enough to have a chance at reuse.
Because of the design of the Performance Acceleration Module, enabling low-priority mode for the module does not affect the behavior of main memory. Main memory, where space is more limited, still treats the data with low priority, whereas in the module it can now be kept with other user data.

Figure 3) Metadata, normal user data, and low-priority data are all allowed in the module.

To configure this mode, change the lopri_blocks setting to on:

flexscale.lopri_blocks on
flexscale.normal_data_blocks on

Note: The flexscale.normal_data_blocks option must be on for low-priority mode to work. Setting this option to off effectively results in metadata mode.


4.4 CHOOSING THE BEST MODE

To review, the three modes of operation are:

Metadata

Normal user data

Low-priority data

Each mode can cache more data than the previous one. This has two effects on how long and how much of
that data stays in the cache. When you choose a more restrictive mode, the data kept has more memory
available to it. More of the working set of that type of data can remain in the cache for a longer time than it
would if it were competing with data from other modes.
For example, if you choose to use metadata mode, the cache is used exclusively for metadata. No user data
is allowed into the cache. Potentially, all of the metadata for a working set can be kept in the cache,
providing fast access to it. If the working set is very large and active (terabytes), there is little chance of
reaccessing user data. However, having the metadata quickly available still improves the performance of the
workload.
When you choose to cache normal user data as well, the total pool of data being cached increases. This
may present more chances to have access hits to that data. Or, because a greater amount is now moving
through the module, the data needed for the hit may not stay around long enough to actually be accessed.
With this in mind, use the following guidance to determine which mode fits your workload.

1. Measure the performance of the system before enabling the module, as detailed in section 8.1.
2. Start with the default mode, normal user data. Measure the performance of the cache with this mode enabled, as described in section 3.
3. Using this data, determine the performance of the simulated cache.
4. If this is a substantial improvement, you may want to do nothing more. If not, change the mode to either metadata or low priority and repeat.
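As a simple illustration of step 4, the comparison can be reduced to tallying the measured hit rate from each run. The Python sketch below and the numbers passed to it are hypothetical, not measured results; in practice you would use the per-mode sums described in section 3.1.

def pick_mode(results):
    """Pick the run whose configuration served the most data from cache.

    results: dict of mode name -> total hits/s measured with that mode enabled.
    """
    best = max(results, key=results.get)
    return best, results[best] * 4   # winning mode and approximate KB/s served

# Illustrative numbers only, one entry per mode tried.
print(pick_mode({"normal": 3451, "low_priority": 4120, "metadata": 900}))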

5 APPENDIX
5.1 APPENDIX 1: SAMPLE OF EXT_CACHE_OBJ STATISTICS

=-=-=-=-=-= PERF systemname POSTSTATS =-=-=-=-=-= stats stop -I


perfstat_ext_cache_obj
TIME: 10:31:35
TIME_DELTA: 2:3 (123s)
ext_cache_obj:ec0:type:1
ext_cache_obj:ec0:uptime:3484948981
ext_cache_obj:ec0:blocks:8388608
ext_cache_obj:ec0:associativity:4
ext_cache_obj:ec0:sets:2097152
ext_cache_obj:ec0:usage:55%
ext_cache_obj:ec0:accesses_total:714557
ext_cache_obj:ec0:accesses:390/s
ext_cache_obj:ec0:accesses_sync:0/s
ext_cache_obj:ec0:hit:261/s
ext_cache_obj:ec0:hit_flushq:246/s
ext_cache_obj:ec0:hit_once:214/s
ext_cache_obj:ec0:hit_age:0/s


ext_cache_obj:ec0:hit_normal_lev0:248/s
ext_cache_obj:ec0:hit_metadata_file:0/s
ext_cache_obj:ec0:hit_directory:0/s
ext_cache_obj:ec0:hit_indirect:0/s
ext_cache_obj:ec0:hit_partial:0/s
ext_cache_obj:ec0:hit_sync:0/s
ext_cache_obj:ec0:hit_flushq_sync:0/s
ext_cache_obj:ec0:hit_once_sync:0/s
ext_cache_obj:ec0:hit_age_sync:0/s
ext_cache_obj:ec0:hit_normal_lev0_sync:0/s
ext_cache_obj:ec0:hit_metadata_file_sync:0/s
ext_cache_obj:ec0:hit_directory_sync:0/s
ext_cache_obj:ec0:miss:129/s
ext_cache_obj:ec0:miss_flushq:42/s
ext_cache_obj:ec0:miss_once:16/s
ext_cache_obj:ec0:miss_age:0/s
ext_cache_obj:ec0:miss_normal_lev0:50/s
ext_cache_obj:ec0:miss_metadata_file:1/s
ext_cache_obj:ec0:miss_directory:0/s
ext_cache_obj:ec0:miss_indirect:0/s
ext_cache_obj:ec0:miss_sync:0/s
ext_cache_obj:ec0:miss_flushq_sync:0/s
ext_cache_obj:ec0:miss_once_sync:0/s
ext_cache_obj:ec0:miss_age_sync:0/s
ext_cache_obj:ec0:miss_normal_lev0_sync:0/s
ext_cache_obj:ec0:miss_metadata_file_sync:0/s
ext_cache_obj:ec0:miss_directory_sync:0/s
ext_cache_obj:ec0:lookup_reject:0/s
ext_cache_obj:ec0:lookup_reject_sync:609/s
ext_cache_obj:ec0:lookup_reject_normal_l0:0/s
ext_cache_obj:ec0:lookup_reject_io:0/s
ext_cache_obj:ec0:lookup_chains:14/s
ext_cache_obj:ec0:lookup_chain_cnt:301/s
ext_cache_obj:ec0:hit_percent:66%
ext_cache_obj:ec0:hit_percent_sync:0%
ext_cache_obj:ec0:inserts:73/s
ext_cache_obj:ec0:inserts_flushq:49/s
ext_cache_obj:ec0:inserts_once:0/s
ext_cache_obj:ec0:inserts_age:0/s
ext_cache_obj:ec0:inserts_normal_lev0:51/s
ext_cache_obj:ec0:inserts_metadata_file:3/s
ext_cache_obj:ec0:inserts_directory:7/s
ext_cache_obj:ec0:inserts_indirect:9/s
ext_cache_obj:ec0:insert_rejects_misc:4/s
ext_cache_obj:ec0:insert_rejects_present:218/s
ext_cache_obj:ec0:insert_rejects_flushq:0/s
ext_cache_obj:ec0:insert_rejects_normal_lev0:0/s
ext_cache_obj:ec0:insert_rejects_throttle:0/s
ext_cache_obj:ec0:insert_rejects_throttle_io:0/s
ext_cache_obj:ec0:insert_rejects_throttle_refill:0/s
ext_cache_obj:ec0:insert_rejects_throttle_mem:0/s
ext_cache_obj:ec0:insert_rejects_cache_reuse:0/s
ext_cache_obj:ec0:insert_rejects_vbn_invalid:0/s
ext_cache_obj:ec0:reuse_percent:357%
ext_cache_obj:ec0:evicts:72/s
ext_cache_obj:ec0:evicts_ref:2/s
ext_cache_obj:ec0:readio_solitary:0/s


ext_cache_obj:ec0:readio_chains:0/s
ext_cache_obj:ec0:readio_blocks:0/s
ext_cache_obj:ec0:readio_in_flight:0
ext_cache_obj:ec0:readio_max_in_flight:0
ext_cache_obj:ec0:readio_avg_chainlength:0
ext_cache_obj:ec0:readio_avg_latency:0ms
ext_cache_obj:ec0:writeio_solitary:0/s
ext_cache_obj:ec0:writeio_chains:0/s
ext_cache_obj:ec0:writeio_blocks:0/s
ext_cache_obj:ec0:writeio_in_flight:0
ext_cache_obj:ec0:writeio_max_in_flight:0
ext_cache_obj:ec0:writeio_avg_chainlength:0
ext_cache_obj:ec0:writeio_avg_latency:0ms
ext_cache_obj:ec0:blocks_ref0:8345882
ext_cache_obj:ec0:blocks_ref1:41023
ext_cache_obj:ec0:blocks_ref2:608
ext_cache_obj:ec0:blocks_ref3:18446744073709551221
ext_cache_obj:ec0:blocks_ref4:18446744073709551257
ext_cache_obj:ec0:blocks_ref5:539
ext_cache_obj:ec0:blocks_ref6:380
ext_cache_obj:ec0:blocks_ref7:930
ext_cache_obj:ec0:blocks_ref0_arrivals:7905
ext_cache_obj:ec0:blocks_ref1_arrivals:50631
ext_cache_obj:ec0:blocks_ref2_arrivals:3933
ext_cache_obj:ec0:blocks_ref3_arrivals:2088
ext_cache_obj:ec0:blocks_ref4_arrivals:1900
ext_cache_obj:ec0:blocks_ref5_arrivals:2098
ext_cache_obj:ec0:blocks_ref6_arrivals:1437
ext_cache_obj:ec0:blocks_ref7_arrivals:5795
ext_cache_obj:ec0:lru_ticks:242592
ext_cache_obj:ec0:invalidates:0/s
ext_cache_obj:ec1:type:1
ext_cache_obj:ec1:uptime:3484948957
ext_cache_obj:ec1:blocks:8388608
ext_cache_obj:ec1:associativity:4
ext_cache_obj:ec1:sets:2097152
ext_cache_obj:ec1:usage:0%
ext_cache_obj:ec1:accesses_total:1391
ext_cache_obj:ec1:accesses:5/s
ext_cache_obj:ec1:accesses_sync:0/s
ext_cache_obj:ec1:hit:0/s
ext_cache_obj:ec1:hit_flushq:6/s
ext_cache_obj:ec1:hit_once:0/s
ext_cache_obj:ec1:hit_age:0/s
ext_cache_obj:ec1:hit_normal_lev0:7/s
ext_cache_obj:ec1:hit_metadata_file:0/s
ext_cache_obj:ec1:hit_directory:0/s
ext_cache_obj:ec1:hit_indirect:0/s
ext_cache_obj:ec1:hit_partial:0/s
ext_cache_obj:ec1:hit_sync:0/s
ext_cache_obj:ec1:hit_flushq_sync:0/s
ext_cache_obj:ec1:hit_once_sync:0/s
ext_cache_obj:ec1:hit_age_sync:0/s
ext_cache_obj:ec1:hit_normal_lev0_sync:0/s
ext_cache_obj:ec1:hit_metadata_file_sync:0/s
ext_cache_obj:ec1:hit_directory_sync:0/s
ext_cache_obj:ec1:miss:5/s


ext_cache_obj:ec1:miss_flushq:36/s
ext_cache_obj:ec1:miss_once:16/s
ext_cache_obj:ec1:miss_age:0/s
ext_cache_obj:ec1:miss_normal_lev0:43/s
ext_cache_obj:ec1:miss_metadata_file:1/s
ext_cache_obj:ec1:miss_directory:0/s
ext_cache_obj:ec1:miss_indirect:0/s
ext_cache_obj:ec1:miss_sync:0/s
ext_cache_obj:ec1:miss_flushq_sync:0/s
ext_cache_obj:ec1:miss_once_sync:0/s
ext_cache_obj:ec1:miss_age_sync:0/s
ext_cache_obj:ec1:miss_normal_lev0_sync:0/s
ext_cache_obj:ec1:miss_metadata_file_sync:0/s
ext_cache_obj:ec1:miss_directory_sync:0/s
ext_cache_obj:ec1:lookup_reject:0/s
ext_cache_obj:ec1:lookup_reject_sync:0/s
ext_cache_obj:ec1:lookup_reject_normal_l0:0/s
ext_cache_obj:ec1:lookup_reject_io:0/s
ext_cache_obj:ec1:lookup_chains:0/s
ext_cache_obj:ec1:lookup_chain_cnt:0/s
ext_cache_obj:ec1:hit_percent:0%
ext_cache_obj:ec1:hit_percent_sync:0%
ext_cache_obj:ec1:inserts:72/s
ext_cache_obj:ec1:inserts_flushq:0/s
ext_cache_obj:ec1:inserts_once:0/s
ext_cache_obj:ec1:inserts_age:0/s
ext_cache_obj:ec1:inserts_normal_lev0:0/s
ext_cache_obj:ec1:inserts_metadata_file:0/s
ext_cache_obj:ec1:inserts_directory:0/s
ext_cache_obj:ec1:inserts_indirect:0/s
ext_cache_obj:ec1:insert_rejects_misc:0/s
ext_cache_obj:ec1:insert_rejects_present:4/s
ext_cache_obj:ec1:insert_rejects_flushq:0/s
ext_cache_obj:ec1:insert_rejects_normal_lev0:0/s
ext_cache_obj:ec1:insert_rejects_throttle:0/s
ext_cache_obj:ec1:insert_rejects_throttle_io:0/s
ext_cache_obj:ec1:insert_rejects_throttle_refill:0/s
ext_cache_obj:ec1:insert_rejects_throttle_mem:0/s
ext_cache_obj:ec1:insert_rejects_cache_reuse:0/s
ext_cache_obj:ec1:insert_rejects_vbn_invalid:0/s
ext_cache_obj:ec1:reuse_percent:0%
ext_cache_obj:ec1:evicts:63/s
ext_cache_obj:ec1:evicts_ref:0/s
ext_cache_obj:ec1:readio_solitary:0/s
ext_cache_obj:ec1:readio_chains:0/s
ext_cache_obj:ec1:readio_blocks:0/s
ext_cache_obj:ec1:readio_in_flight:0
ext_cache_obj:ec1:readio_max_in_flight:0
ext_cache_obj:ec1:readio_avg_chainlength:0
ext_cache_obj:ec1:readio_avg_latency:0ms
ext_cache_obj:ec1:writeio_solitary:0/s
ext_cache_obj:ec1:writeio_chains:0/s
ext_cache_obj:ec1:writeio_blocks:0/s
ext_cache_obj:ec1:writeio_in_flight:0
ext_cache_obj:ec1:writeio_max_in_flight:0
ext_cache_obj:ec1:writeio_avg_chainlength:0
ext_cache_obj:ec1:writeio_avg_latency:0ms


ext_cache_obj:ec1:blocks_ref0:8384862
ext_cache_obj:ec1:blocks_ref1:3682
ext_cache_obj:ec1:blocks_ref2:825
ext_cache_obj:ec1:blocks_ref3:18446744073709551586
ext_cache_obj:ec1:blocks_ref4:33
ext_cache_obj:ec1:blocks_ref5:18446744073709551498
ext_cache_obj:ec1:blocks_ref6:18446744073709551455
ext_cache_obj:ec1:blocks_ref7:18446744073709551131
ext_cache_obj:ec1:blocks_ref0_arrivals:4438
ext_cache_obj:ec1:blocks_ref1_arrivals:6259
ext_cache_obj:ec1:blocks_ref2_arrivals:2204
ext_cache_obj:ec1:blocks_ref3_arrivals:815
ext_cache_obj:ec1:blocks_ref4_arrivals:653
ext_cache_obj:ec1:blocks_ref5_arrivals:292
ext_cache_obj:ec1:blocks_ref6_arrivals:302
ext_cache_obj:ec1:blocks_ref7_arrivals:816
ext_cache_obj:ec1:lru_ticks:256816
ext_cache_obj:ec1:invalidates:4/s
ext_cache_obj:ec2:type:1
ext_cache_obj:ec2:uptime:3484948917
ext_cache_obj:ec2:blocks:16777216
ext_cache_obj:ec2:associativity:8
ext_cache_obj:ec2:sets:2097152
ext_cache_obj:ec2:usage:1%
ext_cache_obj:ec2:accesses_total:0
ext_cache_obj:ec2:accesses:0/s
ext_cache_obj:ec2:accesses_sync:0/s
ext_cache_obj:ec2:hit:0/s
ext_cache_obj:ec2:hit_flushq:3/s
ext_cache_obj:ec2:hit_once:0/s
ext_cache_obj:ec2:hit_age:0/s
ext_cache_obj:ec2:hit_normal_lev0:5/s
ext_cache_obj:ec2:hit_metadata_file:0/s
ext_cache_obj:ec2:hit_directory:0/s
ext_cache_obj:ec2:hit_indirect:0/s
ext_cache_obj:ec2:hit_partial:0/s
ext_cache_obj:ec2:hit_sync:0/s
ext_cache_obj:ec2:hit_flushq_sync:0/s
ext_cache_obj:ec2:hit_once_sync:0/s
ext_cache_obj:ec2:hit_age_sync:0/s
ext_cache_obj:ec2:hit_normal_lev0_sync:0/s
ext_cache_obj:ec2:hit_metadata_file_sync:0/s
ext_cache_obj:ec2:hit_directory_sync:0/s
ext_cache_obj:ec2:miss:0/s
ext_cache_obj:ec2:miss_flushq:33/s
ext_cache_obj:ec2:miss_once:16/s
ext_cache_obj:ec2:miss_age:0/s
ext_cache_obj:ec2:miss_normal_lev0:38/s
ext_cache_obj:ec2:miss_metadata_file:1/s
ext_cache_obj:ec2:miss_directory:0/s
ext_cache_obj:ec2:miss_indirect:0/s
ext_cache_obj:ec2:miss_sync:0/s
ext_cache_obj:ec2:miss_flushq_sync:0/s
ext_cache_obj:ec2:miss_once_sync:0/s
ext_cache_obj:ec2:miss_age_sync:0/s
ext_cache_obj:ec2:miss_normal_lev0_sync:0/s
ext_cache_obj:ec2:miss_metadata_file_sync:0/s


ext_cache_obj:ec2:miss_directory_sync:0/s
ext_cache_obj:ec2:lookup_reject:0/s
ext_cache_obj:ec2:lookup_reject_sync:0/s
ext_cache_obj:ec2:lookup_reject_normal_l0:0/s
ext_cache_obj:ec2:lookup_reject_io:0/s
ext_cache_obj:ec2:lookup_chains:0/s
ext_cache_obj:ec2:lookup_chain_cnt:0/s
ext_cache_obj:ec2:hit_percent:0%
ext_cache_obj:ec2:hit_percent_sync:0%
ext_cache_obj:ec2:inserts:63/s
ext_cache_obj:ec2:inserts_flushq:0/s
ext_cache_obj:ec2:inserts_once:0/s
ext_cache_obj:ec2:inserts_age:0/s
ext_cache_obj:ec2:inserts_normal_lev0:0/s
ext_cache_obj:ec2:inserts_metadata_file:0/s
ext_cache_obj:ec2:inserts_directory:0/s
ext_cache_obj:ec2:inserts_indirect:0/s
ext_cache_obj:ec2:insert_rejects_misc:0/s
ext_cache_obj:ec2:insert_rejects_present:1/s
ext_cache_obj:ec2:insert_rejects_flushq:0/s
ext_cache_obj:ec2:insert_rejects_normal_lev0:0/s
ext_cache_obj:ec2:insert_rejects_throttle:0/s
ext_cache_obj:ec2:insert_rejects_throttle_io:0/s
ext_cache_obj:ec2:insert_rejects_throttle_refill:0/s
ext_cache_obj:ec2:insert_rejects_throttle_mem:0/s
ext_cache_obj:ec2:insert_rejects_cache_reuse:0/s
ext_cache_obj:ec2:insert_rejects_vbn_invalid:0/s
ext_cache_obj:ec2:reuse_percent:0%
ext_cache_obj:ec2:evicts:11/s
ext_cache_obj:ec2:evicts_ref:0/s
ext_cache_obj:ec2:readio_solitary:0/s
ext_cache_obj:ec2:readio_chains:0/s
ext_cache_obj:ec2:readio_blocks:0/s
ext_cache_obj:ec2:readio_in_flight:0
ext_cache_obj:ec2:readio_max_in_flight:0
ext_cache_obj:ec2:readio_avg_chainlength:0
ext_cache_obj:ec2:readio_avg_latency:0ms
ext_cache_obj:ec2:writeio_solitary:0/s
ext_cache_obj:ec2:writeio_chains:0/s
ext_cache_obj:ec2:writeio_blocks:0/s
ext_cache_obj:ec2:writeio_in_flight:0
ext_cache_obj:ec2:writeio_max_in_flight:0
ext_cache_obj:ec2:writeio_avg_chainlength:0
ext_cache_obj:ec2:writeio_avg_latency:0ms
ext_cache_obj:ec2:blocks_ref0:16776473
ext_cache_obj:ec2:blocks_ref1:640
ext_cache_obj:ec2:blocks_ref2:131
ext_cache_obj:ec2:blocks_ref3:18446744073709551615
ext_cache_obj:ec2:blocks_ref4:15
ext_cache_obj:ec2:blocks_ref5:2
ext_cache_obj:ec2:blocks_ref6:3
ext_cache_obj:ec2:blocks_ref7:18446744073709551569
ext_cache_obj:ec2:blocks_ref0_arrivals:579
ext_cache_obj:ec2:blocks_ref1_arrivals:1181
ext_cache_obj:ec2:blocks_ref2_arrivals:457
ext_cache_obj:ec2:blocks_ref3_arrivals:76
ext_cache_obj:ec2:blocks_ref4_arrivals:55


ext_cache_obj:ec2:blocks_ref5_arrivals:30
ext_cache_obj:ec2:blocks_ref6_arrivals:19
ext_cache_obj:ec2:blocks_ref7_arrivals:61
ext_cache_obj:ec2:lru_ticks:87054
ext_cache_obj:ec2:invalidates:1/s


5.2 APPENDIX 2: FLEXSCALE-PCS.XML FILE CONTENTS

<?xml version="1.0" ?>


<!-- Display in column format basic FlexScale PCS performance information
-->
<preset orientation="column" interval="5"
print_footer="on">
<object name="ext_cache_obj">
<counter name="blocks">
<title>Blocks</title>
<width>9</width>
</counter>
<counter name="usage">
<title>Usage</title>
<width>5</width>
</counter>
<counter name="hit">
<title>Hit</title>
<width>5</width>
</counter>
<counter name="miss">
<title>Miss</title>
<width>5</width>
</counter>
<counter name="hit_percent">
<title>Hit</title>
<width>3</width>
</counter>
<counter name="evicts">
<title>Evict</title>
<width>5</width>
</counter>
<counter name="invalidates">
<title>Invalidate</title>
<width>10</width>
</counter>
<counter name="inserts">
<title>Insert</title>
<width>6</width>
</counter>
</object>
</preset>

© 2008 NetApp. All rights reserved. Specifications are subject to change without notice. NetApp, the NetApp logo, Go
further, faster, Data ONTAP, and NOW are trademarks or registered trademarks of NetApp, Inc. in the United States


and/or other countries. Windows is a registered trademark of Microsoft Corporation. All other brands or products are
trademarks or registered trademarks of their respective holders and should be treated as such.

