G 169 – 01
given value or belongs to a given set of values. ISO 3534/1

3.2.9 random variable—a variable that may take any of the values of a specified set of values and with which is associated a probability distribution.

3.2.9.1 Discussion—A random variable that may take only isolated values is said to be "discrete." A random variable which may take any value within a finite or infinite interval is said to be "continuous." ISO 3534/1

3.2.10 replicates—test specimens with nominally identical composition, form, and structure.

3.2.11 response variable (dependent variable)—a random variable whose value depends on other variables (factors). Response variables within the context of this guide are usually property measurements (for example, tensile strength, gloss, color, and so forth). ISO 3534/3

4. Significance and Use

4.1 The correct use of statistics as part of a weathering program can greatly increase the usefulness of results. A basic understanding of statistics is required for the study of weathering performance data. Proper experimental design and statistical analysis strongly enhance decision-making ability. In weathering, there are many uncertainties brought about by exposure variability, method precision and bias, measurement error, and material variability. Statistical analysis is used to help decide which products are better, which test methods are most appropriate to gauge end use performance, and how reliable the results are.

4.2 Results from weathering exposures can show differences between products or between repeated testing. These results may show differences which are not statistically significant. The correct use of statistics on weathering data can increase the probability that valid conclusions are derived.

5. Test Program Development

5.1 Hypothesis Formulation:

5.1.1 All of the statistical methods in this guide are designed to test hypotheses. In order to apply the statistics, it is necessary to formulate a hypothesis. Generally, the testing is designed to compare things, with the customary comparison being:

Do the predictor variables significantly affect the response variable?

Taking this comparison into consideration, it is possible to formulate a default hypothesis that the predictor variables do not have a significant effect on the response variable. This default hypothesis is usually called Ho, or the Null Hypothesis.

5.1.2 The objective of the experimental design and statistical analysis is to test this hypothesis within a desired level of significance, usually an alpha level (α). The alpha level is the probability below which we reject the null hypothesis. It can be thought of as the probability of rejecting the null hypothesis when it is really true (that is, the chance of making such an error). Thus, a very small alpha level reduces the chance of making this kind of error in judgment. Typical alpha levels are 5 % (0.05) and 1 % (0.01). The x-axis value on a plot of the distribution corresponding to the chosen alpha level is generally called the critical value (cv).

5.1.3 The probability that a random variable X is greater than the critical value for a given distribution is written P(X>cv). This probability is often called the "p-value." In this notation, the null hypothesis can be rejected if

P(X>cv) < α

5.2 Experimental Design—The next step in setting up a weathering test is to design the weathering experiment. The experimental design will depend on the type and number of predictor variables, and the expected variability in the sample population, exposure conditions, and measurements. The experimental design will determine the amount of replication, specimen positioning, and appropriate statistical methods for analyzing the data.

5.2.1 Response Variable—The methods covered in this guide work for a single response variable. In weathering and durability testing, the response variable will usually be a quantitative property measurement such as gloss, color, tensile strength, modulus, and others. Sometimes, qualitative data such as a visual rating make up the response variable, in which case nonparametric statistical methods may be more appropriate.

5.2.1.1 If the response variable is "time to failure," or a counting process such as "the number of failures over a time interval," then reliability-based methods should be used.

5.2.1.2 Here are the key considerations regarding the response variable:

(1) What is the response variable?
(2) Will the data represent quantitative or qualitative measurements? Qualitative data may be best analyzed with a nonparametric method.
(3) What is the expected variability in the measurement? When there is a high amount of measurement variability, then more replication of test specimens is needed.
(4) What is the expected variability in the sample population? More variability means more replication.
(5) Is the comparison relative (ranked) or a direct comparison of sample statistics (for example, means)? Ranked data is best handled with nonparametric methods.

5.2.1.3 It is important to recognize that variability in exposure conditions will induce variability in the response variable. Variability in both outdoor and laboratory exposures has been well documented (for example, see Guide G 141). Excessive variability in exposure conditions will necessitate more replication. See 5.2.2 for additional information.

5.2.2 Predictor Variables—The objective of most of the methods in this guide is to determine whether or not the predictor variables had a significant effect on the response variable. The variables will be a mixture of things that are controllable (predictor variables, the items of interest), things that are uncontrolled (blocking variables), or, even worse, things that are not anticipated.

5.2.2.1 The most common variables in weathering and durability testing are the applied environmental stresses. These can be controlled, for example, temperature, irradiance, humidity
level in a laboratory device, or uncontrolled, that is, an arbitrary outdoor exposure. The controlled variables are the essence of the weathering experiment. They can take on discrete or continuous values.

NOTE 1—Even controlled environmental factors typically exhibit variability, which must be accounted for (see Guide G 141).

5.2.2.2 Some examples of discrete predictor variables are:

Polymer A, B, C
Ingredient A, B, C, D
Exposure location A versus B (for example, Ohio to Florida, or Laboratory 1, Laboratory 2, and Laboratory 3)

TABLE 1 EXAMPLE EXPERIMENT

Response Variable   Predictor Variable 1   Predictor Variable 2
x_AA1               A                      A
x_AA2               A                      A
x_AB1               A                      B
x_AB2               A                      B
x_AC1               A                      C
x_AC2               A                      C
x_BA1               B                      A
x_BA2               B                      A
x_BB1               B                      B
x_BB2               B                      B
x_BC1               B                      C
x_BC2               B                      C
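The replicated full-factorial layout shown in Table 1 can be generated mechanically. The sketch below is illustrative and not part of this guide; it enumerates two replicates of every combination of two discrete predictor variables, using level names matching Table 1.

```python
# Enumerate a replicated full-factorial design like Table 1:
# predictor 1 in {A, B}, predictor 2 in {A, B, C}, two replicates each.
from itertools import product

predictor_1 = ["A", "B"]
predictor_2 = ["A", "B", "C"]
replicates = 2

design = [(f"x_{p1}{p2}{r}", p1, p2)
          for p1, p2 in product(predictor_1, predictor_2)
          for r in range(1, replicates + 1)]

for label, p1, p2 in design:
    print(label, p1, p2)   # 12 rows, x_AA1 .. x_BC2
```

Enumerating the design up front makes the required number of test specimens (here 2 × 3 × 2 = 12) explicit before exposure begins.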
to try several approaches. A brief description of each method follows, along with a small example application.

6.2 Student's t-Test:

6.2.1 The Student's t-Test can be used to compare the means of two independent samples (random variables). This is the simplest comparison that can be made: there is only one factor with two possible states (by default discrete). Since it is such a direct and limited comparison, replication must be used, typically with at least three replicates in each sample. See Table 2.

6.2.2 The t-Test assumes that the data are close to normally distributed, although the test is fairly robust. The distributions of each sample need not be equal, however. For large sample sizes, the t-distribution approaches the normal distribution. If you have reason to suspect that the data are not normally distributed, an alternate method like Mann-Whitney may be more appropriate.

TABLE 2 STUDENT'S t-TEST EXAMPLE

Color Change   Formula
1.000          A
1.200          A
1.100          A
0.900          A
1.100          A
1.300          B
1.400          B
1.200          B

6.2.3 Often, physical property measurements are close to normally distributed. The following is an example problem and analysis. The analysis was calculated two ways: assuming that the populations had equal variance, and not making such an assumption. In either case, the resulting probability values indicate that there is a significant difference in the sample means (assuming an alpha level of 0.05).

Predictor samples t-test on COLOR CHANGE grouped by FORMULA:

Formula   N   Mean    Standard Deviation
A         5   1.060   0.114
B         3   1.300   0.100

Analysis Method      t Value   Degrees of Freedom   P(X>cv)
Separate variances   3.116     4.9                  0.036
Pooled variances     3.000     6.0                  0.024

P(X>cv) indicates the probability that a Student's t-distributed random variable is greater than the cv, that is, the area under the tail of the t-distribution to the right of Point t. Since this value in either case is below a pre-chosen alpha level of 0.05, the result is significant. Note that this result would not be significant at an alpha level of 0.01.
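The pooled-variance line of the analysis above can be reproduced by hand. The following sketch is not part of this guide; it applies the standard two-sample pooled t formula to the Table 2 data.

```python
# Recompute the pooled-variance Student's t statistic for the Table 2 data.
# Illustrative sketch of the standard two-sample formula, not from the guide.
from math import sqrt

formula_a = [1.000, 1.200, 1.100, 0.900, 1.100]
formula_b = [1.300, 1.400, 1.200]

def mean(xs):
    return sum(xs) / len(xs)

def sum_sq_dev(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n_a, n_b = len(formula_a), len(formula_b)
df = n_a + n_b - 2                       # pooled degrees of freedom
pooled_var = (sum_sq_dev(formula_a) + sum_sq_dev(formula_b)) / df
t = (mean(formula_b) - mean(formula_a)) / sqrt(pooled_var * (1 / n_a + 1 / n_b))

print(round(t, 3), df)                   # 3.0 6, matching the tabulated values
```

Comparing t = 3.000 with the critical t value for 6 degrees of freedom at the chosen alpha level gives the same significance decision as the tabulated P(X>cv) of 0.024.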
6.3 ANOVA:
6.3.1 Analysis of Variance (ANOVA) performs comparisons like the t-Test, but for an arbitrary number of predictor variables, each of which can have an arbitrary number of levels. Furthermore, each predictor variable combination can have any number of replicates. Like all the methods in this guide, ANOVA works on a single response variable. The predictor variables must be discrete. See Table 3.

6.3.2 The ANOVA can be thought of in a practical sense as an extension of the t-Test to an arbitrary number of factors and levels. It can also be thought of as a linear regression model whose predictor variables are restricted to a discrete set. Here is the example cited in the t-Test, extended to include an additional formula, and another factor. The new factor is to test whether the resulting formulation is affected by the technician who prepared it. There are two technicians and three formulas under consideration.

6.3.3 This example also illustrates that one need not have identical numbers of replicates for each sample. In this example, there are two replicates per factor combination for Formula A, but no replication appears for the other formulas.

Analysis of Variance
Response variable: COLOR CHANGE

Source       Sum of Squares   Degrees of Freedom   Mean Square   F Ratio   P(X>cv)
Formula      0.483            2                    0.241         16.096    0.025
Technician   0.005            1                    0.005         0.333     0.604
Error        0.045            3                    0.015         -         -

6.3.4 Assuming an alpha level of 0.05, the analysis indicates that the formula resulted in a significant difference in color change means, but the technician did not. This is evident from the probability values in the final column. Values below the alpha level allow rejection of the null hypothesis.

6.4 Linear Regression:

6.4.1 Linear regression is essentially an ANOVA in which the factors can take on continuous values. Since discrete factors can be set up as belonging to a subset of some larger continuous set, linear regression is a more general method. It is in fact the most general method considered in this guide. See Table 4.

6.4.2 The most elementary form of linear regression is easy to visualize. It is the case in which we have one predictor variable and one response variable. The easy way to think of the predictor variable is as an x-axis value of a two-dimensional plot. For each predictor variable level, we can plot the corresponding measurement (response variable) as a value on the ordinate axis. The idea is to see how well we can fit a line to the points on the plot. See Table 5.

6.4.3 For example, the following experiment looks at the effect of an impact modifying ingredient level on impact strength after one year of outdoor weathering in Arizona.

TABLE 4 REGRESSION EXAMPLE

Modifier Level   Impact Retention After Exposure
0.005            0.535
0.01             0.6
0.02             0.635
0.02             0.62
0.03             0.68
0.04             0.754
0.05             0.79

6.4.4 The plot of ingredient level versus retained impact strength, shown with a linear fit and 95 % confidence bands, looks like: (See Fig. 2)

6.4.5 This example illustrates the use of replicates at one of the levels. It is a good idea to test replicates at the levels that are thought to be important or desirable. The analysis indicates a good linear fit. We see this from the R² value (squared multiple R) of 0.976. The R² value is the fraction of the variability of the response variable explained by the regression model, and indicates the degree of fit to the model.

6.4.6 The analysis of variance indicates a significant relationship between modifier level and retained impact strength in this test (the probability level is well below an alpha level of 5 %).

Linear Regression Analysis
Response Variable: Impact Retention (%)
Number of Observations: 7
Multiple R: 0.988
Squared Multiple R: 0.976

Source       Degrees of Freedom   Sum of Squares   F Ratio   P(X>cv)
Regression   1                    0.0464           205.1     less than 0.0001
Residual     5                    0.0011           -         -

TABLE 5 PATHOLOGICAL LINEAR REGRESSION EXAMPLE

x      y
0.01   0.029979
0.02   0.054338
0.03   0.088581
0.04   0.082415
0.05   0.126631
0.06   0.073464
0.07   0.123222
0.08   0.097003
0.09   0.099728
0.75   0.805909
0.86   0.865667
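The R² reported for the Table 4 data can be checked with an ordinary least-squares fit. The sketch below is not part of this guide; it applies the textbook sums-of-squares formulas (R² = Sxy²/(Sxx·Syy), F = SSreg/(SSres/(n − 2))) to the modifier-level data.

```python
# Ordinary least-squares fit of the Table 4 data (modifier level vs. retention).
# Illustrative sketch using textbook sums-of-squares formulas, not from the guide.
x = [0.005, 0.01, 0.02, 0.02, 0.03, 0.04, 0.05]
y = [0.535, 0.600, 0.635, 0.620, 0.680, 0.754, 0.790]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx
r_squared = sxy ** 2 / (sxx * syy)       # fraction of variability explained
ss_regression = slope * sxy              # regression sum of squares
ss_residual = syy - ss_regression
f_ratio = ss_regression / (ss_residual / (n - 2))

print(round(r_squared, 3))               # 0.976, as reported in the analysis
```

The F ratio computed this way also reproduces the tabulated value to within rounding.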
6.4.8 Another use of regression is to compare two predictor random variables at a number of levels for each. For example, results from one exposure test can be plotted against the results from another exposure. If the points fall on a line, then one could conclude that the tests are "in agreement." This is called correlation. The usual statistic in a linear correlation analysis is R², which is a measure of deviation from the model (a straight line). R² values near one indicate good agreement with the model, while those near zero indicate poor agreement. This type of analysis is different from the approaches suggested above, which were constructed to test whether one random variable depended somehow on others. It should be noted, however, that correlation can always be phrased in ANOVA-like terms. The correlation example included for the Spearman rank correlation method illustrates this. The observations then make up a response random variable. Correlation on absolute results is not recommended in weathering testing. Instead, relative data (ranked data) often provide more meaningful analysis (see Spearman's rank correlation).

6.4.9 Regression/correlation can lead to misleadingly high R² values when the x-axis values are not well-spaced. Consider the following example, which contains a cluster of data that does not exhibit a good linear fit, along with a few outliers. Due to the large spread in the x-axis values, the clustered data appears almost as a single data point, resulting in a high R² value. (See Fig. 3)

Linear Regression Analysis
Number of Observations: 11
Multiple R: 0.997
Squared Multiple R: 0.994

Source       Degrees of Freedom   Sum of Squares   F Ratio   P(X>cv)
Regression   1                    0.9235           1509      less than 0.0001
Residual     9                    0.0055           -         -

6.4.10 Even though the analysis indicates a good fit to a linear model, the cluster of data does not fit a linear model well at all without the outliers. If the objective of this analysis were correlation, a ranked method like Spearman's (see 6.7) would provide a more reliable analysis.

6.5 Mann-Whitney:

6.5.1 The Mann-Whitney test is the nonparametric analog to the Student's t-Test. It is used to test for a difference in two populations. This test is also known as the Rank-Sum test, the U-test, and the Wilcoxon Test. The test works by ranking the combined data from both populations. It is important to look for repeats of the data (these are known as "ties"). Ties are treated as follows: the rank is equivalent to the sum of the ranking values normally assigned for that value of the response variable divided by the number of repeats for that value of the response variable. (See the following example.) The ranks are then summed for one of the groups. This rank sum is normally distributed for a sufficient number of observations, with the following mean and standard deviation:
mean = n_A (n_A + n_B + 1) / 2

standard deviation (SD) =
sqrt[ n_A n_B (n_A + n_B + 1) / 12  −  n_A n_B Σ(t_i³ − t_i) / (12 (n_A + n_B)(n_A + n_B − 1)) ]

where:
n_A = number of data points in Sample A, and
n_B = number of data points in Sample B.

If there are no ties in the data (see 6.5.1), the formula for standard deviation can be considerably simplified, because the second term under the radical (beginning with the minus sign) evaluates to zero.

6.5.2 The rank sum can be standardized by means of the transformation:

(rank sum − mean)/SD

This value can be compared with a table of z-values for the normal distribution to test for significance. (For small numbers of data points, the Student's t-distribution is more appropriate.) For example, consider the same data set that appears in the Student's t-Test section. Table 6 indicates a significant difference in sample means, since the standardized value is below the value of a normally distributed random variable at an alpha level of 0.05. This is the same conclusion as the t-Test.

TABLE 6 MANN-WHITNEY EXAMPLE

                         Rank Order
Color Change   Formula   Normal   Correction for Ties
0.9            A         1        1
1              A         2        2
1.1            A         3        3.5
1.1            A         4        3.5
1.2            A         5        5.5
1.2            B         6        5.5
1.3            B         7        7
1.4            B         8        8

Mann-Whitney Analysis:

Total Number of Observations: 8
mean = (5)(5 + 3 + 1)/2 = 22.5
standard deviation (SD) =
sqrt[ (5)(3)(5 + 3 + 1)/12  −  (5)(3)((2³ − 2) + (2³ − 2)) / ((12)(5 + 3)(5 + 3 − 1)) ] = 3.3139
Rank sum for Formula A = 1 + 2 + 3.5 + 3.5 + 5.5 = 15.5
Rank sum − mean = 15.5 − 22.5 = −7.0
Standardized value = −7.0/3.3139 = −2.11
Compare with an alpha level of 0.05 for a normal random variable, −1.96 to 1.96

6.6 Kruskal-Wallis:

6.6.1 The Kruskal-Wallis method is a nonparametric analog of single-factor ANOVA. This method compares the medians of three or more groups of samples. To carry out the Kruskal-Wallis method, the data are ranked just as in the Mann-Whitney method.

6.6.2 Unlike Mann-Whitney, the sampling distribution is arranged so that it follows the chi-square distribution, in which:

chi-square = [12 / (N(N + 1))] Σ_{i=1}^{k} (R_i² / n_i)  −  3(N + 1)

And, if there are ties, the following correction must be applied:

chi-square (corrected) = chi-square / [ 1 − Σ t(t² − 1) / (N(N² − 1)) ]

where:
N = total number of observations,
k = number of groups,
n_i = sample size of the ith group,
R_i = rank sum of the ith group, and
t = count of a particular tie.

6.6.3 This statistic is compared against the chi-square distribution with k − 1 degrees of freedom (see Table X2.1 if needed), and if it exceeds the value corresponding to the alpha level, the null hypothesis is rejected, which means that the median of the response variable of one or more of the sample sets is different from the others. See Table 7.

Kruskal-Wallis Analysis:

chi-square = [12 / ((36)(36 + 1))] (139²/12 + 200²/12 + 327²/12) − 3(36 + 1) = 13.813

Since there are ties, the corrected chi-square must be calculated:

chi-square (corr.) = 13.813 / [ 1 − ((3)(3² − 1) + (2)(2² − 1) + (2)(2² − 1) + (4)(4² − 1) + (2)(2² − 1) + (2)(2² − 1)) / ((36)(36² − 1)) ] = 13.84
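The Mann-Whitney arithmetic of 6.5.2 above can be reproduced directly. The sketch below is not part of this guide; it midranks the pooled Table 6 data, applies the mean and tie-corrected SD formulas from 6.5.1, and standardizes the rank sum.

```python
# Mann-Whitney rank-sum calculation for the Table 6 data, following the
# mean / tie-corrected SD formulas in 6.5.1 (illustrative sketch).
from math import sqrt
from collections import Counter

sample_a = [0.9, 1.0, 1.1, 1.1, 1.2]
sample_b = [1.2, 1.3, 1.4]

pooled = sorted(sample_a + sample_b)
# Midrank for each value: the average of the positions the tied values occupy.
ranks = {}
i = 0
while i < len(pooled):
    j = i
    while j < len(pooled) and pooled[j] == pooled[i]:
        j += 1
    ranks[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
    i = j

n_a, n_b = len(sample_a), len(sample_b)
rank_sum_a = sum(ranks[v] for v in sample_a)

mean = n_a * (n_a + n_b + 1) / 2
tie_counts = [t for t in Counter(pooled).values() if t > 1]
sd = sqrt(n_a * n_b * (n_a + n_b + 1) / 12
          - n_a * n_b * sum(t**3 - t for t in tie_counts)
          / (12 * (n_a + n_b) * (n_a + n_b - 1)))

z = (rank_sum_a - mean) / sd
print(rank_sum_a, mean, round(sd, 4), round(z, 2))   # 15.5 22.5 3.3139 -2.11
```

The standardized value of −2.11 falls outside −1.96 to 1.96, reproducing the significance decision in the worked example.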
TABLE 7 KRUSKAL-WALLIS EXAMPLE

                  Rank Order
Formula   Gloss   Normal   Correction for Ties
A 6 1 1
A 8 3 3
A 10 5 5
A 11 6 7
A 12 9 9
A 14 11 11.5
A 14 12 11.5
A 15 13 13
A 18 16 16
A 20 19 19
A 21 20 20
A 24 23 23
B 7 2 2
B 9 4 4
B 11 7 7
B 11 8 7
B 16 14 14
B 17 15 15
B 19 17 17.5
B 22 21 21
B 23 22 22
B 26 25 26.5
B 27 29 29.5
B 31 34 34.5
C 13 10 10
C 19 18 17.5
C 25 24 24
C 26 26 26.5
C 26 27 26.5
C 26 28 26.5
C 27 30 29.5
C 28 31 31
C 29 32 32
C 30 33 33
C 31 35 34.5
C 32 36 36
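The chi-square values reported after 6.6.3 can be recomputed from the Table 7 gloss readings. The sketch below is not part of this guide; it pools and midranks the three formulas' data, then applies the 6.6.2 statistic and its tie correction, reproducing the reported values to within rounding.

```python
# Kruskal-Wallis statistic for the Table 7 gloss data, with tie correction
# per 6.6.2 (illustrative sketch, not part of the guide).
from collections import Counter

groups = {
    "A": [6, 8, 10, 11, 12, 14, 14, 15, 18, 20, 21, 24],
    "B": [7, 9, 11, 11, 16, 17, 19, 22, 23, 26, 27, 31],
    "C": [13, 19, 25, 26, 26, 26, 27, 28, 29, 30, 31, 32],
}

pooled = sorted(v for g in groups.values() for v in g)
N = len(pooled)

# Midranks: each distinct value gets the average of the positions it occupies.
counts = Counter(pooled)
midrank, position = {}, 1
for value in sorted(counts):
    t = counts[value]
    midrank[value] = position + (t - 1) / 2
    position += t

rank_sums = {g: sum(midrank[v] for v in vs) for g, vs in groups.items()}

chi_square = (12 / (N * (N + 1))) * sum(
    rank_sums[g] ** 2 / len(vs) for g, vs in groups.items()
) - 3 * (N + 1)

ties = [t for t in counts.values() if t > 1]
correction = 1 - sum(t * (t**2 - 1) for t in ties) / (N * (N**2 - 1))
chi_square_corrected = chi_square / correction

print(round(chi_square, 2), round(chi_square_corrected, 2))   # 13.81 13.84
```

The rank sums come out to 139, 200, and 327, matching the worked example, and the corrected statistic well exceeds the chi-square critical value for 2 degrees of freedom at an alpha level of 0.05.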
correlation does not make any assumptions about the normality of the distributions of the underlying data.

6.7.3 Spearman's method works by assigning a rank to each observation in each group separately (contrast this to the previous rank-sum methods, in which the ranks are pooled). Ties are still ranked as in Mann-Whitney or Kruskal-Wallis, but the actual calculation does not have to be corrected. The Spearman's correlation is calculated according to the following formula:

r_s = 1 − 6 Σ d_i² / (n(n² − 1))

where d_i is the difference between the ranks assigned to the ith observation in each group, and n is the number of observations. The example uses 60° gloss retention (%) as the response variable, color as one predictor variable (10 levels), and exposure condition as another predictor variable (2 levels). See Table 8 for the data, along with rankings for use in the Spearman's calculation. Data analysis according to Spearman's method appears as follows, along with some other methods of comparison:

TABLE 8 Correlation Example
Spearman’s Rank Correlation Analysis: Number of Observations: 10
Dependent Variable: 60° Gloss Retention (%)
Grouped by Exposure Type
~6!@~2 – 2!2 1 ~1 – 1!2 1 ~10 – 10!2 1 ~9 – 9!2 1 ~7 – 6!2 1 ~4 – 3!2 1 ~3 – 4!2 1 ~8 – 8!2 1 ~5 – 5!2 1 ~6 – 7!2#
rs 5 1 –
~10!~102 – 1!
rs 5 0.975758
Linear Regression Analysis (Correlation):
Dependent Variable: 60° Gloss Retention (%)
Number of Observations: 10
Multiple R: 0.9394
Squared Multiple R: 0.8824
(See Fig. 4)
Analysis of Variance:
Dependent Variable: 60° Gloss Retention (%)
Sum of Degrees of
Source squares Freedom Mean Square F Ratio P-value
Color 0.733641 9 0.081516 9.39231 0.001323
Exposure 0.416793 1 0.416793 48.02333 6.84E05
Error 0.078111 9 0.008679 - -
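The Spearman coefficient above follows from the rank pairs alone. The sketch below is not part of this guide; it applies the r_s formula to the ten rank pairs that appear in the worked calculation.

```python
# Spearman's rank correlation from the ten rank pairs in the worked example.
# Illustrative sketch of r_s = 1 - 6*sum(d^2) / (n*(n^2 - 1)).
rank_pairs = [(2, 2), (1, 1), (10, 10), (9, 9), (7, 6),
              (4, 3), (3, 4), (8, 8), (5, 5), (6, 7)]

n = len(rank_pairs)
sum_d_squared = sum((a - b) ** 2 for a, b in rank_pairs)
r_s = 1 - 6 * sum_d_squared / (n * (n**2 - 1))

print(round(r_s, 6))   # 0.975758, matching the analysis above
```

Only three of the ten colors swap adjacent rank positions between the two exposures, which is why the coefficient is close to one.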
8. Summary of Results
8.1 The Spearman’s method indicates good agreement in
material durability rankings between the exposures. Linear
regression indicates a good fit to a linear model.
8.2 The correlation plot illustrates this graphically. How-
ever, from the plot, we see that the Arizona exposure resulted
in lower retained gloss overall. We also see that there is a wide
spread in durability for the 10 different colors.
8.3 ANOVA detects the differences in harshness between
exposures, and indicates that they are significantly different.
ANOVA also detects the differences in retained gloss across the
ten colors, indicating that in this example, color is a significant
factor.
9. Keywords
9.1 experimental design; statistics; weathering
APPENDIXES
(Nonmandatory Information)
X1. RESOURCES
Downie, N. M., and Heath, R. W., Basic Statistical Methods, 4th ed., Harper & Row Publishers, New York, 1974.

Freund, J. E., Modern Elementary Statistics, 4th ed., Prentice Hall, 1974.

Simon, L. E., An Engineer's Manual of Statistical Methods, John Wiley & Sons, New York, 1941.

Sheskin, David J., Handbook of Parametric and Nonparametric Statistical Procedures, CRC Press, New York, 1997.

Gonick, Larry, and Smith, Woolcott, The Cartoon Guide to Statistics, HarperCollins, New York, 1993.
ASTM International takes no position respecting the validity of any patent rights asserted in connection with any item mentioned
in this standard. Users of this standard are expressly advised that determination of the validity of any such patent rights, and the risk
of infringement of such rights, are entirely their own responsibility.
This standard is subject to revision at any time by the responsible technical committee and must be reviewed every five years and
if not revised, either reapproved or withdrawn. Your comments are invited either for revision of this standard or for additional standards
and should be addressed to ASTM International Headquarters. Your comments will receive careful consideration at a meeting of the
responsible technical committee, which you may attend. If you feel that your comments have not received a fair hearing you should
make your views known to the ASTM Committee on Standards, at the address shown below.
This standard is copyrighted by ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959,
United States. Individual reprints (single or multiple copies) of this standard may be obtained by contacting ASTM at the above
address or at 610-832-9585 (phone), 610-832-9555 (fax), or service@astm.org (e-mail); or through the ASTM website
(www.astm.org).