© 2012 by G. David Garson and Statistical Associates Publishing. All rights reserved
worldwide in all media. No permission is granted to any user to copy or post this work in
any format or any media.
The author and publisher of this eBook and accompanying materials make no
representations or warranties with respect to the accuracy, applicability, fitness, or
completeness of the contents of this eBook or accompanying materials. The author and
publisher disclaim any warranties (express or implied), merchantability, or fitness for any
particular purpose. The author and publisher shall in no event be held liable to any party for
any direct, indirect, punitive, special, incidental or other consequential damages arising
directly or indirectly from any use of this material, which is provided “as is”, and without
warranties. Further, the author and publisher do not warrant the performance,
effectiveness or applicability of any sites listed or linked to in this eBook or accompanying
materials. All links are for information purposes only and are not warranted for content,
accuracy or any other implied or explicit purpose. This eBook and accompanying materials
are copyrighted by G. David Garson and Statistical Associates Publishing. No part of this may
be copied, or changed in any format, sold, or used in any way under any circumstances
other than reading by the downloading individual.
Contact:
Email: gdavidgarson@gmail.com
Web: www.statisticalassociates.com
Table of Contents
Overview .................................................................................................................................... 6
Key Terms and Concepts ............................................................................................................ 7
Variables ................................................................................................................................ 7
Discriminant functions........................................................................................................... 7
Pairwise group comparisons.................................................................................................. 8
Output statistics..................................................................................................................... 8
Examples .................................................................................................................................... 9
SPSS user interface ..................................................................................................................... 9
The “Statistics” button ........................................................................................................ 10
The “Classify” button ........................................................................................................... 10
The “Save” button ............................................................................................................... 13
The “Bootstrap” button ....................................................................................................... 13
The “Method” button .......................................................................................................... 14
SPSS Statistical output for two-group DA ................................................................................ 16
The “Analysis Case Processing Summary” table.................................................................. 16
The “Group Statistics” table ................................................................................................ 16
The “Tests of Equality of Group Means” table .................................................................... 16
The “Pooled Within-Group Matrices” and “Covariance Matrices” tables. ......................... 18
The “Box’s Test of Equality of Covariance Matrices” tables ............................................... 18
The “Eigenvalues” table....................................................................................................... 19
The “Wilks’ Lambda” table .................................................................................................. 21
The “Standardized Canonical Discriminant Function Coefficients” table ........................... 21
The “Structure Matrix” table ............................................................................................... 23
The “Canonical Discriminant Functions Coefficients” table ................................................ 23
The “Functions at Group Centroids” table .......................................................................... 24
The “Classification Processing Summary” table .................................................................. 24
The “Prior Probabilities for Groups” table .......................................................................... 25
The “Classification Function Coefficients” table ................................................................. 25
Linearity .................................................................................................................................... 43
Additivity .................................................................................................................................. 43
Multivariate normality ............................................................................................................. 43
Frequently Asked Questions ......................................................................................................... 44
Isn't discriminant analysis the same as cluster analysis?......................................................... 44
When does the discriminant function have no constant term? .............................................. 44
How important is it that the assumptions of homogeneity of variances and of multivariate
normal distribution be met? .................................................................................................... 44
In DA, how can you assess the relative importance of the discriminating variables?............. 44
Dummy variables ................................................................................................................. 45
In DA, how can you assess the importance of a set of discriminating variables over and above
a set of control variables? (What is sequential discriminant analysis?) .................................. 45
What is the maximum likelihood estimation method in discriminant analysis (logistic
discriminant function analysis)?............................................................................... 45
What are Fisher's linear discriminant functions? .................................................................... 46
I have heard DA is related to MANCOVA. How so? ................................................................. 46
How does MDA work? .............................................................................................................. 46
How can I tell if MDA worked?................................................................................................. 46
For any given MDA example, how many discriminant functions will there be, and how can I
tell if each is significant? .......................................................................................................... 47
What are Mahalonobis distances? ........................................................................................... 47
How are the multiple discriminant scores on a single case interpreted in MDA? .................. 47
Likewise in MDA, there are multiple standardized discriminant coefficients - one set for each
discriminant function. In dichotomous DA, the ratio of the standardized discriminant
coefficients is the ratio of the importance of the independent variables. But how are the
multiple set of standardized coefficients interpreted in MDA? .............................................. 48
Are the multiple discriminant functions the same as factors in principal-components factor
analysis? ................................................................................................................................... 48
What is the syntax for discriminant analysis in SPSS? ............................................................. 48
Bibliography .................................................................................................................................. 50
Discriminant analysis has two basic steps: (1) an F test (Wilks' lambda) is used to
test whether the discriminant model as a whole is significant, and (2) if the F test
shows significance, then the individual independent variables are assessed to see
which differ significantly in mean by group, and these are used to classify the
dependent variable.
The discriminant function is similar to a regression equation, but the b's are
discriminant coefficients which maximize the distance between the means of the
criterion (dependent) variable. The foregoing assumes the discriminant function is
estimated using ordinary least squares, the traditional method, though maximum
likelihood estimation is also possible.
There is one discriminant function for 2-group discriminant analysis, but for
higher order DA, the number of functions (each with its own cut-off value) is the
lesser of (g - 1), where g is the number of categories in the grouping variable, or p,
the number of discriminating (independent) variables. Each discriminant function
is orthogonal to the others. In multiple discriminant analysis, where there is more
than one function, each discriminant function is also referred to as a dimension.
The first function maximizes the differences between the values of the dependent
variable. The second function is orthogonal to it (uncorrelated with it) and
maximizes the differences between values of the dependent variable, controlling
for the first function. And so on. Though mathematically distinct, each discriminant
function is a dimension which differentiates a case into categories of the
dependent variable based on its values on the independents. The first
function will be the most powerful differentiating dimension, but later functions
may also represent additional significant dimensions of differentiation.
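The counting rule above can be sketched in Python (the document's own computations are done in SPSS; this is only an illustration):

```python
# Number of discriminant functions: the lesser of (g - 1) and p, where g
# is the number of categories in the grouping variable and p is the
# number of discriminating (independent) variables.

def n_discriminant_functions(g: int, p: int) -> int:
    """Return the number of discriminant functions for g groups and p predictors."""
    return min(g - 1, p)

# Two-group DA always yields a single function:
print(n_discriminant_functions(2, 5))   # 1
# A three-group MDA with six predictors yields two:
print(n_discriminant_functions(3, 6))   # 2
```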
Output statistics
DA and MDA output a variety of coefficients and tables to be discussed in
conjunction with examples below. Among these are eigenvalues, canonical
correlations, discriminant scores, discriminant coefficients, functions at group
centroids, and various measures of significance.
Examples
The sections which follow discuss two examples, one for DA and one for MDA.
DA example. Using a modified SPSS sample data file, GSS93 subset.sav, voter
participation in the 1992 presidential election (vote92, coded 1=voted, 2=did not
vote) is predicted from sex (sex, coded 1=male, 2=female), age (age in years),
educ (highest year of school completed), rincome91 (respondent’s 1991 income,
coded in 21 ascending income ranges), and self-classified liberalism (polviews,
coded from 1= extremely liberal to 7=extremely conservative).
MDA example. Using the same dataset, MDA is used to try to classify race (race,
coded 1=white, 2=black, 3=other) using the predictor variables educ, rincome91,
polviews, agewed (age when first wed), sibs (number of siblings), and rap (rap
music, coded from 1=like very much to 5=dislike very much).
The discriminant score is the value resulting from applying a discriminant function
formula to the data for a given case. A “Z score” is the discriminant score for
standardized data. To get discriminant scores in SPSS, check "Discriminant scores"
in the dialog above. One can also view the discriminant scores by clicking the
Classify button and checking "Casewise results."
The “Bootstrap” button
The “Bootstrap” button is shown below with the defaults that appear if “Perform
bootstrapping” is selected (not selected is the default, which grays out all
selections in the bootstrap dialog). Bootstrapping cannot be selected at the same
time as requesting saved variables, and it is not selected in the example below.
default being .05 for entry and .10 for removal. Using these defaults,
variables are added if the significance level of F is less than .05 and are
removed if it is greater than .10.
3. Summary of steps is default output for stepwise discriminant function
analysis. At each step, statistics are displayed for all variables.
4. F for pairwise distances is not default output. If selected, a matrix is output
showing the pairwise F ratios for each pair of groups. In DA as opposed to
MDA, there is only one such ratio since there are only two groups.
specified model until all predictors were significant and only then consider
subsequent tables discussed below.
The smaller the Wilks' lambda for an independent variable, the more that variable
contributes to the discriminant function, so in the table above, education is the
variable contributing the most to classification of voters and non-voters. Lambda
varies from 0 to 1, with 1 meaning all group means are the same and any lower
value indicating differences in means across groups. Wilks' lambda is sometimes
called the U statistic.
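The interpretation above can be illustrated with a minimal univariate sketch: for a single discriminating variable, Wilks' lambda is the within-group sum of squares divided by the total sum of squares. The education values below are invented for illustration and are not from the GSS93 data:

```python
# Univariate Wilks' lambda for one discriminating variable:
# lambda = within-group SS / total SS. Values near 1 mean group means
# are alike; smaller values mean the variable separates groups better.
# Data are hypothetical, not from the GSS93 example.

def wilks_lambda(groups):
    """groups: list of lists, one list of observations per group."""
    allvals = [x for g in groups for x in g]
    grand_mean = sum(allvals) / len(allvals)
    ss_total = sum((x - grand_mean) ** 2 for x in allvals)
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return ss_within / ss_total

voters = [12, 14, 16, 13]     # hypothetical years of education, voted
nonvoters = [10, 11, 12, 9]   # hypothetical years of education, did not vote
print(round(wilks_lambda([voters, nonvoters]), 3))  # 0.394
```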
Note that dummy independent variables are more accurately tested with a Wilks’
lambda difference test than with Wilks' lambda as it appears in the table above.
The researcher may run a model with and without a set of dummies (ex., for
region, with values being East, West, North, and with South left out as the
reference level). The ratio of the Wilks' lambdas for the two models may then be
tested: the lambda for the model without the dummies is divided by the lambda
for the model with the dummies, and an approximate F value for this ratio may be
computed using calculations reproduced in Tabachnick and Fidell (2001: 491).
SPSS does not directly support this test, which may also be used in any sequential
discriminant analysis, such as where the models are with and without a set of
control variables.
The “Pooled Within-Group Matrices” and “Covariance Matrices” tables.
These tables, not illustrated, show the covariance and correlation matrices pooled
across groups (the pooled table) and by group (the “Covariance Matrices” table). If
covariances and correlations vary markedly by group, this may lead the researcher
to select the “Separate-groups” rather than the “Within-groups” option under
“Use covariance matrix” in the classification dialog discussed above.
The “Box’s Test of Equality of Covariance Matrices” tables
Box’s M is a statistical test of whether the covariance matrices differ by group. As
such it is a more accurate test than visual inspection of the “Covariance Matrices”
table discussed above. The “Sig.” value in the “Test Results” table illustrated
below should be non-significant in a DA model using the default classification
setting of “Within-groups” classification discussed above.
When sample size is large, even very small differences in covariance matrices may
be found significant by Box's M. Moreover, although DA may be robust even
when the assumption of multivariate normality is violated, Box's M is very
sensitive to whether that assumption is met. The Box's M test is usually ignored if, in
the “Log Determinants” table shown above, the log determinants of the two
groups are similar. If the determinants are markedly dissimilar, the researcher
may opt for quadratic DA (not supported by SPSS) or may check “Separate-
groups” in the “Classification” button dialog discussed above. There is also the
option of running the model on a “Within-groups” and on a “Separate-groups”
covariance basis and seeing if results are substantively similar, in which case a
significant Box’s M would be ignored.
The “Eigenvalues” table
Eigenvalues, also called characteristic roots, reflect the ratio of importance of the
discriminant functions (equations representing dimensions) used to classify
observations. There is one eigenvalue for each discriminant function. For two-
group DA, there is only one discriminant function and one eigenvalue, which
accounts for 100% of the explained variance; the researcher therefore cannot use
the eigenvalue to compare the relative importance of functions. The canonical
correlation associated with the discriminant function is also the correlation of that
function with the discriminant scores. A canonical correlation close to 1 means
that nearly all the variance in the
discriminant scores can be attributed to group differences explained by the given
function. Note that for two-group DA, the canonical correlation is equivalent to
the Pearsonian correlation of the discriminant scores with the grouping variable.
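A discriminant function's canonical correlation can also be recovered from its eigenvalue, since the squared canonical correlation is the between-groups share of total variance. The sketch below uses hypothetical eigenvalues, not SPSS output:

```python
import math

# Rc = sqrt(eigenvalue / (1 + eigenvalue)): the eigenvalue is the ratio
# of between-groups to within-groups variance, so eigenvalue/(1+eigenvalue)
# is the between-groups share of total variance, i.e. Rc squared.

def canonical_correlation(eigenvalue: float) -> float:
    return math.sqrt(eigenvalue / (1.0 + eigenvalue))

print(round(canonical_correlation(0.9), 3))   # 0.688
print(round(canonical_correlation(0.05), 3))  # 0.218
```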
Thus the structure coefficients show the order of importance of the discriminating
variables by total correlation, whereas the standardized discriminant coefficients
show the order of importance by unique contribution. The sign of the structure
coefficient also shows the direction of the relationship. For multiple discriminant
analysis, the structure coefficients additionally allow the researcher to see the
relative importance of each independent variable on each dimension.
The unstandardized canonical discriminant function coefficients are used to classify
new cases just as unstandardized regression coefficients are used
to construct the prediction equation. In the table shown above, the canonical
discriminant function coefficient for education is .288 and that is its value (slope)
in the discriminant function equation for the first (and in DA, only) function.
Unstandardized discriminant function coefficients represent an intermediate step
in discriminant function analysis and usually are not reported in research findings.
The constant plus the sum of products of the unstandardized coefficients with the
observations yields the discriminant scores. That is, unstandardized discriminant
coefficients are the regression-like b coefficients in the discriminant function, in
the form L = b1x1 + b2x2 + ... + bnxn + c, where L is the latent variable formed by
the discriminant function, the b's are discriminant coefficients, the x's are
discriminating variables, and c is a constant. The discriminant function coefficients
are partial coefficients, reflecting the unique contribution of each variable to the
classification of the criterion variable.
If one clicks the Statistics button in SPSS after running discriminant analysis and
then checks "Unstandardized coefficients," then SPSS output will include the
unstandardized discriminant coefficients.
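The scoring arithmetic of L = b1x1 + b2x2 + ... + bnxn + c can be sketched in Python. Only the education coefficient (.288) comes from the text; the other coefficients, the constant, and the case values are hypothetical:

```python
# Discriminant score for one case: the constant plus the sum of products
# of the unstandardized coefficients with the case's observed values.

def discriminant_score(coefficients, values, constant):
    return sum(b * x for b, x in zip(coefficients, values)) + constant

b = [0.288, -0.012, 0.104]  # educ (.288, from text); age and polviews hypothetical
x = [16, 45, 4]             # one case: 16 years of education, age 45, polviews 4
c = -4.5                    # hypothetical constant
print(round(discriminant_score(b, x, c), 3))  # -0.016
```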
The “Functions at Group Centroids” table
Functions at group centroids are the mean discriminant scores for each of the
dependent variable categories for each of the discriminant functions. In the figure
above, for instance, the mean discriminant score for function 1 (the only function
in DA) is .236. Two-group discriminant analysis has two centroids, one for each
group. In a well-discriminating model, the means should be well apart. The closer
the means, the more errors of classification there likely will be.
Functions at group centroids are used to establish the cutting point for classifying
cases. If the two groups are of equal size, the best cutting point is half way
between the values of the functions at group centroids (that is, the average). If
the groups are unequal, the optimal cutting point is the weighted average of the
two values. Cases which evaluate on the function above the cutting point are
classified as "did not vote," while those evaluating below the cutting point are
classified as "voted."
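For the equal-group-size case, the cutting point and classification rule above can be sketched as follows. The .236 centroid is mentioned in the text; the second centroid and the case scores are hypothetical:

```python
# Cutting point for two equal-sized groups: halfway between the
# functions at group centroids. Cases above the cut are classified
# "did not vote", cases below "voted", following the text.

def midpoint_cut(centroid_a: float, centroid_b: float) -> float:
    return (centroid_a + centroid_b) / 2.0

def classify(score: float, cut: float) -> str:
    return "did not vote" if score > cut else "voted"

cut = midpoint_cut(0.236, -0.764)  # about -0.264; second centroid hypothetical
print(classify(0.5, cut))          # did not vote
print(classify(-1.0, cut))         # voted
```

For unequal group sizes the text's weighted average would replace the simple midpoint.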
The “Classification Processing Summary” table
This table, not illustrated, reports the number of cases with missing or out-of-
range codes on the dependent variable, and also reports the number of cases
with at least one missing discriminating variable. Both types of cases are excluded
from analysis. The table also reports the remaining cases used in output.
The “Prior Probabilities for Groups” table
This table reminds the researcher of the prior probabilities assumed for purposes
of classification. If prior probabilities were set to “All groups equal” in the
classification dialog discussed above, then for DA, which has two dependent
groups, this table will report both prior probabilities as being .500. If, as in this
example, the prior probability option is set to “Compute from group sizes”, then
the table below is output. The coefficient in the “Prior” column is the
“Unweighted” value for that row divided by the total: that group's percentage of
the sample. Prior probabilities are used to make classification into the more
numerous group more likely.
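The “Compute from group sizes” arithmetic can be sketched as follows; the group counts are hypothetical, not the GSS93 values:

```python
# Prior probability for each group = that group's unweighted case count
# divided by the total count. The priors always sum to 1.

def priors_from_sizes(counts):
    total = sum(counts)
    return [n / total for n in counts]

voted, did_not_vote = 1203, 604  # hypothetical unweighted counts
print([round(p, 3) for p in priors_from_sizes([voted, did_not_vote])])  # [0.666, 0.334]
```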
The table illustrated below is output when "Fisher's" is checked under "Function
Coefficients" in the "Statistics" option of discriminant analysis discussed above.
Two sets (one for each dependent group in DA) of unstandardized linear
discriminant coefficients are calculated, which can be used to classify cases. This is
the classical method of classification, now little used.
The hit ratio (here, 76.1%) must be compared not to zero or even to 50%, but to
the percent that would have been correctly classified by chance alone.
• Perhaps the most common criterion for “by chance alone” is obtained by
multiplying the prior probabilities times the group sizes, summing over all
groups, and dividing the sum by N, using the numbers from the prior
probabilities table shown earlier above.
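The by-chance criterion just described can be sketched in Python; the group counts are hypothetical stand-ins for the prior probabilities table:

```python
# By-chance classification rate: multiply each group's prior probability
# by its size, sum over groups, divide by N. With priors computed from
# group sizes this equals the sum of squared group proportions.

def chance_criterion(counts):
    total = sum(counts)
    priors = [n / total for n in counts]
    return sum(p * n for p, n in zip(priors, counts)) / total

print(round(chance_criterion([1203, 604]), 3))  # 0.555
```

The hit ratio (76.1% in the text) would then be compared against this by-chance rate rather than against 50%.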
MDA output parallels that for DA except that the dependent variable has three or
more categories (here, race). For the following tables the reader is referred to the
similar DA output discussed above:
• Analysis Case Processing Summary table
• Group Statistics table
• Tests of Equality of Group Means table
• Pooled Within-Groups Matrices table
• Covariance Matrices table
• Log Determinants table
• Test Results table (for Box’s M)
• Standardized Canonical Discriminant Function Coefficients table
• Canonical Discriminant Function Coefficients table
• Functions at Group Centroids table
• Classification Processing Summary table
• Prior Probabilities for Groups table
• Classification Function Coefficients table
• Casewise Statistics table
• Classification Results table
The “Eigenvalues” table
As discussed above, eigenvalues reflect the ratio of importance of the
discriminant functions. Since DA has only 1 function but MDA has more, the
“ratio” aspect is more easily seen in MDA. Here the first discriminant function is
able to account for 95% of the variance accounted for by the model, while the
second function accounts for the other 5%. Note this is not the same as the
variance in race accounted for by the model: the DA and MDA percentages in this
table always sum to 100% regardless of model strength. Rather, the eigenvalues
show the relative importance of the discriminant functions.
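The percentage-of-variance arithmetic can be sketched as follows; the eigenvalues are invented to reproduce the 95%/5% split described above:

```python
# Each function's share of explained variance is its eigenvalue divided
# by the sum of all eigenvalues; the shares always total 100%.

def pct_of_variance(eigenvalues):
    total = sum(eigenvalues)
    return [100.0 * ev / total for ev in eigenvalues]

print([round(p, 1) for p in pct_of_variance([0.190, 0.010])])  # [95.0, 5.0]
```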
Territorial map areas appear more clearly in the map below, in which different
variables were used to predict the categories of race (colors and labels added).
Combined-groups plot
Instead of the histogram given in DA, an MDA request for a combined-groups plot
(in the “Classify” button dialog) generates a scatterplot such as that shown below.
That the group centroids are close suggests a weak model which does not
discriminate well. While discriminant function 1 does discriminate somewhat
between blacks (green circles tending to be on the minus side of function 1) and
whites (purple circles tending to be on the positive side), there is lots of overlap.
Moreover, the grey circles representing race=3=other seem randomly placed.
Separate-groups plots
Similar scatterplots, not shown, can be output for each level of the dependent
variable in MDA (for each race in this example).
In SPSS there are several available criteria for entering or removing new variables
at each step: Wilks’ lambda is the default. Others are unexplained variance,
Mahalanobis’ distance, smallest F ratio, and Rao’s V. The researcher typically sets
the critical significance level by setting the "F to remove" in most statistical
packages. These methods were discussed previously above.
The researcher should keep in mind that the stepwise method capitalizes on
chance associations, so the true significance levels are worse (that is, numerically
higher) than the alpha significance rates reported. Thus a reported
significance level of .05 may correspond to a true alpha rate of .10 or worse. For
this reason, if stepwise discriminant analysis is employed, use of cross-validation
is recommended. In the split halves method, the original dataset is split in two at
random and one half is used to develop the discriminant equation and the other
half is used to validate it.
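The split-halves procedure can be sketched as follows (only the random split is shown; the discriminant equations themselves would be developed and validated in SPSS):

```python
import random

# Split-halves cross-validation sketch: shuffle case indices at random,
# use one half to develop the discriminant equation and the other half
# to validate it.

def split_halves(n_cases, seed=42):
    idx = list(range(n_cases))
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(idx)
    half = n_cases // 2
    return idx[:half], idx[half:]

training, validation = split_halves(10)
print(len(training), len(validation))  # 5 5
```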
Example
In this section, only differences in stepwise MDA output compared to DA and
MDA output are discussed. The example below uses the same dataset as for MDA
above, trying to classify race (race, coded 1=white, 2=black, 3=other) using the
predictor variables educ, rincome91, polviews, agewed (age when first wed), sibs
(number of siblings), and rap (rap music, coded from 1=like very much to 5=dislike
very much). What is the “optimal” model produced by stepwise discriminant
function analysis?
Stepwise discriminant analysis in SPSS
As illustrated below, stepwise discriminant analysis is requested in the main SPSS
discriminant analysis dialog, by checking the “Use stepwise method” radio button.
Nearly all output is identical to that for the MDA example above using the “Enter”
method, except that it is presented in steps. Predictor variables are added to or
removed from the model according to criteria set by the “Method” button,
configured for this example as shown below.
Stepwise Wilks' lambda also appears in the "Variables Not in the Analysis" table of
stepwise DA output, after the "Sig. of F to Enter" column. Here the criterion is
reversed: the variable with the lowest stepwise Wilks' lambda is the best
candidate to add to the model in the next step. For instance, in the table below,
for step 1 the lowest Wilks’ lambda is the .886 for rap music and that is the
variable added in step 2.
The stepwise method for these data thus employed three variables. The enter
method presented earlier retained all variables entered in the initial discriminant
function analysis dialog (six variables). Since the stepwise method specified a
different model, the coefficients in ensuing tables differ somewhat from those for
the enter method model. That is, even non-significant discriminant variables will
affect the coefficients. This is why, as mentioned earlier, some researchers use
the enter method for confirmatory purposes but drop non-significant predictors
one at a time until all those remaining in the analysis are significant. For stepwise
models, all variables in the final analysis are always significant.
Assumptions
Proper specification
The discriminant coefficients can change substantially if variables are added to or
subtracted from the model.
Independence
All cases must be independent. Thus one cannot have correlated data (no
before-after, panel, or matched-pairs data, for instance).
No lopsided splits
Group sizes of the dependent should not be extremely different. If this
assumption is violated, logistic regression is preferred. Some authors use 90:10 or
worse as the criterion in DA.
Interval data
The independent variables are interval-level. As with other members of
the regression family, dichotomies, dummy variables, and ordinal variables with
at least 5 categories are commonly used as well.
Variance
No independents should have a zero standard deviation in one or more of the
groups formed by the dependent.
Random error
Errors (residuals) are randomly distributed.
Homogeneity of covariances/correlations
Within each group formed by the dependent, the covariance/correlation between
any two predictor variables should be similar to the corresponding
covariance/correlation in other groups. That is, each group has a similar
covariance/correlation matrix as reflected in the log determinants (see "Large
samples" discussion above).
Linearity
DA and MDA assume linearity (do not take into account exponential terms unless
such transformed variables are added as additional independents).
Additivity
DA and MDA assume additivity (do not take into account interaction terms unless
new crossproduct variables are added as additional independents).
Multivariate normality
For purposes of significance testing, predictor variables are assumed to follow
multivariate normal distributions. That is, each predictor variable has a normal
distribution about fixed values of all the other independents.
The ratio of the standardized discriminant coefficients is the relative contribution
of each variable. Note that the betas will change if variables are added to or
deleted from the equation.
Dummy variables
For any given MDA example, how many discriminant functions will
there be, and how can I tell if each is significant?
The answer is min(g - 1, p), where g is the number of groups (categories) being
discriminated and p is the number of predictor (independent) variables. The min()
function, of course, means the lesser of the two. SPSS will print Wilks' lambda and
its significance for each function, and this tests the significance of the
discriminant functions.
In the case of three discriminant functions, a case's three discriminant scores
locate that case in three-dimensional discriminant space. Each axis represents one
of the discriminant functions, roughly analogous to factor axes in factor analysis. That is,
each axis represents a dimension of meaning whose label is attributed based on
inference from the structure coefficients.
One can also locate the group centroid for each group of the dependent in
discriminant space in the same manner.
In the case of two discriminant functions, cases or group centroids may be plotted
on a two-dimensional scatterplot of discriminant space (a canonical plot). Even
when there are more than two functions, interpretation of the eigenvalues may
reveal that only the first two functions are important and worthy of plotting.
{WILKS } { n }
{MAHAL }
{MAXMINF }
{MINRESID}
{RAO }
[/MAXSTEPS={n}]
[/FIN={3.84**}] [/FOUT={2.71**}] [/PIN={n}]
{ n } { n }
[/POUT={n}] [/VIN={0**}]
{ n }
[/FUNCTIONS={g-1,100.0,1.0**}] [/PRIORS={EQUAL** }]
{n1 , n2 , n3 } {SIZE }
{value list}
[/SAVE=[CLASS[=varname]] [PROBS[=rootname]]
[SCORES[=rootname]]]
[/ANALYSIS=...]
[/MISSING={EXCLUDE**}]
{INCLUDE }
[/MATRIX=[OUT({* })] [IN({* })]]
{'savfile'|'dataset'} {'savfile'|'dataset'}
[/HISTORY={STEP**} ]
{NONE }
[/ROTATE={NONE** }]
{COEFF }
{STRUCTURE}
[/CLASSIFY={NONMISSING } {POOLED } [MEANSUB]]
{UNSELECTED } {SEPARATE}
{UNCLASSIFIED}
[/STATISTICS=[MEAN] [COV ] [FPAIR] [RAW ] [STDDEV]
[GCOV] [UNIVF] [COEFF] [CORR] [TCOV ]
[BOXM] [TABLE] [CROSSVALID]
[ALL]]
[/PLOT=[MAP] [SEPARATE] [COMBINED] [CASES[(n)]] [ALL]]
**Default if subcommand or keyword is omitted.
Bibliography
Dunteman, George H. (1984). Introduction to multivariate analysis. Thousand
Oaks, CA: Sage Publications. Chapter 5 covers classification procedures and
discriminant analysis.
Huberty, Carl J. (1994). Applied discriminant analysis. NY: Wiley-Interscience.
(Wiley Series in Probability and Statistics).
Klecka, William R. (1980). Discriminant analysis. Quantitative Applications in the
Social Sciences Series, No. 19. Thousand Oaks, CA: Sage Publications.
Lachenbruch, P. A. (1975). Discriminant analysis. NY: Hafner.
McLachlan, Geoffrey J. (2004). Discriminant analysis and statistical pattern
recognition. NY: Wiley-Interscience. (Wiley Series in Probability and
Statistics).
Press, S. J. and S. Wilson (1978). Choosing between logistic regression and
discriminant analysis. Journal of the American Statistical Association, Vol.
73: 699-705. The authors make the case for the superiority of logistic
regression for situations where the assumptions of multivariate normality
are not met (ex., when dummy variables are used), though discriminant
analysis is held to be better when assumptions are met. They conclude that
logistic and discriminant analyses will usually yield the same conclusions,
except in the case when there are independents which result in predictions
very close to 0 and 1 in logistic analysis.
Tabachnick, Barbara G. and Linda S. Fidell (2001). Using multivariate statistics,
Fourth ed. Boston: Allyn and Bacon. Chapter 11 covers discriminant
analysis.
Copyright 1998, 2008, 2012 by G. David Garson and Statistical Associates Publishers.
Worldwide rights reserved in all languages and all media. Do not copy or post in any format.
Last update 8/3/2012.
Association, Measures of
Assumptions, Testing of
Canonical Correlation
Case Studies
Cluster Analysis
Content Analysis
Correlation
Correlation, Partial
Correspondence Analysis
Cox Regression
Creating Simulated Datasets
Crosstabulation
Curve Fitting & Nonlinear Regression
Data Distributions and Random Numbers
Data Levels
Delphi Method
Discriminant Function Analysis
Ethnographic Research
Evaluation Research
Event History Analysis
Factor Analysis
Focus Groups
Game Theory
Generalized Linear Models/Generalized Estimating Equations
GLM (Multivariate), MANOVA, and MANCOVA
GLM (Univariate), ANOVA, and ANCOVA
GLM Repeated Measures
Grounded Theory
Hierarchical Linear Modeling/Multilevel Analysis/Linear Mixed Models
Integrating Theory in Research Articles and Dissertations
Kaplan-Meier Survival Analysis
Latent Class Analysis
Life Tables
Logistic Regression
Log-linear Models
Longitudinal Analysis
Missing Values Analysis & Data Imputation
Multidimensional Scaling
Multiple Regression
Narrative Analysis
Network Analysis
Ordinal Regression
Parametric Survival Analysis
Partial Least Squares Regression
Participant Observation
Path Analysis
Power Analysis
Probability
Probit Regression and Response Models
Reliability Analysis
Resampling
Research Designs
Sampling
Scales and Standard Measures
Significance Testing
Structural Equation Modeling
Survey Research
Two-Stage Least Squares Regression
Validity
Variance Components Analysis
Weighted Least Squares Regression