
Test and Power/Effect Size Analysis

Levene's test (checks homogeneity of variance)
Correlation: Pearson's r (.1, .3, .5)
One-Sample t Test: t value
Independent Samples t Test: t value
Paired Samples t Test: t value
One-Way ANOVA: F ratio
Two-Way ANOVA: 3 F ratios + 3 hypotheses
Repeated Measures ANOVA: F ratio
Linear Regression
Multiple Regression: sample size = 41 + number of predictors
Logistic Regression: predicts the probability of an event occurring based on the IVs; the outcome is usually a dichotomous variable
Chi-square (One-Way/Goodness of Fit) - one IV: like a t test; measures the relationship of two categorical variables, i.e. whether the distributions of the two differ from each other; null hypothesis: no association between the two
Chi-square (Two-Way/Test of Independence) - two IVs: like an interaction effect in ANOVA; null hypothesis: the two IVs are independent of each other; report whether Pearson's chi-square is significant or not
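The power/effect size column above is what goes into an a priori power analysis, i.e. solving for the sample size needed to detect a given effect. A minimal sketch, assuming statsmodels is installed and using illustrative values (d = .5, alpha = .05, power = .80):

```python
from statsmodels.stats.power import TTestIndPower

# Per-group sample size needed to detect a medium effect (Cohen's d = .5)
# with alpha = .05 and power = .80 in an independent-samples t test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80, ratio=1.0)
print(round(n_per_group))  # about 64 per group
```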

Notes: outliers are data points more than 3 SD from the mean. Independence of observations means there is no relationship between the observations in each group or between the groups themselves.
Assumptions and Effect Sizes

Effect size shows the magnitude and practical importance of the experimental effect; the effect needs to be large enough to warrant action in the real world. The three numbers given with each effect size below are the conventional small, medium, and large benchmarks.
Levene's test: a non-significant result indicates homogeneity of variance.
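A quick way to run Levene's test, sketched with scipy on made-up data:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
group_a = rng.normal(0, 1.0, 40)
group_b = rng.normal(0, 1.2, 40)

stat, p = levene(group_a, group_b)  # p > .05 supports homogeneity of variance
```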
Correlation: linearity, independent random sampling, normal distribution, absence of influential outliers, homoscedasticity. Effect size: just use r.
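For correlation, r is both the test statistic and the effect size; a minimal scipy sketch on simulated data:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.4 * x + rng.normal(size=100)

r, p = pearsonr(x, y)  # interpret |r| against the .1 / .3 / .5 benchmarks
```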
One-Sample t Test: independent random sampling, normal distribution, no outliers, independence of observations. Effect size: Cohen's d (.2, .5, .8).
Independent Samples t Test: independent random sampling, normal distribution, homogeneity of variance (Levene's test). Effect size: Cohen's d (.2, .5, .8).
Paired Samples t Test: independent random sampling, normal distribution. Effect size: Cohen's d (.2, .5, .8).
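scipy's t tests do not report Cohen's d directly, so a common approach is to compute it from the group means and a pooled standard deviation. A sketch for two independent groups (the pooled-SD formula is the standard one; the inputs are assumed to be array-like):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    # interpret |d| against the .2 / .5 / .8 benchmarks
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```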
One-Way ANOVA: independent random sampling, normal distribution, no outliers, homogeneity of variance (Levene's test). Effect size: eta squared (.01, .06, .14).
Two-Way ANOVA: independent random sampling, normal distribution of residuals, homogeneity of covariance, homogeneity of variance (Levene's test). Effect size: eta squared (.01, .06, .14); partial eta squared for the interaction (.01, .09, .25).
Repeated Measures ANOVA: independent random sampling, normal distribution, Mauchly's test of sphericity not significant. Effect size: eta squared (.01, .06, .14).
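Eta squared for a one-way design is the between-groups sum of squares divided by the total sum of squares. A sketch alongside scipy's omnibus F test, with made-up groups:

```python
import numpy as np
from scipy.stats import f_oneway

def eta_squared(*groups):
    """Eta squared = SS_between / SS_total for a one-way design."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = data.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((data - grand_mean) ** 2).sum()
    return ss_between / ss_total

rng = np.random.default_rng(2)
g1, g2, g3 = rng.normal(0, 1, 30), rng.normal(0.3, 1, 30), rng.normal(0.6, 1, 30)
f_stat, p = f_oneway(g1, g2, g3)
print(eta_squared(g1, g2, g3))  # compare to the .01 / .06 / .14 benchmarks
```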

Linear Regression: linearity, independent random sampling, normal distribution, absence of influential outliers, homoscedasticity, no or little multicollinearity.

Multiple Regression: linearity, independent random sampling, normality of residuals, independence of observations, absence of influential outliers, homoscedasticity, no or little multicollinearity, dichotomous predictors coded numerically.

Logistic Regression: binary DV, independent random sampling, independence of observations, no or little multicollinearity (eliminate or center predictors with a variance inflation factor above 4), linearity of the IVs and the log odds, large sample size.
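The VIF rule mentioned above can be checked with statsmodels before fitting the model. A sketch with simulated predictors (the threshold of 4 follows the note above; everything else is illustrative):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + rng.normal(scale=0.5, size=200)  # deliberately correlated with x1
x3 = rng.normal(size=200)

X = sm.add_constant(np.column_stack([x1, x2, x3]))
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]  # skip the constant
print(vifs)  # predictors above ~4 are candidates to drop or center
```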
Chi-square (One-Way/Goodness of Fit): independent random sampling, independence of observations, levels of the IV are mutually exclusive, at least 5 expected frequencies in each group of the IV. Effect size: Cramer's phi (.1, .3, .5).

Chi-square (Two-Way/Test of Independence): same as above, plus each IV should have at least 2 levels that are independent of each other. Effect size: Cramer's phi (.1, .3, .5).
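A sketch of a two-way chi-square with Cramer's phi computed from the statistic (the contingency table is made up):

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10],
                  [20, 40]])  # observed counts: two IVs with 2 levels each

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_phi = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # compare to .1 / .3 / .5
```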

Post-Hoc/others

One-Way ANOVA: with 3 levels use Fisher's LSD; with more than 3 levels use Tukey's HSD.

Two-Way ANOVA: main effects: Fisher's LSD or Tukey's HSD; interaction: simple main effects.

Repeated Measures ANOVA: with 3 levels use Fisher's LSD; with more than 3 levels use Tukey's HSD.
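For a between-subjects design, Tukey's HSD can be run with statsmodels once the omnibus F is significant; a sketch with made-up groups:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
scores = np.concatenate([rng.normal(0.0, 1, 30),
                         rng.normal(0.5, 1, 30),
                         rng.normal(1.0, 1, 30)])
groups = np.repeat(["a", "b", "c"], 30)

print(pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05))
```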

Linear Regression: R squared is the amount of variance explained by the IVs. The beta weight is the change in the DV resulting from a one-unit change in the IV, holding other variables constant; the sign of the beta weight gives the direction of the relationship.

Multiple Regression: R squared is the amount of variance explained by the regression model. Issue: shrinkage when a model built on one sample is applied to another. Solution: cross-validation, i.e. split the sample in half, use one half to build the model, then apply the beta weights to the other half to see whether they still work. Attention: only include predictors that have a statistically significant correlation with the DV.
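A split-half cross-validation along the lines described above, sketched with statsmodels on simulated data (the data-generating step and variable names are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))                              # three predictors
y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=200)   # DV

half = len(y) // 2
X_build, y_build = sm.add_constant(X[:half]), y[:half]
X_hold, y_hold = sm.add_constant(X[half:]), y[half:]

model = sm.OLS(y_build, X_build).fit()                 # build the model on one half
y_pred = X_hold @ model.params                         # apply the beta weights to the other half
cross_val_r2 = np.corrcoef(y_pred, y_hold)[0, 1] ** 2  # a drop vs. model.rsquared reflects shrinkage
```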

Logistic Regression: Nagelkerke (pseudo) R squared is the amount of the DV explained by the IVs. The odds ratio is the change in the odds of the DV with a one-unit change in one IV, holding all other variables constant.
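A sketch of how the odds ratios fall out of a fitted model, assuming statsmodels is used (note that statsmodels reports McFadden's pseudo R squared rather than Nagelkerke's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 2))                                # two IVs
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = rng.binomial(1, p_true)                                  # dichotomous DV

result = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(result.params)  # e**beta: change in the odds per one-unit change in an IV
print(result.prsquared)              # McFadden's pseudo R squared
```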
