manipulations (or different samples that may come from different groups) within a single factor. That is, 1 factor with k levels: k separate groups are compared.
NEW: Often, we can answer the same research questions by looking at just 1 sample that is exposed to different manipulations within a factor. That is, 1 factor with k levels: 1 sample is compared across k conditions.
Uses:
- Investigate development over time (quasi-independent: time 1, time 2, time 3)
- Chart learning by manipulating different levels of practice (independent: 1 hour, 4 hours, 7 hours)
- Compare different priming effects in an LDT (independent: forward, backward, no-prime, non-word)
- Simply examine performance under different conditions
Extending t-tests
T-test: comparing two independent samples? Independent-samples t-test!
ANOVA cousin: comparing more than two independent samples? ANOVA!
R-M ANOVA
Like the related-samples t-test, repeated-measures ANOVA is more powerful because we eliminate individual differences!

In a one-way ANOVA, the F-ratio is conceptually calculated using variance from three sources:

F = (Treatment/group effect + Individual differences + Experimenter error) / (Individual differences + Experimenter error)

The denominator represents random error, and we do not know how much came from individual differences (ID) and how much from experimenter error (EE). This is the error we expect from chance.

In repeated-measures ANOVA, we remove individual differences, so the F-ratio is conceptually calculated using variance from just two sources:

F = (Treatment/group effect + Experimenter error) / Experimenter error

The denominator now represents truly random error that we cannot directly measure (the leftovers). This is the error we expect just from chance. What does this mean for our F-value? R-M ANOVAs will be more powerful because they have a smaller error term, and therefore bigger Fs. Let me demonstrate, if you will.
Assume treatment variance = 10, experimenter-error variance = ...
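To make the demonstration concrete, here is a minimal sketch of the two conceptual F-ratios. The treatment variance of 10 comes from the slide; the individual-differences and experimenter-error variances below are hypothetical values chosen only for illustration.

```python
# Sketch of why removing individual differences boosts F.
# treatment = 10 is from the slide; id_var and ee_var are
# hypothetical illustration values, not from the slide.
treatment = 10.0
id_var = 8.0   # hypothetical individual-differences variance
ee_var = 2.0   # hypothetical experimenter-error variance

# One-way (between-subjects) conceptual F-ratio:
f_oneway = (treatment + id_var + ee_var) / (id_var + ee_var)

# Repeated-measures conceptual F-ratio: ID variance is removed
# from both numerator and denominator.
f_rm = (treatment + ee_var) / ee_var

print(f_oneway)  # 2.0
print(f_rm)      # 6.0
```

With the same treatment effect, stripping ID variance out of the error term triples the F-value in this toy example.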
Other than breaking SSwithin into two components and subtracting out SSsubjects, repeated-measures ANOVA is similar to a one-way ANOVA.
Participants complete an LDT in which they are exposed to 4 different types of word pairs. RT to click "Word" or "Non-Word" is recorded and averaged for each pair type.
- Forward-prime pairs (e.g., baby-stork)
- Backward-prime pairs (e.g., stork-baby)
- Unrelated pairs (e.g., glass-apple)
- Non-word pairs
Hypotheses
For the overall RM ANOVA:
H0: μf = μb = μu = μn
Ha: At least one treatment mean is different from another.
Specific (directional) predictions:
- μf < μb
- μf and μb < μu and μn
- μu < μn
The Data

Part:       1    2    3    4    5    6    7    8    9   10 | Sum
Forward:   .2   .5   .4   .4   .6   .3   .1   .2   .3   .4 | 3.4
Backward:  .1   .3   .3   .2   .4   .3   .1   .1   .2   .2 | 2.2
Unrelated: .4   .8   .6   .8   .8   .5   .5   .6   .4   .6 | 6.0
Nonword:   .7   .9   .8   .9   .8   .8   .7   .9   .7   .9 | 8.1
Sum Part: 1.4  2.5  2.1  2.3  2.6  1.9  1.4  1.8  1.6  2.1 | 19.7

ΣX² per condition: Forward = 1.36; Backward = .58
Grand sum: ΣX = 19.7
SStotal
Remember, this is the sum of the squared deviations from the grand mean:
SStotal = ΣX² - (ΣX)²/N
So, SStotal = 12.39 - (19.7²/40) = 12.39 - 9.70225 = 2.68775
Importantly, SStotal = SSwithin/error + SSbetween
SSwithin-group/condition
Measures variability within each condition, then adds them together:
SSwithin = [ΣX₁² - (ΣX₁)²/n₁] + [ΣX₂² - (ΣX₂)²/n₂] + ... + [ΣXk² - (ΣXk)²/nk]
= (1.36 - [3.4]²/10) + (.58 - [2.2]²/10) + (3.82 - [6]²/10) + (6.63 - [8.1]²/10)
= (1.36 - 1.156) + (.58 - .484) + (3.82 - 3.6) + (6.63 - 6.561)
= .204 + .096 + .22 + .069 = .589
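The same sum can be verified condition by condition:

```python
# Check of the SSwithin computation, one condition at a time.
conditions = {
    "forward":   [.2, .5, .4, .4, .6, .3, .1, .2, .3, .4],
    "backward":  [.1, .3, .3, .2, .4, .3, .1, .1, .2, .2],
    "unrelated": [.4, .8, .6, .8, .8, .5, .5, .6, .4, .6],
    "nonword":   [.7, .9, .8, .9, .8, .8, .7, .9, .7, .9],
}

def ss(xs):
    """Sum of squared deviations: sum(x^2) - (sum x)^2 / n."""
    return sum(x * x for x in xs) - sum(xs) ** 2 / len(xs)

ss_within = sum(ss(xs) for xs in conditions.values())
print(round(ss_within, 3))  # 0.589
```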
Breaking up SSwithin/error
We must find SSsubjects and subtract that out:
SSsubjects = Σ(Psum²/k) - (ΣX)²/N
= (1.4²/4 + 2.5²/4 + 2.1²/4 + 2.3²/4 + 2.6²/4 + 1.9²/4 + 1.4²/4 + 1.8²/4 + 1.6²/4 + 2.1²/4) - 19.7²/40
= (.49 + 1.5625 + 1.1025 + 1.3225 + 1.69 + .9025 + .49 + .81 + .64 + 1.1025) - 9.70225
= 10.1125 - 9.70225 = .41025
So SSerror = SSwithin - SSsubjects = .589 - .41025 = .17875
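A quick check of SSsubjects and the resulting error term:

```python
# Check of SSsubjects and of SSerror = SSwithin - SSsubjects,
# using the participant sums from the data table.
part_sums = [1.4, 2.5, 2.1, 2.3, 2.6, 1.9, 1.4, 1.8, 1.6, 2.1]
k = 4            # conditions per participant
grand = 19.7     # grand sum of all scores
n_total = 40

ss_subjects = sum(p ** 2 / k for p in part_sums) - grand ** 2 / n_total
ss_error = 0.589 - ss_subjects    # SSwithin computed earlier
print(round(ss_subjects, 5))  # 0.41025
print(round(ss_error, 5))     # 0.17875
```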
SSbetween-group:
The (same) formula:
SSbetween = (ΣX₁)²/n₁ + (ΣX₂)²/n₂ + ... + (ΣXk)²/nk - (ΣXTOT)²/NTOT
= (3.4²/10 + 2.2²/10 + 6²/10 + 8.1²/10) - 19.7²/40
= (1.156 + .484 + 3.6 + 6.561) - 9.70225
= 11.801 - 9.70225 = 2.09875, or 2.099
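This also lets us confirm the identity SStotal = SSwithin + SSbetween stated earlier:

```python
# Check of SSbetween and of the identity SStotal = SSwithin + SSbetween.
cond_sums = [3.4, 2.2, 6.0, 8.1]   # condition totals from the data table
n = 10                              # participants per condition
grand = 19.7
n_total = 40

ss_between = sum(s ** 2 / n for s in cond_sums) - grand ** 2 / n_total
print(round(ss_between, 5))          # 2.09875
print(round(ss_between + 0.589, 5))  # 2.68775, matches SStotal
```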
Mean Squares
We want to find the average squared deviation from the mean for each type of variability. To get an average, you divide by n in some form (or k, the number of groups), with a small -1 correction. That is, you use df:
MSbetween/group = SSbetween / dfbetween
MSwithin/error = SSerror / dferror
Each MS is a variance. Or, variability due to ___________?
F?
F = MSbetween / MSerror
MSbetween = SSbetween/dfbetween = 2.09875/3 = .69958 (dfbetween = k - 1 = 3)
MSerror = SSerror/dferror = .17875/27 = .00662 (dferror = (k - 1)(n - 1) = 27)
F = MSbetween/MSerror = 105.671
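Putting the pieces together, the final F-ratio can be computed from the SS values above:

```python
# Check of the final F-ratio for the one-way RM ANOVA example.
ss_between, df_between = 2.09875, 4 - 1            # k - 1 = 3
ss_error, df_error = 0.17875, (4 - 1) * (10 - 1)   # (k - 1)(n - 1) = 27

ms_between = ss_between / df_between   # ~0.69958
ms_error = ss_error / df_error         # ~0.00662
f = ms_between / ms_error
print(round(f, 3))  # 105.671
```

Note that dividing the rounded values (.7/.007) would give 100; the 105.671 on the slide comes from the unrounded mean squares.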
Disadvantages
- Practice effects (learning)
- Carry-over effects (bias)
- Demand characteristics (more exposure, more time to think)

Control
- Counterbalancing
- Time (greater spacing, but participants still have implicit memory)
- Cover stories
Sphericity
The levels of our IV are not independent: the same participants are in each level (condition). We look at the variance of the differences between every pair of conditions, and assume these variances are the same. If these variances are equal, we have sphericity.
More Sphericity
Testing for Sphericity
Mauchly's test: if significant, there is no sphericity; if non-significant, sphericity holds! If there is no sphericity, we must correct the F-ratio. Actually, we alter the degrees of freedom associated with the F-ratio.
Symmetry
Effect Sizes
The issue is not entirely settled. There is still some debate and uncertainty about how best to measure effect sizes, given the different possible error terms. η² = see book for the equation.
Specific tests
- Can use Tukey post-hoc tests for exploration.
- Can use planned comparisons if you have a priori predictions.
- Sphericity is not an issue for these.
Contrast Formula
Contrasts
Some in SPSS:
- Difference: each level of a factor is compared to the mean of the previous levels.
- Helmert: each level of a factor is compared to the mean of the subsequent levels.
- Polynomial: orthogonal polynomial contrasts.
- Simple: each level of a factor is compared to a reference level (by default, the last).
Specific:
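As an illustration of a planned (a priori) contrast, the earlier prediction that primed pairs (forward, backward) are faster than unprimed pairs (unrelated, non-word) can be expressed with the weights (1, 1, -1, -1); the condition means come from the data table. The weight choice here is the standard one for this comparison, not taken from the slides.

```python
# Planned contrast value psi = sum of c_i * mean_i for the prediction
# "forward and backward primes are faster than unrelated and non-word".
# Condition means are the condition sums from the data divided by n = 10.
means = {"forward": 0.34, "backward": 0.22, "unrelated": 0.60, "nonword": 0.81}
weights = {"forward": 1, "backward": 1, "unrelated": -1, "nonword": -1}

psi = sum(weights[c] * means[c] for c in means)
print(round(psi, 2))  # -0.85 (negative, as the prediction expects)
```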
2+ Within Factors
Set up: have participants run on a treadmill for 30 min. Within-subject factors:
Factor A: measure fatigue every 10 min (3 time points).
Factor B: do this once after drinking water, and again (on a different day) after drinking a new sports drink.
Again, we will want to find SS for the factors and the interaction, and eventually the respective MS values as well. Again, this will be very similar to a one-way ANOVA. Like a 1-factor RM ANOVA, we will also compute SSsubject so we can find SSerror.
NEW: We will do this for each F we calculate. For each F, we will calculate:
SS Effect; SS Subject (within that effect); and SS Error
SSSub (A)
SSSub (B)
SSSub (AxB)
Getting to F
Factor A (Time)
SSA = 91.2; dfA = (kA - 1) = 3 - 1 = 2
SSError(A*S) = 16.467; dfError(A*S) = (kA - 1)(s - 1) = (3 - 1)(10 - 1) = 18
So, MSA = 91.2/2 = 45.6
And MSError(A*S) = 16.467/18 = .91483
FA = 45.6/.91483 = 49.845
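The Factor A computation can be checked the same way as the one-way case (the tiny difference from the slide's 49.846 is rounding in the mean square):

```python
# Check of the Factor A (Time) F-ratio from the two-factor example.
ss_a, k_a, s = 91.2, 3, 10    # SS, levels of A, subjects (from the slide)
ss_err_a = 16.467             # SSError(A*S) from the slide

df_a = k_a - 1                 # 2
df_err_a = (k_a - 1) * (s - 1) # 18

ms_a = ss_a / df_a             # 45.6
ms_err_a = ss_err_a / df_err_a # ~0.91483
f_a = ms_a / ms_err_a
print(round(f_a, 3))  # 49.845
```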
Snapshot of other Fs
Factor B (Drink): dfB = (kB - 1) = 2 - 1 = 1; dfError(B*S) = (kB - 1)(s - 1) = (2 - 1)(10 - 1) = 9.
You would compute the SS, MS, and MSerror similarly for the other main effect and for the interaction F-values.