In statistics, analysis of variance (ANOVA) is a collection of statistical models, and their associated procedures, in which the observed variance is partitioned into components due to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are all equal, and therefore generalizes Student's two-sample t-test to more than two groups. ANOVA is preferable to running multiple two-sample t-tests, because doing so would greatly increase the chance of committing a Type I error. For this reason, ANOVA is useful in comparing three or more means.
Overview
In practice, there are several types of ANOVA depending on the number of treatments and the way they are applied to the subjects in the experiment:
Factorial ANOVA is used when the experimenter wants to study the effects of two or more treatment variables. The most commonly used type of factorial ANOVA is the 2×2 (read "two by two") design, where there are two independent variables and each variable has two levels or distinct values. However, such use of ANOVA for the analysis of 2^k factorial designs and fractional factorial designs is "confusing and makes little sense"; instead it is suggested to refer the value of the effect divided by its standard error to a t-table.[1] Factorial ANOVA can also be multi-level, such as 3×3, or higher order, such as 2×2×2, but analyses with higher numbers of factors are rarely done by hand because the calculations are lengthy. However, since the introduction of data-analysis software, the use of higher-order designs and analyses has become quite common.
Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in a longitudinal study). Note that such within-subjects designs can be subject to carry-over effects.
Models
The fixed-effects model of analysis of variance applies to situations in which the experimenter applies several treatments to the subjects of the experiment to see if the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
Random-effects models are used when the treatments are not fixed. This occurs when the various treatments (also known as factor levels) are sampled from a larger population. Because the treatments themselves are random variables, some assumptions and the method of contrasting the treatments differ from those of the fixed-effects model (model 1).
Most random-effects or mixed-effects models are not concerned with making inferences about the particular sampled factors. For example, consider a large manufacturing plant in which many machines produce the same product. The statistician studying this plant would have very little interest in comparing the particular machines to each other. Rather, inferences that can be made for all machines are of interest, such as their variability and the mean. However, if one is interested in the realized value of the random effect, best linear unbiased prediction can be used to obtain a "prediction" for the value.
Assumptions of ANOVA
Levene's test for homogeneity of variances is typically used to examine the plausibility of homoscedasticity. The Kolmogorov–Smirnov or the Shapiro–Wilk test may be used to examine normality. When used in the analysis of variance to test the hypothesis that all treatments have exactly the same effect, the F-test is robust to departures from normality (Ferguson & Takane, 2005, pp. 261–262).[2] The Kruskal–Wallis test is a nonparametric alternative that does not rely on an assumption of normality, and the Friedman test is the nonparametric alternative for a one-way repeated measures ANOVA. The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed-effects models; that is, the errors are independent and follow a normal distribution with mean zero and a common variance.
Randomization-based analysis
See also: Random assignment and Randomization test
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University.[3] Kempthorne and his students make an assumption of unit-treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.
Unit-treatment additivity
In its simplest form, the assumption of unit-treatment additivity states that the observed response yi,j from experimental unit i when receiving treatment j can be written as the sum of the unit's response yi and the treatment effect tj, that is
yi,j = yi + tj.[4]
The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has exactly the same effect tj on every experimental unit.
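The additivity assumption can be illustrated with a small numeric sketch; the unit and treatment effects below are hypothetical, chosen only to show that under additivity every unit exhibits the same treatment difference:

```python
# Unit-treatment additivity: the response of unit i under treatment j
# decomposes as y[i][j] = unit_effect[i] + treatment_effect[j].
# Hypothetical unit and treatment effects for illustration:
unit = [10.0, 12.0, 9.5]        # y_i for three experimental units
treat = [0.0, 1.5]              # t_j for control and treatment

y = [[u + t for t in treat] for u in unit]

# Additivity implies every unit shows the same treatment difference t_1 - t_0:
diffs = [row[1] - row[0] for row in y]
print(diffs)   # → [1.5, 1.5, 1.5]
```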
The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies by Kempthorne and his students (Hinkelmann and Kempthorne). However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations (Hinkelmann and Kempthorne, volume one, chapter 7; Bailey, chapter 1.14). In the randomization-based analysis, there is no assumption of a normal distribution and certainly no assumption of independence. On the contrary, the observations are dependent!
Logic of ANOVA
Partitioning of the sum of squares
The fundamental technique is a partitioning of the total sum of squares (abbreviated SS) into components related to the effects used in the model. For a simplified ANOVA with one type of treatment at different levels, the total is split into a treatment component and an error component:
SSTotal = SSTreatments + SSError.
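The partition can be verified numerically. The sketch below uses hypothetical data for three treatment groups and checks that the between-treatment and within-treatment (error) components add up to the total sum of squares:

```python
# Partition of the total sum of squares for one-way ANOVA:
# SS_total = SS_between (treatments) + SS_within (error).
# Hypothetical data: three treatment groups of four observations each.

groups = [[6.0, 8.0, 4.0, 5.0],
          [8.0, 12.0, 9.0, 11.0],
          [13.0, 9.0, 11.0, 8.0]]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

ss_total = sum((x - grand_mean) ** 2 for x in all_obs)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# The two components reproduce the total exactly:
print(round(ss_total, 6), round(ss_between + ss_within, 6))
```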
The F-test
Main article: F-test
The test statistic is the ratio of the mean square for treatments to the mean square for error,
F = MSTreatments / MSError,
which is referred to the F-distribution with I − 1 and nT − I degrees of freedom, where
I = number of treatments
and
nT = total number of cases.
The F-distribution is a natural candidate because the test statistic is the quotient of two mean sums of squares, each of which is proportional to a chi-square variate.
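Continuing with hypothetical data, the F statistic can be assembled from the sums of squares and the stated degrees of freedom (I − 1 and nT − I):

```python
# F statistic for one-way ANOVA: the ratio of the mean square between
# treatments to the mean square within treatments, with I - 1 and
# n_T - I degrees of freedom. Hypothetical data with I = 3 treatments.

groups = [[6.0, 8.0, 4.0, 5.0],
          [8.0, 12.0, 9.0, 11.0],
          [13.0, 9.0, 11.0, 8.0]]

I = len(groups)
n_T = sum(len(g) for g in groups)
grand_mean = sum(x for g in groups for x in g) / n_T

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (I - 1)        # df = I - 1
ms_within = ss_within / (n_T - I)        # df = n_T - I
F = ms_between / ms_within
print(round(F, 3))   # → 6.873
```

A large F indicates that the variation between group means is large relative to the variation within groups.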
ANOVA on ranks
See also: Kruskal–Wallis one-way analysis of variance
A variant of rank transformation is 'quantile normalization', in which a further transformation is applied to the ranks such that the resulting values have some defined distribution (often a normal distribution with a specified mean and variance). Further analyses of quantile-normalized data may then assume that distribution to compute significance values. However, two specific types of secondary transformations, the random normal scores and expected normal scores transformations, have been shown to greatly inflate Type I errors and severely reduce statistical power (Sawilowsky, 1985a, 1985b).
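The first step of any rank-based analysis is the rank transformation itself. A minimal sketch, assigning midranks to ties, which is the usual convention before applying ANOVA machinery to the ranks:

```python
# Rank transformation: replace the pooled observations by their ranks,
# assigning tied values the average of the positions they occupy
# (midranks), then apply the usual ANOVA machinery to the ranks.

def ranks(values):
    """Midranks of `values` (1-based; ties get the average position)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over the run of tied values starting at position i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

data = [3.1, 2.4, 3.1, 5.0, 1.2]
print(ranks(data))   # → [3.5, 2.0, 3.5, 5.0, 1.0]
```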
η² (eta-squared): Eta-squared describes the ratio of variance explained in the dependent variable by a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance explained by the model in the population (it only estimates effect size in the sample). On average it overestimates the variance explained in the population; as the sample size gets larger, the amount of bias gets smaller. It is, however, an easily calculated estimator of the proportion of the variance in a population explained by the treatment. Note that earlier versions of statistical software (such as SPSS) incorrectly report partial eta-squared under the misleading title "Eta squared".
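For a one-way design, eta-squared reduces to the ratio SS_between / SS_total. A trivial sketch with hypothetical sums of squares:

```python
# Eta-squared for a one-way design: the proportion of total variation
# attributed to the treatment, SS_between / SS_total.

def eta_squared(ss_between, ss_total):
    return ss_between / ss_total

# Hypothetical sums of squares (e.g., from a one-way ANOVA table):
print(round(eta_squared(51.17, 84.67), 3))   # → 0.604
```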
Several variations of these benchmarks exist. Nevertheless, alternative rules of thumb have emerged in certain disciplines: small = 0.01, medium = 0.06, large = 0.14 (Kittler, Menard & Phillips, 2007).
Omega-squared (ω²): Omega-squared provides a relatively unbiased estimate of the variance explained in the population by a predictor variable. It takes random error into account more so than eta-squared, which is biased upward. The calculations for omega-squared differ depending on the experimental design. For a fixed experimental design (in which the categories are explicitly set), omega-squared is calculated as follows:[7]
ω² = (SStreatment − dftreatment × MSerror) / (SStotal + MSerror)
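A sketch of the fixed-design calculation, using the estimator ω² = (SS_between − df_between × MS_within) / (SS_total + MS_within); the numeric inputs below are hypothetical:

```python
# Omega-squared for a fixed one-way design:
# (SS_between - df_between * MS_within) / (SS_total + MS_within).
# All input values below are hypothetical, for illustration only.

def omega_squared(ss_between, df_between, ms_within, ss_total):
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)

print(round(omega_squared(51.17, 2, 3.72, 84.67), 3))   # → 0.495
```

Note that the estimate (0.495) is smaller than the corresponding eta-squared (about 0.604), reflecting the correction for random error.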
Cohen's ƒ: This measure of effect size is frequently encountered when performing power analysis calculations. Conceptually, it represents the square root of variance explained over variance not explained.
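Cohen's ƒ can be obtained from eta-squared, since both are ratios of explained to unexplained variance: ƒ = sqrt(η² / (1 − η²)). A minimal sketch:

```python
# Cohen's f: the square root of variance explained over variance not
# explained, expressible in terms of eta-squared as
# f = sqrt(eta_sq / (1 - eta_sq)).
import math

def cohens_f(eta_sq):
    return math.sqrt(eta_sq / (1 - eta_sq))

# A "medium" eta-squared benchmark of 0.06 corresponds to f ≈ 0.25:
print(round(cohens_f(0.06), 3))   # → 0.253
```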
Follow-up tests
A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc: planned tests are determined before looking at the data, and post hoc tests are performed after looking at the data. Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors.
Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of group means, where one set has two or more groups (e.g., compare the average of the group means of groups A, B, and C with the mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels.
Power analysis
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size, and alpha level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis.
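Power can also be estimated by simulation when closed-form noncentral-F calculations are inconvenient. The sketch below is illustrative only; the group means, standard deviation, sample size, and critical value are all assumptions chosen for the example:

```python
# Monte Carlo sketch of power for a one-way ANOVA: simulate many
# experiments under an assumed effect and count how often the F
# statistic exceeds a fixed critical value. All settings (group means,
# sigma, n, critical value) are hypothetical assumptions.
import random

random.seed(42)
means = [0.0, 0.0, 0.5]   # one group shifted by half a standard deviation
sigma, n = 1.0, 20
f_crit = 3.16             # roughly the upper 5% point of F(2, 57)

def f_stat(groups):
    N = sum(len(g) for g in groups)
    gm = sum(x for g in groups for x in g) / N
    ssb = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (len(groups) - 1)) / (ssw / (N - len(groups)))

trials = 2000
rejections = sum(
    f_stat([[random.gauss(m, sigma) for _ in range(n)] for m in means]) > f_crit
    for _ in range(trials)
)
print(rejections / trials)   # estimated power for this assumed design
```

Increasing n and re-running shows directly how sample size buys power for a fixed effect size.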
Examples
In a first experiment, Group A is given vodka, Group B is given gin, and Group C is given a placebo. All groups are then tested with a memory task. A one-way ANOVA can be used to assess the effect of the various treatments (that is, the vodka, gin, and placebo).
In a second experiment, Group A is given vodka and tested on a memory task. The same group is allowed a rest period of five days and then the experiment is repeated with gin. The procedure is repeated using a placebo. A one-way ANOVA with repeated measures can be used to assess the effect of the vodka versus the impact of the placebo.
In a third experiment testing the effects of expectations, subjects are randomly assigned to four groups:
1. expect vodka, receive vodka
2. expect vodka, receive placebo
3. expect placebo, receive vodka
4. expect placebo, receive placebo (the last group is used as the control group)
History
The analysis of variance was used informally by researchers in the 1800s using least squares. In physics and psychology, researchers included a term for the operator effect, the influence of a particular person on measurements, according to Stephen Stigler's histories.
See also
• AMOVA
• ANCOVA
• ANORVA
• Design of experiments
• Duncan's new multiple range test
• Explained variance and unexplained variance
• Important publications in analysis of variance
• Kruskal–Wallis test
• Friedman test
• MANOVA
• Measurement uncertainty
• Multiple comparisons
• Squared deviations
• t-test
• Tukey's test of additivity
Notes
1. ^ Box, Hunter and Hunter. Statistics for Experimenters. Wiley. p. 188: "Misuse of the ANOVA for 2k factorial experiments".
2. ^ Non-statisticians may be confused because another F-test is nonrobust: when used to test the equality of the variances of two populations, the F-test is unreliable if there are deviations from normality (Lindman, 1974).
3. ^ Anscombe, F. J. (1948). "The Validity of Comparative Experiments". Journal of the Royal Statistical Society. Series A (General) 111 (3): pp. 181–211. MR30181.
4. ^ Kempthorne, Cox (Chapter 2), and Hinkelmann and Kempthorne (Chapters 5–6).
5. ^ Hinkelmann and Kempthorne, chapter 7 or 8.
6. ^ Cox, chapter 2. Bailey on eelworms. According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
7. ^ [1]
8. ^ http://www.library.adelaide.edu.au/digitised/fisher/9.pdf
9. ^ Studies in Crop Variation. I. An examination of the yield of dressed grain from Broadbalk. Journal of Agricultural Science, 11, 107–135. http://www.library.adelaide.edu.au/digitised/fisher/15.pdf
References
• Addelman, Sidney (Oct. 1969). "The Generalized Randomized Block Design". The American Statistician 23 (4): pp. 35–36.
• Addelman, Sidney (Sep. 1970). "Variability of Treatments and Experimental Units in the Design and Analysis of Experiments". Journal of the American Statistical Association 65 (331): pp. 1095–1108.
• Anscombe, F. J. (1948). "The Validity of Comparative Experiments". Journal of the Royal Statistical Society. Series A (General) 111 (3): pp. 181–211. MR30181.
• Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Pre-publication chapters are available on-line.
• Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer.
• Blair, R. C., Sawilowsky, S. S., & Higgins, J. J. (1987). Limitations of the rank transform in factorial ANOVA. Communications in Statistics: Computations and Simulations, B16, 1133–1145.
• Caliński, Tadeusz and Kageyama, Sanpei (2000). Block Designs: A Randomization Approach, Volume I: Analysis. Lecture Notes in Statistics 150. New York: Springer-Verlag. ISBN 0-387-98578-6.
• Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2.
• Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
• Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.).
• Conover, W. J., & Iman, R. L. (1981). Rank transformations as a bridge between parametric and nonparametric statistics. American Statistician, 35, 124–129. [2] [3]
• Conover, W. J., & Iman, R. L. (1976). On some alternative procedures using ranks for the analysis of experimental designs. Communications in Statistics, A5, 1349–1368.
• Cox, David R. (1958). Planning of Experiments.
• Cox, David R. and Reid, Nancy M. (2000). The Theory of the Design of Experiments. Chapman & Hall/CRC.
• Ferguson, George A., Takane, Yoshio (2005). Statistical Analysis in Psychology and Education, Sixth Edition. Montréal, Quebec: McGraw–Hill Ryerson Limited.
• Freedman, David A. et al. (2007). Statistics, 4th edition. W.W. Norton & Company. [4]
• Freedman, David A. Statistical Models: Theory and Practice. Cambridge University Press. [5]
• Gates, Charles E. (Nov. 1995). "What Really Is Experimental Error in Block Designs?". The American Statistician 49 (4): pp. 362–363.
• Headrick, T. C. (1997). Type I error and power of the rank transform analysis of covariance (ANCOVA) in a 3 x 4 factorial layout. Unpublished doctoral dissertation, University of South Florida.
• Hettmansperger, T. P.; McKean, J. W. (1998). Robust Nonparametric Statistical Methods. Kendall's Library of Statistics 5 (First ed.). London: Edward Arnold. pp. xiv+467. ISBN 0-340-54937-8, 0-471-19479-4. MR1604954.
• Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7.
• Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9.
• Hinkelmann, Klaus and Kempthorne, Oscar (2005). Design and Analysis of Experiments, Volume 2: Advanced Experimental Design (First ed.). Wiley. ISBN 978-0-471-55177-5.
• Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0-88275-105-0.
• Iman, R. L. (1974). A power study of a rank transform for the two-way classification model when interactions may be present. Canadian Journal of Statistics, 2, 227–239.
• Lentner, Marvin; Thomas Bishop (1993). Experimental Design and Analysis (Second ed.). Blacksburg, VA: Valley Book Company. ISBN 0-9616255-2-X.
• Lindman, H. R. (1974). Analysis of Variance in Complex Experimental Designs. San Francisco: W. H. Freeman & Co. Hillsdale, NJ: Erlbaum.
• Keppel, G. & Wickens, T. D. (2004). Design and Analysis: A Researcher's Handbook (4th ed.). Upper Saddle River, NJ: Pearson Prentice–Hall.
• Kittler, J. E., Menard, W. & Phillips, K. A. (2007). Weight concerns in individuals with body dysmorphic disorder. Eating Behaviors, 8, 115–120.
• Nanna, M. J. (2002). Hoteling's T2 vs. the rank transformation with real Likert data. Journal of Modern Applied Statistical Methods, 1, 83–99.
• Pierce, C. A., Block, R. A. & Aguinis, H. (2004). Cautionary note on reporting eta-squared values from multifactor ANOVA designs. Educational and Psychological Measurement, 64(6), 916–924.
• SAS Institute. (1985). SAS/STAT guide for personal computers (5th ed.). Cary, NC: Author.
• SAS Institute. (1987). SAS/STAT guide for personal computers (6th ed.). Cary, NC: Author.
• SAS Institute. (2008). SAS/STAT 9.2 User's Guide: Introduction to Nonparametric Analysis. Cary, NC: Author.
• Sawilowsky, S. (1985a). Robust and power analysis of the 2x2x2 ANOVA, rank transformation, random normal scores, and expected normal scores transformation tests. Unpublished doctoral dissertation, University of South Florida.
• Sawilowsky, S. (1985b). A comparison of random normal scores test under the F and Chi-square distributions to the 2x2x2 ANOVA test. Florida Journal of Educational Research, 27, 83–97.
• Sawilowsky, S. (1990). Nonparametric tests of interaction in experimental design. Review of Educational Research, 60(1), 91–126.
• Sawilowsky, S. (2000). Review of the rank transform in designed experiments. Perceptual and Motor Skills, 90, 489–497.
• Sawilowsky, S., Blair, R. C., & Higgins, J. J. (1989). An investigation of the type I error and power properties of the rank transform procedure in factorial ANOVA. Journal of Educational Statistics, 14, 255–267.
• Strang, K. D. (2009). Using recursive regression to explore nonlinear relationships and interactions: A tutorial applied to a multicultural education study. Practical Assessment, Research & Evaluation, 14(3), 1–13. Retrieved 1 June 2009 from: [7]
• Thompson, G. L. (1991). A note on the rank transform for interactions. Biometrika, 78(3), 697–701.
• Thompson, G. L., & Ammann, L. P. (1989). Efficiencies of the rank-transform in two-way models with no interaction. Journal of the American Statistical Association, 84(405), 325–330.
• Wilk, M. B. (June 1955). "The Randomization Analysis of a Generalized Randomized Block Design". Biometrika 42 (1–2): pp. 70–79.
• Zyskind, George (Dec. 1963). "Some Consequences of Randomization in a Generalization of the Balanced Incomplete Block Design". The Annals of Mathematical Statistics 34 (4): pp. 1569–1581. doi:10.1214/aoms/1177703889.
External links
• Performing a one-way ANOVA in SPSS - A How-To Guide by Laerd Statistics
• Performing a repeated measures ANOVA in SPSS - A How-To Guide by Laerd Statistics
• SOCR ANOVA Activity and interactive applet
• A tutorial on ANOVA devised for Oxford University psychology students
• Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares
• NIST/SEMATECH e-Handbook of Statistical Methods, section 7.4.3: "Are the means equal?"
About the Author: Harold Averkamp (CPA) has worked as an accountant, consultant, and university
accounting instructor for more than 25 years.
He is the author of the 2010 Master Accounting Download Package which has been praised for it's
ability to simplify accounting in a way that anybody can understand.
•
A
ll Bnet
•
A
rticles
•
L
ibrary
•
S
tocks
•
D
ictionary
Top of Form
Find business answ ers
Search
Bottom of Form
• Log In |
• Newsletters
• My BNET
• Today
• Management
• Strategy
• Work Life
• Insight
• Industries
• Business Library
• Video
Additional Resources
Comparing Downside-Risk and Mean-Variance Analysis Using
Bootstrap Simulation
Is Downside risk DR a better alternative to the traditional mean-variance
analysis MV for real estate portfolio diversification? This study demonstrates
a bootstrap approach for comparing the two models. The results show that
ex ante DR portfolio return distributions tend to be negatively skewed with a
smaller left tail, and...
Tags: Portfolio, Downside-Risk, Real Estate, Business Operations
White papers 2002-03-11
Plan Vs. Actual Variance Report (Financial
Services)
This template is designed to help finance professionals analyze key line items
from the income statement profit and loss statement and the balance sheet
against annual financial plans to determine overall progress toward your
organization's financial goals. The user can enter planned and actual data on
this template and see...
Tags: P&L, Variance, Financial, Financial Service, Financial Accounting...
Tools & templates 2004-05-03
Profit and Variance Analysis of Cotton
Production Technologies and Rotation
Crops in Georgia
Genetically modified cotton varieties have the potential for increasing returns
and/or decreasing labor requirements. A nonlinear optimization model is
applied to a whole farm analysis for evaluating cotton production
technologies. This model maximizes farm utility, composed of expected
returns and their variability, at various risk aversion levels in order to...
Tags: analysis, aversion, Benefits, cotton, Georgia...
Research articles 2003-12-01
Analysis of variance
designs; a conceptual and
computational approach
with SPSS and SAS
Analysis of variance designs; a conceptual and computational approach with
SPSS and SAS. Gamst, Glenn et al. Cambridge U. Press 2008 578 pages
$90.00 Hardcover QA279 Gamst (psychology, U. of LaVerne), Meyers
(psychology, California State U., Sacramento) and Guarino (psychology,
Auburn U.)...
Tags: SAS Institute, SPSS Inc.
Research articles 2008-12-01
Sales Trend
Analysis
Use this template to identify sales trends per product per quarter. It
calculates the sales variance, determines if the variance is Favorable F or
Unfavorable U, and calculates the percentage change (the percentage
increase or decrease in sales over last year's level). The column header
works with entries in the...
Tags: Variance, Analysis, JaxWorks, Sales Strategy, Sales Force Management...
Tools & templates 2007-09-01
Managing Your Business Budget
Simply put, a company's budget is its business plan for the year expressed in
numbers—specifically, estimated sales income together with the costs
required to produce those sales. To make a profit in your business, you must
make reasonably accurate estimates of your sales income, estimate your
costs precisely, and manage...
Tags: finance, bookkeeping, sales
Articles 2007-03-27
A Quantitative Model-Independent Method for
Global Sensitivity Analysis of Model Output.
(Statistical Data Included)
A new method for sensitivity analysis SA of model output is introduced. It is
based on the Fourier amplitude sensitivity test FAST and allows the
computation of the total contribution of each input factor to the output's
variance. The term "total" here means that the factor's...
Tags: analysis, computation, Engineering, G., indice...
Research articles 1999-02-01
An Empirical Analysis of Determinants
of Retailer Pricing Strategy
This paper empirically investigates the determinants of retailers' pricing
decisions. It finds that competitor factors explain the most variance in
retailer pricing strategy. Only in the cases of price-promotion coordination
and relative brand price do category and chain factors explain much variance
in retailer pricing. These findings are derived from...
Tags: Retail Company, Pricing Strategy, Analysis, Institute For Operations
Research,Retail...
White papers 2004-01-23
Molecular genetic (RAPD)
analysis of leach's storm-
petrels
ABSTRACT.-Leach's Storm-Petrels Oceanodroma leucorhoa breed colonially
on islands off the Atlantic coast of Canada and exhibit strong site and mate
fidelity. We used RAPD markers to estimate relationships among 91 petrels
representing three colonies: Bon Portage Island and Big White Island in Nova
Scotia, and Gull Island in Newfoundland. Analysis...
Tags: Molecular Inc.
Research articles 1999-04-01
The
Mathematics Of
Returns-Based
Style Analysis
This article, explains the mathematics of Sharpe's algorithm. As it turns out,
a fairly complete and mathematically rigorous description of the algorithm
can be given without using a lot of mathematical formalism. William F.
Sharpe's method of returns- based style analysis is substantially different
from classical multivariate regression analysis. While...
Tags: Asset, Asset Class, Analysis, Zephyr Associates, William F. Sharpe...
White papers 2002-08-01
Wha
t
Fact
ors
Expl
ain
the
Num
ber
of
Phys
ical
Ther
apy
Treat
ment
Sessi
ons
in
Patie
nts
Refe
rred
With
Low
Back
Pain:
A
Mult
ilevel
Anal
ysis
It is well-known that the number of physical therapy treatment sessions
varies over treatment episodes. Information is lacking, however, on the
source and explanation of the variation. The purposes of the current study
are: to determine how the variance in the number of physical therapy
treatment sessions in patients with...
Tags: Patient, Therapy, Treatment Session
White papers 2005-11-24
Enter
prise
Produ
cts
Partn
ers LP
Q3
2007
Earni
ngs
Call
Trans
cript
Question-and-Answer SessionOperator Thank you. We will now begin the
question-and-answer session. [Operator Instructions]. Our first question
comes from Mr. Darren Horowitz of Raymond James. You may ask your
question. Darren Horowitz - Raymond James Good morning guys. My first
question has to do Randy, with what...
Tags: Enterprise Products Partners L.P.
Earnings calls 2007-10-25
Enterprise
Products
Partners L
2007 Earn
Call Trans
Question-and-Answer SessionOperator Thank you. We will now begin the
question-and-answer session. [Operator Instructions]. Our first question
comes from Mr. Darren Horowitz of Raymond James. You may ask your
question. Darren Horowitz - Raymond James Good morning guys. My first
question has to do Randy, with what...
Tags: Enterprise Products Partners L.P.
Earnings calls 2007-10-25
On Taking
Route: Ris
and Perfor
Persistenc
Using a new database of hedge funds, this paper provides a comprehensive
analysis of the risk-return characteristics, risk exposures, style analysis and
performance persistence of various hedge fund strategies. We conduct a
mean-variance analysis to find that a combination of alternative investments
and passive indexing provides significantly better risk-return trade-off...
Tags: Performance, London Business School, Hedge Fund, Analysis, Investment...
White papers 1999-02-01
Is there a
Time
This paper discusses "case for property" in the mixed-asset portfolio, which is
a topic of continuing interest to practitioners and academics. Such an
analysis typically is performed over a fixed period of time and the optimum
allocation to property inferred from the weight assigned to property through
the use of...
Tags: Property, Analysis, University Of Reading
White papers 2002-11-04
Confirmat
Consisten
The validity and reliability of the Student-life Stress Inventory, SSI, was
studied by analyzing the responses made to it by 381 students who were
enrolled in classes at a state university. The confirmatory factor analyses and
the analysis of variance were used to compute the validity. The internal
consistency was...
Tags: SSI Ltd.
Research articles 2001-06-01
Spiritualit
from intro
The authors discuss the results of a content analysis of 14 syllabi of
introductory courses on spirituality in counseling. Course syllabi were
examined to determine trends in the content of these courses and to
determine if the instruction is consistent with 9 competencies developed at
the...
Tags: Computer Associates International Inc.
Research articles 2004-01-01
Examining
analysis: h
strategic c
The domestic U.S. airline industry revealed improved performance metrics in
2005 compared to 2004. For example, 2005 operating losses were about
$2.1 billion on operating revenues of $111 billion, compared to 2004
operating losses of $3.5 billion on operating revenues of $101 billion. (1)
Consistent with the increase in...
Tags: Southwest Airlines Co.
Research articles 2008-06-22
Risk Mana
Although risk management has been a well-ploughed field in financial
modeling for over two decades, traditional risk management tools such as
mean-variance analysis, beta, and Value-at-Risk do not capture many of the
risk exposures of hedge-fund investments. In this article, I review several
aspects of risk management that are unique...
Tags: Hedge Fund, Social Science Electronic Publishing Inc., Risk Management,Financial
Planning, Financial Services...
White papers 2003-01-01
Optimal D
Recent research has demonstrated that the Markowitz efficient frontier is
fuzzy and may consist of many statistically indistinguishable frontiers.
Therefore, it opens the possibility that an efficient portfolio developed by
mean-variance analysis may not be any more efficient than a naively
diversified portfolio. Using an efficiency test, its found that...
Tags: Portfolio, Real Estate, Business Operations
White papers 2003-01-01
Neighboring Terms
• Variable Annuity
• Variable Costing
• Variable Interest Rate
• Variable Rate Note
• Variance
• Variance Analysis
• Variance Components
• Variety Reduction
• VAT
• VAT Inspector
• VAT Paid
BNET
• BNET US
• BNET AU
• BNET UK
• BNET China
Site Help & Feedback | About BNET | Reprint Policy
Popular on CBS sites: Fantasy Baseball | iPad | Video Game Reviews | Cell Phones | NFL
Draft
Top of Form
On this page
• Dictionary
• Sci-Tech Encycl.
• Business
• Accounting
• Public Health
• Sports & Medicine
• Wikipedia
• Best of Web
• Copyrights
Library
• Animal Life
• Business & Finance
• Cars & Vehicles
• Entertainment & Arts
• Food & Cooking
• Health
• History, Politics, Society
• Home & Garden
• Law & Legal Issues
• Literature & Language
• Miscellaneous
• Religion & Spirituality
• Science
• Shopping
• Sports
• Technology
• Travel
• Q&A
analysis of variance
Dictionary:analysis of variance
Sponsored Links
Statistics for Excel
Integrated Excel Add-In Statistical tests and diagrams
www.winstat.com
Statistical Software
Statistical Tools to Analyze Data & Improve Quality - Try Free Trial!
www.Minitab.com
Home > Library > Literature & Language > Dictionary
n.
An analysis of the variation in the outcomes of an experiment to assess the contribution of each
variable to the variation.
Sci-Tech Encyclopedia:
Analysis of variance
Total variation in experimental data is partitioned into components assignable to specific sources by
the analysis of variance. This statistical technique is applicable to data for which (1) effects of sources
are additive, (2) uncontrolled or unexplained experimental variations (which are grouped as
experimental errors) are independent of other sources of variation, (3) variance of experimental errors
is homogeneous, and (4) experimental errors follow a normal distribution. When data depart from
these assumptions, one must exercise extreme care in interpreting the results of an analysis of
variance. Statistical tests indicate the contribution of the components to the observed variation. See
also Statistics.
Business Dictionary:
Analysis of Variance
Statistical model that tests whether or not groups of data have the same or differing means. The
ANOVA model operates by comparing the amounts of dispersion experienced by each of the groups to
the total amount of dispersion in the data.
Accounting Dictionary:
Analysis of Variances
Seeking causes for variances between standard costs and actual costs; also called variance analysis.
A variance is considered favorable if actual costs are less than standard costs; it is unfavorable if
actual costs exceed standard costs. Unfavorable variances need further investigation. Analysis of
variances reveals the causes of these deviations. This feedback aids in planning future goals,
controlling costs, evaluating performance, and taking corrective action. Management by exception is
based on the analysis of variances, and attention is given only to the variances that require remedial
action.
Encyclopedia of Public Health:
Analysis of Variance
Analysis of variance (ANOVA) is a statistical technique that can be used to evaluate whether there are
differences between the average value, or mean, across several population groups. With this model,
the response variable is continuous in nature, whereas the predictor variables are categorical. For
example, in a clinical trial of hypertensive patients, ANOVA methods could be used to compare the
effectiveness of three different drugs in lowering blood pressure. Alternatively, ANOVA could be used
to determine whether infant birth weight is significantly different among mothers who smoked during
pregnancy relative to those who did not. In the simplest case, where two population means are being
compared, ANOVA is equivalent to the independent two-sample t-test.
One-way ANOVA evaluates the effect of a single factor on a single response variable. For example, a
clinician may be interested in determining whether there are differences in the age distribution of
patients enrolled in two different study groups. Using ANOVA to make this comparison requires that
several assumptions be satisfied. Specifically, the patients must be selected randomly from each of
the population groups, a value for the response variable is recorded for each sampled patient, the
distribution of the response variable is normally distributed in each population, and the variance of the
response variable is the same in each population. In the above example, age would represent the
response variable, while the treatment group represents the independent variable, or factor, of
interest.
As indicated through its designation, ANOVA compares means by using estimates of variance.
Specifically, the sampled observations can be described in terms of the variation of the individual
values around their group means, and of the variation of the group means around the overall mean.
These measures are frequently referred to as sources of "within-groups" and "between-groups"
variability, respectively. If the variability within the k different populations is small relative to the
variability between the group means, this suggests that the population means are different. This is
formally tested using a test of significance based on the F distribution, which tests the null hypothesis
(H0) that the means of the k groups are equal:
H0: μ1 = μ2 = μ3 = … = μk
An F-test is constructed by taking the ratio of the "between-groups" variation to the "within-groups"
variation. If n represents the total number of sampled observations, this ratio has an F distribution
with k − 1 and n − k degrees of freedom in the numerator and denominator, respectively. Under the null hypothesis,
the "within-groups" and "between-groups" variance both estimate the same underlying population
variance and the F ratio is close to one. If the between-groups variance is much larger than the
within-groups, the F ratio becomes large and the associated p-value becomes small. This leads to
rejection of the null hypothesis, thereby concluding that the means of the groups are not all equal.
When interpreting the results from the ANOVA procedures it is helpful to comment on the strength of
the observed association, as significant differences may result simply from having a very large number
of samples.
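The between-groups versus within-groups comparison described above can be sketched in a few lines of Python (a minimal illustration; the group values are made up):

```python
# Minimal one-way ANOVA sketch: partition variability into
# "between-groups" and "within-groups" components and form the F ratio.
# The data below are made up for illustration.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # "Between-groups" variation: group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # "Within-groups" variation: individual values around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)

    f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
    return f_stat, k - 1, n - k

groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
f_stat, df1, df2 = one_way_anova_f(groups)
print(f_stat, df1, df2)  # F = 3.0 with 2 and 6 degrees of freedom
```

The p-value would then come from comparing `f_stat` to the F distribution with `df1` and `df2` degrees of freedom, as described in the text.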
Multi-way analysis of variance is an extension of the one-way model that allows for the
inclusion of additional independent nominal variables. In some analyses, researchers may wish to
adjust for group differences for a variable that is continuous in nature. For example, in the example
cited above, when evaluating the effectiveness of hypertensive agents administered to three groups,
we may wish to control for group differences in the age of the patients. The addition of a continuous
variable to an existing ANOVA model is referred to as analysis of covariance (ANCOVA).
In public health, agriculture, engineering, and other disciplines, there are numerous study designs
whereby ANOVA procedures can be used to describe collected data. Subtle differences in these study
designs require different analytic strategies. For example, selecting an appropriate ANOVA model is
dependent on whether repeated measurements were taken on the same patient, whether the same
number of samples were taken in each population, and whether the independent variables are
considered as fixed or random variables. A description of these caveats is beyond the scope of this
encyclopedia, and the reader is referred to the bibliography for more comprehensive coverage of this
material. However, several of the more commonly used ANOVA models include the randomized block,
the split-plot, and factorial designs.
(SEE ALSO: Epidemiology; Statistics for Public Health)
Bibliography
Cochran, W. G., and Cox, G. M. (1957). Experimental Design, 2nd edition. New York: Wiley.
Cox, D. R. (1966). Planning of Experiments. New York: Wiley.
Kleinbaum, D. G.; Kupper, L. L.; and Muller, K. E. (1987). Applied Regression Analysis and Other
Multivariate Methods, 2nd edition. Boston: PWS-Kent Publishing Company.
Snedecor, G. W., and Cochran, W. G. (1989). Statistical Methods, 8th edition. Ames, IA: Iowa State
University Press.
— PAUL J. VILLENEUVE
ANOVA
A statistical technique to analyse the total variation of a set of observations as measured by
the variance of the observations multiplied by their number. Analysis of variance is used to determine
whether the differences between the means of several sample groups are statistically significant.
Wikipedia:
Analysis of variance
In statistics, analysis of variance (ANOVA) is a collection of statistical models, and their associated
procedures, in which the observed variance is partitioned into components due to different sources of
variation. In its simplest form ANOVA provides a statistical test of whether or not the means of several
groups are all equal, and therefore generalizes Student's two-sample t-test to more than two groups.
ANOVAs are helpful because they possess an advantage over multiple two-sample t-tests: doing
multiple two-sample t-tests would result in a greatly increased chance of committing a type I error.
For this reason, ANOVAs are useful in comparing three or more means.
Contents
• 1 Overview
• 2 Models
• 3 Assumptions of ANOVA
• 4 Logic of ANOVA
• 5 Examples
• 6 History
• 7 See also
• 8 Notes
• 9 References
• 10 External links
Overview
There are three classes of models used in the analysis of variance:
1. Fixed-effects models assume that the data came from normal populations which may differ
only in their means. (Model 1)
2. Random effects models assume that the data describe a hierarchy of different populations
whose differences are constrained by the hierarchy. (Model 2)
3. Mixed-effect models describe situations where both fixed and random effects are present.
(Model 3)
In practice, there are several types of ANOVA depending on the number of treatments and the way
they are applied to the subjects in the experiment:
One-way ANOVA is used to test for differences among two or more independent groups.
Typically, however, the one-way ANOVA is used to test for differences among at least three
groups, since the two-group case can be covered by a t-test (Gosset, 1908). When there are only
two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA
and t is given by F = t2.
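The equivalence of the two-group t-test and F-test mentioned above can be checked numerically (a minimal sketch; the two samples are made up):

```python
# Check numerically that, with exactly two groups, the one-way ANOVA
# F statistic equals the square of the pooled two-sample t statistic.
import math

a, b = [1, 2, 3], [3, 4, 5]          # made-up samples
na, nb = len(a), len(b)
mean_a, mean_b = sum(a) / na, sum(b) / nb

# Pooled two-sample t statistic.
ss_a = sum((x - mean_a) ** 2 for x in a)
ss_b = sum((x - mean_b) ** 2 for x in b)
sp2 = (ss_a + ss_b) / (na + nb - 2)   # pooled variance
t = (mean_a - mean_b) / math.sqrt(sp2 * (1 / na + 1 / nb))

# One-way ANOVA F statistic for the same two groups.
grand = (sum(a) + sum(b)) / (na + nb)
ss_between = na * (mean_a - grand) ** 2 + nb * (mean_b - grand) ** 2
ss_within = ss_a + ss_b
f_stat = (ss_between / 1) / (ss_within / (na + nb - 2))

print(t ** 2, f_stat)  # both 6.0: F = t**2
```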
Factorial ANOVA is used when the experimenter wants to study the effects of two or more
treatment variables. The most commonly used type of factorial ANOVA is the 2×2 (read "two by
two") design, where there are two independent variables and each variable has two levels or
distinct values. However, such use of ANOVA for analysis of 2^k factorial designs and fractional
factorial designs is "confusing and makes little sense"; instead it is suggested to refer the value
of the effect divided by its standard error to a t-table.[1] Factorial ANOVA can also be multi-level
such as 3×3, etc. or higher order such as 2×2×2, etc., but analyses with higher numbers of factors
are rarely done by hand because the calculations are lengthy. However, since the introduction of
data analytic software, the utilization of higher order designs and analyses has become quite
common.
Repeated measures ANOVA is used when the same subjects are used for each treatment (e.g., in
a longitudinal study). Note that such within-subjects designs can be subject to carry-over effects.
Mixed-design ANOVA. When one wishes to test two or more independent groups while subjecting the
subjects to repeated measures, one may perform a factorial mixed-design ANOVA, in which one
factor is a between-subjects variable and the other is a within-subjects variable. This is a type of
mixed-effect model.
Multivariate analysis of variance (MANOVA) is used when there is more than one dependent
variable.
Models
The fixed-effects model of analysis of variance applies to situations in which the experimenter applies
several treatments to the subjects of the experiment to see if the response variable values change.
This allows the experimenter to estimate the ranges of response variable values that the treatment
would generate in the population as a whole.
Random-effects models are used when the treatments are not fixed. This occurs when the various
treatments (also known as factor levels) are sampled from a larger population. Because the
treatments themselves are random variables, some assumptions and the method of contrasting the
treatments differ from those of the fixed-effects model.
Most random-effects or mixed-effects models are not concerned with making inferences concerning
the particular sampled factors. For example, consider a large manufacturing plant in which many
machines produce the same product. The statistician studying this plant would have very little interest
in comparing the three particular machines to each other. Rather, inferences that can be made
for all machines are of interest, such as their variability and the mean. However, if one is interested in
the realized value of the random effect, best linear unbiased prediction can be used to obtain a
prediction of it.
Assumptions of ANOVA
Many textbooks present the analysis of variance in terms of a linear model, which makes the following
assumptions:
• Independence of cases – this is an assumption of the model that simplifies the statistical analysis.
• Normality – the distributions of the residuals are normal.
• Equality (or "homogeneity") of variances, called homoscedasticity – the variance of data in
groups should be the same. Model-based approaches usually assume that the variance is
constant. The constant-variance property also follows from the randomization of the
design and the assumption of unit treatment additivity (Hinkelmann and Kempthorne): If the
responses of a randomized balanced experiment fail to have constant variance, then the
assumption of unit treatment additivity is necessarily violated.
Levene's test for homogeneity of variances is typically used to examine the plausibility of
homoscedasticity.
When used in the analysis of variance to test the hypothesis that all treatments have exactly the same
effect, the F-test is robust (Ferguson & Takane, 2005, pp. 261–2).[2] The Kruskal–Wallis test is
a nonparametric alternative which does not rely on an assumption of normality, and the Friedman
test is the nonparametric alternative for a one-way repeated measures ANOVA.
The separate assumptions of the textbook model imply that the errors are independently, identically,
and normally distributed for fixed-effects models; that is, the errors are independent and identically
distributed as N(0, σ2).
Randomization-based analysis
In a randomized controlled experiment, the treatments are randomly assigned to experimental units,
following the experimental protocol. This randomization is objective and declared before the
experiment is carried out. The objective random-assignment is used to test the significance of the null
hypothesis, following the ideas of C. S. Peirce and Ronald A. Fisher. This design-based analysis was
discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar
Kempthorne at Iowa State University.[3] Kempthorne and his students make an assumption of unit
treatment additivity, which is discussed in the books of Kempthorne and David R. Cox.
Unit-treatment additivity
In its simplest form, the assumption of unit-treatment additivity states that the observed
response yi,j from experimental unit i when receiving treatment j can be written as the sum of the
unit's response yi and the treatment effect tj:
yi,j = yi + tj.[4]
The assumption of unit-treatment additivity implies that, for every treatment j, the jth treatment has
exactly the same effect tj on every experimental unit.
The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and
Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a
randomized experiment, the assumption of unit-treatment additivity implies that the variance is
constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment
additivity is that the variance is constant.
The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians
often use transformations to achieve unit-treatment additivity. If the response variable is expected to
follow a parametric family of probability distributions, then the statistician may specify (in the protocol
for the experiment or observational study) that the responses be transformed to stabilize the
variance.[5] Also, a statistician may specify that logarithmic transforms be applied to the responses,
which are believed to follow a multiplicative model. The assumption of unit treatment additivity was
enunciated in experimental design by Kempthorne and Cox. Kempthorne's use of unit treatment
additivity and randomization is similar to the design-based inference that is standard in
finite-population survey sampling.
Kempthorne uses the randomization-distribution and the assumption of unit treatment additivity to
produce a derived linear model, very similar to the textbook model discussed previously.
The test statistics of this derived linear model are closely approximated by the test statistics of an
appropriate normal linear model, according to approximation theorems and simulation studies by
Kempthorne and his students (Hinkelmann and Kempthorne). However, there are differences. For
example, the randomization-based analysis results in a small but (strictly) negative correlation
between the observations (Hinkelmann and Kempthorne, volume one, chapter 7; Bailey chapter 1.14).
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra
and extensive time. Since the randomization-based analysis is complicated and is closely
approximated by the approach using a normal linear model, most teachers emphasize the normal
linear model approach. Few statisticians object to model-based analysis of balanced randomized
experiments.
However, when applied to data from non-randomized experiments or observational studies, model-
based analysis lacks the warrant of randomization. For observational data, the derivation of confidence
intervals must use subjective models, as emphasized by Ronald A. Fisher and his followers. In
practice, the estimates of treatment effects from observational studies are often inconsistent
(Freedman). In practice, "statistical models" and observational data are useful for suggesting
hypotheses that should be treated very cautiously (Freedman).
Logic of ANOVA
The fundamental technique is a partitioning of the total sum of squares (abbreviated SS) into
components related to the effects used in the model. For example, for a simplified ANOVA with one
type of treatment at different levels:
SSTotal = SSError + SSTreatments
The number of degrees of freedom (df) can be partitioned in a similar way: one of these components
(that for error) specifies a chi-square distribution which describes the associated sum of squares,
while the same is true for "treatments" if there is no treatment effect:
dfTotal = dfError + dfTreatments
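The sum-of-squares partition can be verified numerically (a minimal sketch with made-up data):

```python
# Verify the partition SS_Total = SS_Treatments + SS_Error numerically
# for a small made-up one-way layout.

groups = [[4, 5, 6], [6, 7, 8], [9, 10, 11]]
n = sum(len(g) for g in groups)
grand = sum(sum(g) for g in groups) / n

# Total: every observation around the grand mean.
ss_total = sum((x - grand) ** 2 for g in groups for x in g)
# Treatments: group means around the grand mean, weighted by group size.
ss_treat = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
# Error: observations around their own group mean.
ss_error = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

print(ss_total, ss_treat + ss_error)  # equal: the decomposition holds
```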
The F-test
The F-test is used for comparisons of the components of the total deviation. For example, in one-way,
or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic
F = MSTR / MSE,
where MSTR = SSTreatments / (I − 1) and MSE = SSError / (nT − I), with
I = number of treatments
and
nT = total number of cases,
to the F-distribution with I − 1, nT − I degrees of freedom. Using the F-distribution is a natural
candidate because the test statistic is the quotient of two mean sums of squares which have a chi-
square distribution.
ANOVA on ranks
When the data do not meet the assumptions of normality, the suggestion has arisen to replace each
original data value by its rank (from 1 for the smallest to N for the largest), then run a standard
ANOVA calculation on the rank-transformed data. Conover and Iman (1981) provided a review of the
four main types of rank transformations. Commercial statistical software packages (e.g., SAS, 1985,
1987, 2008) followed with recommendations to data analysts to run their data sets through a ranking
procedure (e.g., PROC RANK) prior to conducting standard analyses using parametric procedures.
This rank-based procedure has been recommended as being robust to non-normal errors, resistant to
outliers, and highly efficient for many distributions. It may result in a known statistic (e.g., Wilcoxon
Rank-Sum / Mann-Whitney U), and indeed provide the desired robustness and increased statistical
power that is sought. For example, Monte Carlo studies have shown that the rank transformation in
the two independent samples t test layout can be successfully extended to the one-way independent
samples ANOVA, as well as the two independent samples multivariate Hotelling's T2 layouts (Nanna,
2002).
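The rank-transform procedure described above can be sketched as follows (made-up, tie-free data):

```python
# Sketch of the rank-transform procedure: pool all observations,
# replace each value by its rank (1 = smallest), then run an ordinary
# one-way ANOVA on the ranks. Made-up data without ties.

def ranks(values):
    """Rank of each value in its original position (1 = smallest; no ties)."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

groups = [[2.1, 3.5, 1.8], [4.2, 5.0, 3.9], [7.7, 6.4, 8.1]]
pooled = [x for g in groups for x in g]
r = ranks(pooled)

# Re-split the ranks back into their groups.
rank_groups, i = [], 0
for g in groups:
    rank_groups.append(r[i:i + len(g)])
    i += len(g)

# Standard one-way ANOVA computed on the rank-transformed data.
k, n = len(rank_groups), len(pooled)
grand = sum(r) / n
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in rank_groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                for g in rank_groups)
f_on_ranks = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f_on_ranks)  # 27.0 for this data
```

For two groups, the same idea yields a statistic equivalent to the Wilcoxon rank-sum / Mann-Whitney U test mentioned above.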
Conducting factorial ANOVA on the ranks of original scores has also been suggested (Conover & Iman,
1976, Iman, 1974, and Iman & Conover, 1976). However, Monte Carlo studies by Sawilowsky (1985a;
1989 et al.; 1990) and Blair, Sawilowsky, and Higgins (1987), and subsequent asymptotic studies
(e.g. Thompson & Ammann, 1989; "there exist values for the main effects such that, under the null
hypothesis of no interaction, the expected value of the rank transform test statistic goes to infinity as
the sample size increases," Thompson, 1991, p. 697), found that the rank transformation is
inappropriate for testing interaction effects in a 4x3 and a 2x2x2 factorial design. As the number of
effects (i.e., main, interaction) become non-null, and as the magnitude of the non-null effects
increase, there is an increase in Type I error, resulting in a complete failure of the statistic with as
high as a 100% probability of making a false positive decision. Similarly, Blair and Higgins (1985)
found that the rank transformation increasingly fails in the two dependent samples layout as the
correlation between pretest and posttest scores increase. Headrick (1997) discovered the Type I error
rate problem was exacerbated in the context of Analysis of Covariance, particularly as the correlation
between the covariate and the dependent variable increased. For a review of the properties of the
rank transformation in designed experiments, see Sawilowsky (2000).
A variant of rank transformation is "quantile normalization," in which a further transformation is applied
to the ranks such that the resulting values have some defined distribution (often a normal distribution
with a specified mean and variance). Further analyses of quantile-normalized data may then assume
that distribution to compute significance values. However, two specific types of secondary
transformations, the random normal scores and expected normal scores transformation, have been
shown to greatly inflate Type I errors and severely reduce statistical power (Sawilowsky, 1985a,
1985b).
Effect size measures
Several standardized measures of effect are used within the context of ANOVA to describe the degree
of relationship between a predictor or set of predictors and the dependent variable. Effect size
estimates are reported to allow researchers to compare findings across studies and disciplines.
Common effect size estimates reported in univariate (e.g. ANOVA) and multivariate (MANOVA,
ANCOVA, multiple discriminant analysis) statistical analyses include eta-squared, partial eta-squared,
and omega squared.
η2 ( eta-squared ): Eta-squared describes the ratio of variance explained in the dependent variable by
a predictor while controlling for other predictors. Eta-squared is a biased estimator of the variance
explained by the model in the population (it only estimates effect size in the sample). On average it
overestimates the variance explained in the population. As the sample size gets larger the amount of
bias gets smaller. It is, however, an easily calculated estimator of the proportion of the variance in a
population explained by the treatment. Note that earlier versions of statistical software (such as SPSS)
incorrectly reported partial eta-squared under the misleading title "eta-squared".
Partial η2 (partial eta-squared): Partial eta-squared describes the "proportion of total variation
attributable to the factor, partialling out (excluding) other factors from the total nonerror variation"
(Pierce, Block & Aguinis, 2004, p. 918). Partial eta-squared is normally higher than eta-squared.
The generally accepted regression benchmark for effect size comes from Cohen (1992; 1988): 0.20 is
a minimal (small) effect (but significant in social science research); 0.50 is a medium effect; anything equal
to or greater than 0.80 is a large effect size (Keppel & Wickens, 2004; Cohen, 1992).
Because this common interpretation of effect size has been repeated from Cohen (1988) over the
years with no change or comment on its validity for contemporary experimental research, it is often applied
without a full understanding of the limitations ascribed by Cohen. Note: the use of specific partial eta-
squared values for large, medium, or small effects as a "rule of thumb" should be avoided.
Nevertheless, alternative rules of thumb have emerged in certain disciplines: small = 0.01; medium =
0.06; large = 0.14.
Omega squared (ω2): Omega squared provides a relatively unbiased estimate of the variance explained in
the population by a predictor variable. It takes random error into account more so than eta-squared,
which is biased upward. The calculations for omega squared differ depending on the
experimental design. For a fixed experimental design (in which the categories are explicitly set), a
common form is
ω2 = (SSbetween − (k − 1) MSwithin) / (SStotal + MSwithin).
Cohen's f is a related measure; conceptually it represents the square root of variance explained over variance not
explained.
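For illustration, eta-squared and omega squared can be computed for a small one-way fixed-effects example (made-up data; the omega-squared expression used is the common fixed-design form):

```python
# Effect-size sketch for a one-way fixed-effects ANOVA (made-up data).
# eta-squared = SS_between / SS_total (sample proportion explained,
# biased upward); omega squared adjusts for random error.

groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
k = len(groups)
n = sum(len(g) for g in groups)
grand = sum(sum(g) for g in groups) / n

ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
ss_total = ss_between + ss_within
ms_within = ss_within / (n - k)

eta_sq = ss_between / ss_total
omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)
print(eta_sq, omega_sq)  # 0.5 and about 0.308; omega squared is smaller
```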
Follow up tests
A statistically significant effect in ANOVA is often followed up with one or more different follow-up
tests. This can be done in order to assess which groups are different from which other groups or to
test various other focused hypotheses. Follow up tests are often distinguished in terms of whether
they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and
post hoc tests are performed after looking at the data. Post hoc tests such as Tukey's range test most
commonly compare every group mean with every other group mean and typically incorporate some
method of controlling for Type I errors. Comparisons, which are most commonly planned, can be
either simple or compound. Simple comparisons compare one group mean with one other group
mean. Compound comparisons typically compare two sets of group means, where one set has two
or more groups (e.g., compare the average of the means of groups A, B, and C with the mean of
group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships,
when the independent variable involves ordered levels.
Power analysis
Power analysis is often applied in the context of ANOVA in order to assess the probability of
successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the
population, sample size and alpha level. Power analysis can assist in study design by determining what
sample size would be required in order to have a reasonable chance of rejecting the null hypothesis.
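As a sketch of how such a power calculation can be carried out, the following simulation estimates power for a hypothetical one-way design (assumed values: three groups of 10, true means 0, 0, and 1, unit standard deviation, α = 0.05; the critical value is estimated empirically from null simulations rather than taken from F tables):

```python
# Monte Carlo power sketch for a one-way ANOVA under assumed parameters.
import random

random.seed(42)

def f_stat(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

def simulate(means, n_per_group):
    """Draw one dataset: normal samples with the given group means, sd 1."""
    return [[random.gauss(m, 1.0) for _ in range(n_per_group)] for m in means]

# Empirical 95th percentile of F under the null (all means equal).
null_f = sorted(f_stat(simulate([0, 0, 0], 10)) for _ in range(4000))
f_crit = null_f[int(0.95 * len(null_f))]

# Power: proportion of alternative-hypothesis datasets that reject H0.
rejections = sum(f_stat(simulate([0, 0, 1], 10)) > f_crit
                 for _ in range(2000))
power = rejections / 2000
print(round(power, 2))  # roughly 0.6 for this configuration
```

Increasing the per-group sample size in `simulate` and re-running shows how large a study would need to be to reach, say, 80% power.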
Examples
In a first experiment, Group A is given vodka, Group B is given gin, and Group C is given a placebo. All
groups are then tested with a memory task. A one-way ANOVA can be used to assess the effect of
the various treatments (that is, the vodka, gin, and placebo).
In a second experiment, Group A is given vodka and tested on a memory task. The same group is
allowed a rest period of five days and then the experiment is repeated with gin. The procedure is
repeated once more using a placebo. A one-way ANOVA with repeated measures can be used to assess the
effect of the treatments on the same subjects.
In a third experiment testing the effects of expectations, subjects are randomly assigned to four
groups:
1. expect vodka—receive vodka
2. expect vodka—receive placebo
3. expect placebo—receive vodka
4. expect placebo—receive placebo (the last group is used as the control group)
Each group is then tested on a memory task. The advantage of this design is that multiple variables
can be tested at the same time instead of running two different experiments. Also, the experiment can
determine whether one variable affects the other variable (known as interaction effects). A factorial
ANOVA (2×2) can be used to assess the effect of expecting vodka or the placebo and the actual
reception of either.
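The balanced 2×2 factorial analysis described in the third experiment can be sketched as follows (the memory scores and cell labels are made up for illustration):

```python
# Balanced 2x2 factorial ANOVA sketch: partition the between-cell sum of
# squares into two main effects and their interaction. Made-up scores.

# cells[(expectation, drink)] -> memory scores (hypothetical data)
cells = {
    ("expect vodka", "vodka"):     [1, 2],
    ("expect vodka", "placebo"):   [2, 3],
    ("expect placebo", "vodka"):   [3, 4],
    ("expect placebo", "placebo"): [6, 7],
}

n_cell = 2                       # balanced: same n in every cell
all_scores = [x for v in cells.values() for x in v]
grand = sum(all_scores) / len(all_scores)

def level_mean(factor_index, level):
    vals = [x for key, v in cells.items() if key[factor_index] == level
            for x in v]
    return sum(vals) / len(vals)

levels_a = ["expect vodka", "expect placebo"]
levels_b = ["vodka", "placebo"]

# Main effects: level means around the grand mean (each level spans
# 2 cells x n_cell observations in this balanced design).
ss_a = sum(2 * n_cell * (level_mean(0, lv) - grand) ** 2 for lv in levels_a)
ss_b = sum(2 * n_cell * (level_mean(1, lv) - grand) ** 2 for lv in levels_b)

# Cells: cell means around the grand mean; interaction is the remainder.
ss_cells = sum(n_cell * (sum(v) / n_cell - grand) ** 2 for v in cells.values())
ss_ab = ss_cells - ss_a - ss_b

ss_error = sum(sum((x - sum(v) / n_cell) ** 2 for x in v)
               for v in cells.values())

print(ss_a, ss_b, ss_ab, ss_error)  # 18.0 8.0 2.0 2.0
```

Each sum of squares is then divided by its degrees of freedom (1 for each effect here, 4 for error) and compared to the error mean square via an F-test, as in the one-way case.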
History
The analysis of variance was used informally by researchers in the 1800s using least squares. In
physics and psychology, researchers included a term for the operator effect, the influence of a
particular observer on the measurements.
In its modern form, the analysis of variance was one of the many important
statistical innovations of Ronald A. Fisher. Fisher proposed a formal analysis of variance in his 1918
paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance[8]. His first
application of the analysis of variance was published in 1921[9]. Analysis of variance became widely
known after being included in Fisher's 1925 book Statistical Methods for Research Workers.
See also
AMOVA
ANCOVA
ANORVA
Design of experiments
Kruskal–Wallis test
Friedman test
MANOVA
Measurement uncertainty
Multiple comparisons
Squared deviations
t-test
Notes
1. ^ Box, Hunter and Hunter. Statistics for experimenters. Wiley. p. 188 "Misuse of the ANOVA
for 2k factorial experiments".
2. ^ Non-statisticians may be confused because another F-test is nonrobust: when used to test
the equality of the variances of two populations, the F-test is unreliable if there are
deviations from normality.
3. ^
4. ^ Kempthorne, Cox (Chapter 2), and Hinkelmann and Kempthorne (Chapters 5–6).
6. ^ The logarithm is the only continuous transformation converting real multiplication to
addition.
7. ^ [1]
8. ^ http://www.library.adelaide.edu.au/digitised/fisher/9.pdf
9. ^ Fisher, R. A. (1921). "Studies in Crop Variation. I. An examination of the yield of dressed
grain from Broadbalk". Journal of Agricultural Science, 11, 107–135.
http://www.library.adelaide.edu.au/digitised/fisher/15.pdf
References
Addelman, Sidney (Oct. 1969). "The Generalized Randomized Block Design". The American
Addelman, Sidney (Sep. 1970). "Variability of Treatments and Experimental Units in the Design
pp. 1095–1108.
Anscombe, F. J. (1948). "The Validity of Comparative Experiments". Journal of the Royal
Bailey, R. A (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-
Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer.
Blair, R. C., & Higgins, J. J. (1985). A Comparison of the Power of the Paired Samples Rank
Transform Statistic to that of Wilcoxon’s Signed Ranks Statistic. Journal of Educational and
Blair, R. C., Sawilowsky, S. S., & Higgins, J. J. (1987). Limitations of the rank transform in
Caliński, Tadeusz and Kageyama, Sanpei (2000). Block designs: A Randomization approach,
Volume I: Analysis. Lecture Notes in Statistics. 150. New York: Springer-Verlag. ISBN 0-387-
98578-6.
Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear
Cohen, J. (1988). Statistical power analysis for the behavior sciences (2nd ed.).
Conover, W. J., & Iman, R. L. (1981). Rank transformations as a bridge between parametric and
Conover, W. J., & Iman, R. L. (1976). On some alternative procedures using ranks for the
Cox, David R. and Reid, Nancy M. The theory of design of experiments. (Chapman & Hall/CRC,
2000).
Ferguson, George A., Takane, Yoshio. (2005). "Statistical Analysis in Psychology and Education",
Freedman, David A. et alia. Statistics, 4th edition (W.W. Norton & Company, 2007) [4]
Freedman, David A. Statistical Models: Theory and Practice (Cambridge University Press) [5]
Gates, Charles E. (Nov. 1995). "What Really Is Experimental Error in Block Designs?". The
Headrick, T. C. (1997). Type I error and power of the rank transform analysis of covariance
Florida.
Helsel, D. R., & Hirsch, R. M. (2002). Statistical Methods in Water Resources: Techniques of
Water Resourses Investigations, Book 4, chapter A3. U.S. Geological Survey. 522 pages.[6]
Kendall's Library of Statistics. 5 (First ed.). London: Edward Arnold. pp. xiv+467 pp.. ISBN 0-
• Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and
• Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume
• Hinkelmann, Klaus and Kempthorne, Oscar (2005). Design and Analysis of Experiments, Volume
• Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952)
• Iman, R. L. (1974). A power study of a rank transform for the two-way classification model when
• Iman, R. L., & Conover, W. J. (1976). A comparison of several rank tests for the two-way
• King, Bruce M., Minium, Edward W. (2003). Statistical Reasoning in Psychology and Education, Fourth Edition. Hoboken, New Jersey: John Wiley & Sons, Inc. ISBN 0-471-21187-7.
• Lentner, Marvin; Thomas Bishop (1993). Experimental Design and Analysis (Second ed.). P.O.
• Keppel, G. & Wickens, T.D. (2004). Design and Analysis: A Researcher's Handbook (4th ed.).
• Kittler, J.E., Menard, W. & Phillips, K.A. (2007). Weight concerns in individuals with body
• Pierce, C.A., Block, R.A. & Aguinis, H. (2004). Cautionary note on reporting eta-squared values from multifactor ANOVA designs. Educational and Psychological Measurement, 64(6), 916–924.
• SAS Institute. (1985). SAS/STAT guide for personal computers (5th ed.). Cary, NC: Author.
• SAS Institute. (1987). SAS/STAT guide for personal computers (6th ed.). Cary, NC: Author.
• SAS Institute. (2008). SAS/STAT 9.2 User's Guide: Introduction to Nonparametric Analysis. Cary, NC: Author.
• Sawilowsky, S. (1985a). Robust and power analysis of the 2x2x2 ANOVA, rank transformation, random normal scores, and expected normal scores transformation tests. Unpublished doctoral
• Sawilowsky, S. (1985b). A comparison of random normal scores test under the F and Chi-square distributions to the 2x2x2 ANOVA test. Florida Journal of Educational Research, 27, 83–97.
• Sawilowsky, S. (2000). Review of the rank transform in designed experiments. Perceptual and
• Sawilowsky, S., Blair, R. C., & Higgins, J. J. (1989). An investigation of the type I error and power properties of the rank transform procedure in factorial ANOVA. Journal of Educational
• Strang, K.D. (2009). Using recursive regression to explore nonlinear relationships and
• Thompson, G. L. (1991). A note on the rank transform for interactions. Biometrika, 78(3), 697–701.
• Thompson, G. L., & Ammann, L. P. (1989). Efficiencies of the rank-transform in two-way models
• "Balanced Incomplete Block Design". The Annals of Mathematical Statistics 34 (4): 1569–1581. doi:10.1214/aoms/1177703889.
External links
• Examples of all ANOVA and ANCOVA models with up to three treatment factors, including
• NIST/SEMATECH e-Handbook of Statistical Methods, section 7.4.3: "Are the means equal?"
The most sophisticated systems separate unit and price factors on materials, hours worked, cost-per-hour on direct
labor, and fixed and variable overhead variances. Though difficult, this kind of analysis can be invaluable in a
complex business.
In this example, the $5,000 positive variance in advertising in January means $5,000 less than planned was spent,
and the $7,000 positive variance for literature in February means $7,000 less than planned was spent. The negative
variance for advertising in February and March, and the negative variance for literature in March, show that more was
spent than was planned for those items.
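The sign convention described above (for expense items, variance is planned spending minus actual spending, so a positive number means under budget) can be sketched in a few lines. The planned/actual split used here is hypothetical, chosen only to reproduce the $5,000 January advertising variance from the example:

```python
def expense_variance(planned, actual):
    """For an expense item: positive result = under budget (spent less than planned)."""
    return planned - actual

# Hypothetical January advertising figures consistent with the text's
# $5,000 positive variance: $12,000 planned, $7,000 actually spent.
adv_jan = expense_variance(12_000, 7_000)
print(adv_jan)       # 5000 -> $5,000 under plan
print(adv_jan > 0)   # True -> a "positive" variance, though not necessarily good news
```

Note that the sign flips for revenue items: there, actual minus planned is the natural convention, since selling more than planned is the favorable outcome.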
Illustration 1: Profit and Loss Variance
Evaluating these variances takes thought. Positive variances aren’t always good news. For example, the positive
variance of $5,000 in advertising means that money wasn’t spent, but it also means that advertising wasn’t placed.
Systems sales are well below expectations for this same period; could the advertising missed in January be a possible cause?
Among the larger single variances for an expense item in a month shown on the illustration was the positive $7,000
variance for the new literature expenses in February. Is this good news or bad news? It may be evidence of a missed
deadline for literature that wasn’t actually completed until March. If so, at least it appears that the costs on completion
were $6,401, a bit less than the $7,000 planned.
Every variance should stimulate questions. Why did one project cost more or less? Were objectives met? Is a positive
variance a cost saving or a failure to implement? Is a negative variance a change in plans, a management failure, or
an unrealistic budget?
A variance table can provide management with significant information. Without this data, some of these important
questions might go unasked.
More on variance
Variance analysis on sales can be very complex. There can be very significant differences between higher or lower
sales because of different unit volumes, or because of different average prices. Illustration 2 shows the Sales
Forecast table (including costs) in variance mode, for the example company.
Illustration 2: Sales Forecast Variance
The units variance shows that the sales of systems were disappointing. In the expenses outlined in Illustration 1, we
see that advertising and mailing costs were below plan. Could there be a correlation between the saved expenses in
mailing, and the lower-than-planned sales? Yes, of course there could.
The mailing cost was much less than planned, but perhaps as a result, the planned sales never came. The positive expense variance is not good for the company.
In systems, the comparison between units variance and sales variance yields no surprises. The lower-than-expected
unit sales also had lower-than-expected sales values. Compare that to service, in which lower units yielded higher
sales (indicating much higher prices than planned). Is this an indication of a new profit opportunity, or a new trend?
This clearly depends on the specifics of your business.
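The units-versus-sales comparison described above amounts to splitting a total sales variance into a volume effect (units sold) and a price effect (average price per unit). A minimal sketch, with hypothetical numbers mirroring the service case of fewer units at higher prices:

```python
def sales_variance_split(planned_units, planned_price, actual_units, actual_price):
    """Split a sales variance into volume and price effects (both in dollars)."""
    volume_effect = (actual_units - planned_units) * planned_price  # effect of selling more/fewer units
    price_effect = (actual_price - planned_price) * actual_units    # effect of higher/lower average price
    total = actual_units * actual_price - planned_units * planned_price
    return volume_effect, price_effect, total

# Hypothetical service figures: 100 units planned at $200, 80 sold at $300.
vol, price, total = sales_variance_split(100, 200.0, 80, 300.0)
print(vol, price, total)   # -4000.0 8000.0 4000.0 -> lower units, higher prices, net positive
```

The two effects always sum to the total variance under this decomposition, which is what lets management ask whether a sales shortfall came from volume, pricing, or both.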
It is often hard to tell what caused differences in costs. If spending schedules aren't met, variance might be caused simply by lower unit volume. Management probably wants to know the results per unit, the actual prices, and detailed feedback on the marketing programs.
Variance analysis is vital to good management. You have to follow up on budgets, mainly through variance analysis, or the budgets are useless.
Although variance analysis can be very complex, the main guide is common sense. In general, going under budget is
a positive variance, and over budget is a negative variance. But the real test of management should be whether or
not the result was good for business.
Read Part 1 of this series.
Read Part 2 of this series.
Break Even Point Analysis: Definition, Explanation, Formula, and Calculation
Equation Method:
The equation method centers on the contribution approach to the income statement. The format of this statement can be expressed in equation form as follows:
[Profit = (Sales − Variable expenses) − Fixed expenses]
Rearranging this equation slightly yields the following equation, which is widely used in cost volume profit (CVP) analysis:
[Sales = Variable expenses + Fixed expenses + Profit]
By definition, the break even point is the level of sales where profit is zero. It can therefore be computed by finding the point where sales just equal the total of the variable expenses plus fixed expenses, with profit equal to zero.
Example:
We can use the following data to calculate the break even point:
• Sales price per unit = $250
• Variable cost per unit = $150
• Total fixed expenses = $35,000
Calculation:
Sales = Variable expenses + Fixed expenses + Profit
$250Q = $150Q + $35,000 + $0
$100Q = $35,000
Q = $35,000 / $100
Q = 350 units
where Q is the number (quantity) of units sold.
The break even point in sales dollars can be computed by multiplying the break even level of unit sales by the selling price per unit:
350 units × $250 per unit = $87,500
A variation of this method uses the contribution margin ratio (CM ratio) instead of the unit contribution margin. The result is the break even point in total sales dollars rather than in total units sold:
Break even point in total sales dollars = Fixed expenses / CM ratio
= $35,000 / 0.40
= $87,500
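The two calculations above can be sketched as a short script. The function names are illustrative; the figures are the example's ($250 price, $150 variable cost, $35,000 fixed expenses):

```python
def break_even_units(price, variable_cost, fixed_expenses):
    """Units needed to break even: fixed expenses / per-unit contribution margin."""
    contribution_margin = price - variable_cost      # $250 - $150 = $100 per unit
    return fixed_expenses / contribution_margin

def break_even_dollars(price, variable_cost, fixed_expenses):
    """Sales dollars needed to break even: fixed expenses / CM ratio."""
    cm_ratio = (price - variable_cost) / price       # $100 / $250 = 0.40
    return fixed_expenses / cm_ratio

units = break_even_units(250, 150, 35_000)
dollars = break_even_dollars(250, 150, 35_000)
print(units)    # 350.0 units
print(dollars)  # -> $87,500, matching 350 units x $250 per unit
```

Both routes agree because dollars = units × price follows algebraically: dividing the per-unit contribution margin by price to get the CM ratio is the same as multiplying the unit answer by price.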