One-Way Analysis of Variance
Researchers use one-way analysis of variance in controlled experiments
to test for significant differences among group means.
This lesson explains when, why, and how to use one-way analysis of variance. The discussion covers
fixed-effects models and random-effects models.
Note: One-way analysis of variance is also known as simple analysis of variance or
as single-factor analysis of variance.
When to Use One-Way ANOVA
You should only use one-way analysis of variance when you have the right data from the right
experimental design.
Experimental Design
One-way analysis of variance should only be used with one type of experimental design: a
completely randomized design with one factor (also known as a
single-factor, independent groups design). This design is distinguished by the following attributes:
- The design has one, and only one, factor (i.e., one independent variable) with two or more levels.
- Treatment groups are defined by a unique combination of non-overlapping factor levels.
- The design has k treatment groups, where k is greater than one.
- Experimental units are randomly selected from a known population.
- Each experimental unit is randomly assigned to one, and only one, treatment group.
- Each experimental unit provides one dependent variable score.
Data Requirements
One-way analysis of variance requires that the dependent variable be measured on an
interval scale or a ratio scale.
In addition, you need to know three things about the experimental design:
- k = Number of treatment groups
- n_{j} = Number of subjects assigned to Group j (i.e., number of subjects that receive treatment j)
- X_{i,j} = The dependent variable score for the ith subject in Group j
For example, the table below shows the critical information that a researcher would need to conduct a one-way analysis of variance,
given a typical single-factor, independent groups design:

Group 1    Group 2    Group 3
X_{1,1}    X_{1,2}    X_{1,3}
X_{2,1}    X_{2,2}    X_{2,3}
X_{3,1}               X_{3,3}
                      X_{4,3}

The design has three treatment groups (k = 3). Nine subjects have
been randomly assigned to the groups: three subjects to Group 1 (n_{1} = 3),
two subjects to Group 2 (n_{2} = 2), and four subjects to Group 3 (n_{3} = 4).
The dependent variable score is X_{1,1} for the first subject in Group 1;
X_{1,2} for the first subject in Group 2;
X_{1,3} for the first subject in Group 3;
X_{2,1} for the second subject in Group 1; and so on.
Assumptions of ANOVA
One-way analysis of variance makes three assumptions about dependent variable scores:
- Independence. The dependent variable score for each experimental unit is independent of the score for any other unit.
- Normality. In the population, dependent variable scores are normally distributed within treatment groups.
- Equality of variance. In the population, the variance of dependent variable scores in each treatment group is equal.
(Equality of variance is also known as homogeneity of variance or homoscedasticity.)
The assumption of independence is the most important assumption. When that assumption is violated, the resulting
statistical tests can be misleading. This assumption is tenable when (a) experimental units are randomly
sampled from the population and (b) sampled units are randomly assigned to treatments.
With respect to the other two assumptions, analysis of variance is more forgiving.
Violations of normality are less problematic when the sample size is large. And violations of the
equal variance assumption are less problematic when the sample size within groups is equal.
Before conducting an analysis of variance, it is best practice to check for violations of normality and homogeneity assumptions.
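As an informal screen for the equal-variance assumption, you can compare the largest and smallest group variances before running the analysis. Here is a minimal sketch in plain Python, using made-up scores; a formal test (such as Levene's test) is preferable when a statistics package is available:

```python
from statistics import variance

# Hypothetical scores for three treatment groups (made-up data).
groups = {
    "Group 1": [4.0, 5.0, 6.0],
    "Group 2": [5.5, 6.5, 7.5],
    "Group 3": [3.0, 6.0, 9.0],
}

# Sample variance of each group's scores.
variances = {name: variance(scores) for name, scores in groups.items()}

# Ratio of largest to smallest group variance (the idea behind
# Hartley's F-max check).
ratio = max(variances.values()) / min(variances.values())
print(variances, round(ratio, 2))
```

A ratio much larger than about 4 is commonly treated as a warning sign that the equal-variance assumption may be violated, especially when group sizes are unequal.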
Why to Use One-Way ANOVA
Researchers use one-way analysis of variance to assess the effect of one independent variable on one dependent variable.
The analysis answers two research questions:
- Is the mean score in any treatment group significantly different from the mean score in another treatment group?
- What is the magnitude of the effect of the independent variable on the dependent variable?
Notice that analysis of variance tells us whether treatment groups differ significantly, but it doesn't tell us
how the groups differ. Understanding how the groups differ requires additional analysis.
How to Use One-Way ANOVA
To implement one-way analysis of variance with a single-factor, independent groups design, a researcher takes the following steps:
- Specify a mathematical model to describe the causal factors that affect the dependent variable.
- Write statistical hypotheses to be tested by experimental data.
- Specify a significance level for a hypothesis test.
- Compute the grand mean and the mean scores for each group.
- Compute sums of squares for each effect in the model.
- Find the degrees of freedom associated with each effect in the model.
- Based on sums of squares and degrees of freedom, compute mean squares for each effect in the model.
- Find the expected value of the mean squares for each effect in the model.
- Compute a test statistic, based on observed mean squares and their expected values.
- Find the P-value for the test statistic.
- Accept or reject the null hypothesis, based on the P-value and the significance level.
- Assess the magnitude of the effect of the independent variable, based on sums of squares.
Whew! Altogether, the steps to implement one-way analysis of variance may look challenging, but
each step is simple and logical. That makes the whole process easy to implement if you just focus
on one step at a time. So let's go over each step, one by one.
Mathematical Model
For every experimental design, there is a mathematical model that accounts for all of the
independent and extraneous variables that affect the dependent variable.
Fixed Effects
For example, here is the
fixed-effects mathematical model for a completely randomized design:
X_{ij} = μ + β_{j} + ε_{i(j)}
where X_{ij} is the dependent variable score for subject i in treatment group j,
μ is the population mean,
β_{j} is the treatment effect in group j;
and ε_{i(j)} is the effect of all other extraneous variables on subject i
in treatment j.
For this model, it is assumed that ε_{i(j)} is normally and independently
distributed with a mean of zero and a variance of σ_{ε}^{2}.
The mean ( μ ) is constant.
Note: The parentheses in ε_{i(j)} indicate that subjects are
nested under treatment groups. When a subject is assigned to only one treatment group,
we say that the subject is nested under a treatment.
Random Effects
The random-effects mathematical model for a completely randomized design
is similar to the fixed-effects mathematical model. It can also be expressed as:
X_{ij} = μ + β_{j} + ε_{i(j)}
Like the fixed-effects mathematical model, the random-effects model also assumes that (1) ε_{i(j)} is normally and independently
distributed with a mean of zero and a variance of σ_{ε}^{2} and (2) the mean ( μ ) is constant.
Here's the difference between the two mathematical models.
With a fixed-effects model, the experimenter includes all treatment levels of interest in the experiment. With a random-effects model,
the experimenter includes a random sample of treatment levels in the experiment. Therefore, in the random-effects mathematical model,
the treatment effect ( β_{j} ) is a random variable with a mean of zero and a variance of σ^{2}_{β}.
Statistical Hypotheses
For fixed-effects models, it is common practice to write statistical hypotheses in terms of the treatment effect β_{j};
for random-effects models, in terms of the treatment variance σ^{2}_{β}.
- Null hypothesis: The null hypothesis states that the independent variable has no effect on
the dependent variable in any treatment group. Thus,
H_{0}: β_{j} = 0 for all j (fixed-effects)
H_{0}: σ^{2}_{β} = 0 (random-effects)
- Alternative hypothesis: The alternative hypothesis states that the independent variable has an effect on
the dependent variable in at least one treatment group. Thus,
H_{1}: β_{j} ≠ 0 for some j (fixed-effects)
H_{1}: σ^{2}_{β} > 0 (random-effects)
If the null hypothesis is true, the mean score in each treatment group should equal the population mean. Thus,
if the null hypothesis is true, sample means in the k
treatment groups should be roughly equal. If the null hypothesis is false, at least one pair of sample means should be unequal.
Significance Level
The significance level (also known as alpha or α) is the probability of rejecting the null hypothesis when it
is actually true. The significance level for an experiment is specified by the experimenter, before data collection
begins. Experimenters often choose significance levels of 0.05 or 0.01.
A significance level of 0.05 means that there is a 5% chance of rejecting the null hypothesis
when it is true. A significance level of 0.01 means that there is a 1% chance of rejecting the null hypothesis
when it is true. The lower the significance level, the more persuasive the evidence needs to be
before an experimenter can reject the null hypothesis.
Mean Scores
Analysis of variance begins by computing a grand mean and group means:
- Grand mean. The grand mean ( X ) is the mean of all observations,
computed as follows:
X = ( 1 / n ) Σ_{j=1}^{k} Σ_{i=1}^{n_j} X_{ij}
- Group means. The mean of group j ( X_{j} ) is the mean of all observations
in group j, computed as follows:
X_{j} = ( 1 / n_{j} ) Σ_{i=1}^{n_j} X_{ij}
In the equations above, n is the total sample size across all groups; and
n_{j} is the sample size in Group j.
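To make the notation concrete, here is a minimal Python sketch that computes the grand mean and the group means for a hypothetical three-group design with unequal sample sizes (the scores are made up):

```python
# Hypothetical scores for a single-factor design with k = 3 groups
# and unequal sample sizes (the numbers are made up).
groups = [
    [2.0, 3.0, 4.0],       # Group 1 (n_1 = 3)
    [6.0, 8.0],            # Group 2 (n_2 = 2)
    [1.0, 2.0, 3.0, 6.0],  # Group 3 (n_3 = 4)
]

n = sum(len(g) for g in groups)                     # total sample size
grand_mean = sum(x for g in groups for x in g) / n  # mean of all scores
group_means = [sum(g) / len(g) for g in groups]     # mean of each group

print(n, round(grand_mean, 4), group_means)
```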
Sums of Squares
A sum of squares is the sum of squared deviations from a mean score. One-way analysis of variance makes use of three sums of squares:
- Between-groups sum of squares. The between-groups sum of squares (SSB) measures variation of group means around the grand mean.
It can be computed from the following formula:
SSB = Σ_{j=1}^{k} Σ_{i=1}^{n_j} ( X_{j} - X )^{2} = Σ_{j=1}^{k} n_{j} ( X_{j} - X )^{2}
- Within-groups sum of squares. The within-groups sum of squares (SSW) measures variation of all scores around their respective group means.
It can be computed from the following formula:
SSW = Σ_{j=1}^{k} Σ_{i=1}^{n_j} ( X_{ij} - X_{j} )^{2}
- Total sum of squares. The total sum of squares (SST) measures variation of all scores around the grand mean.
It can be computed from the following formula:
SST = Σ_{j=1}^{k} Σ_{i=1}^{n_j} ( X_{ij} - X )^{2}
It turns out that the total sum of squares is equal to the between-groups sum of squares plus the within-groups sum of squares, as shown below:
SST = SSB + SSW
As you'll see later on, this relationship will allow us to assess the magnitude of the effect of the independent variable on the dependent variable.
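The three sums of squares, and the additive relationship between them, can be verified numerically. The sketch below uses made-up scores for a hypothetical three-group design:

```python
# Hypothetical scores for a three-group design (made-up numbers).
groups = [
    [2.0, 3.0, 4.0],
    [6.0, 8.0],
    [1.0, 2.0, 3.0, 6.0],
]

n = sum(len(g) for g in groups)
grand_mean = sum(x for g in groups for x in g) / n
group_means = [sum(g) / len(g) for g in groups]

# Between-groups: variation of group means around the grand mean.
ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
# Within-groups: variation of each score around its own group mean.
ssw = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
# Total: variation of each score around the grand mean.
sst = sum((x - grand_mean) ** 2 for g in groups for x in g)

assert abs(sst - (ssb + ssw)) < 1e-9  # SST = SSB + SSW
print(round(ssb, 4), round(ssw, 4), round(sst, 4))
```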
Degrees of Freedom
The term degrees of freedom (df) refers to the number of independent sample points used to compute a
statistic minus the number of parameters estimated from the sample points.
To illustrate what is going on, let's find the degrees of freedom associated with the various sum of squares computations:
- Between-groups degrees of freedom. The between-groups sum of squares formula appears below:
SSB = Σ_{j=1}^{k} n_{j} ( X_{j} - X )^{2}
Here, the formula uses k independent sample points, the sample means X_{j}.
And it uses one parameter estimate, the grand mean X, which was estimated from the sample points. So, the
between-groups sum of squares has k - 1 degrees of freedom.
- Within-groups degrees of freedom. The within-groups sum of squares formula appears below:
SSW = Σ_{j=1}^{k} Σ_{i=1}^{n_j} ( X_{ij} - X_{j} )^{2}
Here, the formula uses n independent sample points, the individual subject scores X_{ij}.
And it uses k parameter estimates, the group means X_{j}, which were estimated from the sample points. So, the
within-groups sum of squares has n - k degrees of freedom (where n is total sample size across all groups).
- Total degrees of freedom. The total sum of squares formula appears below:
SST = Σ_{j=1}^{k} Σ_{i=1}^{n_j} ( X_{ij} - X )^{2}
Here, the formula uses n independent sample points, the individual subject scores X_{ij}.
And it uses one parameter estimate, the grand mean X, which was estimated from the sample points. So, the
total sum of squares has n - 1 degrees of freedom (where n is total sample size across all groups).
The degrees of freedom for each sum of squares are summarized in the table below:

Sum of squares     Degrees of freedom
Between-groups     k - 1
Within-groups      n - k
Total              n - 1
Notice that the degrees of freedom have the same additive relationship as the sums of squares. The degrees of freedom
for total sum of squares (df_{TOT}) is equal to the degrees of freedom for between-groups sum of squares (df_{BG}) plus
the degrees of freedom for within-groups sum of squares (df_{WG}). That is,
df_{TOT} = df_{BG} + df_{WG}
Mean Squares
A mean square is an estimate of population variance. It is computed by dividing
a sum of squares (SS) by its corresponding degrees of freedom (df), as shown below:
MS = SS / df
To conduct a one-way analysis of variance, we are interested in two mean squares:
- Within-groups mean square. The within-groups mean square ( MS_{WG} ) refers to
variation due to differences among experimental units within the same group. It can be computed as follows:
MS_{WG} = SSW / df_{WG}
- Between-groups mean square. The between-groups mean square ( MS_{BG} ) refers to
variation due to differences among experimental units within the same group
plus variation due to treatment effects. It can be computed as follows:
MS_{BG} = SSB / df_{BG}
Expected Value
The expected value
of a mean square is the average value of the mean square over a large number of experiments.
Statisticians have derived formulas for the expected value of
the within-groups mean square ( MS_{WG} ) and for the expected value of the between-groups mean square
( MS_{BG} ). For one-way analysis of variance, the
expected value formulas are:
Fixed- and Random-Effects:
E( MS_{WG} ) = σ_{ε}^{2}
Fixed-Effects:
E( MS_{BG} ) = σ_{ε}^{2} + [ Σ_{j=1}^{k} n_{j} β_{j}^{2} ] / ( k - 1 )
Random-Effects:
E( MS_{BG} ) = σ_{ε}^{2} + nσ_{β}^{2}
In the equations above, E( MS_{WG} ) is the expected value of the within-groups mean square;
E( MS_{BG} ) is the expected value of the between-groups mean square;
n is the sample size in each group (assuming equal group sizes); k is the number of treatment groups;
β_{j} is the treatment effect in Group j; σ_{ε}^{2}
is the variance attributable to everything except the treatment effect (i.e., all the extraneous variables); and
σ_{β}^{2} is the variance due to random selection of treatment levels.
Notice that MS_{BG} should equal MS_{WG}
when the variation due to treatment effects
( β_{j} for fixed effects and σ_{β}^{2} for random effects)
is zero (i.e., when the independent variable does not affect the
dependent variable). And MS_{BG} should be bigger than MS_{WG}
when the variation due to treatment effects is not zero (i.e., when the independent variable does affect the
dependent variable).
Conclusion: By examining the relative size of the mean squares, we can make a judgment about whether an
independent variable affects a dependent variable.
Test Statistic
Suppose we use the mean squares to
define a test statistic F as follows:
F(v_{1}, v_{2}) = MS_{BG} / MS_{WG}
where MS_{BG} is the between-groups mean square,
MS_{WG} is the within-groups mean square, v_{1} is the degrees of freedom for MS_{BG},
and v_{2} is the degrees of freedom for MS_{WG}.
Defined in this way, the F ratio measures the size of MS_{BG} relative to MS_{WG}.
The F ratio is a convenient measure that we can use to test the null hypothesis. Here's how:
- When the F ratio is close to one, MS_{BG} is approximately equal to MS_{WG}.
This indicates that the independent variable did not affect the dependent variable, so we cannot
reject the null hypothesis.
- When the F ratio is significantly greater than one, MS_{BG} is bigger than MS_{WG}.
This indicates that the independent variable did affect the dependent variable, so we must reject the null hypothesis.
What does it mean for the F ratio to be significantly greater than one?
To answer that question, we need to talk about the P-value.
Note: With a completely randomized design, the test statistic F is computed in the same way for
fixed-effects and for random-effects models. With more complex designs (i.e., designs with more than one factor),
test statistics may be computed differently for fixed-effects models than for random-effects models.
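Putting the last few steps together, the mean squares and the F ratio follow directly from the sums of squares and degrees of freedom. Here is a minimal sketch, using made-up sums of squares for a small three-group design:

```python
# Hypothetical sums of squares for a design with k = 3 groups
# and n = 9 total subjects (made-up numbers).
ssb, ssw = 24.8889, 18.0
k, n = 3, 9

df_bg, df_wg = k - 1, n - k   # between- and within-groups df
ms_bg = ssb / df_bg           # between-groups mean square
ms_wg = ssw / df_wg           # within-groups mean square
f_ratio = ms_bg / ms_wg       # F(df_bg, df_wg)

print(df_bg, df_wg, round(f_ratio, 3))
```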
P-Value
In an experiment, a P-value is the probability of obtaining a result more extreme than the observed experimental outcome,
assuming the null hypothesis is true.
With analysis of variance, the F ratio is the observed experimental outcome that we are interested in.
So, the P-value would be the probability that an F statistic would be more extreme (i.e., bigger) than the
actual F ratio computed from experimental data.
How does an experimenter attach a probability to an observed F ratio?
Luckily, the F ratio is a random variable
that has an F distribution.
Therefore, we can use an F table or an online calculator to find the probability that
an F statistic will be bigger than the actual F ratio observed in the experiment.
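In practice, you would look up this probability in an F table, with statistical software, or with an online calculator. For illustration, the survival function P(F > f) can be computed from the regularized incomplete beta function; the plain-Python sketch below follows the classic continued-fraction method (as described in Numerical Recipes):

```python
from math import lgamma, exp, log

def _betacf(a, b, x, eps=1e-12, max_iter=200):
    """Continued-fraction evaluation for the incomplete beta function
    (modified Lentz's method)."""
    tiny = 1e-300
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < tiny:
        d = tiny
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        # Even step of the continued fraction.
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < tiny:
            d = tiny
        c = 1.0 + aa / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        h *= d * c
        # Odd step of the continued fraction.
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < tiny:
            d = tiny
        c = 1.0 + aa / c
        if abs(c) < tiny:
            c = tiny
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def betainc(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_bt = (lgamma(a + b) - lgamma(a) - lgamma(b)
             + a * log(x) + b * log(1.0 - x))
    bt = exp(ln_bt)
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def f_sf(f, df1, df2):
    """P(F > f) for an F distribution with df1 and df2 degrees of freedom."""
    return betainc(df2 / 2.0, df1 / 2.0, df2 / (df2 + df1 * f))

# Sanity check: for df1 = df2 = 2, the F distribution has CDF f / (1 + f),
# so P(F > 1) is exactly 0.5.
print(round(f_sf(1.0, 2, 2), 6))
```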
F Distribution Calculator
To find the P-value associated with an observed F ratio, use Stat Trek's free
F distribution calculator.
For an example that shows how to find the P-value for an F ratio, see Problem 2
at the bottom of this page.
Hypothesis Test
Recall that the experimenter specified a significance level early on, before the first data point was collected.
Once you know the significance level and the P-value, the hypothesis test is routine.
Here's the decision rule for accepting or rejecting the null hypothesis:
- If the P-value is bigger than the significance level, accept the null hypothesis.
- If the P-value is equal to or smaller than the significance level, reject the null hypothesis.
A "big" P-value indicates that (1) none of the k treatment means
( X_{j} ) were significantly different, so
(2) the independent variable did not have a statistically significant effect on the dependent variable.
A "small" P-value indicates that (1) at least one treatment mean differed significantly from another treatment mean, so
(2) the independent variable had a statistically significant effect on the dependent variable.
Magnitude of Effect
The hypothesis test tells us whether the independent variable in our experiment has a statistically
significant effect on the dependent variable, but it does not address the magnitude (strength)
of the effect. Here's the issue:
- When the sample size is large, you may find that even small differences in treatment means are
statistically significant.
- When the sample size is small, you may find that even big differences in treatment means are
not statistically significant.
With this in mind, it is customary to supplement analysis of variance with an appropriate measure
of effect size. Eta squared (η^{2}) is one such measure. Eta squared is the proportion of variance in the
dependent variable that is explained by a treatment effect. The eta squared formula
for one-way analysis of variance is:
η^{2} = SSB / SST
where SSB is the between-groups sum of squares and SST is the total sum of squares.
ANOVA Summary Table
It is traditional to summarize ANOVA results in an analysis of variance table. Here,
filled with hypothetical data, is an analysis of variance table for a one-way analysis of variance.

Analysis of Variance Table

Source    SS     df           MS    F     P
BG        230    k - 1 = 10   23    2.3   0.09
WG        220    N - k = 22   10
Total     450    N - 1 = 32

This is an ANOVA table for a single-factor, independent groups design. The experiment used 11 treatment groups, so k equals 11.
And three subjects were assigned to each treatment group, so N equals 33. The table shows critical outputs for
between-group (BG) treatment effects and within-group (WG) treatment effects.
Many of the table entries are derived from the sum of squares (SS) and degrees of freedom (df), based on the following formulas:
SS_{TOTAL} = SS_{BG} + SS_{WG} = 230 + 220 = 450
MS_{BG} = SS_{BG} / df_{BG} = 230/10 = 23
MS_{WG} = SS_{WG} / df_{WG} = 220/22 = 10
F(v_{1}, v_{2}) = MS_{BG} / MS_{WG} = 23/10 = 2.3
where MS_{BG} is the between-groups mean square,
MS_{WG} is the within-groups mean square, v_{1} and df_{BG} are the degrees of freedom for MS_{BG},
v_{2} and df_{WG} are the degrees of freedom for MS_{WG}, and the
F ratio is F(v_{1}, v_{2}).
An ANOVA table provides all the information an experimenter needs to (1) test hypotheses and (2) assess the magnitude of treatment effects.
Hypothesis Tests
The P-value (shown in the last column of the ANOVA table) is the probability that an F statistic would be more extreme (bigger) than the
F ratio shown in the table, assuming the null hypothesis is true. When the P-value is bigger
than the significance level, we accept the null hypothesis; when it is smaller, we reject it.
Suppose the significance level for this experiment was 0.05. Based on the table entries,
can we reject the null hypothesis? From the ANOVA table, we see that the P-value
is 0.09. Since the P-value is bigger than the significance level (0.05),
we cannot reject the null hypothesis.
Magnitude of Effects
Since the P-value in the ANOVA table was bigger than the significance level, the treatment effect in this
experiment was not statistically significant. Does that mean the treatment effect was small? Not
necessarily.
To assess the strength of the treatment effect, an experimenter might compute eta squared (η^{2}). The
computation is easy, using sums of squares entries from the ANOVA table, as shown below:
η^{2} = SSB / SST = 230 / 450 = 0.51
where SSB is the between-groups sum of squares and SST is the total sum of squares.
For this experiment, eta squared is 0.51. This means that 51% of the variance in the dependent variable
can be explained by the effect of the independent variable.
Even though the treatment effect was not statistically significant, it was not
unimportant, since the independent variable accounted for more than half the
variance in the dependent variable. The moral here is that a hypothesis test by itself may not tell the whole
story. It also pays to look at the magnitude of an effect.
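The arithmetic behind the table entries is easy to verify. The sketch below recomputes the mean squares, F ratio, and eta squared from the hypothetical sums of squares in the ANOVA table above:

```python
# Entries from the hypothetical ANOVA table above.
ss_bg, ss_wg = 230.0, 220.0  # sums of squares
k, n_total = 11, 33          # 11 groups, 3 subjects per group

df_bg, df_wg = k - 1, n_total - k            # 10 and 22
ms_bg, ms_wg = ss_bg / df_bg, ss_wg / df_wg  # 23.0 and 10.0
f_ratio = ms_bg / ms_wg                      # 2.3
ss_total = ss_bg + ss_wg                     # 450.0
eta_squared = ss_bg / ss_total               # proportion of explained variance

print(f_ratio, round(eta_squared, 2))
```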
Advantages and Disadvantages
One-way analysis of variance with a single-factor,
independent groups design has advantages and disadvantages.
Advantages include the following:
- The design layout is simple: one factor with k factor levels.
- Data analysis is easier with this design than with other designs.
- Computational procedures are identical for fixed-effects and random-effects models.
- The design does not require equal sample sizes for treatment groups.
- The design requires subjects to participate in only one treatment group.
Disadvantages include the following:
- The design does not permit repeated measures.
- The design can test the effect of only one independent variable.