Two-Factor Analysis of Variance: Example
In this lesson, we use analysis of variance to analyze results from a balanced, two-factor, full-factorial experiment; and we show how to interpret the results of our analysis. We'll analyze results for a fixed-effects model, a random-effects model, and a mixed model.
Note: Computations for analysis of variance are usually handled by a software package. For this example, however, we will do the computations "manually", since the gory details have educational value.
Problem Statement
As part of a full factorial experiment, a researcher tests the effect of Factor A and Factor B on a continuous variable. The design calls for two levels of Factor A and three levels of Factor B: six treatment groups in all. The researcher selects 30 subjects randomly from a larger population and randomly assigns five subjects to each treatment group.
The researcher collects one dependent variable score from each subject, as shown in the table below:
Table 1. Dependent Variable Scores
A_{1}  A_{2}

B_{1}  B_{2}  B_{3}  B_{1}  B_{2}  B_{3}
1  1  2  2  2  3
2  2  3  3  3  4
3  4  4  4  5  5
2  3  3  3  2  4
1  1  2  2  2  3
The treatment levels represent all the levels of interest to the experimenter, so this experiment uses a fixed-effects model to select treatment levels for study.
In conducting this experiment, the researcher has two research questions:
 Do the independent variables have a significant effect on the dependent variable?
 How strong is the effect of independent variables on the dependent variable?
To answer these questions, the researcher uses analysis of variance.
Is ANOVA the Right Technique?
Before you crunch the first number in analysis of variance, you must be sure that analysis of variance is the correct technique. That means you need to ask two questions:
 Is the experimental design compatible with analysis of variance?
 Does the dataset satisfy the critical assumptions required for two-factor analysis of variance?
Let's address both of those questions.
Experimental Design
As we discussed in the previous lesson (see Analysis With Full Factorial Experiments), analysis of variance is appropriate with a balanced, completely randomized, full factorial experiment; so we can check the experimental design box.
Critical Assumptions
We also learned in the previous lesson that analysis of variance with full factorial experiments makes three critical assumptions:
 Independence. The dependent variable score for each experimental unit is independent of the score for any other unit.
 Normality. In the population, dependent variable scores are normally distributed within treatment groups.
 Equality of variance. In the population, the variance of dependent variable scores in each treatment group is equal. (Equality of variance is also known as homogeneity of variance or homoscedasticity.)
Therefore, before we implement analysis of variance with this study, we need to make sure our dataset is consistent with all three assumptions.
Independence of Scores
The assumption of independence is the most important assumption. When that assumption is violated, the resulting statistical tests can be misleading.
The independence assumption is satisfied by the design of the study, which features random selection of subjects and random assignment to treatment groups. Randomization tends to distribute effects of extraneous variables evenly across groups.
Normal Distributions in Groups
Violations of normality can be a problem when sample size is small, as it is in this study. Therefore, it is important to be on the lookout for any indication of non-normality.
There are many ways to check for normality. On this website, we describe three at: How to Test for Normality: Three Simple Tests. Given the small sample size, our best option for testing normality is to look at the following descriptive statistics:
 Central tendency. The mean and the median are summary measures used to describe central tendency: the most "typical" value in a set of values. With a normal distribution, the mean is equal to the median.
 Skewness. Skewness is a measure of the asymmetry of a probability distribution. If observations are equally distributed around the mean, the skewness value is zero; otherwise, the skewness value is positive or negative. As a rule of thumb, skewness between -2 and +2 is consistent with a normal distribution.
 Kurtosis. Kurtosis is a measure of whether observations cluster around the mean of the distribution or in the tails of the distribution. The normal distribution has a kurtosis value of zero. As a rule of thumb, kurtosis between -2 and +2 is consistent with a normal distribution.
The table below shows the mean, median, skewness, and kurtosis for each group from our study.
Table 2. Descriptive Statistics
A_{1}B_{1}  A_{1}B_{2}  A_{1}B_{3}  A_{2}B_{1}  A_{2}B_{2}  A_{2}B_{3}

Mean  1.8  2.2  2.8  2.8  3.2  3.8
Median  2  2  3  3  3  4
Range  2  3  2  2  3  2
Skew  0.51  0.54  0.51  0.51  0.54  0.51
Kurt  -0.61  -1.49  -0.61  -0.61  -1.48  -0.61
In all six groups, the difference between the mean and median looks small (relative to the range). And skewness and kurtosis measures are consistent with a normal distribution (i.e., between -2 and +2). These are crude tests, but they provide some confidence for the assumption of normality in each group.
Note: With Excel, you can easily compute the descriptive statistics for treatment groups. To see how, go to: How to Test for Normality: Example 1.
Homogeneity of Variance
When the normality assumption is satisfied, you can use Hartley's Fmax test to test for homogeneity of variance. Here's how to implement the test:
 Step 1. Compute the sample variance ( s^{2}_{j} ) for each treatment group.
s^{2}_{j} = Σ^{n_j}_{i=1} ( X_{ i, j} - X_{ j} )^{2} / ( n_{ j} - 1 )

where X_{ i, j} is the score for observation i in Group j, X_{ j} is the mean of Group j, and n_{ j} is the number of observations in Group j.
Here is the variance ( s^{2}_{j} ) for each group in the study.
Table 3. Sample Variance
A_{1}  A_{2}

B_{1}  B_{2}  B_{3}  B_{1}  B_{2}  B_{3}
0.7  1.7  0.7  0.7  1.7  0.7

 Step 2. Compute an F ratio from the following formula:
F_{RATIO} = s^{2}_{MAX} / s^{2}_{MIN}
F_{RATIO} = 1.7 / 0.7
F_{RATIO} = 2.43
where s^{2}_{MAX} is the largest group variance, and s^{2}_{MIN} is the smallest group variance.
 Step 3. Compute degrees of freedom ( df ).
df = n - 1
df = 5 - 1
df = 4
where n is the largest sample size in any group.
 Step 4. Based on the degrees of freedom ( 4 ) and the number of groups ( 6 ),
Find the critical F value from the Table of Critical F Values for Hartley's Fmax Test.
From the table, we see that the critical Fmax value is 29.5.
Note: The critical F values in the table are based on a significance level of 0.05.
 Step 5. Compare the observed F ratio computed in Step 2 to the critical
F value recovered from the Fmax table in Step 4. If the F ratio is smaller than the Fmax table value,
the variances are homogeneous. Otherwise, the variances are heterogeneous.
Here, the F ratio (2.43) is smaller than the Fmax value (29.5), so we conclude that the variances are homogeneous.
Note: Other tests, such as Bartlett's test, can also test for homogeneity of variance.
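Since the computations in this lesson are done by hand, a short script is a useful cross-check. Here is a minimal Python sketch of Hartley's Fmax test for this dataset (the group scores come from Table 1; the critical value 29.5 is read from the Fmax table, not computed):

```python
# Hartley's Fmax test for homogeneity of variance.
# Each entry holds the five dependent variable scores for one treatment group.
groups = {
    "A1B1": [1, 2, 3, 2, 1],
    "A1B2": [1, 2, 4, 3, 1],
    "A1B3": [2, 3, 4, 3, 2],
    "A2B1": [2, 3, 4, 3, 2],
    "A2B2": [2, 3, 5, 2, 2],
    "A2B3": [3, 4, 5, 4, 3],
}

def sample_variance(scores):
    """Unbiased sample variance: sum of squared deviations / (n - 1)."""
    n = len(scores)
    mean = sum(scores) / n
    return sum((x - mean) ** 2 for x in scores) / (n - 1)

variances = {g: sample_variance(x) for g, x in groups.items()}
f_ratio = max(variances.values()) / min(variances.values())

df = max(len(x) for x in groups.values()) - 1   # largest group n, minus 1
F_MAX_CRITICAL = 29.5   # from the Fmax table: df = 4, 6 groups, alpha = 0.05

print(round(f_ratio, 2))         # 2.43
print(f_ratio < F_MAX_CRITICAL)  # True -> variances are homogeneous
```

Because the observed ratio (2.43) falls well below the tabled critical value (29.5), the script reaches the same conclusion as the hand computation.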
Analysis of Variance
Having confirmed that the critical assumptions are tenable, we can proceed with analysis of variance. That means taking the following steps:
 Specify a mathematical model to describe how main effects and interaction effects influence the dependent variable.
 Write statistical hypotheses to be tested by experimental data.
 Specify a significance level for a hypothesis test.
 Compute the grand mean and the mean scores for each treatment group.
 Compute sums of squares for each effect in the model.
 Find the degrees of freedom associated with each effect in the model.
 Based on sums of squares and degrees of freedom, compute mean squares for each effect in the model.
 Find the expected value of the mean squares for each effect in the model.
 Compute a test statistic for each effect, based on observed mean squares and their expected values.
 Find the P-value for each test statistic.
 Accept or reject the null hypothesis for each effect, based on the P value and the significance level.
 Assess the magnitude of effect, based on sums of squares.
Now, let's execute each step, one by one, with our sample experiment.
Mathematical Model
For every experimental design, there is a mathematical model that accounts for all of the independent and extraneous variables that affect the dependent variable.
For example, here is the fixed-effects mathematical model for a two-factor, completely randomized, full-factorial experiment:
X_{ i j m} = μ + α_{ i} + β_{ j} + αβ_{ i j} + ε_{ m ( ij )}
where X_{ i j m} is the dependent variable score for subject m in treatment group ij, μ is the population mean, α_{ i} is the main effect of Factor A at level i; β_{ j} is the main effect of Factor B at level j; αβ_{ i j} is the interaction effect of Factor A at level i and Factor B at level j; and ε_{ m ( ij )} is the effect of all other extraneous variables on subject m in treatment group ij.
For this model, it is assumed that ε_{ m ( ij )} is normally and independently distributed with a mean of zero and a variance of σ_{ε}^{2}. The mean ( μ ) is constant.
Note: The parentheses in ε_{ m ( ij )} indicate that subjects are nested under treatment groups. When a subject is assigned to only one treatment group, we say that the subject is nested under a treatment.
Statistical Hypotheses
With a full factorial experiment, it is possible to test all main effects and all interaction effects. For example, here are the null hypotheses (H_{0}) and alternative hypotheses (H_{1}) for each effect in a twofactor full factorial experiment.
H_{0}: α_{ i} = 0 for all i  H_{0}: β_{ j} = 0 for all j  H_{0}: αβ_{ ij} = 0 for all ij 
H_{1}: α_{ i} ≠ 0 for some i  H_{1}: β_{ j} ≠ 0 for some j  H_{1}: αβ_{ ij} ≠ 0 for some ij 
Significance Level
The significance level (also known as alpha or α) is the probability of rejecting the null hypothesis when it is actually true. The significance level for an experiment is specified by the experimenter, before data collection begins.
Experimenters often choose significance levels of 0.05 or 0.01. For this experiment, let's use a significance level of 0.05.
Mean Scores
Analysis of variance for a full factorial experiment begins by computing a grand mean, marginal means, and group means. Here are computations for the various means, based on dependent variable scores from Table 1:
 Grand mean. The grand mean (X) is the mean of all observations,
computed as follows:
N = Σ^{p}_{i=1} Σ^{q}_{j=1} n = pqn

X = ( 1 / N ) Σ^{p}_{i=1} Σ^{q}_{j=1} Σ^{n}_{m=1} X_{ i j m}
X = 2.7
 Marginal means for Factor A. The mean for level i of Factor A
( X_{ i . .} ) is computed as follows:
X_{ i . .} = [ 1 / ( qn ) ] Σ^{q}_{j=1} Σ^{n}_{m=1} X_{ i j m}
X_{ 1 . .} = 2.27
X_{ 2 . .} = 3.13
 Marginal means for Factor B. The mean for level j of Factor B
( X_{ . j .} ) is computed as follows:
X_{ . j .} = [ 1 / ( pn ) ] Σ^{p}_{i=1} Σ^{n}_{m=1} X_{ i j m}
X_{ . 1 .} = 2.3
X_{ . 2 .} = 2.5
X_{ . 3 .} = 3.3
 Group means. The mean of all observations in group i j
( X_{ i j .} ) is computed as follows:
X_{ i j .} = ( 1 / n ) Σ^{n}_{m=1} X_{ i j m}
X_{ 1 1 .} = 1.8
X_{ 1 2 .} = 2.2
X_{ 1 3 .} = 2.8
X_{ 2 1 .} = 2.8
X_{ 2 2 .} = 2.8
X_{ 2 3 .} = 3.8
In the equations above, N is the total sample size across all treatment groups; n is the sample size in a single treatment group, p is the number of levels of Factor A, and q is the number of levels of Factor B.
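As a cross-check on the hand computations, the grand, marginal, and group means can be reproduced with a short Python sketch (the group scores come from Table 1):

```python
# Mean scores for the two-factor example.
# Each key is (Factor A level, Factor B level); each value holds five scores.
scores = {
    ("A1", "B1"): [1, 2, 3, 2, 1],
    ("A1", "B2"): [1, 2, 4, 3, 1],
    ("A1", "B3"): [2, 3, 4, 3, 2],
    ("A2", "B1"): [2, 3, 4, 3, 2],
    ("A2", "B2"): [2, 3, 5, 2, 2],
    ("A2", "B3"): [3, 4, 5, 4, 3],
}

all_scores = [x for group in scores.values() for x in group]
grand_mean = sum(all_scores) / len(all_scores)        # 2.7

def marginal_mean(factor, level):
    """Mean of every observation at one level of one factor (0 = A, 1 = B)."""
    pooled = [x for key, xs in scores.items() for x in xs if key[factor] == level]
    return sum(pooled) / len(pooled)

a_means = {a: marginal_mean(0, a) for a in ("A1", "A2")}        # 2.27, 3.13
b_means = {b: marginal_mean(1, b) for b in ("B1", "B2", "B3")}  # 2.3, 2.5, 3.3
group_means = {k: sum(v) / len(v) for k, v in scores.items()}
```

The computed values match the grand mean, marginal means, and group means listed above.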
Sums of Squares
A sum of squares is the sum of squared deviations from a mean score. Two-way analysis of variance makes use of five sums of squares. Below, we compute all five sums of squares:
 Factor A sum of squares. The sum of squares for Factor A (SSA) measures variation of the marginal means
of Factor A ( X_{ i} )
around the grand mean ( X ). It can be computed from the following formula:
SSA = nq Σ^{p}_{i=1} ( X_{ i} - X )^{2}
SSA = 5.633
 Factor B sum of squares. The sum of squares for Factor B (SSB) measures variation of the marginal means
of Factor B ( X_{ j} )
around the grand mean ( X ). It can be computed from the following formula:
SSB = np Σ^{q}_{j=1} ( X_{ j} - X )^{2}
SSB = 5.600
 Interaction sum of squares. The sum of squares for the interaction between Factor A and Factor B (SSAB)
can be computed from the following formula:
SSAB = n Σ^{p}_{i=1} Σ^{q}_{j=1} ( X_{ i j} - X_{ i} - X_{ j} + X )^{2}
SSAB = 0.267
 Withingroups sum of squares. The withingroups sum of squares (SSW) measures variation of all scores
( X_{ i j m} ) around their respective group means
( X _{i j} ).
It can be computed from the following formula:
SSW = Σ^{p}_{i=1} Σ^{q}_{j=1} Σ^{n}_{m=1} ( X_{ i j m} - X_{ i j} )^{2}
SSW = 24.80
 Total sum of squares. The total sum of squares (SST) measures variation of all scores
( X_{ i j m} ) around the grand mean
( X ).
It can be computed from the following formula:
SST = Σ^{p}_{i=1} Σ^{q}_{j=1} Σ^{n}_{m=1} ( X_{ i j m} - X )^{2}
SST = 36.30
In the formulas above, n is the sample size in each treatment group, p is the number of levels of Factor A, and q is the number of levels of Factor B.
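The five sums of squares can likewise be verified with a few lines of Python (group scores from Table 1; n = 5, p = 2, q = 3):

```python
# Sums of squares for the two-factor example.
scores = {
    ("A1", "B1"): [1, 2, 3, 2, 1],
    ("A1", "B2"): [1, 2, 4, 3, 1],
    ("A1", "B3"): [2, 3, 4, 3, 2],
    ("A2", "B1"): [2, 3, 4, 3, 2],
    ("A2", "B2"): [2, 3, 5, 2, 2],
    ("A2", "B3"): [3, 4, 5, 4, 3],
}
n, p, q = 5, 2, 3

def mean(xs):
    return sum(xs) / len(xs)

grand = mean([x for xs in scores.values() for x in xs])
a_mean = {a: mean([x for (ai, _), xs in scores.items() if ai == a for x in xs])
          for a in ("A1", "A2")}
b_mean = {b: mean([x for (_, bj), xs in scores.items() if bj == b for x in xs])
          for b in ("B1", "B2", "B3")}
g_mean = {k: mean(v) for k, v in scores.items()}

# Apply each sum-of-squares formula from the text.
ssa = n * q * sum((a_mean[a] - grand) ** 2 for a in a_mean)               # 5.633
ssb = n * p * sum((b_mean[b] - grand) ** 2 for b in b_mean)               # 5.600
ssab = n * sum((g_mean[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
               for (a, b) in scores)                                       # 0.267
ssw = sum((x - g_mean[k]) ** 2 for k, xs in scores.items() for x in xs)    # 24.80
sst = sum((x - grand) ** 2 for xs in scores.values() for x in xs)          # 36.30
```

Note that SSA + SSB + SSAB + SSW = SST, a useful arithmetic check on the computations.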
Degrees of Freedom
The term degrees of freedom (df) refers to the number of independent sample points used to compute a statistic minus the number of parameters estimated from the sample points.
The degrees of freedom used to compute the various sums of squares for a balanced, two-way factorial experiment are shown in the table below:
Sum of squares  Degrees of freedom 

Factor A  p - 1 = 2 - 1 = 1 
Factor B  q - 1 = 3 - 1 = 2 
AB interaction  ( p - 1 )( q - 1 ) = 1 * 2 = 2 
Within groups  pq( n - 1 ) = 2 * 3 * 4 = 24 
Total  npq - 1 = 2 * 3 * 5 - 1 = 29 
Mean Squares
A mean square is an estimate of population variance. It is computed by dividing a sum of squares (SS) by its corresponding degrees of freedom (df), as shown below:
MS = SS / df
To conduct analysis of variance with a two-factor, full factorial experiment, we are interested in four mean squares:
 Factor A mean square. The Factor A mean square ( MS_{A} ) measures
variation due to the main effect of Factor A. It can be computed as follows:
MS_{A} = SSA / df_{A} = 5.63 / 1 = 5.63
 Factor B mean square. The Factor B mean square ( MS_{B} ) measures
variation due to the main effect of Factor B. It can be computed as follows:
MS_{B} = SSB / df_{B} = 5.6 / 2 = 2.8
 Interaction mean square. The mean square for the AB interaction measures variation due to
the AB interaction effect. It can be computed as follows:
MS_{AB} = SSAB / df_{AB} = 0.267 / 2 = 0.1335
 Within groups mean square. The withingroups mean square ( MS_{WG} ) measures
variation due to differences among experimental units within the same treatment group. It can be computed as follows:
MS_{WG} = SSW / df_{WG} = 24.8 / 24 = 1.03
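The four divisions above are easy to verify in code. A minimal sketch, using the sums of squares and degrees of freedom computed earlier:

```python
# Mean squares for the fixed-effects analysis: each sum of squares
# divided by its degrees of freedom.
ssa, ssb, ssab, ssw = 5.633, 5.600, 0.267, 24.8
df_a, df_b, df_ab, df_wg = 1, 2, 2, 24

ms_a = ssa / df_a      # 5.633
ms_b = ssb / df_b      # 2.8
ms_ab = ssab / df_ab   # 0.1335
ms_wg = ssw / df_wg    # about 1.033
```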
Expected Value
The expected value of a mean square is the average value of the mean square over a large number of experiments.
Statisticians have derived formulas for the expected value of mean squares for balanced, two-factor, full factorial experiments. The expected values differ, depending on whether the experiment uses all fixed factors, all random factors, or a mix of fixed and random factors. The table below shows the expected value of mean squares for a balanced, two-factor, full factorial experiment when both factors are fixed:
Mean square  Expected value 

MS_{A}  σ^{2}_{WG} + nqσ^{2}_{A} 
MS_{B}  σ^{2}_{WG} + npσ^{2}_{B} 
MS_{AB}  σ^{2}_{WG} + nσ^{2}_{AB} 
MS_{WG}  σ^{2}_{WG} 
In the table above, n is the sample size in each treatment group, p is the number of levels for Factor A, q is the number of levels for Factor B, σ^{2}_{A} is the variance of main effects due to Factor A, σ^{2}_{B} is the variance of main effects due to Factor B, σ^{2}_{AB} is the variance due to interaction effects, and σ^{2}_{WG} is the variance due to extraneous variables (also known as variance due to experimental error).
Test Statistics
Suppose we want to test the significance of a main effect or the interaction effect in a two-factor, full factorial experiment. We can use the mean squares to define a test statistic F as follows:
F(v_{1}, v_{2}) = MS_{EFFECT 1} / MS_{EFFECT 2}
where MS_{EFFECT 1} is the mean square for the effect we want to test; MS_{EFFECT 2} is an appropriate mean square, based on the expected value of mean squares; v_{1} is the degrees of freedom for MS_{EFFECT 1} ; and v_{2} is the degrees of freedom for MS_{EFFECT 2}.
The expected value of the numerator of the F ratio should be identical to the expected value of the denominator, except for one thing: The numerator should have an extra term that includes the effect being tested.
Fixed-Effects Model
The table below shows how to construct F ratios when an experiment uses a fixed-effects model.
Table 1. Fixed-Effects Model
Effect  F ratio  v_{1}  v_{2}

A  MS_{A} / MS_{WG}  p - 1  pq( n - 1 )
B  MS_{B} / MS_{WG}  q - 1  pq( n - 1 )
AB  MS_{AB} / MS_{WG}  ( p - 1 )( q - 1 )  pq( n - 1 )
Using formulas from the table above, we can compute an F ratio for each treatment effect, as shown below:
F_{A} = F(v_{1}, v_{2}) = F(1, 24) = MS_{A} / MS_{WG} = 5.63 / 1.03 = 5.47
F_{B} = F(v_{1}, v_{2}) = F(2, 24) = MS_{B} / MS_{WG} = 2.8 / 1.03 = 2.72
F_{AB} = F(v_{1}, v_{2}) = F(2, 24) = MS_{AB} / MS_{WG} = 0.1335 / 1.03 = 0.13
How to Interpret F Ratios
For each F ratio in the table above, notice that the numerator should equal the denominator when the variation due to the source effect ( σ^{2}_{ SOURCE} ) is zero (i.e., when the source does not affect the dependent variable). And the numerator should be bigger than the denominator when the variation due to the source effect is not zero (i.e., when the source does affect the dependent variable).
Defined in this way, each F ratio is a convenient measure that we can use to test the null hypothesis about the effect of a source (Factor A, Factor B, or the AB interaction) on the dependent variable. Here's how to conduct the test:
 When the F ratio is close to one, the numerator of the F ratio is approximately equal to the denominator. This indicates that the source did not affect the dependent variable, so we cannot reject the null hypothesis.
 When the F ratio is significantly greater than one, the numerator is bigger than the denominator. This indicates that the source did affect the dependent variable, so we must reject the null hypothesis.
What does it mean for the F ratio to be significantly greater than one? To answer that question, we need to talk about the P-value.
P-Value
In an experiment, a P-value is the probability of obtaining a result more extreme than the observed experimental outcome, assuming the null hypothesis is true.
With analysis of variance, the F ratio is the observed experimental outcome that we are interested in. So, the P-value would be the probability that an F ratio would be more extreme (i.e., bigger) than the actual F ratio computed from experimental data.
The F ratios defined for analysis of variance follow the F distribution. Therefore, we can use Stat Trek's F Distribution Calculator to find the probability that an F statistic will be bigger than the actual F ratios observed in the experiment.
To illustrate how this can be done, we'll find the P-value for the F ratio associated with Factor A. Recall that the F ratio associated with Factor A was defined as follows:

F ratio = MS_{A} / MS_{WG}

To find the P-value for this F ratio, we enter three inputs into the F Distribution Calculator: the degrees of freedom (1) for the Factor A mean square, the degrees of freedom (24) for the within-groups mean square, and the observed F statistic (5.47). Then, we click the Calculate button.
From the calculator, we see that P ( F > 5.47 ) equals about 0.03. Therefore, the P-value for Factor A is 0.03. Following the same procedure, we can find that the P-value for Factor B is 0.09; and the P-value for the AB interaction is 0.88.
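If you prefer a script to the online calculator, the same P-values can be obtained from the F distribution's survival function; the sketch below assumes SciPy is available:

```python
# P-values for the fixed-effects F ratios, using the survival function
# sf(x, dfn, dfd) = P(F > x) of the F distribution.
from scipy.stats import f

p_a = f.sf(5.47, 1, 24)    # Factor A: about 0.03
p_b = f.sf(2.72, 2, 24)    # Factor B: about 0.09
p_ab = f.sf(0.13, 2, 24)   # AB interaction: about 0.88
```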
Hypothesis Test
Recall that we specified a significance level of 0.05 for this experiment. Once you know the significance level and the P-value, the hypothesis test is routine. Here's the decision rule for accepting or rejecting the null hypothesis:
 If the P-value is bigger than the significance level, accept the null hypothesis.
 If the P-value is equal to or smaller than the significance level, reject the null hypothesis.
When we apply these decision rules to this experiment, here are the conclusions:
 Since the P-value (0.03) for Factor A is smaller than the significance level (0.05), we reject the null hypothesis that Factor A has no effect on the dependent variable.
 Since the P-value (0.09) for Factor B is bigger than the significance level (0.05), we cannot reject the null hypothesis that Factor B has no effect on the dependent variable.
 Since the P-value (0.88) for the AB interaction is bigger than the significance level (0.05), we cannot reject the null hypothesis that the AB interaction has no effect on the dependent variable.
Magnitude of Effect
The hypothesis test tells us whether a main effect or an interaction effect has a statistically significant effect on the dependent variable, but it does not address the magnitude (i.e., strength) of the effect. Here's the issue:
 When the sample size is large, you may find that even small effects are statistically significant.
 When the sample size is small, you may find that even big effects are not statistically significant.
With this in mind, it is customary to supplement analysis of variance with an appropriate measure of the magnitude of each treatment effect. Eta squared (η^{2}) is one such measure. Eta squared is the proportion of variance in the dependent variable that is explained by a treatment effect. The eta squared formula for analysis of variance is:
η^{2} = SS_{EFFECT} / SST
where SS_{EFFECT} is the sum of squares for a treatment effect and SST is the total sum of squares.
Given this formula, we can compute eta squared for each treatment effect in this experiment, as shown below:
η^{2}_{A} = SSA / SST = 5.63 / 36.3 = 0.155
η^{2}_{B} = SSB / SST = 5.60 / 36.3 = 0.154
η^{2}_{AB} = SSAB / SST = 0.27 / 36.3 = 0.007
Thus, 15.5% of the variance in the dependent variable can be explained by Factor A; 15.4%, by Factor B; and 0.7% by the AB interaction.
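The eta squared computations can be scripted directly from the sums of squares:

```python
# Eta squared: proportion of total variation explained by each effect.
sst = 36.3            # total sum of squares
eta_a = 5.63 / sst    # Factor A: about 0.155
eta_b = 5.60 / sst    # Factor B: about 0.154
eta_ab = 0.27 / sst   # AB interaction: about 0.007
```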
ANOVA Summary Table
It is traditional to summarize ANOVA results in an analysis of variance table. The analysis that we just conducted provides all of the information that we need to produce the following ANOVA summary table:
Analysis of Variance Table
Source  SS  df  MS  F  P 

A  5.63  1  5.63  5.45  0.03 
B  5.6  2  2.8  2.71  0.09 
AB  0.27  2  0.135  0.13  0.88 
WG  24.8  24  1.033  
Total  36.3  29 
This ANOVA table allows any researcher to interpret the results of the experiment, at a glance.
The P-value (shown in the last column of the ANOVA table) is the probability that an F statistic would be more extreme (bigger) than the F statistic shown in the table, assuming the null hypothesis is true. When the P-value is bigger than the significance level, we accept the null hypothesis; when it is smaller, we reject it.
To assess the strength of a treatment effect, an experimenter can compute eta squared (η^{2}). The computation is easy, using sum of squares entries from the ANOVA table, as shown below:
η^{2} = SS_{EFFECT} / SST
where SS_{EFFECT} is the sum of squares for a treatment effect and SST is the total sum of squares.
Fixed vs. Random Factors
In the analysis above, both factors in the experiment were fixed factors. How would the analysis change if one or both factors were random factors?
As it turns out, there are only a few differences between analysis of variance with fixed factors and analysis of variance with random factors.
 With some effects, the expected mean square will be different for a fixed factor than for a random factor.
 When an expected mean square for an effect is different, the F ratio for that effect will also be different.
 When an F ratio is different, the P-value will also be different.
In short, analysis of variance with fixed effects is exactly the same as analysis of variance with random effects up to the point where expected mean squares come into the picture. Therefore, only a few adjustments are required to conduct analysis of variance with random factors. Specifically, we need to (1) find expected mean squares for random factors, (2) recompute F ratios based on the expected mean squares, and (3) find Pvalues for the new F ratios.
To illustrate what is going on, let's repeat the analysis of variance for our experiment with a random-effects model and with a mixed model.
Random-Effects Model
Assume that both factors in our experiment are random factors. We know that sums of squares, degrees of freedom for effects, and sample estimates of mean squares do not change for fixed factors versus random factors. Therefore, we can use the values that we computed earlier for fixed factors in a new analysis for random factors. Those values are shown in the ANOVA summary table below:
Analysis of Variance Table: Random Effects
Source  SS  df  MS  F  P 

A  5.63  1  5.63  ???  ??? 
B  5.6  2  2.8  ???  ??? 
AB  0.27  2  0.135  ???  ??? 
WG  24.8  24  1.033  
Total  36.3  29 
The ANOVA table has some gaps, indicated by question marks. To fill in the gaps, we need to:
 Find expected value of mean squares for random factors.
 Compute F ratios, based on expected mean squares.
 Find P-values for each F ratio.
Expected Value
The table below shows the expected value of mean squares for a balanced, twofactor, full factorial experiment when both factors are random:
Mean square  Expected value 

MS_{A}  σ^{2}_{WG} + nσ^{2}_{AB} + nqσ^{2}_{A} 
MS_{B}  σ^{2}_{WG} + nσ^{2}_{AB} + npσ^{2}_{B} 
MS_{AB}  σ^{2}_{WG} + nσ^{2}_{AB} 
MS_{WG}  σ^{2}_{WG} 
F Ratio
The F ratio for each effect is defined in such a way that the expected value of the ratio equals one when σ^{2}_{EFFECT} equals zero. The table below shows how to construct F ratios for each effect when an experiment uses a randomeffects model.
Effect  F ratio  v_{1}  v_{2}

A  MS_{A} / MS_{AB}  p - 1  ( p - 1 )( q - 1 )
B  MS_{B} / MS_{AB}  q - 1  ( p - 1 )( q - 1 )
AB  MS_{AB} / MS_{WG}  ( p - 1 )( q - 1 )  pq( n - 1 )
For each effect, v_{1} is the degrees of freedom for the numerator of the F ratio; and v_{2} is the degrees of freedom for the denominator of the ratio.
Applying formulas from the table, we can compute F ratios for the main effects and the interaction effect, as shown below:
F_{A} = F(v_{1}, v_{2}) = F(1, 2) = MS_{A}/MS_{AB} = 5.63 / 0.135 = 41.7
F_{B} = F(v_{1}, v_{2}) = F(2, 2) = MS_{B}/MS_{AB} = 2.8 / 0.135 = 20.7
F_{AB} = F(v_{1}, v_{2}) = F(2, 24) = MS_{AB}/MS_{WG} = 0.135 / 1.033 = 0.13
P-Values
At this point, we know the value of each F ratio; and we know the degrees of freedom associated with each F ratio. Therefore, we can use Stat Trek's F Distribution Calculator to find the probability that an F statistic will be bigger than an actual F ratio observed in the experiment.
To illustrate how this can be done, we'll find the P-value for the F ratio associated with Factor A. We enter three inputs into the F Distribution Calculator: the degrees of freedom v_{1} (1), the degrees of freedom v_{2} (2), and the observed F statistic (41.7).
From the calculator, we see that P ( F > 41.7 ) equals about 0.02. Therefore, the P-value for Factor A is 0.02. Following the same procedure, we can find that the P-value for Factor B is 0.05; and the P-value for the AB interaction is 0.88.
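As a cross-check, the random-effects F ratios and P-values can be computed in a few lines; the sketch below assumes SciPy is available:

```python
# Random-effects model: both main effects are tested against MS_AB;
# the interaction is tested against MS_WG.
from scipy.stats import f

ms_a, ms_b, ms_ab, ms_wg = 5.63, 2.8, 0.135, 1.033

f_a = ms_a / ms_ab        # about 41.7
f_b = ms_b / ms_ab        # about 20.7
f_ab = ms_ab / ms_wg      # about 0.13

p_a = f.sf(f_a, 1, 2)     # about 0.02
p_b = f.sf(f_b, 2, 2)     # about 0.05
p_ab = f.sf(f_ab, 2, 24)  # about 0.88
```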
ANOVA Summary Table
Based on the analysis above, we can fill in the gaps that originally existed in our ANOVA table. Here is the table with the gaps filled in.
Analysis of Variance Table: Random Effects
Source  SS  df  MS  F  P 

A  5.63  1  5.63  41.7  0.02 
B  5.6  2  2.8  20.7  0.05 
AB  0.27  2  0.135  0.13  0.88 
WG  24.8  24  1.033  
Total  36.3  29 
Mixed Model
A mixed model describes an experiment in which at least one factor is a fixed factor, and at least one factor is a random factor. In our experiment, suppose we assume that Factor A is a fixed factor, and Factor B is a random factor.
We know that sums of squares, degrees of freedom for effects, and sample estimates of mean squares do not change for fixed factors versus random factors. Therefore, we can use the values that we computed earlier in a new analysis for the mixed model. Those values are shown in the ANOVA summary table below:
Analysis of Variance Table: Mixed Model
Source  SS  df  MS  F  P 

A  5.63  1  5.63  ???  ??? 
B  5.6  2  2.8  ???  ??? 
AB  0.27  2  0.135  ???  ??? 
WG  24.8  24  1.033  
Total  36.3  29 
The ANOVA table has some gaps, indicated by question marks. To fill in the gaps, we need to:
 Find expected value of mean squares for fixed factors and random factors.
 Compute F ratios, based on expected mean squares.
 Find P-values for each F ratio.
Expected Value
The table below shows the expected value of mean squares for a balanced, twofactor, full factorial experiment when Factor A is fixed and Factor B is random:
Mean square  Expected value 

MS_{A}  σ^{2}_{WG} + nσ^{2}_{AB} + nqσ^{2}_{A} 
MS_{B}  σ^{2}_{WG} + npσ^{2}_{B} 
MS_{AB}  σ^{2}_{WG} + nσ^{2}_{AB} 
MS_{WG}  σ^{2}_{WG} 
F Ratio
The F ratio for each effect is defined in such a way that the expected value of the ratio equals one when σ^{2}_{EFFECT} equals zero. The table below shows how to construct F ratios for each effect when Factor A is a fixed effect, and Factor B is a random effect.
Effect  F ratio  v_{1}  v_{2}

A  MS_{A} / MS_{AB}  p - 1  ( p - 1 )( q - 1 )
B  MS_{B} / MS_{WG}  q - 1  pq( n - 1 )
AB  MS_{AB} / MS_{WG}  ( p - 1 )( q - 1 )  pq( n - 1 )
For each effect, v_{1} is the degrees of freedom for the numerator of the F ratio; and v_{2} is the degrees of freedom for the denominator of the ratio.
Applying formulas from the table, we can compute F ratios for the main effects and the interaction effect, as shown below:
F_{A} = F(v_{1}, v_{2}) = F(1, 2) = MS_{A}/MS_{AB} = 5.63 / 0.135 = 41.7
F_{B} = F(v_{1}, v_{2}) = F(2, 24) = MS_{B}/MS_{WG} = 2.8 / 1.033 = 2.71
F_{AB} = F(v_{1}, v_{2}) = F(2, 24) = MS_{AB}/MS_{WG} = 0.135 / 1.033 = 0.13
P-Values
At this point, we know the value of each F ratio; and we know the degrees of freedom associated with each F ratio. Therefore, we can use Stat Trek's F Distribution Calculator to find the probability that an F statistic will be bigger than an actual F ratio observed in the experiment.
To illustrate how this can be done, we'll find the P-value for the F ratio associated with Factor B (the random factor). We enter three inputs into the F Distribution Calculator: the degrees of freedom v_{1} (2), the degrees of freedom v_{2} (24), and the observed F statistic (2.71).
From the calculator, we see that P ( F > 2.71 ) equals about 0.09. Therefore, the P-value for Factor B is 0.09. Following the same procedure, we can find that the P-value for Factor A is 0.02; and the P-value for the AB interaction is 0.88.
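As before, the mixed-model F ratios and P-values can be verified with a short script (SciPy assumed available):

```python
# Mixed model (A fixed, B random): A is tested against MS_AB;
# B and the AB interaction are tested against MS_WG.
from scipy.stats import f

ms_a, ms_b, ms_ab, ms_wg = 5.63, 2.8, 0.135, 1.033

f_a = ms_a / ms_ab        # about 41.7
f_b = ms_b / ms_wg        # about 2.71
f_ab = ms_ab / ms_wg      # about 0.13

p_a = f.sf(f_a, 1, 2)     # about 0.02
p_b = f.sf(f_b, 2, 24)    # about 0.09
p_ab = f.sf(f_ab, 2, 24)  # about 0.88
```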
ANOVA Summary Table
Based on the analysis above, we can fill in the gaps that originally existed in our ANOVA table. Here is the table with the gaps filled in.
Analysis of Variance Table: Mixed Model
Source  SS  df  MS  F  P 

A  5.63  1  5.63  41.7  0.02 
B  5.6  2  2.8  2.71  0.09 
AB  0.27  2  0.135  0.13  0.88 
WG  24.8  24  1.033  
Total  36.3  29 