
Question 194:




The ANOVA (analysis of variance) is a test specifically designed to test for differences among multiple means (among many other things), whereas the 2-sample t-test tests the difference between just two means.

The problem with the ANOVA is that when you do find a statistical difference (that is, p < .05), the model does not tell you which mean is statistically different from which, so one usually has to conduct what are called comparison tests. One comparison technique is to simply run 2-sample t-tests on all pairs of means analyzed in the ANOVA, so we can then make statements such as: treatment A is statistically higher than treatment B (p < .05), and so forth.
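As a sketch of the all-pairs approach (the treatment labels here are hypothetical), the set of pairwise t-tests you would need can be enumerated with Python's itertools:

```python
from itertools import combinations

# Hypothetical treatment labels from an ANOVA with four groups.
treatments = ["A", "B", "C", "D"]

# Every pair of means that a follow-up 2-sample t-test would compare.
pairs = list(combinations(treatments, 2))

print(len(pairs))  # 6 pairwise comparisons for 4 means
print(pairs)
```

Each pair would then be fed to a 2-sample t-test; the count of pairs is what drives the multiple-comparison problem discussed next.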

With just 2 or 3 comparisons, this approach is often reasonable, especially if it was planned before the experiment was conducted. However, since we are now making more comparisons, there is a higher chance that we'll detect a difference where one in fact doesn't exist. This is a Type I error, and it is what the p-value refers to. There are ways of managing this inflated chance, usually by applying some correction to the significance level so that the family-wise error rate stays at the nominal level. One such technique is called the Bonferroni correction. The problem is that with more than a few comparisons, the corrected per-comparison threshold becomes so small that you need to observe very large differences in means (p-values less than .01, for example). For example, using the Bonferroni correction with 3 comparisons, you would need to see differences at p < .0167 (.05/3 = .0167). For 6 comparisons (that's all pairs of just 4 means), you would need to observe p < .008.
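The arithmetic above can be sketched in a few lines (a minimal illustration, not a full multiple-comparisons procedure; the function name is made up for this example):

```python
from math import comb

def bonferroni_alpha(n_means: int, alpha: float = 0.05) -> float:
    """Per-comparison significance threshold when all pairs of
    n_means group means are compared, using the Bonferroni correction:
    divide the family-wise alpha by the number of comparisons."""
    n_comparisons = comb(n_means, 2)  # all pairs of means
    return alpha / n_comparisons

print(round(bonferroni_alpha(3), 4))  # 3 means -> 3 comparisons -> 0.0167
print(round(bonferroni_alpha(4), 4))  # 4 means -> 6 comparisons -> 0.0083
```

The threshold shrinks quickly: by 6 means (15 comparisons), each individual t-test must reach p < .0033 to be declared significant.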

In short, this approach works only for a few comparisons, because correcting for the inflated Type I error rate drastically decreases the likelihood of detecting any real difference between treatment means.
