Sample-size requirements for achieving various levels of statistical power are developed for posttest-only, gain-score, and analysis-of-covariance designs in evaluating training interventions. Results indicate that the power to detect true effects depends on the type of design, the correlation between pretest and posttest, and the size of the effect due to the training program; design type and the pre-post correlation jointly determine the power curve in a complex way. An estimate of the sample sizes typically used in training-evaluation studies is then derived and used to assess the power of each design to detect true effects at that sample size. Recommendations for design choice are offered based on sample size and the projected correlation between pretest and posttest scores.
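The interplay the abstract describes can be illustrated with standard power-analysis arithmetic (not necessarily the authors' exact method): a gain-score comparison has error variance 2(1 − ρ)σ², and ANCOVA roughly (1 − ρ²)σ², where ρ is the pre-post correlation, so the effective effect size shifts with ρ. A minimal sketch, using a normal approximation to the two-sample test and illustrative values for d, n, and ρ:

```python
from statistics import NormalDist
from math import sqrt

def power_two_group(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-group z-test
    with standardized effect size d and n per group."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality under the alternative
    return 1 - NormalDist().cdf(z - ncp) + NormalDist().cdf(-z - ncp)

def effective_d(d, rho, design):
    """Effect size after adjusting the error variance for the design;
    rho is the pretest-posttest correlation."""
    if design == "posttest":
        return d                        # raw posttest comparison
    if design == "gain":
        return d / sqrt(2 * (1 - rho))  # Var(post - pre) = 2(1 - rho) * sigma^2
    if design == "ancova":
        return d / sqrt(1 - rho ** 2)   # residual variance = (1 - rho^2) * sigma^2
    raise ValueError(design)

# Illustrative case: medium effect d = 0.5, n = 30 per group, rho = 0.6
for design in ("posttest", "gain", "ancova"):
    p = power_two_group(effective_d(0.5, 0.6, design), 30)
    print(design, round(p, 3))
```

Note that when ρ < .5 the gain-score design is actually less powerful than the posttest-only design (its error variance 2(1 − ρ)σ² exceeds σ²), while ANCOVA never loses power to either, which is one way the design and correlation interact "complexly."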
Published: Sep 1985