Abstract
Normal-theory statistical tests usually described as requiring uncorrelated errors can still be performed when the dependence takes a particular form, namely equal correlation among the errors. However, even slightly unequal correlations among the errors are known to yield tests with undesirable properties (e.g., inflated Type I error rates). Randomization tests were considered as alternatives to normal-theory procedures. A simulation study was performed to investigate whether the distributional behavior of the two-sample, independent-groups randomization test is robust to departures from the assumption of equally correlated errors. The results suggest that the randomization test is not robust to such departures and reinforce the statistical dictum that significance tests should be avoided when errors are unequally correlated.
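For context, the two-sample randomization test studied in the abstract reassigns the pooled observations to the two groups and compares the observed mean difference against the resulting reference distribution. The sketch below is an illustrative Monte Carlo version of that test, not the authors' simulation code; the function name, the difference-in-means statistic, and the sample data are assumptions for the example.

```python
import random

def randomization_test(x, y, n_perm=10000, seed=0):
    """Two-sample, independent-groups randomization test (Monte Carlo).

    Pools the observations, repeatedly reassigns them at random to two
    groups of the original sizes, and returns the proportion of
    reassignments whose absolute mean difference is at least as large
    as the observed one (a two-sided p-value).
    """
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    n_x = len(x)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment of the pooled data
        gx, gy = pooled[:n_x], pooled[n_x:]
        diff = abs(sum(gx) / n_x - sum(gy) / len(gy))
        if diff >= observed:
            count += 1
    # Add-one correction keeps the estimated p-value strictly positive.
    return (count + 1) / (n_perm + 1)

# Hypothetical example data: two small independent samples.
p = randomization_test([5.1, 4.8, 6.2, 5.5], [4.0, 3.9, 4.4, 4.1])
```

Under equal correlation among the errors, every reassignment of the pooled data is equally likely under the null, which is what justifies this reference distribution; the paper's point is that unequal correlations break this exchangeability.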
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 75-85 |
| Number of pages | 11 |
| Journal | Computational Statistics and Data Analysis |
| Volume | 11 |
| Issue number | 1 |
| State | Published - Jan 1991 |
Keywords
- Correlated errors
- Randomization tests
- Robustness
- Simulation