Abstract
We evaluated three quality assurance procedures for false-positive errors, i.e., rejecting a batch when in control (Type I error), and for false-negative errors, i.e., accepting a batch when out of control (Type II error). Thirty-six computer-generated quality assurance data points were considered per batch, representing duplicate measurements of three control pools for six analytes. Type I errors were estimated from normally distributed deviations, and Type II errors were estimated for a two-standard-deviation shift in one analyte. For each situation, 1000 batches were processed and the numbers of rejection/acceptance errors counted. The procedures included simultaneous evaluation of data from three consecutive batches (procedure A), use of a fixed coefficient-of-variation cutoff (procedure B), and selection of priority analytes (procedure C). Procedure A had low error rates of 2.4% (Type I) and 9.5% (Type II), lower than those of procedures B and C. Currently, we are tailoring procedure A for the detection of trends. Our simulation system allows rapid experimentation with changes in procedures to optimize performance; generation and processing of 1000 batches took about 20 minutes using the Statistical Analysis System (SAS, v6.09) on a Digital Equipment Corporation Alpha machine. These methods of evaluation provided for the identification and detailed description of a quality assurance procedure with low error rates for multi-analyte assays.
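The Monte Carlo setup described in the abstract (36 simulated points per batch, 1000 batches, a two-standard-deviation shift for the out-of-control case) can be sketched as follows. This is a minimal illustration in Python, not the authors' SAS implementation; the z-limit rejection rule, the seed, and all names are assumptions and do not correspond to the abstract's procedures A, B, or C:

```python
import random

# Assumed batch layout from the abstract: duplicate measurements of
# three control pools for six analytes = 36 data points per batch.
N_ANALYTES = 6
N_POOLS = 3
N_REPS = 2
N_BATCHES = 1000
Z_LIMIT = 3.0   # assumed rejection rule: reject batch if any |z| exceeds this
SHIFT_SD = 2.0  # out-of-control condition: 2 SD shift in one analyte

def batch_rejected(shifted_analyte=None):
    """Simulate one batch; return True if the assumed rule rejects it."""
    for analyte in range(N_ANALYTES):
        shift = SHIFT_SD if analyte == shifted_analyte else 0.0
        for _ in range(N_POOLS * N_REPS):
            z = random.gauss(shift, 1.0)  # standardized deviation
            if abs(z) > Z_LIMIT:
                return True
    return False

random.seed(1)
# Type I rate: fraction of in-control batches rejected.
type1 = sum(batch_rejected() for _ in range(N_BATCHES)) / N_BATCHES
# Type II rate: fraction of out-of-control batches accepted.
type2 = sum(not batch_rejected(shifted_analyte=0)
            for _ in range(N_BATCHES)) / N_BATCHES
print(f"Type I rate: {type1:.1%}, Type II rate: {type2:.1%}")
```

Swapping in a different rejection rule (e.g., a multi-batch rule or a coefficient-of-variation cutoff) only requires changing `batch_rejected`, which mirrors the rapid-experimentation workflow the abstract describes.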
Original language | English (US) |
---|---|
Pages (from-to) | A448 |
Journal | FASEB Journal |
Volume | 11 |
Issue number | 3 |
State | Published - 1997 |