Scientific Posters
ADLM 2024
A-202. Quality control design survey reveals 399 ways to do it wrong
Z. BROOKS. AWEsome Numbers Inc. (ElevateQC), Worthington, ON, Canada
Abstract
Background: The effectiveness of statistical quality control processes to detect unacceptable patient risk depends on the choices that laboratory professionals make for the design of quality control charts and acceptable patient risk.
Recommended best practice consists of putting a meaningful recent mean and SD on the QC chart, selecting rules based on the method's margin for error relative to unacceptable risk (its Sigma value), and basing acceptable risk on clinical need.
Methods: A survey was circulated through LinkedIn and Labvine. Forty respondents who included their credentials were included in the final analysis. Respondents were asked to select the single most correct answer to four questions regarding the parameters that govern the effectiveness of quality control processes:
1. What is the best source of the mean on the QC chart? (Four choices)
2. What is the best source of the SD on the QC chart? (Five choices)
3. What is the best way to select QC rules for each QC sample? (Six choices)
4. If an analytical process "fails," we should stop reporting before [errors are reported]. (Five choices)
These choices permit a total of 400 potential patterns of QC selections (4 * 5 * 5 * 4).
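The pattern count can be reproduced with a short sketch, using the factor counts as printed in the abstract's product (4 * 5 * 5 * 4):

```python
from math import prod

# Choice counts per question, as given in the abstract's product (4 * 5 * 5 * 4).
choices = [4, 5, 5, 4]

patterns = prod(choices)  # every possible combination of answers
wrong = patterns - 1      # all but the single fully correct pattern
print(patterns, wrong)    # 400 399
```

Only one of the 400 combinations matches all the recommended answers, hence the 399 "ways to do it wrong" of the title.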
Results:
When asked the acceptable error rate if an analytical process "fails," responses were: More than 5% errors are reported – 11%; Any errors are reported – 66%; It varies with clinical use of the test – 16%; It varies with capabilities of our method – 8%.
While most respondents chose at least one of the recommended answers for QC chart design and rules, only three chose the best choice for all parameters.
Conclusions: Despite decades of recommendations of best practice for quality control, typical practice throughout the laboratory community does not reflect them.
It is time to consider a new approach to teaching, validating and certifying competency in medical laboratory quality control.
A-203. A Disconnect Between Detectable Patient Error Rates and QC Understanding
Z. BROOKS. AWEsome Numbers Inc. (ElevateQC), Worthington, ON, Canada
Abstract
Background: ISO 14971 defines risk evaluation as "the process of comparing the estimated risk against given risk criteria to determine the acceptability of the risk." Effective QC processes would detect if a stable, acceptable error rate became unacceptable.
Methods: A survey was circulated through LinkedIn and Labvine. Forty respondents who included their credentials were included in the final analysis. Questions included:
1. What is the best choice for the acceptable error rate when an analytical process is stable and in control?
2. If an analytical process "fails," how many errors are you willing to report before the lab should stop reporting?
3. If an analytical process has a 5% stable error rate, how high would the error rate after failure need to become for a 1-2s rule to detect the failure with one QC sample in a single QC run?
This interactive poster allows viewers to cast their vote on Question 4 and then reveals a Levey-Jennings (LJ) chart depicting a stable error rate of 5% with a shift, showing the number of 1-2s failures at their chosen acceptable post-failure error rate.
Results: During stable operation, only 8% wanted zero errors, while 38% voted that they would allow up to 5% errors.
After failure, 66% wanted to report zero errors and only 10% would allow a 5% error rate.
69% believed they would see a 1-2s failure in one run if the error rate rose to less than 50%; 19% believed it would require a 100% error rate.
Conclusions: There is a striking disconnect: most respondents believed it was acceptable to report more errors during stable operation than if the method failed.
There is a widespread misunderstanding of the logic that connects management of acceptable patient risk with the capabilities of statistical QC.
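The detection question above can be sketched with a simple normal-distribution model (an illustration under assumed conditions, not the poster's own simulation): if a stable 5% error rate places the mean about 1.64 SD inside the allowable-error limit, a shift toward the limit both raises the error rate and raises the chance that a single QC result violates the 1-2s rule.

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal; cdf/inv_cdf mirror NORMSDIST/NORMSINV

# Assumed model: a stable 5% error rate puts the mean ~1.64 SD inside the
# allowable-error limit. A shift of `delta` SD toward the limit raises the
# error rate; a 1-2s flag fires when one QC result falls more than 2 SD
# from the original mean (one-sided, toward the limit).
z0 = nd.inv_cdf(1 - 0.05)  # ~1.64 SD of margin at a 5% stable error rate

for delta in (1.0, 2.0, 3.0):
    post_error = 1 - nd.cdf(z0 - delta)  # error rate after the shift
    p_flag = 1 - nd.cdf(2 - delta)       # chance one QC result breaks 1-2s
    print(f"shift {delta:.0f} SD: error rate {post_error:.0%}, "
          f"1-2s flag probability {p_flag:.0%}")
```

Under this model, even a post-failure error rate near 64% (a 2 SD shift) gives only a 50% chance that a single QC result triggers 1-2s, consistent with the abstract's finding that most respondents overestimate single-run detection.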
B-131. The fallacy of average sigma levels from mixed sample levels
Abstract
Background: Sigma measures the number of SDs (z-value) from the existing sample mean to the nearest analytical performance standard or allowable-error limit. Authors and software programs often measure sigma metrics for each QC sample but use an average sigma to compare methods and select QC strategies. That practice leads to dramatic over- or under-estimation of the number of errors reported and to the selection of inappropriate QC strategies.
Methods: Data samples were created to produce sigma values of 3.0, 4.5, and 6.0. The Microsoft Excel function NORMSDIST was used to convert sigma to a percent error rate and to the number of errors per million patients; NORMSINV was used to convert the number of errors per million patients back to sigma.
Results:
A. Six sigma represents a method with a failure rate of 0.0000001 percent, or 0.001 failures of ASP/TEa per million patients.
B. Three sigma represents a method with a failure rate of 0.135 percent, or 1,350 failures of ASP/TEa per million patients.
C. While the average sigma value of samples A and B was 4.5, the average error rate was 0.0675 percent, or 675 failures of ASP/TEa per million patients.
D. An error rate of 0.0675 percent converts with the NORMSINV function to a sigma of 3.21.
E. A true 4.5 sigma method would have a failure rate of 0.00034 percent, or 3.4 failures of ASP/TEa per million patients.
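The conversions above can be reproduced with Python's stdlib statistics.NormalDist standing in for the Excel NORMSDIST/NORMSINV functions (a sketch of the calculation, not the poster's spreadsheet):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal, mirrors Excel NORMSDIST/NORMSINV

def error_rate(sigma):
    """Fraction of results beyond the allowable-error limit at a given sigma."""
    return 1 - nd.cdf(sigma)

p6, p3 = error_rate(6.0), error_rate(3.0)
print(f"6-sigma: {p6 * 1e6:.3f} errors per million")  # ~0.001
print(f"3-sigma: {p3 * 1e6:.0f} errors per million")  # ~1350

# Averaging the ERROR RATES (not the sigmas) and converting back:
avg_rate = (p6 + p3) / 2
sigma_equiv = nd.inv_cdf(1 - avg_rate)  # mirrors NORMSINV
print(f"average rate {avg_rate * 1e6:.0f}/million -> sigma {sigma_equiv:.2f}")  # ~675, ~3.21

p45 = error_rate(4.5)
print(f"true 4.5-sigma: {p45 * 1e6:.1f} errors per million")  # ~3.4
```

The gap between 675 errors per million (the real mixed-sample rate, sigma 3.21) and 3.4 per million (what a naive 4.5 average sigma implies) is the fallacy the abstract describes.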
Conclusions: Sigma studies that present an average sigma value underestimate the true number of errors reported. It would be more scientifically correct either to report the number of errors directly or to report an average sigma value computed from the average error rate. Laboratory professionals should interpret sigma studies and publications cautiously when a single sigma value is used to represent two or more data sets.
