Risk Management is a science that transcends traditional quality control in medical laboratories.
In this risk simulator, which models the information in the RiskGATOR™ software, the yellow boxes represent the questions answered by Risk Management; statistical QC considers only the information in the white boxes.
CLSI EP23A defines risk as the combination of the probability and the severity of harm. Patient harm can be expressed as the clinical cost of unnecessary repeat and follow-up testing, as well as incorrect or delayed diagnosis or treatment. The relationship between laboratory error and cost was established in this NIST/Mayo study of calcium results.
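EP23A (following ISO 14971) assesses that combination using probability-of-harm and severity-of-harm scales. As a purely illustrative sketch, here is one way such an assessment could be encoded in Python; the five-level scales, the multiplicative score, and the acceptability cutoff are my assumptions for demonstration, not values taken from the standard:

```python
# Illustrative risk scoring: risk combines probability and severity of harm.
# The levels, the multiplicative score, and the cutoff below are assumptions
# for demonstration only; EP23A itself uses qualitative acceptability matrices.
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4, "catastrophic": 5}

def risk_acceptable(probability: str, severity: str, cutoff: int = 8) -> bool:
    """Combine probability and severity into a score and test it against a cutoff."""
    return PROBABILITY[probability] * SEVERITY[severity] < cutoff

print(risk_acceptable("remote", "serious"))     # 2 * 3 = 6  -> acceptable here
print(risk_acceptable("probable", "critical"))  # 4 * 4 = 16 -> not acceptable
```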
Calculations of Total Error and Sigma do not vary with the number of patient samples tested; these decades-old statistical metrics are blind to the number of medically incorrect results (MIRs) reported per year and per failure event. Consider a test volume of 100 samples per day, or 36,500 per year. If you calculate Total Error as TE = |bias| + 2 SD and accept any method with TE less than TEa (allowable Total Error), you are allowing a one-tailed failure rate of 2.275%, or about 830 MIRs per year. If you think 3 sigma is acceptable, you are allowing 49 errors per year. (And if you 'believe', without evidence, in the 1.5 sigma shift, you are accepting over 2,000 errors per year at 3 sigma!)
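To make that arithmetic reproducible, here is a minimal sketch in Python. It assumes a Gaussian error distribution, a one-tailed exceedance of TEa, and the 100-samples-per-day volume from the example above; scipy is the only dependency.

```python
from scipy.stats import norm

TESTS_PER_YEAR = 100 * 365  # 100 patient samples per day -> 36,500 per year

# Each scenario is the effective z-distance (in SD) between the center of the
# error distribution and the TEa limit. The unproven 1.5 sigma shift erodes a
# 3 sigma process to an effective 1.5 sigma.
scenarios = {
    "TE = |bias| + 2 SD, just inside TEa": 2.0,
    "3 sigma process": 3.0,
    "3 sigma with a 1.5 sigma shift": 1.5,
}

for label, z in scenarios.items():
    p_exceed = norm.sf(z)               # one-tailed Gaussian tail area beyond z
    mirs = p_exceed * TESTS_PER_YEAR    # expected medically incorrect results/year
    print(f"{label}: {p_exceed:.3%} -> {mirs:,.0f} MIRs per year")
```

Running it prints the 2.275%/830, 0.135%/49, and 6.681%/over-2,000 figures quoted above.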
For the past few years I have been measuring risk as the number of MIRs per year, and the vast majority of methods produce fewer than one MIR per year.
How can you justify allowing 830, 49, or 2,000+ errors per year when your peers limit risk to one error per year?
Statistical QC assumes that QC rules will detect failure:
- Even if the method has already failed
- Even if you multiply the chart SD by 2, turning your 1-3s rule into a 1-6s rule
- Even if you pick your QC chart mean from a peer value that is 2 SD below your measured mean, turning that 1-6s rule into a 1-8s rule (the sketch after this list works out the arithmetic)!
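To see how those distortions compound, express the effective control limit in units of the true method SD. A minimal sketch with a hypothetical helper (the function name and parameters are mine, not from any QC package):

```python
def far_limit_in_true_sd(rule_z: float, sd_inflation: float = 1.0,
                         mean_offset_sd: float = 0.0) -> float:
    """Control limit on the far side of the true mean, in true-SD units.

    rule_z         -- nominal rule multiplier, e.g. 3 for a 1-3s rule
    sd_inflation   -- factor by which the chart SD overstates the true SD
    mean_offset_sd -- distance (in true SD) of the chart mean from the true mean
    """
    return rule_z * sd_inflation + mean_offset_sd

print(far_limit_in_true_sd(3))                                    # 3.0: a true 1-3s rule
print(far_limit_in_true_sd(3, sd_inflation=2))                    # 6.0: behaves like 1-6s
print(far_limit_in_true_sd(3, sd_inflation=2, mean_offset_sd=2))  # 8.0: behaves like 1-8s
```

On the near side the limit tightens to 4 true SDs, but an error drifting in the direction of the offset must now travel 8 true SDs before a 1-3s flag ever fires.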
Register below to participate in our next Masterclass.
