Let’s start this series of math lessons with an AI history lesson on sigma in medical labs.  

The introduction of Sigma metrics to the medical laboratory was a pivotal shift from simply “checking for errors” to “designing for quality.” It brought the industrial rigor of Motorola’s Six Sigma strategy into the high-stakes world of clinical diagnostics.

  1. The Publication Debut (1990s)

Sigma metrics were first formally adapted and published for medical laboratory data in the mid-to-late 1990s. The primary architect of this transition was Dr. James O. Westgard.

While Six Sigma was popularized by Motorola in 1986 and later GE, Westgard published seminal papers in 1992 and 1996 specifically detailing how to calculate the “Sigma-metric” for analytical processes.[ZB Note 1] By the early 2000s, this became the gold standard for evaluating laboratory performance.

  2. How They Were Described

Sigma metrics were described as a way to measure the capability of a laboratory test to remain within its defined “Total Allowable Error” (TEa). The metric quantifies how many standard deviations (sigmas) fit between the mean of a process and the tolerance limit. [ZB Note 2]
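To make the definition concrete, here is a minimal sketch of the calculation as it is commonly written, Sigma = (TEa − |Bias|) / CV with all terms in percent; the assay figures are hypothetical:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-metric: standard deviations between the (bias-shifted)
    process mean and the nearest TEa tolerance limit."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: TEa = 10%, bias = 2%, CV = 2%
print(sigma_metric(10.0, 2.0, 2.0))  # → 4.0
```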

  3. Comparison to Critical Systematic Error (∆SEc)

Sigma metrics are intrinsically linked—and often directly compared—to Critical Systematic Error (∆SEc).

In lab medicine, ∆SEc is the size of systematic error (shift) that will cause a 5% risk of test results exceeding the TEa. In the early literature, Westgard demonstrated that the Sigma metric and ∆SEc are essentially the same value when the goal is to detect a shift with 95% confidence. [ZB Note 3]
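The relationship is easy to see side by side: with the standard formulas, the Sigma-metric exceeds ∆SEc by exactly the z-value used for the 5% risk (1.65). The input numbers below are hypothetical:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

def critical_systematic_error(tea_pct, bias_pct, cv_pct, z=1.65):
    # Shift (in SDs) that leaves a 5% one-tailed risk of exceeding TEa
    return (tea_pct - abs(bias_pct)) / cv_pct - z

sigma = sigma_metric(10.0, 2.0, 2.0)             # 4.0
sec = critical_systematic_error(10.0, 2.0, 2.0)  # 2.35
print(round(sigma - sec, 2))  # → 1.65, a constant offset, not equality
```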


 

⇒[ZB Note 1]

Labs are the only setting that calculates sigma using the Westgard formula. I once hired a Sigma Green Belt freelancer to go into a manufacturing facility and describe how they use sigma. They do not calculate it – they measure the number of defects per million widgets produced. They count defects, often at ‘gates’ throughout the manufacturing process. If the defect rate rises above a defined limit, they stop production and improve performance. In the few settings where sigma is calculated, they do not use %Bias and %CV, as this formula is mathematically flawed.↩
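As a contrast with the lab formula, the manufacturing convention works backwards from a counted defect rate to a sigma level through the normal quantile, conventionally adding a 1.5-sigma shift. A sketch with made-up defect counts:

```python
from statistics import NormalDist

def sigma_from_defects(defects: int, opportunities: int, shift: float = 1.5) -> float:
    # Convert defects-per-million-opportunities (DPMO) to a sigma level
    # via the normal quantile, plus the customary 1.5-sigma shift.
    dpmo = 1_000_000 * defects / opportunities
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

# Hypothetical gate: 34 defective widgets out of 10,000 inspected (3,400 DPMO)
print(round(sigma_from_defects(34, 10_000), 1))  # → 4.2
```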

⇒[ZB Note 2]

Sigma is a z-value normalized to the nearest TEa limit. A z-value tells you the percent of results falling beyond that point on the normal curve. Sigma is a very complicated and non-linear way to represent percent failure. ↩
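The note’s point can be shown directly: the one-tailed normal tail area gives the percent failure that each sigma value stands for (a sketch; it assumes the mean has not shifted):

```python
import math

def pct_beyond(sigma: float) -> float:
    # One-tailed area beyond z = sigma on the standard normal curve, as a percent
    return 100 * 0.5 * math.erfc(sigma / math.sqrt(2))

for s in range(2, 7):
    print(f"{s}-sigma → {pct_beyond(s):.5f}% of results beyond the TEa limit")
```

Note how non-linear the mapping is: each step up in sigma shrinks the failure percentage by a different, rapidly growing factor.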

⇒[ZB Note 3]

Sigma and ∆SEc are far from the same thing! ∆SEc is the number of SDs the mean can shift before 5% of results fail. Sigma is the number of SDs the mean can shift before 50% of results fail. ↩
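The distinction can be checked numerically: shift the mean by ∆SEc and about 5% of results land beyond the TEa limit; shift it by the full Sigma value and the limit sits at the mean, so 50% fail. A sketch using a hypothetical 4-sigma process:

```python
from statistics import NormalDist

nd = NormalDist()
sigma = 4.0         # hypothetical Sigma-metric: TEa limit is 4 SD from the mean
sec = sigma - 1.65  # corresponding critical systematic error

# Percent of results beyond the TEa limit after each size of shift:
fail_after_sec = 100 * (1 - nd.cdf(sigma - sec))      # limit now 1.65 SD away
fail_after_sigma = 100 * (1 - nd.cdf(sigma - sigma))  # limit now at the mean
print(round(fail_after_sec, 1), round(fail_after_sigma, 1))  # → 4.9 50.0
```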

[The missing link between Sigma Metrics and Risk Metrics]  Risk evaluation is the comparison of estimated risk to acceptable risk. Patient risk is measured by the number of errors produced. Sigma metrics are blind to the volume of patient samples tested, and sigma underestimates the number of errors by 50% when bias is zero. Risk metrics report the number of errors per year or per failure event, and the probability of a patient receiving an erroneous result as one error every ‘x’ years, months, weeks, days, or hours. Risk metrics are intuitively understood by everybody!
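Translating a defect rate into these risk terms is straightforward once test volume enters the picture; the sketch below uses a hypothetical annual volume and defect rate:

```python
def risk_metrics(defects_per_million: float, samples_per_year: int):
    # Expected erroneous results per year, and the mean interval between them
    errors_per_year = defects_per_million * samples_per_year / 1_000_000
    days_between = 365 / errors_per_year if errors_per_year else float("inf")
    return errors_per_year, days_between

# Hypothetical assay: 1,350 defects per million on 100,000 samples per year
errors, days = risk_metrics(1_350, 100_000)
print(f"{errors:.0f} errors/year, roughly one every {days:.1f} days")
```

The same defect rate on ten times the volume means ten times the errors per year, which is exactly the volume-dependence that a bare sigma value hides.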


There are a limited number of opportunities to become a beta tester or early adopter for RiskGATOR™ software. Apply here.
We are planning a Risk Management MasterClass. Register here to be sure you are informed.