Spring into Risk Management! Open Project.
- Objectives & Long Term Goals
- How to Participate
- Current Chapter(s) & Challenge(s)
- Checklist and Surveys
- Overview of Mathematically-OptimiZed Risk Evaluation
- Discussion Forum - LinkedIn Posts & Articles
- Starting Point - 2001
- research best practice to manage patient risk in medical laboratories
- measure existing patient risk against regulated, evidence-based or clinical allowable error limits for calcium, glucose and sodium
- compare key features of existing and proposed statistical quality control processes
- evaluate the ability of existing statistical quality control (SQC) processes and "Mathematically-OptimiZed™" SQC processes to
- meet best practice recommendations
- detect simulated method failure
- quantify potential reduction in:
- financial risk exposure from medical laboratory errors
- in-lab costs to perform quality control
- publish a series of books, courses, articles, blogs
- create a process to educate and competency-certify risk management professionals
- discuss user requirements and design suggestions for core risk management software
- verify the ability of submitted software programs to meet customer needs and implement best practice
- Show me the process.
- Why should I bother?
- How much time will this take?
- What risk metrics (data) do I need to send?
- What results will I receive?
- How is anonymity assured?
- Can I be just a participant, analyst or panelist?
- What does this cost? What's the catch?
- What difference can this one study make?
Objectives & Long Term Goals
Ineffective quality control practices expose patients to the risk of incorrect or delayed diagnosis and/or treatment. CLSI EP23-A, CMS Individualized Quality Control Plan (IQCP), CLIA, ISO and WHO now recommend risk management practices for medical laboratories. Laboratories must “ensure test result quality is appropriate for clinical use;” validate “the ability of the QC procedures to detect medically allowable error;” and assess “potential costs both in terms of the patient’s well-being and financial liability.” Early studies have shown significant opportunities to reduce the number and clinical/legal cost of medically-unreliable results by automating published best practice.
This project will:
The long-term goal, pending the outcome of Phase 1, is to create an ongoing member-led organization to:
How to Participate
1. Guests are welcome to review all site material including Current Chapters and Challenges. Please share the link and the knowledge - especially the glossaries.
2. Once you are logged in as a member, you can participate in discussions and start posting brainstorm ideas for the overall project!
3. Enrol in the course Risk Management Resources. Enrol in more courses to explore the site and discover the history of M.O.R.E. Quality. (This site is still a little rough, so please send any Opportunities For Improvement in the brainstorming discussion.)
4. Complete the participant form to become a participant, analyst or panelist. After you have posted at least two discussion comments you will receive confirmation of your enrolment in "Mini Risk Evaluation Program, Spring 2018".
Proposed timeline: see the methodology. Webinar the week of April 16th.
Current Chapter(s) & Challenge(s)
You need to log in as a member and enroll in this course to participate in discussions.
Checklist and Surveys
In early 2018, AWEsome Numbers Inc. invited people to discover the four-part project to introduce, clarify and PROVE the value of the NEW process of Mathematically-OptimiZed Risk Evaluation©. See the steps below.
Step 1a: Take Survey #1 on Best Practice for Risk Management in Statistical QC. This survey presents 18 clear, specific recommendations from ISO, CLSI & CLIA.
Step 1b: Download Best Practice PDF
Step 2: Take Survey #2, which asks "Does It Make Sense to manage risk [this way]?" Survey 2 introduces you to the "DIMS" test (Does It Make Sense?). Unlike statistics, the logic and concepts of risk management make sense and are easy to teach and implement consistently. Try it! Look at QC differently!
Step 3: Watch the replay of the FREE webinar #1 held on January 16th, 2018. This interactive panel discussion presents results from both surveys and looks more closely at a potential practical application of these recommendations using validated software that applies the new automated process of Mathematically-OptimiZed Risk Evaluation©.
Step 4: Watch the replay of the FREE webinar #2 held on February 6, 2018: "Mini Risk Evaluation Program: PROVE the value with your QC data"
Overview of Mathematically-OptimiZed Risk Evaluation
Discussion Forum - LinkedIn Posts & Articles
Starting Point - 2001
BASIC TERMS AND CONCEPTS 2001
These are the fundamentals described in the 2001 textbook before the introduction of sigma or risk management. Webinar panelists and others will also serve as reviewers, editors and co-authors to rewrite these basics.
Compare this to the Glossary of Risk Management terms in the Risk Management Resources course.
Samples may come to the laboratory from patients, proficiency testing (PT) programs and external quality assessment schemes (EQAS), or manufacturers of quality control (QC) material. When we test a portion of the same QC material each day, we are measuring the same sample.
Samples are tested on an analytical system to produce results that indicate the amount of analyte present. The analyte is the specific substance we are interested in measuring.
In this book, the term PT sample will also apply to EQAS samples. PT samples are sent to the laboratory from external agencies and tested to reflect the laboratory’s performance with patient samples. Erroneous proficiency results indicate that our laboratory is incapable of meeting the accepted standard of performance and can ultimately lead to the loss of our laboratory’s license to perform that specific test or entire class of tests.
The analytical system includes the reagents, calibrators, instruments, disposables, and step-by-step processes needed to produce a result.
Reagents are chemical solutions that react in predictable ways to given analytes; the predicted reaction allows us to measure the amount of an analyte present in each sample.
Calibrators are materials that contain a known amount of analyte.
Instruments mix samples with reagents and compare the reaction of the patient, PT, or QC sample against the known calibrator values or predictable chemical activity of an analyte to produce a result for each sample.
Disposables include the items required to contain samples, deliver set volumes of samples and/or reagents and measure the chemical reaction.
Processes include all of the steps necessary to prepare samples for testing, prepare reagents and calibrators, set up and maintain instruments, combine samples with reagents, calculate the amount of analyte present, and report results.
Results of QC samples are analyzed to assess, and alert us to, changes in method performance. Because of inevitable minor changes in reagents, calibrators, instruments, disposables, and processes, these QC results show a predictable and expected random variation. Measured results on the same QC sample are not always the same; some are higher and some are lower than others due to random variations in the analytical system.
If we create a bar chart of our QC results with the frequency of results on the y-axis and value on the x-axis, we expect a symmetrical bell-shaped distribution of data, as shown in Figure I-1. This expected random variation has a defined mathematical relationship based on random distribution and is described as Gaussian distribution. This predictable pattern of data distribution is the foundation of statistical analysis in laboratory QC.
When analytical systems are stable, they do not experience significant change, and the QC results exhibit Gaussian distribution with the average value at the center. The dispersion of results around the average is determined by the inherent random variation of the test system. We calculate these values as the mean (average) and standard deviation (variation) to assess method accuracy and precision. An accurate method will produce results with a mean value close to the true or target value for the measured analyte.
Figure I-1. Gaussian distribution shows predictable distribution of data with approximately 68% of points within +/- 1 SD, 95% within +/- 2 SD and almost all data within +/- 3 SD.
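The 68/95/99.7 pattern described above can be checked numerically. A minimal sketch, simulating a stable method; the target value (100.0) and SD (2.0) are illustrative, not from the text:

```python
import random
import statistics

# Simulate QC results from a stable method: Gaussian variation around a
# target value. The target (100.0) and SD (2.0) are hypothetical.
random.seed(42)
target, sd = 100.0, 2.0
results = [random.gauss(target, sd) for _ in range(100_000)]

mean = statistics.mean(results)
stdev = statistics.stdev(results)

# Fraction of results within 1, 2 and 3 SD of the mean.
fracs = {k: sum(abs(r - mean) <= k * stdev for r in results) / len(results)
         for k in (1, 2, 3)}
for k, frac in fracs.items():
    print(f"within +/-{k} SD: {frac:.1%}")
```

With a large number of simulated results, the printed fractions land close to the 68%, 95% and 99.7% figures in the caption.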
The true or target value is the best estimate of the correct value for each control sample. The difference between the measured mean and the target is called bias. Method bias changes from time to time when we change reagents or calibrators, components of the instrument, disposables, or processes.
A precise method will show a relatively small standard deviation, or narrow dispersion, of results around the mean.
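The mean, SD and bias just described can be computed directly from a set of daily QC results. A minimal sketch; the QC values and target are hypothetical:

```python
import statistics

# Hypothetical daily QC results for one control material (units arbitrary).
# The target value would come from the manufacturer or a peer group.
qc_results = [4.1, 3.9, 4.0, 4.2, 3.8, 4.0, 4.1, 3.9, 4.3, 3.7]
target = 4.0

mean = statistics.mean(qc_results)   # accuracy: how close are we to target?
sd = statistics.stdev(qc_results)    # precision: dispersion around the mean
bias = mean - target                 # systematic difference from the target

print(f"mean = {mean:.2f}, SD = {sd:.3f}, bias = {bias:+.2f}")
```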
If a change occurs in the bias or precision of our analytical system, the pattern of data distribution changes from the expected Gaussian distribution observed prior to the change.
Daily QC encompasses preparing and handling QC samples, testing samples to produce QC results, and assessing individual QC results using QC charts and QC rules. In practice, these activities may be performed several times each day for some analytes, and only once or twice each week or month for others, depending on the frequency of testing, the expected error rate of the method, and the number of patient and PT samples tested.
Stability reflects the inherent error rate in an analytical process; it is a measure of the likelihood that a method will experience sudden changes in accuracy or precision that will adversely affect its ability to meet quality specifications. A method that demonstrates frequent significant changes is described as having a high error rate or low stability.
Each analysis of QC samples brackets an analytical run.
The result(s) of the QC sample(s) in each run reflect the accuracy and precision of the method at that time. Any changes in accuracy or precision will affect the QC samples and alert us to similar changes in patient and PT samples. Results from daily QC are plotted on a QC chart—an x–y plot with results from testing of QC samples on the y-axis, and the run number or date on the x-axis. The scale of the y-axis is usually set with the mean at the center and the minimum and maximum values often, but not always, defined by +/- 4 SD.
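The chart layout described above can be sketched as a text-mode plot in the Levey-Jennings style. The mean and SD would be established during method validation; all numbers here are hypothetical:

```python
# Text sketch of a QC chart: one row per run, with each result also
# expressed in SD units from the mean. Mean, SD and results are hypothetical.
mean, sd = 100.0, 0.8
qc = [100.2, 99.8, 100.5, 99.5, 100.1, 102.3, 99.9, 100.0]

lo, hi = mean - 4 * sd, mean + 4 * sd   # typical y-axis scale: mean +/- 4 SD
print(f"chart scale: {lo:.1f} to {hi:.1f}")

for run, value in enumerate(qc, start=1):
    z = (value - mean) / sd             # distance from the mean in SD units
    print(f"run {run}: {value:6.1f}  z = {z:+.2f}")
```

Run 6 in this made-up series sits nearly 3 SD above the mean, the kind of point the QC rules below are designed to flag.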
QC flags are events that indicate unexpected performance, usually triggered when QC results fall outside expected limits selected to meet the criteria of QC rules. QC rules are usually defined by the number of occurrences of QC values that vary from the mean by more than a defined number of SD. We select QC rules to maximize QC flags when changes in method accuracy or precision cause patient or PT results to exceed quality specifications, and minimize QC flags in the absence of such change.
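The rule logic above can be sketched in code. A minimal example using two common multirule checks in the Westgard naming convention (1-3s and 2-2s); the mean, SD and result series are hypothetical:

```python
# Two common QC rules (names follow the Westgard convention):
#   1-3s: one result more than 3 SD from the mean
#   2-2s: two consecutive results more than 2 SD on the same side of the mean
# The mean, SD and results below are hypothetical.
mean, sd = 100.0, 1.0

def qc_flags(results):
    """Return (index, rule) pairs for every QC flag in a result series."""
    z = [(r - mean) / sd for r in results]
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1-3s"))
        if i > 0 and abs(z[i - 1]) > 2 and abs(zi) > 2 and z[i - 1] * zi > 0:
            flags.append((i, "2-2s"))
    return flags

print(qc_flags([100.4, 99.1, 103.2, 100.2]))   # 1-3s flag at index 2
print(qc_flags([100.0, 102.3, 102.6, 99.8]))   # 2-2s flag at index 2
```

The two rules illustrate the trade-off described above: tighter limits catch real shifts sooner but raise more flags on a stable method.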
Summary statistics, the mean and SD, compare method accuracy and precision to the target value and total error allowable (TEa) for each QC sample to assess overall method performance. QC flags in reports of summary statistics alert us to changes in the overall accuracy or precision of a method.
Quality specifications are usually defined as a total error allowable (TEa); these limits specify the maximum acceptable variation of results from the target value.
A total error flag in summary statistics indicates that total error, the combined effect of method bias and precision, exceeds the acceptable variation defined by the total error allowable.
Critical systematic error (SEc) is a measure of the number of SD the mean can shift before values will exceed the TEa limit. SEc flags indicate that a method is close to its quality specification and requires close monitoring.
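The total error and SEc checks above can be made concrete in a short calculation. This sketch uses one common formulation (Westgard's critical systematic error, with a 1.65 SD allowance corresponding to 5% of results beyond the limit); all numbers are hypothetical:

```python
# Summary-statistic checks sketched with hypothetical numbers. TEa would
# come from a regulator (e.g. CLIA) or from clinical quality goals.
target = 100.0
tea = 10.0                     # total error allowable, in measurement units

mean, sd = 101.5, 2.0          # observed method performance
bias = abs(mean - target)

# Total error: combined effect of bias and imprecision
# (one common convention: bias plus 1.65 SD).
total_error = bias + 1.65 * sd
te_flag = total_error > tea

# Critical systematic error: how many SD the mean can still shift before
# more than 5% of results exceed the TEa limit.
sec = (tea - bias) / sd - 1.65

print(f"total error = {total_error:.2f} (flag: {te_flag})")
print(f"SEc = {sec:.2f} SD")
```

A small SEc means the method is already operating close to its quality specification and, as the text notes, requires close monitoring.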
Performance-Driven Quality Control helps us design QC systems to ensure that our laboratory meets the defined quality specification for each method.