Topic outline

  • Comments & Q&A

  • Glossary and key concepts of CLSI EP 23-A

    The glossary and key concepts of CLSI EP 23-A spell out a step-by-step process to manage and reduce patient risk.

    Reproduced with permission:

    Words in black and/or highlighted text are direct quotations from CLSI EP 23-A and IQCP documents (with permission).

    Blue text contains comments from Zoe Brooks for discussion forums.

     It is all Risk Management

     

    Foreword

    Although the manufacturer is responsible for quality in design of its measuring system and reagents, the laboratory and, ultimately, the laboratory director are accountable for the quality of test results. To establish effective quality control (QC), laboratories should process an array of information (regulatory requirements, manufacturer-provided information, the laboratory’s environment, and the medical applications of tests performed) through a risk assessment process. This process identifies potential weaknesses in the measuring system and environment that are weighed against the probability for error, the effectiveness of control processes built into the measuring system, and the laboratory’s assessment of risk in consideration of the clinical use of a laboratory result. This document provides guidance to laboratories for establishing a quality control plan (QCP). Once developed, the QCP is monitored for effectiveness and modified as unanticipated failure modes or underestimated risks of error are discovered or as particular control procedures are no longer required once sufficient objective data demonstrating reliable performance have been established. The advantages and limitations of a variety of QC measures are discussed to help the laboratory develop a QCP that is appropriate for its particular measuring system, laboratory, and clinical environment.     

     

    Compliance with EP23 may not satisfy the requirements of all regulatory, accreditation, or certification bodies. Laboratories need to comply with all applicable requirements in the development of their QCPs.

     

    Key Words: Quality control, risk assessment, risk management

    Laboratory Quality Control Based on Risk Management

    It is all Risk Management  

     

     

     

    1       Scope

     

    This document describes good laboratory practice for developing and maintaining a quality control plan (QCP) for medical laboratory testing using internationally recognized risk management principles. An individual QCP should be established, maintained, and modified as needed for each measuring system. The QCP is based on the performance required for the intended medical application of the test results. Risk mitigation information obtained from the manufacturer and identified by the laboratory, applicable regulatory and accreditation requirements, and the individual health care and laboratory setting are considered in development of the QCP. This document is intended to guide laboratories in determining quality control (QC) procedures that are both appropriate and effective for the test being performed.

    This document may not satisfy the requirements of all regulatory, accreditation, or certification bodies. Laboratories need to comply with all applicable requirements in the development of their QCPs.

     

    2       Introduction

    2.1     Quality Control Plan

    Health care providers need test results that are relevant, accurate, and reliable for patient care. A number of factors can adversely affect the quality of test results and present a risk of harm to the patient, from failures of the measuring system, to operator errors, to environmental conditions. Failure is used in this document in the context of risk management and means, in the broadest sense, a case when the system does not meet the user’s expectation. Failure includes the inability of a measurement process to perform its intended functions satisfactorily or within specified performance limits, errors of a measuring system that may produce an incorrect result, and incorrect use of a measuring system that may cause an incorrect result. Risk management is the systematic application of management policies, procedures, and practices to the tasks of analyzing, evaluating, controlling, and monitoring risk. QC in this document is defined as the set of operations, processes, and procedures designed to monitor the measuring system to ensure the results are reliable for the intended clinical use. QC in this context is broader than, although not necessarily exclusive of, the measurement of QC samples intended to simulate clinical patient samples.

     

    A QCP is a documented strategy to mitigate and prevent errors in testing that describes the practices, resources, and sequences of specified activities to control the quality of a particular measuring system or measurement process to ensure intended purposes are met. The laboratory establishes QCPs to prevent failures and to detect nonconformities that may occur before incorrect results are reported to health care providers and clinical action is taken.

    Development of a QCP requires an understanding of the preexamination (preanalytical), examination (analytical), and postexamination (postanalytical) processes, and identification of the weaknesses (potential failure modes) in these processes where failures can impact a given measuring system and potentially cause patient harm. Although this guideline addresses the examination phase, it is important to recognize that preexamination (preanalytical) and postexamination (postanalytical) processes are also important and may directly influence the acceptability of a measurement result. For example, sample collection, transport, and handling may contribute to the acceptability of a reported result.

     

    The laboratory should manage risk by implementing QCPs that serve to ensure test result quality is appropriate for clinical use of the information by:

    1) Monitoring the testing process for the occurrence of errors

    2) Introducing control procedures to mitigate the occurrence of errors

     

    Given the variety of testing performed in a typical health care facility, with particular measuring systems, examination (analytical) procedures, laboratory environments, and clinical applications, laboratories need guidance to determine effective combinations of control strategies to achieve reliable test results. This document discusses some of the QC tools available to the laboratory and summarizes their advantages and limitations. It describes an approach to develop a QCP that involves 1) collecting the necessary information from manufacturers, literature, regulatory and accreditation agencies, the laboratory’s particular environment, and the clinical application of test results; 2) conducting a risk assessment; and 3) identifying effective control measures to reduce risk.

    In the risk management process, attempts are first made to identify and eliminate the causes of potential process and system failures before implementing measures to detect failures and/or their effects (eg, incorrect test results). Activities to monitor ongoing performance are directed toward the identification of unpredicted events that cause risks, modification of the QCP, and continual improvement (CI). Figure 1 depicts schematically the inputs needed to develop and continually improve a QCP.

    Flow chart of Risk Management

     

    Figure 1. Process to Develop and Continually Improve a QCP. (The terms corrective and preventive action and continual improvement are referred to as CAPA and CI, respectively, in risk management literature.)

     

    2.2   Risk Management

    Application of risk management to the entire life cycle of a laboratory measuring system is described for manufacturers in ISO 14971.[i] The principles described are adapted in this document for use by laboratories to develop a QCP for measuring systems currently in use or introduced in a health care setting.

     

    Risk assessment, the central component of the overall risk management process, is based on the analysis (identifying hazards and estimating the probability and severity of harm) and evaluation of the risks that can result from a measuring system failure, as shown in the following flow chart in Figure 2.

     Life Cycle of Risk Management

    Figure 2. Risk Management Process

     

    During the risk analysis process, each laboratory should consider how built-in and laboratory-applied control procedures for a measuring system reduce the risk of harm from an erroneous result, a delayed result, or the nondelivery of a result. Analysis of the risk of harm should take into consideration information on the measuring system (see Section 6.2), information about the laboratory (see Section 6.3), and the performance required for the medical applications of the test results (see Section 6.4). Risk estimation is the combination of the probability of occurrence of harm and the severity of that harm. Risk estimation should take into account the hazardous situations that could occur from incorrect test results or delays in treatment.
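    For discussion: risk estimation is often summarized as a matrix that maps the estimated probability of occurrence of harm against its severity, with each cell judged acceptable or unacceptable. The Python sketch below is an illustrative acceptability matrix only; the category names and the acceptable/unacceptable boundaries are assumptions for discussion, not the matrix published in CLSI EP 23-A or ISO 14971.

# Illustrative risk acceptability matrix (assumed categories and boundaries,
# not the EP23-A matrix): combine the probability of occurrence of harm and
# the severity of that harm into an acceptability judgment.
PROBABILITY = ["frequent", "probable", "occasional", "remote", "improbable"]
SEVERITY = ["negligible", "minor", "serious", "critical", "catastrophic"]

# Rows follow PROBABILITY (most to least likely); columns follow SEVERITY
# (least to most severe).  True means the risk is judged acceptable here.
ACCEPTABLE = [
    [True,  False, False, False, False],   # frequent
    [True,  True,  False, False, False],   # probable
    [True,  True,  True,  False, False],   # occasional
    [True,  True,  True,  True,  False],   # remote
    [True,  True,  True,  True,  True],    # improbable
]

def risk_acceptable(probability, severity):
    """Look up the acceptability of one hazardous situation."""
    return ACCEPTABLE[PROBABILITY.index(probability)][SEVERITY.index(severity)]

print(risk_acceptable("occasional", "serious"))   # True in this sketch
print(risk_acceptable("probable", "critical"))    # False: add control measures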

     

    Residual risk is that risk which remains after all control measures have been implemented to prevent and/or mitigate those adverse events that cannot be avoided by improvement of the measuring system, its components, processes, or personnel competencies. A determination of the acceptability of this residual risk of harm to the patient is made for the specific clinical application for which a test result is to be used. This determination is based on an evaluation of the potential costs both in terms of the patient’s well-being and in terms of financial liability of the treating parties vs known benefits to the patient. All laboratory tests have some residual risk associated with them. Therefore, if the known residual risk is found to be unacceptable, then a determination should be made as to the feasibility of adding more risk control measures vs avoiding the risk by not implementing the test or the particular measuring system being considered.

     

    A QCP is developed to reduce residual risk (see Section 7.4), first by measures that prevent failures from occurring, followed by methods that detect failures in time to prevent harm.

     

    The final QCP is the aggregate of all laboratory-applied control procedures required to achieve a clinically acceptable risk of harm to a patient. The final QCP should also comply with all regulatory and accreditation requirements. 

     

    Once the QCP is implemented, any future incidences of measuring system failure are investigated in order to determine the sources of failure and whether modification of the QCP is required (see Section 8). If a new hazard is identified or if the severity or probability of harm is greater than anticipated, the output of risk monitoring feeds back to the appropriate step in the risk analysis process, and risk control procedures are revised to reduce the risk.

     

    The laboratory is ultimately responsible for ensuring that the testing processes and the QCPs have the capability to provide the analytical quality of results required for patient care. 

     

    3       Standard Precautions

    4       Terminology

    4.1     A Note on Terminology

    CLSI, as a global leader in standardization, is firmly committed to achieving global harmonization whenever possible. Harmonization is a process of recognizing, understanding, and explaining differences while taking steps to achieve worldwide uniformity. CLSI recognizes that medical conventions in the global metrological community have evolved differently in the United States, Europe, and elsewhere; that these differences are reflected in CLSI, International Organization for Standardization (ISO), and European Committee for Standardization (CEN) documents; and that legally required use of terms, regional usage, and different consensus timelines are all important considerations in the harmonization process. In light of this, CLSI’s consensus process for development and revision of standards and guidelines focuses on harmonization of terms to facilitate the global application of standards and guidelines.

    To align the usage of terminology in this document with that of ISO, the term accuracy, in its metrological sense, refers to closeness of agreement between a measured quantity value and a true quantity value of a measurand, and comprises both random and systematic effects applied to one result. Trueness is used in this document when referring to the “closeness of the agreement between the average of an infinite number of replicate measured quantity values and a reference quantity value”; the measurement of trueness is usually expressed in terms of bias, which is recognized as the estimate of a systematic error.

    Precision is defined as the “closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions.” As such, it cannot have a numerical value, but may be determined qualitatively as high, medium, or low. For its numerical expression, the term imprecision is used, which is the “dispersion of a set of replicate measurements or values expressed quantitatively by a statistic, such as standard deviation or coefficient of variation.” In addition, different components of precision are defined in EP23, primarily repeatability, ie, “measurement precision under a set of repeatability conditions of measurement,” whereas reproducibility describes the condition of measurement, out of a set of conditions that includes different locations, operators, measuring systems, and replicate measurements on the same or similar objects.

    4.2     Definitions

    acceptable risk – a state achieved in a measuring system where all known potential events have a degree of likelihood for or a level of severity of an adverse outcome small enough such that, when balanced against all known benefits—perceived or real—patients, physicians, institutions, and society are willing to risk the consequences.

    accuracy (of measurement) – closeness of agreement between a measured quantity value and a true quantity value of a measurand (ISO/IEC Guide 99).[ii]

    analyte – component represented in the name of a measurable quantity (ISO 17511)[iii]; NOTE: In the type of quantity “mass of protein in 24-hour urine,” “protein” is the analyte. In “amount of substance of glucose in plasma,” “glucose” is the analyte. In both cases, the long phrase represents the measurand (ISO 17511).iii

    bias (of measurement) – estimate of a systematic measurement error (ISO/IEC Guide 99).ii

    built-in – anything incorporated into the measuring system by the manufacturer.


    commutability (of a reference material) – property of a reference material, demonstrated by the closeness of agreement between the relation among the measurement results for a stated quantity in this material, obtained according to two given measurement procedures, and the relation obtained among the measurement results for other specified materials (ISO/IEC Guide 99)ii; NOTE: Commutability is a property of a reference material, demonstrated by the equivalence of the mathematical relationships among the results of different measurement procedures for a reference material and for representative samples of the type intended to be measured.

     

    continual improvement (CI) – recurring activity to increase the ability to fulfill requirements (ISO 9000 [3.2.13])[iv]; NOTE 1: Also known as continuous improvement; NOTE 2: Includes the actions taken throughout an organization to increase the effectiveness and efficiency of activities and processes in order to provide added benefits to the customer and organization.[v]

     

    control point – a point, step, or procedure in a process at which a control can be applied and, as a result, a hazard can be prevented, eliminated, or reduced.

     

    corrective action – action to eliminate the cause of a detected nonconformity or other undesirable situation (ISO 9000 [3.6.5])iv; NOTE 1: There can be more than one cause for a nonconformity; NOTE 2: Corrective action is taken to prevent recurrence, whereas preventive action (ISO 9000 [3.6.4])iv is taken to prevent occurrence.

     

    electronic control – control procedure or algorithm that checks the electronics, software, or other components or procedures of a diagnostic measuring system via electronic circuits or software logic.

     

    environmental factors – conditions that may affect the analysis that include, but are not limited to, temperature, airflow, humidity, barometric pressure, light, power supply, vibration, electromagnetic radiation, and water.

     

    error – a deviation from truth, accuracy, or correctness; a mistake.

     

    examination – set of operations having the object of determining the value or characteristics of a property (ISO 15189)[vi]; test procedure or measurement procedure.

     

    failure – in the broadest sense, a case when the system does not meet the user’s expectation; NOTE 1: This includes the inability to perform its intended functions satisfactorily or within specified performance limits; NOTE 2: Errors of measurement and errors of use are subsets of failures.

     

    failure mode – manner by which a failure is observed; generally describes the way the failure occurs and its impact on equipment operation.[vii]

     

    fault – state of an item, characterized by the inability to perform a required function, excluding inabilities due to preventive maintenance, other planned actions, or lack of external resources.[viii]

     

    fishbone diagram – diagram that shows the causes of a certain event; NOTE: Common uses of the diagram are product design and quality defect prevention, to identify potential factors causing an overall effect.

     

    harm – physical injury or damage to the health of people, or damage to property or the environment (ISO/IEC Guide 51)[ix]; NOTE: In this guideline, damage to property or the environment is considered harmful only if that damage directly harms people.

     

    hazard – potential source of harm (ISO/IEC Guide 51).ix

     

    imprecision – the random dispersion of a set of replicate measurements, values, or both expressed quantitatively by a statistic, such as standard deviation or coefficient of variation; NOTE: It is defined in terms of repeatability and reproducibility. See also precision.

     

    in vitro diagnostic medical device – a device, whether used alone or in combination, intended by the manufacturer for the in vitro examination of specimens derived from the human body solely to provide information for diagnostic, monitoring, or compatibility purposes. This includes reagents, calibrators, control materials, specimen receptacles, software, and related instruments or apparatus or other articles (GHTF/SG1/N045:2008).[x]

     

    incorrect result – result that does not meet the requirements for its intended medical use; NOTE 1: In the case of quantitative test procedures, a result with a failure of measurement that exceeds a limit based on medical utility; NOTE 2: In the case of qualitative test procedures, a result that is contrary to a true value of the measurand.

     

    instructions for use – information supplied by the manufacturer with an in vitro diagnostic medical device concerning the safe and proper use of the reagent(s) or the safe and correct operation, maintenance, and basic troubleshooting of the instrument (ISO 15197).[xi]

     

    laboratory director – competent person(s) with responsibility for, and authority over, a laboratory (ISO 15189)vi; NOTE 1: For the purposes of ISO 15189,vi the person or persons referred to are designated collectively as laboratory director; NOTE 2: National, regional, and local regulations may apply with regard to qualifications and training (ISO 15189).vi

     

    matrix (of a material system) – totality of components of a material system except the analyte (ISO 17511).iii

     

    matrix effect – influence of a property of the sample, other than the measurand, on the measurement of the measurand according to a specified measurement procedure and thereby on its measured value (ISO 17511).iii

     

    measurand – quantity intended to be measured (ISO/IEC Guide 99)ii; NOTE 1: The specification of a measurand requires knowledge of the kind of quantity, description of the state of the phenomenon, body, or substance carrying the quantity, including any relevant component, and the chemical entities involved (ISO/IEC Guide 99)ii; NOTE 2: A measurand can refer to an analyte concentration, a clotting time, an enzyme activity, an epitope, etc., in a particular sample type.

     

    measurement – process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity (ISO/IEC Guide 99).ii

     

    measuring system – set of one or more measuring instruments and often other devices, including any reagent and supply, assembled and adapted to give information used to generate measured quantity values within specified intervals for quantities of specified kinds (ISO/IEC Guide 99)ii;
    NOTE: A measuring system may consist of only one measuring instrument (ISO/IEC Guide 99).ii

     

    middleware – software and hardware inserted between instrument(s) and/or automation line(s) and the laboratory information system to facilitate the management of the instrument, test requests, validation of results, and reporting.[xii]

     

    mitigation – an action to lower or eliminate the risk associated with an adverse situation or to prevent the occurrence of future errors.

                  

    precision (measurement) – closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions (ISO/IEC Guide 99)ii; NOTE 1: Measurement precision is usually expressed numerically by measures of imprecision, such as standard deviation, variance, or coefficient of variation under the specified conditions of measurement (ISO/IEC Guide 99)ii; NOTE 2: The ‘specified conditions’ can be, for example, repeatability conditions of measurement, intermediate precision conditions of measurement, or reproducibility conditions of measurement (see ISO 5725-3:1994)[xiii] (ISO/IEC Guide 99)ii; NOTE 3: Measurement precision is used to define measurement repeatability, intermediate measurement precision, and measurement reproducibility (ISO/IEC Guide 99).ii

     

    preventive action – action to eliminate the cause of a potential nonconformity or any other undesirable potential situation (ISO 9000 [3.6.4])iv; NOTE 1: There can be more than one cause for a potential nonconformity; NOTE 2: Preventive action is taken to prevent occurrence, whereas corrective action (ISO 9000 [3.6.5])iv is taken to prevent recurrence.

     

    process mapping – graphical descriptions of processes that include detailed flow charts, workflow diagrams, and value stream maps.

     

    quality – degree to which a set of inherent characteristics fulfills requirements (ISO 9000).iv

     

    quality assurance – part of quality management focused on providing confidence that quality requirements will be fulfilled (ISO 9000).iv

     

    quality control (QC) – part of quality management focused on fulfilling quality requirements (ISO 9000)iv; NOTE 1: The set of mechanisms, processes, and procedures designed to monitor the measuring system to ensure the results are reliable for the intended clinical use; NOTE 2: This includes the operational techniques and activities that are used to fulfill requirements for quality; NOTE 3: In clinical laboratory testing, QC includes the procedures intended to monitor the performance of a test procedure to ensure reliable results.

     

    quality control plan (QCP) – a document that describes the practices, resources, and sequences of specified activities to control the quality of a particular measuring system or test process to ensure requirements for its intended purpose are met.

     

    quality control sample – a stable sample designed to simulate a patient sample.

     

     

    quality management system (QMS) – management system to direct and control an organization with regard to quality (ISO 9000)iv; NOTE 1: Systematic and process-oriented efforts are essential to meet quality objectives; NOTE 2: For the purposes of ISO 15189, the “quality” referred to in this definition relates to matters of both management and technical competence (ISO 15189)vi; NOTE 3: A QMS typically includes the organizational structure, resources, processes, and procedures needed to implement quality management; NOTE 4: These principles include the following categories: Documents and Records, Organization, Personnel, Equipment, Purchasing and Inventory, Process Control, Information Management, Occurrence Management, Assessments—External and Internal, Process Improvement, Customer Service, and Facilities and Safety.

     

    reliability – probability that an item will perform its required function under given conditions for a stated time interval.

     

    repeatability (measurement) – measurement precision (closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions) under a set of repeatability conditions of measurement (condition of measurement, out of a set of conditions that includes the same measurement procedure, same operators, same measuring system, same operating conditions, and same location, and replicate measurements on the same or similar objects over a short period of time) (ISO/IEC Guide 99).ii

     

    reproducibility (measurement) – measurement precision (closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under specified conditions) under reproducibility conditions of measurement (condition of measurement, out of a set of conditions that includes different locations, operators, measuring systems, and replicate measurements on the same or similar objects) (ISO/IEC Guide 99).ii

     

    residual risk – risk remaining after risk control measures have been taken (ISO 14971).i

     

    risk – combination of the probability of occurrence of harm and the severity of that harm (ISO/IEC Guide 51).ix

    risk analysis – systematic use of available information to identify hazards and to estimate the risk (ISO/IEC Guide 51)ix; NOTE: Risk analysis includes examination of different sequences of events that can produce hazardous situations and harm (ISO 15189).vi

    risk assessment – overall process comprising a risk analysis and a risk evaluation (ISO/IEC Guide 51).ix

    risk estimation – process used to assign values to the probability of occurrence of harm and the severity of that harm (ISO 14971).i

    risk evaluation – process of comparing the estimated risk against given risk criteria to determine the acceptability of the risk (ISO 14971).i

    risk management – systematic application of management policies, procedures, and practices to the tasks of analyzing, evaluating, controlling, and monitoring risk (ISO 14971).i

     

    sample – one or more parts taken from a system, and intended to provide information on the system, often to serve as a basis for decision on the system or its production (ISO 15189)vi; EXAMPLE: A portion of serum taken from a specimen of coagulated blood.  

     

    severity – measure of the possible consequences of a hazard (ISO 14971).i 

     

    specimen (patient) – discrete portion of a body fluid or tissue taken for examination, study, or analysis of one or more quantities or characteristics, to determine the character of the whole; NOTE: In some countries, the term “specimen” may be used to mean a sample of biological origin intended for examination by a medical laboratory.

     

    stability – the ability of an in vitro diagnostic (IVD) reagent to maintain consistent performance characteristics over time; NOTE: Stability applies to IVD reagents, calibrators, and controls, when stored, transported, and used in the conditions specified by the manufacturer; reconstituted lyophilized materials, working solutions, and materials removed from sealed containers, when prepared, used, and stored according to the manufacturer’s instructions for use; and measuring instruments or measuring systems after calibration.

     

    system – set of interrelated or interacting elements (ISO 9000).iv

     

    systematic error (of measurement) – component of measurement error that in replicate measurements remains constant or varies in a predictable manner (ISO/IEC Guide 99)ii; NOTE 1: A reference quantity value for a systematic measurement error is a true quantity value, or a measured quantity value of a measurement standard of negligible measurement uncertainty, or a conventional quantity value (ISO/IEC Guide 99)ii; NOTE 2: Systematic measurement error, and its causes, can be known or unknown. A correction can be applied to compensate for a known systematic measurement error (ISO/IEC Guide 99)ii; NOTE 3: Systematic measurement error equals measurement error minus random measurement error (ISO/IEC Guide 99).ii

     

    test – determination of one or more characteristics according to a procedure (ISO 9000).iv

     

    trueness (measurement) – closeness of agreement between the average of an infinite number of replicate measured quantity values and a reference quantity value (ISO/IEC Guide 99)ii; NOTE: Trueness is usually expressed numerically by the statistical measure bias, which is inversely related to trueness.

     

    trueness control material – a reference material with measurand value assigned by a procedure traceable to a higher order reference system and with commutability properties suitable for use to assess the bias of measurement of a specified measurement procedure (modified from ISO 17511).iii

     

    unassayed control material – control material that has no assigned analyte values provided by the manufacturer. 

     

    use error – an error whose root cause derives from an aberration in the process.

     

    user – the laboratory or person using the measuring system.

     

    value stream mapping – a lean manufacturing technique used to analyze the flow of materials and information currently required to bring a product or service to a consumer.

     

    4.3     Analysis of Quality Control Samples

     

    4.3.1 Intralaboratory Quality Control

    Historically, statistical process control of measuring systems has involved the periodic measurement of stable QC materials, designed to mimic as much as possible the analytical behavior of patient samples. Depending on the number and frequency of measuring such QC samples, and the statistical limits set for allowable result variability, the use of appropriate QC materials can provide an effective strategy for detecting clinically significant changes in the quality of results produced, particularly in relation to bias and imprecision. 

     

    The QC samples included during routine testing are subjected to as much of the total measuring system as possible. The principle of testing QC samples is that measuring system failures or errors and use errors that would negatively influence the testing of patient samples also affect the results obtained with the QC samples for many (but importantly, not all) failure modes.

    The use of QC samples to monitor a measuring system involves establishing the mean and standard deviation for the specific QC sample lot and determining statistical limits that will identify unacceptable changes in performance of the measuring system. The QC strategy using QC samples should include the following for each measuring system:

    • The frequency of QC sample test events
    • The type and number of QC samples tested per test event
    • The statistical QC limits used to evaluate the results
    • The frequency of periodic review for detecting shifts and trends
    • The actions taken when results exceed acceptable limits

    In selecting an appropriate QC sample testing frequency, laboratories should consider that any systematic (ie, persistent, nontransient) errors that occur after a QC sample is tested may remain undetected until the next control event. Furthermore, when an error is detected, it may not be known when the error occurred in the time interval since the last QC samples were tested. The clinical use of the test results (eg, the impact of errors on patient care), the stability of the analyte or measurand, stability of the examination (analytical) process, the number of patient samples processed between QC events, and frequency of calibration will influence the maximum interval between control events. 
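    As a discussion aid (not part of the EP23 text), the minimal Python sketch below shows the mechanics described above: a mean and standard deviation are established from baseline results for a specific QC sample lot, and each new QC result is then evaluated against statistical limits. The baseline values and the 2 SD / 3 SD limits are illustrative assumptions only; laboratories set their own limits and rules.

# Minimal sketch of intralaboratory QC with QC samples: establish the mean
# and SD for a QC sample lot, then flag new results against statistical
# limits.  All numbers are illustrative.
from statistics import mean, stdev

baseline = [4.01, 3.98, 4.05, 4.02, 3.97, 4.04, 4.00, 3.99, 4.03, 3.96,
            4.02, 4.01, 3.98, 4.06, 4.00, 3.99, 4.03, 4.02, 3.97, 4.01]

qc_mean = mean(baseline)
qc_sd = stdev(baseline)          # sample standard deviation

def evaluate(result, m=qc_mean, s=qc_sd):
    """Classify one QC result against 2 SD / 3 SD statistical limits."""
    z = (result - m) / s
    if abs(z) > 3:
        return "reject"          # outside 3 SD: stop and investigate
    if abs(z) > 2:
        return "warning"         # outside 2 SD: review for shifts or trends
    return "accept"

for r in (4.02, 4.07, 3.90):
    print(f"QC result {r}: {evaluate(r)} (mean={qc_mean:.3f}, SD={qc_sd:.3f})")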

     

    If a number of laboratories use the same measuring system and the same lot of QC samples, the results obtained may be pooled to provide peer group comparisons between laboratories. See Section 5.1.2 for a discussion of interlaboratory comparisons.

    In evaluating the effectiveness of QC samples, consider the following:

     

    • Suitable QC samples are capable of monitoring only the part of the measuring system in which the samples are used. Other steps in the total measuring system may not be challenged by the control procedure (eg, phlebotomy and sample collection).

     

    • QC samples may not mimic patient samples in all properties. Matrix effects may exist due to noncommutability of a QC sample with clinical samples, which could cause incorrect inferences about the measuring system results for patient samples, eg, QC results following a reagent lot change may not mirror the performance with patient samples.[xiv] Also, QC samples may not mimic patient samples in other respects, such as ability to form clots, create sample bubbles, or challenge measurement selectivity by containing interfering substances.

     

        

    • Sufficient QC samples with a long shelf life are needed to avoid frequent crossover evaluations between lots. Adequate stability once the container is opened should also be considered. Measurand (analyte) instability is a potential source of variability that can confound interpretation of a QC result.

     

    • If an assay is sensitive to contamination or degradation, eg, molecular testing, it may be appropriate to consider one-time-use vials of control material.

     

    Information to establish and maintain an effective QC strategy for quantitative tests using QC samples is contained in CLSI document C24.[xv] 

     

    Qualitative and semiquantitative measuring systems that produce numerical values can also be monitored by statistical process control. For qualitative and semiquantitative measuring systems that do not produce numerical results, QC samples with known measurand (analyte) values can be used to verify method performance.

     

    In addition, undetected bias and excessive imprecision are risks. The number and frequency of QC sample procedures should be designed to give a high probability of detecting a stated medically allowable error. At a minimum, the ability of the QC procedures to detect the medically allowable error should be evaluated.
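    One way to evaluate that ability, sketched below under simple assumptions (normally distributed QC results and a single rejection rule of "any control outside 3 SD"), is to estimate the probability that a QC event detects a systematic shift of a given size. The shift sizes, the rule, and the number of controls per event are illustrative assumptions, not EP23 requirements.

# Hedged illustration: probability that an "any control outside +/-3 SD" rule
# detects a systematic shift of a given size (in SD units), assuming normally
# distributed QC results.
from math import erf, sqrt

def norm_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def p_detect(shift_sd, n_controls, limit_sd=3.0):
    """Probability that at least one of n_controls exceeds the limit after a
    persistent shift of shift_sd standard deviations."""
    p_single = (1 - norm_cdf(limit_sd - shift_sd)) + norm_cdf(-limit_sd - shift_sd)
    return 1 - (1 - p_single) ** n_controls

for shift in (0.0, 2.0, 3.0, 4.0):      # shift = 0 gives the false rejection rate
    print(f"shift = {shift} SD: probability of detection = {p_detect(shift, 2):.3f}")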

     

     

    4.3.2 Interlaboratory Quality Control

     

    Additional information about the consistency and reliability of QC test results is obtained when the same lot of QC sample is tested by the same measuring system in multiple laboratories. Statistical analysis of the QC test results is used to determine target values and QC limits. Examples include commercial QC samples with values assigned from “peer group” data from participating laboratories. Pooled patient samples can also be used as a QC sample and shared among laboratories to establish consensus target values.

     

    An effective QC sample strategy generally combines intralaboratory QC to monitor for day-to-day changes with interlaboratory QC to verify that the test results remain consistent. The same limitations associated with using QC samples (eg, stability, matrix effects, measurand [analyte] availability) apply. 

     

    Before Internet reporting of interlaboratory summaries, interlaboratory comparisons could be used only for retrospective checks of performance stability. However, Internet-based interlaboratory programs now provide timelier data analysis and feedback.

     

    4.3.3 Trueness Controls

     

    Target values may also be assigned to reference (or QC sample) materials by reference measurement laboratories using certified primary reference measurement procedures. These control samples may be used to verify the trueness of a laboratory’s measuring system, as long as their commutability with patient samples has been validated for a given measurement procedure. Trueness controls should also have product labeling that states the materials are intended for trueness of measurement, and identifies the measuring systems for which the commutability was validated.

    Trueness controls are generally too expensive for routine use in QC, but they are invaluable for verifying that a measuring system is properly calibrated when it is first implemented or periodically thereafter. Trueness controls are also useful for routine verification of calibration or troubleshooting when the accuracy of patient results is suspect.

     

    4.3.4   Control Materials With Assigned Values (Assayed Quality Control Samples)

    Measuring system-specific values can also be assigned to QC samples as target values. Such materials are often called “assayed controls,” and the values are assigned by the QC material manufacturer or by laboratories in a value assignment program using a given measuring system. These materials are intended to give individual laboratories a means of determining whether their performance is as expected for a given measuring system. The usefulness of these system-specific values depends on the traceability and uncertainty of the assigned values.

     

    4.3.5   Control Materials Without Assigned Values (Unassayed Quality Control Samples)

     

    Unassayed control material is widely used in clinical laboratories. It is generally less expensive than assayed control material and is used to evaluate accuracy and precision. Unassayed QC material has no assigned analyte values provided by the manufacturer and is not linked to specific assay/measuring systems. The end user assigns expected results to unassayed QC material. The manufacturer may indicate whether a specific analyte is present or absent in the QC material preparation without indicating an expected assay result. 

     

    4.3.6   Frequency of Quality Control Sample Testing

    The optimal frequency of performing QC sample procedures depends on built-in and other controls for a given measuring system, the stability of that measuring system, conditions in the laboratory identified through risk assessment that could affect the reliability of testing (such as staff turnover), and the clinical risk of harm to a patient if an erroneous result is reported and acted on. The frequency of control procedures should also conform to applicable regulatory and accreditation requirements.

    Monitoring the measuring system at shorter intervals increases the likelihood that systematic errors are detected before incorrect results are reported, or decreases the time before alerting the health care provider who may have received incorrect results. For example, a laboratory that evaluates the examination (analytical) process every eight hours will identify a systematic error condition much earlier than a laboratory that monitors the examination process every 24 hours. However, the total number of specimens tested in a time interval may also influence the frequency of monitoring. For example, a laboratory that tests 2000 samples in one 24-hour period might perform control procedures several times a day, whereas a laboratory that tests 50 samples in an eight-hour shift might perform control procedures at the beginning and end of a shift. The complex relationship between the frequency of QC sample testing, frequency of false rejection, and the quality of patient results has been explored by Parvin and colleagues.[xvi],[xvii]
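    A rough back-of-the-envelope framing of this trade-off (it is not the Parvin model itself) is to count how many patient results are reported between QC events; those are the results that may need review or correction if a persistent error began just after the last acceptable QC event. The workloads below restate the illustrative examples from the paragraph above.

# Rough arithmetic sketch: patient results reported between QC events for the
# illustrative workloads mentioned above.
workloads = {
    "2000 samples / 24 h, QC every 8 h": (2000 / 24, 8),
    "2000 samples / 24 h, QC every 24 h": (2000 / 24, 24),
    "50 samples / 8 h, QC at start and end of shift": (50 / 8, 8),
}

for label, (samples_per_hour, qc_interval_h) in workloads.items():
    at_risk = samples_per_hour * qc_interval_h
    print(f"{label}: up to ~{at_risk:.0f} results between QC events")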

     ----------------------------


    [i] ISO. Medical devices – Application of risk management to medical devices. ISO 14971. Geneva, Switzerland: International Organization for Standardization; 2007.

    [ii] ISO. International vocabulary of metrology – Basic and general concepts and associated terms (VIM). ISO/IEC Guide 99. Geneva, Switzerland: International Organization for Standardization; 2007.

    [iii] ISO. In vitro diagnostic medical devices – Measurement of quantities in biological samples – Metrological traceability of values assigned to calibrators and control materials. ISO 17511. Geneva, Switzerland: International Organization for Standardization; 2003.

    [iv] ISO. Quality management systems – Fundamentals and vocabulary. ISO 9000. Geneva, Switzerland: International Organization for Standardization; 2005.

    [v] Westcott RT. The Certified Manager of Quality/Organizational Excellence Handbook. 3rd ed. Milwaukee, WI: ASQ Quality Press; 2006.

    [vi] ISO. Medical laboratories – Particular requirements for quality and competence. ISO 15189. Geneva, Switzerland: International Organization for Standardization; 2007.

    [vii] US Department of Defense. Procedures for Performing a Failure Mode, Effects and Criticality Analysis. Definition of failure mode. MIL-STD-1629A, definition 3.1.14. http://www.goes-r.gov/procurement/antenna_docs/reference/MIL-STD-1629A.pdf. Accessed February 25, 2011.

    [viii] International Electrotechnical Commission. Dependability and quality of service. Definition of fault. IEV definition 191-05-01. http://www.electropedia.org/iev/iev.nsf/display?openform&ievref=191-05-01. Accessed February 25, 2011.

    [ix] ISO/IEC. Safety aspects – Guidelines for their inclusion in standards. ISO/IEC Guide 51. Geneva, Switzerland: International Organization for Standardization; 1999.

    [x] Global Harmonization Task Force. Principles of In Vitro Diagnostic (IVD) Medical Devices Classification. GHTF/SG1/N045:2008. Accessed February 25, 2011.

    [xi] ISO. In vitro diagnostic test systems – Requirements for blood-glucose monitoring systems for self-testing in managing diabetes mellitus. ISO 15197. Geneva, Switzerland: International Organization for Standardization; 2003.

    [xii] Chou D. Laboratory information systems. In: Kaplan LA, Pesce AJ, eds. Clinical Chemistry: Theory, Analysis, Correlation. 5th ed. New York, NY: Mosby; 2010:395.

    [xiii] ISO. Accuracy (trueness and precision) of measurement methods and results – Part 3: Intermediate measures of the precision of a standard measurement method. ISO 5725-3. Geneva, Switzerland: International Organization for Standardization; 1994.

    [xiv] Miller WG, Erek A, Cunningham TD, Oladipo O, Scott MG, Johnson RE. Commutability limitations influence quality control results with different reagent lots. Clin Chem. 2011;57(1):76-83.

    [xv] CLSI. Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions; Approved Guideline—Third Edition. CLSI document C24-A3. Wayne, PA: Clinical and Laboratory Standards Institute; 2006.

    [xvi] Parvin CA. Assessing the impact of the frequency of quality control testing on the quality of reported patient results. Clin Chem. 2008;54(12):2049-2054.

    [xvii] Yundt-Pacheco J, Parvin CA. The impact of QC frequency on patient results. MLO Med Lab Obs. 2008;40(9):26-27.

    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------

    What does CLIA require?   Code of Federal Regulations
    http://www.gpo.gov/fdsys/pkg/CFR-2010-title42-vol5/xml/CFR-2010-title42-vol5-sec493-1256.xml

      Title 42 - Public Health. Volume 5. Date: 2010-10-01 (original date: 2010-10-01). Section 493.1256 - Standard: Control procedures.
      Context: Title 42 - Public Health. CHAPTER IV - CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED). SUBCHAPTER G - STANDARDS AND CERTIFICATION. PART 493 - LABORATORY REQUIREMENTS. Subpart K - Quality System for Nonwaived Testing. Analytic Systems.

         § 493.1256

        Standard: Control procedures.

        (a) For each test system, the laboratory is responsible for having control procedures that monitor the accuracy and precision of the complete analytic process.

         (b) The laboratory must establish the number, type, and frequency of testing control materials using, if applicable, the performance specifications verified or established by the laboratory as specified in § 493.1253(b)(3).

        (c) The control procedures must—

        (1) Detect immediate errors that occur due to test system failure, adverse environmental conditions, and operator performance.

        (2) Monitor over time the accuracy and precision of test performance that may be influenced by changes in test system performance and environmental conditions, and variance in operator performance.

        (d) Unless CMS approves a procedure, specified in Appendix C of the State Operations Manual (CMS Pub. 7), that provides equivalent quality testing, the laboratory must—

        (1) Perform control procedures as defined in this section unless otherwise specified in the additional specialty and subspecialty requirements at §§ 493.1261 through 493.1278.

        (2) For each test system, perform control procedures using the number and frequency specified by the manufacturer or established by the laboratory when they meet or exceed the requirements in paragraph (d)(3) of this section.

        (3) At least once each day patient specimens are assayed or examined perform the following for—

        (i) Each quantitative procedure, include two control materials of different concentrations;

        (ii) Each qualitative procedure, include a negative and positive control material;

        (iii) Test procedures producing graded or titered results, include a negative control material and a control material with graded or titered reactivity, respectively;

        (iv) Each test system that has an extraction phase, include two control materials, including one that is capable of detecting errors in the extraction process; and

        (v) Each molecular amplification procedure, include two control materials and, if reaction inhibition is a significant source of false negative results, a control material capable of detecting the inhibition.

        (4) For thin layer chromatography—

        (i) Spot each plate or card, as applicable, with a calibrator containing all known substances or drug groups, as appropriate, which are identified by thin layer chromatography and reported by the laboratory; and

        (ii) Include at least one control material on each plate or card, as applicable, which must be processed through each step of patient testing, including extraction processes.

        (5) For each electrophoretic procedure include, concurrent with patient specimens, at least one control material containing the substances being identified or measured.

        (6) Perform control material testing as specified in this paragraph before resuming patient testing when a complete change of reagents is introduced; major preventive maintenance is performed; or any critical part that may influence test performance is replaced.

        (7) Over time, rotate control material testing among all operators who perform the test.

        (8) Test control materials in the same manner as patient specimens.

        (9) When using calibration material as a control material, use calibration material from a different lot number than that used to establish a cut-off value or to calibrate the test system.

        (10) Establish or verify the criteria for acceptability of all control materials.

        (i) When control materials providing quantitative results are used, statistical parameters (for example, mean and standard deviation) for each batch and lot number of control materials must be defined and available.

        (ii) The laboratory may use the stated value of a commercially assayed control material provided the stated value is for the methodology and instrumentation employed by the laboratory and is verified by the laboratory.

        (iii) Statistical parameters for unassayed control materials must be established over time by the laboratory through concurrent testing of control materials having previously determined statistical parameters.

          [68 FR 3703, Jan. 24, 2003; 68 FR 50724, Aug. 22, 2003]

     

    • BASIC TERMS AND CONCEPTS 2001

      These are the fundamentals described in the 2001 AACC Press textbook, Performance-Driven Quality Control by Zoe Brooks, before the introduction of sigma or risk management. Webinar panelists and others will also serve as reviewers, editors, and co-authors to rewrite these basics. (Performance-Driven Quality Control, Zoe Brooks, 2001)

      ------------------------------------

      Samples may come to the laboratory from patients, proficiency testing (PT) programs and external quality assessment schemes (EQAS), or manufacturers of quality control (QC) material.  When we test a portion of the same QC material each day, we are measuring the same sample.

      Samples are tested on an analytical system to produce results that indicate the amount of analyte present. The analyte is the specific substance we are interested in measuring.

      In this book, the term PT sample will also apply to EQAS samples. PT samples are sent to the laboratory from external agencies and tested to reflect the laboratory’s performance with patient samples. Erroneous proficiency results indicate that our laboratory is incapable of meeting the accepted standard of performance and can ultimately lead to the loss of our laboratory’s license to perform that specific test or entire class of tests.

      The analytical system includes the reagents, calibrators, instruments, disposables, and step-by-step processes needed to produce a result.

      Reagents are chemical solutions that react in predictable ways to given analytes; the predicted reaction allows us to measure the amount of an analyte present in each sample.

      Calibrators are materials that contain a known amount of analyte.

      Instruments mix samples with reagents and compare the reaction of the patient, PT, or QC sample against the known calibrator values or predictable chemical activity of an analyte to produce a result for each sample.

      Disposables include the items required to contain samples, deliver set volumes of samples and/or reagents and measure the chemical reaction.

      Processes include all of the steps necessary to prepare samples for testing, prepare reagents and calibrators, set up and maintain instruments, combine samples with reagents, calculate the amount of analyte present, and report results.

      Results of QC samples are analyzed to assess, and alert us to, changes in method performance. Because of inevitable minor changes in reagents, calibrators, instruments, disposables, and processes, these QC results show a predictable and expected random variation. Measured results on the same QC sample are not always the same; some are higher and some are lower than others due to random variations in the analytical system.

      If we create a bar chart of our QC results with the frequency of results on the y-axis and value on the x-axis, we expect a symmetrical bell-shaped distribution of data, as shown in Figure I-1. This expected random variation has a defined mathematical relationship based on random distribution and is described as a Gaussian distribution. This predictable pattern of data distribution is the foundation of statistical analysis in laboratory QC.

      When analytical systems are stable, they do not experience significant change, and the QC results exhibit Gaussian distribution with the average value at the center. The dispersion of results around the average is determined by the inherent random variation of the test system. We calculate these values as the mean (average) and standard deviation (variation) to assess method accuracy and precision. An accurate method will produce results with a mean value close to the true or target value for the measured analyte.

      Figure I-1. Gaussian distribution shows predictable distribution of data with approximately 68% of points within +/- 1 SD, 95% within +/- 2 SD and almost all data within +/- 3 SD.
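      The percentages quoted in Figure I-1 can be checked directly from the normal distribution; the short Python snippet below computes the expected fraction of QC results within 1, 2, and 3 SD of the mean.

# Expected fraction of normally distributed QC results within +/-k SD.
from math import erf, sqrt

for k in (1, 2, 3):
    fraction = erf(k / sqrt(2))        # P(|Z| <= k) for a standard normal
    print(f"within +/-{k} SD: {fraction * 100:.1f}%")
# Prints approximately 68.3%, 95.4%, and 99.7%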

      The true or target value is the best estimate of the correct value for each control sample. The difference between the measured mean and the target is called bias. Method bias changes from time to time when we change reagents, calibrators, components of the instrument, disposables, or processes.

      A precise method will show a relatively small standard deviation, or narrow dispersion, of results around the mean.

      If a change occurs in the bias or precision of our analytical system, the pattern of data distribution changes from the expected Gaussian distribution observed prior to the change.

      Daily QC encompasses preparing and handling QC samples, testing samples to produce QC results, and assessing individual QC results using QC charts and QC rules. In practice, these activities may be performed several times each day for some analytes, and only once or twice each week or month for others, depending on the frequency of testing, the expected error rate of the method, and the number of patient and PT samples tested.

      Stability reflects the inherent error rate in an analytical process; it is a measure of the likelihood that a method will experience sudden changes in accuracy or precision that will adversely affect its ability to meet quality specifications. A method that demonstrates frequent significant changes is described as having a high error rate or low stability.

      Each analysis of QC samples brackets an analytical run.

      The result(s) of the QC sample(s) in each run reflect the accuracy and precision of the method at that time. Any changes in accuracy or precision will affect the QC samples and alert us to similar changes in patient and PT samples. Results from daily QC are plotted on a QC chart, an x-y plot with results from testing of QC samples on the y-axis and the run number or date on the x-axis. The scale of the y-axis is usually set with the mean at the center and the minimum and maximum values often, but not always, defined by +/- 4 SD.

      QC flags are events that indicate unexpected performance, usually triggered when QC results fall outside expected limits selected to meet the criteria of QC rules. QC rules are usually defined by the number of occurrences of QC values that vary from the mean by more than a defined number of SD. We select QC rules to maximize QC flags when changes in method accuracy or precision cause patient or PT results to exceed quality specifications, and minimize QC flags in the absence of such change.
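      As a discussion example (not taken from the book), the sketch below expresses two QC rules of the kind described here in code: one result beyond 3 SD, and two consecutive results beyond 2 SD on the same side of the mean. The rule names follow common usage (for example, Westgard-style "1-3s" and "2-2s"); the rule selection and z-score values are illustrative.

# Minimal sketch of two QC rules applied to a series of z-scores, where
# z = (QC result - mean) / SD for consecutive runs on a QC chart.
def qc_flags(z_scores):
    """Return (run index, rule) pairs for each triggered QC flag."""
    flags = []
    for i, z in enumerate(z_scores):
        if abs(z) > 3:
            flags.append((i, "1-3s"))                 # one result beyond 3 SD
        if i >= 1 and z > 2 and z_scores[i - 1] > 2:
            flags.append((i, "2-2s high"))            # two consecutive > +2 SD
        if i >= 1 and z < -2 and z_scores[i - 1] < -2:
            flags.append((i, "2-2s low"))             # two consecutive < -2 SD
    return flags

print(qc_flags([0.4, -1.1, 2.3, 2.6, 0.2, -3.4]))
# -> [(3, '2-2s high'), (5, '1-3s')]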

      Summary statistics, the mean and SD, compare method accuracy and precision to the target value and total error allowable (TEa) for each QC sample to assess overall method performance. QC flags in reports of summary statistics alert us to changes in the overall accuracy or precision of a method.

      Quality specifications are usually defined as a total error allowable (TEa); these limits specify the maximum acceptable variation of results from the target value.

      A total error flag in summary statistics indicates that total error, the combined effect of method bias and precision, exceeds the acceptable variation defined by the total error allowable.

      Critical systematic error (SEc) is a measure of the number of SD the mean can shift before values will exceed the TEa limit. SEc flags indicate that a method is close to its quality specification and requires close monitoring.
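      For discussion, the sketch below computes these summary-statistics flags under one common formulation (after Westgard): total error is estimated as |bias| + 1.65 SD, and critical systematic error as SEc = (TEa - |bias|) / SD - 1.65. The 1.65 multiplier, the example numbers, and the "SEc below 2" monitoring cut-off are assumptions for illustration only, not values taken from the book.

# Hedged sketch of total error and critical systematic error flags.
target, tea = 5.0, 0.50          # target value and total error allowable (TEa)
qc_mean, qc_sd = 5.10, 0.10      # observed mean and SD for the QC sample

bias = qc_mean - target                      # accuracy: mean minus target
total_error = abs(bias) + 1.65 * qc_sd       # one common total error estimate
sec = (tea - abs(bias)) / qc_sd - 1.65       # SD of shift tolerable before TEa

print(f"bias = {bias:+.2f}")
print(f"total error = {total_error:.2f} vs TEa = {tea}: "
      f"{'total error flag' if total_error > tea else 'within TEa'}")
print(f"SEc = {sec:.2f}: "
      f"{'close monitoring advised' if sec < 2 else 'adequate margin'}")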

      Performance-Driven Quality Control helps us design QC systems to ensure that our laboratory meets the defined quality specification for each method.

      • DIMS Dictionary

        This Preview Edition of the DIMS Dictionary of Quality Control, Risk Management and M.O.R.E. was not published. It pre-dates the costing model developed in 2018. Some terms and graphics have changed. Team contributors will revise and edit this.