BASIC CONCEPTS OF QUALITY ASSURANCE PLANS IN THE HEMATOLOGY LABORATORY
Quality assurance is a comprehensive and systematic process that strives to ensure reliable patient results. This process includes every level of laboratory operation. Phlebotomy services, competency testing, error analysis, standard protocols, PPE, quality control, and turnaround time are each a key factor in the quality assurance system. From the time a sample arrives in the laboratory until the results are reported, a rigorous quality assurance system is the key to ensuring quality results. Each part of the quality assurance plan should be analyzed, monitored, and reconfigured as necessary to promote excellence in every outcome. Although many hospitals and research facilities have "quality" professionals who provide oversight for their facilities' quality assurance plans, an elemental understanding of terms related to the total quality assurance plan is required of all staff technologists and students.
Quality control is a large part of the quality assurance program at most facilities. Students will be introduced to the term quality control early and often. It is an essential function in the clinical laboratory. The information that follows provides a brief overview of the quality control procedures used in promoting quality assurance in the hematology laboratory. It is not intended to be comprehensive but introduces terminology and concepts pertinent to the entry-level professional.
Quality Control Monitoring in the Hematology Laboratory
The analytical component, or the actual measurement of the analyte in body fluids, is monitored in the laboratory by quality control, a component of the laboratory quality assurance plan. As in the chemistry laboratory, the analytic method in the hematology laboratory consists primarily of instrumentation and reagents. Standards, or calibrators, are solutions that contain a known amount of an analyte and are used to calibrate the method. A standard, or calibrator, has one assigned, or fixed, value. For example, a hemoglobin standard of 12 g/100 mL contains exactly 12 g of hemoglobin in 100 mL of solution. Conversely, controls, or control materials, are used to monitor the performance of a method after calibration. Control materials, which are commercially available as stabilized or liquid preparations, are assayed concurrently with patient samples, and the analyte value for the controls is calculated from the calibration data in the same manner as the unknown (patient) results. The measured control values are compared with their expected values, or target range, and acceptance or rejection of the unknown (patient) sample results depends on this evaluation process.
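As a minimal sketch of this evaluation step, the following Python fragment compares a measured control value against a target range, here assumed to be the mean ± 2 SD of prior control runs. The hemoglobin figures are invented for illustration, not taken from the text.

```python
# Sketch of control evaluation: a measured control value is compared
# with its target range (assumed here to be mean +/- 2 SD of prior
# control runs).  The hemoglobin figures below are hypothetical.

def within_target(measured: float, target_mean: float, sd: float,
                  n_sd: float = 2.0) -> bool:
    """Return True if the measured control value falls inside the target range."""
    return abs(measured - target_mean) <= n_sd * sd

# Hypothetical hemoglobin control: target mean 12.0 g/dL, SD 0.2 g/dL
print(within_target(12.3, 12.0, 0.2))  # inside mean +/- 2 SD
print(within_target(12.5, 12.0, 0.2))  # outside the range
```

If the control falls outside the target range, the run is questioned and patient results are held until the cause is resolved.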
A statistical quality control system is used to establish the target range. The procedure involves obtaining at least 20 control values for the analyte to be measured. Ideally, the repeated control results should all be the same; in practice, however, there is always some variability in the assay. The clustering of data points about one value is known as central tendency. The mean, mode, and median are the statistical parameters used to measure central tendency: the mean is the arithmetic average of a group of data points, the mode is the value occurring most frequently, and the median is the middle value of the dataset. If the mean, mode, and median are nearly the same for the control values, the data have a normal distribution. The standard deviation and coefficient of variation are measures of the spread of the data about the mean. Standard deviation (SD) is a precision measurement that describes the average "distance" of each data point from the mean in a normal distribution; it is calculated mathematically for a group of numbers. If the measured control values follow a normal distribution curve, approximately 68.3% of the values fall within 1 SD of the mean, 95.5% fall within 2 SD of the mean, and 99.7% fall within 3 SD of the mean. The 95.5% confidence interval (mean ± 2 SD) is the accepted limit in the clinical laboratory.
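The statistics described above can be sketched with Python's standard `statistics` module. The 20 control values below are invented hemoglobin results (g/dL), chosen only to illustrate the calculations.

```python
# Sketch of establishing a target range from repeated control values.
# The 20 values below are invented hemoglobin control results (g/dL),
# not data from the text.
import statistics

controls = [11.9, 12.0, 12.1, 12.0, 11.8, 12.2, 12.0, 11.9, 12.1, 12.0,
            12.0, 12.1, 11.9, 12.0, 12.2, 11.8, 12.0, 12.1, 12.0, 11.9]

mean = statistics.mean(controls)      # central tendency: arithmetic average
median = statistics.median(controls)  # central tendency: middle value
mode = statistics.mode(controls)      # central tendency: most frequent value
sd = statistics.stdev(controls)       # spread: sample standard deviation

# About 95.5% of a normal distribution lies within mean +/- 2 SD,
# so the target range is commonly set at mean +/- 2 SD.
low, high = mean - 2 * sd, mean + 2 * sd
print(f"mean={mean:.2f}  median={median}  mode={mode}")
print(f"target range: {low:.2f} - {high:.2f} g/dL")
```

Because the mean, median, and mode of this invented dataset nearly coincide, it behaves like the normally distributed control data the text describes.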
The coefficient of variation (CV) is the standard deviation expressed as a percentage of the mean. The lower the CV, the more precise the data. The usual CV for laboratory results is less than 5%, which indicates that the distribution is tight around the mean value. Distinguishing accuracy from precision is often troublesome because the terms are used interchangeably. A test result is accurate when it comes close to the correct value, provided the reference (correct) value is known. In most cases, once a methodology has been established for a particular analysis, standard or reference material is run to establish a reference interval. Accuracy is defined as the closeness of a result to the true value.
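A minimal illustration of the CV calculation, again with invented hemoglobin figures:

```python
# Sketch of the coefficient of variation: the standard deviation
# expressed as a percentage of the mean.  Figures are hypothetical.

def cv_percent(sd: float, mean: float) -> float:
    """CV (%) = (SD / mean) * 100."""
    return sd / mean * 100

# Hypothetical hemoglobin control: mean 12.0 g/dL, SD 0.2 g/dL
print(f"CV = {cv_percent(0.2, 12.0):.1f}%")  # well under the usual 5% limit
```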
Precision relates to the reproducibility and repeatability of results for test samples analyzed by the same methodology. Theoretically, a patient result should be repeatable if the sample is analyzed a number of times using the same method. If there is great variability of results around a target value, precision is compromised.
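The accuracy-versus-precision distinction can be illustrated with two invented sets of replicate measurements of the same sample, judged against a hypothetical true value of 12.0 g/dL: one set is tightly clustered but biased, the other is centered on the true value but widely scattered.

```python
# Sketch contrasting accuracy (closeness to the true value) with
# precision (reproducibility of replicates).  All values are invented.
import statistics

true_value = 12.0  # hypothetical reference value, g/dL
precise_but_biased = [12.6, 12.5, 12.6, 12.7, 12.6]  # tight spread, off target
accurate_but_noisy = [11.5, 12.4, 12.1, 11.7, 12.3]  # centered, wide spread

for name, runs in [("precise/biased", precise_but_biased),
                   ("accurate/noisy", accurate_but_noisy)]:
    bias = statistics.mean(runs) - true_value  # accuracy: closeness to truth
    sd = statistics.stdev(runs)                # precision: reproducibility
    print(f"{name}: bias={bias:+.2f} g/dL, SD={sd:.2f} g/dL")
```

A well-performing method must be both accurate and precise; either shortcoming alone compromises the reported result.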
Source: Hematology in Practice, Betty Ciesla