
Published 1996

Chapter 2

Quality Assurance and Quality Control

EUGENE J. KLESTA, Chemical Waste Management, Inc.

JOAN K. BARTZ, Dames & Moore, Richland, Washington

SCOPE
This section will describe the concepts, principles, and general procedures of
quality assurance (QA) as it applies to the analysis of soil samples and materials
related to soil science. Although the primary focus will be on analytical chem-
istry, sampling will be discussed to some extent because of its importance to the
overall accuracy of the generated data. Data produced from the analysis of a sample are representative of the population, and are reliable, only if the original samples themselves are representative of that population.

Sampling
Sampling activities require quality assurance principles such as the devel-
opment and use of standardized sampling procedures, training and documentation
of sampling personnel, and the creation of traceable and defensible information
through the use of labels and chains of custody. A sampling plan should be writ-
ten to describe the location, number, size, and frequency of samples to be taken.
Specific preservation or handling procedures which are not readily inferred must
be used to prevent the contamination of the sample. There are a wide variety of
sampling strategies which can be employed to obtain samples which will result
in the accumulation of desired information. When compositing techniques are to
be used, specific information regarding the number of increments to be compos-
ited and the size of each increment must be included in the sampling plan. When
any deviations from the sampling plan take place, clear, complete documentation
is required to produce a permanent record that some nonconforming practices
were performed. The reason for the deviation also must be included.

Physical Testing
Quality assurance principles are applicable to physical testing. Reliability of information, precise and accurate data, and defensibility are desirable consequences that occur when standardized procedures are used, record keeping practices include all pertinent information, test performers are properly trained, complete training documentation is maintained, and a quality control (QC) system is in place to determine errors, to identify bias, and to require corrective action when needed. The importance of physical testing is often overlooked. Parameters such as compressibility, permeability, or particle size distribution can be extremely important for the success or failure of a particular project. The quality assurance system must include these areas of testing to be an acceptable system.

Copyright 1996 Soil Science Society of America and American Society of Agronomy, 677 S. Segoe Rd., Madison, WI 53711, USA. Methods of Soil Analysis. Part 3. Chemical Methods. SSSA Book Series no. 5.

Chemical Testing

Chemical testing is commonly referred to as analysis or analytical chemistry. The principles of QA and QC are better established and more generally practiced in analytical chemistry than in the sampling or physical testing areas. Regulatory agencies and commercial clients have required that
laboratories implement QA and QC practices to improve the users' confidence in
the analytical data. The following discussion will describe the critical compo-
nents of a quality system and ways to implement QC to improve the accuracy and
precision of analytical data.

POLICY STATEMENT

A policy statement is an essential part of any quality system. Basically, it is a written statement which is developed and communicated to all participants who
are responsible for the implementation and management of the quality system. It
is important to understand the objectives of the quality system and the domain of
its authority. Likewise, the policy statement will communicate, by omission,
which areas are not included in this particular quality system. Several policy
statements may be needed to cover all areas of concern. The goal to be achieved
by the quality system participants must be established and communicated.

Authority

Participants in a quality program should be able to recognize clearly that the management, governing body, or academic authority supports, endorses, and
participates in the program. The scope of the program should be defined and
delineated to avoid confusion. Successful implementation of the quality program
results with support "from the top" and fails without it.

Goal

The goal should be included in the policy statement. The goal of the qual-
ity system must be documented in terms that allow for measurable evaluation.
One of the key components of a quality system is an assessment phase. The
assessment of the compliance to requirements of the program cannot be readily
made if the ultimate goal is not understood.

Responsibility

Members of the organization who are responsible for the implementation of the quality system, assessment of compliance with the requirements, and delegation of authority to take corrective action must be designated. Each participant's responsibilities should be clearly defined. The responsibilities of those in
charge of the organization also should be defined. The more precise the quality
statement and the accompanying responsibilities are, the more successful will be
the implementation of the program.

DEFINITIONS

Quality Assurance

Quality assurance is the system which includes requirements, procedures, and assessment to ensure that the goal of the program is achieved and to measure
the level of achievement to that goal. Any specific goals that are expected from
the quality assurance activities should be documented. For example, one state-
ment of a major corporation includes these words " ... to provide valid, defensi-
ble data in a timely manner."

Quality Control

Quality control is a system of procedures and practices which results in increased precision and decreased bias. Duplicate analyses, spiked samples, standard reference materials, and QC check samples are all mechanisms used to demonstrate the control of quality. Quality control is a subset of QA. The principles of QC reach the level of the field technician, bench chemist, and laboratory analyst.
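As a concrete illustration of the duplicate-analysis check mentioned above, the relative percent difference (RPD) is one widely used precision statistic. The function name, the example values, and the acceptance limit mentioned in the comment are illustrative assumptions, not prescriptions from this chapter:

```python
def relative_percent_difference(x1, x2):
    """Relative percent difference (RPD) between duplicate results.

    A common precision check: RPD = |x1 - x2| / mean(x1, x2) * 100.
    The acceptance limit (e.g., RPD <= 20%) is set by the laboratory's
    own QC program, not by this sketch.
    """
    mean = (x1 + x2) / 2.0
    if mean == 0:
        return 0.0
    return abs(x1 - x2) / mean * 100.0

# Duplicate soil extract results, mg/kg (illustrative values)
print(round(relative_percent_difference(10.2, 9.8), 2))  # 4.0
```

An RPD exceeding the laboratory's limit would trigger the corrective-action procedures described later in this chapter.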

Defensibility

Defensibility is a quality of analytical data, field information, and general record keeping which allows for the admission of the records into a court of law
to stand on their own merits. The lack of ambiguity, the clarity and accuracy of
information, and the completeness of the explanations by the author all improve
the defensibility of the data.

Traceability

Traceability is a characteristic which is essential to substantiate the validity of analytical data, the appropriateness of the sample tested, and the correlation
to the calibration standards used. The defensibility of information relies heavily
on its traceability. The correlation of information is a necessary attribute of a
quality system.

QUALITY ASSURANCE MANUAL

An essential part of the quality assurance system is the quality assurance manual. The manual can take a variety of forms. The main attributes for developing an acceptable manual are completeness, portability, applicability, and clarity.
All necessary material must be included in the manual. The users of the
manual need to have a copy of the manual near at hand. Putting together a huge
tome that will not find its way off the shelf defeats the purpose of the manual.
The manual is focused on the activities taking place in the laboratory or in field
operations. Specificity of procedures is necessary to make the manual applicable.
The ability to edit the contents in a reasonable amount of time is necessary. As
the quality system develops, grows, or changes, the quality assurance manual
must keep pace with this progress. The responsible authors of the quality assurance manual should work at removing ambiguity and confusion.

STANDARD OPERATING PROCEDURES

One of the prerequisites for having and maintaining a good quality system
rests on the need to have all members of the staff operating on "common
ground." Writing and using standard procedures are the first steps toward achiev-
ing comparability. People involved in activities tend to develop their own way of
doing things. Creativity under the guise of improvement can result in changes in
procedures which could result in undesirable consequences and unusable data.
Management of the organization must determine where and when Standard
Operating Procedures (SOPs) are needed.

Organization

A collection of SOPs is necessary to have an acceptable quality system. A common format should be developed so that all SOPs are structured alike. The standardized format improves ease of use. Standard operating procedures are used for reference by personnel in field operations and laboratory operations. Easily finding "key" information in an SOP results in an overall improvement in efficiency. The reader finds the information quickly and
returns to the task at hand. Through continued use of the SOPs, the technical staff
becomes more familiar with the procedure and generally what is expected in day-
to-day operations. Standard operating procedures are essential elements for the
training of new personnel.

Analytical Structure
Standard operating procedures are useful for clarification and standardiza-
tion of analytical methods. All analysts should be performing the analytical pro-
cedures exactly the same way. Because some analytical methods do not contain
the specificity needed to ensure complete consistency, it is necessary for the lab-
oratory management to clarify these points of confusion and to specify details
where the methods are lacking. The development of SOPs covering analytical
methods satisfies this requirement. Specification of calculations, choice of labo-
ratory equipment, or time limits for certain steps not addressed in the method are
examples of points to be included in an analytical SOP. General laboratory pro-
cedures which can affect the analytical results should be included in the scope of
SOPs. Glass cleaning procedures, safety concerns and rules, and sample handling
procedures and disposal practices are examples of issues to be covered.

Quality Assurance Structure


Standard operating procedures are needed to describe the quality assurance
structure and responsibilities. When the organization determines that specific
quality control or quality assurance practices are required, then SOPs covering
these areas must be written and distributed to all personnel. If a quality assurance
unit or a quality assurance officer is the mechanism employed to oversee the
quality system and to determine adherence to the program, then the respective
responsibilities, duties, and authority must be spelled out in the SOPs.

Management Responsibility
The management of a sampling or analytical operation has a responsibili-
ty to develop an organized quality system. Leadership is an essential factor to
ensure that a quality program is successful. The use of standardized procedures
results in conformity and clarity. The repeatability of analysis is greatly enhanced
by conformance to standardized procedures. Management must set the "ground
rules" for the rest of the organization. Approval of the standard procedures
should be done by management in a systematic process.
The quality management function is responsible for instituting standard
procedures for the QA and QC practices that are to be performed universally.
Input from the staff performing the sampling and the analyses is an important
consideration for management to solicit. The practicality and usefulness of the
standard procedures will be greatly improved by using the input of the staff.
Adherence to the procedures follows readily when staff members have an inte-
gral part in the development of those procedures.

Employee Responsibility
Each member of the sampling or analytical staff is required to be knowl-
edgeable of all SOPs which impact on their job responsibilities. The current SOP
must be used to perform the sampling or analytical tasks. When it is necessary to
alter the procedure for any number of legitimate reasons, the employee has the
responsibility to notify the management that the SOP is in need of revision. Stan-
dard operating procedures which are written with input from the employee are
generally the most practical and useful. Cooperation in writing and revising the
documents is one of the key attributes of a successful quality program.

Analytical Methods
Written analytical methods are a necessity to maintain QA at an acceptable
level. The analysts must follow the steps of a procedure exactly the way they are
written. The comparability of analytical data from analyst to analyst depends on
adherence to written analytical methods. The laboratory should maintain a cur-
rent methods manual which contains all of the methods that are being used in the
laboratory. Methods which are no longer used should be maintained in historical
files, but should be removed from all manuals found in the laboratory. Records
must be maintained to document when a change is made from one version of a
method to a later version of a method. The defensibility of the data depends on
the ability to correlate the data to the version of the method in use at the time of
generation.

Standard Methods
Analytical methods which have undergone a rigorous process for review,
validation, and promulgation are the most desirable methods to be used. Societies and associations which publish methods have a variety of approval processes which are used to qualify the method before it is given the final approval of
the organization. The defensibility of analytical data is improved considerably
when data are generated using a standard method. The scope and application of
the method must be appropriate for the material to be analyzed. The misuse of a
standard method is just as unacceptable as using no standard method at all. Some
of the common issuers of methods are: American Society for Testing and Mater-
ials (ASTM), American Public Health Association (APHA), Soil Science Soci-
ety of America (SSSA), Association of Official Analytical Chemists-Interna-
tional (AOAC), United States Environmental Protection Agency (USEPA).

Validation of Methods
Procedures for the validation of analytical methods are needed in the qual-
ity assurance manual. Whether the method is a standard method or one that was
developed within the laboratory, it is critical to demonstrate defensibly that the
analysts are capable of generating results that meet the acceptance criteria of the
method. The number of replicate analyses to be performed and the material to be
used for the validation of the method should be specified clearly in the applica-
ble SOP.

Modification of Methods
When some specific step or procedure of a standard method does not apply
to particular sample matrices or technical improvements can be made, then a
method modification should be written, reviewed, and approved. The nature of
the modification cannot change the basic chemistry of the procedure. Technical
concerns should be reviewed by staff members for correctness and technical
merit. Validation of the modified method should be performed to identify any
potential bias. Modifications must be in writing and should be placed within the
analytical methods manual for ready reference. A standard form for modifica-
tions is helpful to ensure that all necessary issues for a modification to be
approved are included in the modification. The method modification must not be
used before it receives approval.

Approval
An approval system should be put in place to prevent the unauthorized use
of nonstandard methods or method modifications. The person responsible for
method approvals should have significant experience in the use and development
of analytical methods. The approval should be made by the staff member with the
most responsibility for ensuring that analytical data of the highest quality are pro-
duced. In multilaboratory situations, an appointed official should be given the
authority and responsibility to review and approve all method modifications and
nonstandard methods. Distribution of the modification to all holders of the meth-
ods manual also is the responsibility of that appointed official.

Record Keeping

Record keeping is a very important aspect of the quality assurance system. Analytical data are defensible when the process which was used to generate the
data can be retraced through all steps of the process such as test portion amount,
length of time for extraction step, dilutions, operating conditions of the instru-
ment, and calculations. Data must be recorded and preserved in such a way that
all critical information can be retrieved at some future date.

Logbooks
Logbooks for recording information and data should be bound and
designed for the identification of the book and the uniqueness of the page and line
numbers. Logbooks should be sequentially numbered, traceable from the issuer
to the user, and subject to a system for inventory and archive. Meeting these
requirements will result in completely defensible records.

Benchsheets
Various laboratory organization schemes may necessitate the use of
benchsheets for recording analytical data. The practical benefit of using
benchsheets is that they can be designed and used for specific analytical tests.
Having customized forms can be very useful in assuring the complete documen-
tation of all critical parameters. The analyst is somewhat forced to complete all
of the blanks on a benchsheet. Times, temperatures, flow rates, and calculations
can be incorporated in the format resulting in improved record keeping.
Benchsheets should be prelabeled with unique sequential numbers. When a significant number have accumulated, the benchsheets should be identified and bound in such a way that they cannot be lost or stolen.

Data Recording
Data should be recorded in permanent ink. Black or blue ink is preferred
because it can be photocopied and does not bleed or become illegible as other colors do. Data should never be obliterated by crossing out, by covering with opaque correction fluid, or by covering with tape. Corrections to data may be made as described in the
section on "Data Management."

Archiving
Defensibility of information includes the ability to reproduce the informa-
tion at some future date which may occur days, months, or many years after the
information was first generated. To achieve this objective, a system for archiving
must be in place. The procedures for the distribution, use, recovery, and storage
of logbooks must be described. The accumulation and storage of benchsheets and
instrument printouts must be detailed. Secure, fireproof storage is required for all
paper records. An organized system will facilitate retrieval of information. The
system can be set up by date, project, client, or state as long as a test of the sys-
tem during the quality assessment phase of the program proves to be successful.
Procedures for magnetic media should be determined and clearly documented in
an SOP. The use of compact disc-read only memory (CD-ROM) and laser disks is becoming more acceptable each year because of the advantages of size reduction, search capability, and permanence. Use of magnetic or optical media may
not be acceptable as the only means of archiving. The responsible authority for
oversight of the work should be consulted before implementing an optical or
magnetic system which replaces the paper records completely.

Computers
The use of computers to capture, calculate, and store analytical data great-
ly enhances the quality and accessibility of the data. The transcription of data
from one location to another can result in a significant number of errors. Putting
the information into an electronic medium reduces the chance of transcription
errors. Information from a computerized system is retrieved in a fraction of the
time that a "paper" system would take. There are some quality assurance considerations that must be addressed when developing an electronic system.

Security
Access to the computer system must be controlled. The use of passwords
and a hierarchical organization add to the overall security. Managers and supervisors will have access to a greater amount of information than the analyst or technician. A magnetic or electronic "audit trail" or history file should be included in
the computer system. Whenever access is granted to stored information, a sepa-
rate nonaccessible record is made of the transaction. Whenever changes to stored
data are made, a justification should be required by the system before the change
can be completed. Both the original information and the corrected information
should be stored permanently. Instrumentation is becoming more computerized,
and the transfer of data by electronic and magnetic means is becoming more com-
monplace. Because of these developments, security is an essential principle of the
quality system.
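The audit-trail requirement described above can be sketched in a few lines. This is a hedged illustration only: the function, the field names, and the sample values are invented for the example, and a production laboratory information management system would enforce these rules in protected storage rather than in application code:

```python
import datetime

def record_change(audit_log, record_id, field, old, new, user, reason):
    """Append an audit-trail entry for a change to stored data.

    Both the original and the corrected values are retained, a
    justification is required before the change is accepted, and the
    entry is timestamped and attributed to a user.
    """
    if not reason:
        raise ValueError("a justification is required before a change is stored")
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "record": record_id,
        "field": field,
        "original": old,
        "corrected": new,
        "user": user,
        "reason": reason,
    })

log = []
# Hypothetical correction of a transcription error (illustrative values)
record_change(log, "S-1042", "lead_mg_kg", 15.2, 12.5, "jsmith",
              "transcription error against benchsheet 0331")
print(len(log))  # 1
```

In a real system the log itself would be nonaccessible to the user making the change, as the text requires.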

Backup Procedures
Computer systems require a specialized procedure which is not needed in
typical paper record systems. Because of the possibilities of magnetic distortion
or mechanical failure, duplication of essential data is required to prevent the per-
manent loss of data. The ease of duplication and the relatively small amount of
space needed to store the duplicate records make backup procedures suitable to
computer systems. The concept of maintaining a backup to all paper records is
impractical and limited by space. The development and use of an SOP is essen-
tial to ensure that the backup procedures are done at the required frequency and
in the correct manner.
Tape. When a tape system is used for archiving and backup procedures,
there are some specific requirements to ensure the integrity of the tapes for future
access. Tapes should be rewound periodically (approximately every 6 mo) and
data should be transferred to new tapes on a regular basis (every 5 yr or less). It
also is necessary to maintain the hardware needed to be able to read the tapes. As
technology improves, size, speed, and capacity also change. Storing tapes which
cannot be read because the hardware has not been maintained should be prevent-
ed. Tapes must be stored in secure, hardened, fireproof facilities to prevent cata-
strophic loss.
Alternate Media. The development of alternate media is improving the
longevity of archiving in some condensed form. The use of CD-ROMs or laser
disks which use optical means rather than magnetic means to store information
will increase the amount of time that information can be stored before the trans-
fer to new media is needed. These optical media are estimated to be stable for
approximately 20 yr. Developments of technology such as these will improve the
ability to retrieve data when long-term storage is needed.

Training
Having qualified personnel is one of the primary components of a quality
system. To assure that the people performing the sampling, analysis, and quality
assessment functions are both competent and conscientious, proper training must
be conducted and documented. Although technically educated people have
knowledge which can be applied to the tasks at hand, it is very important that spe-
cific training in procedures is conducted. Consistency between personnel is
improved through proper training. Specific procedural steps or quality control
practices may not be readily discernible even to a technically educated person.
Serial training should be avoided because it allows for "folklore" to creep into the
system. Combining standardized procedures with a good training program will
result in the highest probability for precise and accurate information.

Methods
All personnel who are to perform a specific sampling or analytical method
should receive training in that method and should have documented evidence of
proficiency in the method. The training scenario should include the following: (i)
The trainee reads and understands the written method. (ii) The trainee observes
the trainer perform the method in its entirety. (iii) The trainee performs the
method with observation by the trainer. (iv) The trainee performs the method a
second time using a check sample or a reference material to demonstrate profi-
ciency. (In the case of sampling, proficiency is measured by comparing analyti-
cal results from the trainee's sample to those from the trainer's sample.) (v)
Short-term follow-up by the trainer should occur to ensure that the method is
being followed as written and to answer any questions which may need answers
or clarification. One of the major sources of error in sampling or in the laborato-
ry is the result of nonconformance to the method.

Quality Assurance/Quality Control Program


The specific procedures of the quality assurance/quality control program
also require training. The frequency of replicates and spikes, the interpretation of
suspect data, the use and interpretation of control charts, and the corrective
actions which are to be taken when an out of control situation occurs cannot be
left to chance. Knowledge of the procedures, use of quality control "tools," and a
sound understanding of the quality philosophy of the organization are achievable
goals of the training program. Managers will become more comfortable with del-
egating decisions and relying on subordinates to carry out and correct the quali-
ty system, when a training program includes the quality system as a part of the
curriculum.
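The spiked-sample check referred to above is usually evaluated as percent recovery, a routine bias indicator. The following is a minimal sketch; the acceptance window mentioned in the comment and the numeric values are illustrative assumptions, not requirements from this chapter:

```python
def percent_recovery(spiked_result, unspiked_result, amount_spiked):
    """Percent recovery of a spiked sample:

        %R = (spiked - unspiked) / amount spiked * 100

    Acceptance windows (e.g., 80-120%) come from the method or the
    laboratory's own QC program.
    """
    return (spiked_result - unspiked_result) / amount_spiked * 100.0

# 10.0 mg/kg native, 5.0 mg/kg spike added, 14.5 mg/kg measured (illustrative)
print(round(percent_recovery(14.5, 10.0, 5.0), 1))  # 90.0
```

A recovery outside the acceptance window would be one of the "out of control" situations that trigger the corrective actions discussed in the text.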

Documentation
All training records must be kept up-to-date and complete. Developing a
training plan is one way to assure that all required training occurs at the proper
frequency. Each sampling or analytical staff member must have a training file
which includes a record of what training is required for that person and docu-
mented evidence that the training has occurred.

Certification
Upon completion of sampling, analytical, or quality assurance/quality con-
trol training, the management of the organization should certify that the person is
qualified to perform the procedures without additional supervision. Annual recer-
tification should be done for all procedures.

Facilities

Proper laboratory facilities will increase the ability to produce high-quality analytical data. The instrumentation must be capable of generating results which
meet the needs of the data user. Adequate space will aid in limiting cross conta-
mination and loss or breakage of samples. Impervious materials should be used
in the laboratory. Adequate amounts of bench space and hood space should be
provided to prevent overcrowding of activities, some of which may have the
potential to be dangerous.

Exposure and Environment


Personal protective equipment (PPE) should be provided to all sampling
and analytical personnel. The use of fume hoods must be required for all activi-
ties which may emit toxic or hazardous fumes. Handling of all "unknown" sam-
ples must be done using rubber gloves to prevent contact with the skin. Exposure
has another meaning with regard to the laboratory equipment. Expensive equip-
ment should not be exposed to excessive heat or cold, dust, or sources of electri-
city which have not been conditioned or stabilized. Air conditioning and dehu-
midifiers should be used where appropriate to prevent overheating or damage
from condensation. The quality of analytical data can be directly related to the
quality of the instrumentation and conditions under which it is operated. Specif-
ic safety rules should be developed and enforced to comply with the Occupational
Safety and Health Act (OSHA) and to prevent any injuries to analytical staff.
Laboratory access should be controlled. Unauthorized personnel should be
allowed into laboratory areas only when escorted. The integrity of the data is best
demonstrated by having an environment which is restricted to staff that have been
properly trained and certified to perform the work.

Instruments and Equipment

Instruments and equipment are primary components of a sampling and analysis quality assurance system. Sampling events to be performed according to
a written sampling plan cannot be successful unless proper sampling equipment
is provided in adequate quantities. Analytical procedures cannot be followed
properly without instrumentation capable of achieving the desired sensitivity and
precision. Providing the materials is only the first step. Appropriate maintenance
and calibration are needed to ensure quality data.

Maintenance
Written procedures for the decontamination of sampling equipment, and their implementation, are an integral part of assuring quality. Proper use of equipment was
covered to some extent in the section on training. It is essential to keep sampling
equipment in proper working order by replacing expendable parts and by follow-
ing a maintenance plan.
Analytical instruments require servicing at proper intervals. The quality as-
surance manual should contain an instrument maintenance plan. Daily, weekly,
monthly, and yearly preventive maintenance procedures should be enumerated
for each type of instrument. Scheduling of preventive maintenance service calls
by the instrument manufacturer also should be included. A daily instrument
check compared to statistically based control limits should be required as part of
the quality control procedures. This will assist in isolating instrument malfunc-
tions. Maintaining spare parts and keeping a current inventory of parts will assure
reductions in down time and prevent using old or inadequate parts because of the
pressures of data production.
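The "statistically based control limits" for the daily instrument check mentioned above can be derived from historical check-sample results, for example as the mean plus or minus three standard deviations. The sketch below makes that assumption; the function names and the historical data are illustrative:

```python
import statistics

def control_limits(history):
    """Mean +/- 3 standard deviations of historical check results,
    one common way to set statistically based control limits."""
    m = statistics.mean(history)
    s = statistics.stdev(history)
    return m - 3 * s, m + 3 * s

def daily_check(history, todays_result):
    """True if today's instrument check falls inside the limits."""
    low, high = control_limits(history)
    return low <= todays_result <= high

# Twenty prior daily checks of a reference standard (illustrative values)
history = [99.8, 100.1, 100.3, 99.6, 100.0, 99.9, 100.2, 100.4,
           99.7, 100.0, 100.1, 99.5, 100.2, 99.9, 100.0, 100.3,
           99.8, 100.1, 99.9, 100.2]
print(daily_check(history, 100.1))  # True
print(daily_check(history, 103.0))  # False
```

A failed daily check isolates the malfunction to the instrument before sample data are generated, which is the purpose the text describes.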

Calibration
Calibration is required before the generation of sample data begins. An SOP
should be written and implemented which includes the frequency of calibration,
the number of calibration standards to be used, and the acceptance criteria for the
calibration curve. The number of calibration points needed increases as the linearity of the curve decreases, as described in the section "Calibration." The range of
concentration to be used for acceptable quantitation should be documented and
followed. The nature of analytical chemistry is based primarily on the compari-
son of the unknown concentrations, i.e., samples, to known concentrations, i.e.,
standards. The use of current, accurate calibration curves cannot be overempha-
sized.
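The calibration requirements above (number of standards, acceptance criteria for the curve) can be made concrete with an ordinary least-squares fit. This sketch assumes a linear response and uses the correlation coefficient r as the acceptance statistic; the r >= 0.995 limit is a common laboratory convention, not a requirement stated in this chapter:

```python
def fit_calibration(concentrations, responses):
    """Least-squares linear calibration: response = slope*conc + intercept.

    Returns slope, intercept, and the correlation coefficient r, which
    many laboratories use as an acceptance criterion for the curve.
    """
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(responses) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, responses))
    sxx = sum((x - mx) ** 2 for x in concentrations)
    syy = sum((y - my) ** 2 for y in responses)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

# Five-point curve: standard concentrations (mg/L) vs. instrument response
# (illustrative values)
slope, intercept, r = fit_calibration([0, 1, 2, 5, 10],
                                      [0.01, 0.105, 0.198, 0.51, 0.995])
print(r >= 0.995)  # True
```

Unknowns would then be quantitated only within the concentration range spanned by the standards, as the text requires.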

List
The sampling and analytical personnel should maintain a current list of all
sampling and analytical equipment. The instrument list should contain the model
number, serial number, date of purchase, and general condition. The list should
be kept current, reflecting recent acquisitions or retirement of equipment. The list provides a convenient summary of the organization's current assets and assists management in budgeting for future replacement of old equipment.

Data Management
An SOP should be written for all phases of data management. The collection, calculation, verification, and reporting of data are to be included. A system for organization of the data and the process for recovery of archived data should be developed and documented.

Primary Data
Primary data are sometimes referred to as "raw" data. The first record of
the data constitutes primary data. Instruments provide primary data in the form of
printouts, chromatograms, or spectra. All of these formats must be dated and ini-
tialed by the analyst. Sampling events also include forms of primary data in field
notebooks, chains of custody, and sample labels. Analytical procedures that do
not produce printed records require that all primary data be recorded in bound,
numbered logbooks or on prenumbered benchsheets that will be subsequently
bound. The accuracy and completeness of primary data are essential for having
truly defensible records.

Secondary Data
Secondary data result when primary data are used to calculate an analyti-
cal result or when primary data are copied into another format. The SOP on data
management must include mechanisms for the review of secondary data and the
procedures for corrective action when errors are found. Quality control limits of
acceptability are another form of secondary data. If the acceptance limits are inac-
curate, a significant amount of analytical data can be generated that appears to be
acceptable when, in fact, it is not.
QUALITY ASSURANCE & QUALITY CONTROL 31

Calculations
All calculations should be clearly understandable. Logbook or benchsheet
formats which include the primary data, the analytical factor, the dilution factor,
and the results are helpful for ensuring the accuracy of the calculation and for
verification during data review. Dependence on calculator programs or computer programs
without independent verification of the accuracy of the program can lead to the
generation of a significant number of errors.

Corrections
The SOP should include the correct procedure for making corrections in
primary or secondary data. It is generally accepted to draw a single line through
the incorrect information, add the correct information, and then date and initial
the correction. When it is not obvious why the correction is being made, then
additional notes of explanation should be included. The true test of defensible
data occurs when the trail from primary data to analytical result can be followed
without any explanations from the personnel involved. The records should tell the
"whole story."

Documentation
Documentation includes two concepts: written documents and the process
of documentation. The number, types, distribution, and revision process for all
written manuals must be included in the SOP on documentation. These may
include the sampling procedures, analytical methods, and quality assurance man-
ual along with any other documents that the organization deems necessary. The
personnel responsible for the generation, distribution, and revision of the manu-
als should be designated in the SOP.

ASSESSMENT
A functional quality assurance system includes a mechanism for the assess-
ment of system implementation and performance. The quality assurance system
uses the assessment phase to initiate corrective action and to make subsequent
improvements to the system. Typically, quality assurance audits are conducted
to determine the status of conformance and to develop action plans to eliminate
occurrences of nonconformance. The assessment can and should be made both
internally and externally.

Internal
Supervisors, managers, or quality assurance officers are the primary agents
for performing internal assessment of the quality assurance system. A plan should
be developed to review various aspects of the program on a frequency that corre-
sponds to the importance of the particular aspect. Some quality assurance activi-
ties may only need to be reviewed on a quarterly basis, whereas others may need
attention weekly. The assessor should use observation of activities along with
reviews of documentation to determine the degree of conformance. Training
needs may be determined from the internal assessment function. Good managers
not only know where improvements are needed, but also will take the necessary
steps to correct the situation.

External

Audits performed by regulatory agencies, certifying organizations, or quality
assurance units, as described in Good Laboratory Practices (GLPs) (Code of
Federal Regulations), are the most common sources of external audits. The audit
process should be as nondisruptive as possible. The auditors should be trained in
auditing techniques, communication skills, and the sampling and analytical activ-
ities taking place. The written report should be prepared as soon as possible and
should be used to implement corrective action. Two types of audits are generally
conducted: (i) A performance audit reviews quality control data, bench sheets,
logbooks, and proficiency testing results. The determination of the precision and
accuracy of the generated data is the principal goal of a performance audit. It may
be considered a quantitative assessment of quality. (ii) A system audit reviews
training records, quality manuals, SOPs, methods manuals, sampling procedures,
internal assessment of QA program implementation, and corrective action prac-
tices. It may be considered a qualitative assessment of quality. In either case, the
auditor should use a checklist to be certain that all areas are addressed.

PROFICIENCY TESTING
Proficiency testing schemes play an important role in a quality assurance
system. Samples of known concentration are submitted to a group of laboratories.
The samples are "blind" in as much as the laboratories do not know the true value
for the sample. Some programs may include "double blind" samples. In this case,
the laboratories do not know that the sample is a proficiency sample. The mate-
rials are submitted under the pretense of being a routine sample. In either case,
the results are used to determine the correctness of the laboratory's data and may
be used to certify the laboratory for future work. The "true value" of the profi-
ciency sample can be determined in a variety of ways. The concentration of a syn-
thetic sample can be determined by weights and volumes. All of the non-outlier
data received from the laboratories can be used to determine the "true value."
This is commonly referred to as the consensus method. A small group of previ-
ously qualified laboratories may be used in a preliminary round to determine the
"true value" that will be used for subsequent rounds. Reports from the organizers
of the proficiency testing scheme should be used as a feedback mechanism to the
laboratory personnel.

QUALITY CONTROL
Analytical chemistry without QC is guesswork. For the words "quality con-
trol," one may substitute other words such as calibration, contamination control,
stability, precision, or accuracy, which are all part of QC.

Quality control is the component of a quality assurance program that is primarily
based on statistical evaluation of the data resulting from certain control
samples. Control samples include duplicate and spiked routine samples and qual-
ity control check samples. The statistical requirements for control data may be set
according to the needs of the data users or may be based upon the "best" the ana-
lytical method and instrument measurement system can attain. Generally the por-
tions of the quality assurance program that are under the control of the analysts
at the bench or instrument constitute quality control.
A sufficient frequency of control samples needs to be established to allow
evaluation of imprecision and inaccuracy from both sampling and laboratory
sources. For example, sampling related imprecisions and inaccuracies may arise
because of within horizon variability or spatial variability within a soil type or
because of contamination. Laboratory related imprecisions and inaccuracies may
be related, for example, to multiple calibrations, analysts, and sample prepara-
tions or changes in the measurement system such as instrumental drift within an
analytical run.
Quality control samples also are tools that can serve as routine checks that
analytical results are consistent over time. The quality control data that are col-
lected along with routine data over several agricultural growing seasons, over
several years, or in independent projects can be used to prove that data are com-
parable and therefore can be used as one collective data set. Conversely, the qual-
ity control data may show that the data are biased with respect to growing season,
year, or project; however, the quality control data are then useful in quantifying
the amount of bias and can be used to normalize the routine data.

DEFINITIONS
Quality control is based primarily on the use of statistics. A thorough
review of statistical concepts is suggested. Several statistical terms are defined
here to aid the discussion. In addition, several analytical quality control terms are
defined.

Test Portion
The test portion is the volume or weight of material that is prepared for
measuring the parameter of interest. The amount must be sufficient to be repre-
sentative of the sample. In turn, the sample is assumed to be representative of the
population of interest. For solid materials, particle size reduction by crushing or
grinding may be necessary to allow the use of a smaller test portion weight. Such
mechanical sample manipulation may alter physical characteristics that, in turn,
affect sample attributes such as adsorption, mineral structure, or reactivity.

Replicate
The term replicate describes multiple test portions or multiple instrument
measurements on one prepared test portion. The data may be evaluated by exam-
ining the relative percent difference, relative standard deviation, or coefficient of
variation.

Duplicates
Duplicates are specific replicates. This term usually describes two separate
test portions that are subjected to the same preparation procedure and then to the
same measurement procedure. As for replicates, the data may be evaluated by
examining the relative percent difference, relative standard deviation, or coeffi-
cient of variation.
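As a hedged sketch of how agreement between a duplicate pair might be evaluated, the relative percent difference can be computed as below; the function name and the example concentrations are illustrative, not from the text:

```python
def relative_percent_difference(x1, x2):
    """RPD: absolute difference between a duplicate pair,
    expressed as a percentage of the pair mean."""
    pair_mean = (x1 + x2) / 2.0
    if pair_mean == 0:
        raise ValueError("pair mean is zero; RPD is undefined")
    return abs(x1 - x2) / pair_mean * 100.0

# Illustrative duplicate determinations, e.g., 12.1 and 11.5 mg/kg
rpd = relative_percent_difference(12.1, 11.5)  # about 5.1%
```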

Spiked (or Fortified) Samples


Spiked or fortified samples are routine samples to which known amounts of
analyte are added. The data may be evaluated by examining percentage recovery
of the spike addition. The spike recovery data may be used to evaluate the accu-
racy (and suitability) of the analytical method for the samples of interest. How-
ever, for the following reasons, these data must not be used to "correct" the rou-
tine data: (i) The spike may be more easily recovered than the naturally occurring
analyte of interest. (ii) The spike may be consumed, absorbed, or "fixed" by the
routine sample. (iii) The spike may be otherwise affected by the sample or solu-
tion matrix.
Spikes also may be added to the solution that results from sample dissolu-
tion or extraction. These spike recovery data may be used to evaluate the accura-
cy and precision of the determinative portion of the method. As stated earlier,
routine data must not be corrected for spike recovery.
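A minimal sketch of the spike-recovery calculation described above (the names and values are illustrative); note that, per the text, recovery is reported alongside the routine data rather than used to correct it:

```python
def percent_recovery(spiked_result, unspiked_result, spike_added):
    """Percentage recovery of a spike addition: the portion of the
    added analyte that the measurement actually found."""
    return (spiked_result - unspiked_result) / spike_added * 100.0

# Illustrative: unspiked sample 4.0 mg/kg, 5.0 mg/kg spike added,
# spiked sample measured at 8.6 mg/kg
recovery = percent_recovery(8.6, 4.0, 5.0)  # about 92%
```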

Quality Control Check Samples


Quality control check samples are taken from materials with parameters of
known value that are subjected to the same preparation and analytical procedures
as the routine samples. A certain parameter is measured repeatedly to evaluate the
control status of both the analytical preparation and the instrumental measure-
ment system. It is advantageous to use a quality control check sample that is of
similar matrix to the routine samples, if such material is available. It also is
advantageous for the parameter of interest to be within the concentration range of
samples that are routinely analyzed. If the matrix and concentration are similar to
that of the routine samples, the analytical precision and accuracy determined for
the quality control check sample can be attributed to the routine samples, as well.

Laboratory Control Samples


A laboratory control sample is spiked reagent water or other blank materi-
al that is measured repeatedly to evaluate the control status of the calibration of
the instrument measurement system. Laboratory control samples also are called
continuing calibration verification (CCV) samples. The analytical precision esti-
mated for the laboratory control sample can be attributed to that concentration
level on the calibration curve.

Standard Reference Materials


Standard Reference Materials (SRMs) are produced and sold by recognized
standards organizations such as the National Institute of Standards and Technology
(NIST). The certified values and acceptance criteria are determined for the mate-
rial based on a variety of analytical methods. Not all matrix types are available
from NIST. Confidence in the accuracy of the analytical procedure is increased
significantly when a laboratory can demonstrate satisfactory performance for
SRM analysis.

Control Charts

Control charts are the real-time plots of data derived from control samples
such as duplicates, spikes, and quality control check samples. Acceptance crite-
ria are determined and used to evaluate the data. The plots are made and evaluat-
ed by the analyst so that any out-of-control situation may be corrected immedi-
ately. It is important to note that control charts which are not plotted at the time
of analysis are merely historical records of performance and are not viable mech-
anisms for the control of the analytical process.

Precision

Precision is the closeness of multiple independent measurements to each other.
Precision can be expressed by the range, relative standard deviation, coef-
ficient of variation, or standard deviation of the measurements. Precision is used
to demonstrate the existence or absence of random error.

Reproducibility

Reproducibility is the measurement of the precision of analytical data generated
using the same analytical method at a minimum of two independent labo-
ratories.

Accuracy

Accuracy is the agreement of a measured value to the true value of the para-
meter of interest. Accuracy is often expressed as percentage recovery or the per-
centage difference from the certified value of a standard reference material.

Bias

Bias refers to a systematic difference between the determined value and the
true value. Bias is the measurement of systematic error.

Statistical Process Control

Statistical Process Control (SPC) is a system which uses statistical parameters
such as the mean and standard deviation to calculate acceptance criteria
which are used for real time control of the process. Statistical process control
does not imply accuracy, but must be established before accuracy can be evalu-
ated.

STATISTICS

Quality control samples are useful tools in day-to-day laboratory operations.
It is important that the statistics used for the QC data are easy to calculate
and to understand. Interpretation is facilitated by the use of control charts. Ease
of calculation is aided by the hand-held, scientific calculator which is usually pre-
programmed to calculate arithmetic mean, variance, and standard deviation. Sta-
tistics based on the data collected are the "sample" statistics, whereas the true val-
ues are "population" statistics. Calculation of the sample statistics are based on
(n - 1) degrees of freedom.
Most analytical data for QC samples are normally distributed (assuming
that the laboratory is "in control"). The data are dispersed symmetrically about a
central value (the arithmetic mean) and small deviations from the central value
occur more frequently than large deviations. If plotted as frequency of occurrence
vs. analyte concentration, the data appear as a bell-shaped curve. Within one stan-
dard deviation, 68.3% of the data will occur. Within two standard deviations, 95%
of the data will occur. Finally, within three standard deviations, 99.7% of the data will
occur. This is the basis for establishing control limits equal to plus-or-minus three
standard deviations.
The mean, variance, and standard deviation are the statistics upon which
other evaluations of the sample data are based.

Control Charts

A control chart for standard reference materials, QC check samples, or laboratory
control samples uses the arithmetic mean as the estimate of the central
value. Initially, control limits are defined as a percentage of the mean, usually
10% of the arithmetic mean of a series of measurements. However, after at least
seven measurements have been made, statistical control limits can be established.
The warning limits are set at plus-or-minus two standard deviations from the
mean (and should contain 95% of the values that are derived from the QC sam-
ple), the control limits are set at plus-or-minus three standard deviations from the
mean (and should contain 99.7% of the values for the QC sample).
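The limit-setting rule above can be sketched as follows; `statistics.stdev` uses (n - 1) degrees of freedom, matching the sample statistics described earlier. The QC values are invented for illustration:

```python
import statistics

def control_limits(values):
    """Warning (+/- 2 s) and control (+/- 3 s) limits about the mean
    of repeated QC-sample measurements (at least seven, per the text)."""
    mean = statistics.mean(values)
    s = statistics.stdev(values)  # sample standard deviation, n - 1
    return {
        "mean": mean,
        "warning": (mean - 2 * s, mean + 2 * s),
        "control": (mean - 3 * s, mean + 3 * s),
    }

qc_results = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0]
limits = control_limits(qc_results)
```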
Five percent of the data for the QC samples will fall outside the warn-
ing limits, and less than 1% of the data will fall outside the control limits. An out-
of-control situation is considered to exist if two consecutive values fall outside
the warning limits (Taylor, 1987). Because 99.7% of the data should fall within
three standard deviations of the mean, a point outside the control limit is most
probably out-of-control, and corrective action is warranted. For example, if the
out-of-control value is for the standard reference material or QC check sample,
the entire analytical batch should be rerun. This may require reanalysis against
new calibration standards or could require taking new test portions through the
entire preparation procedure. However, if the out-of-control result is for a CCV
sample, only the routine samples within the interval bracketed by the out-of-con-
trol CCV and the previous in-control CCV need to be rerun. Usually this situa-
tion is a result of instrumental drift or some other time-dependent characteristic.
Four consecutive points outside the one standard deviation limits also are a cause
for concern.
A systematic trend in QC data also indicates an out-of-control situation.
Such trends may be shown by a series of seven values that occurs above or below
the mean or by patterns that appear in the data, which may relate to variables such
as room temperature, time of day, or the analyst.
As additional data are obtained, warning and control limits need to be
updated on a periodic basis. Depending upon the amount of data generated, this
updating may occur weekly, monthly, yearly, or after a certain number of values
are obtained. Examination of the data will indicate if the control limits are ade-
quate. If data consistently fall within one standard deviation of the mean, the
control limits are too wide to be useful in "controlling" the analytical system. Alter-
nately, if more than 5% of the data fall outside two standard deviations from the
mean, either the control limits do not adequately address the variability of the
analytical system and need to be updated or the system is severely out-of-control.
When the control limits are updated, all data that have been accumulated
should be used for estimating the statistics: mean and standard deviation. This
can best be accomplished by pooling the data. A statistical text can be consulted
as a source for the appropriate equations.
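One way the pooling might be sketched; this is the standard pooled-variance formula (combining sums of squares weighted by degrees of freedom), not an equation taken from the text, and the example batches are invented:

```python
import math

def pooled_std(groups):
    """Pooled standard deviation across several data sets: sums of
    squared deviations are combined and divided by the total
    degrees of freedom, i.e., the sum of (n_i - 1)."""
    ss_total = 0.0
    dof_total = 0
    for group in groups:
        n = len(group)
        mean = sum(group) / n
        ss_total += sum((x - mean) ** 2 for x in group)
        dof_total += n - 1
    return math.sqrt(ss_total / dof_total)

# Two accumulated batches of QC results pooled into one estimate
s_pooled = pooled_std([[10.1, 9.8, 10.0], [10.3, 9.9, 10.2]])
```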
When control charts are maintained and evaluated at the time of analysis,
corrective actions may be taken immediately. This will result in significant time
savings, because routine samples are not analyzed when the measurement system
is out-of-control. Some samples may be subject to a holding time during which
the parameter of interest must be measured. Plotting and evaluating control charts
in real time will allow for any reruns to be done before the samples are invali-
dated because of holding times.

Outliers
An outlier is something that does not belong in the population that is
described by the accumulated statistical data. An outlier may refer to a sample, a
test portion, an analytical measurement, or a grouping of data from one analyti-
cal run or from a laboratory in a proficiency program.
A suspected outlier may be identified on a control chart as a result that is
out-of-control. When all replicate measurements are ranked in order of magni-
tude, a suspected outlier may appear as an extraneous value. Often the outlier is
due to a transcription or calculation error, and the value can be corrected. In other
cases, contamination or analyte-loss may be suspected, especially when a QC
check sample indicates possible contamination or loss. Those outlier values can
be eliminated from the data set for cause.
Several statistical tests have been developed to evaluate data for outliers,
including Dixon, Grubbs, Cochran, and Youden tests. A statistical text (Barnett &
Lewis, 1978) can be consulted as a source for the appropriate test for identifying
outliers in a data set.
As a word of caution, data should not be eliminated capriciously; there
must be either an attributable cause or a statistical basis for the elimination of out-
liers from a data set. After outliers are removed, the descriptive statistics (i.e.,
mean, variance, and standard deviation) are determined.
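As an illustrative sketch, the Grubbs test statistic for the single most extreme value can be computed as below. The critical value quoted in the comment is approximate and depends on n and the significance level, so a table (e.g., in Barnett & Lewis, 1978) should be consulted before rejecting a point:

```python
import statistics

def grubbs_statistic(values):
    """Grubbs test statistic G: the largest absolute deviation from
    the mean, in units of the sample standard deviation."""
    mean = statistics.mean(values)
    s = statistics.stdev(values)
    return max(abs(x - mean) for x in values) / s

data = [10.0, 10.1, 9.9, 10.2, 10.0, 12.5]  # 12.5 looks extraneous
g = grubbs_statistic(data)
# For n = 6 at the 5% level the two-sided critical value is roughly
# 1.89; a G above that supports rejecting 12.5 for cause.
```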

PRECISION

Precision, as described earlier, is an important characteristic that is used to
demonstrate that the process is in control. The range, relative standard deviation,
coefficient of variation, or standard deviation conveys the degree of agreement
among the independent measurements.

Precision of the Analysis

To control the analytical precision or, conversely, to limit the analytical
imprecision, laboratory control samples and QC check samples are used. These
samples are chosen at concentrations near the midpoint of the calibration range.
Laboratory control samples are used for instrumental analyses that are particu-
larly susceptible to instrumental drift. For all analyses, at least one QC check
sample should be included with each analytical batch (run).
The laboratory control sample is analyzed on a certain interval, such as
every 10th measurement, to evaluate the calibration of the instrument. The result
is compared to control limits. If the result is acceptable, the analysis continues; if
the result is not acceptable, the instrument is recalibrated and all samples since
the last in-control result must be reanalyzed. The standard deviation of data from
the laboratory control samples is a measure of the precision of the instrumental
measurement system.
The quality control check sample may be a SRM or an in-house standard.
The parameter of interest must be stable over time in the quality control check
sample. The quality control check sample should be of similar matrix to the rou-
tine samples, and the concentration should be within the same range as for the
routine samples. If the in-house standard is a solution, its source should be dif-
ferent from that of the calibration standards. The quality control check sample
contains a "known" amount of the analyte of interest: for the SRM, values for
mean and standard deviation are listed on a certificate of analysis. For the in-
house standard, the mean analyte concentration and standard deviation are
derived from repetitive analysis, usually conducted in comparison to an SRM (if
an SRM is available). The quality control check sample is subjected to the same
analytical preparation procedure as the routine samples. If the analytical result for
the quality control check sample falls within two standard deviations of the mean,
the quality control check sample is in control. In other words, the analytical
preparation and the instrumental measurement system are considered to be per-
forming with acceptable precision. If the quality control check sample is out-of-
control, various problems are suspected including instrumental malfunction or
analyte contamination or loss. After appropriate corrective action, such as instru-
ment recalibration or repreparation of the entire batch of samples, reanalysis is
conducted.
Precision may vary with analyte concentration. In assessing overall analyt-
ical precision for a large study, SRMs or in-house standards that represent sever-
al analyte concentrations (low, medium, high) may be analyzed with each batch.
The resulting data are used to derive the standard deviation at each concentration.

Evaluating Sources of Imprecision

The concept of precision may be applied to sample replicates derived from
the routine samples, such as sampling (field) duplicates, preparation duplicates
(splits), analytical duplicates, and measurement duplicates. From such sample
pairs, imprecisions due to sampling, preparation (such as sample drying, sieving,
and splitting), analytical preparation, and instrument measurement can be
assessed. Note that the analytical results for the sampling duplicates also will
contain components of imprecision due to sample splitting and the laboratory
analysis (i.e., preparation and analysis), whereas the preparation duplicates also
will contain a component of imprecision due to the laboratory analysis. For stud-
ies that require multiple laboratories or multiple seasons, between laboratory or
between season precision is assessed by using split samples or replicate analyses
of SRMs. Precision may vary with analyte concentration; therefore, it is prudent
to choose analytical duplicates without concern for analyte concentration.
Initial evaluations of the sampling, preparation, analytical and measure-
ment duplicates can be made by using the relative standard deviation (RSD)

RSD = (s / x̄) × 100

where s = standard deviation and x̄ = mean.


Because RSD is dependent upon the mean analyte concentration, the RSD
for duplicates with low concentrations of the analyte of interest may be extreme-
ly high values. The RSD for duplicates containing the analyte of interest at or
above the quantitation limit will stabilize at a value of 20% or lower, depending
on the type of duplicate and the precision of the analytical method. For sampling
duplicates, any RSD less than 20% may represent acceptable routine data. Usu-
ally preparation duplicates have an RSD of approximately 10 to 15%, and ana-
lytical and measurement duplicates, approximately 5 to 10%.
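The RSD calculation and its dependence on concentration can be sketched as follows; the duplicate values are invented to illustrate the behavior the text describes:

```python
import statistics

def relative_std_dev(values):
    """RSD = (s / mean) x 100 for a set of replicate measurements."""
    mean = statistics.mean(values)
    if mean == 0:
        raise ValueError("mean is zero; RSD is undefined")
    return statistics.stdev(values) / abs(mean) * 100.0

# A duplicate pair near the quantitation limit gives a large RSD,
# while one well above it gives a small RSD
rsd_low = relative_std_dev([0.8, 1.2])     # about 28%
rsd_high = relative_std_dev([49.0, 51.0])  # about 3%
```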
If at least one duplicate pair is included in each sampling event, in each day
of sample preparation, in each analytical batch, and in each instrument run, the
imprecision associated with each activity can be assessed over the course of the
entire project. Usually, analytical duplicates are included at a frequency of 5 or
10%, that is, either 1 in 20 or 1 in 10, with a minimum of one per analytical batch.
Plots of RSD vs. increasing mean analyte concentration can serve two pur-
poses in evaluating routine sample data. Points that fall above the curve indicate
suspect data that should be examined for possible errors, such as transcription,
miscalculation, or sample misidentification. Points above the curve may identify
outliers. In addition, such plots can illustrate the method quantitation limit for the
routine samples.
Within (i.e., within the sampling event, within the daily sample preparation
procedure, within the analytical batch, within the instrument run) and overall
(i.e., over all sampling events, over all analytical batches, and over all instrument
runs) imprecisions can be determined from statistical analysis of study data.

Statistically, the total variance equals the sum of the variances attributable
to each source

s²Total = s²Sampling + s²Prep + s²Analysis + ... + s²n
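Under this additivity relation, a single variance component can be estimated by difference, assuming the sources of variance are independent; the numbers below are invented for illustration:

```python
def variance_by_difference(total_variance, other_components):
    """Estimate one variance component by subtracting the known
    components from the total, assuming the variances are additive
    and the sources are independent."""
    residual = total_variance - sum(other_components)
    if residual < 0:
        raise ValueError("known components exceed the total variance")
    return residual

# Illustrative: total variance 4.0; sampling and preparation
# contribute 2.5 and 0.9, leaving about 0.6 attributable to analysis
s2_analysis = variance_by_difference(4.0, [2.5, 0.9])
```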

Repeatability and Reproducibility

For purposes of collaborative study of analytical methods, AOAC International
describes repeatability and reproducibility as terms that relate specifically
to variability within and among laboratories. Repeatability (sr) is the expression
of "within" laboratory precision only. Reproducibility (SR) describes the precision
of data that occurs "among laboratories" and includes the "within laboratory"
component. For purposes of understanding, a third term is specified as the
imprecision among laboratories (sL).
These relate to one another as follows

s²R = s²r + s²L

The International Organization for Standardization (ISO) defines two terms:
repeatability value (r) and reproducibility value (R). These terms predict the 95% prob-
ability of agreement between two measured values of identical test material over
the shott-term and the long-term, respectively. The terms repeatability value (r)
and reproducibility value (R) are calculated as follows

r = (2√2)sr

R = (2√2)SR
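A small sketch of the ISO calculations, where 2√2 ≈ 2.8 (function names are illustrative):

```python
import math

def repeatability_value(s_r):
    """ISO repeatability value r = 2*sqrt(2)*s_r: the difference
    expected, at 95% probability, between two short-term results
    on identical test material."""
    return 2 * math.sqrt(2) * s_r

def reproducibility_value(s_R):
    """ISO reproducibility value R = 2*sqrt(2)*s_R: the analogous
    long-term, among-laboratory limit."""
    return 2 * math.sqrt(2) * s_R

r = repeatability_value(0.5)    # about 1.41
R = reproducibility_value(0.9)  # about 2.55
```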

Taylor (1987) uses these concepts to describe the short-term and long-term
standard deviations. The short-term standard deviation is usually smaller than the
long-term standard deviation or, in other words, the measurement system is usu-
ally more precise over short periods of time than over long periods of time. This
is because some sources of variability may not vary over short intervals of time.
For example, the same calibration curve may be used for sample measurements
or the same calibration standards may be used to derive the calibration curve.
Samples may be prepared with the same lots of reagents. The long-term standard
deviation is subject to a greater amount of variability from the identified sources.

ACCURACY

Accuracy is the agreement of a measured value to the true or theoretical
value of the parameter of interest.

True Value

The true value is the mean of the population of interest. This value cannot
be known, but can be estimated by the arithmetic mean derived from samples.
The best estimate of the true value is made by the sample mean of measurements
that are free of systematic error, that is, measurements that are unbiased. Both
precise and imprecise measurement systems can yield good estimates of the true
value. However, imprecise measurement systems (e.g., an analyte with a large
standard deviation) need a larger number of replicates to yield a good estimate of
the true value. The more precise a measurement system is, the fewer replicates
are needed to yield a good estimate of the true value.

Bias

Bias exists when a measurement system derives a central value higher or
lower than the true value. Bias may exist between or among data sets that have
been generated by two or more analytical methods, by two or more analysts, over
two or more seasons, or by two or more analytical laboratories. Bias may exist,
even when the data from the independent sources have been shown to be in con-
trol. Bias also may exist because of different sampling or sample preparation
practices. Bias should be evaluated before data sets from different sources are
combined.
Bias can be assessed only for data that have been derived from a measure-
ment system demonstrating statistical control, as described in the section "Con-
trol Charts." The effect of bias on sample results may be negative or positive,
additive or multiplicative, constant or concentration-related, or some combina-
tion of all possibilities. Data should be corrected for bias only if the origin of the
bias is understood and the cumulative effect of all biases on the data can be deter-
mined.
Bias may be attributed to a systematic error in the measurement system or
to an analytical method that produces biased data. For example, an incomplete di-
gestion or an extraction inefficiency caused by a temperature or time effect would
introduce method bias. Examples of measurement system bias are calibration er-
ror, dilution error, contamination of blanks or samples, and loss of the analyte of
interest. The common (although incorrect) practice of reporting the "best two out
of three" replicate analyses introduces bias, as well.
For some projects, sources of bias can be identified and eliminated or min-
imized at the time of sample collection and analysis. An example is contamina-
tion. Adequate use of blanks in the project design (sampling blanks and trip
blanks) and in the analytical laboratory (method or reagent blanks) can identify
possible sources of contamination. If contamination is shown to be a problem,
procedural controls can be instituted to eliminate or minimize identified sources
of contamination. Data are generated and reported to document the analyte levels
found in the various blank samples. Laboratory data should not be corrected for
method blanks. Likewise, analytical instruments should not be "zeroed" on the
method blank. More information is conveyed to the data user by reporting the
blank and routine data "as analyzed" rather than "corrected."
Other sources of bias may be identified through the use of various quality
control measures. For example, daily balance calibration checks will minimize
bias due to weighing. A check on the precision of delivery from a fixed or vari-
able volume pipet may identify the potential for bias in sample aliquoting. The
analysis of serial dilutions may identify sample dilution errors. An instrument
performance check will limit potential for bias from an instrumental measure-
ment system. Control limits for the slope of the calibration curve for an instru-
ment may identify bias introduced through calibration error. Quality control
check samples will indicate potential bias from a variety of sources in the overall
measurement system. When such control measures are part of the day-to-day
operation of a laboratory, corrective action can be taken when any check is out-
of-control. Therefore, the potential for producing biased data is minimized.
Another way to identify bias is by examining percentage recovery of spiked
samples. If percentage recovery is acceptable, e.g., 80 to 120%, the analytical method
is probably suitable for the samples of interest. If the recovery falls within estab-
lished control limits, the measurement process is within statistical control. No
matter what value is obtained for recovery, the result should be reported with the
routine data. When recoveries are variable or do not fall within established con-
trol limits, corrective action is warranted. However, for the following reasons,
recovery data must not be used to correct the routine data: (i) The spike may be
more easily recovered than the naturally occurring analyte of interest. (ii) The
spike may be consumed or "fixed" by the routine sample. (iii) The spike may oth-
erwise be affected by the sample or solution matrix. Recovery from spiked sam-
ples is the least useful way in which to evaluate sample bias. Note that recovery
from laboratory control samples, i.e., spiked blanks, describes calibration bias
and does not estimate sample bias.
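The recovery calculation described above can be sketched in a few lines of Python. The function names and the 80 to 120% window are ours, taken only as an illustration from the text; real control limits should come from the laboratory's own control-chart data.

```python
def percent_recovery(spiked_result, unspiked_result, spike_added):
    """Percentage recovery of a matrix spike.

    All three arguments must be in the same units (e.g., mg/kg):
    spiked_result   -- analyte found in the spiked aliquot
    unspiked_result -- analyte found in the routine, unspiked aliquot
    spike_added     -- known amount of analyte added to the spike
    """
    return 100.0 * (spiked_result - unspiked_result) / spike_added

def recovery_in_control(recovery, lower=80.0, upper=120.0):
    """True if the recovery falls within the control limits.  The 80 to
    120% default window is illustrative only; derive real limits from
    the laboratory's own control data."""
    return lower <= recovery <= upper
```

For example, `percent_recovery(14.2, 5.0, 10.0)` returns 92.0, which falls within the illustrative window. As stated above, the recovery is reported alongside the routine data but is never used to correct it.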
Some analytical methods produce biased data. This bias may be evaluated
by analysis of SRMs. Another alternative is to compare data derived from one
method to data derived from another, independent method. A statistically signif-
icant difference between the analytical result from the method under evaluation
and the known analyte value for the SRM, or the value obtained by using the ref-
erence method, can be attributed to bias. The bias should be examined for the
range of concentration of the analyte and in various sample matrices. (Consider
that the constituents of mineral and organic soils occur in such variation that
"soil" may describe multiple matrices.) Method bias may be related to the amount
of some reactive constituent in the sample matrix rather than to the analyte con-
centration. Often, if the biased method produces more precise data, the biased
method may be preferred. Another consideration may be the cost and time to per-
form the analysis. If the biased method is more cost and time efficient, the biased
method may be preferred.
In summary, biased data may be obtained from unbiased analytical meth-
ods. Many of the sources of bias can be identified and eliminated or, at least, min-
imized. Real corrective action is preferred to correcting the data for bias. In some
situations, an analytical method that is known to produce biased data may be pre-
ferred for use. For approaches to the examination of interlaboratory bias, refer to
the section "Blind Duplicates and Split Samples."

CONTAMINATION
Contamination of the sample may occur during sampling, shipment, prepa-
ration, storage, or laboratory analysis. The types of blank samples that can be
used to examine these sources of contamination were considered in the section
"Bias."
Crossover or "memory effect" is a condition that may occur during analy-
sis and will influence the instrument response. These effects must be minimized,
because it is extremely difficult to quantify them and apply an appropriate cor-
rection factor. For example, when sample preparations are introduced into instru-
ments such as ultraviolet/visible (UV-Vis) spectrophotometers, atomic absorption
spectrophotometers, and inductively coupled plasma spectrometers, the samples
may contact various surfaces, such as plastic, glass, quartz, and metal. These sur-
faces absorb or adsorb part of the sample material. When a sample contains a high
amount of an analyte, measurements on subsequent samples may be influenced
by the release of the absorbed/adsorbed analyte over an extended length of time.
An analyst may test the susceptibility of an instrument system to memory
effects by analyzing the calibration standards in random order rather than in the
order of increasing concentration. Whenever a high concentra-
tion sample has been introduced into an instrument, the calibration blank can be
run after the sample as a check for memory effects. To rectify the problem, the
system can be purged with deionized water, dilute acid, or the method blank until
a subsequent sample is introduced. Some automated instrument systems include
a rinse between samples, and often the length of time for the rinse can be con-
trolled by the instrument software.
Gas chromatographs may exhibit the same conditions when samples which
contain high boiling compounds are not sufficiently purged from the system
before the subsequent sample is introduced. Elimination of the problem is simi-
lar to the procedure described above in that a purge operation should be intro-
duced between samples.

BLIND DUPLICATES AND SPLIT SAMPLES

In the section "Evaluating Sources of Imprecision," the use of sample repli-
cates to evaluate sources of imprecision was presented. Two additional types of
replicate samples are used to examine laboratory performance. Within a labora-
tory, blind duplicates are submitted as a further assessment of within laboratory
precision. These intralaboratory samples also can be used to evaluate the ana-
lysts' ability to handle "difficult" matrices. Between or among laboratories, split
samples are analyzed to estimate interlaboratory precision and bias. In both situ-
ations, the data also should be used as a basis for corrective action.
The use of split samples is influenced by the analytical "situation" that is
under evaluation:

Scenario One

In this situation, analytical data are generated for a specific project. The
split samples are part of the project design. Either the project involves the use of
several laboratories, or one laboratory analyzes samples over several years or sea-
sons. Ultimately, the data from all laboratories or all seasons are to be combined
for use.

Blind Duplicates
For meaningful statistics to be obtained for intralaboratory precision, each
laboratory (or one laboratory in each season of the project) analyzes at least seven
pairs of blind duplicates for each analyte at each concentration range of interest.
The incorporation of these additional blind duplicates is unnecessary if the dupli-
cates described in the section "Evaluating Sources of Imprecision" (i) are chosen
randomly or are designated by the project management; (ii) are not given prefer-
ential treatment by the analyst, that is, the duplicate samples are prepared and
analyzed in the same manner as are routine samples; and (iii) are sufficient in
number to derive meaningful statistics for within laboratory precision.

Split Samples
For the assessment of interlaboratory precision and bias, splits from a sta-
ble, homogeneous sample are analyzed by each laboratory (or by one laboratory
during each season) for each analyte at each concentration of interest. Again, at
least seven splits should be provided to each laboratory (or to one laboratory dur-
ing each season) for analysis of each analyte in each concentration range of inter-
est. Statistics are used to evaluate estimates of precision and bias between labo-
ratories.
If interlaboratory bias is negligible (i.e., there are no significant differences
between analyte means) and precision is not significantly different between lab-
oratories, the data are comparable. If there are significant differences between
analyte means, but the estimates of within laboratory precision are not signifi-
cantly different, the statistical data from the split samples may be used to derive
correction factors to normalize the routine data prior to further evaluation. If data
generated from the split samples can be correlated to SRM performance data
from the laboratories during the same time frame, accuracy can be evaluated for
all laboratories (or each season) in addition to the relative interlaboratory bias.
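One common way to test for a significant difference between laboratory means is Welch's t-test, which does not assume equal variances. The sketch below uses only Python's standard library and is an illustration, not the only valid statistical treatment of split-sample data.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent sets of split-sample results from two laboratories."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    se = (va / na + vb / nb) ** 0.5
    t = (mean(a) - mean(b)) / se
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

If |t| exceeds the critical value from a t table at the returned degrees of freedom, the difference between the laboratory means is significant and interlaboratory bias should be suspected.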

Scenario Two

In this situation, data from multiple laboratories are used to make decisions,
such as fertilizer recommendations or whether to excavate contaminated soil or
not. Usually these laboratories are commercial or "production" laboratories. Ulti-
mately the management and the customer want assurance that the data from dif-
ferent sources can be used in combination and will result in the same recommen-
dation.

Blind Duplicates
Blind duplicates are submitted by the laboratory management or quality
assurance personnel on a periodic basis for each analyte of interest. The identity
of the sample is not known by the analyst, and the analyst is not given any infor-
mation about the expected analyte concentration; these samples are referred to as
"double blinds." The submitter examines the resulting data in comparison to the
original, routine data and evaluates the results according to the expectation for
within laboratory precision. When data for any analyte do not meet the accep-
tance criteria, possible errors in areas such as calculation, weighing, diluting, and
calibration are investigated. Reanalysis or the submission of another blind dupli-
cate may occur if attributable causes cannot be identified. This is an effective way
to identify parts of the analytical process which may need corrective action or
additional oversight.

Split Samples
Split samples for "production" laboratories may be run in conjunction with
a reference laboratory or may be from a round robin.
When a reference laboratory is used, the originating laboratory provides a
split sample to the reference laboratory. The reference laboratory must use the
same analytical procedure that was used to generate the original, routine data.
The data from the originating and reference laboratories are evaluated according
to the objective for interlaboratory precision. Data which do not meet criteria are
investigated in both the originating and the reference laboratories for attributable
errors. This is another opportunity for identifying parts of the analytical process
which may need corrective action or additional oversight. If a group of "produc-
tion" laboratories uses the same reference laboratory, interpretations can be made
regarding the interlaboratory bias or comparability of data among the "produc-
tion" laboratories. Trends in data can be observed even if interlaboratory bias is
not rigorously determined.
If a proficiency testing program (round robin) is employed for identifying
interlaboratory bias, splits from one homogeneous material are sent from the ref-
eree to the various participating laboratories. All participating laboratories may
not use the same analytical procedure for the analyses unless it is a requirement
of the program. The data are reported to the referee within a certain time frame.
Then the referee assembles the data, performs statistical evaluation of the data,
and issues a report to the participating laboratories. In this way, the proficiency
of the laboratory is assessed against the consensus statistics. If the data are pre-
sented in a graphic display, the relative interlaboratory bias is illustrated.
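Referees commonly summarize each participating laboratory's result as a z-score against the consensus statistics. The sketch below is one widespread convention, not necessarily the evaluation used by any particular program, and the function name is ours.

```python
from statistics import mean, stdev

def z_scores(results, consensus_mean=None, consensus_sd=None):
    """Z-score for each laboratory's result against consensus statistics.

    If the referee's assigned values are not supplied, the mean and
    standard deviation of the reported results themselves are used as
    the consensus (a simple, common choice).  By convention, |z| <= 2 is
    'satisfactory', 2 < |z| < 3 'questionable', |z| >= 3 'unsatisfactory'.
    """
    m = consensus_mean if consensus_mean is not None else mean(results)
    s = consensus_sd if consensus_sd is not None else stdev(results)
    return [(x - m) / s for x in results]
```

Plotting the z-scores gives the graphic display of relative interlaboratory bias mentioned above.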

CALIBRATION

The purpose of calibration is to eliminate or minimize bias in the overall
measurement system. The measurement of analyte concentration is dependent
upon an assemblage of sampling and measurement processes, all of which need
to be calibrated. Often, only the calibration of the detection/instrument system
receives attention, and the support components, that measure variables such as
temperature, volume, mass, particle size, and flow rates, are overlooked. These
support components need to be calibrated to assure that their accuracy is within
acceptable limits.
In order to produce defensible data, the calibration of all components must
be traceable to an acceptable standard or physical constant. For example, a ther-
mometer may be calibrated against an NIST standard thermometer, or the ther-
mometer may be calibrated against the physical constants of an ice bath and boil-
ing water (taking into account barometric pressure and solute content of the
water). An analytical balance may be calibrated by a service technician against
weights that were calibrated by NIST or by a state weights-and-measures labora-
tory in comparison to NIST-calibrated weights. In the calibration of physical
standards, the responsibility for traceability belongs to the person or organization
that performs the calibration.
Calibration of an instrument system must be related to standards. The stan-
dards may be prepared by the analyst from materials whose purity or concentra-
tion was verified by a manufacturer against SRMs from NIST. In this example,
only the certified value of the SRM is traceable to NIST; all other links in the
traceability chain are the responsibility of the manufacturer and the analyst. In the
laboratory, records for calibration standards must document source of material
and details of preparation and also must include traceability for physical mea-
surements, such as mass and volume, that were made during the preparation.

Range of Analysis

Because of inherent characteristics, some analytical methods may be suit-
able for a specific analyte concentration range only. Other methods are applica-
ble over a wide range, and their usefulness is dependent upon the instrumentation
that is employed for the detection of the analyte. Most detection/instrument sys-
tems rely on an indirect comparison between calibration standards and samples.
The most common calibration protocol is to use a certain number of cali-
bration standards (usually at least three plus a calibration blank) to establish a
"response" at each concentration. Each response is plotted vs. the corresponding
concentration, and a straight line is drawn or fitted to the data using regression or
least squares techniques. A plot on graph paper or using computer software will
show if the data are linear and will identify potential outliers in the calibration
data. Without the aid of complicated, computer-assisted protocols to transform
data or to fit quadratic or polynomial curves to calibration standards, a linear cal-
ibration curve is the easiest to construct and use.
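The plot-and-fit step above can be written as a bare ordinary-least-squares fit in a few lines of Python. The function names are ours, and a real instrument data system might add weighting or outlier checks that this sketch omits.

```python
def fit_line(conc, resp):
    """Ordinary least-squares fit of instrument response vs. standard
    concentration.  Returns (slope, intercept, r), where r is the
    correlation coefficient of the fit."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(resp) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in resp)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r

def concentration(response, slope, intercept):
    """Invert the calibration line to report a sample concentration."""
    return (response - intercept) / slope
```

A sample response is converted to a concentration by inverting the fitted line, subject to the bracketing requirements discussed below.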
A calibration curve has lower and upper limits to linearity. The interval
between the lowest standard and the origin (0,0) of the plot must not be assumed
to be a straight line. Also, the curve above the highest calibration standard should
not be extrapolated as a linear curve. In general terms, calibration standards must
bracket the expected concentrations of the analyte in the routine samples, or the
samples must be diluted so that the resulting concentration falls within the cali-
bration range.
There are several characteristics of analytical data derived from calibration
curves that impact the results that are reported. These are method detection limit,
limit of quantitation, and limit of linearity.

Method Detection Limit

The method detection limit (MDL) is based on the ability of the method to
determine the concentration of an analyte in a sample matrix. The MDL is calcu-
lated as three times the standard deviation (s) of at least seven replicate mea-
surements of method blanks (if an instrumental response can be measured) or
low-level samples (at approximately 3 to 5 times the estimated MDL). To assess
the potential variability associated with different analysts, different days of
preparation, and different instrument calibrations, the seven replicate measure-
ments (six degrees of freedom) should be made on different days or shifts.

MDL = 3s

Because MDLs are based on the standard deviation, which is not an addi-
tive statistic, MDLs cannot be averaged. However, data that are obtained over a
period of time can be pooled to obtain an MDL. Often a low-concentration sam-
ple is included in each analytical run as an MDL check sample. The resulting data
can be used to determine whether the stated MDL is maintained over time.
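The MDL calculation reduces to a few lines, sketched here using the 3s definition given in the text (note that some regulatory programs instead use a Student's t multiplier on the standard deviation):

```python
from statistics import stdev

def method_detection_limit(replicates):
    """MDL as three times the standard deviation of at least seven
    replicate measurements of a method blank or low-level sample."""
    if len(replicates) < 7:
        raise ValueError("at least seven replicates are required")
    return 3 * stdev(replicates)
```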

Limit of Quantitation

The limit of quantitation (LOQ) is the lowest level at which analytical mea-
surement becomes meaningful in quantitating a result. Analytical results below
this limit are reported as "less than" values. Although the LOQ has been arbi-
trarily defined by an American Chemical Society committee as 10 times the stan-
dard deviation of the blank, the data user should examine the data to decide if this
definition is justified. This value is probably suitable for spiked reagent water, but
is not suitable for more complex matrices. It is more likely that, for soils, the
appropriate LOQ is a higher multiple of the standard deviation. An empirical
value for the LOQ can be derived by examining the inflection point in the curve
of RSD vs. increasing analyte concentration for analytical duplicates, as
described in the section "Evaluating Sources of Imprecision."
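The multiple-of-the-blank definition can be sketched as follows; the default factor of 10 is only the ACS convention quoted above, and the adjustable multiplier reflects the point that a higher multiple may be warranted for soils:

```python
from statistics import stdev

def limit_of_quantitation(blank_replicates, multiplier=10):
    """LOQ as a multiple of the standard deviation of replicate blank
    measurements.  The default factor of 10 is the ACS convention; for
    complex matrices such as soils a higher multiplier may be more
    appropriate, ideally chosen empirically as described in the text."""
    return multiplier * stdev(blank_replicates)
```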

Limit of Linearity

The limit of linearity (LOL) is the upper level of reliable measurement.
Beyond the LOL, the calibration curve is no longer linear, that is, a change in con-
centration no longer results in a consistent change in the instrument response.

Linear Versus Quadratic Calibration Curves

Linear curves are most commonly used for laboratory analysis. A straight-
line relationship between the concentration and the response is easily understood
and allows direct calculation of the result. A correlation coefficient can be calcu-
lated to determine the "straightness" of the curve. It is commonly accepted that a
calibration curve should have no fewer than three standards distributed from low to
high concentration. Criteria can be established to evaluate the acceptability of the
curve before it is used. After determining the linearity of a curve by using multi-
ple standards, curves with very high correlation coefficients can be determined on
a routine basis with as few as two standards.
Nonlinear calibration curves can be used effectively to analyze samples.
The shape of the curve must be reproducible both in curvature and magnitude. A
nonlinear curve can be used to increase the range of analysis. In atomic absorp-
tion spectroscopy, for example, a less sensitive wavelength for an element can be
chosen to eliminate the need for serial dilutions. It is possible to choose a wave-
length less influenced by interferences which has a nonlinear curve for an ele-
ment to improve the accuracy of the results.
The number of calibration standards needed increases as the curve departs from
linearity. When a nonlinear curve is to be used for analysis, the number of cali-
bration standards must be increased significantly.
Plotting nonlinear curves manually is an acceptable procedure. Day-to-day
comparisons of curvature and magnitude can be made readily.
Computer software used on analytical instruments allows for the use of
nonlinear calibration curves. These computers and software packages which per-
form curve-fitting operations can be used effectively. Quadratic equations or
polynomial equations can be used to define the calibration curve. The analyst
must have sufficient experience to choose the correct equation for the curve. Care
should be taken to assure that the correct equation is used for the curve and that
the system generating the curve is reproducible.
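A quadratic calibration can be fitted without any curve-fitting package by solving the least-squares normal equations directly. The sketch below (function name ours) shows the kind of computation instrument software performs internally, usually with additional weighting options.

```python
def fit_quadratic(conc, resp):
    """Least-squares fit of resp = a*conc**2 + b*conc + c by solving
    the 3x3 normal equations with Cramer's rule (no external libraries).
    Returns the coefficients (a, b, c)."""
    # Power sums of the concentrations: s[k] = sum of x**k
    s = [sum(x ** k for x in conc) for k in range(5)]
    # Mixed sums: t[k] = sum of y * x**k
    t = [sum(y * x ** k for x, y in zip(conc, resp)) for k in range(3)]
    # Normal-equation matrix for the unknowns [a, b, c]
    m = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    rhs = [t[2], t[1], t[0]]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    coeffs = []
    for col in range(3):           # Cramer's rule, one column at a time
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = rhs[r]
        coeffs.append(det3(mc) / d)
    return tuple(coeffs)
```

As the text cautions, the fitted curve should be checked for reproducibility in both curvature and magnitude before it is used for routine samples.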

