
Talanta 48 (1999) 729-736

Short communication
Intra-laboratory testing of method accuracy from recovery
assays
A. Gustavo Gonzalez *, M. Angeles Herrador, Agustín G. Asuero
Department of Analytical Chemistry, University of Seville, 41012 Seville, Spain
Received 27 April 1998; received in revised form 6 August 1998; accepted 7 August 1998
Abstract
A review of intra-laboratory testing of the accuracy of analytical methods from recovery assays is given. Procedures based on spiked matrices and spiked samples are presented and discussed. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: Accuracy; Recovery assays; Spiked samples; Spiked matrices
1. Introduction
The accuracy of an analytical method is a key feature for validation purposes [1,2]. Four principal approaches have been proposed for the study of the accuracy of analytical methods [3-5]. They are based on: (i) the use of certified reference materials (CRM); (ii) the comparison of the proposed method with a reference one; (iii) the use of recovery assays on matrices or samples; and (iv) round-robin studies (collaborative tests).
CRMs, when available, are the preferred control materials because they are directly traceable to international standards or units. The procedure consists of analyzing a sufficient number of CRMs and comparing the results against the certified values [4,6,7]. The Community Bureau of Reference of the Commission of the European Community (BCR, Bureau Communautaire de Reference), the Laboratory of the Government Chemist of Middlesex (LGC), the National Institute of Standards and Technology of the USA (NIST) and the National Institute for Environmental Studies of Japan provide a general coverage of certified reference materials [8-10]. Nevertheless, a series of shortcomings and limitations of CRMs have been pinpointed [11], especially their cost, the small amounts that may be purchased and the narrow range of matrices and analytes covered.
The performance of a newly developed method can be assessed by comparing the results obtained with it against those found with a reference or comparison method of known accuracy and precision [12-16].
* Corresponding author. Tel.: +34 5 4557173; fax: +34 5
4557168; e-mail: agonzale@cica.es
0039-9140/99/$ - see front matter © 1999 Elsevier Science B.V. All rights reserved.
PII S0039-9140(98)00271-9
The use of collaborative studies to control methodological bias is a very important topic [17,18]. However, proficiency testing and round-robin studies will not be considered here, as this paper deals with the internal (intra-laboratory) control of accuracy.
Unfortunately, within the realm of environmental, toxicological and pharmaceutical analysis, neither CRMs nor alternative methods are available for new contaminant, toxic and drug-related analytes. Accordingly, the remaining way of checking method accuracy, that is, recovery assays, is the tool of the trade for the study of accuracy in pharmaceutical analysis.
The aim of the present paper is to review, outline and discuss suitable procedures for testing accuracy in pharmaceutical analysis from recovery assays. For the sake of illustration, some examples taken from the literature are discussed.
2. General overview on recovery assays
Before embarking on the body of the subject, some terminology should be established. According to the IUPAC paper (1990) on nomenclature for sampling in analytical chemistry [19,20], we will use the term test portion for the quantity of material removed from the test sample which is suitable in size for measuring the concentration of the determinant by the selected analytical method. The test portion may be dissolved, with or without reaction, to give the test solution. An aliquot (fractional part) of the test solution is then measured by following the analytical operating procedure. The test portion consists of analyte plus matrix [21].
Accordingly, if a test portion of weight m is dissolved into a total volume V, the test solution will have a concentration c = m/V, which is the sum of the analyte concentration (x) plus the matrix concentration (z).
Consider now the newly proposed analytical method, which is applied to dissolved test portions of a given sample within the linear dynamic range of the analytical response (Y). This response may be expressed by the following relationship involving both the analyte and matrix amounts [22]:

Y = A + Bx + Cz + Dxz    (1)

where A, B, C and D are constants.
A is a constant that does not change when the concentrations of the matrix, z, and/or the analyte, x, change. It is called the true sample blank [23] and may be evaluated by using the Youden sample plot [24-26], which is defined as the sample response curve [23]. In our terminology, the application of the selected analytical operating procedure to different test portions, m (different masses taken from the test sample), produces different analytical responses Y as outputs. The plot of Y versus m is the Youden sample plot, and the intercept of the corresponding regression line is the so-called Total Youden Blank (TYB), which is the true sample blank [23-30]. As will be discussed below, when a matrix without analyte is available the term A can be determined more easily.
Bx is the fundamental term that justifies the analytical method and is directly related to the analytical sensitivity [31].
Cz is the contribution from the matrix, depending only on its amount, z. When this term occurs, the matrix is called interferent. In general this kind of interference is very infrequent, because a validated analytical method should be selective enough with respect to the potential interferences appearing in the samples where the analyte is determined [32]. The USP monograph [33] defines the selectivity of an analytical method as its ability to measure an analyte accurately in the presence of interferences, such as synthetic precursors, excipients, enantiomers and known or likely degradation products that may be present in the sample matrix. Accordingly, the majority of validated methods do not suffer from such a direct matrix interference. In any case, as will be discussed below, the method accuracy may be tested even when faced with interferent matrices.
Dxz is an analyte/matrix interaction term. This matrix effect occurs when the sensitivity of the instrument to the analyte depends on the presence of the other species (the matrix) in the sample [31]. For the purpose of determining analytes, this effect may be overcome by using the method of standard additions (MOSA) [23-30].
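To make Eq. (1) and the Youden sample plot concrete, the following Python sketch simulates responses for several test portions taken from the same test sample, so that both the analyte and the matrix scale with the portion mass m, and then estimates the true sample blank A as the intercept of the regression of Y on m. All coefficients, mass fractions and noise levels are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np

# Hypothetical coefficients of Eq. (1): Y = A + B*x + C*z + D*x*z
A, B, C, D = 0.05, 2.0, 0.10, 0.0        # D = 0 assumed (no analyte/matrix interaction)
w_analyte, w_matrix = 0.02, 0.98         # mass fractions of analyte and matrix in the sample
V = 100.0                                # final volume of the test solution (ml)

m = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # test-portion masses (g)
x = m * w_analyte / V                          # analyte concentration in each test solution
z = m * w_matrix / V                           # matrix concentration in each test solution

rng = np.random.default_rng(0)
Y = A + B * x + C * z + D * x * z + rng.normal(0, 0.002, m.size)  # simulated responses

# Youden sample plot: regression of Y on the test-portion mass m
slope, intercept = np.polyfit(m, Y, 1)
print(f"Total Youden Blank (intercept) = {intercept:.4f}  (true sample blank A = {A})")
```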
Certain types of samples, of which pharmaceutical dosage forms are just an example, enable the sample matrix to be simulated by a laboratory preparation procedure with all the excipients present in their corresponding amounts except the analyte of interest. In other cases, the sample matrix may be a synthetic mixture of naturally complex substances, such as analyte-free body fluids from unmedicated patients. In both cases, it is said that the placebo is available and recoveries are obtained from spiked placebos. The term placebo was used by Cardone [28] to refer to analyte-free materials. However, in order to avoid confusion and to unify the jargon, the word matrix (or blank matrix) will be used throughout the text. Sometimes, however, it is not possible to prepare a matrix without the presence of analyte. This may occur, for instance, with lyophilized materials, in which the speciation is significantly different when the analyte is absent [3]. In these cases, the matrix is not available and the MOSA must be applied [23-30], the recoveries being obtained from spiked samples. In spiked matrices or spiked samples, the analyte addition cannot be performed blindly; addition should be accomplished over the suitable analyte range.
Some analytes, when incorporated naturally into the matrix, are chemically bound to its constituents. In such cases, the mere addition of analyte to the sample or matrix will not mirror what happens in practice. It is recommended that the analyte is added to the matrix and then left in contact for several hours, preferably overnight, before applying the analytical method, to allow analyte/matrix interactions to occur [34]. Spiked samples or matrices are also called fortified ones, the concentration of analyte added being the corresponding fortification level. In the following, the two procedures for demonstrating accuracy from recovery tests, namely spiked matrices and spiked samples, will be outlined. For the sake of generality, in all cases the coefficient C of Eq. (1) will be considered significant.
3. Recovery tests from spiked matrices
The recovery test is carried out from spiked matrices over the analyte range of interest, generally 75-125% of the expected assay value (label claim or theory), holding the matrix at the nominal constant level z = z_0. The analytical response when analyte is added follows the equation

Y = A + Bx + Cz_0 + Dxz_0 = (A + Cz_0) + (B + Dz_0)x = A′ + B′x    (2)

where A′ and B′ are the intercept and the slope of the calibration line in the matrix environment, which are constant because B, C, D and z_0 are also fixed.
By using this calibration function, several amounts of analyte are added to the matrix. From the analytical signal at each addition i, Y_i, the amount of analyte found is estimated as x̂_i = (Y_i − A′)/B′. The recovery (Rec) may be estimated as the average of the individual recoveries obtained at each spike i (Rec_i = x̂_i/x_i). Alternatively, a regression analysis of found versus added analyte concentrations may be performed, and the slope may be taken as the average recovery, as will be discussed in Section 5.
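A minimal sketch of this spiked-matrix procedure is given below. The matrix-matched calibration parameters A′ and B′ of Eq. (2), the spike levels and the responses are all hypothetical; the point is simply the arithmetic of converting responses into found amounts and individual recoveries.

```python
import numpy as np

# Hypothetical spike levels (75-125% of the expected assay value, concentration units)
x_added = np.array([7.5, 8.75, 10.0, 11.25, 12.5])

# Hypothetical matrix-matched calibration parameters A' and B' of Eq. (2),
# assumed here to be known from a calibration performed in the blank matrix
A_prime, B_prime = 0.012, 0.198

# Hypothetical measured responses for the spiked matrices
Y = np.array([1.506, 1.742, 1.998, 2.238, 2.481])

x_found = (Y - A_prime) / B_prime          # found amounts from the calibration line
rec_i = x_found / x_added                  # individual recoveries, Rec_i
print("individual recoveries:", np.round(rec_i, 3))
print("mean recovery:", round(rec_i.mean(), 4))
```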
4. Recovery tests from spiked samples
In a way similar to the preceding section, the application of the MOSA to the test portions (matrix plus analyte) is performed. An important requirement of this technique is that all solutions, unspiked and spiked test portions, be diluted to the same final volume. If we carry out analyte additions, x, on the final solution of a given test portion of concentration c_0 = x_0 + z_0 (x_0 being the concentration of analyte coming from the sample, present in the final solution), then the analytical response will be:

Y = A + B(x_0 + x) + Cz_0 + D(x_0 + x)z_0
  = (A + Cz_0) + (B + Dz_0)(x_0 + x)
  = A′ + B′(x_0 + x)
  = A′ + B′x_0 + B′x = A″ + B′x    (3)
Note that both A″ and B′ are constant because A, B, C, D, x_0 and z_0 are also constants. By using this calibration function, from the analytical signal at each addition i, Y_i, the amount of analyte found is estimated as x̂_i = (Y_i − A″)/B′. This is an estimate of the added analyte concentration (x_i) rather than of the total analyte concentration (x_0 + x_i). A crucial requirement is that the total final analyte concentration obtained for the maximum amount of analyte spiked should remain within the linear range established at the method development step. The remainder is the same as discussed above.
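The following sketch (again with hypothetical numbers) mimics the spiked-sample case of Eq. (3): the simulated responses depend on the total analyte (native x_0 plus spike), but using the MOSA intercept A″ = A′ + B′x_0 returns estimates of the added amounts only, which is what the recovery test requires.

```python
import numpy as np

# Hypothetical model parameters of Eq. (3)
A_prime, B_prime = 0.02, 0.15   # A' and B' (matrix-affected intercept and slope)
x0 = 12.0                       # analyte already present in the final test solution

x_spiked = np.array([0.0, 3.0, 6.0, 9.0, 12.0])        # added analyte (MOSA additions)
rng = np.random.default_rng(1)
Y = A_prime + B_prime * (x0 + x_spiked) + rng.normal(0, 0.003, x_spiked.size)

# Fit Y versus the ADDED amounts: the intercept estimates A'' = A' + B'*x0
B_hat, A_dprime = np.polyfit(x_spiked, Y, 1)

x_found_added = (Y - A_dprime) / B_hat     # estimates of the added (not total) analyte
rec_i = x_found_added[1:] / x_spiked[1:]   # recoveries for the non-zero spikes
print("recoveries of the added analyte:", np.round(rec_i, 3))
```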
5. Evaluation of the recovery from spiked matrices or samples and significance tests for assessing accuracy
The calculation of recoveries from spiked matrices or samples may be performed (i) by computing the individual recoveries for each spiked amount (fortification level) of analyte and evaluating the recovery as their average; or (ii) from linear regression analysis of added versus found data.
5.1. Method of the averaged recovery
In this case, for each fortification level or analyte spike i, we have the added concentration of analyte, x_i. From the calibration graph, the estimated concentration of analyte, x̂_i, is obtained. An individual recovery is then calculated as Rec_i = x̂_i/x_i. The mean recovery, Rec, is calculated as the average of the individual ones:

Rec = (1/n) Σ_{i=1}^{n} Rec_i    (4)
The average recovery may be tested for significance by using the Student t-test, the null hypothesis being that the recovery is unity (or 100 in percentage) and the method is accurate:

t = (Rec − 1) / (s_Rec / √n)    (5)

with

s_Rec = √[ Σ_{i=1}^{n} (Rec_i − Rec)² / (n − 1) ]    (6)

n being the number of spike levels.
If the t value obtained from Eq. (5) is less than the tabulated value for n − 1 degrees of freedom at a given significance level, then the null hypothesis is accepted and the method is accurate.
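Eqs. (4)-(6) translate directly into a few lines of Python, sketched below with hypothetical individual recoveries; scipy is used only to obtain the tabulated t value (the one-sided 95% value, which is the 2.015 used in the worked example of Section 6).

```python
import numpy as np
from scipy import stats

rec = np.array([0.98, 1.01, 0.99, 1.02, 1.00, 0.97])   # hypothetical individual recoveries

n = rec.size
rec_mean = rec.mean()                                  # Eq. (4)
s_rec = rec.std(ddof=1)                                # Eq. (6), sample standard deviation
t_obs = (rec_mean - 1.0) / (s_rec / np.sqrt(n))        # Eq. (5)

t_crit = stats.t.ppf(0.95, df=n - 1)                   # ppf(0.95, 5) gives 2.015, as in Section 6
print(f"mean recovery = {rec_mean:.4f}, s = {s_rec:.4f}")
print(f"t observed = {t_obs:.3f}, t critical (95%, {n - 1} d.f.) = {t_crit:.3f}")
print("method accurate" if abs(t_obs) < t_crit else "recovery differs from 100%")
```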
5.2. Regression analysis of added versus found data
Both for spiked matrices and for spiked samples, a regression analysis of estimated (found) against spiked (added) analyte concentration is performed. These studies are not new, indeed. In a landmark paper [35], Mandel and Linnig studied accuracy in chemical analysis using linear calibration curves, by applying regression analysis to the linear relationship

x_found = a + b·x_added    (7)

where x_found and x_added refer to the above concentrations of analyte estimated (x̂) and spiked (x).
The theory predicts a value of 1 for the slope, b, and a value of 0 for the intercept. However, the occurrence of systematic and random errors in the analytical procedure may produce deviations from the ideal situation [36]. Thus, it may occur that the straight line has a slope of 1 but a non-zero intercept, arising from a wrongly estimated background signal and reflecting the need for a blank correction in the calibration graph. Another possibility is that the slope is significantly different from unity, indicating a source of proportional error in the proposed analytical method. The plot may present curvature or may even exhibit peculiar behaviour in the case of analyte speciation.
Once the parameters a and b have been calculated from the linear fit of x_found versus x_added, and before evaluating the recovery, diagnostic checking of the residuals (responses − model predictions), that is, x_found − a − b·x_added at each spike level, should be applied to assess the validity of the model fit [37]. Certain underlying assumptions have been outlined for the regression analysis, such as the independence of the random errors, homoscedasticity and a Gaussian distribution of the random errors [38]. If the model represents the data suitably, the residuals should be randomly distributed about the value predicted by the model equation, following a normal distribution. A plot of the residuals on normal probability paper is a useful technique [39]; if the error distribution is normal, the plot will be linear. On the other hand, examination of plots of residuals against the independent variable (here x_added) may be of great help in the diagnosis of regression models [40]. Systematic patterns indicate that the model is incorrect in some way: a sector pattern indicates heteroscedasticity in the data [41,42], and a non-linear pattern indicates that the present model is incorrect [43]. Residuals may also be used to detect outliers. A very straightforward way is to consider as an outlier any calibration point whose residual is greater than twice the standard deviation of the regression line, although jackknife residuals or the Cook distance method are more accurate tools for detecting outliers [44].
The heteroscedasticity revealed by residual analysis of the added versus found data comes from heteroscedasticity in the responses (Y) of the calibration curve, which is propagated to the estimates x_found. In such a case, instead of using ordinary least squares to fit the calibration straight line, the use of weighted least squares is advised. The weights w_i are given by w_i = 1/s_i², where s_i is the standard deviation of the responses replicated at the analyte concentration x_i [41,42,45].
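The residual checks and the weighted fit described above can be sketched as follows, with hypothetical added/found data and replicate standard deviations. The 2s outlier rule follows the text, and numpy.polyfit is used for both fits; since polyfit squares the supplied weights internally, passing 1/s_i reproduces the w_i = 1/s_i² weighting.

```python
import numpy as np

# Hypothetical added and found concentrations, with replicate-based standard deviations
x_added = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
x_found = np.array([5.1, 9.8, 15.2, 19.7, 25.6, 29.5])
s_i     = np.array([0.05, 0.09, 0.15, 0.20, 0.26, 0.31])   # sd of x_found at each level

# Ordinary least-squares fit of found versus added (Eq. (7)) and residual diagnostics
b, a = np.polyfit(x_added, x_found, 1)
resid = x_found - (a + b * x_added)
s_reg = np.sqrt(np.sum(resid**2) / (x_added.size - 2))     # sd of the regression line
outliers = np.abs(resid) > 2 * s_reg                       # simple 2s rule from the text
print(f"OLS: a = {a:.3f}, b = {b:.3f}, outlier flags: {outliers}")

# Weighted least squares with w_i = 1/s_i^2 (pass 1/s_i because polyfit squares the weights)
b_w, a_w = np.polyfit(x_added, x_found, 1, w=1.0 / s_i)
print(f"WLS: a = {a_w:.3f}, b = {b_w:.3f}")
```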
Non-linear patterns detected by residual analysis of the added versus found plots arise because the calibration graph used for estimating the analyte concentration is far from linear. Curved patterns suggest the inclusion of a quadratic term in x² in the calibration function. In these situations, suitable non-linear calibration curves should be established in order to obtain unbiased estimates of x_found. Three methods are available for this purpose, namely the method of linear segments, the method of the three-parameter function and methods based on polynomial functions [46,47].
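Of the three options, the polynomial route is the easiest to sketch. The example below fits a quadratic calibration to hypothetical, slightly curved data and inverts it numerically within the working range to obtain a concentration estimate from a measured response.

```python
import numpy as np

# Hypothetical calibration data showing slight curvature
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
resp = np.array([0.01, 0.41, 0.79, 1.14, 1.46, 1.75])

coeffs = np.polyfit(conc, resp, 2)          # quadratic calibration: Y = c2*x^2 + c1*x + c0

def concentration_from_response(y, coeffs, lo=0.0, hi=10.0):
    """Invert the quadratic calibration within the working range [lo, hi]."""
    c2, c1, c0 = coeffs
    roots = np.roots([c2, c1, c0 - y])
    real = roots[np.isreal(roots)].real
    inside = real[(real >= lo) & (real <= hi)]
    return inside[0] if inside.size else np.nan

print(round(concentration_from_response(1.0, coeffs), 3))
```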
After the validity of the fit has been appraised, a statistical comparison of a and b with their ideal values, 0 and 1, must be performed. Conventional individual confidence intervals for the slope and the intercept, based on the t-test (t_a = a/s_a; t_b = (b − 1)/s_b) once their standard deviations s_a and s_b have been calculated, are frequently used by workers [48-50], but can lead to erroneous conclusions because these tests, when carried out independently of each other, ignore the strong correlation between slope and intercept [51]. Instead of these individual tests, the elliptic joint confidence region (EJCR) for the true slope (β) and intercept (α), derived by Working and Hotelling [52] and adopted by Mandel and Linnig [35], is recommended. Its equation is

n(a − α)² + 2(Σ x_i)(a − α)(b − β) + (Σ x_i²)(b − β)² = 2s²F_{2,w}    (8)

where n is the number of points, s² the regression variance and F_{2,w} the critical value of the Snedecor-Fisher statistic with 2 and w = n − 2 degrees of freedom at a given P% confidence level, usually 95% [53,54].
The centre of the ellipse is (a, b). Any point (α, β) which lies inside the EJCR is compatible with the data at the chosen confidence level P. In order to check for constant (translational) or proportional (rotational) bias, the values α = 0 and β = 1 are compared with the estimates a and b using the EJCR. If the point (0, 1) lies inside the EJCR, then biases are absent [13] and, consequently, the recovery may be taken as unity (or 100% on the percentage scale). This can be done from easy calculations as described in Appendix A.
Once the recovery is computed (by one or the other procedure), it should be checked against the accuracy criteria of the AOAC guidelines [55], as indicated in Table 1. Note that for trace analysis, e.g. drug residues in tissues, recoveries of about 50% are often the best that can be achieved.

Table 1
Analyte recovery depending on the concentration range

Analyte concentration (%)    Recovery range (%)
≥10                          98-102
≥1                           97-103
≥0.1                         95-105
≥0.01                        90-107
≥0.001 to ≥0.00001           80-110
≥0.000001                    60-115
≥0.0000001                   40-120
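For routine use, Table 1 can be encoded as a simple lookup, as sketched below; the thresholds and ranges are taken directly from the table, while the treatment of concentrations below its last row is an assumption.

```python
# Acceptable recovery ranges (%) versus analyte concentration (%), after Table 1
AOAC_RANGES = [
    (10,        (98, 102)),
    (1,         (97, 103)),
    (0.1,       (95, 105)),
    (0.01,      (90, 107)),
    (0.00001,   (80, 110)),   # covers the 0.001-0.00001% band of Table 1
    (0.000001,  (60, 115)),
    (0.0000001, (40, 120)),
]

def acceptable_range(analyte_conc_percent):
    """Return the (low, high) recovery range in % for a given analyte concentration in %."""
    for threshold, limits in AOAC_RANGES:
        if analyte_conc_percent >= threshold:
            return limits
    return (40, 120)   # below the last row of Table 1: assumption, widest band kept

print(acceptable_range(0.7))   # trigonelline in coffee (0.5-0.85%) -> (95, 105)
```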
6. Worked example
For the sake of illustration, a case study was selected. It deals with the recovery of trigonelline in coffee extracts determined by ion chromatography.
Trigonelline is determined in green and roasted coffee extracts by an ion chromatographic procedure using polybutadiene-maleic acid (PBDMA) coated on silica as the stationary phase, 2 mmol l⁻¹ aqueous hydrochloric acid (pH 3) as eluent and UV detection at 254 nm. Test solutions were prepared by refluxing test portions of 3 g of dried coffee (green or roasted) with hot water (80°C) for 1 h. The extract was filtered and diluted to 250 ml. For trigonelline analysis, the test solution is diluted 1:5 (v/v) and 3 ml of the resulting solution are passed through a C18 SPE cartridge; the eluent is then also passed to collect a total volume of 10 ml. An aliquot of this latter solution was filtered through a 0.45 μm filter unit and subsequently injected into the HPLC system [56].
Owing to the lack of suitable certified reference materials, the validation methodology was based on recovery assays from spiked extracts of green and roasted coffees. Roasted coffees have trigonelline contents within 0.50-0.85% (w/w, dry basis). The corresponding extracts will present trigonelline concentrations in the range 60-102 mg l⁻¹, which corresponds to 15-25.5 mg of trigonelline. Six spikes, additions or fortification levels were selected. Spiked extracts were allowed to stand overnight and then analyzed. The added and found amounts of trigonelline are shown in Table 2.
Table 2
Recovery study of trigonelline in coffee extracts

x_added (mg)   x_found (mg)   Rec     Estimated x_found (mg)   Residual
5              5.02           1.004    4.88                     0.14
10             9.89           0.989    9.94                    −0.05
15             14.94          0.996   15.00                    −0.06
20             19.81          0.990   20.06                    −0.25
25             25.30          1.012   25.12                     0.18
30             30.22          1.007   30.18                     0.04

x_added and x_found refer to the amounts (in mg) of trigonelline added and found by analysis in the different extracts; the estimated x_found values and residuals correspond to the regression of found versus added amounts discussed below.

The individual recoveries at each fortification level are also given in Table 2. The averaged recovery is 0.9997 and its standard deviation 0.0094. By applying Eq. (5), the observed t value is 0.078. The critical value for 5 degrees of freedom at a 95% confidence level is 2.015, and therefore the null hypothesis is accepted and the recovery does not differ statistically from
100%. The same conclusion can be drawn from a comparison of the averaged recovery on the percentage scale, 99.97%, with the ranges provided in Table 1. Considering that the trigonelline contents of coffee are 0.5-0.85%, the third row of the table (≥0.1%) is selected and the corresponding recovery range is 95-105%. The averaged recovery of 99.97% lies within this interval and consequently meets the requirements of the AOAC guidelines.
On the other hand, the regression approach can be carried out. The plot of x_found versus x_added was linear, with an intercept of −0.18 and a slope of 1.012. The correlation coefficient was about 0.9999 and the regression variance s² = 0.0355. In Table 2 the estimated values of x_found and the corresponding residuals are presented. Residual analysis did not show any pathology, with the exception of a large value (−0.25) for the point corresponding to the addition of 20 mg. However, the absolute value of this residual is less than twice the standard deviation of the regression line (s = 0.1884) and hence the point cannot be considered an outlier. A plot of the residuals on normal probability paper was fairly linear, which ratifies the validity of the model.
In order to test whether the slope and the intercept simultaneously do not differ from the ideal values α = 0 and β = 1, the procedure based on the EJCR was applied, as explained in Appendix A. Thus, by taking the critical value of the Snedecor-Fisher statistic at a 95% confidence level, F_{2,4} = 6.94, we obtain β_1 = 0.2304 < 1 and β_2 = 1.7770 > 1. This indicates that the point (0, 1) lies inside the EJCR; the intercept may therefore be considered to be zero and the slope to be unity, which leads to the conclusion that the recovery can be taken as 100%.
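The calculations of this worked example can be reproduced with the short script below, which applies Eqs. (4)-(7) to the data of Table 2; small differences from the figures quoted above may arise from the rounding of the tabulated values.

```python
import numpy as np
from scipy import stats

# Data of Table 2 (mg of trigonelline added and found)
x_added = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
x_found = np.array([5.02, 9.89, 14.94, 19.81, 25.30, 30.22])

# Averaged-recovery approach, Eqs. (4)-(6)
rec = x_found / x_added
n = rec.size
t_obs = (rec.mean() - 1.0) / (rec.std(ddof=1) / np.sqrt(n))
t_crit = stats.t.ppf(0.95, df=n - 1)       # 2.015 for 5 degrees of freedom
print(f"mean recovery = {rec.mean():.4f}, t = {t_obs:.3f}, t_crit = {t_crit:.3f}")
print("accurate" if abs(t_obs) < t_crit else "biased")

# Regression of found versus added, Eq. (7)
b, a = np.polyfit(x_added, x_found, 1)
resid = x_found - (a + b * x_added)
s2 = np.sum(resid**2) / (n - 2)            # regression variance
print(f"a = {a:.3f}, b = {b:.3f}, s = {np.sqrt(s2):.4f}")
```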
Appendix A
The equation of the isoprobability ellipse is given by Eq. (8). One very easy way to determine whether the point (0, 1), corresponding to the joint null hypothesis α = 0 and β = 1, lies inside the ellipse (and therefore whether the null hypothesis is accepted) is to consider the intersections of the straight line α = 0 with the ellipse: only when the point (0, 1) lies inside the ellipse do the straight line α = 0 and the ellipse intersect at two points (0, β_1) and (0, β_2) fulfilling, for β_2 > β_1, both β_2 > 1 and β_1 < 1 simultaneously. Otherwise, the point (0, 1) lies outside the ellipse.
If in Eq. (8) we set α = 0 and z = b − β, the following expression is obtained:

(Σ x_i²) z² + (2a Σ x_i) z + [na² − 2s²F_{2,w}] = 0    (9)
After the following changes,

L = Σ x_i²,   M = 2a Σ x_i,   N = na² − 2s²F_{2,w}    (10)
the roots for z will be

z_1 = [−M + √(M² − 4LN)] / (2L),   z_2 = [−M − √(M² − 4LN)] / (2L)    (11)
and consequently β_1 = b − z_1 and β_2 = b − z_2, which are the parameters needed to check whether the point (0, 1) lies inside or outside the EJCR.
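This recipe translates directly into code. The sketch below implements Eqs. (9)-(11), using scipy only for the F critical value; it returns the verdict on the point (0, 1) together with β_1 and β_2. The usage line applies it, for illustration, to the regression results reported in Section 6; the inside/outside verdict is the quantity of interest.

```python
import numpy as np
from scipy import stats

def ejcr_contains_0_1(x_added, a, b, s2, alpha=0.05):
    """Check whether the point (0, 1) lies inside the EJCR of Eq. (8),
    using the roots beta_1, beta_2 of Eqs. (9)-(11)."""
    x = np.asarray(x_added, dtype=float)
    n = x.size
    F = stats.f.ppf(1 - alpha, 2, n - 2)            # F_{2, n-2} critical value

    L = np.sum(x**2)                                # Eq. (10)
    M = 2 * a * np.sum(x)
    N = n * a**2 - 2 * s2 * F

    disc = M**2 - 4 * L * N
    if disc < 0:                                    # the line alpha = 0 misses the ellipse
        return False, None, None
    z1 = (-M + np.sqrt(disc)) / (2 * L)             # Eq. (11)
    z2 = (-M - np.sqrt(disc)) / (2 * L)
    beta1, beta2 = b - z1, b - z2                   # beta_1 < beta_2
    return (beta1 < 1.0 < beta2), beta1, beta2

# Usage sketch with the Section 6 regression results (a, b and s^2 as reported there)
inside, b1, b2 = ejcr_contains_0_1([5, 10, 15, 20, 25, 30], a=-0.18, b=1.012, s2=0.0355)
print(inside, b1, b2)
```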
References
[1] ICH, International Conference on Harmonization, Validation of Analytical Procedures, Note for Guidance, Commission of the European Community, Brussels, 1995.
[2] M. Thompson, Analyst 121 (1996) 285-288.
[3] J.M. Green, Anal. Chem. News Features May 1 (1996) 305A-309A.
[4] M. Thompson, Anal. Proc. 27 (1990) 142-144.
[5] J.K. Taylor, Anal. Chem. 55 (1983) 600A-608A.
[6] R. Sutarno, H.F. Steger, Talanta 32 (1985) 439-445.
[7] M. Valcarcel, A. Ríos, Analyst 120 (1995) 2291-2297.
[8] E. Prichard, Quality in the Analytical Chemistry Laboratory, ACOL Series, Appendix 3: Some Sources of Reference Materials, Wiley, Chichester, UK, 1995, pp. 255-256.
[9] M. Valcarcel, A. Ríos, Materiales de referencia, in: M. Valcarcel, A. Ríos (Eds.), La calidad en los laboratorios analíticos, Editorial Reverte, Barcelona, 1992, Ch. 6, pp. 177-222.
[10] K. Lamble, S.J. Hill, Analyst 120 (1995) 413-417.
[11] Analytical Methods Committee, Analyst 120 (1995) 29-34.
[12] A.C. Metha, Analyst 122 (1997) 83R-88R.
[13] A.G. Gonzalez, A.G. Asuero, Fresenius J. Anal. Chem. 346 (1993) 885-887.
[14] A.G. Gonzalez, A. Marquez, J. Fernandez-Sanz, Comput. Chem. 16 (1992) 25-27.
[15] M. Thompson, Anal. Chem. 61 (1989) 1942-1945.
[16] B.D. Ripley, M. Thompson, Analyst 112 (1987) 377-383.
[17] J.K. Taylor, J. Assoc. Off. Anal. Chem. 69 (1986) 398-400.
[18] G.T. Vernimont, Interlaboratory evaluation of an analytical process, in: W. Spendley (Ed.), Use of Statistics to Develop and Evaluate Analytical Methods, ch. 4, AOAC, Arlington, VA, 1990, pp. 87-143.
[19] W. Horwitz, Pure Appl. Chem. 62 (1990) 1193-1208.
[20] R.E. Majors, LC-GC Int. 5 (1992) 8-14.
[21] U.R. Kunze, Probenahme und Probenvorbereitung, in: Grundlagen der quantitativen Analyse, ch. 2, 3rd ed., Georg Thieme Verlag, Stuttgart, 1990, pp. 2-4.
[22] R. Ferrús, Analytical function, calibration, interference, and modellization in quantitative chemical analysis, in: Miscel·lània Enric Cassasas, Universitat Autònoma de Barcelona, Bellaterra, 1991, pp. 147-150.
[23] M.J. Cardone, Anal. Chem. 58 (1986) 438-445.
[24] W.J. Youden, Anal. Chem. 19 (1947) 946-950.
[25] W.J. Youden, Biometrics 3 (1947) 61.
[26] W.J. Youden, Mater. Res. Stand. 1 (1961) 268-271.
[27] M.J. Cardone, J. Assoc. Off. Anal. Chem. 66 (1983) 1257-1282.
[28] M.J. Cardone, J. Assoc. Off. Anal. Chem. 66 (1983) 1283-1294.
[29] M.J. Cardone, J.G. Lehman, J. Assoc. Off. Anal. Chem. 68 (1985) 199-202.
[30] L. Cuadros Rodríguez, A.M. García Campaña, F. Ales Barrero, C. Jimenez Linares, M. Roman Ceba, J. AOAC Int. 78 (1995) 471-476.
[31] K.S. Booksh, B.R. Kowalski, Anal. Chem. 66 (1994) 782A-791A.
[32] L. Huber, LC-GC Int. 11 (1998) 96-105.
[33] United States Pharmacopeia XXIII, National Formulary XVIII, The United States Pharmacopeial Convention, Rockville, MD, 1995, pp. 1610-1612.
[34] E. Prichard, Selecting the method, in: Quality in the Analytical Chemistry Laboratory, ACOL Series, ch. 3, Wiley, Chichester, UK, 1995, pp. 67-101.
[35] J. Mandel, F.J. Linnig, Anal. Chem. 29 (1957) 743-749.
[36] J.C. Miller, J.N. Miller, Errors in instrumental analysis; regression and correlation, in: Statistics for Analytical Chemistry, 3rd ed., ch. 5, Prentice Hall, Chichester, UK, 1993, pp. 101-139.
[37] W.P. Gardiner, Statistical Analysis Methods for Chemists, The Royal Society of Chemistry, Cambridge, 1997, pp. 182-185.
[38] A.G. Gonzalez, Anal. Chim. Acta 360 (1998) 227-241.
[39] E. Morgan, Chemometrics: Experimental Design, ACOL Series, Wiley, Chichester, UK, 1991, pp. 126-128.
[40] M. Meloun, J. Militky, M. Forina, Chemometrics for Analytical Chemistry, vol. 2, Ellis Horwood, London, 1994, pp. 64-69.
[41] J.S. Garden, D.G. Mitchell, W.N. Mills, Anal. Chem. 52 (1980) 2310-2315.
[42] M. Davidian, P.D. Haaland, Chemom. Intell. Lab. Syst. 9 (1990) 231-248.
[43] P.C. Meier, R.E. Zünd, Statistical Methods in Analytical Chemistry, Wiley, New York, 1993, pp. 92-94.
[44] J.N. Miller, Analyst 118 (1993) 455-461.
[45] Analytical Methods Committee, Analyst 119 (1994) 2363-2366.
[46] L.M. Schwartz, Anal. Chem. 49 (1977) 2062-2068.
[47] L.M. Schwartz, Anal. Chem. 51 (1979) 723-727.
[48] Y. Lacroix, Analyse chimie, interpretation des resultats par le calcul statistique, Masson et Cie, Paris, 1962, pp. 31-33.
[49] K. Doerfel, Statistik in der analytische Chemie, 4th ed., VCH, Weinheim, 1987, pp. 137-155.
[50] R.J. Tallarida, R.B. Murray, Manual of Pharmacologic Calculations with Computer Programs, 2nd ed., Springer, New York, 1987, p. 16.
[51] P.D. Lark, Anal. Chem. 26 (1954) 1712-1725.
[52] H. Working, H. Hotelling, Proc. J. Am. Stat. Assoc. 24 (1929) 73-85.
[53] J.S. Hunter, J. Assoc. Off. Anal. Chem. 64 (1981) 574-583.
[54] K.A. Brownlee, Statistical Theory and Methodology in Science and Engineering, Wiley, New York, 1965, pp. 362-366.
[55] AOAC, Peer Verified Method Program, Manual on Policies and Procedures, Arlington, VA, November 1993.
[56] M.J. Martín, F. Pablos, M.A. Bello, A.G. Gonzalez, Fresenius J. Anal. Chem. 357 (1997) 357-358.