
Calibration

In this unit, we will review how to construct a calibration curve. Each calibration has its own limits. A proper understanding of those limits will help you develop the best possible calibration and avoid many problems.

Constructing a calibration curve

You typically have two (or more) variables to work with.
One (or more) is set at known values:
- your analyte
- other experimental conditions
One is a measured response:
- absorbance, current, area, ...

For a simple two-variable calibration curve we commonly assume that we are dealing with two types of variables:
- Independent - the one we set
- Dependent - the one we measure
In reality, both variables should be considered independent. We rely on developing a model to show that the two variables are related.

The simplest approach to developing our model is to:
- Select a series of known analyte standards.
- Hold other factors constant - or as many as possible.
- Measure the response.
- Develop a model (calibration curve).
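As a minimal sketch of these steps (the concentrations and absorbance values here are invented for illustration, assuming numpy is available):

```python
import numpy as np

# Known analyte standards (e.g. concentration in mg/L) and the measured
# responses (hypothetical absorbances) -- other factors held constant.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
resp = np.array([0.02, 0.21, 0.39, 0.62, 0.80, 1.01])

# Develop the model: a first-order (linear) calibration curve.
b1, b0 = np.polyfit(conc, resp, 1)
print(f"response = {b1:.4f} * conc + {b0:.4f}")
```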

Our response may actually rely on a vast number of factors. Examples:
- matrix
- interfering analytes
- random errors
- sample preparation
- sample collection method
- ...

So our response is actually a measure of an entire method. The relationship between analyte and response is a function of the type of method. Examples:
- gravimetry: f(mass) = amount
- chromatography: f(area) = amount
- ISE: f(mV) = log[ ]

Most methods have a fixed range where the relationship between response and analyte amount is valid.

To initially establish linear range, sensitivity and detection limits, we commonly rely on the external standard method.
- Separately run knowns and unknowns.
- Assume that the only difference in response is due to the analyte.
- Develop a model to show any relationships and limits.

[Figure: response vs. analyte amount, showing the linear range running from the LOQ up to the LOL]

- LOD: limit of detection
- LOQ: limit of quantitation
- LOL: limit of linearity

Linear modeling
General unweighted least squares. Assumptions:
- Standards are correct and all errors come from the measurement of response.
- Variances are independent of analyte concentration.

Linear model

R = b1 X + b0 + e

R = response
b1 = slope parameter
X = standard value
b0 = intercept parameter
e = residual error

We've already shown how to calculate the slope and intercept.

slope:

b1 = (ΣXY - (ΣX)(ΣY)/N) / (ΣX² - (ΣX)²/N)

intercept:

b0 = (ΣY - b1 ΣX) / N

We can estimate the variances via propagation of errors - produced during ANOVA analysis.

s²(b1) = s²Y / Σ(Xi - X̄)²

s²(b0) = s²Y ΣX² / (N Σ(Xi - X̄)²)

s²Y = [Σ(Yi - Ȳ)² - b1² Σ(Xi - X̄)²] / (N - 2)
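These summation formulas can be checked numerically. A sketch with made-up standards (assuming numpy):

```python
import numpy as np

# Hypothetical standards (X) and responses (Y)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
N = len(X)

# Slope and intercept from the summation formulas
b1 = (np.sum(X * Y) - np.sum(X) * np.sum(Y) / N) / (np.sum(X**2) - np.sum(X)**2 / N)
b0 = (np.sum(Y) - b1 * np.sum(X)) / N

# Residual variance about the regression line, df = N - 2
s2_Y = (np.sum((Y - Y.mean())**2) - b1**2 * np.sum((X - X.mean())**2)) / (N - 2)

# Variances of the slope and intercept parameters
s2_b1 = s2_Y / np.sum((X - X.mean())**2)
s2_b0 = s2_Y * np.sum(X**2) / (N * np.sum((X - X.mean())**2))
```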

Linear models and uncertainty


Any linear model has some degree of uncertainty associated with it. At a given confidence level, our model actually represents a regression band.

Calculating the regression bands


Determine the desired confidence limit and look up the proper t value (use a one-sided t value, df = N - 2).
Calculate the predicted X value based on the predicted R (response) value. This is so you can make a more complete plot.
For each point, calculate your interval value as:

slope range = b1 ± t·s(b1)

intercept range = b0 ± t·s(b0)

C = t·sY·[1/N + (X' - X̄)² / Σ(Xi - X̄)²]^(1/2)

Calculating the regression band

Your regression band is then calculated as:

lower: b1 X + b0 - C
upper: b1 X + b0 + C

You are just plotting out the confidence limits for each data point.
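A sketch of the band calculation (the data are made up; scipy is assumed available for the t value):

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
N = len(X)

b1, b0 = np.polyfit(X, Y, 1)
# Standard error of the regression, df = N - 2
s_Y = np.sqrt(np.sum((Y - (b1 * X + b0))**2) / (N - 2))

# One-sided t value at 95% confidence, df = N - 2
t = stats.t.ppf(0.95, N - 2)

# Interval value C at each point, then the lower/upper band
C = t * s_Y * np.sqrt(1 / N + (X - X.mean())**2 / np.sum((X - X.mean())**2))
lower = b1 * X + b0 - C
upper = b1 * X + b0 + C
```

The band is narrowest at X̄ and widens toward either end, which is why the `(X' - X̄)²` term appears in C.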

Uncertainty near the detection limit


[Figure: relative error (±100%) of the analysis near the LOD and LOQ; the variation of the response and the variation of the standard combine into the resulting variation of the analysis, shown with 95% and 99% bands]

LOD - point below which we can no longer be confident that our analyte is present.
LOQ - point below which we can no longer confidently report a quantitative value.

Number of measurements and uncertainty

[Figure: relative error of the confidence interval (10%, 5%, 1%) vs. number of measurements (10 to 50)]

Detection limit, sensitivity & linear range

The calibration curve and regression band can determine:
- Detection limit. Smallest amount we can see with a known level of confidence. The upper CL at Y = 0, then converted to concentration (X).
- Sensitivity. Smallest change in amount we can see with a known level of confidence. Often based on smallest change that an instrument can display.
- Linear range. Range where we can quantify with a known level of confidence (lower = DL, upper = where the curve intersects the upper/lower CL).
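Following the detection-limit definition above (the upper confidence limit of the response at zero analyte, converted back to concentration through the slope), a sketch with invented data (assuming numpy and scipy):

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data
X = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
Y = np.array([0.05, 2.10, 3.95, 6.20, 7.85, 10.10])
N = len(X)

b1, b0 = np.polyfit(X, Y, 1)
s_Y = np.sqrt(np.sum((Y - (b1 * X + b0))**2) / (N - 2))
t = stats.t.ppf(0.95, N - 2)          # one-sided t, df = N - 2

# Band half-width C evaluated at X' = 0 (zero analyte)
C0 = t * s_Y * np.sqrt(1 / N + (0 - X.mean())**2 / np.sum((X - X.mean())**2))

# Upper confidence limit of the response at zero analyte,
# converted to a concentration via the slope.
detection_limit = C0 / b1
```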

Example

[Figure: calibration plot with noise ±2000, annotated with the detection limit, sensitivity and linear range; response 0 to 12000 vs. concentration 0 to 120]

[Figure: the same noise ±2000 data expanded near the origin; response 0 to 2000 vs. concentration 0 to 20]

[Figure: calibration plot with noise ±100; response 0 to 200 vs. concentration 0 to 2]

Using the residuals

A plot of the residuals can give you an idea of how well your model fits the data.

residual = measured - predicted

(Saved time by using the upper 95% confidence limit value that XLStat provided for [ ] = 0.)

[Figure: error vs. analyte concentration. This residual plot indicates a reasonable fit of the data to the model.]

OK, these are reasonable fits of a linear model, but we should check the residuals to be sure.

[Figure: Regression of Noise ±2000 by Concentration (R² = 0.965) and Regression of Noise ±100 by Concentration (R² = 1.000); response vs. concentration, 0 to 120]

[Figure: standardized residuals vs. concentration for the ±2000 example and the ±100 example; both sets fall within about ±2]

Residuals are normalized by the standard error. This results in both of our plots looking pretty much the same, since the error introduced was random (even if it varied by a large factor). Typically, expect to see residual values randomly distributed within about ±2 units for a good model. Values around ±3 would indicate a problem; ±4 shouldn't happen.

So, by using standardized residuals, we can directly compare the two data sets - even though the actual noise level is significantly different.
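Standardized residuals can be computed by dividing each residual by the standard error of the regression. A sketch with hypothetical data (no XLStat required, assuming numpy):

```python
import numpy as np

# Hypothetical, roughly linear data
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([1.1, 2.3, 2.8, 4.2, 5.1, 5.9])
N = len(X)

b1, b0 = np.polyfit(X, Y, 1)
residuals = Y - (b1 * X + b0)                 # measured - predicted
s_e = np.sqrt(np.sum(residuals**2) / (N - 2))  # standard error, df = N - 2
std_resid = residuals / s_e

# For a good model these should scatter randomly within about +/-2.
print(std_resid)
```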

Here is an example that demonstrates a bad model. We're going to try and model MPG vs. car weight.

[Figure: Regression of MPG by Weight, tons; MPG (5 to 40) vs. weight, tons (1.5 to 4.5)]

[Figure: standardized residuals vs. weight, tons. This residual plot indicates that an improper model was used.]

Now what? The model clearly has a problem. Researchers attempted different models and found that log(weight) gave a better fit.

Also, MPG is determined differently in other countries - the USA uses miles traveled per gallon, but it is often fuel consumed per fixed distance (100 miles). So, they used log(wt) vs. 100/MPG.

[Figure: Regression of 100/mpg by Log(Wt); 100/mpg (1 to 8) vs. Log(Wt) (0.2 to 0.7)]

[Figure: standardized residuals vs. Log(Wt), ranging from about -3 to +3]

Still not perfect. The residual range is excessive. Ultimately, the best model had to include other factors. Including Drive Ratio turned out to be the key in developing the best model.

[Figure: Pred(100/mpg) vs. 100/mpg for the final model, and its standardized residuals vs. 100/mpg, now within about ±2]

Using the residuals

From the Octane summer/winter blend example.

[Figure: standardized residuals vs. a1360 (0.34 to 0.42)]

It's clear that there are two different types of samples, so a simple calibration model won't work. This residual plot indicates that there is some sort of response dependency based on the sample used. This was a candidate for ANCOVA - consider using Analysis of Covariance to confirm.

ANCOVA can then be used to build a model that includes the season factor as a way of merging what are clearly two different (but related) models.

[Figure: Pred(Octane #) vs. Octane # (83 to 93) for the merged model, and its standardized residuals vs. Octane #]
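One way to fold a season factor into the regression is an indicator (dummy) variable, which is the essence of the ANCOVA-style merged model. A sketch - the a1360 and octane values below are invented for illustration, with the winter samples offset from the summer ones:

```python
import numpy as np

# Hypothetical absorbance (a1360) and octane values for two seasons
a1360  = np.array([0.35, 0.37, 0.39, 0.41, 0.35, 0.37, 0.39, 0.41])
octane = np.array([84.0, 86.1, 88.0, 90.1, 86.0, 88.1, 90.0, 92.1])
winter = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # season indicator

# Design matrix: intercept, a1360 slope, and the season factor
A = np.column_stack([np.ones_like(a1360), a1360, winter])
coef, *_ = np.linalg.lstsq(A, octane, rcond=None)

# coef[2] is the offset between the two seasonal calibrations
print(coef)
```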

Standard addition

A calibration method where standards are added to replicates of your sample. You then measure the total response.
- The matrix is near identical for all samples.
- You can measure the response away from LOD and LOQ values.

Initial response, no standard addition:

R0 = k C0 = k (n0 / V0)

Response when a standard is added:

RT = R0 + RS = k (n0 + nS) / (V0 + VS)

We can then set Q as:

Q = RT (V0 + VS) = k n0 + k nS

We can then plot Q vs. nS added, using different amounts of the standard.

[Figure: sample replicates with added standard amounts: none (n0 only), n1, n2, n3. With an approach like this, V0 + VS would be the same for all standards.]

By extrapolating to the x intercept, where Q = 0, we have

k n0 = -k ni   and   n0 = -ni

where ni is the x-intercept value.

The primary advantage of this method is that you can move your measurement from an area of high relative error.
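The extrapolation is just a straight-line fit of Q against the added amount nS. A sketch with invented numbers (k and n0 chosen for illustration, assuming numpy):

```python
import numpy as np

# Added standard amounts (mol) and the measured Q = RT * (V0 + VS)
n_S = np.array([0.0, 1.0e-6, 2.0e-6, 3.0e-6])
Q   = np.array([2.0e-3, 4.1e-3, 5.9e-3, 8.0e-3])   # ~ k*n0 + k*nS

# Fit Q = k*nS + k*n0; slope is k, intercept is k*n0
k, k_n0 = np.polyfit(n_S, Q, 1)

# x intercept (Q = 0) sits at -n0, so:
x_intercept = -k_n0 / k
n0 = -x_intercept          # original amount of analyte in the sample
```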
