4.3.4 Detection Limit
The detection limit (DL—also sometimes referred to as the limit of detection or
LOD) is defined as the lowest concentration of an analyte in a sample that can be
detected but not necessarily quantitated. It is a limit test that establishes whether
an analyte is above or below a certain value. Usually expressed as the concentra-
tion of the analyte (e.g., percentage, parts per billion) in the sample, the DL can be
determined by approaches based on visual examination, signal-to-noise (S/N), or on
a calculation based upon the standard deviation of the response and the slope of a
calibration curve.
Visual examination can be used in both instrumental and noninstrumental
approaches, for example, the presence or absence of a peak in a chromatogram, or a
color change in a titration. Visual examinations can be highly subjective, however,
and are not in common use.
The S/N approach can be used with analytical procedures that exhibit baseline
noise. Determination of the S/N ratio is performed by comparing measured signals
from samples of known low concentrations of analyte with those of blank samples
and establishing the minimum concentration at which the analyte can be reliably
detected. Typically, the signal is measured from baseline to peak apex and divided
by the peak-to-peak noise determined from a blank injection. It is important that the
noise be measured in the blank chromatogram during the same elution window as
the peak of interest. An S/N ratio between 3:1 and 2:1 is generally considered accept-
able for estimating the detection limit.
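As an illustration of this estimate, the following Python sketch computes S/N from a sample chromatogram and a blank chromatogram over the same elution window, with the signal taken from baseline to peak apex and the noise taken as the peak-to-peak excursion of the blank. The array names, window indices, and synthetic data are assumptions made for the example, not part of any specific method.

# Minimal sketch of the S/N estimate described above.
import numpy as np

def signal_to_noise(sample_trace, blank_trace, window):
    """Estimate S/N for a peak eluting within `window` (a slice of data points)."""
    region = sample_trace[window]
    baseline = region.min()              # simple baseline estimate for this sketch
    signal = region.max() - baseline     # baseline-to-apex height
    noise = np.ptp(blank_trace[window])  # peak-to-peak noise in the blank, same window
    return signal / noise

# Example with synthetic data: a small Gaussian peak on a noisy baseline.
rng = np.random.default_rng(0)
t = np.arange(1000)
blank = rng.normal(0.0, 0.5, t.size)
sample = blank + 5.0 * np.exp(-0.5 * ((t - 500) / 10) ** 2)
print(f"S/N = {signal_to_noise(sample, blank, slice(450, 550)):.1f}")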
The calculation based on the standard deviation of the response and the slope of a
calibration curve uses the following formula:
DL = 3.3σ/S
where σ is the standard deviation of the response and S is the slope of the calibration
curve. The slope may be estimated from the calibration curve of the analyte, or a sep-
arate curve approaching the DL may be prepared. The value of σ may be determined
based on the standard deviation of blank injections, the residual standard deviation
of response, or the standard deviation of y-intercepts of the regression lines of the
calibration curve. Table 4.5 provides a simple example of determining the DL using
this formula where the response was determined at five levels (the minimum number
of levels for a linear curve [Section 4.3.6]).
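The following Python sketch shows one way this calculation might be carried out for a five-level calibration curve, taking σ as the residual standard deviation of the regression. The concentrations, responses, and units are illustrative assumptions, not the values in Table 4.5.

# Hedged worked example of DL = 3.3*sigma/S using a five-level calibration curve.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])         # illustrative concentrations, e.g., ppb
resp = np.array([10.8, 21.5, 40.9, 82.3, 161.0])   # illustrative detector responses

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
# Residual standard deviation with n - 2 degrees of freedom (two fitted parameters)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))

dl = 3.3 * sigma / slope
print(f"slope = {slope:.2f}, sigma = {sigma:.2f}, DL = {dl:.2f} ppb")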
Determination of σ for the standard deviation of blank injections is performed
by analyzing an appropriate number of blank samples for the magnitude of analyti-
cal background response and calculating the standard deviation of these responses.
When using the calibration curve calculation, the standard error of the y-intercept
(based on regression analysis with zero not included) is recommended as it is a better
indicator of the DL at low concentrations than averages derived at higher concentra-
tions from the residual standard deviation. Although the S/N method is somewhat
less subjective than visual determinations, calculations based on a calibration curve
are the least subjective and have the least operator bias. Regardless of the method
used, multiple samples should be injected at the limit for verification, and the actual
method used should be documented.
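A similar sketch, again with illustrative calibration data, uses the standard error of the y-intercept as σ, as recommended above; the ordinary least-squares expression for that standard error is written out explicitly so no additional libraries are assumed.

# Sketch of the y-intercept standard error approach: sigma is the standard error
# of the regression intercept (zero excluded from the calibration), DL = 3.3*sigma/S.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # illustrative values
resp = np.array([10.8, 21.5, 40.9, 82.3, 161.0])

n = len(conc)
slope, intercept = np.polyfit(conc, resp, 1)
s_res = np.sqrt(np.sum((resp - (slope * conc + intercept))**2) / (n - 2))

# Standard error of the y-intercept from ordinary least-squares theory
se_intercept = s_res * np.sqrt(1.0 / n + conc.mean()**2 / np.sum((conc - conc.mean())**2))

dl = 3.3 * se_intercept / slope
print(f"DL = {dl:.2f} (same units as the concentration axis)")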