procedures (SOPs) from individual manufacturers regarding recommended protocols and
instrument performance.
6.4.1 Temperature Correction
All fluorescence measurements are subject to changes in signal as a function of temperature. These changes can arise either from self-heating by the instrument or from the ambient temperature of the deployment environment. The degree to which temperature influences
the fluorescence intensity through self-heating can be determined by laboratory charac-
terization and will vary for each instrument. Many instruments are now equipped with
thermistors and reference values to correct for temperature-related effects on fluorescence
response. When characterizing these effects, attention must be paid to (1) the warm-up period, where sharp increases in temperature and signal occur, and (2) the gradual increase in temperature that may persist after the initial warm-up period. Both periods can influence the fluorescence signal, as evidenced by a laboratory experiment in which a standard solution was continually pumped through a flash lamp-based, flow-through sensor submerged in a water bath held at constant temperature (Figure 6.9). During the 8-hour experiment, fluorescence varied by 10% and internal temperature by 7 °C. The extent of these effects is sensor dependent, and manufacturers recommend a specific warm-up time, typically on the order of tens of minutes (but potentially longer for flash lamp-based sensors), during which the signal should not be used for data collection. Although such recommendations are provided, analysts should repeat the characterization themselves to confirm the appropriate warm-up time for their own sensor. Beyond the warm-up period, fluorescence intensity may continue to increase with temperature, albeit at a lower rate. If significant, the data can be normalized to a reference temperature to correct for this increase in signal.
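A minimal sketch of such a correction, in Python, is given below. It assumes a linear temperature dependence with a coefficient obtained from the laboratory characterization described above; the warm-up length, reference temperature, coefficient value, and all names are illustrative assumptions rather than values from this chapter or from any manufacturer.

    import numpy as np

    WARMUP_S = 30 * 60   # assumed warm-up period to discard (s); use the sensor's own value
    T_REF_C = 20.0       # assumed reference temperature for normalization (deg C)
    RHO = -0.01          # assumed fractional change in signal per deg C from lab characterization

    def correct_temperature(t_s, fluor, temp_c):
        # t_s: elapsed time since power-on (s); fluor: raw signal; temp_c: internal temperature
        t_s, fluor, temp_c = map(np.asarray, (t_s, fluor, temp_c))
        keep = t_s >= WARMUP_S                       # drop data collected during warm-up
        # Linear normalization to the reference temperature:
        # F_ref = F_obs / (1 + RHO * (T - T_ref))
        f_corr = fluor[keep] / (1.0 + RHO * (temp_c[keep] - T_REF_C))
        return t_s[keep], f_corr

In practice, RHO would be estimated by regressing the post-warm-up signal of a constant-concentration standard against internal temperature, as in the water-bath experiment of Figure 6.9.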
6.4.2 Blank Subtraction
In the words of Cullen and Davis (2003: 29), "It is hard to imagine a topic that seems more
boring and trivial than the measurement of nothing. In a sense, determination of an ana-
lytical blank is exactly that - measurement of the signal associated with the absence of
the property being detected.” However, in many cases determining an acceptable blank
is a critical part of a calibration routine. Blank subtraction can be troublesome in some
environments where the magnitude of the sample intensity is small relative to the blank,
and analysts may choose not to apply this step for fear of underestimating the signal or, in some cases, obtaining negative values (e.g., in the oligotrophic ocean, where the magnitude of the blank may be similar to that of the clearest ocean samples). Conversely, the fluorescence signal of a blank may be insignificant relative to that of the sample water, and blank subtraction may be irrelevant (e.g., in highly concentrated river systems). In either instance, blank subtraction may not be conducted, and it is up to the operator to determine whether this is a reasonable step for his or her application.
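As an illustration of the bookkeeping involved, a minimal Python sketch of blank subtraction follows. The negligible-blank threshold and all names are illustrative assumptions, not recommended values; the decision of whether to subtract at all remains with the operator.

    import numpy as np

    def subtract_blank(sample, blank, negligible_fraction=0.01):
        # sample: sample fluorescence intensities; blank: replicate blank measurements
        sample = np.asarray(sample, dtype=float)
        blank_mean = float(np.mean(blank))

        # If the blank is a negligible fraction of the typical sample signal
        # (e.g., a highly concentrated river system), subtraction changes little.
        if blank_mean < negligible_fraction * np.median(sample):
            return sample.copy(), np.zeros(sample.shape, dtype=bool)

        corrected = sample - blank_mean
        negative = corrected < 0   # possible in very clear (oligotrophic) waters
        return corrected, negative

Flagging, rather than silently clipping, values driven negative by the subtraction makes it easier to judge whether the blank is comparable in magnitude to the clearest samples.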
In an examination of 14 peer-reviewed papers from 1998-2010