That such differences can be problematic is illustrated by the use of QTOF and QTRAP mass spectrometers to simultaneously profile the same eluent (split 50:50 as it emerged from the column). The outcome of the study was that the data from both instruments for the urinary metabolite profiles obtained from control and test animals were readily separated by PCA, but the markers were spectrometer dependent.49
QUALITY CONTROL, DATA ANALYSIS, AND BIOMARKER DETECTION
Analytical variability resulting from changes in system performance can pose a major threat to the success of metabolic phenotyping studies, and in LC-MS data such variability can result from changes in chromatography due to column degradation (e.g., gradual or catastrophic changes in peak shape and retention time) or changes in spectrometer performance (e.g., changes in mass accuracy or signal intensity). Such changes need to be monitored and corrective action taken to avoid severe problems in any subsequent analysis of the data. Our approach to minimizing such problems revolves around a standard quality control or QC sample.10-16 These QCs can be prepared by pooling aliquots of the samples being analyzed, thereby providing a representative sample, or, where this is not practicable, by using a bulk sample of the matrix (e.g., plasma or serum obtained from a blood bank or commercial supplier). These QCs are then interspersed every 5 or 10 samples throughout the run. As each of these samples is identical in composition, monitoring the variability observed in them when they are analyzed allows the quality of the analysis to be assessed. The same QC samples can also be used to condition the LC-MS system prior to the start of the run, as discussed earlier.
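As a rough illustration of how such repeated QC injections can be used to assess analytical quality, the sketch below computes the relative standard deviation (RSD) of each feature across the pooled QC injections using pandas. The peak-table layout, the feature and injection names, and the 30% acceptance threshold are illustrative assumptions, not values prescribed by the procedure described above.

```python
import pandas as pd

# Hypothetical peak table: rows = injections, columns = features (intensities).
# "QC" rows are the pooled quality control sample injected every 5-10 samples.
peak_table = pd.DataFrame(
    {
        "M151.04T284": [1020, 990, 2300, 1870, 1005],
        "M203.08T312": [560, 548, 810, 455, 570],
    },
    index=["QC01", "QC02", "Sample01", "Sample02", "QC03"],
)

qc = peak_table.loc[peak_table.index.str.startswith("QC")]

# Relative standard deviation (%) of each feature across the QC injections;
# features that vary widely in identical QC samples indicate analytical drift.
rsd = 100 * qc.std() / qc.mean()

# Illustrative acceptance threshold (assumption): flag features with RSD > 30%.
unstable = rsd[rsd > 30]
print(rsd.round(1))
print("Features failing QC reproducibility:", list(unstable.index))
```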
The metabolite profiling exercise for each of the samples in the run generates a large amount of LC-MS data on signal intensity, analyte mass (over the range 70 to 1,200 amu), and retention time. The raw data is usually in the form of a series of full-scan mass spectra comprising the spectral data for the metabolites (including adducts, isotopic peaks, systematic noise, etc.) that have been acquired over successive time points (each of which takes only 2-20 ms).
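To make the structure of such raw files concrete, the following sketch iterates over the full-scan (MS1) spectra in an mzML file using the pyteomics library. The file name is a placeholder, and the exact metadata keys available can vary with how the vendor's converter wrote the file.

```python
from pyteomics import mzml  # pip install pyteomics

# Placeholder file name; each spectrum holds paired m/z and intensity arrays
# recorded at one retention time point.
with mzml.read("study_sample_01.mzML") as reader:
    for spectrum in reader:
        if spectrum.get("ms level") != 1:      # keep full-scan spectra only
            continue
        mz = spectrum["m/z array"]
        intensity = spectrum["intensity array"]
        rt = spectrum["scanList"]["scan"][0]["scan start time"]
        print(f"RT {float(rt):.2f}: {len(mz)} ions, "
              f"base peak intensity {intensity.max():.0f}")
```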
To process the enormous amount of data residing in these files and extract useful information, specialized software is needed. This demand has led to the development of many proprietary, freeware, and in-house programs; these programs allow instrument noise to be removed, baselines to be corrected, and centering, normalization, peak picking, peak integration and alignment, de-isotoping, adduct removal, and other tasks to be performed to reveal the metabolite peaks that were present in the samples. This work allows the construction of a peak table that lists the samples, the ions for the metabolites, and their intensities, with the 3D retention time/mass/intensity information compressed into two dimensions by combining the mass and retention time data into a single feature.
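A minimal sketch of such a peak table is shown below, assuming a pandas representation in which each mass/retention-time pair is collapsed into a single feature label; the naming pattern, sample names, and peak values are purely illustrative.

```python
import pandas as pd

def feature_id(mz: float, rt_seconds: float) -> str:
    """Combine mass and retention time into a single 2D feature label."""
    return f"M{mz:.4f}T{rt_seconds:.1f}"

# Hypothetical picked peaks: (m/z, retention time in s, intensity) per sample.
picked_peaks = {
    "Control01": [(151.0401, 284.2, 10500), (203.0824, 312.7, 4300)],
    "Test01":    [(151.0401, 284.2, 9800),  (203.0824, 312.7, 8900)],
}

# Peak table: rows = samples, columns = features, values = intensities.
peak_table = pd.DataFrame(
    {
        sample: {feature_id(mz, rt): inten for mz, rt, inten in peaks}
        for sample, peaks in picked_peaks.items()
    }
).T

print(peak_table)
```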
Data analysis can be undertaken using either the proprietary software generally available from the manufacturer or freely available open source programs such as MZmine,50 MetAlign,3,51 and XCMS,52 which can be used to analyze data once it has been converted into a suitable format such as netCDF or mzXML. An advantage to the operator of open source software packages is that they can be customized for the individual needs of the study (e.g., a requirement to accommodate particularly broad or narrow peaks). The initial means of examining this type of metabolic phenotyping data is most commonly to use multivariate statistical analysis (e.g., PCA) to highlight differences between samples from test and control groups; however, in order to have confidence that the detected features are genuine indicators of the condition being investigated, additional univariate statistical analysis and manual examination of the raw data represent good practice.
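As a rough sketch of this two-stage approach, the example below applies PCA to a log-transformed, mean-centred peak table with scikit-learn and then follows up one candidate feature with a t-test from SciPy. The simulated data, group labels, preprocessing choices, and use of Welch's t-test are illustrative assumptions rather than the workflow prescribed above.

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical peak table: rows = samples, columns = features (intensities).
controls = pd.DataFrame(rng.lognormal(8, 0.3, size=(6, 50)),
                        index=[f"Control{i}" for i in range(6)])
tests = pd.DataFrame(rng.lognormal(8, 0.3, size=(6, 50)),
                     index=[f"Test{i}" for i in range(6)])
tests.iloc[:, 0] *= 3          # simulate one genuinely altered metabolite
peak_table = pd.concat([controls, tests])

# Multivariate overview: log-transform, mean-centre, then inspect PCA scores.
X = np.log(peak_table)
X = X - X.mean(axis=0)
scores = PCA(n_components=2).fit_transform(X)
print(pd.DataFrame(scores, index=peak_table.index, columns=["PC1", "PC2"]))

# Univariate follow-up on a candidate feature (Welch's t-test, illustrative).
t_stat, p_value = stats.ttest_ind(np.log(controls.iloc[:, 0]),
                                  np.log(tests.iloc[:, 0]),
                                  equal_var=False)
print(f"Feature 0: t = {t_stat:.2f}, p = {p_value:.2e}")
```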