Identifying and Predicting the Potential Toxic Effects of Chemicals
In 2007, NRC convened a panel of experts to create a vision and strategy
for toxicity testing that would capitalize on the -omics concepts described in
Appendix C and on other new tools and technologies for the 21st century (NRC
2007a). Conceptually, that vision is not very different from the now classic four-
step approach to risk assessment—hazard identification, exposure assessment,
dose-response assessment, and risk characterization—that was laid out in the
NRC report Risk Assessment in the Federal Government: Managing the Process
(commonly referred to as the Red Book) (NRC 1983) and that has been widely
adopted by EPA as its chemical risk assessment paradigm (EPA 1984, 2000).
However, the vision looks to new tools and technologies that would largely replace in vivo animal testing: extensive use of high-throughput in vitro assays based on human-derived cells and tissues, coupled with computational approaches that characterize the systems-based pathways that precede toxic responses. The computational approach to predictive toxicology has many advantages over the current time-consuming, expensive, and somewhat unreliable paradigm of relying on high-dose in vivo animal testing to predict human responses to low-dose exposures.
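As a purely illustrative sketch of the kind of computational analysis such high-throughput in vitro programs rely on, the following Python example fits a Hill-type concentration-response model to hypothetical screening data to estimate an AC50 (the concentration producing half-maximal activity). The assay values, units, and starting parameters are assumptions made for illustration and are not drawn from the NRC report.

```python
# Illustrative sketch only: fit a Hill concentration-response model to
# hypothetical high-throughput in vitro data and estimate the AC50.
# All values below are assumptions, not data from the NRC report.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, slope):
    """Fractional assay response as a function of test concentration (uM)."""
    return top * conc**slope / (ac50**slope + conc**slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # uM
resp = np.array([0.02, 0.05, 0.09, 0.22, 0.48, 0.71, 0.88, 0.93])  # fraction of max

params, _ = curve_fit(hill, conc, resp, p0=[1.0, 1.0, 1.0])
top, ac50, slope = params
print(f"Estimated AC50 ~ {ac50:.2f} uM (top = {top:.2f}, Hill slope = {slope:.2f})")
```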
Although there is widespread agreement that the new panomics tools (that is, genomics, proteomics, metabolomics, bioinformatics, and related fields of the molecular sciences), coupled with sophisticated bioinformatics approaches to data management and analyses, will transform the understanding of how toxic chemicals produce their adverse effects, much remains to be learned about the applicability and relevance of in vitro toxicology results to actual human exposures at low doses. With that fundamental mechanistic knowledge, it should be easier to distinguish responses that are relevant to humans from responses that may be species-specific, and to identify responses that occur at high doses but not at low doses or vice versa. That knowledge would help reduce the frequency of the false-positive and false-negative results that sometimes plague high-dose in vivo animal testing.
A key issue in the use of such technologies is phenotypic anchoring,¹
which is an important step in the validation of an assay. It is essential to validate
treatment-related changes observed in an in vitro -omics experiment as causally
associated with adverse outcomes seen in the individual. A single exposure to
one dose of one chemical can result in a plethora of molecular responses and
hundreds of thousands of data points that reflect the organism's response to that
exposure. Quantitative changes in gene expression (transcriptomics), protein
content (proteomics), later enzymatic activity, and concentrations of metabolic
¹ The concept of phenotypic anchoring arose from studies that examined the effects of chemical exposures on gene expression in tissues (transcriptomics). In that context, the term is defined as “the relation[ship between] specific alterations in gene expression profiles [and] specific adverse effects of environmental stresses defined by conventional parameters of toxicity such as clinical chemistry and histopathology” (Paules 2003).
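To make phenotypic anchoring concrete, the following Python sketch relates hypothetical per-gene expression changes across dose groups to a conventional toxicity endpoint (a histopathology severity score). The gene names, dose levels, scores, and correlation threshold are assumptions chosen only to illustrate the idea of anchoring transcriptomic changes to an adverse outcome.

```python
# Illustrative sketch of phenotypic anchoring: relating gene-expression
# changes to a conventional toxicity endpoint measured in the same dose
# groups. Gene names, values, and the threshold are assumptions only.
import numpy as np

dose_groups = [0, 10, 50, 250]                     # mg/kg, hypothetical
histopath_score = np.array([0.0, 0.5, 1.8, 3.2])   # mean severity per group

# log2 fold change vs. control for a few hypothetical transcripts
expression = {
    "GeneA": np.array([0.0, 0.4, 1.6, 2.9]),   # tracks lesion severity
    "GeneB": np.array([0.0, 0.1, -0.2, 0.1]),  # essentially unresponsive
    "GeneC": np.array([0.0, 1.2, 1.1, 1.0]),   # responds, but not dose-related
}

# A change is treated as "anchored" here if it correlates strongly with
# the phenotype across dose groups.
for gene, log2fc in expression.items():
    r = np.corrcoef(log2fc, histopath_score)[0, 1]
    anchored = "anchored" if abs(r) > 0.9 else "not anchored"
    print(f"{gene}: r = {r:.2f} -> {anchored}")
```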