but had a false positive rate of only 1%. So it was a conservative method for
inclusion of an interaction. With the two together, they estimated that of a list
of 8000 protein interactions from the Database of Interacting Proteins,⁷ about
50% are reliable, and using the latter test, they identified 3000 of these as likely
true interactions. These are chastening error rates. While high-throughput methods
can generate candidates at an enormous rate, validating that they are indeed inter-
actions requires more intensive analysis on an interaction-by-interaction basis.
Moreover, Deane et al. note but do not deal with the issue of false negatives -
how many interactions may be missed by the high-throughput screen.
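To see how these figures hang together, consider the arithmetic they imply. The sketch below takes the list size, reliability estimate, false positive rate, and validated count from the passage above; treating the test's sensitivity as the unknown to be inferred is an added framing, not something Deane et al. report.

```python
# A minimal sketch of the arithmetic implicit in these figures. The
# 8000-interaction list, ~50% reliability estimate, 1% false positive
# rate, and 3000 validated interactions come from the passage; solving
# for the test's sensitivity is an assumption of this sketch.

list_size = 8000          # interactions in the DIP list
reliable_fraction = 0.5   # estimated fraction that are real
fpr = 0.01                # false positive rate of the conservative test
validated = 3000          # interactions the test marked as likely true

true_in_list = list_size * reliable_fraction    # ~4000 real interactions
false_in_list = list_size - true_in_list        # ~4000 spurious ones
expected_fp = false_in_list * fpr               # ~40 spurious ones pass anyway
implied_sensitivity = (validated - expected_fp) / true_in_list

print(f"implied sensitivity: {implied_sensitivity:.2f}")  # ~0.74
```

On these numbers, even the conservative test misses roughly a quarter of the true interactions - one concrete face of the false-negative problem just noted.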
Can we do better? One obvious move is to try to improve the quality of
the data. There are many possible sources of error. We should be particularly
interested in systematic ones, because that fraction of the errors usually points to
problems we can do something about by supplementing or recalibrating our methods. It
may also indicate sources of systematic bias in methodological or theoretical
approach (Wimsatt, 1980, 2007). Thus the high number of false positives noted
by Deane et al. likely arises at least in part for a systematic reason: testing for the
possibility of chemical interaction among all possible reactants does not screen out
the presumably substantial number of interactions that do not occur in vivo because
the reactants are spatially or temporally segregated in the organism under natural
conditions - sequestered by design. This points to the need to move beyond the
current focus of NSB on intracellular dynamics. We can hope to correct these
kinds of error, but only by investigating intracellular and intercellular structure
and morphology, how they change through development, and how they may act
to catalyze and compartmentalize reaction dynamics.
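As a small illustration of the kind of correction this points to, the sketch below filters candidate interactions by requiring that the partners share at least one subcellular compartment. The localization table and protein names are hypothetical placeholders, not real annotation data.

```python
# A minimal sketch of one such correction, assuming hypothetical
# localization annotations: discard candidate interactions whose
# partners never share a subcellular compartment.

localization = {
    "ProtA": {"nucleus"},
    "ProtB": {"nucleus", "cytoplasm"},
    "ProtC": {"mitochondrion"},
}

def co_localized(p, q):
    """True if the two proteins share at least one annotated compartment."""
    return bool(localization.get(p, set()) & localization.get(q, set()))

candidates = [("ProtA", "ProtB"), ("ProtA", "ProtC")]

# ProtA-ProtC may react in vitro, but its partners are spatially
# segregated in vivo, so the pair is filtered out.
plausible = [pair for pair in candidates if co_localized(*pair)]
print(plausible)  # [('ProtA', 'ProtB')]
```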
4. DATA ERRORS AND MOLAR SYSTEM PROPERTIES
Do we need to know everything? At this level of analysis, we must treat all
sorts of errors as the same: reactions left out or reactions erroneously included,
and assume that whether errors of either sort make a difference depends upon
the context - upon the network structure of the system, which of its products
are crucial, and how sensitive system behavior is to the levels of the products.
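A toy example can make this context dependence concrete. The sketch below, built on an invented reaction graph, shows that the same edge error can be harmless or consequential depending on whether redundant paths still reach a crucial product.

```python
# A minimal sketch of this context dependence, on a toy reaction graph
# of my own construction (all names hypothetical).

network = {
    "input": ["A", "B"],
    "A": ["product"],
    "B": ["product"],   # redundant route to the same product
    "product": [],
}

def reachable(graph, source, target):
    """Depth-first check that target is still producible from source."""
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

# Erroneously dropping the A -> product reaction is harmless here,
# since B still feeds the product; dropping both routes would not be.
pruned = {k: [v for v in vs if (k, v) != ("A", "product")]
          for k, vs in network.items()}
print(reachable(network, "input", "product"),
      reachable(pruned, "input", "product"))  # True True
```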
At first this suggests detailed local analyses. But at a more molar level, how
sensitive system performance is to errors depends upon what kind of system
we are analyzing. Software is very breakable, but consequences can be large or
small. Y2K turned out to be less of a problem in part not only due to massive
⁷ Some care was taken in assembling this database. About 2000 of the 8000 were identified in small-scale
experiments (from 800 research articles), and the rest came from four high-throughput screens. Nonetheless,
the overlap between these sources was described as 'petite' - the fact that originally motivated the calibrations
they performed.