but no quick and intuitive way for content authors to cross-link data throughout the network for later
access—the mechanism that allows today's Internet users to search for information. In 1990,
ARPANET was replaced by the National Science Foundation Network (NSFNET) to connect its
supercomputers to regional networks. Today, NSFNET operates as the high-speed backbone of the
Internet.
Fortunately, and apparently coincidentally, during the period of military expansion in the 1950s and
1960s, federally funded researchers at academic institutions explored ways to manage the growing
store of digital data spread across an increasingly complex collection of computers and networks. One
development was hypertext, a cross-referencing scheme in which a word in one document is linked to
a word in the same or a different document.
Around the time the ARPANET was born, a number of academic researchers began experimenting
with computer-based systems that used hypertext. For example, in the early 1970s, a team at
Carnegie-Mellon University developed ZOG, a hypertext-based system that was eventually installed
on a U.S. aircraft carrier. ZOG was a reference application that provided the crew with richly
cross-linked online documentation, improving the speed and efficiency of locating data relevant to
operating shipboard equipment.
In addition to applications for the military, a variety of commercial, hypertext-based document
management systems were spun out of academia and commercial laboratories, such as the Owl
Guide hypertext program from the University of Kent, England, and the Notecards system from Xerox
PARC in California. Both of these systems were essentially stand-alone equivalents of a modern Web
browser, but based on proprietary document formats with content limited to what could be stored on
a hard drive or local area network (LAN). The potential market for these products was limited
because of specialized hardware requirements. For example, the initial version of Owl Guide, which
predated Apple's HyperCard hypertext program, was only available for the Apple Macintosh.
Similarly, Notecards required a Xerox workstation running under a LISP-based operating system.
These and other document management systems allowed researchers to create limited Web-like
environments, but without the advantage of the current Web of millions of documents authored by
others.
In this circuitous way, out of the quest for national security through an indestructible communications
network, the modern Internet was born. Today, the Internet connects bioinformatics researchers in
China, Japan, Europe, and elsewhere around the world, regardless of political or national affiliation. It
provides not only communications, including e-mail, videoconferencing, and remote information
access, but also, together with other networks, resource sharing and alternate, reliable sources of
bioinformatics data.
As an example of how important networks are in bioinformatics R&D, consider that the typical
microarray laboratory involved in creating genetic profiles for custom drug development and other
purposes generates huge amounts of data. Not only does an individual microarray experiment
generate thousands of data points, usually in the form of 16-bit TIFF (Tagged Image File Format) files,
but the experimental design leading up to the experiments, including gene data analysis, involves
access to volumes of timely data as well. Furthermore, analysis and visualization of the experimental
data requires that they be seamlessly and immediately available to other researchers.
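To make the scale concrete, the following back-of-the-envelope sketch (in Python) estimates the raw image footprint of a two-channel microarray scan stored as 16-bit TIFF files. The image dimensions, replicate counts, and weekly throughput are assumed example values, not figures from the text.

```python
# Hypothetical estimate of raw microarray image storage requirements.
# Scanner resolution, channel count, replicate counts, and throughput are
# assumed example values, not figures from the text.

BYTES_PER_PIXEL = 2          # 16-bit grayscale TIFF
IMAGE_WIDTH = 5_000          # pixels (assumed scanner output)
IMAGE_HEIGHT = 5_000         # pixels
CHANNELS = 2                 # e.g., two fluorescent dye channels
REPLICATES = 3               # assumed replicate slides per experiment
EXPERIMENTS_PER_WEEK = 20    # assumed laboratory throughput

bytes_per_slide = IMAGE_WIDTH * IMAGE_HEIGHT * BYTES_PER_PIXEL * CHANNELS
bytes_per_experiment = bytes_per_slide * REPLICATES
bytes_per_week = bytes_per_experiment * EXPERIMENTS_PER_WEEK

def fmt_mb(n_bytes: int) -> str:
    """Format a byte count in megabytes for readability."""
    return f"{n_bytes / 1e6:,.0f} MB"

print("Per slide (raw images):     ", fmt_mb(bytes_per_slide))
print("Per experiment (replicates):", fmt_mb(bytes_per_experiment))
print("Per week (laboratory):      ", fmt_mb(bytes_per_week))
```

Even under these modest assumptions, the raw images alone amount to several gigabytes per week, before any derived analysis files are counted, which is why networked storage and data transfer are central to the laboratory's workflow.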
The scientific method involves not only formulating a hypothesis and then devising creative, logical
methods of supporting or refuting it, but also framing a hypothesis that will withstand the scrutiny of
others. Results must be verifiable and reproducible under similar conditions
in different laboratories. One of the challenges of working with microarrays is that there is still
considerable art involved in creating meaningful results. Results are often difficult to reproduce, even
within the same laboratory. Fortunately, computational methods, including statistical methods, can
help identify and control for some sources of error.
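One simple computational check of this kind, sketched below in Python with simulated inputs, is to compare log-transformed spot intensities from two replicate arrays: a low correlation between replicates, or a correlation between the log ratios and the average intensity, flags a reproducibility problem before downstream analysis. In a real laboratory the intensity vectors would come from the image-analysis step rather than a random-number generator.

```python
# Minimal sketch: quantify agreement between two replicate microarray scans.
# The intensity vectors below are simulated stand-ins for real spot intensities.
import numpy as np

rng = np.random.default_rng(0)

# Simulated "true" expression signal for 10,000 spots (assumed array size).
true_signal = rng.lognormal(mean=7.0, sigma=1.0, size=10_000)

# Two replicate measurements of the same sample, each with multiplicative noise.
replicate_a = true_signal * rng.lognormal(mean=0.0, sigma=0.15, size=true_signal.size)
replicate_b = true_signal * rng.lognormal(mean=0.0, sigma=0.15, size=true_signal.size)

# Work on a log scale, as is conventional for expression intensities.
log_a = np.log2(replicate_a)
log_b = np.log2(replicate_b)

# Pearson correlation between replicates: one coarse reproducibility measure.
correlation = np.corrcoef(log_a, log_b)[0, 1]

# M = log ratio between replicates, A = average log intensity per spot.
m_values = log_a - log_b
a_values = (log_a + log_b) / 2.0

# Intensity-dependent bias check: M should not trend with A.
intensity_bias = np.corrcoef(a_values, m_values)[0, 1]

print(f"Replicate correlation (log2 scale):       {correlation:.3f}")
print(f"Median log2 ratio (should be near 0):     {np.median(m_values):.3f}")
print(f"Correlation of log2 ratio with intensity: {intensity_bias:.3f}")
```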
As shown in Figure 3-1 , computers dedicated to experimental design, scanning and image analysis,
expression analysis, and gene data manipulation support the typical microarray laboratory. The
microarray device is only one small component of the overall research and design process. For