to install gigabit and upward connectivity into public libraries. One can imagine
public libraries serving as a safety net for a failed consumer broadband policy, at least
in some locales, and at the same time also providing some help to people who need
assistance in doing this kind of analysis of cultural memory content, a very interesting
and unexpected set of developments.
Another angle is this question of the mobility of computation, the idea of
sending your queries or computations to the data rather than pulling the data to where
you have computational capacity and control. This is an issue with a very rich
history, one that reaches into the construction of query languages, into distributed
computing and the design of its protocols, into the allocation of function between
client and server, and into virtualization and virtual machines.
If you look at the ideas that motivated the development of things like Java, you find
again this notion of protected computing environments in which one could safely
deliver computation to a data environment. The idea goes back all the way to the
mid-1980s, when Bob Kahn and Vint Cerf (names you will recognize from the
foundational developments of the Internet) developed a vision of what they called
knowbots, which were a way of specifying and packaging computation and moving it
to data. (The key citation here is The Digital Library Project, Volume I: The World of
Knowbots, 1988; it is online.) All of these things are now resurfacing in very
interesting new ways in the worlds of big data and cloud environments.
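To make the idea concrete, here is a minimal sketch in Python; every name and the
dataset in it are invented for illustration, and it is only a cartoon of the knowbot
idea, not anyone's actual protocol. It contrasts pulling a whole dataset to the client
with shipping a small computation to the data and getting back only the answer:

    from statistics import mean

    # Stand-in for a large remote collection we would rather not ship around.
    REMOTE_RECORDS = [{"year": 1850 + i, "word_count": 1000 + 7 * i}
                      for i in range(100)]

    def pull_style():
        # The entire dataset crosses the network; computation happens locally.
        local_copy = list(REMOTE_RECORDS)
        return mean(r["word_count"] for r in local_copy)

    def run_at_data(computation):
        # Server-side entry point: evaluate a packaged computation next to
        # the data and return only the (small) result.
        return computation(REMOTE_RECORDS)

    def push_style():
        # The client ships a small function; only the answer comes back.
        return run_at_data(lambda records: mean(r["word_count"] for r in records))

    print(pull_style() == push_style())  # same answer, very different data movement

In a real system the shipped computation would of course run in a sandboxed,
protected environment, which is exactly the concern that motivated designs like Java's.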
Clearly there are lots and lots of details here; the details are really interesting,
really complicated, and really situational. One of the great challenges, I think, is
going to be coming up with better tools that enable more and more of the user
community to do computational things with literature and data, and we are seeing
a huge amount of work in that area right now. Look at the impact of something as
a huge amount of work in that area right now. Look at the impact of something as
simple as the Google Books Ngram Viewer, where you now have a whole line of
research that basically says “I'd like to look at a century and a half of text starting in
1750 and look at how the usage of certain phrases came and went across that century
and a half and in what kind of topics and in what kind of context.” A lot of the
developments in digital humanities are in part efforts to get better tools interacting
with very large data collections and get such tools to the stage where humanists do not
have to also be computer scientists but can just collaborate with them. This is
abstraction as an enabler, a democratizer, rather than as a means of restricting
functionality. I suspect that the HathiTrust Research Center will be a very important
nexus in developments here.
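To give a flavor of what an Ngram-style query amounts to computationally, here is a
toy sketch in Python. The three-sentence corpus is invented, and this is in no way how
the Ngram Viewer is actually implemented; it simply shows the shape of the
computation, the relative frequency of a phrase per year:

    from collections import defaultdict

    # (year, text) pairs standing in for a century and a half of digitized books.
    corpus = [
        (1750, "the steam engine was a curiosity"),
        (1850, "the steam engine drives the steam engine age"),
        (1900, "the motor car replaces the steam engine"),
    ]

    def phrase_frequency(phrase, corpus):
        # Relative frequency per year: phrase occurrences / total words.
        target = phrase.split()
        n = len(target)
        counts = defaultdict(lambda: [0, 0])  # year -> [hits, total words]
        for year, text in corpus:
            words = text.split()
            counts[year][1] += len(words)
            counts[year][0] += sum(
                1 for i in range(len(words) - n + 1) if words[i:i + n] == target
            )
        return {y: hits / total for y, (hits, total) in sorted(counts.items())}

    print(phrase_frequency("steam engine", corpus))
    # {1750: 0.1667..., 1850: 0.25, 1900: 0.1429...}

Doing this over millions of volumes, rather than three sentences, is the kind of
capability a center like the HathiTrust Research Center aims to put in humanists' hands.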
I will give you one other example, which I think really illustrates how much
progress has been made. Cast your mind back, if you have been around that long, to
let's say 1990, and to the wonderful emerging world in the early 1990s of so-called
GIS, Geographical Information Systems. You will remember some very expensive
proprietary tools; you will remember that many research libraries had this one
specialized person, the GIS librarian, who worked with these geospatial data sets and
these very complicated tools and would work one-on-one with PhD students who
needed to weave them into the work they were doing. Compare that to Google Maps,
Google Earth, and similar products from other competitors. We have forgotten how
quickly we progressed from a time when geospatial data was a very specialized