are still an issue today. Even without access to the fastest supercomputers in the world, our desktop
computers are considerably faster than they were a decade ago. Problems that took weeks or months
to solve can now be computed in minutes or seconds. Turton and Openshaw (1998) showed that the
Cray T3D could solve a benchmark spatial interaction model in just under 3 min, while desktop
computers at that time would take 3.8 days. That same problem could now be solved in seconds on a
standard desktop computer. Computer memory and storage have also vastly increased and become
incredibly affordable. So where do the real limits in computational power lie? To consider this question further, we first need to look back at the way in which computers have developed.
Early computing machines were built using vacuum tube technology: for example, in 1945, the Electronic Numerical Integrator and Computer (ENIAC) had more than 19,000 vacuum tubes, weighed roughly 30 tons and occupied the space of a small house. The ENIAC could perform up to 5000
calculations per second with a clock speed of 100 kHz (Farrington 1996). The big breakthrough
for computing speed and power came with the development of the transistor, an electronic switch that replaced the vacuum tube. The integrated circuit, or computer chip, was developed in 1958 and consisted of multiple transistors. As more transistors were added, the speed of computers increased. Gordon Moore, who was then head of R&D at Fairchild Semiconductor and who later co-founded Intel, examined the rate at which the number of transistors, and hence computing speed, had doubled. In 1965, he predicted a doubling of transistors on a single computer chip every year, but he revised this to every two years in 1975 (Moore 1965, 1975). This trend is now referred to as Moore's law and has largely driven technological advances in computing speed, as it has been adopted as an industry target (Reis and Jess 2004). The trend is projected to continue until at least 2015 (Grueber and Studt 2010), although Moore himself has argued that it is ultimately unsustainable, since limits in transistor size will be reached at the atomic level (Dubash 2005).
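The doubling just described is easy to state as a formula: a chip with an initial transistor count doubles that count once per doubling period. A minimal sketch of the projection (the Intel 4004 starting point of 2,300 transistors in 1971 is a well-known historical example used here for illustration; it does not appear in the text):

```python
def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count under Moore's law: the count
    doubles every `doubling_period` years after `start_year`."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Illustrative: the Intel 4004 (2,300 transistors, 1971) projected
# forward 20 years at a 2-year doubling period = 10 doublings.
print(round(transistors(2300, 1971, 1991)))  # 2300 * 1024 = 2355200
```

The exponent makes the point of the passage concrete: modest-sounding periodic doubling compounds into a thousand-fold increase in two decades.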
To deal with this transistor size limit, ongoing research is addressing how alternative or newly
discovered materials could be used for building computer components. For example, aluminium
was replaced by copper in chips built by IBM in the 1990s, which was a technological innovation
that was crucial for reaching the targets set by Moore's law at that time. Research is now ongoing
into the development of optical chips, which would effectively replace the copper wiring. By using photons of light rather than electrons, optical chips would give off almost no heat and would be far more efficient than existing chips (Maney et al. 2011).
Most chips and transistors are made with silicon, which could in the future be replaced by molybdenite (Radisavljevic et al. 2011) or germanium nanoelectronics (Pillarisetty 2011). Graphene, a recently discovered material composed of carbon atoms arranged in a hexagonal lattice one atom thick, could also be used in combination with silicon (Kim et al. 2011). Other
researchers are exploiting the properties of silicon for quantum computing (Morton et al. 2011),
which is computation exploiting quantum effects at the subatomic level, promising exponential improvements in speed and memory for certain classes of problems. Rather than storing and transmitting information in a binary
fashion via bits, quantum computing uses quantum bits or qubits, which can have more than one
state at a time (McMahon 2007). This property means that quantum computers have the ability
to store and process much greater amounts of data than conventional bit-based computing, taking
computing to an entirely different level. NASA and Google have formed a partnership to invest in
the first commercial quantum computer, developed by the Canadian company D-Wave. The computer, which will cost 15 million USD, is reported to reach speeds 3,600 times greater than conventional computers (Jones 2013). Superconductors have not yet been used in computers but, coupled with quantum computing, have the potential to perform a million trillion calculations per second, or exaflop computing, which is roughly 500 times faster than the fastest supercomputers in 2010 (Maney
et al. 2011). There are also other ongoing research challenges focused on making improvements to
existing technologies (Chen 2006), for example, new approaches to chip design and production,
such as extreme ultraviolet lithography or methods to facilitate nanochip fabrication (European
Commission 2010). Whether these advances will make it possible to continue the trend of Moore's law in the short term remains to be seen.
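The bit-versus-qubit contrast above can be sketched numerically: a classical n-bit register holds exactly one of 2^n values, whereas an n-qubit register is described by 2^n complex amplitudes at once. A toy state-vector sketch (a classical simulation for illustration only, not how a real quantum computer is programmed):

```python
import numpy as np

def n_qubit_register(n):
    """State vector for n qubits: 2**n complex amplitudes,
    initialised to the all-zeros basis state |00...0>."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    return state

def uniform_superposition(n):
    """Equal superposition over all 2**n basis states (what a
    Hadamard gate applied to every qubit would produce)."""
    return np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)

# A classical 10-bit register stores one of 1024 values; a
# 10-qubit state vector carries 1024 amplitudes simultaneously.
psi = uniform_superposition(10)
print(len(psi))                          # 1024
print(float(np.sum(np.abs(psi) ** 2)))   # 1.0 (probabilities sum to 1)
```

The exponential size of the state vector is also why simulating quantum computers classically becomes infeasible beyond a few dozen qubits, which is the source of the speed claims quoted in this section.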