domain-specific knowledge that would allow useful findings to be separated from those that are already
known or unhelpful. Recognition of this fact has led to an acknowledged role for visualisation
(Thomas and Cook, 2005, 2006), alongside more quantitative approaches to analysis and a conse-
quent shift towards a collaborative mode of interaction between researchers and their computational
systems. There is a similar trend within the computer vision and artificial intelligence commu-
nity to regard the human as an essential component within many systems and renewed interest in
developing collaborative approaches (e.g. Keel, 2007; Mac Fhearaí et al., 2011) that can utilise the
strengths of both computers and people working together.* Obviously, visualisation can provide an
approach to data analysis that is based around such collaboration. As we will see later, visualisation
is typically highly interactive, allowing the user to move around the display to explore different
perspectives on the underlying data and to easily change which aspects of the data are emphasised
visually and how they are emphasised.
From the perspective of human-computer interaction, visualisation can be viewed as one mem-
ber of a larger family of methods for interacting with information via the senses, by forming images,
scenes or virtual realities to graphically portray data. Other modalities such as touch, sound and
even taste and smell have also been explored as additional channels by which to convey informa-
tion (e.g. Hermann and Hunt, 2005). In sighted humans, vision is the dominant sense, making it the
strongest candidate on which to base interaction with the computer.
Fuelling the more widespread adoption of visualisation is the trend towards increasing graphi-
cal performance on every kind of personal computing device. Early GeoViz required sophisti-
cated and expensive hardware to provide accelerated graphical performance (known as rendering
speed). However, the ever-improving cost-performance of computer hardware has ensured that
suitable graphics capabilities are available now for many platforms, including desktop computers
and mobile devices, and indeed are now considered standard equipment. Highly optimised GPU
(graphics processing unit) hardware and better integration with motherboards, memory and high-
level languages have opened up a world of possibilities, with rendering speeds that would have
been unimaginable 10 years ago. All of this makes it possible to manipulate complex datasets,
using sophisticated visual paradigms, in real time on personal computing devices. And render-
ing speed will continue to improve for some time yet. Still holding us back is a range of more
complex challenges around understanding the effectiveness of particular visualisation methods,
the poor integration of these methods into existing analysis packages and databases, and perhaps a
misplaced sense that the subjectivity they introduce is difficult to shoehorn into established notions
of scientific rigour.
5.2.2 Definitions of Key Terms Used
Visualisation encompasses both the methods to depict data visually and the means to interact with
this depiction, based around a graphical environment. It involves the production of graphical repre-
sentations of data, often termed visualisations, scenes or displays. A scene may depict (render)
data as it might appear in the real world to a human observer (e.g. photorealistic), or alternatively
it may transform data values that do not have a true visual appearance (such as depth, soil pH or
rainfall) into an abstract graphical form where they can be readily perceived. This transformation is
known as visual encoding, by mapping the data to some visual variable, such as colour, size, shape
or transparency.
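To make the idea of visual encoding concrete, the following short Python sketch maps a hypothetical set of point measurements of soil pH onto the visual variables of colour and size; the coordinates, values and variable choices are illustrative assumptions rather than data from any particular study.

    # A minimal sketch of visual encoding: hypothetical soil pH readings at
    # scattered locations are mapped to the visual variables colour and size.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(seed=1)            # illustrative data only
    x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
    soil_ph = rng.uniform(4.5, 8.5, 50)

    # Visual encoding: pH -> colour (via a colour map) and pH -> marker size.
    sizes = 20 + 80 * (soil_ph - soil_ph.min()) / (soil_ph.max() - soil_ph.min())
    points = plt.scatter(x, y, c=soil_ph, s=sizes, cmap="viridis")
    plt.colorbar(points, label="soil pH")
    plt.xlabel("easting (m)")
    plt.ylabel("northing (m)")
    plt.show()

Here a single data attribute drives two visual variables at once; which attribute is carried by which visual variable is itself a design decision with perceptual consequences.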
* This is not to say that statistical and machine learning methods should not be used for complex exploratory analysis. On the
contrary, there are many tasks that can be performed better by an algorithm than by humans. For example, classification and
the search for minima and maxima in high-dimensional spaces (which form the basis of many of the artificial intelligence
techniques discussed in this book, including neural networks, decision trees and genetic algorithms) are usually best
performed using a computational approach.
And using server farms of similar computers, it becomes possible to render armies of goblins and dwarves and even the
odd dragon onto real or virtual film sets in near real time.