Graphics Reference
In-Depth Information
non-computer-generated material. Digital cameras and digital video cameras give
us huge streams of pixels (the individual items in an array of dots that constitutes
the image²) to be processed, and the tools for processing them are rapidly
evolving. At the same time, the increased power of computers has allowed the pos-
sibility of enriched forms of graphics. With the availability of digital photography,
sophisticated scanners (Figure 1.2), and other tools, one no longer needs to explic-
itly create models of every object to be shown: Instead, one can scan the object
directly, or even ignore the object altogether and use multiple digital images of it
as a proxy for the thing itself. And with the enriched data streams, the possibility
of extracting more and more information about the data—using techniques from
computer vision, for instance—has begun to influence the possible applications
of graphics. As an example, camera-based tracking technology lets body pose or
gestures control games and other applications (Figure 1.3).
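The idea of an image as an array of pixels, each carrying a color value, can be made concrete with a small sketch. This is illustrative only; the names (`make_image`, `to_grayscale`) are our own, and the grayscale weights are the common Rec. 601 luma coefficients, one of several possible choices:

```python
# A minimal sketch of an image as a 2D array of display pixels,
# each an (r, g, b) triple, plus one simple processing step:
# converting the image to grayscale.

def make_image(width, height, color=(0, 0, 0)):
    """Return a height x width grid of identical RGB pixels."""
    return [[color for _ in range(width)] for _ in range(height)]

def to_grayscale(image):
    """Replace each RGB pixel with a single luminance value,
    using the Rec. 601 weights (an assumption; other weightings
    are also in common use)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b)
             for (r, g, b) in row]
            for row in image]

img = make_image(4, 2, color=(255, 0, 0))   # a tiny all-red image
gray = to_grayscale(img)                    # 2 rows of 4 luminance values
```

Real pixel streams from cameras are, of course, vastly larger and are processed by specialized hardware and libraries, but the underlying data structure is the same grid of values.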
Figure 1.2: A scanner that
projects stripes on a model that
is slowly rotated on a turntable.
The camera records the pattern
of stripes in many positions to
determine the object's shape.
(Courtesy of Polygon Technology
GmbH.)
While graphics has had an enormous impact on the entertainment industry, its
influence in other areas—science, engineering (including computer-aided design
and manufacturing), medicine, desktop publishing, website design, communication,
information handling, and analysis, to name just a few—continues
to grow daily. And new interaction settings ranging from large to small form
factors—virtual reality, room-size displays (Figure 1.4), wearable displays con-
taining twin LCDs in front of the user's eyes, multitouch devices, including large-
scale multitouch tables and walls (Figure 1.5), and smartphones—provide new
opportunities for even greater impact.
For most of the remainder of this chapter, when we speak about graphics appli-
cations we'll have in mind applications such as video games, in which the most
Figure 1.3: Microsoft's Kinect
interface can sense the user's
position and gestures, allowing a
scientist to adjust the view of his
data by “body language”, with-
out a mouse or keyboard. (Data
view courtesy of David Laid-
law; image courtesy of Emanuel
Zgraggen.)
Figure 1.4: An artist stands in a Cave (a room whose walls are displays) and places paint
strokes in 3D. The displays are synchronized with stereo glasses to display imagery so that
it appears to float in midair in the room. Head-tracking technology allows the software to
produce imagery that is correct for the user's position and viewing direction, even as the
user shifts his point of view. (Courtesy of Daniel Keefe, University of Minnesota).
Figure 1.5: Two users interact
with different portions of a large
artwork on a large-scale touch-
enabled display and a touch-
enabled tablet display. (Courtesy
of Brown Graphics Group.)
2. We'll call these display pixels to distinguish them from other uses of the term "pixel,"
which we'll introduce in later chapters.
 