Defining “sprite” further
Sprites, in their original incarnation, were made popular on the Atari (where they were called PMGs), Texas Instruments TI-99/4A, Commodore 64, and NES (where they were called OBJs) computers and video game systems of the late 1970s and early 1980s. A company named Signetics was the first to create technology that acted like a sprite, though it did not use that name. In the early days, a sprite was simply an extra set of display memory (usually based on display scan lines), independent of the bitmapped screen. It could be controlled independently of the screen background, and some early implementations even included collision detection, multicoloring, layering, and other features. Many early game systems made use of sprites; the Atari 2600, for example, had five movable objects: two players, two missiles, and one ball. Texas Instruments was the first company to use the term "sprite." A sprite was part of the computer's hardware and was used specifically to display important game objects independently of the background. Usually, the hardware offered only a limited number of rather small sprites.
Understanding the differences between sprites and blitting
Many popular 1980s game systems and computers (such as the Atari 520ST and Apple IIGS) had no hardware sprites. This required game developers to use tile sheets and blitting techniques in software to achieve some of the same features as hardware sprites. The main difference between the two was that hardware sprites were written to the display separately from the rest of the bitmapped screen. Some later computers (the Amiga and the Atari STE) actually had hardware blitting capabilities that combined the best of both worlds: fast bit block transfers rendered to a bitmapped screen. Usually, systems with hardware blitting replicated sprite capabilities with custom chips that essentially moved bits around very quickly in memory but still displayed the screen as one big bitmap, with no separate sprite channels independent of the screen.
On systems with no hardware sprites or hardware blitting capabilities (or systems with too few of either), software techniques were used to replicate the same thing. These software techniques placed maps of bits onto the screen in a certain order, many times a second, to achieve a blitted, animated screen. This is essentially what we do in Flash when we blit: we are replicating a classic software blit.
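As a rough sketch of that idea in ActionScript 3 (the class and variable names here, such as BasicBlit, canvas, and playerImage, are illustrative rather than taken from the text), a single BitmapData can serve as the "screen": every frame it is erased and a small image is copied onto it at a new position.

package {
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.geom.Point;

    // Minimal software blit: one BitmapData acts as the screen, and every
    // frame we clear it and copy the player image onto it at a new spot.
    public class BasicBlit extends Sprite {
        private var canvas:BitmapData = new BitmapData(320, 240, false, 0xFF000000);
        private var playerImage:BitmapData = new BitmapData(16, 16, false, 0xFFFF0000);
        private var xPos:Number = 0;

        public function BasicBlit() {
            addChild(new Bitmap(canvas));             // show the canvas once
            addEventListener(Event.ENTER_FRAME, render);
        }

        private function render(e:Event):void {
            canvas.lock();                            // defer screen updates
            canvas.fillRect(canvas.rect, 0xFF000000); // erase last frame
            canvas.copyPixels(playerImage, playerImage.rect,
                              new Point(xPos, 100));  // blit at new position
            canvas.unlock();                          // push one refresh
            xPos = (xPos + 2) % canvas.width;
        }
    }
}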
Tile sheets (or other image-map representations) were used with both hardware- and software-driven sprites and/or blitted memory maps of bits. In Flash, we will use tile sheets too (though there are certainly other methods of storing blit data). No matter what system is used (hardware sprites, hardware blitting, or software systems), blitting to the screen requires you to exercise control over the screen rendering to achieve the effects you want.
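As a hedged sketch of how a tile sheet might feed a blit in ActionScript 3 (the grid layout and the helper name blitTile are assumptions, not something defined in the text), each tile occupies one cell of a larger BitmapData, and a source Rectangle selects which cell to copy:

import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;

// Copies tile number "index" from a sheet laid out in a grid of
// tileWidth x tileHeight cells onto a destination BitmapData.
function blitTile(sheet:BitmapData, dest:BitmapData, index:int,
                  tileWidth:int, tileHeight:int, destX:int, destY:int):void {
    var columns:int = int(sheet.width / tileWidth);   // tiles per row in the sheet
    var sourceRect:Rectangle = new Rectangle(
        (index % columns) * tileWidth,                // column -> x offset
        int(index / columns) * tileHeight,            // row    -> y offset
        tileWidth, tileHeight);
    dest.copyPixels(sheet, sourceRect, new Point(destX, destY));
}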
Bringing Flash into the mix
In Flash, Sprite is a built-in class. The Flash Player engine renders it, so you don't have to worry about manually refreshing the screen when a sprite is moved or changed. Unlike classic hardware sprites, these software-only sprites are not provided by hardware; they are redrawn onto the display background every frame. Flash sprites share only their name with classic hardware sprites and are essentially a MovieClip without a timeline. Flash uses a technique called screen invalidation to mark the areas of the screen that need to be refreshed, rather than refreshing the entire screen with the vector render engine every frame.
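For contrast, here is a minimal display-list sketch (illustrative names only, not code from the text): a Sprite is drawn once and then simply moved each frame, and the player's renderer invalidates and redraws the affected region on its own.

import flash.display.Sprite;
import flash.events.Event;

// A display-list Sprite: draw it once, then just change x each frame.
// Flash Player invalidates the changed region and redraws it for us.
var box:Sprite = new Sprite();
box.graphics.beginFill(0x3366FF);
box.graphics.drawRect(0, 0, 16, 16);
box.graphics.endFill();
addChild(box);

addEventListener(Event.ENTER_FRAME, function(e:Event):void {
    box.x = (box.x + 2) % stage.stageWidth;   // no manual blit or refresh needed
});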