An introduction to events (iOS 4)

In the previous topic, you learned how to create the basic view controllers that fulfill the controller role of an MVC architectural model. You’re now ready to start accepting user input, because you can now route what users do to the correct object. Users can interact with your program in two ways: by using the low-level event model or by using event-driven actions. In this topic, you’ll learn the difference between the two types of interaction and how to implement them. Then we’ll look at notifications, a third way that your program can learn about user actions.

Of these three models, events provide the lowest-level detail and ultimately underlie everything else (they’re essential for sophisticated programs), so we’ll begin with events.

We briefly touched on the basics of event management in topic 2. But as we said at the time, we wanted to put off a complete discussion until we could cover events in depth; we’re now ready to tackle that job.

The fundamental unit of user input is the touch: a user puts a finger on the screen. A touch may be combined with others into a multitouch or a gesture, but it remains the building block on which everything else is constructed, and it’s the basic unit we’ll examine in this topic. In this section, we’ll look at how touches and events are related. Let’s start by examining the concept of a responder chain.

The responder chain

When a touch occurs in an SDK program, you have to worry about what responds to the event. That’s because SDK programs are built of tens—perhaps hundreds—of different objects. Almost all of these objects are subclasses of the UIResponder class, which means they contain all the functionality required to respond to an event. What gets to respond?


The answer is embedded in the concept of the responder chain. This is a hierarchy of different objects that are each given the opportunity, in turn, to answer an event message.

Figure 6.1 shows an example of how an event moves up the responder chain. It starts out at the first responder of the key window, which is typically the view where the event occurred—where the user touched the screen. As we’ve already noted, this first responder is probably a subclass of UIResponder, the class reference to consult for most of the responder functionality.

Any object in the chain may accept an event and resolve it; when that doesn’t occur, the event moves farther up the list of responders. From a view, an event passes to its superview, then to that superview’s superview, until it eventually reaches the UIWindow object, the superview of everything in your application. It’s useful to note that from the UIWindow downward, the responder chain is the view hierarchy turned on its head; when you build view hierarchies, they do double duty as responder hierarchies.


Figure 6.1 Events are initially sent to the first responder but then travel up the responder chain until they’re accepted.

Although figure 6.1 shows a direct connection from the first responder to the window, there can be any number of objects in this gap in a real-world program.
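You can traverse this chain yourself. Here’s a minimal sketch (the view name someView is a hypothetical stand-in for any on-screen view) that uses UIResponder’s nextResponder method to log every responder above a view:

    // Log each responder above someView: its superviews, then the
    // window, then the application.
    UIResponder *responder = someView;
    while (responder != nil) {
        NSLog(@"Responder: %@", NSStringFromClass([responder class]));
        responder = [responder nextResponder];
    }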

Often, the normal flow of the responder chain is interrupted by delegation. A specific object (usually a view) delegates another object (usually a view controller) to act for it. You already saw this put to use in your table view in topic 5, but you now understand that delegation occurs as part of the normal movement up the responder chain.

First responders and keyboards

Before we leave the topic of responders, we’d like to mention that the first responder is an important concept. Because this first responder is the object that can accept input, it sometimes takes a special action to show its readiness for input. This is particularly true for text objects like UITextField and UITextView, which (if editable) pop up a keyboard when they become the first responder. This has two immediate consequences.

If you want to pop up a keyboard for the text object, you can do so by turning it into the first responder:

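Here’s a minimal sketch, assuming a hypothetical editable text field named nameField:

    // Becoming first responder makes an editable text object
    // pop up its keyboard
    [nameField becomeFirstResponder];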

Similarly, if you want to get rid of a keyboard, you must tell your text object to stop being the first responder:

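Again assuming the hypothetical nameField:

    // Resigning first responder dismisses the text object's keyboard
    [nameField resignFirstResponder];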

We’ll discuss these ideas more when you encounter your first editable text object toward the end of this topic.

If an event gets all the way up the responder chain to the window and the window can’t deal with it, the event moves up to the UIApplication, which most frequently punts it to its own delegate: the application delegate, an object that you’ve been using in every program to date.

Ultimately, you, the programmer, must decide what in the responder chain will respond to events in your program. You should keep two factors in mind when you make this decision: how classes of events can be abstracted together at higher levels in your chain, and how you can build your event management using the concepts of MVC.

At the end of this section, we’ll address how you can subvert this responder chain by further regulating events, but for now let’s build on its standard setup.

Touches and events

Now that you know a bit about how events find their way to the appropriate object, we can dig into how they’re encoded by the SDK. First, we want to offer a caveat: usually you won’t need to worry about this level of detail because the standard UIKit objects generally convert low-level events into higher-level actions for you, as we discuss in the second half of this topic. With that said, let’s look at the nuts and bolts of event encoding.

The SDK abstracts events by combining a number of touches (which are represented by UITouch objects) into an event (which is represented by a UIEvent object). An event typically begins when the first finger touches the screen and ends when the last finger leaves the screen. In addition, it should generally include only those touches that happen in the same view.

In this topic, you’ll work mainly with UITouches (which make it easy to parse single-touch events) and not with UIEvents (which are more important for parsing multitouch events). Let’s lead off with a more in-depth look at each.

UITOUCH REFERENCE

A UITouch object is created when a finger is placed on the screen, moves on the screen, or is removed from the screen. A handful of properties and instance methods can give you additional information on the touch, as detailed in table 6.1.

Table 6.1 Additional properties and methods can tell you precisely what happened during a touch event.

Method or property        Type      Summary

phase                     Property  Returns a touch phase constant, which indicates whether the touch began, moved, ended, or was canceled

tapCount                  Property  The number of times the screen was tapped

timestamp                 Property  When the touch occurred or changed

view                      Property  The view where the touch began

window                    Property  The window where the touch began

locationInView:           Method    The current location of the touch in the specified view

previousLocationInView:   Method    The previous location of the touch in the specified view

Together, the methods and properties shown in table 6.1 offer considerable information about a touch, including when and how it occurred.
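For example, here’s a minimal sketch, placed in a hypothetical UIView subclass, that uses tapCount, locationInView:, and previousLocationInView: to report how far a finger has dragged:

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint now = [touch locationInView:self];
        CGPoint before = [touch previousLocationInView:self];
        // How far did the finger move since the last report?
        NSLog(@"Tap count %d; moved by (%.1f, %.1f)",
              (int)touch.tapCount,
              now.x - before.x, now.y - before.y);
    }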

Only the phase property requires additional explanation. It returns a constant that can be set to one of five values: UITouchPhaseBegan, UITouchPhaseMoved, UITouchPhaseStationary, UITouchPhaseEnded, or UITouchPhaseCancelled. You’ll often want to have different event responses based on exactly which phase a touch occurred in, as you’ll see in the event example.
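As a quick sketch, a handler that receives touches (we’ll meet the methods that deliver them shortly) could branch on the phase like this:

    UITouch *touch = [touches anyObject];  // touches: the NSSet a handler receives
    switch (touch.phase) {
        case UITouchPhaseBegan:
            // A finger just went down; record its starting point
            break;
        case UITouchPhaseMoved:
            // A finger is dragging; track it
            break;
        case UITouchPhaseEnded:
        case UITouchPhaseCancelled:
            // The touch is over (or the system cancelled it); clean up
            break;
        default:
            break;
    }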

UIEVENT REFERENCE

To make it easy to see how individual touches occur as part of more complex gestures, the SDK organizes UITouches into UIEvents. Figure 6.2 shows how these two sorts of objects interrelate.

Just as with the UITouch object, the UIEvent object contains a number of properties and methods that you can use to figure out more information about your event, as described in table 6.2.


Figure 6.2 UIEvent objects contain a set of related UITouch objects.

Table 6.2 The encapsulating event object has a number of methods and properties that let you access its data.

Method or property   Type      Summary

timestamp            Property  The time of the event

allTouches           Method    All event touches associated with the receiver

touchesForView:      Method    All event touches associated with a view

touchesForWindow:    Method    All event touches associated with a window

The main use of a UIEvent method is to give you a list of related touches that you can break down by several means. Whether you want every touch in an event or only the touches over a certain part of the screen, UIEvent methods can provide it. This ends our discussion of event containers in this topic.

Note that all of these methods compact their touches into an NSSet, which is an object defined in the Foundation framework. You can find a good reference for the NSSet at Apple’s developer resources site.
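Here’s a hedged sketch, assuming you’re inside a view controller with a self.view, that pulls the touches for one view out of an event and walks the resulting NSSet with fast enumeration:

    // Report every current touch that belongs to self.view
    NSSet *viewTouches = [event touchesForView:self.view];
    for (UITouch *touch in viewTouches) {
        NSLog(@"Touch at %@",
              NSStringFromCGPoint([touch locationInView:self.view]));
    }
    NSLog(@"%d touch(es) in this view", (int)[viewTouches count]);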

THE RESPONDER METHODS

How do you access touches and/or events? You do so through a series of four different UIResponder methods, which are summarized in table 6.3.

Each of these methods has two arguments: an NSSet of touches that occurred during the phase in question and a UIEvent that provides a link to the entire event’s worth of touches. You can choose to access either one, as you prefer; as we’ve said, we’ll be playing with the bare touches. We’re now ready to dive into an example that demonstrates how to capture touches in a real-life program.

Table 6.3 The UIResponder methods are the heart of capturing events.

Method                       Summary

touchesBegan:withEvent:      Reports UITouchPhaseBegan events when fingers touch the screen

touchesMoved:withEvent:      Reports UITouchPhaseMoved events when fingers move across the screen

touchesEnded:withEvent:      Reports UITouchPhaseEnded events when fingers leave the screen

touchesCancelled:withEvent:  Reports UITouchPhaseCancelled events when the phone is put up to the user’s head, or when some other external event cancels the touches
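To tie the four methods together, here’s a minimal sketch of a UIView subclass (the class name TouchReporterView is our own invention) that overrides each of them:

    #import <UIKit/UIKit.h>

    // A hypothetical view that logs each phase of a touch
    @interface TouchReporterView : UIView
    @end

    @implementation TouchReporterView

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        NSLog(@"Began with %d touch(es)", (int)[touches count]);
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        NSLog(@"Moved");
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        NSLog(@"Ended with tap count %d",
              (int)[[touches anyObject] tapCount]);
    }

    - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
        NSLog(@"Cancelled by the system");
    }

    @end

A subclass that cares about only some phases can override just those methods and let the rest travel up the responder chain.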
