1 Introduction
Behavioral studies commonly rely upon extensive time-series observation of animals, and characterization of their movement, activities, and social interactions. Historically this involved scientists (or their students) recording observations by hand, a laborious and error-prone process. More recently, automation has promised to dramatically increase the quantity and detail of data collected, and a variety of automated-tracking methods have become popular, for example the CTRAX ethomics software [1] and the proprietary EthoVision [2].
Most available solutions demand restricted experimental conditions that may not be desirable for the question of interest, or feasible in the field (or even the lab). For example, in Drosophila melanogaster experiments it is common to restrict the possibility of flight and to use a backlit glass substrate for contrast [1]. A majority of D. melanogaster social interactions occur
on food, and glass is not representative of their normal habitat. Additionally, many tracking
algorithms perform poorly when the number of objects being tracked is not fixed. In such con-
texts, it is difficult to determine whether a large “blob” of pixels in fact represents a single
object or two overlapping objects. Such close contact happens commonly during aggression,
courtship and mating events.
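One common way to make the single-fly-versus-overlap decision concrete is an area heuristic. The sketch below is purely illustrative: the function name, the typical single-fly area, and the tolerance are all hypothetical values, not taken from the paper; a real pipeline would calibrate them from labeled blobs.

```python
# Hypothetical heuristic: classify a blob by pixel area as noise, a single
# fly, or a likely multifly (overlapping) blob. All thresholds are made up
# for illustration and would need calibration against labeled data.

def classify_blob(area_px, single_fly_area=220.0, tolerance=0.5):
    """Return 'noise', 'single', or 'multi' based on blob area (pixels)."""
    if area_px < single_fly_area * (1 - tolerance):
        return "noise"   # too small: likely a spurious artifact
    if area_px <= single_fly_area * (1 + tolerance):
        return "single"  # close to one typical fly body
    return "multi"       # large enough to be two or more overlapping flies

labels = [classify_blob(a) for a in (40.0, 230.0, 520.0)]
# -> ['noise', 'single', 'multi']
```

Area alone cannot resolve every case (two flies in close contact can present a surprisingly compact blob), which is why the trajectory-assembly and error-flagging stages described later are needed.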
We are particularly interested in describing spontaneous group assembly and characterizing the resultant behavior within those groups. That is, we need to analyze setups with variable numbers of flies that frequently come into close contact. As a test-case, we consider data from a set
of experiments in which we recorded fly behavior in an environment consisting of four food
patches, modeled on a published experiment conducted with still cameras [3]. Each patch was recorded independently, and flies could freely move among patches, or be off patch (and thus not recorded). To model group assembly, we need to accurately count individuals on patches, and measure joining and leaving. We are currently able to detect objects (blobs, or putative flies) in video frames against a static background. This method is designed to be relatively robust to nonoptimal experimental conditions.
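The joining-and-leaving bookkeeping can be sketched in a few lines. This is a naive illustration, not the paper's method: it assumes a reliable per-frame fly count is already available (exactly what the multifly-blob problem makes nontrivial), and the count trace below is hypothetical.

```python
# Sketch: count join and leave events on one food patch from a per-frame
# fly-count time series. Assumes counts are already accurate per frame.

def count_transitions(counts):
    """Return (joins, leaves): a rise of k flies between consecutive
    frames counts as k join events; a drop of k counts as k leaves."""
    joins = leaves = 0
    for prev, curr in zip(counts, counts[1:]):
        if curr > prev:
            joins += curr - prev
        elif curr < prev:
            leaves += prev - curr
    return joins, leaves

# Hypothetical trace: two flies arrive, one leaves, another arrives.
frame_counts = [0, 1, 1, 2, 2, 1, 2]
joins, leaves = count_transitions(frame_counts)  # -> (3, 1)
```

In practice such counts would also need smoothing, since a single missed detection in one frame would otherwise register as a spurious leave-then-join pair.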
Behavioral annotation requires that we move from static blobs to individual-fly identification and tracking. Here, we build upon our work presented in [4], and describe a three-stage
process from video processing to behavioral annotation. First, we present an algorithm that
enables us to assemble trajectories even through multifly blobs. Second, we utilize these trajectories in freely available machine-learning behavioral-annotation software. The Janelia
Automatic Animal Behavior Annotator (JAABA) is a commonly used animal-behavior annotation software [5]. We use JAABA to manually flag errors in our tracking algorithm for “single-fly” versus “multifly” blobs. This enables subsequent trajectory correction and behavioral annotation. Finally, from the subset of trajectories consisting of high-likelihood single-fly blobs,
we train a sex classifier to distinguish males from females. We also train a chasing classifier, which together with sex annotation allows us to score important social behaviors, namely courtship and aggression.
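The classifier stage can be illustrated with a small stand-in. The section does not specify the learner or features used, so everything below is hypothetical: a scikit-learn random forest trained on synthetic per-blob features (body area, body length, mean speed), exploiting the fact that D. melanogaster females are typically larger than males.

```python
# Illustrative stand-in for the sex-classifier stage: train a random
# forest on hypothetical per-blob features from synthetic data. Feature
# names, distributions, and the choice of learner are assumptions, not
# taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Columns: [body_area_px, body_length_px, mean_speed]; females drawn
# slightly larger than males, mimicking real sexual size dimorphism.
females = rng.normal([260.0, 24.0, 3.0], [20.0, 2.0, 1.0], size=(200, 3))
males = rng.normal([200.0, 20.0, 4.0], [20.0, 2.0, 1.0], size=(200, 3))
X = np.vstack([females, males])
y = np.array([1] * 200 + [0] * 200)  # 1 = female, 0 = male

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new high-likelihood single-fly blob.
pred = clf.predict([[255.0, 23.5, 3.2]])[0]
```

A chasing classifier would follow the same pattern but with per-frame pairwise features (relative speed, heading difference, inter-fly distance) rather than static body measurements.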
2 Methods
Videos are recorded using four high-resolution Grasshopper digital video cameras (Point
Grey Research Inc., Richmond, Canada) simultaneously filming individual patches at 30 Hz,
RGB, and 800 × 600 resolution. Videos are processed as single frames, by identifying blobs against an averaged background [6]. Blobs may contain from one to many individual flies, or be spurious artifacts. Features of the blobs are extracted using the cvBlobslib package [7].
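The per-frame detection step described above (averaged background, then blob identification) can be sketched with toy data. The pipeline itself uses cvBlobslib for feature extraction; the NumPy-only version below is only an illustration of the idea: average the frames to estimate the static background, threshold the absolute difference, and group 4-connected foreground pixels into blobs.

```python
# Toy sketch of blob detection against an averaged background, NumPy only.
# The real pipeline extracts blob features with cvBlobslib; here we just
# return the pixel count of each connected foreground component.
import numpy as np

def detect_blobs(frame, background, thresh=30):
    """Label 4-connected foreground components; return their pixel counts."""
    fg = np.abs(frame.astype(int) - background.astype(int)) > thresh
    labels = np.zeros(fg.shape, dtype=int)
    sizes = []
    for y, x in zip(*np.nonzero(fg)):
        if labels[y, x]:
            continue  # pixel already belongs to a labeled blob
        sizes.append(0)
        stack = [(y, x)]
        labels[y, x] = len(sizes)
        while stack:  # flood-fill this component
            cy, cx = stack.pop()
            sizes[-1] += 1
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < fg.shape[0] and 0 <= nx < fg.shape[1]
                        and fg[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = len(sizes)
                    stack.append((ny, nx))
    return sizes

# Hypothetical data: three 10x10 frames of a uniform scene, with a bright
# 2x2 "fly" present only in the last frame.
frames = [np.full((10, 10), 50, dtype=np.uint8) for _ in range(3)]
frames[2][4:6, 4:6] = 200
background = np.mean(frames, axis=0).astype(np.uint8)  # averaged background
blob_sizes = detect_blobs(frames[2], background)       # -> [4]
```

Note that averaging frames in which flies are present contaminates the background estimate slightly; with 30 Hz video and mobile flies, averaging over many frames makes this effect small in practice.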