of colliding two protons head-on at nearly the speed of light is incredibly difficult.
The vast majority of collisions occurring in the experiment are low-energy glancing
collisions that are unlikely to produce interesting data. Indeed, one of the common
goals of HEP system designers is to maximize the accelerator's “luminosity”, a
measure of the efficiency with which collision events occur each time the accelerator
causes its particle beams to cross. Beams are not continuously distributed streams
of particles, but are in fact made up of a series of spaced particle clusters. The rate
at which these clusters impact the clusters in the opposing beam is referred to as the
“bunch crossing rate.” Together, the luminosity and bunch crossing rate determine
the collision event rate, and consequently the processing throughput needed to
analyze the events that are produced. Given that most of the collision events are
unlikely to contain interesting data, recording all of these events would be extremely
inefficient in terms of storage, processing resources, and power consumption.
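As a rough sketch of how these quantities combine: the collision event rate is the product of the instantaneous luminosity and the interaction cross-section, and dividing that rate by the bunch crossing rate gives the average number of collisions per beam crossing. The numeric values below (an LHC-like luminosity and an approximate inelastic proton-proton cross-section) are illustrative assumptions, not figures taken from the text:

```python
# Illustrative sketch: event rate from luminosity and cross-section.
# All numeric values are assumed, LHC-like numbers, not from the text.
luminosity = 1.0e34           # instantaneous luminosity, cm^-2 s^-1
cross_section = 8.0e-26       # inelastic pp cross-section (~80 mb), cm^2
bunch_crossing_rate = 40.0e6  # bunch crossings per second (25 ns spacing)

event_rate = luminosity * cross_section    # collisions per second
pileup = event_rate / bunch_crossing_rate  # mean collisions per crossing

print(f"event rate: {event_rate:.2e} per second")          # -> 8.00e+08
print(f"mean collisions per crossing: {pileup:.1f}")       # -> 20.0
```

With these assumed values, hundreds of millions of collisions occur every second, which is why recording every event is infeasible and a triggering stage is required.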
The ideal solution to this problem is to create a signal processing system that
can specifically identify and isolate the data produced by scientifically interesting
collisions and discard data from low-energy collisions. The system then relays only
the potentially interesting data to archival storage. In HEP systems, this identifica-
tion process is called “triggering”. CMS accomplishes its triggering function using
a complex signal processing and computational system dubbed the Triggering and
Data Acquisition Systems, or TriDAS [22]. TriDAS is a two-level, hybrid system
of signal processing hardware and physics-analysis software. The frontend Level-1
Trigger is custom, dedicated hardware that reduces the peak sensor data rate from
1 PB/s to 75 gigabytes per second (GB/s). The backend High-Level Trigger is a
software application running on a computing cluster that further reduces the data
rate from 75 GB/s to 100 megabytes per second (MB/s). At 100 MB/s the data
rate is low enough that it can be transferred to archival storage and analyzed offline
with sophisticated algorithms on high-performance workstations. A diagram of the
CMS TriDAS is shown in Fig. 3. This sort of multi-stage architecture, which mixes
specialized front-end processing hardware in early levels with more general and
flexible processing systems in later levels, is a hallmark of modern trigger design,
though the number of levels and the technology used within each level vary. Our
discussion in this chapter focuses on the front-end of the trigger system, since it
resembles a dedicated DSP system far more than the back-end does; the back-end
is closer to a farm of general-purpose computers.
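The two reduction stages can be quantified directly from the rates given above; the short sketch below computes the rejection factor each trigger level achieves (taking 1 PB = 10^15 bytes):

```python
# Data-rate reduction through the two CMS trigger levels (rates from the text).
sensor_rate = 1.0e15  # peak sensor output, 1 PB/s in bytes per second
level1_rate = 75.0e9  # Level-1 Trigger output, 75 GB/s
hlt_rate = 100.0e6    # High-Level Trigger output, 100 MB/s

level1_reduction = sensor_rate / level1_rate  # achieved in dedicated hardware
hlt_reduction = level1_rate / hlt_rate        # achieved in cluster software
total_reduction = sensor_rate / hlt_rate      # overall rejection factor

print(f"Level-1 reduction: {level1_reduction:,.0f}x")  # ~13,333x
print(f"HLT reduction:     {hlt_reduction:,.0f}x")     # 750x
print(f"Total reduction:   {total_reduction:.0e}x")    # 1e7x
```

The split is instructive: the hardware level must absorb a rejection factor of roughly 13,000 at full sensor bandwidth, leaving a more tractable factor of 750 for the software level.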
To decide which data to keep versus discard, triggering systems must identify
discrete particles and particle groups created by the collisions. A perfect system
would record data from only those collisions that exhibit the physical phenomena
that are being studied in the experiment. However, due to practical constraints on
how quickly the data can be analyzed and how much data can be economically
stored, it is not feasible to create perfect triggering systems for experiments with
high collision rates. In real systems, designers manage hardware cost by placing a
threshold on the number of particles or particle groups that will be retained from
any one event. The system sorts particles and groups by their energy levels. Data for
the top candidates (up to the threshold) is retained; the remaining data is discarded.
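A minimal sketch of this retention policy is straightforward: sort the reconstructed candidates by energy and keep only the top few. The record fields and function name below are hypothetical, chosen purely for illustration:

```python
import heapq

def select_candidates(particles, threshold):
    """Keep the `threshold` highest-energy candidates; discard the rest.

    Illustrative sketch of threshold-based retention; the record layout
    (dicts with 'id' and 'energy' keys) is an assumption, not CMS's format.
    """
    return heapq.nlargest(threshold, particles, key=lambda p: p["energy"])

# Hypothetical candidates reconstructed from one collision event:
event = [
    {"id": 1, "energy": 42.0},
    {"id": 2, "energy": 7.5},
    {"id": 3, "energy": 88.1},
    {"id": 4, "energy": 19.3},
]

kept = select_candidates(event, threshold=2)
print([p["id"] for p in kept])  # -> [3, 1]: the two highest-energy candidates
```

Fixing the threshold bounds the per-event output size and sorting work, which is what makes the hardware cost of the trigger predictable regardless of how busy an individual event is.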