Crossing the image-resolution boundary beyond 1 Mpix has processing and connectivity implications for the architecture of the embedded processor in ADAS. At these resolutions, the size of the ISP logic starts to dominate, which is pushing the ISP functions off the sensor die to either the embedded processor or a separate ISP companion device. Secondly, the increase in resolution and frame rate drives demand for a more effective hardware interface offering higher data bandwidth between the imaging sensor and the embedded processor. The MIPI CSI-2 interface is emerging as a solution to the data-bandwidth and pin-count challenges of the parallel camera interface used in all legacy driver assistance systems.
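A back-of-the-envelope calculation illustrates why the parallel interface becomes a bottleneck at these resolutions: the raw data rate follows directly from resolution, frame rate, and bit depth. The sensor figures below (a 1.3-Mpix sensor at 30 fps with 12-bit RAW output) are illustrative assumptions, not values from the text:

```python
# Rough raw-bandwidth estimate for a camera link.
# Sensor parameters are illustrative assumptions, not from the text.
width, height = 1280, 960       # ~1.3 Mpix
fps = 30                        # frames per second
bits_per_pixel = 12             # 12-bit RAW output

bits_per_second = width * height * fps * bits_per_pixel
gbps = bits_per_second / 1e9
pixel_clock_mhz = width * height * fps / 1e6

print(f"Raw payload: {gbps:.2f} Gbit/s")          # ~0.44 Gbit/s before blanking/overhead
print(f"Parallel pixel clock: {pixel_clock_mhz:.1f} MHz")
```

A single MIPI CSI-2 D-PHY data lane running around 1 Gbit/s could carry this payload over one differential pair plus a clock pair, whereas the legacy parallel interface needs 12 data pins plus sync and clock lines toggling at the full pixel clock, which is the pin-count pressure the text refers to.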
3.4.3 Embedded Processors
Constant demand for higher resolution, higher frame rates, algorithm robustness, and lower processing latency is driving an exponential increase in processing and memory-bandwidth requirements. An embedded processor for ADAS needs to deliver high performance while dissipating a minimal amount of heat, and at the same time must meet strict low-cost requirements.
A compromise solution satisfying these conflicting requirements lies somewhere between two extreme architectures: dedicated hardwired accelerators on one end and general-purpose CPUs on the other. Hardwired acceleration offers high performance at low cost but gives low flexibility. The programmability of general-purpose CPUs gives them high flexibility, while impacting their performance or their energy efficiency.
In order to meet the challenging compute targets while dissipating minimal power, ADAS architects have to embrace the power of heterogeneous computing, where each of the heterogeneous elements has a unique strength allowing it to excel at specific types of processing. A flexible architecture is required to cover many functions and to maximize reuse across different product lines.
Most, if not all, vision algorithms start off with processing characterized by repetitive operations at the pixel level with high computational and memory-bandwidth requirements (low-level processing). Typical examples of low-level vision processing functions are image filtering, gradient calculation, edge detection, corner detection, image pyramids, etc. Low-level processing is typically best served by applying a single instruction on multiple data (SIMD). The next processing stage focuses on certain objects or regions of interest that meet particular classification criteria (mid-level processing). Typical examples of mid-level vision processing functions are integral image, feature calculation, classification, optical flow, Hough transform, etc. Mid-level vision is typically best served by some combination of SIMD and multiple instructions on multiple data (MIMD). High-level processing is typically responsible for final decision-making and tracking and takes input from the previous processing stages. High-level vision includes algorithms with high variability in processing and data accesses, characterized by highly conditional processing [14].
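The low-level/mid-level split above can be illustrated with two of the functions the text lists: gradient calculation (the same arithmetic applied to every pixel, a natural SIMD fit) and the integral image (a prefix-sum structure that later stages query per region of interest). The sketch below uses NumPy's vectorized array operations as a stand-in for SIMD hardware; it is an illustrative example, not the pipeline of any specific ADAS processor, and the function names are assumptions of this sketch:

```python
import numpy as np

def sobel_gradient(img):
    """Low-level stage: gradient magnitude via Sobel cross-correlation.
    The same multiply-accumulate runs on every pixel (SIMD-friendly)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w), dtype=np.float32)
    gy = np.zeros((h, w), dtype=np.float32)
    for dy in range(3):                      # 9 shifted whole-image adds:
        for dx in range(3):                  # each iteration is one vector op
            patch = img[dy:h - 2 + dy, dx:w - 2 + dx].astype(np.float32)
            gx[1:-1, 1:-1] += kx[dy, dx] * patch
            gy[1:-1, 1:-1] += ky[dy, dx] * patch
    return np.hypot(gx, gy)

def integral_image(img):
    """Mid-level stage: integral image, so any rectangle's pixel sum later
    costs only a handful of corner lookups instead of a full scan."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

# Toy 8x8 frame with a vertical step edge down the middle.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255

mag = sobel_gradient(img)
ii = integral_image(img)
right_half_sum = ii[7, 7] - ii[7, 3]         # two lookups replace 32 additions

print(mag[4, 4] > mag[4, 1])                 # True: edge response beats flat region
print(right_half_sum)                        # 8160 == 8 * 4 * 255
```

The loop structure mirrors how a SIMD engine would schedule the filter: nine vector-wide multiply-accumulates over shifted views of the frame, rather than a per-pixel scalar loop. The integral image is the classic mid-level example because it converts region queries, which are data-dependent and irregular, into constant-time arithmetic.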