stated that given the complexity of real rendering applications today, heuristics may
fail in controlling rendering time. Haines [13] also describes this trend:
“Perhaps one of the most exciting elements of computer graphics is that some of the
ground rules change over time. Physical and perceptual laws are fixed, but what was
once an optimal algorithm might fall by the wayside due to changes in hardware, both
on the CPU and GPU. This gives the field vibrancy: we must constantly re-examine
assumptions, exploit new capabilities, and discard old ways.”
Based on these findings, dissecting the rendering process into distinct compo-
nents that contribute to rendering cycle time is no trivial task. Tack et al. [18] did not
consider overhead time in their performance model because of the complexity and
additional costs it represented. The heuristics proposed in Wimmer and Wonka's
work [19] varied in performance across applications. This implies that unless an
application is built specifically to fit their proposed framework, it may not be
easy to achieve stable frame rates across a broader range of applications.
Heuristics ignore non-linearity in their formulation, that is, they assume that func-
tional relationships are always linear. This is unrealistic in practical applications
because of the underlying hardware. Our experiments have shown that the time taken
to render a vertex varies at different total processed vertex counts. The work of Lakhia
et al. on interactive rendering [22] demonstrated that texture size has a non-linear rela-
tionship with the time taken to render a 3D object. Finally, heuristics face the same
challenges as other frame rate control mechanisms in terms of balancing qualitative
requirements such as visual hysteresis [23] and rendering performance.
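The mismatch between a linear heuristic and super-linear hardware behaviour can be sketched numerically. The functions and all coefficients below are invented for illustration; the "knee" stands in for effects such as cache or bandwidth limits, which the original experiments observed as per-vertex render time varying with total vertex count.

```python
# Hypothetical illustration: a linear per-vertex heuristic versus a
# non-linear cost model. All numbers here are invented assumptions.

def linear_heuristic(vertices, per_vertex_us=0.02):
    """Heuristic estimate: render time grows linearly with vertex count."""
    return per_vertex_us * vertices

def nonlinear_cost(vertices, knee=100_000):
    """Synthetic 'measured' cost: per-vertex time rises once the vertex
    count exceeds a cache/bandwidth knee (coefficients are illustrative)."""
    base = 0.02 * vertices
    overflow = max(0, vertices - knee)
    return base + 1e-8 * overflow ** 2  # super-linear beyond the knee

for n in (50_000, 500_000, 1_000_000):
    est, actual = linear_heuristic(n), nonlinear_cost(n)
    print(f"{n:>9} vertices: heuristic {est:8.0f} us, "
          f"model {actual:8.0f} us, "
          f"under-estimate {100 * (actual - est) / actual:5.1f}%")
```

Below the knee the heuristic is exact, but its error grows with scene size, which is one way a fixed linear formulation can fail to keep frame times stable.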
3.2.3 Purpose of Workload Characterisation and Analysis
Apart from heuristics in the quest to limit rendering time, researchers also analysed
rendering workloads with the goal of identifying and eradicating bottlenecks at
runtime. Kyöstilä et al. [16] created a debugger and system analyser for graphics
applications running on mobile hardware. Monfort and Grossman [17] attempted to
characterise the rendering workloads of 3D computer games via a specially devel-
oped tool. In recent years, major graphics hardware vendors have provided software
toolkits that allow low level access to their hardware for debugging and in-depth
analysis of graphics workload with the goal of optimising performance of interactive
applications during runtime.
However, workload characterisation and analysis are not adaptive mechanisms that
will bring about stable frame rates. They are helpful only for tracing bottlenecks and
manifesting an application's rendering workload profile. To utilise these mechanisms
for runtime performance, the process usually involves (1) identification of the problem
(such as the cause of a bottleneck) during runtime followed by (2) manual effort to
eradicate the bottleneck offline and then re-run the same scenario. This approach does
not guarantee performance when the application's usage or 3D scene content changes.
Since interactive rendering usually causes dynamic changes to visual content, the
approach of using workload characterisation and tuning is not generally robust.
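The non-adaptive workflow above can be made concrete with a minimal sketch: step (1) flags over-budget frames at runtime, while step (2), the fix, happens offline by hand, so a change in scene content invalidates the tuning. The frame times and the 33 ms budget are illustrative assumptions, not data from the cited tools.

```python
# Minimal sketch of the characterise-then-tune-offline workflow.
# Traces and the frame budget are hypothetical.

def flag_bottleneck_frames(frame_times_ms, budget_ms=33.3):
    """Return indices of frames whose render time exceeded the budget."""
    return [i for i, t in enumerate(frame_times_ms) if t > budget_ms]

# Trace from the scene the developer profiled; after the offline fix
# every frame fits the budget.
trace_tuned = [16.1, 18.4, 30.2, 17.9]
print(flag_bottleneck_frames(trace_tuned))   # no bottlenecks

# New scene content re-introduces spikes the offline fix never saw,
# and nothing in the pipeline reacts to them at runtime.
trace_new_content = [16.3, 41.7, 55.0, 17.2]
print(flag_bottleneck_frames(trace_new_content))
```

The detection step is cheap, but because the remedy lives outside the running application, the loop is open rather than closed, which is precisely why characterisation alone cannot stabilise frame rates.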