environments is essential. As we use compiler instrumentation, the compiler is
a natural integration point. We use the GCC compiler, which is widely used in
industry. Our instrumentation is generated by a plug-in that can be loaded
into a standard compiler. We have successfully integrated the plug-in into the
build system of a production environment that compiles about a million lines of
C code, simply by adding a compiler wrapper.
Our current implementation uses extensive program instrumentation to track
memory tags and perform state checks at runtime. During compilation, a large
amount of detailed program information is saved into the binary, and generic
instrumentation is added to stop the execution at certain points. At the
beginning of program execution, the saved information and the model description
are used to precompute the effect of each basic block on the data flow. This
information is then used to interpret changes to the memory tags during execution.
This approach avoids the need to recompile for each model, but it incurs a high
performance penalty. It can be argued that dynamic binary instrumentation
could lead to better overall performance and would also allow monitoring of
third-party libraries. However, we decided to use compile-time instrumentation
because we believe that the precision of the analysis and of error reporting
benefits from the rich information extracted from the compiler's intermediate
representation.
Many optimizations must be made to our implementation in order to achieve
acceptable performance for industrial use. We plan to add, e.g., static analyses
to our framework in order to reduce the runtime overhead. Nevertheless, a
considerable amount of extra computation is clearly unavoidable. However, widely
used dynamic analysis tools, such as Valgrind [4], have shown that if the offered
benefits are valuable enough, e.g., the tool finds more errors or makes debugging
easier, even relatively high overhead can be acceptable. Moreover, what often
matters is not the execution time of a single test case but the total execution
time of the test suite. Thus, high overhead can be compensated for with other
solutions, such as test case selection techniques [5] that minimize the number
of tests that need to be run at all.
The proposed approach allows protocol violations to be detected immediately
when they happen. However, a protocol violation itself may merely be a symptom
of the real bug. It is therefore important to help the developer locate the
actual root cause. The rich control-flow information saved into the program
binary allows techniques such as dynamic program slicing to be incorporated
into the framework to support debugging.
4 Conclusions and Future Work
In this paper, we proposed an approach for the runtime detection of incorrect
use of APIs. We further argued that in order for an implementation of such an
approach to be adopted by developers, it must be easy to use and seamlessly
integrated into existing development environments. Moreover, we presented a
framework that integrates our implementation into the widely used GCC compiler.
The work presented in this paper is only a first step. The most obvious
future direction for our research is performance optimization. Especially static