drives. Since these components are shared amongst applications and the operating
system, the kernel must restrict direct access. For a user mode application to
perform these functions, it must trigger a switch from user to kernel mode,
passing all of the information needed to carry out the operation on its behalf.
The user/kernel mode boundary is therefore a good place to put a monitor for
code execution.
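As a minimal sketch of the idea, the following Python fragment interposes a monitor on the `open()` call, the point where a user-mode program requests file I/O from the operating system. A real monitor would hook the system-call interface itself (for example, from a kernel driver); this user-level analogy only illustrates placing an observer at the boundary the application must cross.

```python
import builtins
import os

# Sketch only: interpose a monitor where the application requests I/O.
_real_open = builtins.open
io_log = []  # record of requested operations

def monitored_open(path, mode="r", *args, **kwargs):
    io_log.append((path, mode))              # observe the request
    return _real_open(path, mode, *args, **kwargs)  # then let it proceed

builtins.open = monitored_open

# Any subsequent open() now passes through the monitor.
with open(os.devnull, "w") as f:
    f.write("hello")

builtins.open = _real_open                   # restore the original call
```

The key design point carries over to a kernel-level monitor: the interposed code observes and records the request, then forwards it unchanged, so the monitored application behaves normally.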
Since replication, by definition, requires I/O operations to complete, monitoring
all I/O requests of non-trusted executables is a natural starting point for this
analysis. Further kernel mode operations would likely need to be studied as well.
These could include functions that allow for memory allocation, environment deter-
mination, and access to other processes. From this monitoring, a graph of the code
could be created with only the minimum required components of analysis. The graph
of the code could potentially contain the same information as the graph obtained from
above, except that it would be produced in run-time.
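A run-time graph of this kind can be sketched as follows. The event names and the trace below are invented for illustration; in practice the trace would come from the kernel-mode monitor described above.

```python
from collections import defaultdict

def build_behaviour_graph(trace):
    """Connect each observed operation to the operation that follows it."""
    graph = defaultdict(set)
    for current, nxt in zip(trace, trace[1:]):
        graph[current].add(nxt)
    return dict(graph)

# Invented trace of operations a replicating program might issue:
# allocate memory, then repeatedly open and write files, then execute.
trace = ["alloc_memory", "open_file", "write_file",
         "open_file", "write_file", "exec_process"]

graph = build_behaviour_graph(trace)
```

Because only the monitored kernel-mode requests enter the graph, it contains just the minimum components needed for analysis, while still capturing the same behavioural structure a static graph of the code would reveal.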
The danger of analyzing the code using this technique is that the code would be al-
lowed to execute freely on the system. For optimum safety, the code should be run in
a sandbox environment. The disadvantage of this is that a clever programmer could
potentially detect that it is in this environment and not decode or decrypt the viral
payload, thereby preventing the analysis.
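One common form of such detection is a timing check: instrumented sandboxes slow execution, and a program can measure that. The sketch below illustrates the evasion; the loop size and threshold are invented illustrative values, not calibrated ones.

```python
import time

SANDBOX_THRESHOLD = 0.5  # seconds; assumed value, for illustration only

def looks_like_sandbox():
    """Time a loop that is cheap on real hardware but slow when instrumented."""
    start = time.perf_counter()
    for _ in range(100_000):
        pass
    return (time.perf_counter() - start) > SANDBOX_THRESHOLD

def run_sample():
    if looks_like_sandbox():
        return "benign-looking behaviour only"  # payload never decoded
    return "decode and run payload"
```

If the check fires, the viral payload is never decrypted, and the sandboxed analysis observes nothing incriminating.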
5.2 Dealing with Dynamically Modified “Good” Code
As systems become more complex, so does the programming behind the applications.
Many known “good” applications and operating system components have known
bugs and, as yet, unknown weaknesses. Although there is an ongoing effort to
eliminate these bugs, viruses and worms can usually exploit many of them before
system administrators apply the appropriate patches or the programmers develop a
correct fix. Therefore it is not safe to say that specific pieces of code that were once “good”
will remain good. In fact, many of these worms affect code while it is executing, dy-
namically changing the operation of the code.
To make matters worse, many of these bugs can be taken advantage of remotely.
Applications that perform many required network functions, such as e-mail
delivery, domain name service, and web page service, are all potentially prone to
attack. Many of these applications run unchecked with “super user” privileges. When
these bugs are exploited they can wreak havoc on computer networks. In the summer
of 2001, Code Red targeted Microsoft IIS servers. The worm infected hundreds of
thousands of servers before the nature of the worm was understood.
Such code can only be monitored at run-time, and any monitoring technique must
not impose a serious performance penalty: many of these applications operate in
high-demand environments, fulfilling thousands of requests per minute. Detection of
a code malfunction and an attempted replication must happen in near real time,
because once the payload is delivered the target host is most likely infected
immediately and already targeting other hosts.
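A detector that fits this constraint must do constant-time work per request. The sketch below flags a process that contacts many distinct hosts within a short window, a common replication signature, without any expensive per-request analysis. The window, threshold, and injectable clock are assumptions made for illustration.

```python
import time
from collections import deque

WINDOW_SECONDS = 1.0       # assumed observation window
MAX_DISTINCT_TARGETS = 3   # assumed threshold, illustration only

class ReplicationDetector:
    def __init__(self, now=time.monotonic):
        self.now = now            # injectable clock, eases testing
        self.events = deque()     # (timestamp, target) pairs

    def observe(self, target):
        """Record one outbound connection; True if it looks like replication."""
        t = self.now()
        self.events.append((t, target))
        while self.events and t - self.events[0][0] > WINDOW_SECONDS:
            self.events.popleft()  # amortised O(1) window maintenance
        return len({tgt for _, tgt in self.events}) > MAX_DISTINCT_TARGETS
```

Because each observation costs only a few queue operations and a small set build, the check can sit on the request path of a high-demand server and still raise an alarm within the window, before the infected host has contacted many further targets.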