accessing it need to first obtain a lock. We effectively lock the critical resource
instead of the methods accessing it.
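As a rough sketch of what that looks like (the class, field, and method names here are illustrative, not the book's listing), each accessor synchronizes on the shared map rather than being declared synchronized itself:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: one lock, the map itself, guards every access.
    class CommandCache {
        private final Map<String, Object> cache = new HashMap<String, Object>();

        public Object fetch(String key) {
            synchronized (cache) {        // lock the critical resource...
                return cache.get(key);
            }
        }

        public void store(String key, Object value) {
            synchronized (cache) {        // ...not each method that touches it
                cache.put(key, value);
            }
        }
    }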
5.4.2 The ever-growing cache
If we were to implement this solution in front of a database of significant size, our cache would steadily grow until we ran out of memory; we simply have no mechanism for cleaning the cache. A surprising number of commercial applications have this characteristic. Here are some strategies for cleaning the cache periodically:
- For caches of user-related data, either cache user data in the session or flush the session data when sessions expire. Since most web servers allow for notification when a session expires, you can use this event to clean up a command cache as well as session data.
- Time stamp elements of the cache. When a cached item is accessed, update the time stamp. Use a maximum-limit-exceeded exception to trigger a garbage-collection process. This process iterates through the cache, expiring a specified number of the oldest items (see the sketch after this list).
- Instead of an event-driven garbage collector, have a timed garbage collector that periodically expires elements in the cache. This approach also requires the addition of a time stamp to cached entries.
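A minimal sketch of the time-stamp strategy might look like the following; the class name, size limit, and batch size are assumptions for illustration, not values from the text:

    import java.util.HashMap;
    import java.util.Map;

    // Rough sketch of the time-stamp strategy; names and limits are illustrative.
    class TimestampedCache {
        private static final int MAX_ENTRIES = 1000;      // assumed size limit
        private static final int ENTRIES_TO_EXPIRE = 100; // assumed cleanup batch size

        private static class Entry {
            Object value;
            long lastAccess = System.currentTimeMillis();
            Entry(Object value) { this.value = value; }
        }

        private final Map<String, Entry> cache = new HashMap<String, Entry>();

        public synchronized Object get(String key) {
            Entry entry = cache.get(key);
            if (entry == null) {
                return null;
            }
            entry.lastAccess = System.currentTimeMillis(); // refresh on access
            return entry.value;
        }

        public synchronized void put(String key, Object value) {
            if (cache.size() >= MAX_ENTRIES) {
                expireOldest();                            // limit exceeded: collect garbage
            }
            cache.put(key, new Entry(value));
        }

        // Expire a fixed number of the least recently accessed entries.
        private void expireOldest() {
            for (int i = 0; i < ENTRIES_TO_EXPIRE && !cache.isEmpty(); i++) {
                String oldestKey = null;
                long oldestTime = Long.MAX_VALUE;
                for (Map.Entry<String, Entry> e : cache.entrySet()) {
                    if (e.getValue().lastAccess < oldestTime) {
                        oldestTime = e.getValue().lastAccess;
                        oldestKey = e.getKey();
                    }
                }
                cache.remove(oldestKey);
            }
        }
    }

A timed garbage collector, the third strategy, would run the same expiration pass from a background thread on a fixed interval instead of waiting for the size limit to be exceeded.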
5.5 Antipattern: Synchronized Read/Write Bottlenecks
Mark Wells, a former vice president of engineering at Agillion, suggested this
antipattern. In the previous example, our synchronization scheme required us
to lock the hash table objects for every hash table access. This lock is necessary
so that the results of the execution are correct, even if there are multiple
threads. Consider the following class:
class counter {
    public static Integer count = 0;

    public void count() {
        Integer temp = count;   // read the shared value
        temp = temp + 1;        // increment a local copy
        count = temp;           // write the copy back; the three steps are not atomic
    }
}
Table 5.3 shows a possible timeline for this program with two threads of execution, both running the same code.
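The table is the clearest way to see the interleaving, but a small test harness (not part of the original listing) also reproduces the effect: with two threads and no synchronization, the final count usually comes up short.

    // Illustrative harness, not from the book's listing. Two threads each
    // call count() 100,000 times; lost updates leave the total below 200,000.
    public class CounterRace {
        public static void main(String[] args) throws InterruptedException {
            final counter c = new counter();
            Runnable work = new Runnable() {
                public void run() {
                    for (int i = 0; i < 100000; i++) {
                        c.count();
                    }
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Expected 200000, got " + counter.count);
        }
    }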