were considered “good enough” for caching needs at the time it was published in 2003.
However, the explosion of web map traffic did not happen until a few years later.
In this section, we propose a cache replacement algorithm that uses a neural network to
estimate the probability that a tile will be requested within a given time window, based on the
previously discussed properties of tile requests: recency of reference, frequency of reference,
and size of the referenced tile [19, 20]. Tiles that are unlikely to be requested soon
are considered good candidates for replacement.
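As a sketch of this idea, the policy can be expressed as: score every cached tile with the trained network and evict those with the lowest predicted probability of a near-term request. The `cache` structure and `model.predict` interface below are assumptions of this sketch, not part of the original implementation.

```python
def select_victims(cache, model, n_evict):
    """Rank cached tiles by the network's predicted probability of a
    near-term request and return the least likely ones as eviction
    candidates. `cache` maps tile ids to (recency, frequency, size)
    feature tuples; `model.predict` returns a probability in [0, 1].
    Both interfaces are assumed for illustration."""
    scored = sorted(cache.items(),
                    key=lambda item: model.predict(item[1]))
    return [tile_id for tile_id, _ in scored[:n_evict]]
```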
5.3.1. Related work
The use of neural networks for cache replacement was first introduced by Khalid [21] with the
KORA algorithm. KORA uses a backpropagation neural network to guide line/block
replacement decisions in cache, identifying and subsequently discarding dead lines in cache
memories. It builds on earlier work [22], which suggested the use of a shadow directory to
take a longer reference history into account when making LRU decisions. Later, an improved
version, KORA-2, was proposed [23, 24], and other algorithms based on KORA followed [25, 26].
A survey on applications of neural networks and evolutionary techniques in web caching can
be found in [27]. The works in [28-32] propose the use of backpropagation neural networks
in Web proxy caches for making replacement decisions. A predictor that learns the patterns
of Web pages and predicts future accesses is presented in [33]. [34] discusses the use of neural
networks to support the adaptivity of the Class-based Least Recently Used (C-LRU) caching
algorithm.
5.3.2. Neural network cache replacement
Artificial neural networks (ANNs) are inspired by the observation that biological learning
systems are composed of very complex webs of interconnected neurons. In the same way,
ANNs are built from a densely interconnected group of units. Each artificial neuron takes a
number of real-valued inputs (analogous to a neuron's dendrites) and computes a linear
combination of these inputs. The sum is then passed through a non-linear function, known as
the activation function or transfer function, which outputs a single real value, as shown in Figure 14.
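A minimal illustration of such a unit, using the common sigmoid as the non-linear activation (the specific activation used here is an assumption; Figure 14 shows the general scheme):

```python
import math

def neuron(inputs, weights, bias):
    # Linear combination of the real-valued inputs.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Non-linear activation (sigmoid here), producing a
    # single real-valued output.
    return 1.0 / (1.0 + math.exp(-s))
```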
In this work, a special class of layered feed-forward network known as the multilayer perceptron
(MLP) has been used, in which the units at each layer are connected to all units of the preceding
layer. The network has an input layer with three inputs, two hidden layers of three nodes
each, and a single output (Figure 15). Following the standard convention, it can be
labeled as a 3/3/3/1 network. It is known that any function can be approximated to arbitrary
accuracy by a network with three layers of units [35].
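A forward pass through such a 3/3/3/1 network can be sketched as follows; the sigmoid activation and the parameter layout are assumptions of this sketch, and the actual weight values are learned as described next.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """Forward pass through a fully connected feed-forward network.
    For the 3/3/3/1 topology, `weights` holds one matrix per layer,
    of shapes (3, 3), (3, 3) and (1, 3), and `biases` the matching
    bias vectors."""
    a = x  # x: the three inputs (recency, frequency, size)
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a  # single output: estimated request probability
```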
Training an artificial neuron involves choosing values for the weights so that the desired
output is obtained for the given inputs. The network weights are adjusted through supervised
learning, using subsets of the trace data sets for which the classification output of each request
is known. Backpropagation with momentum is used as the training algorithm. The parameters
used for the proposed neural network are summarized in Table 4.
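The momentum variant augments plain gradient descent by adding a fraction of the previous weight update to the current one, which damps oscillations and speeds convergence. A sketch of the update step follows; the learning rate and momentum values shown are placeholders, the actual hyperparameters being those listed in Table 4.

```python
def momentum_update(weights, grads, velocity, lr=0.1, momentum=0.9):
    """One backpropagation-with-momentum step: each weight moves
    along the negative error gradient plus a fraction of its
    previous update. `lr` and `momentum` here are illustrative."""
    for i in range(len(weights)):
        velocity[i] = momentum * velocity[i] - lr * grads[i]
        weights[i] = weights[i] + velocity[i]
    return weights, velocity
```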
The neural network inputs are three properties of tile requests: recency of reference, frequency
of reference, and the size of the referenced tile. These properties are known to be important
factors in making replacement decisions.
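For concreteness, the three inputs might be derived from a request trace as follows; the history structure and field layout are hypothetical, introduced only for this sketch.

```python
import time

def tile_features(tile_id, history, now=None):
    """Compute (recency, frequency, size) for a tile from its request
    history. `history` maps tile ids to (timestamps, size_bytes),
    an assumed structure for illustration."""
    if now is None:
        now = time.time()
    timestamps, size = history[tile_id]
    recency = now - timestamps[-1]   # seconds since the last request
    frequency = len(timestamps)      # number of past requests
    return recency, frequency, size
```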