[Figure: inputs x_1, x_2, …, x_n with associated weights and a bias feed a summing junction Σ followed by an activation function Φ, producing the output o(x_1, x_2, …, x_n).]
Figure 14. Artificial neuron.
[Figure: inputs x_1, x_2, …, x_n feed an input layer, two hidden layers, and an output layer producing o(x_1, x_2, …, x_n).]
Figure 15. Proposed two-hidden layer feed-forward artificial neural network.
in web proxy caching to determine object cachability. Inputs are normalized so that all values fall into the interval [-1, 1] by using a simple linear scaling of the data, as shown in Equation 2, where x and y are respectively the data values before and after normalization, x_min and x_max are the minimum and maximum values found in the data, and y_max and y_min define the normalized interval so that y_min <= y <= y_max. This can speed up learning for many networks.

y = y_min + (y_max - y_min) × (x - x_min) / (x_max - x_min)    (2)
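Equation 2 can be sketched in Python as follows; the function name and the default target interval of [-1, 1] are illustrative, matching the normalization described above:

```python
def normalize(x, x_min, x_max, y_min=-1.0, y_max=1.0):
    """Linearly scale x from [x_min, x_max] into [y_min, y_max] (Equation 2)."""
    return y_min + (y_max - y_min) * (x - x_min) / (x_max - x_min)

# The endpoints of the data range map to the endpoints of the target interval:
# normalize(0, 0, 10) gives -1.0, and normalize(10, 0, 10) gives 1.0.
```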
Recency values for each processed tile request are computed as the amount of time elapsed since the previous request for that tile was made. Recency values calculated this way do not address the case in which a tile is requested for the first time. Moreover, measured recency values could be too disparate to be reflected on a linear scale.
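A minimal sketch of per-tile recency tracking, assuming a dictionary keyed by a tile identifier; returning None for a first-time request is one illustrative way to surface the unaddressed case mentioned above, not the method used in the original system:

```python
import time

def make_recency_tracker():
    """Return a function that reports, per tile, the time since its previous request."""
    last_seen = {}  # tile_id -> timestamp of the most recent request

    def recency(tile_id, now=None):
        # 'now' may be injected for testing; defaults to the current wall-clock time.
        now = time.time() if now is None else now
        prev = last_seen.get(tile_id)
        last_seen[tile_id] = now
        # First-time requests have no previous timestamp, so recency is undefined.
        return None if prev is None else now - prev

    return recency
```

In practice the undefined first-request case and the wide spread of raw recency values would both need a policy (for example, a sentinel value or a nonlinear rescaling) before the values are fed to the network.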