Fig. 12.22 (a) A one-hidden-layer FNN for image compression, having n neurons at the input and output layers and m neurons at the hidden layer (m < n). (b) Image compression using a one-hidden-layer FNN: the image is divided into square blocks that become input training patterns to the NN.
If the total number of weights between the input and hidden layers, for all the nodes of the hidden layer, plus the weights between the hidden and the output layers, is less than the total number of pixels in the image, compression is achieved. The fewer the number of units in the hidden layer, the higher the degree of compression [117, 118].
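The weight-count argument above can be made concrete with a small numeric sketch. The block size and hidden-layer width below are assumed values chosen for illustration, not taken from the text:

```python
# Hypothetical sizes for a one-hidden-layer compression FNN (assumed values).
n = 64   # input/output units: one 8x8 pixel block, so n = 8 * 8
m = 16   # hidden units, m < n

# Weights from input to hidden layer plus weights from hidden to output layer.
weights = n * m + m * n
print(weights)  # 2048

# Per-block compression: n pixel values are represented by m hidden activations.
ratio = n / m
print(ratio)  # 4.0
```

With these sizes, each 64-pixel block is encoded by only 16 hidden-layer activations, giving a 4:1 compression per block; shrinking m increases the ratio at the cost of reconstruction quality.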
In FNN-based image compression, an image of size L × L pixels is first divided into P square blocks of equal size M_b × M_b. The total number of blocks is P = L²/M_b². Each square block is then arranged into a vector of dimension n × 1, i.e., n = M_b², that is fed to the neural network as an input training pattern. All such vectors are
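The block-preparation step described here can be sketched in NumPy. The image size L and block size M_b below are hypothetical, with M_b chosen to divide L exactly:

```python
import numpy as np

# Assumed sizes for illustration: a grayscale L x L image, block size M_b.
L, M_b = 16, 4
image = np.arange(L * L, dtype=float).reshape(L, L)  # placeholder image data

P = (L * L) // (M_b * M_b)  # total number of blocks, P = L^2 / M_b^2
n = M_b * M_b               # dimension of each input training vector

# Split the image into P non-overlapping M_b x M_b blocks and flatten each
# block into an n-dimensional row vector (one training pattern per row).
blocks = (image.reshape(L // M_b, M_b, L // M_b, M_b)
               .swapaxes(1, 2)
               .reshape(P, n))
print(blocks.shape)  # (16, 16): P = 16 blocks, each a vector of n = 16 pixels
```

Each row of `blocks` is one training pattern; for instance, row 0 contains exactly the pixels of the top-left M_b × M_b block of the image.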