5. Unsupervised Learning
The example networks presented so far were designed manually to highlight different features of the Neural Abstraction Pyramid architecture. While the manually designed networks are relatively easy to interpret, their utility is limited by the low network complexity: only relatively few features can be designed by hand. If multiple layers of abstraction are needed, the design complexity explodes with height, as the number of different feature arrays and the number of potential weights per feature increase exponentially.
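To make the growth argument concrete, here is a back-of-the-envelope sketch. The numbers are purely illustrative assumptions (a base of four feature arrays that doubles per layer, and nine kernel offsets per connection); they are not taken from the text. Because the number of potential weights per feature scales with the number of source feature arrays, the count of candidate weights per layer grows roughly fourfold with each doubling:

```python
def potential_weights(layer, base_features=4, kernel_size=9):
    """Hypothetical count of candidate weights in one layer, assuming the
    number of feature arrays doubles with each layer of abstraction and
    every feature may connect to every source feature at kernel_size
    offsets. Illustrative only."""
    features = base_features * 2 ** layer      # feature arrays in this layer
    return features * features * kernel_size   # source-target pairs x offsets

for layer in range(4):
    print(layer, potential_weights(layer))
# 0 144
# 1 576
# 2 2304
# 3 9216
```

Even under these modest assumptions, a designer would face thousands of candidate weights by the third layer, which is why manual design does not scale.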
Hence, there is no choice but to use machine learning techniques to automatically design the network's connectivity from a dataset that describes the application at hand. Generally, three types of machine learning are distinguished [159]:
Supervised Learning: A sequence of input/output pairs (x1, y1), (x2, y2), ..., (xN, yN) is given to the learning machine. Its goal is to produce the correct output y when it is confronted with a new input x.
Unsupervised Learning: The machine sees only the input sequence x1, x2, ..., xN. Its goal is to build representations that can be used for reasoning, decision making, prediction, communication, and other tasks.
Reinforcement Learning: The learning machine is now a situated agent that can produce actions a1, a2, ..., aN which affect the state of the world around it and hence its later inputs x. The agent receives rewards r1, r2, ..., rN and aims to maximize them in the long term.
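The contrast between the first two settings can be sketched in a few lines of code. This is a toy illustration on 1-D data, not anything from the text: the supervised learner is a nearest-neighbour memorizer of (x, y) pairs, while the unsupervised learner is a tiny k-means that sees only the inputs and discovers cluster structure on its own.

```python
def supervised_fit_predict(pairs, x_new):
    """Nearest-neighbour learner: stores (x_i, y_i) pairs and predicts the
    output of the closest stored input for a new x."""
    x, y = min(pairs, key=lambda p: abs(p[0] - x_new))
    return y

def unsupervised_centroids(xs, k=2, steps=10):
    """1-D k-means: only the input sequence xs is ever seen, no targets."""
    centers = xs[:k]
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for x in xs:
            # assign each input to its nearest current center
            clusters[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        # move each center to the mean of its assigned inputs
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

print(supervised_fit_predict([(0.0, "a"), (1.0, "b")], 0.9))  # prints b
print([round(c, 2) for c in unsupervised_centroids([0.1, 0.2, 0.9, 1.0])])
# prints [0.15, 0.95]
```

Reinforcement learning is omitted here because, as noted below, it additionally requires an environment that reacts to the agent's actions.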
Reinforcement learning [223] requires an agent acting within a world. It is much more general than the other two types of learning but cannot be applied to a perception network alone. If the Neural Abstraction Pyramid were complemented by an inverse pyramidal network that expands abstract decisions into concrete actions, reinforcement learning would be a promising technique for training that agent.
Supervised learning is covered in the next chapter. The remainder of this chapter discusses how unsupervised learning techniques can be applied in the Neural Abstraction Pyramid framework. The chapter is organized as follows: in the next section, I briefly discuss several techniques for unsupervised learning. Then, an algorithm for learning a hierarchy of sparse features in the Neural Abstraction Pyramid is proposed. In Section 5.3, this algorithm is applied to a dataset of handwritten digits. The emerging features are used as input for a supervised digit classifier in Section 5.4.