|a| and rotation through an angle equal to Arg a in the counterclockwise direction, followed by translation in a direction defined by Arg b through a distance equal to |b|, where z, a, b ∈ C. Evidently, this is an expansion or contraction by a factor |a|.
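The decomposition above can be sketched with Python's built-in complex type; the function name f and the sample values of a and b below are illustrative assumptions, not taken from the text:

```python
import cmath
import math

# A minimal sketch of the affine map f(z) = a*z + b on the complex plane:
# |a| scales, Arg(a) rotates counterclockwise, and b translates the point
# through a distance |b| along the direction Arg(b).
def f(z: complex, a: complex, b: complex) -> complex:
    return a * z + b

# Illustrative choice: a = (1/2) * e^{i*pi/2} halves lengths and rotates
# by 90 degrees; b shifts the result by the fixed vector 0.3 + 0.4j.
a = 0.5 * cmath.exp(1j * math.pi / 2)
b = 0.3 + 0.4j

# The point z = 1 on the real axis: rotation by pi/2 sends it toward the
# imaginary axis, scaling by 1/2 halves it, then b translates it.
w = f(1 + 0j, a, b)
```

Because multiplication by a complex number is itself a rotation-plus-scaling, no matrices are needed for this transformation.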
In the following two examples, 5.1 and 5.2, different networks (2-M-2) are trained for input-output mapping (refer to Fig. 5.1) over a set of points lying on a line passing through a reference point. The hidden layer of the considered networks contains one CRSP or CRSS, two CRPN, or three conventional neurons, respectively. The generalization is tested over other standard geometric curves like the circle and ellipse.
Example 5.1 Scaling, Rotation, and Translation
This example investigates the behavior of different networks, which learned the composition of all three transformations defined in Eq. (5.2). All networks are run up to 4,500 epochs with CBP (η = 0.001) and 1,000 epochs with the CRPROP algorithm (μ− = 0.4, μ+ = 1.2, Δmin = 10⁻⁶, Δmax = 0.04, Δ0 = 0.01). The learning patterns form a set of points z, which are contracted by a factor α = 1/2, rotated counterclockwise over 3π/4 radians, and displaced by b = (−0.1 + j × 0.2). There are 21 training inputs lying on the line y = x, (−1/√2 ≤ x ≤ 1/√2), referenced at the origin. The training output points lie on the line y = 0.2, (−0.6 ≤ x ≤ 0.4), with reference (−0.1, 0.2). Transformations displayed in Fig. 5.2 show the generalization over a circle with different networks and learning algorithms. The input test points lie on the circle x² + y² = R², with R = 0.9. The desired output points should lie on the circle (x + 0.1)² + (y − 0.2)² = (R/2)², at reference (−0.1, 0.2), where the radius vector of each point is rotated by 3π/4. The rotation of the circle is denoted by a small opening.
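As a sanity check on the geometry of Example 5.1 (a reconstruction for illustration, not part of the original experiments), a short script can generate the 21 training pairs and confirm that the targets fall on the stated output line; the variable names are assumptions:

```python
import cmath
import math

# Example 5.1 mapping: w = a*z + b with a = (1/2)*e^{j*3*pi/4}
# (contraction by 1/2 plus rotation by 3*pi/4) and b = -0.1 + 0.2j.
a = 0.5 * cmath.exp(1j * 3 * math.pi / 4)
b = -0.1 + 0.2j

# 21 training inputs on the line y = x with -1/sqrt(2) <= x <= 1/sqrt(2).
n = 21
lo, hi = -1 / math.sqrt(2), 1 / math.sqrt(2)
xs = [lo + k * (hi - lo) / (n - 1) for k in range(n)]
inputs = [x * (1 + 1j) for x in xs]

# Corresponding targets produced by the transformation.
targets = [a * z + b for z in inputs]

# Every target should lie on the line y = 0.2 with -0.6 <= x <= 0.4,
# matching the training output set described in the text.
assert all(abs(w.imag - 0.2) < 1e-9 for w in targets)
```

The check works because z = x(1 + j) has argument π/4, so multiplying by a rotates it onto the real axis before b lifts it to y = 0.2.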
Example 5.2 Scaling and Rotation
Here, we investigate the behavior of the considered networks, which learned the
composition of rotation and scaling.
f₂(z) = az    (5.3)

where a = αe^(iθ) in Eq. (5.3) rotates the vector z by θ in the counterclockwise direction and dilates or contracts it by a factor α.

This example explores the behavior of different networks and learning algorithms for the mapping with α = 1/2. All networks are run up to 6,500 epochs with CBP (η = 0.003) and 1,000 epochs with the CRPROP algorithm (μ− = 0.4, μ+ = 1.2, Δmin = 10⁻⁶, Δmax = 0.005, Δ0 = 0.01). The 21 learning input patterns, lying on the line y = x − 0.1, (−0.9071 ≤ x ≤ 0.507), are contracted by α = 1/2 and rotated over π/2 radians anticlockwise to output patterns on the line y = −x − 0.5, (−0.553 ≤ x ≤ 0.153), at reference point (−0.2, −0.3). The input test points lying on the ellipse (x + 0.2)²/a² + (y + 0.3)²/b² = 1 would hopefully be mapped to points lying on (x + 0.2)²/(a/2)² + (y + 0.3)²/(b/2)² = 1, at reference (−0.2, −0.3), where a = 0.7, b = 0.3. Transformations displayed in Fig. 5.3 show the generalization over the ellipse.
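The stated input and output lines of Example 5.2 are consistent with the scaling and rotation being taken about the reference point p = (−0.2, −0.3), i.e., w = a(z − p) + p. The following hypothetical sketch assumes that form and verifies the output line; the names are illustrative:

```python
# Example 5.2 mapping, assumed about a reference point p:
# w = a*(z - p) + p with a = (1/2)*e^{j*pi/2} = 0.5j and p = -0.2 - 0.3j.
a = 0.5j
p = -0.2 - 0.3j

# 21 learning inputs on the line y = x - 0.1, -0.9071 <= x <= 0.507.
n = 21
lo, hi = -0.9071, 0.507
xs = [lo + k * (hi - lo) / (n - 1) for k in range(n)]
inputs = [x + 1j * (x - 0.1) for x in xs]

# Contract by 1/2 and rotate by pi/2 anticlockwise about p.
outputs = [a * (z - p) + p for z in inputs]

# Every output should lie on the line y = -x - 0.5, as stated in the text.
assert all(abs(w.imag + w.real + 0.5) < 1e-9 for w in outputs)
```

The input line passes through p (since −0.2 − 0.1 = −0.3), so rotating about p keeps the image a line through the same reference point.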