\[
M_{\varphi}(x_1, x_2, \ldots, x_n; \lambda_1, \lambda_2, \ldots, \lambda_n) = \varphi^{-1}\left( \sum_{k=1}^{n} \lambda_k\, \varphi(x_k) \right)
\]

where \(\varphi : \mathbb{R} \to \mathbb{R}\) is a continuous strictly monotonic function and \(\varphi^{-1}\) is its inverse function. The function \(\varphi\) is called a generator of the quasi-arithmetic mean \(M_{\varphi}\), where \(\lambda_k \in [0, 1]\) and \(\sum_{k=1}^{n} \lambda_k = 1\). The class of all quasi-arithmetic means is characterized by the function \(\varphi\). One very notable class is the root-power mean, or generalized mean, which covers the entire interval between the min and max operations. It is defined by the generator \(\varphi(x) = x^d\), with inverse \(\varphi^{-1} : x \mapsto x^{1/d}\), for \(d \in \mathbb{R} \setminus \{0\}\).
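As a concrete illustration of the definition above, the quasi-arithmetic mean can be sketched in a few lines of Python. The function names are hypothetical, and the logarithmic generator used in the example (which recovers the geometric mean) is an illustrative choice, not from the text.

```python
import math

def quasi_arithmetic_mean(xs, weights, phi, phi_inv):
    """Quasi-arithmetic mean: phi_inv(sum_k lambda_k * phi(x_k)).

    `phi` must be continuous and strictly monotonic; the weights are
    assumed to lie in [0, 1] and sum to 1.
    """
    return phi_inv(sum(lam * phi(x) for lam, x in zip(weights, xs)))

# The generator phi(x) = log(x) recovers the geometric mean:
xs = [1.0, 4.0, 16.0]
weights = [1/3, 1/3, 1/3]
g = quasi_arithmetic_mean(xs, weights, math.log, math.exp)
# geometric mean of 1, 4, 16 is (1 * 4 * 16)^(1/3) = 4
```

Other generators reproduce other classical means: the identity gives the weighted arithmetic mean, and \(\varphi(x) = x^d\) gives the root-power mean discussed next.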
The weighted root-power mean (\(M_d\)) is defined as:

\[
M(x_1, x_2, \ldots, x_n; \lambda_1, \lambda_2, \ldots, \lambda_n; d) = \left( \sum_{k=1}^{n} \lambda_k\, x_k^{d} \right)^{1/d} \tag{4.25}
\]
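Eq. (4.25) translates directly into Python; the function name and the sample weights below are illustrative assumptions.

```python
def root_power_mean(xs, weights, d):
    """Weighted root-power (generalized) mean of Eq. (4.25):
    (sum_k lambda_k * x_k**d) ** (1/d), for d != 0."""
    s = sum(lam * x**d for lam, x in zip(weights, xs))
    return s ** (1.0 / d)

xs, w = [2.0, 8.0], [0.5, 0.5]
a = root_power_mean(xs, w, 1)    # arithmetic mean: 5.0
h = root_power_mean(xs, w, -1)   # harmonic mean: 2 / (1/2 + 1/8) = 3.2
q = root_power_mean(xs, w, 2)    # quadratic (RMS) mean: sqrt((4 + 64)/2)
```

The special cases \(d = -1, 1, 2\) shown in the comments correspond to the harmonic, arithmetic, and quadratic means mentioned in the discussion that follows.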
Dyckhoff and Pedrycz [36] discuss the generalized mean as a model of a compensative operator that fits data relatively well. Here, a modifiable degree of compensation is achieved by changing the value of the generalization parameter \(d\). Depending on its value, the model embraces a full spectrum of classical means. In the limit cases \(d \to \pm\infty\), the model behaves as the maximum and minimum operator, respectively. As \(d \to 0\), \(M\) converges to the geometric mean. Similarly, when \(d = -1, 1, 2\), the combined arguments yield their harmonic, arithmetic, and quadratic means, respectively. If we use the generalized mean for aggregation, it is possible to go through all possible variations of means of the input signals to a neuron [35, 36, 38]. This motivated utilizing the idea underlying the weighted root-power mean in Eq. (4.25) to define a new aggregation function for nonconventional neural units in the complex domain. The net potential \(V\) of this complex root-power mean neuron (CRPN) may conveniently be expressed as:

\[
V(z_1, z_2, \ldots, z_n; w_1, w_2, \ldots, w_n; d) = \left( \sum_{k=1}^{n} w_k\, z_k^{d} \right)^{1/d} \tag{4.26}
\]
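A minimal sketch of the net potential of Eq. (4.26) on complex inputs follows. Python evaluates complex powers on the principal branch, which is an implementation assumption here (the text does not fix a branch); the function name and the sample inputs are hypothetical.

```python
def crpn_net_potential(zs, ws, d):
    """Net potential of Eq. (4.26): (sum_k w_k * z_k**d) ** (1/d),
    evaluated on complex inputs with Python's principal-branch powers."""
    s = sum(w * z**d for w, z in zip(ws, zs))
    return s ** (1.0 / d)

zs = [1 + 1j, 2 - 1j]
ws = [0.5 + 0j, 0.5 + 0j]
v1 = crpn_net_potential(zs, ws, 1)  # d = 1 reduces to the usual weighted sum
# v1 == 0.5*(1+1j) + 0.5*(2-1j) = 1.5 + 0j
```

For \(d = 1\) the CRPN net potential collapses to the conventional complex weighted sum, which makes the conventional neuron a special case of this aggregation.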
The weighted root-power mean aggregation operation defines a fundamental class of higher-order neuron units. In comparison to the conventional neuron, it gives more freedom to change the functionality of a neuron by choosing an appropriate value of the generalization parameter '\(d\)'. Now, from Eq. (4.26), the output of the proposed CRPN may be given as:
\[
Y(z_1, z_2, \ldots, z_n; w_1, w_2, \ldots, w_n; d) = f_C\!\left( \left( \sum_{k=1}^{n} w_k\, z_k^{d} \right)^{1/d} \right) \tag{4.27}
\]
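Eq. (4.27) can be sketched as the net potential followed by a complex activation \(f_C\). The split-type sigmoid below, applied to the real and imaginary parts separately, is only one common choice in complex-valued networks and is an assumption here, as are the function names and sample inputs.

```python
import math

def crpn_output(zs, ws, d, f_C):
    """Output of Eq. (4.27): y = f_C((sum_k w_k * z_k**d) ** (1/d))."""
    net = sum(w * z**d for w, z in zip(ws, zs)) ** (1.0 / d)
    return f_C(net)

def split_sigmoid(z):
    # Split-type activation: a real sigmoid on each component (assumed f_C).
    sig = lambda t: 1.0 / (1.0 + math.exp(-t))
    return complex(sig(z.real), sig(z.imag))

y = crpn_output([1 + 1j, 2 - 1j], [0.5, 0.5], 1, split_sigmoid)
```

With \(d = 1\) the net potential is \(1.5 + 0j\), so the output is the split sigmoid of that value; varying \(d\) changes the aggregation without touching the activation.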
The motivation for using (4.27) is that it gives more freedom to change the functionality of a neuron by choosing an appropriate value of the power coefficient \(d\). It is worth indicating that (4.27), presenting the CRPN, is general enough and different existing