$\forall i,k \in N$, $\forall j_1, j_2 \in L_i$, since the perturbation can be positive or negative, the equation in (5) can be translated to
\[
\frac{\partial \phi}{\partial e_{j_1}} - \frac{\partial \phi}{\partial e_{j_2}} + \alpha \sum_{k \in N} (2e_{j_1} - 2e_{j_2}) = 0.
\]
As the global objective function in (1) is assumed to be convex over the set $V \subset \mathbb{R}$, by applying the mean value theorem for the convex function, we have
\[
\frac{\partial \phi}{\partial e_{j_1}} - \frac{\partial \phi}{\partial e_{j_2}} = H(\phi)\big|_{\xi e_{j_1} + (1-\xi)e_{j_2}}\,(e_{j_1} - e_{j_2}), \tag{6}
\]
where $\xi \in (0,1)$ and $H(\phi)$ is the Hessian matrix of the function $\phi(\cdot)$. Multiplying both sides of equation (6) by $(e_{j_1} - e_{j_2})$ on the left, we have
\[
(e_{j_1} - e_{j_2})\left(\frac{\partial \phi}{\partial e_{j_1}} - \frac{\partial \phi}{\partial e_{j_2}}\right) = H(\phi)\big|_{\xi e_{j_1} + (1-\xi)e_{j_2}}\,(e_{j_1} - e_{j_2})^2. \tag{7}
\]
Substituting equation (7) into equation (5), we have
\[
0 \ge -\frac{\alpha \sum_{k \in N} (2e_{j_1} - 2e_{j_2})^2}{2} = H(\phi)\big|_{\xi e_{j_1} + (1-\xi)e_{j_2}}\,(e_{j_1} - e_{j_2})^2. \tag{8}
\]
According to the nature of the convex function $\phi(\cdot)$, we know that its Hessian matrix is positive semi-definite, that is, $H(\phi)\big|_{\xi e_{j_1} + (1-\xi)e_{j_2}} \ge 0$.
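The convexity argument in (6)-(8) can be sanity-checked numerically. The sketch below uses an illustrative convex function $\phi(e) = e^4$ and arbitrary values of $\alpha$ and $n$ (none of these come from the paper); it verifies that the monotone-gradient term in (7) is nonnegative and that the stationarity condition from the translated equation (5) holds only when $e_{j_1} = e_{j_2}$:

```python
# Numerical sanity check of the argument in (6)-(8).
# For a convex phi, (e1 - e2) * (phi'(e1) - phi'(e2)) >= 0, while the
# stationarity condition forces it to equal a nonpositive quantity, so
# both sides must vanish, i.e. e1 == e2.
# phi(e) = e**4, alpha = 0.5, n = 4 are illustrative, not from the paper.

def dphi(e):          # derivative of phi(e) = e**4
    return 4 * e**3

def lhs(e1, e2):      # monotone-gradient term, always >= 0 for convex phi
    return (e1 - e2) * (dphi(e1) - dphi(e2))

# the gradient of a convex function is monotone: lhs >= 0 on a grid
pts = [x / 10.0 for x in range(-20, 21)]
assert all(lhs(a, b) >= 0 for a in pts for b in pts)

# stationarity residual from the translated equation (5):
# dphi(e1) - dphi(e2) + alpha * n * (2*e1 - 2*e2) = 0
alpha, n = 0.5, 4
def residual(e1, e2):
    return dphi(e1) - dphi(e2) + alpha * n * (2 * e1 - 2 * e2)

# the residual vanishes only on the diagonal e1 == e2
assert residual(1.3, 1.3) == 0.0
assert all(residual(a, b) != 0.0 for a in pts for b in pts if a != b)
```

The grid check mirrors the proof: strict monotonicity of the perturbed gradient leaves $e_{j_1} = e_{j_2}$ as the only solution.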
So equation (8) can be simplified to
\[
0 \ge -\frac{\alpha \sum_{k \in N} (2e_{j_1} - 2e_{j_2})^2}{2} \ge 0,
\]
which implies $\forall i,k \in N$, $\forall j_1, j_2 \in L_i$, we have $e_{j_1} = e_{j_2}$.

In the process of state space design, it is noticed that the sum of the estimations from all the agents regarding any specific agent $k$'s value is equal to $n$ times agent $k$'s value, that is, $\sum_{i \in N} e_i(t) = n v_k(t)$. Coupled with $\forall i,k \in N$, $\forall j_1, j_2 \in L_i$, $e_{j_1} = e_{j_2}$, we can conclude $\forall i,k \in N$, $e_i = v_k$. This completes the proof.

Next, we examine the relationship between the Nash equilibrium and the optimal solution of the distributed optimization problem.

Theorem 3: Model the optimization problem in (1) as the state-based ordinal potential game proposed in Section 4.2 with any positive constant $\alpha$. Suppose the interaction topology is undirected and time-varying, and the sequence of sensing/communication matrices is sequentially complete; then the resulting Nash equilibrium $(x, a) = ((v, e), (v, e))$ is an optimal solution of the distributed optimization problem in (1).
Proof: According to Theorem 2, we know that all the estimations from any agent $i \in N$ regarding the value of any specific agent $k$ are equal to the true value of agent $k$. Therefore, consider the following class of change in the value instead of the change in the estimation. That is, a new action profile $a' = (a'_i, a_{-i}) = ((v'_i, v_{-i}), (e'_i, e_{-i}))$, which can be specifically expressed as $v'_i = v_i + \delta$ and $e'_i = e_i$, where $\forall \delta \in \mathbb{R}$, $v_i + \delta \in V_i$. Accordingly, the change in the local objective function for agent $i$ can be expressed as follows.
\[
\Delta U_i = \sum_{j=1}^{n} s_{ij}\,\phi(v_1, \ldots, v_i + \delta, \ldots, v_n) - \sum_{j=1}^{n} s_{ij}\,\phi(v_1, \ldots, v_i, \ldots, v_n). \tag{9}
\]
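Equation (9) can likewise be checked numerically: if $v$ minimizes a convex $\phi$, then $\Delta U_i \ge 0$ for every perturbation $\delta$, which is the mechanism linking the Nash equilibrium to the optimum. The sketch below uses an illustrative convex objective $\phi(v) = \sum_j (v_j - c_j)^2$ and nonnegative weights $s_{ij}$, all assumed for demonstration rather than taken from the paper:

```python
# Illustrative check of equation (9): at the minimizer of a convex phi,
# Delta U_i >= 0 for every unilateral perturbation delta of agent i's value.
# phi, the targets c, and the weights s are assumptions for this sketch.

def phi(v, c):
    # convex global objective: sum of squared deviations from targets c
    return sum((vj - cj) ** 2 for vj, cj in zip(v, c))

def delta_U(i, delta, v, c, s):
    # change in agent i's local objective per equation (9)
    v_pert = list(v)
    v_pert[i] += delta
    w = sum(s[i])  # sum_j s_ij appears as a common factor here
    return w * (phi(v_pert, c) - phi(v, c))

c = [1.0, -2.0, 0.5]          # illustrative targets
v_opt = list(c)               # minimizer of phi is v = c
s = [[0.2, 0.3, 0.5]] * 3     # illustrative nonnegative weights s_ij

# at the optimum, no unilateral deviation can decrease U_i
for i in range(3):
    for delta in (-1.0, -0.1, 0.1, 1.0):
        assert delta_U(i, delta, v_opt, c, s) >= 0.0
```

Since no agent can decrease its local objective by deviating alone, the optimum is a Nash equilibrium of the designed game, matching the direction argued in the proof.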