Fig. 1. The inherent structure of a qubit neuron
where $w_i = [w_{i1}, w_{i2}, \ldots, w_{in}]^T$ is the input weight vector and $b_i$ is the threshold of the $i$th hidden node; $w_i \cdot x_j$ denotes the inner product of $w_i$ and $x_j$. The linear function is chosen here as the activation function of the output nodes.
The state of the qubit is then represented as

\[
|\phi\rangle = \cos(\theta)\,|0\rangle + \sin(\theta)\,|1\rangle
             = \cos\!\left(\frac{\pi}{2}u\right)|0\rangle + \sin\!\left(\frac{\pi}{2}u\right)|1\rangle \tag{5}
\]
When the neuron is triggered, the qubit state collapses into the state $|1\rangle$. The neuron state $z$ is the probability with which the qubit will be found in the state $|1\rangle$:
\[
z = f(\theta) = \sin^2(\theta) = \sin^2\!\left[\frac{\pi}{2}\left(w_i \cdot x_j + b_i\right)\right], \quad j = 1, 2, \ldots, N \tag{6}
\]
According to Equations (4)-(6), the output of the $i$th hidden neuron is given by

\[
\mathrm{HID}_i = \sin^2\!\left[\frac{\pi}{2}\left(w_i \cdot x_j + b_i\right)\right], \quad j = 1, 2, \ldots, N \tag{7}
\]
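To make Eqs. (5)-(7) concrete, the sketch below (a Python illustration of my own; the function name `qubit_hidden_output` is not from the paper) computes the collapse probability of a single hidden node:

```python
import numpy as np

def qubit_hidden_output(w_i, b_i, x_j):
    """Collapse probability of the i-th qubit hidden node for sample x_j.

    Implements Eqs. (5)-(7): u = w_i . x_j + b_i, theta = (pi/2) u,
    HID_i = sin^2(theta), the probability of observing the state |1>.
    """
    u = np.dot(w_i, x_j) + b_i        # linear aggregation of inputs and threshold
    theta = (np.pi / 2.0) * u         # rotation angle of the qubit phase, Eq. (5)
    return np.sin(theta) ** 2         # probability of the state |1>, Eq. (6)

# Worked check: w_i = [0.5, 0.5], x_j = [1.0, 0.0], b_i = 0 gives u = 0.5,
# so HID_i = sin^2(pi/4) ~ 0.5: the qubit collapses to |0> or |1> with equal probability.
print(qubit_hidden_output(np.array([0.5, 0.5]), 0.0, np.array([1.0, 0.0])))
```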
Finally, we obtain the network output for the $j$th sample:

\[
o_j = \sum_{i=1}^{N} \beta_i\,\mathrm{HID}_i = \sum_{i=1}^{N} \beta_i \sin^2\!\left[\frac{\pi}{2}\left(w_i \cdot x_j + b_i\right)\right], \quad j = 1, 2, \ldots, N \tag{8}
\]
where $\beta_i = [\beta_{i1}, \beta_{i2}, \ldots, \beta_{im}]^T$ is the output weight vector.
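Stacking Eq. (7) over all hidden nodes and applying the linear output activation yields the forward pass of Eq. (8). A vectorized sketch follows, where the matrix layout and the names `W`, `b`, `beta`, `X` are my own assumptions:

```python
import numpy as np

def qnn_forward(W, b, beta, X):
    """Forward pass of the qubit network, Eq. (8).

    W    : (N_hidden, n)  rows are the input weight vectors w_i
    b    : (N_hidden,)    hidden-node thresholds b_i
    beta : (N_hidden, m)  rows are the output weight vectors beta_i
    X    : (N_samples, n) rows are the input samples x_j
    Returns an (N_samples, m) matrix whose j-th row is the output o_j.
    """
    H = np.sin((np.pi / 2.0) * (X @ W.T + b)) ** 2   # H[j, i] = HID_i for x_j, Eq. (7)
    return H @ beta                                  # o_j = sum_i beta_i * HID_i, linear output
```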
3.2 ELM Algorithm for QNN
For feedforward neural networks, gradient-descent-based methods such as the back-propagation (BP) algorithm [14] and evolutionary algorithms [15] are the traditional learning rules. However, these learning methods are time-consuming. In contrast, the ELM algorithm reaches the solution directly and trains the feedforward network in far less time.
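As an illustration of how the standard ELM recipe would train this network (random, fixed hidden parameters followed by a Moore-Penrose pseudoinverse solve for the output weights), here is a minimal sketch; it follows the generic ELM procedure rather than any pseudocode given in the paper, and all names are mine:

```python
import numpy as np

def elm_train_qnn(X, T, n_hidden, seed=0):
    """Train the qubit network with the generic ELM recipe.

    1. Draw input weights W and thresholds b at random; they are never tuned.
    2. Compute the hidden-layer output matrix H from Eq. (7).
    3. Solve for beta as the minimum-norm least-squares solution pinv(H) @ T.

    X : (N_samples, n) inputs; T : (N_samples, m) targets.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(n_hidden, X.shape[1]))  # random input weights w_i
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random thresholds b_i
    H = np.sin((np.pi / 2.0) * (X @ W.T + b)) ** 2           # qubit activations, Eq. (7)
    beta = np.linalg.pinv(H) @ T                             # output weights in one step
    return W, b, beta
```

Because the hidden parameters stay fixed, the only trained quantity is beta, obtained in a single linear-algebra step; this is what makes ELM training fast compared with iterative gradient descent.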