Fig. 3.2 The network structure for the preference function
the true ranker used in the final test is defined by (3.2). It is not obvious whether the optimal preference function learned by loss minimization can really lead to the optimal ranker. To address this concern, it has been proven in [12] that the loss with respect to the ranker can be bounded by the loss with respect to the preference function, as follows:
$$
L(\sigma, y) \le \frac{\sum_{u<v}\bigl(1 + h(x_{\sigma^{-1}(u)}, x_{\sigma^{-1}(v)})\bigr) + \sum_{u,v:\,y_{u,v}=1} L(h; x_u, x_v, y_{u,v})}{\bigl|\{u,v : y_{u,v} = 1\}\bigr|} \qquad (3.3)
$$
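In words, if the preference function incurs a small pairwise loss, the ranker obtained from it cannot do much worse. As a concrete illustration, the following Python sketch (a toy construction for this setting, not code from [12]; the feature values and the preference function h are illustrative assumptions) builds a ranker by sorting with a preference function and measures both the pairwise loss of h and the loss of the induced ranking.

```python
from functools import cmp_to_key

# Toy setup: four documents with hypothetical relevance scores; the pair
# label y_{u,v} = 1 (u preferred over v) is derived from these scores.
items = [0.9, 0.2, 0.5, 0.7]
pairs = [(u, v) for u in range(4) for v in range(4)
         if items[u] > items[v]]        # pairs with y_{u,v} = 1

def h(a, b):
    """A (possibly imperfect) preference function: +1 if a is preferred."""
    return 1 if items[a] >= items[b] else -1

# Second stage: induce a ranker sigma by sorting with h, mirroring the
# idea that the ranker is derived from the learned preference function.
sigma = sorted(range(4), key=cmp_to_key(lambda a, b: -h(a, b)))

# Pairwise 0/1 loss of the preference function on the preferred pairs.
pref_loss = sum(h(u, v) != 1 for (u, v) in pairs) / len(pairs)

# Loss of the induced ranker: fraction of preferred pairs ordered wrongly.
pos = {doc: r for r, doc in enumerate(sigma)}
rank_loss = sum(pos[u] > pos[v] for (u, v) in pairs) / len(pairs)

print(sigma, pref_loss, rank_loss)      # e.g. [0, 3, 2, 1] 0.0 0.0
```

Making h noisier raises pref_loss, and the bound in (3.3) guarantees that rank_loss cannot grow out of proportion to it.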
3.2.2 SortNet: Neural Network-Based Sorting Algorithm
The goal of [28] is also to learn a preference function from training data. Specifically, a neural network with the structure shown in Fig. 3.2 is used as the preference function.
As can be seen from Fig. 3.2, the network has two outputs, P(x_u ≻ x_v) and P(x_u ≺ x_v). Since the network approximates a preference function, the following constraint on these two outputs is enforced:
$$
P(x_u \succ x_v) = P(x_v \prec x_u). \qquad (3.4)
$$
Specifically, these two outputs are generated as follows:
$$
P(x_u \succ x_v) = \operatorname{sigmoid}\Bigl(\sum_{i,i'}\bigl(w_{i,\succ}\, h_i(x_u, x_v) + w_{i',\succ}\, h_{i'}(x_u, x_v)\bigr) + b_{\succ}\Bigr)
$$
$$
= \operatorname{sigmoid}\Bigl(\sum_{i,i'}\bigl(w_{i,\prec}\, h_i(x_v, x_u) + w_{i',\prec}\, h_{i'}(x_v, x_u)\bigr) + b_{\prec}\Bigr) = P(x_v \prec x_u). \qquad (3.5)
$$
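The equality in (3.5) does not have to be learned: it is guaranteed by sharing weights between each hidden unit i and its dual i′ and by using a common bias for the two outputs. The NumPy sketch below shows one way to realize such a symmetric comparator; it is an illustrative construction in the spirit of (3.4) and (3.5), not the exact parameterization of [28], and all names (A, B, w, w_dual, and the tanh hidden activation) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 4                      # feature dimension, number of dual unit pairs

A = rng.normal(size=(k, d))      # hidden weights applied to the first input
B = rng.normal(size=(k, d))      # hidden weights applied to the second input
c = rng.normal(size=k)           # hidden biases, shared within each dual pair
w = rng.normal(size=k)           # output weights w_{i, succ}
w_dual = rng.normal(size=k)      # output weights w_{i', succ}
b = rng.normal()                 # output bias, shared by both outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def comparator(x_u, x_v):
    """Return (P(x_u > x_v), P(x_u < x_v)) for one document pair."""
    h = np.tanh(A @ x_u + B @ x_v + c)    # hidden units h_i(x_u, x_v)
    hd = np.tanh(B @ x_u + A @ x_v + c)   # dual units: h_{i'}(x_u, x_v) = h_i(x_v, x_u)
    p_succ = sigmoid(w @ h + w_dual @ hd + b)
    # The "<" output reuses the same weights with the roles of each dual
    # pair swapped, which makes the constraint (3.4) hold by construction.
    p_prec = sigmoid(w_dual @ h + w @ hd + b)
    return p_succ, p_prec

x_u, x_v = rng.normal(size=d), rng.normal(size=d)
p1, _ = comparator(x_u, x_v)
_, p2 = comparator(x_v, x_u)
print(np.isclose(p1, p2))        # True: P(x_u > x_v) equals P(x_v < x_u)
```

Because the symmetry holds for any parameter values, training only has to fit the preference judgments themselves, never the constraint.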