That is, the prior of β_{j,m} is N(0, τ_{j,m}) only when δ_j = 1 and η_{j,m} = 1, i.e., when X_j is in S and is active for the m-th response Y_m; otherwise β_{j,m} is set to zero. In fact, this coefficient prior has also been used in Chen et al. (2014) [3]. For the prior on the noise variance σ², as usual we choose the conjugate inverse-gamma prior σ² ∼ IG(a/2, b/2). Finally, in the prior distribution, (δ_j, η_{j,m}, β_{j,m}), j = 1, ..., p, are assumed to be independent, and, given δ_j = 1, (η_{j,m}, β_{j,m}), m = 1, ..., M, are assumed to be independent of each other as well.
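To make the hierarchy concrete, here is a minimal NumPy sketch of drawing one configuration from this prior. The dimensions, the hyperparameter values, and the prior inclusion probability of δ_j (called theta below) are illustrative assumptions, not values taken from the text; ρ_{j,m} is used as P(η_{j,m} = 0 | δ_j = 1), matching the likelihood-ratio terms derived later in this section.

    import numpy as np

    rng = np.random.default_rng(0)

    p, M = 5, 3                 # number of candidate variables / responses (illustrative)
    a, b = 2.0, 2.0             # hyperparameters of IG(a/2, b/2) (illustrative)
    theta = 0.2                 # hypothetical P(delta_j = 1); not given in this excerpt
    rho = np.full((p, M), 0.5)  # rho_{j,m} = P(eta_{j,m} = 0 | delta_j = 1)
    tau = np.full((p, M), 1.0)  # slab variances tau_{j,m} (illustrative)

    # sigma^2 ~ IG(a/2, b/2): draw Gamma(shape = a/2, rate = b/2) and invert
    sigma2 = 1.0 / rng.gamma(shape=a / 2.0, scale=2.0 / b)

    delta = rng.random(p) < theta                        # delta_j = 1 iff X_j is in S
    eta = (rng.random((p, M)) > rho) & delta[:, None]    # eta_{j,m} can be 1 only if delta_j = 1
    # beta_{j,m} ~ N(0, tau_{j,m}) when delta_j = eta_{j,m} = 1, and beta_{j,m} = 0 otherwise
    beta = np.where(eta, rng.normal(0.0, np.sqrt(tau)), 0.0)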
Based on this prior set-up, we can use a Gibbs sampler to draw posterior samples of the indicators and the coefficients. As in the group-wise Gibbs sampler of Algorithm 1, the key step is to compute the likelihood ratios of the indicators in the first and second sets, respectively; the posterior probabilities of δ_j = 1 and η_{j,m} = 1 can then be computed accordingly, and we can sample these indicators from the corresponding posterior Bernoulli distributions.
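As an illustration of that last step, the sketch below turns a likelihood ratio (such as Z_j, derived next) into a posterior Bernoulli draw via the posterior odds; the prior inclusion probability passed in is an assumed hyperparameter that this excerpt does not specify.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_indicator(likelihood_ratio, prior_prob):
        """Draw one indicator (e.g. delta_j) from its conditional posterior.

        likelihood_ratio -- e.g. Z_j = P(Y | delta_j = 1, rest) / P(Y | delta_j = 0, rest)
        prior_prob       -- assumed prior P(indicator = 1); not specified in this excerpt
        """
        odds = likelihood_ratio * prior_prob / (1.0 - prior_prob)  # posterior odds of 1 vs 0
        post_prob = odds / (1.0 + odds)                            # P(indicator = 1 | everything else)
        return int(rng.random() < post_prob)

The same routine applies to η_{j,m} with its own likelihood ratio.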
First, consider the multi-response model in Eq. (1). Based on the assumption of independence between Y_1, ..., Y_M, the likelihood ratio Z_j of the variable X_j is represented as
\[
Z_j = \frac{P\!\left(Y \mid \delta_j=1,\ \delta_{-j},\ \{\beta_{-j,m},\, m=1,\cdots,M\},\ \sigma^{2}\right)}
           {P\!\left(Y \mid \delta_j=0,\ \delta_{-j},\ \{\beta_{-j,m},\, m=1,\cdots,M\},\ \sigma^{2}\right)}
    = \prod_{m=1}^{M}\frac{P\!\left(Y_m \mid \delta_j=1,\ \delta_{-j},\ \beta_{-j,m}\right)}
                          {P\!\left(Y_m \mid \delta_j=0,\ \delta_{-j},\ \beta_{-j,m}\right)}.
\]
Let k = {(k_1, ..., k_M) : k_m = 0 or 1, m = 1, ..., M} denote the set of all possible combinations of (η_{j,1}, ..., η_{j,M}). It is easy to show that
\[
Z_j = \sum_{k=(k_1,\cdots,k_M)} \left(\prod_{m=1}^{M} b_{j,k_m}\right),
\]
where
\[
b_{j,k_m} = \frac{\int P\!\left(Y_m \mid \beta_{j,m},\ \eta_{j,m}=k_m,\ \delta_j=1,\ \delta_{-j},\ \beta_{-j,m}\right)
                   P\!\left(\beta_{j,m},\ \eta_{j,m}=k_m \mid \delta_j=1\right)\, d\beta_{j,m}}
                 {P\!\left(Y_m \mid \delta_j=0,\ \delta_{-j},\ \beta_{-j,m}\right)}.
\]
If k_m = 0, then we can simply obtain b_{j,k_m=0} = ρ_{j,m}. When k_m = 1,
\[
\begin{aligned}
b_{j,k_m=1}
&= (1-\rho_{j,m}) \int \frac{1}{\sqrt{2\pi\tau_{j,m}}}
   \exp\!\left(-\frac{\beta_{j,m}^{2}}{2\tau_{j,m}}\right)
   \frac{\exp\!\left(-\frac{1}{2\sigma^{2}}\,(R_{j,m}-\beta_{j,m}X_j)^{\top}(R_{j,m}-\beta_{j,m}X_j)\right)}
        {\exp\!\left(-\frac{1}{2\sigma^{2}}\,R_{j,m}^{\top}R_{j,m}\right)}\, d\beta_{j,m} \\
&= (1-\rho_{j,m}) \times \frac{\sigma}{\sqrt{X_j^{\top}X_j\,\tau_{j,m}+\sigma^{2}}}
   \exp\!\left(\frac{r_{j,m}}{2\sigma^{2}}\right),
\end{aligned}
\]
where r_{j,m} = (R_{j,m}ᵀX_j)² τ_{j,m} / (σ² + X_jᵀX_j τ_{j,m}).
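A small NumPy sketch of these two closed forms for a single response m; the function and argument names are illustrative, and R_{j,m} is assumed to be available as the residual vector used above.

    import numpy as np

    def b_jk(X_j, R_jm, rho_jm, tau_jm, sigma2, k_m):
        """b_{j,k_m} for one response m, using the closed forms derived above."""
        if k_m == 0:
            return rho_jm                                   # b_{j,k_m=0} = rho_{j,m}
        xtx = float(X_j @ X_j)                              # X_j' X_j
        s2 = sigma2 + xtx * tau_jm                          # sigma^2 + X_j'X_j tau_{j,m}
        r_jm = (float(R_jm @ X_j) ** 2) * tau_jm / s2       # r_{j,m}
        # b_{j,k_m=1} = (1 - rho) * sigma / sqrt(s2) * exp(r_{j,m} / (2 sigma^2))
        return (1.0 - rho_jm) * np.sqrt(sigma2 / s2) * np.exp(r_jm / (2.0 * sigma2))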
Thus the likelihood ratio Z_j of the indicator δ_j can be represented as
\[
Z_j = \sum_{k=(k_1,\cdots,k_M)} \prod_{m=1}^{M}
\left[(1-\rho_{j,m}) \times \frac{\sigma}{\sqrt{\sigma^{2}+X_j^{\top}X_j\,\tau_{j,m}}}
\exp\!\left(\frac{r_{j,m}}{2\sigma^{2}}\right)\right]^{k_m}
\cdot \rho_{j,m}^{\,(1-k_m)}.
\tag{10}
\]
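A direct sketch of Eq. (10) that enumerates all 2^M combinations k; the array layout (R_j stacking R_{j,1}, ..., R_{j,M} as columns) and the names are assumptions for illustration.

    import itertools
    import numpy as np

    def likelihood_ratio_Z(X_j, R_j, rho_j, tau_j, sigma2):
        """Z_j from Eq. (10): sum over k in {0,1}^M of prod_m b_{j,k_m}.

        X_j          -- (n,) predictor column
        R_j          -- (n, M) matrix whose m-th column is R_{j,m}
        rho_j, tau_j -- length-M vectors of rho_{j,m} and tau_{j,m}
        """
        M = R_j.shape[1]
        xtx = float(X_j @ X_j)
        s2 = sigma2 + xtx * tau_j                          # sigma^2 + X_j'X_j tau_{j,m}, length M
        r_j = (R_j.T @ X_j) ** 2 * tau_j / s2              # r_{j,m}, length M
        b1 = (1.0 - rho_j) * np.sqrt(sigma2 / s2) * np.exp(r_j / (2.0 * sigma2))
        b0 = rho_j
        Z = 0.0
        for k in itertools.product((0, 1), repeat=M):      # all combinations (k_1, ..., k_M)
            Z += np.prod(np.where(np.array(k) == 1, b1, b0))
        return Z

Since the sum factorises over m, Z_j also equals ∏_{m=1}^{M} (b_{j,k_m=0} + b_{j,k_m=1}); the enumeration above simply mirrors Eq. (10) as written.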
 