probabilities of η_{8,6}, η_{9,5}, η_{9,6}, and η_{9,9} are still larger than 0.5. Thus the proposed two-layer Gibbs sampler successfully recovers the supports. The posterior means of the coefficients for the selected variables are also shown in Table 2.
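In terms of implementation, this support recovery step is simply a 0.5 thresholding of the estimated inclusion probabilities. The following is a minimal sketch, not the authors' code: the array names prob_delta and prob_eta are assumed placeholders for the Monte Carlo estimates of P(δ_j = 1 | Y) and P(η_{j,m} = 1 | δ_j = 1, Y).

```python
import numpy as np

def median_probability_selection(prob_delta, prob_eta, threshold=0.5):
    """Apply the median probability criterion to two-layer inclusion probabilities.

    prob_delta : shape (p,), estimates of P(delta_j = 1 | Y)
    prob_eta   : shape (p, m), estimates of P(eta_{j,m} = 1 | delta_j = 1, Y)
    """
    # Keep every group whose posterior inclusion probability exceeds the threshold
    selected_groups = np.where(prob_delta > threshold)[0]
    # Within each selected group, keep the coefficients whose conditional
    # inclusion probability also exceeds the threshold
    selected_coefs = {j: np.where(prob_eta[j] > threshold)[0]
                      for j in selected_groups}
    return selected_groups, selected_coefs
```

Under this rule, the recovered support consists of the groups j with prob_delta[j] > 0.5 and, within each such group, the coefficients m with prob_eta[j, m] > 0.5.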
Fig. 1. The posterior probabilities of δ_j: P(δ_j = 1 | Y) estimated by the two-layer Gibbs sampler in the simulated example (x-axis: variable index, 1 to 200; y-axis: posterior probability of δ_j)
Fig. 2. The posterior probabilities of η_{j,m}: P(η_{j,m} = 1 | δ_j = 1, Y) obtained by the two-layer Gibbs sampler in the simulated example (curves shown for η_{7,m}, η_{8,m}, η_{9,m}, η_{11,m}, η_{12,m}, η_{13,m}; y-axis: posterior probability of η_{j,m})
To compare with the other approaches, the group-wise Gibbs sampler, Algorithm 1, is first applied to the same simulation data. In the group-wise Gibbs sampler, only the indicator variables δ_j, j = 1, ..., p, are adopted in the model. To implement this algorithm, the prior parameter settings are chosen to be the same as those in the two-layer Gibbs sampler, and the median probability criterion is also adopted for the posterior variable selection.
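For either sampler, the posterior inclusion probabilities reported in Figures 1 and 2 can be estimated as Monte Carlo averages of the indicator draws after burn-in. Below is a minimal sketch of this estimation step, assuming the draws are stored as 0/1 arrays named delta_draws and eta_draws (hypothetical names; for the group-wise sampler only delta_draws is available).

```python
import numpy as np

def inclusion_probabilities(delta_draws, eta_draws=None, burn_in=0):
    """Estimate posterior inclusion probabilities from 0/1 Gibbs draws.

    delta_draws : shape (T, p), draws of delta_j over T iterations
    eta_draws   : shape (T, p, m), draws of eta_{j,m}; None for the
                  group-wise sampler, which uses only delta_j
    burn_in     : number of initial iterations to discard
    """
    delta = delta_draws[burn_in:]
    prob_delta = delta.mean(axis=0)              # estimates P(delta_j = 1 | Y)
    if eta_draws is None:
        return prob_delta
    eta = eta_draws[burn_in:]
    # Estimate P(eta_{j,m} = 1 | delta_j = 1, Y): average the eta draws only
    # over the iterations in which delta_j = 1
    counts = np.maximum(delta.sum(axis=0), 1)    # guard against division by zero
    prob_eta = (eta * delta[:, :, None]).sum(axis=0) / counts[:, None]
    return prob_delta, prob_eta
```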
 