Algorithm 2.2 Hill-Climbing Algorithm
1. Choose a network structure G over V , usually (but not necessarily) empty.
2. Compute the score of G, denoted as Score_G = Score(G).
3. Set maxscore = Score_G.
4. Repeat the following steps as long as maxscore increases:
   a. for every possible arc addition, deletion, or reversal not resulting in a cyclic network:
      i. compute the score of the modified network G*, Score_G* = Score(G*);
      ii. if Score_G* > Score_G, set G = G* and Score_G = Score_G*.
   b. update maxscore with the new value of Score_G.
5. Return the directed acyclic graph G .
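The steps above translate almost directly into code. The fragment below is a minimal, illustrative Python sketch rather than a reference implementation: it assumes a network structure is represented simply as a set of directed arcs, that the score is an arbitrary callable supplied by the caller (in practice a decomposable score such as BIC or BDe computed from data), and that each pass takes the single best improving arc change, one common reading of the greedy step. The helper names (is_acyclic, neighbours, hill_climb) are choices made for this sketch.

```python
# A minimal sketch of Algorithm 2.2; assumptions are noted in the text above.
from itertools import permutations


def is_acyclic(nodes, arcs):
    """Return True if the directed graph defined by `arcs` has no cycle."""
    adjacency = {v: [] for v in nodes}
    for a, b in arcs:
        adjacency[a].append(b)
    visited, on_stack = set(), set()

    def has_cycle(v):
        visited.add(v)
        on_stack.add(v)
        for w in adjacency[v]:
            if w in on_stack or (w not in visited and has_cycle(w)):
                return True
        on_stack.discard(v)
        return False

    return not any(has_cycle(v) for v in nodes if v not in visited)


def neighbours(nodes, arcs):
    """Yield every arc set reachable by one addition, deletion or reversal
    that does not result in a cyclic network (step 4.a)."""
    for a, b in permutations(nodes, 2):
        if (a, b) not in arcs and (b, a) not in arcs:
            added = arcs | {(a, b)}
            if is_acyclic(nodes, added):
                yield added                              # arc addition
    for a, b in list(arcs):
        yield arcs - {(a, b)}                            # arc deletion
        reversed_arcs = (arcs - {(a, b)}) | {(b, a)}
        if is_acyclic(nodes, reversed_arcs):
            yield reversed_arcs                          # arc reversal


def hill_climb(nodes, score, start=frozenset()):
    """Start from `start` (empty by default) and keep applying the best
    single-arc change until the score can no longer be improved."""
    current = frozenset(start)
    current_score = score(current)
    while True:
        best_candidate, best_score = None, current_score
        for candidate in neighbours(nodes, set(current)):
            candidate_score = score(frozenset(candidate))
            if candidate_score > best_score:
                best_candidate, best_score = frozenset(candidate), candidate_score
        if best_candidate is None:       # maxscore no longer increases (step 4)
            return current, current_score                # step 5
        current, current_score = best_candidate, best_score


# Toy usage with a hypothetical score that rewards the arc A -> B and
# penalises the number of arcs; a real score would be estimated from data.
if __name__ == "__main__":
    nodes = ["A", "B", "C"]
    toy_score = lambda arcs: (2.0 if ("A", "B") in arcs else 0.0) - 0.5 * len(arcs)
    dag, best = hill_climb(nodes, toy_score)
    print(sorted(dag), best)             # [('A', 'B')] 1.5
```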
Further improvements are possible by leveraging the symmetry of Markov blankets implied in Definition 2.3 and shown in Sect. 2.3.
2.2.2 Score-Based Structure Learning Algorithms
Score-based structure learning algorithms (also known as search-and-score algorithms) represent the application of general heuristic optimization techniques to the problem of learning the structure of a Bayesian network. Each candidate network is assigned a network score reflecting its goodness of fit, which the algorithm then attempts to maximize. Some examples from this class of algorithms are the following:
Greedy search algorithms such as hill-climbing with random restarts or tabu search (Bouckaert, 1995). These algorithms explore the search space starting from a network structure (usually the empty graph) and adding, deleting, or reversing one arc at a time until the score can no longer be improved (see Algorithm 2.2).
Genetic algorithms, which mimic natural evolution through the iterative selection of the "fittest" models and the hybridization of their characteristics (Larrañaga et al., 1997). In this case the search space is explored through two stochastic operators: crossover, which combines the structures of two networks, and mutation, which introduces random alterations (a sketch of both operators is given after this list).
Simulated annealing (Bouckaert, 1995). This algorithm performs a stochastic local search by accepting changes that increase the network score and, at the same time, allowing changes that decrease it with a probability inversely proportional to the score decrease (see the acceptance rule sketched after this list).
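To make the crossover and mutation operators mentioned above concrete, the fragment below shows one possible, deliberately naive implementation on structures represented as sets of arcs. The 0.5 inheritance probability and the mutation rate are illustrative assumptions; a real genetic algorithm would also repair any cycles introduced and select parents according to their network scores.

```python
# Illustrative crossover and mutation operators on arc-set structures.
import random


def crossover(parent_a, parent_b):
    """Combine two structures: each arc appearing in either parent is
    inherited independently with probability 0.5 (an illustrative choice)."""
    return {arc for arc in parent_a | parent_b if random.random() < 0.5}


def mutate(arcs, nodes, rate=0.05):
    """Introduce random alterations: each possible arc is flipped
    (added if absent, removed if present) with a small probability."""
    mutated = set(arcs)
    for a in nodes:
        for b in nodes:
            if a != b and random.random() < rate:
                mutated ^= {(a, b)}      # toggle the arc's presence
    return mutated
```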
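For simulated annealing, the key ingredient is the acceptance rule. The text only states that score-decreasing moves are accepted with a probability tied to the size of the decrease; the Metropolis-style criterion below is one common concrete choice, with the temperature a cooling parameter assumed by this sketch and gradually lowered during the search.

```python
# One common acceptance rule for simulated annealing (a sketch, not the
# specific rule used by any particular implementation).
import math
import random


def accept_move(current_score, candidate_score, temperature):
    """Always accept improvements; accept a score decrease with a probability
    that shrinks with the size of the decrease and as the temperature drops."""
    delta = candidate_score - current_score
    if delta >= 0:
        return True
    return random.random() < math.exp(delta / temperature)
```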
A comprehensive review of these heuristics, as well as related approaches from the field of artificial intelligence, is provided in Russell and Norvig (2009).
 