In principle, the structure of SMLNs could be learned in a similar fashion as for MLNs. This task could be made easier by imposing further restrictions, such as the Markovian assumption, or through the use of declarative bias, and can be performed with the structure learning algorithms proposed in [2, 3]. In this chapter, we assume the structure of the model to be fixed and learn optimal parameters by discriminative training of SMLNs for the problem of protein fold recognition from sequences of secondary structure.
4.5 MAP Inference Using Iterated Robust Tabu Search
In logic, one of the main problems is determining whether a KB is satisfiable (the SAT problem), i.e., whether there is a truth assignment to the ground atoms that makes the KB true. This problem is NP-complete. However, stochastic local search (SLS) methods have made great progress toward efficiently solving SAT problems with hundreds of thousands of variables in a few minutes. The optimization variant of SAT is MAX-SAT, where the problem is to find a truth assignment that maximizes the number of satisfied clauses (unweighted MAX-SAT) or, if clauses have associated weights, the total weight of the satisfied clauses (weighted MAX-SAT). Both unweighted and weighted MAX-SAT are NP-hard. First-order problems can also be solved successfully with SLS methods by first propositionalizing them and then applying a SAT solver.
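To make the weighted MAX-SAT objective concrete, the following Python sketch evaluates a truth assignment against a set of weighted clauses and returns the total weight of the satisfied clauses. The clause representation, function names, and example weights are illustrative assumptions, not part of the chapter's formulation.

from typing import Dict, List, Tuple

# A clause is a list of literals; a literal is (variable name, required truth value).
Clause = List[Tuple[str, bool]]

def clause_satisfied(clause: Clause, assignment: Dict[str, bool]) -> bool:
    # A clause is satisfied if at least one of its literals is true under the assignment.
    return any(assignment[var] == sign for var, sign in clause)

def weighted_maxsat_score(clauses: List[Tuple[float, Clause]],
                          assignment: Dict[str, bool]) -> float:
    # Total weight of satisfied clauses: the quantity weighted MAX-SAT maximizes.
    return sum(weight for weight, clause in clauses
               if clause_satisfied(clause, assignment))

# Toy example with two weighted clauses over variables x and y.
clauses = [
    (2.0, [("x", True), ("y", False)]),  # x OR NOT y, weight 2.0
    (1.5, [("y", True)]),                # y, weight 1.5
]
print(weighted_maxsat_score(clauses, {"x": True, "y": True}))    # 3.5
print(weighted_maxsat_score(clauses, {"x": False, "y": False}))  # 2.0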
Some of the best-performing SLS algorithms for MAX-SAT are tabu search (TS) algorithms, dynamic local search, and iterated local search (ILS). Here, we describe a combination of a variant of TS with ILS to build a hybrid SAT solver that we will use for inference and learning in SMLNs.
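Before describing that hybrid, the following sketch illustrates a single generic tabu-search step for weighted MAX-SAT: the best non-tabu variable flip is taken, recently flipped variables are temporarily forbidden, and an aspiration criterion overrides the tabu status when a flip would improve on the best score found so far. It reuses weighted_maxsat_score from the previous sketch; the tabu tenure and aspiration rule shown here are common choices, not the specific robust tabu search settings used in this chapter.

import random

def tabu_step(clauses, assignment, tabu_until, step, best_score, tenure=10):
    # Evaluate every single-variable flip and keep the best one that is not tabu,
    # or that satisfies the aspiration criterion (improves on the best score so far).
    best_var, best_val = None, float("-inf")
    for var in assignment:
        assignment[var] = not assignment[var]      # tentatively flip
        score = weighted_maxsat_score(clauses, assignment)
        assignment[var] = not assignment[var]      # undo the flip
        allowed = tabu_until.get(var, 0) <= step or score > best_score
        if allowed and score > best_val:
            best_var, best_val = var, score
    if best_var is None:                           # every move is tabu: pick one at random
        best_var = random.choice(list(assignment))
    assignment[best_var] = not assignment[best_var]
    tabu_until[best_var] = step + tenure           # forbid re-flipping for `tenure` steps
    return assignment, tabu_until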
4.5.1 Iterated Local Search and Robust Tabu Search
Many widely known and high-performance local search algorithms make use of randomized choices in generating or selecting candidate solutions for a given combinatorial problem instance. These algorithms are called SLS algorithms [11] and represent one of the most successful and widely used approaches for solving hard combinatorial problems. Many “simple” SLS methods are derived from other search methods simply by randomizing the selection of candidates during the search, such as randomized iterative improvement (RII) and uninformed random walk. Many other SLS methods combine “simple” SLS methods to exploit the strengths of each during the search; these are known as hybrid SLS methods [11]. ILS is one such metaheuristic, since it can easily be combined with other SLS methods.
One of the simplest and most intuitive ideas for addressing the fundamental issue
of escaping local optima is to use two types of SLS steps: one for reaching local op-
tima as efficiently as possible, and the other for effectively escaping local optima.
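The generic ILS scheme built on these two kinds of steps can be sketched as follows: a local search phase drives the current assignment to a local optimum, a perturbation phase (here, a few random flips) kicks it out again, and an acceptance criterion decides which local optimum the search continues from. The perturbation strength, acceptance rule, and function signature below are illustrative assumptions, not the specific algorithm developed later in the chapter.

import random

def iterated_local_search(clauses, assignment, local_search, score,
                          n_iterations=100, perturbation_strength=3):
    # Reach a first local optimum with the subsidiary local search procedure.
    current = local_search(clauses, assignment)
    best = dict(current)
    for _ in range(n_iterations):
        # Perturbation: flip a few randomly chosen variables to escape the local optimum.
        perturbed = dict(current)
        k = min(perturbation_strength, len(perturbed))
        for var in random.sample(list(perturbed), k):
            perturbed[var] = not perturbed[var]
        # Local search again, then decide which local optimum to continue from.
        candidate = local_search(clauses, perturbed)
        if score(clauses, candidate) >= score(clauses, current):
            current = candidate
        if score(clauses, current) > score(clauses, best):
            best = dict(current)
    return best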