sub-structures in a given graph, has been widely investigated since 1973. The basic maximal clique enumeration algorithm, the BK algorithm [7] proposed by Bron and Kerbosch, was first published in 1973. Since then, many algorithms have been developed to solve this classic problem from different perspectives.
In general, these algorithms can be classified into two groups. Algorithms in the first group use depth-first search and adopt specific pruning policies to improve efficiency. All possible combinations of vertices in the network constitute a search tree, with each node corresponding to a candidate maximal clique. The basic idea is to treat the process of enumerating all the maximal cliques as a depth-first traversal of this tree. The algorithms of Bron, Tsukiyama [8], and Makino [9] are typical of this kind.
Bron's BK algorithm maintains three dynamically changing sets: COMPSUB, a global set containing the currently growing clique; CANDIDATES, a local set holding all vertices that may still be added to extend the current COMPSUB; and NOT, a local set containing all vertices that have previously served as extensions of the current COMPSUB and must now be excluded to avoid reporting duplicate cliques. Each candidate maximal clique corresponds to a node in the search tree, and a function called EXTEND traverses this search tree recursively in depth-first order.
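The recursion described above can be sketched as follows. This is a minimal Python rendering of the basic (non-pivoting) BK scheme, with compsub, candidates, and not_set standing in for the three sets; the names and the adjacency-dictionary graph representation are assumptions for the sketch, not the original implementation.

```python
def bron_kerbosch(compsub, candidates, not_set, adj, output):
    """Collect every maximal clique reachable by extending compsub.

    adj maps each vertex to the set of its neighbors.
    """
    # A clique is maximal exactly when nothing can extend it
    # (candidates empty) and no excluded vertex could extend it either
    # (not_set empty, otherwise this clique was reported elsewhere).
    if not candidates and not not_set:
        output.append(set(compsub))
        return
    for v in list(candidates):
        # Extend the clique with v; both sets shrink to v's neighbors.
        bron_kerbosch(compsub | {v},
                      candidates & adj[v],
                      not_set & adj[v],
                      adj, output)
        # v has now served as an extension: move it from the
        # candidate set into the excluded set.
        candidates = candidates - {v}
        not_set = not_set | {v}
```

The original paper's refined version additionally picks a pivot vertex to cut down the branching; the plain form above is enough to show how the three sets interact.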
Makino's algorithm [9] develops a search tree by defining a child-parent relationship between pairs of maximal cliques and traverses this search tree to list every maximal clique. In addition, given a lower bound f, this algorithm divides graph G into two sub-graphs, V1 and V2: V1 consists of the vertices whose degree is larger than f, and the remaining vertices constitute V2. It first finds all maximal cliques in V1 and stores them in a set Q. It then finds all maximal cliques in V2, while eliminating those cliques in Q that are already contained in some maximal clique of V2. This strategy improves the efficiency of the algorithm on sparse graphs. However, there are two major drawbacks. First, the algorithm involves too many expensive set operations: if the average size of the maximal cliques is large, its efficiency decreases. Second, the selection of f depends on each specific problem and thus also has a great impact on the resulting efficiency.
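The degree-based split that the partition step relies on is simple to state in code. The fragment below is a hypothetical helper illustrating only that preprocessing step, not Makino's actual algorithm; the adjacency-dictionary representation is an assumption.

```python
def split_by_degree(adj, f):
    """Partition vertices into V1 (degree > f) and V2 (the rest).

    adj maps each vertex to the set of its neighbors. This is only
    a sketch of the degree split, not Makino's full procedure.
    """
    v1 = {v for v, nbrs in adj.items() if len(nbrs) > f}
    v2 = set(adj) - v1
    return v1, v2
```

As the text notes, the whole scheme is sensitive to the choice of f: a threshold that isolates too few (or too many) high-degree vertices shifts most of the work into one of the two sub-problems.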
With respect to the second group, a join-and-pruning strategy is used. Kose's algorithm [10] belongs to this class. It takes advantage of the fact that every clique of size k (k ≥ 2) is comprised of two cliques of size k − 1 that share k − 2 vertices. The basic principle is to build all possible 3-cliques by joining every two 2-cliques. Any 2-clique that cannot become a component of a 3-clique is declared maximal and output. This procedure is repeated in increasing order of clique size until no larger clique can be built. Nevertheless, this algorithm also has several drawbacks. On the one hand, building the candidate cliques in this join-and-pruning manner requires storing both the (k − 1)-cliques and the k-cliques, which consumes too much space when the problem size is large. On the other hand, every time a k-clique is formed, all (k − 1)-cliques contained within the new clique must be marked as used in order to eliminate duplicates. This bookkeeping can also occupy space on the order of 2^n in the worst case.
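The level-by-level joining can be sketched as follows. This is a simplified Python sketch of the join-and-pruning idea, not Kose's exact algorithm; it assumes an adjacency-dictionary graph with comparable vertex labels and ignores isolated vertices (singleton maximal cliques).

```python
from itertools import combinations

def join_and_prune(adj):
    """Enumerate maximal cliques of size >= 2 by joining levels.

    adj maps each vertex to the set of its neighbors. Level k holds
    all k-cliques of the graph; the starting level is the set of
    2-cliques, i.e. the edges.
    """
    maximal = []
    level = {frozenset((u, v)) for u in adj for v in adj[u] if u < v}
    while level:
        next_level = set()
        used = set()  # cliques in this level absorbed into a larger one
        for a, b in combinations(level, 2):
            shared = a & b
            if len(shared) == len(a) - 1:  # share all but one vertex each
                merged = a | b
                u, v = tuple(merged - shared)
                # merged is a clique iff the two non-shared endpoints
                # are themselves adjacent.
                if v in adj[u]:
                    next_level.add(merged)
                    used.add(a)
                    used.add(b)
        # Cliques not contained in any larger clique are maximal.
        maximal.extend(c for c in level if c not in used)
        level = next_level
    return maximal
```

The pairwise join over each level and the retained `used` bookkeeping make the space and time costs discussed above visible directly in the sketch.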
 