to the parent nodes in the spanning tree. Each parent node in the
spanning tree then builds a new classifier by combining all the
classifiers it has received from its children and by subsampling a
portion of the dataset with the same proportion of negative and
positive examples. These intermediate nodes then send their
classifiers further upstream, and the base station finally builds a
single classifier which represents all the data over all the nodes.
The paper discusses the trade-off that a short but wide spanning tree
increases the communication cost of sending the classifiers to the
next node but improves the overall accuracy due to the smaller number
of hops, while a tall but narrow tree suffers from the opposite
effect. Finally,
the paper presents extensive experimental results on a simulated
wireless testbed showing that this method offers better accuracy and
lower energy consumption than a baseline ensemble method in which
meta-classifiers are learned independently at each node and (majority)
voting is applied during the test phase.
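To make this bottom-up combination concrete, the following Python
sketch mimics the aggregation along the spanning tree. It is our own
illustration rather than the paper's algorithm: the Node structure,
the majority-label stand-in for local training, and the vote-based
combination rule are all simplifying assumptions.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    """A sensor node with local (features, label) data and tree children."""
    data: list
    children: list = field(default_factory=list)

def balanced_subsample(data, fraction=0.5):
    """Draw equally many positive and negative examples.

    Assumes every node sees at least one example of each class.
    """
    pos = [d for d in data if d[1] == 1]
    neg = [d for d in data if d[1] == 0]
    n = max(1, int(min(len(pos), len(neg)) * fraction))
    return random.sample(pos, n) + random.sample(neg, n)

def train(data):
    """Stand-in for local training: a classifier predicting the majority label."""
    majority = round(sum(label for _, label in data) / len(data))
    return lambda features: majority

def aggregate(node):
    """Recursively combine classifiers up the spanning tree toward the root."""
    child_models = [aggregate(child) for child in node.children]
    if not child_models:
        return train(node.data)  # leaf: purely local classifier
    # An internal node combines its children's classifiers with a model
    # trained on a class-balanced subsample of its own data, resolving
    # disagreements by majority vote (one simple way to "combine").
    models = child_models + [train(balanced_subsample(node.data))]
    def combined(features):
        votes = sum(model(features) for model in models)
        return int(2 * votes >= len(models))
    return combined
```

At the root, the classifier returned by this recursion plays the role
of the base station's single model representing all the data.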
The above algorithm suffers from one major drawback: it requires
synchronization at every time step and hence can be expensive to
deploy in the next generation of large sensor networks. In a recent
paper,
Bhaduri et al. [9] have proposed a decision tree learning algorithm which
can build the same tree on all the nodes in an asynchronous fashion. The
main building block of the algorithm is the scalable distributed majority
voting protocol first discussed in the paper by Wolff and Schuster [59].
Given a pair of real numbers $a_i$ and $b_i$ at each node $i$, this
algorithm decides whether $\sum_i a_i > \sum_i b_i$ in a very
communication-efficient fashion, without requiring a node to exchange
messages every time $a_i$ and $b_i$ change.
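Stripped of its distributed mechanics, the quantity the protocol
agrees on is simply the sign of a global sum. The sketch below is a
centralized paraphrase of that decision, not the protocol itself; its
docstring notes, informally, where the communication savings come
from.

```python
def majority_vote_decision(pairs):
    """Centralized reference for the distributed majority vote:
    decide whether sum(a_i) > sum(b_i), i.e. whether the global
    sum of the deltas (a_i - b_i) is positive.

    The distributed protocol reaches the same decision without
    gathering the pairs in one place: a node sends a message only
    when new information could flip its local view of the sign,
    which is what makes the protocol communication-efficient.
    """
    return sum(a - b for a, b in pairs) > 0
```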
Based on this protocol, the authors first show that the comparison of
two features can be accomplished by concurrently running four majority
votes. The next step is to choose the best attribute out of k
(1-out-of-k), and this can be easily accomplished by running the
previous comparison for each attribute pair. Finally, the tree can be
built asynchronously by performing this 1-out-of-k comparison at each
level of the tree. Notably, on convergence the algorithm is guaranteed
to reach the globally correct solution, i.e., the same tree that a
centralized algorithm would build on all the data. Extensive
experimental results also show that the algorithm is
communication-efficient, even when the data is changing.
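The 1-out-of-k step can be read as a tournament built on the pairwise
primitive. The sketch below is again our own illustration; the names
best_of_k and beats are hypothetical, and in the distributed setting
beats would be resolved by the concurrent majority votes described
above.

```python
def best_of_k(stats, beats):
    """Choose 1-out-of-k attributes using only pairwise comparisons.

    stats[j] holds the per-node (a_i, b_i) statistics for attribute j,
    and beats(x, y) decides whether attribute x wins against attribute
    y. A linear tournament suffices when the comparisons are
    consistent; running the comparison for every attribute pair, as in
    the paper, yields the same winner.
    """
    best = 0
    for j in range(1, len(stats)):
        if beats(stats[j], stats[best]):
            best = j
    return best
```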
Probabilistic gossip-based protocols have been used extensively in
many WSN algorithms due to their simplicity of implementation and
their asymptotically guaranteed convergence. Distributed consensus
primitives such as averaging, summation, and max/min can be
efficiently computed using gossip protocols, in which a node
repeatedly chooses another node at random and exchanges information
with it. This process continues for a number of iterations, and it
can be shown that the error decreases exponentially with each
iteration. Using such a protocol, Chiuso et al. [15]
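For concreteness, the following minimal simulation of pairwise gossip
averaging (our illustration; the function name and iteration count are
arbitrary) shows the mechanism: at each step two random nodes exchange
values and keep the average, and all values converge to the global
mean.

```python
import random

def gossip_average(values, iterations=5000):
    """Simulate pairwise gossip averaging.

    At each step a random node contacts a random partner and both
    replace their readings with the pair's average; the spread of the
    values around the global mean shrinks exponentially over time.
    """
    values = list(values)
    n = len(values)
    for _ in range(iterations):
        i, j = random.sample(range(n), 2)
        values[i] = values[j] = (values[i] + values[j]) / 2.0
    return values

# Example: four sensor readings converge to their mean, 2.5.
print(gossip_average([1.0, 2.0, 3.0, 4.0]))
```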