sensed data, while others explicitly aim at maximizing the lifetime of the entire sensor network².
2.2.1 Proposed Techniques. The KEN technique [25] builds and maintains dynamic probabilistic models over the sensor readings, taking into account the spatio-temporal correlations that exist among them. These models organize the sensor nodes into non-overlapping groups, and are shared by the sensor nodes and the sink. The expected values of the probabilistic models are the values recorded by the sink. If the sensors observe that the sensed values deviate from these expected values by more than a threshold ε_VT, a model update is triggered.
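To make the mechanism concrete, the following is a minimal sketch of this kind of model-driven suppression, assuming a shared model that exposes an expected value; the names (GroupModel, epsilon, send_update) and the mean-based retraining are illustrative simplifications, not the actual machinery of [25]:

```python
# Hypothetical sketch of model-driven suppression in the spirit of KEN.
# Both the sensor group and the sink hold a copy of GroupModel, so the
# sink can record the model's expectation whenever no update arrives.

class GroupModel:
    def __init__(self, expected_value):
        self.expected_value = expected_value

    def retrain(self, observations):
        # Simplified retraining: use the mean of the latest observations.
        self.expected_value = sum(observations) / len(observations)

def sensor_step(model, sensed_value, epsilon, send_update):
    """Sensor-side check: report only when the model drifts too far."""
    if abs(sensed_value - model.expected_value) > epsilon:
        model.retrain([sensed_value])   # refresh the local copy
        send_update(model)              # ship the updated model to the sink
    # Otherwise stay silent; the sink keeps recording the expected value.
```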
The PAQ [98] and SAF [97] methods employ linear regression and
autoregressive models, respectively, for modeling the measurements pro-
duced by the nodes, with SAF leading to a more accurate model than
PAQ.
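As an illustration of the autoregressive side, the sketch below fits a generic AR(p) model to a window of readings with ordinary least squares; this is the textbook estimator, not necessarily the exact procedure of [97] or [98]:

```python
# Generic AR(p) fit: x[t] ≈ a[0]*x[t-1] + ... + a[p-1]*x[t-p].
import numpy as np

def fit_ar(readings, p=3):
    x = np.asarray(readings, dtype=float)
    n = len(x)
    # Row t holds the p lagged values preceding x[t], for t = p .. n-1.
    X = np.column_stack([x[p - 1 - i : n - 1 - i] for i in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(coeffs, recent):
    """Predict the next reading from the p most recent ones (newest first)."""
    return float(np.dot(coeffs, recent))
```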
Silberstein et al. [86, 87] describe an approach for providing continuous data without continuous reporting, but with checks against the actual data. To achieve this goal, the approach introduces temporal and spatio-temporal suppression schemes, which use in-network monitoring to reduce the communication rate to the central server. Based on these schemes, data is routed over a chain architecture; at the end of this chain, the nodes nearest to the central server send the aggregate change of the data to it. Since in this scheme (and in data-driven approaches in general) the loss of a model update can be critical³, special provisions are taken for handling network failures [87], so as to ensure correctness.
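The following sketch illustrates the suppression idea under stated assumptions: each node reports only a change that exceeds its threshold, and the node closest to the server aggregates the changes it receives from its chain. The class and function names are ours, and the failure-handling machinery of [87] is omitted:

```python
# Illustrative temporal suppression with chained aggregation of changes.

class SuppressionNode:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_reported = None

    def maybe_report(self, value):
        """Return the change to forward up the chain, or None to suppress."""
        if self.last_reported is None or abs(value - self.last_reported) > self.threshold:
            delta = value if self.last_reported is None else value - self.last_reported
            self.last_reported = value
            return delta
        return None  # suppressed: the server keeps predicting the old value

def aggregate_chain(deltas):
    """Run at the node nearest the server: forward the net change of the chain."""
    return sum(d for d in deltas if d is not None)
```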
A recent study proposes a new linear model, DBP [79]. The model is trained over m data points, of which the first and last l points are called edge points; it is computed as the slope δ of the segment that connects the average values of the l edge points at the beginning and at the end of the training phase. This model mitigates the problem of noise
and outliers: instead of trying to reduce the approximation error to the
data points in the recent past, DBP aims at producing models that are
consistent with the trends in the recently-observed data. Consequently,
it leads to improved performance, especially in noisy settings. Moreover,
the computation of this model is very simple, and therefore appealing
for implementation on resource-scarce nodes.
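The slope computation just described is simple enough to state directly in code. The sketch below follows the textual description, taking the run between the two edge segments to be m − l points (the distance between the segment centers); the function names are ours:

```python
# DBP-style slope: connect the averages of the first and last l "edge
# points" of an m-point training window, as described above.

def dbp_slope(training, l):
    m = len(training)
    assert 2 * l <= m, "edge segments must not overlap"
    start_avg = sum(training[:l]) / l       # average over the first l points
    end_avg = sum(training[-l:]) / l        # average over the last l points
    return (end_avg - start_avg) / (m - l)  # rise over run between segment centers

def dbp_predict(last_value, slope, steps_ahead):
    """Extrapolate the trend line from the most recent value."""
    return last_value + slope * steps_ahead
```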
² Note that by minimizing the energy consumption of the network, it is possible that the energy of a few specific sensor nodes is depleted much faster than the average. Obviously, this is not desirable, since it may jeopardize the correct operation of the entire network.
³ Losing a single model-update message has the potential to introduce large errors at the sink, as the latter will continue to predict sensor values with an out-of-date model until the next one is received.