make such techniques unsuitable for many applications. Moreover, the analysis requires the use of worst-case timings for operations, which is pessimistic and often inapplicable in practice. Further, the method cannot handle the dynamic changes that are often required, e.g. changes in the operating conditions for different modes of execution, or for fault-tolerance.
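As an illustration of why worst-case figures dominate such analysis, the classic response-time recurrence for fixed-priority scheduling builds each task's bound entirely from worst-case execution times. The following is a minimal sketch in C; the task parameters and function names are invented for illustration, and tasks are assumed to be listed in decreasing priority order.

    #include <stdio.h>

    /* Invented task set: worst-case execution time C and period T,
     * in microseconds, in decreasing priority order. */
    typedef struct { long C, T; } Task;

    /* Iterate R = C_i + sum over higher-priority j of ceil(R/T_j)*C_j
     * to a fixed point.  A result exceeding the deadline means the
     * task is unschedulable under these worst-case figures; any
     * pessimism in the C values inflates the bound directly. */
    static long response_time(const Task *ts, int i, long deadline) {
        long R = ts[i].C, prev;
        do {
            prev = R;
            R = ts[i].C;
            for (int j = 0; j < i; j++)
                R += ((prev + ts[j].T - 1) / ts[j].T) * ts[j].C;  /* ceiling */
        } while (R != prev && R <= deadline);
        return R;
    }

    int main(void) {
        Task ts[] = { {1000, 4000}, {2000, 8000}, {3000, 20000} };
        for (int i = 0; i < 3; i++)
            printf("task %d: worst-case response %ld us\n",
                   i, response_time(ts, i, ts[i].T));
        return 0;
    }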
It has long been known that dynamic allocation of priorities is better suited for han-
dling changes in the operating environment. The disadvantage is that schedulability can
no longer be analyzed statically.
Recent studies ([Gossler & Sifakis 2000]) have shown how to combine proofs of timed program properties with the use of dynamic priorities. There remain open questions about the efficient implementation of dynamic priorities and about the conditions under which schedulability analysis is still possible.
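By way of illustration, the best-known dynamic-priority policy is earliest-deadline-first (EDF), in which a job's priority is recomputed from its absolute deadline at each scheduling point rather than fixed offline. The toy dispatcher below shows only the core selection step and is not drawn from the cited work.

    #include <stdio.h>

    /* Toy EDF dispatcher (illustrative only).  Under EDF, priorities
     * are not assigned offline: at every scheduling point the ready
     * job with the nearest absolute deadline is dispatched, so
     * priorities shift as jobs arrive and complete. */
    typedef struct { int id; long abs_deadline; } Job;

    static int edf_pick(const Job *ready, int n) {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (ready[i].abs_deadline < ready[best].abs_deadline)
                best = i;
        return best;
    }

    int main(void) {
        Job ready[] = { {1, 12000}, {2, 7000}, {3, 9500} };
        printf("dispatch job %d\n", ready[edf_pick(ready, 3)].id);  /* job 2 */
        return 0;
    }

For the restricted case of independent periodic tasks with deadlines equal to periods on a single processor, EDF does still admit a simple static test: the task set is schedulable exactly when the total utilisation C1/T1 + ... + Cn/Tn does not exceed 1 (Liu and Layland's classic result). The open questions concern richer task models, where no such clean condition is known.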
8 A New Generation
The embedded systems described so far are the 'traditional' ones, in which an application is under centralized or distributed control. Control is intended to be precise and is provided by software executing on a reliable hardware system. This requires the use of relatively complex processors with a large complement of supporting hardware.
Two directions of development throw open a whole new range of problems that require radically different techniques for analysis. The first is the evolution of current mobile technology towards large-scale data analysis and control at a geographic scale ([Stankovic et al 2005]). The second relates to recent work on devices such as the Berkeley Mote ([Warneke et al 2002]), which are intended for large-scale distribution.
In both cases, single nodes are not expected to be fully reliable or to have long lifetimes; indeed, the Berkeley Mote has inherent power limitations which mean that each node will cease operation within a relatively short time. Single nodes have very simple functionality, yet the ensemble of nodes as a whole must be capable of computing accurate results. A great deal of work will be needed for the analysis of such systems, and few of the existing methods of analysis for embedded systems are likely to be of use here.
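To make the ensemble idea concrete, the toy aggregation routine below is an invented example, not drawn from the Mote literature: it accepts a collective result only when enough live nodes have contributed, so the estimate degrades gracefully as individual nodes die.

    #include <stdio.h>
    #include <math.h>

    /* Invented illustration: an ensemble estimate that tolerates
     * node failure.  Dead nodes report NAN; the aggregate is
     * accepted only if a quorum of nodes contributed. */
    static int ensemble_mean(const double *readings, int n,
                             int quorum, double *out) {
        double sum = 0; int live = 0;
        for (int i = 0; i < n; i++)
            if (!isnan(readings[i])) { sum += readings[i]; live++; }
        if (live < quorum) return -1;   /* too few survivors to trust */
        *out = sum / live;
        return 0;
    }

    int main(void) {
        double r[] = { 21.3, NAN, 20.9, 21.1, NAN };  /* two nodes dead */
        double mean;
        if (ensemble_mean(r, 5, 3, &mean) == 0)
            printf("ensemble estimate: %.2f\n", mean);
        return 0;
    }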
9 Testing
There is an inevitability about testing which transcends any disapprobation from the formal techniques community. All systems must be tested because there is no other way of discovering remnant errors introduced during the development cycle, from requirements to coding. This is even more true of embedded systems, because it is only by testing them in situ that the code can be exercised in a realistic environment. However, such operational testing is inherently limited: the more realistic the testing environment, the less control and repeatability there will be, and therefore the harder it becomes to discover the causes of errors. So any reduction in the need for operational testing is of major importance.
 