Energy Management in Wireless Networked Embedded Systems (information science)

Introduction

Real-time systems have evolved considerably in recent years, both in the number and variety of their applications and in their complexity. These advances, coupled with those in sensor technology and networking, have led to the rise of a new class of applications that fall into the category of distributed real-time embedded systems (Loyall, Schantz, Corman, Paunicka, & Fernandez, 2005; Report, 2006). Recent technological advancements in device scaling have been instrumental in enabling the mass production of such devices at reduced cost. As a result, applications built from a number of internetworked embedded systems have become prominent. At the same time, there has been a need to move from stand-alone real-time units to networks of units that collaborate to achieve real-time functionality. Extensive research has been carried out to achieve real-time guarantees over a set of nodes distributed over wired networks (Siva Ram Murthy & Manimaran, 2001). However, there exist a number of real-time applications in domains such as industrial processing, military, robotics, and tracking that require the nodes to communicate over the wireless medium, where the application dynamics prevent the existence of a wired communication infrastructure. These applications present challenges beyond those of traditional embedded or networked systems, since they involve many heterogeneous nodes and links, shared and constrained resources, and deployment in dynamic environments where resource contention varies and the communication channel is noisy (Report, 2006; Loyall et al., 2005). Hence, resource management in embedded real-time networks requires efficient algorithms and strategies that balance competing requirements, such as time-sensitive, energy-efficient, and reliable message delivery. In what follows, we present some applications in this category and discuss their requirements and the associated research challenges.


Safety-critical mobile applications running on resource-constrained embedded systems will play an increasingly important role in domains such as automotive systems, space, robotics, and avionics. The core controlling module in such mission-critical applications is an embedded system consisting of a number of autonomous components. These components form a wireless (ad hoc) network and communicate cooperatively with each other to achieve the desired functionality. In these applications, a failure or violation of deadlines can be disastrous, leading to loss of life, money, or equipment. Hence, there arises a need to coordinate and operate within stringent timing constraints, overcoming the limitations of the wireless network. For example, robots used in urban search and rescue missions cooperate with each other and with humans in overlapping workspaces. For this working environment to remain safe and secure, not only must the internal computations of the robots meet their deadlines, but timely coordination of the robots' behavior is also required (Report, 2006). Other such medium-scale distributed real-time embedded applications include target tracking systems that perform surveillance, detection, and tracking of time-critical targets (Loyall et al., 2005), or a mobile robotics application in which a team of autonomous robots cooperates in achieving a common goal, such as using sensor feeds to locate trapped humans in a burning building. Other, more passive applications include the use of networked embedded systems to monitor critical infrastructure such as electric grids (Leon, Vittal, & Manimaran, 2007). These applications need to meet certain real-time constraints in response to transient events, such as fast-moving targets, where the time to detect and respond to events is significantly shortened. In surveillance systems, for example, communication delays within sensing and actuating loops directly affect the quality of tracking. While providing real-time guarantees is the primary requirement in these applications, mechanisms are also needed to address other crucial system concerns, such as energy consumption and accuracy (Rusu, Melhem & Mosse, 2003). In most cases, tradeoffs are involved in balancing these competing requirements.

Background

The typical architecture of a distributed real-time embedded system consists of several processor-controlled nodes interconnected through one or more interconnection networks. The system software running on each node enables the execution of one or more concurrent tasks that are activated by the arrival of triggering events generated by the external environment, a timer, or the arrival of a message from another task. A response to an event generally involves several tasks executing on different nodes and several messages being exchanged over the network. Tasks on the same node may share data and resources using the synchronization mechanisms available in shared-memory systems, and they interact with tasks on other nodes by exchanging messages using the services provided by the communication subsystem. For the proper functioning of the whole system, each individual task, as well as every message exchanged, must complete before its specified deadline.

The workload in the majority of distributed embedded real-time applications is similar to that found in traditional real-time systems, comprising periodic and aperiodic tasks. Periodic tasks form the base load invoked at regular intervals, while aperiodic tasks constitute the transient load generated in response to alarms or other external environmental stimuli. However, one can expect stronger cooperation between the internetworked units in more dynamic and complex systems, inducing richer communication patterns than simple periodic messages. For a distributed real-time embedded system, the primary requirement is that end-to-end timing constraints be met. This implies that there exists a set of messages with complex precedence constraints that need to be exchanged between the networked nodes before some deadline. Hence, one needs to characterize the different message communications and computations that are possible, and perform a preruntime analysis to guarantee, a priori, that all task deadlines will be met. Moreover, in a distributed real-time system, the ability to meet task deadlines largely depends on the underlying task allocation; hence, we need a preruntime task allocation algorithm that takes the real-time constraints into consideration. Intertask communication significantly influences the response time of these distributed applications, and the design therefore needs to account for the delays imposed by the communication network and the precedence constraints imposed by the communicating tasks during task allocation. Since the inherent nature of many of the discussed applications precludes the use of wired networks, wireless networks are commonly used in such applications.

The wireless medium is inherently unreliable due to characteristics such as fading and interference. Hence, to guarantee that tasks meet their timing constraints, it becomes necessary to develop techniques that characterize the unreliability of the network channel and take it into account when making transmission scheduling decisions. Energy management is another crucial aspect for internetworked embedded devices. These devices contain not only radio and computing components, but also complete system functionality, such as networking functions across all levels of the protocol stack. Energy savings and energy allocation among these modules affect the lifetime of these battery-powered devices. Energy management also needs to be considered together with other constraints on size, real-time requirements, functionality, and network connectivity.

In summary, the combination of temporal requirements, limited resources and power, networked system architectures, time-varying wireless channels, and high reliability requirements presents unique challenges (Loyall et al., 2005; Report, 2006). The end goal of most of the research in this area is to devise efficient resource management algorithms for energy-constrained and highly dynamic wireless networks in order to support end-to-end system requirements that are comparable to their wireline counterparts.

Main Focus of the Chapter

Energy management is one of the key issues in the design and operation of networked embedded systems, and it involves energy management at the system level, considering both the computing and communication subsystems. For embedded computing, there are well-known techniques, such as dynamic voltage scaling (DVS) (Aydin, Melhem, Mosse, & Alvarez, 2004; Shin, Kim, & Lee, 2005) and dynamic power management (DPM), that have been exploited by intertask and intratask scheduling algorithms. For wireless communication, techniques such as dynamic modulation scaling (DMS) (Raghunathan, Schurgers, Park, & Srivastava, 2002), dynamic code scaling (DCS), power adaptation (Raghunathan, Pereira, Srivastava, & Gupta, 2005), and adaptive duty cycling have been employed to minimize energy consumption. These techniques essentially provide an energy-time tradeoff: the less time taken for the execution of tasks or the transmission of messages, the higher the energy consumed.
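
To make this tradeoff concrete, the following minimal sketch models the convex energy-time curves typically assumed for DVS and DMS. The functional forms and all constants (effective per-cycle energy, symbol rate, per-symbol energy) are illustrative assumptions for this discussion, not measured device parameters or the models used in the cited work.

```python
# Minimal sketch of the energy-time tradeoff behind DVS and DMS.
# All constants and model shapes are illustrative assumptions.

def dvs_energy(cycles, time, f_max=1e9, e_cycle_max=1e-9):
    """CPU energy to finish 'cycles' of work in 'time' seconds.
    Assumes supply voltage scales roughly with frequency, so energy per cycle
    falls quadratically as the task is stretched toward its deadline."""
    f = cycles / time                      # required clock frequency
    v_rel = f / f_max                      # normalized voltage ~ normalized frequency
    return e_cycle_max * (v_rel ** 2) * cycles

def dms_energy(bits, time, symbol_rate=1e6, e_amp=1e-6, e_elec=1e-7):
    """Radio energy to send 'bits' in 'time' seconds.
    Assumes QAM-like scaling where per-symbol energy grows roughly as 2^b - 1,
    with b the (here continuous) number of bits per symbol."""
    b = bits / (symbol_rate * time)        # bits per symbol needed to meet the deadline
    symbols = bits / b
    return symbols * (e_amp * (2 ** b - 1) + e_elec)

# Stretching either operation toward its deadline lowers its energy,
# which is exactly the slack-for-energy exchange described above.
for t in (0.5e-3, 1e-3, 2e-3):
    print(f"t={t*1e3:.1f} ms  E_cpu={dvs_energy(5e5, t):.2e} J  E_radio={dms_energy(4e3, t):.2e} J")
```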

Our research contributions are in the design of a comprehensive energy management framework, with associated off-line and online scheduling algorithms, for networked embedded systems. In system-level energy management (Kumar, Sudha & Manimaran, 2007; Unsal & Koren, 2003), the fundamental question to be answered is how much of the available slack should be allocated to task execution and how much to message transmission. In general, the computation energy (per CPU cycle) is much less than the communication energy (per bit of data transmitted) for currently available technologies. Therefore, allocating as much slack as possible to communication energy optimization sounds appealing on the surface. However, our analysis shows that there are diminishing returns when the transmission time is increased beyond a certain threshold, once coding is taken into account (Kumar et al., 2007). Therefore, the slack should be allocated in a balanced manner between the computing and communication subsystems, considering the current energy levels of tasks and messages and the channel condition. In our research, we considered DVS and DMS for energy optimization in the computing and communication subsystems, respectively.

The major challenge in performing energy management in networked embedded systems lies in estimating the exact workload required by the application. The exact workload determines the lowest power mode at which the device can operate while still meeting the deadlines. In the case of local computation, the workload refers to the task execution times, which exhibit a wide variation from their worst-case estimates; most of the existing research speculates on these execution times. In the case of messages, the workload refers to the number of retransmissions required over a wireless link for a successful transmission. In our research, we consider both real-time constraints and channel conditions (reliability) while achieving energy efficiency for the networked embedded system. The proposed energy-aware resource management approach, shown in Figure 1, has the following three key components: computing subsystem energy management, communication subsystem energy management, and system-level energy management.

Energy Management at the Computing Subsystem: This deals with the energy-aware real-time scheduling of tasks on a local node. The goal is to minimize the processor energy consumption while meeting all task deadlines. To this end, we have designed cross-layer task scheduling algorithms that exploit intratask information, such as path locality information (if available) and run-time branching information, at the intertask scheduling level. We have designed a generic scheme, called early basic block execution, that aims at minimizing energy consumption by reducing the nondeterminism in the workload; this is achieved by exploiting the control flow graph (CFG) of each task at the intertask level. The basic idea is as follows (Kumar, Sudha & Manimaran, 2006): “whenever the current task generates a slack due to a shorter branch execution, the early execution algorithm uses this slack to execute the basic blocks of the other ready tasks rather than using the entire available slack for slowing down the processor for the current task, with the objective of knowing other tasks’ branching decisions which would otherwise be known at a later point of time.” By performing such early execution of basic blocks, the proposed algorithm builds a better picture of the workload at an earlier point in time. This picture is then exploited to scale the voltage/frequency appropriately across tasks, as opposed to within a task. The approach acquires a much better idea of the future workload (branching decisions) than a crude speculation would, and hence has the potential to offer significantly higher energy savings. These algorithms can be employed in networked as well as stand-alone embedded systems. The performance of such an algorithm depends on the nature of the workload and the overhead incurred by the algorithm itself.
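
The sketch below is a simplified illustration of this policy, not the published algorithm: the task and basic block representation, the assumption that resolving a branch shrinks the remaining worst-case work, and the frequency rule are all simplifications made for exposition.

```python
# Simplified illustration of early basic block execution (not the published
# algorithm): slack from a short branch is first spent resolving the branch
# decisions of other ready tasks, and only the leftover slack is used to slow
# the processor, now with a tighter estimate of the remaining workload.
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    blocks_wcec: List[float]   # worst-case cycles of the remaining basic blocks
    resolved: bool = False     # True once its branch-deciding block has run

def remaining_wcec(tasks: List[Task]) -> float:
    return sum(sum(t.blocks_wcec) for t in tasks)

def on_slack(slack_cycles: float, ready: List[Task], f_max: float) -> float:
    """Spend slack (measured in cycles at f_max) on early block execution, then
    pick a frequency that stretches the remaining worst-case work over the leftover slack."""
    for t in ready:
        if not t.resolved and t.blocks_wcec and slack_cycles >= t.blocks_wcec[0]:
            slack_cycles -= t.blocks_wcec[0]   # run its first (branch-deciding) block now
            t.blocks_wcec.pop(0)
            t.blocks_wcec = t.blocks_wcec[:1]  # assume the shorter path was taken (illustrative)
            t.resolved = True
    work = remaining_wcec(ready)
    return f_max * work / (work + slack_cycles) if work else f_max

# Example: the current task's short branch freed 40,000 cycles of slack.
ready = [Task("taskB", [10_000, 30_000, 20_000]), Task("taskC", [5_000, 25_000])]
print(on_slack(40_000, ready, f_max=1e9))
```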

Figure 1. Schematic of system-level energy management with computing and communication subsystems

Energy Management at the Communication Subsystem: This deals with the energy-aware real-time scheduling of internode messages over the wireless medium, which is prone to phenomena such as fading, noise, and interference. Specifically, given a set of messages, each with a source and a destination, the goal is to transmit them in a way that minimizes the energy consumption of the communication subsystem while meeting all message deadlines with a given probability of success. Due to the fading and noisy nature of the wireless channel, it is not feasible to guarantee 100% reliability.

We propose to estimate the channel condition using past feedback from the receivers and, based on this channel estimation/prediction, to design efficient message transmission strategies, namely, determining the appropriate power level, modulation format, and coding scheme for a given set of messages such that they are successfully transmitted by their deadlines with maximum energy efficiency. We propose to include error-control coding in the energy consumption consideration, in addition to modulation adaptation (as in DMS) and power adaptation. By first quantifying the reliability (i.e., the message success probability) using error exponents and outage probability, we study the problem of allocating the available slack for communication among the messages in a way that maximizes the energy reduction.
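
As a concrete, hedged illustration of such a strategy, the sketch below enumerates candidate modulation levels and transmit powers for a single message and keeps the cheapest setting that satisfies both the deadline and a required success probability. The bit-error model, symbol rate, and candidate sets are placeholder assumptions, not the error-exponent/outage analysis referenced above.

```python
# Minimal sketch: pick a (bits/symbol, transmit power) setting for one message
# that meets its deadline and reliability target at minimum radiated energy.
# The success-probability model is a crude placeholder, for illustration only.
import math

def tx_time(bits, bits_per_symbol, symbol_rate=250e3):
    return bits / (bits_per_symbol * symbol_rate)

def success_prob(snr_linear, bits_per_symbol, bits):
    # Rough square-QAM bit-error estimate; purely illustrative.
    m = 2 ** bits_per_symbol
    ber = 0.2 * math.exp(-1.5 * snr_linear / (m - 1))
    return (1.0 - min(ber, 0.5)) ** bits

def choose_setting(bits, deadline_s, snr_at_unit_power, p_req,
                   levels=(2, 4, 6), powers_mw=(1.0, 5.0, 20.0)):
    best = None
    for b in levels:
        t = tx_time(bits, b)
        if t > deadline_s:
            continue                               # misses the message deadline
        for p in powers_mw:
            if success_prob(snr_at_unit_power * p, b, bits) < p_req:
                continue                           # too unreliable on this channel
            energy = p * 1e-3 * t                  # radiated energy only (simplified)
            if best is None or energy < best[0]:
                best = (energy, b, p)
    return best  # None if no setting meets both deadline and reliability

# Example: 1024-bit message, 10 ms deadline, 99% required success probability.
print(choose_setting(1024, 0.010, snr_at_unit_power=2.0, p_req=0.99))
```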

Energy Management at the Networked System Level: In a typical networked embedded application, each node performs some local computation (task) and communicates the results (message) to a remote node in the network. Both task and message deadlines must be guaranteed in order to provide end-to-end deadline guarantees. To minimize the total energy consumption while guaranteeing the deadlines, the algorithm needs to optimally distribute the available slack among the different tasks and messages. A task utilizes its slack to perform DVS, while a message uses its slack to perform DMS or any other similar technique that trades time for energy. In general, the computation energy is much less than the communication energy. Therefore, allocating the maximum slack to the communication subsystem sounds appealing on the surface. However, as a more refined analysis shows, with coding taken into account there are diminishing returns when the transmission time is increased beyond a certain threshold. Therefore, there should be a balance between the computing and communication subsystems in slack distribution. For specific instances of the problem, we have validated such tradeoffs through theoretical analysis, considering the transmission time, the wireless channel condition, and the different overheads encountered in practice. Based on the results of our analysis, we have designed efficient energy-aware slack distribution algorithms that consider related tasks and messages in an integrated manner (Kumar et al., 2007). The challenge is to design distributed algorithms for slack distribution.
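
The following toy sketch illustrates the slack-split question for one task followed by one message that share a fixed amount of end-to-end slack: it sweeps the fraction given to the message and reports the minimum-energy split. The energy models and constants are illustrative assumptions (and the modulation level is treated as continuous); this is not the analysis in Kumar et al. (2007).

```python
# Toy sketch of balanced slack distribution between a task (DVS) and a message
# (DMS with coding-like flattening). All model forms and constants are assumed
# for illustration only.

def task_energy(extra_time):
    """DVS-style model: convex and decreasing in the time added to the task."""
    t_base = 1e-3                                    # nominal execution time (s)
    return 1e-9 / (t_base + extra_time) ** 2

def msg_energy(extra_time):
    """DMS-style model: decreasing, but flattening once the modulation level is low."""
    bits, rate, t_base = 1024, 250e3, 1e-3
    b = bits / (rate * (t_base + extra_time))        # (continuous) bits per symbol
    return (bits / b) * 1e-6 * (2 ** b - 1)          # per-symbol energy ~ 2^b - 1

def best_split(total_slack, steps=100):
    """Sweep how much of the shared slack goes to the message; return the optimum."""
    options = []
    for i in range(steps + 1):
        to_msg = total_slack * i / steps
        total = task_energy(total_slack - to_msg) + msg_energy(to_msg)
        options.append((total, to_msg))
    return min(options)                              # (total energy, slack given to the message)

# Past a point, extra radio slack buys little (the 2^b - 1 term has already
# collapsed), so the optimum is an interior, balanced split rather than
# "give everything to the radio".
energy, to_msg = best_split(total_slack=4e-3)
print(f"best total energy {energy:.3e} J with {to_msg*1e3:.2f} ms of slack to the message")
```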

The optimization metric for the system-level energy management algorithm depends on the requirements of the underlying application. Sample metrics include minimizing the total energy of the entire system and minimizing the maximum energy of the nodes in the system; at a higher level, metrics include maximizing the lifetime of the networked embedded system while satisfying coverage, connectivity, real-time, and/or reliability properties. The resource management consists of static and dynamic scheduling of tasks and messages. In static scheduling, the workload is periodic and the schedule is constructed off-line for the given workload on the target system platform. There are two options here: (1) construct an energy-optimized feasible schedule using an energy-unoptimized feasible schedule as the input; or (2) construct an energy-optimized feasible schedule using the workload and the target networked platform as inputs. While the first problem assumes a feasible schedule is produced by an existing distributed real-time scheduling algorithm, the second problem does not rely on such a schedule and is therefore a harder problem than the first. In dynamic scheduling, the approach is to reclaim both static and dynamic slacks and use them for energy optimization through a dynamic slack distribution algorithm. The static slacks are the ones left in the schedule as “holes,” and the dynamic slacks are the ones created in situations such as when the actual computation time of a task is less than its scheduled worst-case computation time, or when the actual transmission time (including retransmissions) of a message is less than its worst-case transmission time. In dynamic scheduling, an efficient means of keeping track of the slack is critical for achieving high energy performance; moreover, dynamic slack reclamation and distribution should be done in a distributed manner (with little or no coordination among nodes) without leading to any deadline violation anomalies.
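
As a hedged, single-node illustration of slack reclamation, the sketch below walks a frame of jobs that share one frame deadline: static slack is the frame length minus the total worst-case work, dynamic slack appears whenever a job finishes under its worst-case budget, and each job is stretched only by the slack that remains safe given the worst-case demand of everything still to run, so the frame deadline holds. The frame model and numbers are assumptions for exposition; the distributed, multi-node version discussed above is the harder problem.

```python
# Minimal sketch of greedy slack reclamation in a frame-based schedule.
# Static slack = frame length minus total worst-case work; dynamic slack is
# created by early completions. Each job may use only the slack that keeps the
# frame deadline safe even if all later jobs run to their worst case.

def reclaim_and_scale(jobs, actuals, frame_deadline):
    """jobs: list of (name, wcet); actuals: dict name -> measured time (s).
    Returns (name, slowdown factor, finish time in ms) per job."""
    now, trace = 0.0, []
    for i, (name, wcet) in enumerate(jobs):
        wc_rest = sum(w for _, w in jobs[i + 1:])          # worst case of later jobs
        safe_slack = max(0.0, frame_deadline - now - wcet - wc_rest)
        scale = (wcet + safe_slack) / wcet                 # e.g., run at f_max / scale (DVS/DMS)
        finish = now + actuals[name] * scale               # early finish -> more slack later
        trace.append((name, round(scale, 2), round(finish * 1e3, 3)))
        now = finish
    assert now <= frame_deadline                           # no deadline violation
    return trace

jobs = [("taskA", 2e-3), ("msgA", 1e-3), ("taskB", 2e-3)]
actuals = {"taskA": 1.2e-3, "msgA": 0.6e-3, "taskB": 1.8e-3}
print(reclaim_and_scale(jobs, actuals, frame_deadline=8e-3))
```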

Future Trends

This work opens up several avenues for further research in the emerging area of networked embedded systems, which include the following: (1) real-time networked embedded system architectures and resource management algorithms; (2) cross-layer algorithms for compute-communication energy management; (3) cross-layer algorithms for sense-compute-communication energy management; (4) implementation of such architectures and algorithms for real-world applications; and (5) specific research on energy management algorithms, including: (a) designing static system-level energy-aware scheduling algorithms that take the task set and the network architecture as inputs, as opposed to taking a given feasible schedule as input; (b) studying the tradeoffs involving other energy optimization techniques (e.g., DPM vs. DMS, DVS vs. DMS+DCS), and designing static and dynamic slack allocation algorithms for these specific instances; and (c) addressing all of these research problems in the context of multihop networks with structured topologies (e.g., tree and mesh) as well as arbitrary topologies.

Conclusion

Real-time embedded systems play a prominent role in a variety of applications, ranging from medical sensors in the human body to signaling sensors on battlefields. The consumer domain of embedded devices is large and ever increasing. A natural result of this trend, coupled with advances in sensor technologies and wireless communications, has been the rise of a new class of systems called networked embedded systems. Energy management is one of the key issues in the design and operation of such systems, and it involves energy management at the system level, considering both the computing and communication subsystems. This chapter advocated cross-layer algorithms for energy-aware resource management in networked embedded systems, considering an integrated workload of tasks and messages and their respective power management techniques. The fundamental question to be answered is how much of the available slack should be allocated to task execution and how much to message transmission. The answer lies in analyzing the characteristics of the tasks and messages, considering their current energy levels, deadlines, and the channel condition. The research highlighted in this chapter opens up several research directions in wireless networked embedded systems.

Key Terms

Cross-Layer Algorithms: Algorithms in which two or more layers of the system (e.g., the computing and communication layers) work synergistically to achieve the stated objective of the system.

Embedded System: A computing system that forms the core part of a larger system, providing sensing, processing, and actuation capabilities.

Energy-Aware Resource Management: Resource management performed with the goal of minimizing energy consumption in the system.

Energy-Time Tradeoffs: This refers to the tradeoff between the time taken to execute a task or transmit a message and the amount of energy consumed: the less time taken for the execution of tasks or the transmission of messages, the higher the energy consumed.

Real-Time Workload: A workload consisting of a set of tasks and messages that have precedence relations among themselves, where each task/message has a specific deadline before which its execution/transmission must be completed.

System-Level Resource Management: Resource management that pursues system-level objectives, as opposed to making subsystem-level optimizations.

Wireless Embedded Network: A set of embedded nodes connected through a wireless network; the wireless channel is time-variant.
