Digital Signal Processing Reference
It is well known that perfect cooperation (a two-antenna MIMO transceiver) achieves
a diversity gain of d = 2 and a multiplexing gain of r = 2 [3]. On the other hand, the
interference channel (without node cooperation) in Figure 12.3 provides no diversity or
multiplexing gain (i.e., d = 1 and r = 1).
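The diversity-gain claim can be checked numerically: at high SNR the outage probability decays as SNR^(-d), so the slope of outage versus SNR on a log-log scale estimates the diversity order. The following is a minimal Monte Carlo sketch (not from the text) using Rayleigh fading, with maximal-ratio combining of two independent branches as a stand-in for the two-antenna case; the rate target, SNR points, and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(snr_db, n_div, rate=1.0, trials=500_000):
    """Monte Carlo outage probability over Rayleigh fading.
    n_div i.i.d. branches combined by MRC (n_div = 1: no cooperation)."""
    snr = 10 ** (snr_db / 10)
    # |h_i|^2 is exponential(1) under Rayleigh fading
    gain = rng.exponential(1.0, size=(trials, n_div)).sum(axis=1)
    return np.mean(np.log2(1 + snr * gain) < rate)

def diversity_slope(n_div, lo_db=10, hi_db=20):
    """Estimate d from the log-log slope of outage vs. SNR (per decade)."""
    p_lo, p_hi = outage_prob(lo_db, n_div), outage_prob(hi_db, n_div)
    return -(np.log10(p_hi) - np.log10(p_lo))

print(diversity_slope(1))  # close to 1: no cooperation
print(diversity_slope(2))  # close to 2: two-branch diversity
```

Each extra decade of SNR buys roughly d decades of outage reduction, which is exactly the d = 1 versus d = 2 distinction drawn above.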
12.3 Capacity Bounds
In this section, we first present capacity bounds for cooperative diversity, indicating a multiplexing gain of only r = 1 at high SNRs, which is a somewhat negative result. The main message, however, is that node cooperation can still provide a large additive rate gain and a diversity gain of d = 2.
While the capacities of most point-to-point channels are known, this is not the case for wireless multinode channels. Indeed, among multinode channels we know only the capacities of the Gaussian multiple-access channel (MAC) and the Gaussian broadcast channel. For all other multinode channels, e.g., the relay and interference channels, capacities are known only in special cases. However, it is possible to obtain upper and lower bounds on the capacity, which are often very close and thereby practically indicate the capacity. A lower bound is a rate attainable by some coding scheme and is therefore an achievable rate; no rate above the upper bound can be achieved. If the lower and upper bounds coincide, the complete rate region is known. If they do not, the gap between them characterizes the unknown region.
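As a concrete illustration of how close such bounds can be, the sketch below evaluates the classical cut-set upper bound and the decode-and-forward achievable rate for a full-duplex Gaussian relay channel, using the standard formulas from the relay-channel literature (these formulas and the link SNR values are illustrative assumptions, not taken from this text).

```python
import numpy as np

# Receive SNRs (linear) on the source-relay, source-destination, and
# relay-destination links -- illustrative values, not from the text.
g_sr, g_sd, g_rd = 20.0, 1.0, 10.0

def C(x):
    """Complex AWGN capacity in bits per channel use."""
    return np.log2(1.0 + x)

# rho: correlation between the source and relay transmit signals
rho = np.linspace(0.0, 1.0, 10_001)
# Coherent-combining (multiple-access cut) term, shared by both bounds
mac = C(g_sd + g_rd + 2 * rho * np.sqrt(g_sd * g_rd))

# Cut-set upper bound: min over the broadcast and multiple-access cuts
upper = np.max(np.minimum(C((1 - rho**2) * (g_sd + g_sr)), mac))

# Decode-and-forward achievable rate (a lower bound on capacity)
lower = np.max(np.minimum(C((1 - rho**2) * g_sr), mac))

direct = C(g_sd)  # baseline: no cooperation
print(f"direct link  : {direct:.3f} bits")
print(f"DF lower     : {lower:.3f} bits")
print(f"cut-set upper: {upper:.3f} bits")
```

With a strong source-relay link (g_sr much larger than g_sd), the two bounds nearly meet, pinning down the capacity in practice; both lie well above the direct-link rate, illustrating the additive gain of cooperation.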
There are two main ideas for obtaining achievable rates in cooperative channels. The first is based on nodes decoding messages from other nodes and re-encoding them. The second exploits the joint statistics of the data at cooperating nodes by means of coding with side information, i.e., Wyner-Ziv coding [13] or dirty-paper coding [14]. Specifically, it turns out that Wyner-Ziv coding achieves the capacity of receiver cooperation (asymptotically as the interference and SNR approach infinity), while dirty-paper coding plays a major role in transmitter cooperation. Below we give a brief summary of coding with side information.
12.3.1 Coding with Side Information
Distributed source coding addresses the separate compression and joint decompression of correlated sources [8]. Its foundation was laid by Slepian and Wolf [15], who characterized the rate region for lossless compression of two correlated discrete sources, showing a surprising result: separate encoding with joint decoding suffers no rate loss compared to the case when the sources are compressed jointly. The framework was extended and generalized in [16], where the problem of lossy compression under distortion constraints, called multiterminal source coding, was posed and bounds were given.
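The no-rate-loss result is easy to verify numerically for a standard test case. The sketch below uses a doubly symmetric binary source (X uniform, Y equal to X flipped with probability p), which is an illustrative correlation model chosen here, not one taken from the text, and compares the Slepian-Wolf minimum sum rate H(X,Y) with naive separate compression at H(X) + H(Y).

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Doubly symmetric binary source: X ~ Bernoulli(1/2), Y = X XOR N,
# N ~ Bernoulli(p) -- an illustrative correlation model.
p = 0.1
H_X, H_Y = 1.0, 1.0
H_X_given_Y = h2(p)   # = H(Y|X) by symmetry
H_XY = 1.0 + h2(p)    # joint entropy

# Slepian-Wolf region: R_X >= H(X|Y), R_Y >= H(Y|X), R_X + R_Y >= H(X,Y)
print(f"separate coding, no decoder cooperation: {H_X + H_Y:.3f} bits")
print(f"Slepian-Wolf minimum sum rate          : {H_XY:.3f} bits")
```

The sum rate H(X,Y) is exactly what joint encoding would need, so joint decoding alone recovers the full correlation gain; the stronger the correlation (smaller p), the larger the saving over H(X) + H(Y).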
A special case of multiterminal source coding is source coding with side information at the decoder, or Wyner-Ziv coding (WZC). The WZC problem considers lossy compression of a source X under a distortion constraint when a correlated source S, called side information, is available at the decoder but not at the encoder (see Figure 12.4).
This rate-distortion problem was first considered by Wyner and Ziv in [13], where the
 