Browsing by Subject "Network information theory"
Item: Coding for Relay Networks with Parallel Gaussian Channels (2013-05-07)
Huang, Yu-Chih

A wireless relay network consists of multiple source nodes, multiple destination nodes, and possibly many relay nodes in between to facilitate transmission. The performance of such networks depends heavily on the information-forwarding strategies adopted at the relay nodes. This dissertation studies a particular strategy called compute-and-forward. Compute-and-forward is a novel paradigm that incorporates the idea of network coding within the physical layer and hence is often referred to as physical-layer network coding. The main idea is to exploit the superposition nature of the wireless medium to directly compute, or decode, functions of the transmitted signals at intermediate relays in a network. Thus, the coding performed at the physical layer serves the purpose of error correction and also permits recovery of functions of the transmitted signals. For the bidirectional relaying problem with Gaussian channels, Wilson et al. and Nam et al. have shown that the compute-and-forward paradigm is asymptotically optimal and achieves the capacity region to within 1 bit; however, similar results beyond the memoryless case are still lacking, mainly because channels with memory destroy the lattice structure that is crucial to the compute-and-forward paradigm. Extending compute-and-forward to such channels has therefore been a challenging issue, and it motivates this study of compute-and-forward over channels with memory, such as those with inter-symbol interference. The bidirectional relaying problem with parallel Gaussian channels is also studied; this is a relevant model both for the Gaussian bidirectional channel with inter-symbol interference and for that with multiple-input multiple-output channels.
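The compute-and-forward idea summarized above can be illustrated with a minimal one-dimensional sketch (not taken from the dissertation, which uses proper lattice codes): two sources transmit points of the integer lattice, the channel superimposes them, and the relay decodes the *sum* of the messages modulo a shaping modulus rather than either message individually. The modulus `q = 5`, the identity encoder, and the noise level are hypothetical choices for illustration only.

```python
import numpy as np

q = 5                      # hypothetical shaping modulus
rng = np.random.default_rng(0)

def encode(m):
    # toy encoder: map a message in Z_q directly to an integer lattice point
    return m

def relay_decode(y, q):
    # quantize the received signal to the integer lattice, then reduce mod q;
    # the relay recovers the sum of the messages, not each one individually
    return int(np.rint(y)) % q

m1, m2 = 3, 4
y = encode(m1) + encode(m2) + rng.normal(scale=0.1)   # superposition + noise
f = relay_decode(y, q)                                 # = (m1 + m2) mod q
```

The relay never learns `m1` or `m2` separately; in the bidirectional setting, each destination can subtract its own message from the decoded sum to recover the other's.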
Motivated by the recent success of the linear finite-field deterministic model, we first investigate the corresponding deterministic parallel bidirectional relay channel and fully characterize its capacity region. Two compute-and-forward schemes are then proposed for the Gaussian model, and the capacity region is approximately characterized to within a constant gap. The design of coding schemes for the compute-and-forward paradigm with low decoding complexity is then considered. Based on the separation-based framework proposed previously by Tunali et al., this study proposes a family of constellations that are suitable for the compute-and-forward paradigm. Moreover, using the Chinese remainder theorem, it is shown that the proposed constellations are isomorphic to product fields and can therefore be placed in a multilevel coding framework. This study then proposes multilevel coding for the proposed constellations and uses multistage decoding to further reduce decoding complexity.

Item: Multi-scale error-correcting codes and their decoding using belief propagation (2014-05)
Yoo, Yong Seok; Fiete, Ila; Vishwanath, Sriram

This work is motivated by error-correcting codes in the brain. To counteract the effect of representation noise, a large number of neurons participate in encoding even low-dimensional variables. In many brain areas, the mean firing rate of a neuron as a function of the represented variable, called its tuning curve, has a unimodal shape centered at a value that differs across neurons, defining a unary code. This dissertation focuses on a new type of neural code in which neurons have periodic tuning curves with a diversity of periods. Neurons that exhibit this tuning are the grid cells of the entorhinal cortex, which represent self-location in two-dimensional space. First, we investigate the mutual information between such multi-scale codes and the coded variable as a function of tuning curve width.
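As a toy illustration of such a multi-scale periodic code (a sketch, not the dissertation's construction), an integer position can be encoded as its phases modulo a few coprime periods. By the Chinese remainder theorem, the phase tuple determines the position uniquely over a range equal to the product of the periods, far larger than any single period. The periods `(3, 5, 7)` are hypothetical values chosen for readability.

```python
from math import prod

periods = (3, 5, 7)   # hypothetical coprime periods; coding range = 3*5*7 = 105

def encode(x):
    # phase of x within each module, analogous to a grid cell's firing phase
    return tuple(x % p for p in periods)

def decode(phases):
    # brute-force inverse; the Chinese remainder theorem guarantees uniqueness
    return next(x for x in range(prod(periods)) if encode(x) == phases)
```

For example, `encode(52)` gives `(1, 2, 3)`, and `decode((1, 2, 3))` returns 52, even though every individual module period is at most 7.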
For decoding, we consider maximum likelihood (ML) decoding and plausible neural network (NN) based models. For unary neural codes, Fisher information increases with narrower tuning, regardless of the decoding method. By contrast, for the multi-scale neural code, the optimal tuning curve width depends on the decoding method: while narrow tuning is optimal for ML decoding, a finite width, matched to the statistics of the noise, is optimal with an NN decoder. This finding may explain why actual neural tuning curves are relatively wide. Next, motivated by the observation that multi-scale codes involve non-trivial decoding, we examine a decoding algorithm based on belief propagation (BP), since BP promises gains in decoding efficiency. The decoding problem is first formulated as a subset-selection problem on a graph and then approximately solved by BP. Even though the graph has many cycles, BP converges to a fixed point after a few iterations, and the mean square error of BP approaches that of ML at high signal-to-noise ratios. Finally, using the multi-scale code, we propose a joint source-channel coding scheme that allows separate senders to transmit complementary information over additive Gaussian noise channels without cooperation. The receiver decodes one sender's codeword using the other as side information and achieves a lower distortion for the same number of transmissions. The proposed scheme offers a new framework for designing distributed joint source-channel codes for continuous variables.
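To see why decoding a multi-scale code is non-trivial, here is a small sketch of ML decoding for real-valued phases corrupted by circular Gaussian noise (an illustrative assumption, with made-up periods and noise level, not the BP algorithm of the dissertation). The decoder must search the entire coding range, because a small phase error in a single module can alias to a distant position.

```python
import numpy as np

periods = np.array([3.0, 5.0, 7.0])       # hypothetical module periods
grid = np.arange(0.0, 105.0, 0.01)         # candidate positions over the range

def ml_decode(noisy_phases):
    # ML under i.i.d. circular Gaussian phase noise: pick the position whose
    # phases minimize the total squared circular distance to the observations
    err = np.zeros_like(grid)
    for p, ph in zip(periods, noisy_phases):
        d = np.abs(grid % p - ph)
        err += np.minimum(d, p - d) ** 2    # wrap distances around each period
    return grid[np.argmin(err)]

rng = np.random.default_rng(1)
x = 41.3
noisy = (x % periods) + rng.normal(scale=0.05, size=3)
x_hat = ml_decode(noisy)                    # close to x for small noise
```

This exhaustive search is exact but expensive; the dissertation's BP decoder instead passes messages on a (loopy) graph and empirically approaches this ML performance at high signal-to-noise ratios.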