Bridging Shannon and Hamming: Codes for Computationally Simple Channels

Coding theory has had two divergent schools of thought, dating back to its origins, distinguished by the underlying model of the noisy channel. Shannon’s theory modeled the channel as a stochastic process with a known probability law. Hamming’s work suggested a worst-case model, in which the channel is subject only to a limit on the number of errors it may cause. The two approaches share several common tools; however, in quantitative terms, the classical results in the harsher Hamming model are much weaker.

In this talk, we will discuss a line of research aimed at bridging these models. We will begin by surveying approaches that rely on setup assumptions (such as shared randomness) to construct codes against worst-case errors with information rate similar to what is achievable against random errors. We then turn to our results for computationally bounded channels, which may introduce an arbitrary set of errors as long as (a) the total fraction of errors is bounded by a parameter p, and (b) the process that adds the errors is computationally “simple.” Such channel models are well motivated: physical noise processes may be mercurial, but they are not computationally intensive. Moreover, as with codes for worst-case errors, codes for such channels can handle errors whose true behavior is unknown or varies over time.
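To make the channel model concrete, the following toy Python sketch (illustrative only, not the talk’s construction; function names are hypothetical) shows an adversary that commits to an error pattern of relative weight at most p without seeing the codeword, i.e., obliviously:

    import random

    def oblivious_error_pattern(n, p, seed=0):
        # Fix at most floor(p * n) error positions without any access to
        # the codeword: the pattern is arbitrary but codeword-oblivious.
        rng = random.Random(seed)
        flips = rng.sample(range(n), k=int(p * n))
        error = [0] * n
        for i in flips:
            error[i] = 1
        return error

    def transmit(codeword, error):
        # Received word is the codeword XOR the pre-committed error pattern.
        return [c ^ e for c, e in zip(codeword, error)]

The key restriction is that oblivious_error_pattern never reads the codeword; a fully adversarial (Hamming-style) channel would be allowed to choose the flips after seeing it.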

We will describe an explicit construction of polynomial-time encodable and decodable codes with rate approaching the Shannon capacity 1-h(p), where h is the binary entropy function, that can correct an arbitrary error pattern with total fraction of errors bounded by p, provided the channel’s action is oblivious to the codeword. We will hint at an extension to channels limited to online logarithmic space, which yields efficient codes of optimal rate that enable recovery of a short list guaranteed to contain the correct message. (A similar claim holds for channels admitting polynomial-size circuits, assuming the existence of pseudorandom generators.) Our results do not use any shared randomness or other setup assumptions.
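As a quick numeric illustration (a minimal sketch, not part of the talk’s results), the capacity 1-h(p) referenced above can be computed directly:

    import math

    def binary_entropy(p):
        # h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0.
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def shannon_capacity(p):
        # Best achievable rate against a fraction p of random errors.
        return 1.0 - binary_entropy(p)

    # For example, at error fraction p = 0.1 the achievable rate is
    # about 0.531; the codes discussed in the talk approach this rate
    # even against oblivious worst-case error patterns.
    print(round(shannon_capacity(0.1), 3))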

Based on joint work with Adam Smith.

Speaker Details

Venkatesan Guruswami received his Bachelor’s degree from the Indian Institute of Technology at Madras in 1997 and his Ph.D. from the Massachusetts Institute of Technology in 2001. He is currently an Associate Professor in the Computer Science Department at Carnegie Mellon University. From 2002 to 2009, he was a faculty member in the Department of Computer Science and Engineering at the University of Washington. Dr. Guruswami was a Miller Research Fellow at the University of California, Berkeley, during 2001-02, and a member of the School of Mathematics at the Institute for Advanced Study during 2007-08.

Dr. Guruswami’s research interests span a broad array of topics including the theory of error-correcting codes, approximation algorithms and non-approximability results for NP-hard optimization problems, explicit combinatorial constructions and pseudorandomness, probabilistically checkable proofs, computational complexity theory, and algebraic algorithms.

Dr. Guruswami currently serves on the editorial boards of the SIAM Journal on Computing, the IEEE Transactions on Information Theory, and the ACM Transactions on Computation Theory. He is a recipient of the Packard Fellowship, the Sloan Fellowship, the NSF CAREER award, the ACM Doctoral Dissertation Award, and the IEEE Information Theory Society Paper Award.

Speakers:
Venkatesan Guruswami
Affiliation:
Carnegie Mellon University