'''Distributed source coding''' ('''DSC''') is an important problem in [[information theory]] and [[communication]]. DSC problems concern the compression of multiple correlated information sources that do not communicate with each other.<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1328091 "Distributed source coding for sensor networks" by Z. Xiong, A.D. Liveris, and S. Cheng]</ref> By modeling the correlation between multiple sources at the decoder side together with [[channel code]]s, DSC shifts the computational complexity from the encoder side to the decoder side, and therefore provides an appropriate framework for applications with complexity-constrained senders, such as [[sensor networks]] and video/multimedia compression (see [[distributed video coding]]<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=1657820&isnumber=34703 "Distributed video coding in wireless sensor networks" by Puri, R.; Majumdar, A.; Ishwar, P.; Ramchandran, K.]</ref>). One of the main properties of distributed source coding is that the computational burden of the encoders is shifted to the joint decoder.
 
==History==
In 1973, [[David Slepian]] and [[Jack Keil Wolf]] proposed the information-theoretic lossless compression bound on distributed compression of two statistically dependent [[IID|i.i.d.]] sources X and Y.<ref name=swbound>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1055037 "Noiseless coding of correlated information sources" by D. Slepian and J. Wolf]</ref> This bound was extended to cases with more than two sources by [[Thomas M. Cover]] in 1975,<ref name=swergodic>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1055356 "A proof of the data compression theorem of Slepian and Wolf for ergodic sources" by T. Cover]</ref> while theoretical results for the lossy compression case were presented by [[Aaron D. Wyner]] and [[Jacob Ziv]] in 1976.<ref name=wzbound>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=1055508 "The rate-distortion function for source coding with side information at the decoder" by A. Wyner and J. Ziv]</ref>
 
Although the theorems on DSC were proposed in the 1970s, it took about 30 years before attempts at practical techniques began, based on the idea, proposed in 1974 by [[Aaron D. Wyner]], that DSC is closely related to channel coding.<ref name=swpractical>[http://ieeexplore.ieee.org/xpls/freeabs_all.jsp?arnumber=1055171 "Recent results in Shannon theory" by A. D. Wyner]</ref> The asymmetric DSC problem was addressed by S. S. Pradhan and K. Ramchandran in 1999; their work focused on statistically dependent binary and Gaussian sources and used scalar and trellis [[coset]] constructions to solve the problem.<ref name=discus>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=1055508 "Distributed source coding using syndromes (DISCUS): design and construction" by S. S. Pradhan and K. Ramchandran]</ref> They further extended the work to the symmetric DSC case.<ref name=discus2>[http://ieeexplore.ieee.org/xpls/freeabs_all.jsp?arnumber=838176 "Distributed source coding: symmetric rates and applications to sensor networks" by S. S. Pradhan and K. Ramchandran]</ref>
 
[[Syndrome decoding]] technology was first used in distributed source coding by the [[DISCUS]] system of S. S. Pradhan and K. Ramchandran (Distributed Source Coding Using Syndromes).<ref name=discus/> It compresses binary block data from one source into syndromes and transmits data from the other source uncompressed as [[side information]]. This kind of DSC scheme achieves asymmetric compression rates per source and results in ''asymmetric'' DSC. The scheme can easily be extended to the case of more than two correlated information sources. There are also some DSC schemes that use [[parity bit]]s rather than syndrome bits.
 
The correlation between two sources in DSC has been modeled as a [[virtual channel]], usually referred to as a [[binary symmetric channel]].<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1281474 "Distributed code constructions for the entire Slepian–Wolf rate region for arbitrarily correlated sources" by Schonberg, D.; Ramchandran, K.; Pradhan, S.S.]</ref><ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1512420 "Generalized coset codes for distributed binning" by Pradhan, S.S.; Ramchandran, K.]</ref>
 
Starting from [[DISCUS]], DSC has attracted significant research activity, and more sophisticated channel coding techniques have been adopted into DSC frameworks, such as [[Turbo Code]]s and [[LDPC]] codes.
 
Similar to the lossless coding framework based on the Slepian–Wolf theorem, efforts have been made on lossy cases based on the Wyner–Ziv theorem. Theoretical results on quantizer designs were provided by R. Zamir and S. Shamai,<ref name=wzquantize>[http://ieeexplore.ieee.org/xpls/freeabs_all.jsp?arnumber=706450 "Nested linear/lattice codes for Wyner–Ziv encoding" by R. Zamir and S. Shamai]</ref> and different frameworks have been proposed based on this result, including a nested lattice quantizer and a trellis-coded quantizer.
 
Moreover, DSC has been used in video compression for applications which require low-complexity video encoding, such as sensor networks and multiview video camcorders.<ref name=dvc>[http://ieeexplore.ieee.org/xpls/freeabs_all.jsp?arnumber=1369699 "Distributed Video Coding" by B. Girod et al.]</ref>
 
With both deterministic and probabilistic models of the correlation between two information sources, DSC schemes with more general compression rates have been developed.<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1614079 "On code design for the Slepian–Wolf problem and lossless multiterminal networks" by Stankovic, V.; Liveris, A.D.; Zixiang Xiong; Georghiades, C.N.]</ref><ref>[http://portal.acm.org/citation.cfm?id=1226544 "A general and optimal framework to achieve the entire rate region for Slepian–Wolf coding" by P. Tan and J. Li]</ref><ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4471935 "Distributed source coding using short to moderate length rate-compatible LDPC codes: the entire Slepian–Wolf rate region" by Sartipi, M.; Fekri, F.]</ref> In these ''non-asymmetric'' schemes, both correlated sources are compressed.
 
Under a certain deterministic assumption on the correlation between information sources, a DSC framework in which any number of information sources can be compressed in a distributed way has been demonstrated by X. Cao and M. Kuijper.<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?isnumber=4895364&arnumber=4895396&count=299&index=31 "A distributed source coding framework for multiple sources" by Xiaomin Cao and Kuijper, M.]</ref> This method performs non-asymmetric compression with flexible rates for each source, achieving the same overall compression rate as repeatedly applying asymmetric DSC to more than two sources.
 
==Theoretical bounds==
The information-theoretic lossless compression bound on DSC (the [[Slepian–Wolf bound]]) was first proposed by [[David Slepian]] and [[Jack Keil Wolf]] in terms of the entropies of correlated information sources in 1973.<ref name=swbound/> They also showed that two isolated sources can compress data as efficiently as if they were communicating with each other. This bound was extended to the case of more than two correlated sources by [[Thomas M. Cover]] in 1975.<ref name=swergodic/>
 
Similar results were obtained in 1976 by [[Aaron D. Wyner]] and [[Jacob Ziv]] for lossy coding of jointly Gaussian sources.<ref name=wzbound/>
 
===Slepian–Wolf bound===
Distributed coding is the coding of two or more dependent sources with separate encoders and a joint decoder. Given two statistically dependent i.i.d. finite-alphabet random sequences <math>X</math> and <math>Y</math>, the Slepian–Wolf theorem gives the theoretical bound for the lossless coding rate of distributed coding of the two sources:<ref name=swbound/>
 
: <math>R_X\geq H(X|Y), \,</math>
 
: <math>R_Y\geq H(Y|X), \, </math>
 
: <math>R_X+R_Y\geq H(X,Y). \, </math>
 
If the encoders and decoders of the two sources are independent, the lowest rates achievable for lossless compression are <math>H(X)</math> and <math>H(Y)</math> for <math>X</math> and <math>Y</math> respectively, where <math>H(X)</math> and <math>H(Y)</math> are the entropies of <math>X</math> and <math>Y</math>. However, with joint decoding, if a vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that a much better compression rate can be achieved. As long as the total rate of <math>X</math> and <math>Y</math> exceeds their joint entropy <math>H(X,Y)</math> and neither source is encoded at a rate below its conditional entropy given the other, distributed coding can achieve an arbitrarily small error probability for long sequences.
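
As a numeric illustration, the corner points of this rate region can be computed directly from a joint distribution; the joint distribution below is an assumed toy example, not taken from the literature:

<syntaxhighlight lang="python">
import numpy as np

# Assumed toy joint pmf p[x, y] of a dependent binary pair (X, Y).
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])

def H(pmf):
    """Entropy in bits; zero-probability terms contribute nothing."""
    pmf = pmf[pmf > 0]
    return float(-(pmf * np.log2(pmf)).sum())

H_X = H(p.sum(axis=1))      # H(X) = 1.0
H_Y = H(p.sum(axis=0))      # H(Y) = 1.0
H_XY = H(p.flatten())       # H(X,Y) ~= 1.72

# Separate coding needs H(X) + H(Y) = 2 bits per symbol pair; joint
# decoding allows any total rate above H(X,Y), with per-source rates
# no less than the conditional entropies H(X|Y) and H(Y|X).
print(H_XY, H_XY - H_Y, H_XY - H_X)   # ~1.72, ~0.72, ~0.72
</syntaxhighlight>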
 
A special case of distributed coding is compression with decoder side information, where source <math>Y</math> is available at the decoder side but not accessible at the encoder side. This can be treated as the case in which a rate <math>R_Y=H(Y)</math> has already been used to encode <math>Y</math>, while we intend to use only <math>H(X|Y)</math> to encode <math>X</math>. The whole system operates in an asymmetric way (the compression rates for the two sources are asymmetric).
 
===Wyner–Ziv bound===
Shortly after the Slepian–Wolf theorem on lossless distributed compression was published, the extension to lossy compression with decoder side information was proposed as the Wyner–Ziv theorem.<ref name=wzbound/> As in the lossless case, two statistically dependent i.i.d. sources <math>X</math> and <math>Y</math> are given, where <math>Y</math> is available at the decoder side but not accessible at the encoder side. In place of the lossless compression of the Slepian–Wolf theorem, the Wyner–Ziv theorem addresses the lossy compression case.
 
The Wyner–Ziv theorem gives the achievable lower bound for the bit rate of <math>X</math> at a given distortion <math>D</math>. It was found that for Gaussian memoryless sources and mean-squared error distortion, the lower bound for the bit rate of <math>X</math> remains the same whether or not the side information is available at the encoder.
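
For jointly Gaussian sources under mean-squared error distortion, this common bound takes the standard conditional rate-distortion form

: <math>R_{WZ}(D) = \max\left\{ \tfrac{1}{2}\log_2 \frac{\sigma^2_{X|Y}}{D},\, 0 \right\},</math>

where <math>\sigma^2_{X|Y}</math> is the conditional variance of <math>X</math> given <math>Y</math> (for <math>D \geq \sigma^2_{X|Y}</math> no rate is needed).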
 
==Virtual channel==
'''Deterministic''' model
 
'''Probabilistic''' model
 
==Asymmetric DSC vs. symmetric DSC==
Asymmetric DSC means that different bitrates are used in coding the input sources, while the same bitrate is used in symmetric DSC. Taking a DSC design with two sources as an example: <math>X</math> and <math>Y</math> are two discrete, memoryless, uniformly distributed sources which generate variables <math>\mathbf{x}</math> and <math>\mathbf{y}</math> of length 7 bits, where the Hamming distance between <math>\mathbf{x}</math> and <math>\mathbf{y}</math> is at most one. The Slepian–Wolf bound for them is:
 
:<math>R_X+R_Y \geq 10</math>
:<math>R_X \geq 3</math>
:<math>R_Y \geq 3</math>
 
This means the theoretical bound on the total rate is <math>R_X+R_Y=10</math>, and symmetric DSC means 5 bits for each source. Other pairs with <math>R_X+R_Y=10</math> are asymmetric cases with different bit-rate distributions between <math>X</math> and <math>Y</math>; <math>R_X=3</math>, <math>R_Y=7</math> and <math>R_Y=3</math>, <math>R_X=7</math> represent the two extreme cases, called decoding with side information.
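
These values can be checked directly from the correlation model: <math>\mathbf{x}</math> is uniform over <math>2^7</math> values, and given <math>\mathbf{x}</math> there are <math>1+7=8</math> possible values of <math>\mathbf{y}</math> (assumed equally likely), so

: <math>H(X)=7,\qquad H(Y|X)=\log_2 8=3,\qquad H(X,Y)=H(X)+H(Y|X)=10.</math>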
 
==Practical distributed source coding==
===Slepian–Wolf coding – lossless distributed coding===
It was understood in 1974 that [[Slepian–Wolf coding]] is closely related to channel coding,<ref name=swpractical/> and after about 30 years practical DSC started to be implemented using various channel codes. The motivation for using channel codes comes from the two-source case: the correlation between the input sources can be modeled as a virtual channel with source <math>X</math> as input and source <math>Y</math> as output. The [[DISCUS]] system proposed by S. S. Pradhan and K. Ramchandran in 1999 implemented DSC with [[syndrome decoding]]; it worked for the asymmetric case and was further extended to the symmetric case.<ref name=discus/><ref name=discus2/>
 
In the basic framework of syndrome-based DSC, the input space of each source is partitioned into several cosets according to the particular channel coding method used. Every input of each source gets an output indicating which coset the input belongs to, and the joint decoder can decode all inputs from the received coset indices and the dependence between the sources. The design of the channel codes should take the correlation between the input sources into account.
 
A group of codes can be used to generate coset partitions,<ref>[http://ieeexplore.ieee.org/xpls/freeabs_all.jsp?arnumber=21245 "Coset codes. I. Introduction and geometrical classification" by G. D. Forney]</ref> such as trellis codes and lattice codes. Pradhan and Ramchandran designed rules for constructing sub-codes for each source, and presented results for trellis-based coset constructions in DSC, based on [[convolution code|convolutional codes]] and set-partitioning rules as in [[Trellis modulation]], as well as lattice-code-based DSC.<ref name=discus/><ref name=discus2/> After this, an embedded trellis code was proposed for asymmetric coding as an improvement over their results.<ref>[http://ieeexplore.ieee.org/xpls/freeabs_all.jsp?arnumber=917167 "Design of trellis codes for source coding with side information at the decoder" by X. Wang and M. Orchard]</ref>
 
After the DISCUS system was proposed, more sophisticated channel codes have been adapted to the DSC system, such as [[Turbo Code]]s, [[LDPC]] codes and iterative channel codes. The encoders of these codes are usually simple and easy to implement, while the decoders have much higher computational complexity and are able to obtain good performance by utilizing the source statistics. With sophisticated channel codes whose performance approaches the capacity of the correlation channel, the corresponding DSC system can approach the Slepian–Wolf bound.
 
Although most research has focused on DSC with two dependent sources, Slepian–Wolf coding has been extended to the case of more than two input sources, and methods for generating sub-codes from one channel code were proposed by V. Stankovic, A. D. Liveris, et al. for particular correlation models.<ref>[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1281475 "Design of Slepian–Wolf codes by channel code partitioning" by V. Stankovic, A. D. Liveris, Z. Xiong and C. N. Georghiades]</ref>
 
====General theorem of Slepian–Wolf coding with syndromes for two sources====
'''Theorem''': Any pair of correlated uniformly distributed sources, <math>X, Y \in \left\{0,1\right\}^n</math>, with <math>\mathbf{d_H}(X, Y) \leq t</math>, can be compressed separately at a rate pair <math>(R_1, R_2)</math> such that <math> R_1, R_2 \geq n-k, R_1+R_2 \geq 2n-k</math>, where <math>R_1</math> and <math>R_2</math> are integers, and <math>k \leq n-\log(\sum_{i=0}^t{n \choose i})</math>. This can be achieved using an <math>(n,k,2t+1)</math> binary linear code.
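
For instance, the <math>(7,4,3)</math> binary Hamming code (<math>n=7</math>, <math>k=4</math>, <math>t=1</math>) meets the condition on <math>k</math> with equality,

: <math>n-\log_2\left(\sum_{i=0}^{t}{n \choose i}\right) = 7-\log_2(1+7)=4=k,</math>

giving rate pairs with <math>R_1, R_2 \geq 3</math> and <math>R_1+R_2 \geq 10</math>, which matches the Slepian–Wolf bound of the example above.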
 
''Proof'': The Hamming bound for an <math>(n,k,2t+1)</math> binary linear code is <math>k \leq n-\log(\sum_{i=0}^t{n \choose i})</math>, and Hamming codes achieve this bound; therefore we have such a binary linear code <math>\mathbf{C}</math> with a <math>k\times n</math> generator matrix <math>\mathbf{G}</math>. Next we show how to construct a syndrome encoding based on this linear code.
 
Let <math>R_1+R_2=2n-k</math>, let <math>\mathbf{G_1}</math> be formed by taking the first <math>(n-R_1)</math> rows of <math>\mathbf{G}</math>, and let <math>\mathbf{G_2}</math> be formed from the remaining <math>(n-R_2)</math> rows of <math>\mathbf{G}</math>. <math>\mathbf{C_1}</math> and <math>\mathbf{C_2}</math> are the subcodes of the Hamming code generated by <math>\mathbf{G_1}</math> and <math>\mathbf{G_2}</math> respectively, with <math>\mathbf{H_1}</math> and <math>\mathbf{H_2}</math> as their parity check matrices.
 
For an input pair <math>\mathbf{(x, y)}</math>, the encoder is given by <math>\mathbf{s_1}=\mathbf{H_1}\mathbf{x}</math> and <math>\mathbf{s_2}=\mathbf{H_2}\mathbf{y}</math>. That means we can represent <math>\mathbf{x}</math> and <math>\mathbf{y}</math> as <math>\mathbf{x=u_1G_1+c_{s1}}</math>, <math>\mathbf{y=u_2G_2+c_{s2}}</math>, where <math>\mathbf{c_{s1}, c_{s2}}</math> are the representatives of the cosets with syndromes <math>\mathbf{s_1, s_2}</math> with regard to <math>\mathbf{C_1, C_2}</math> respectively. Since <math>\mathbf{y=x+e}</math> with <math>w(\mathbf{e}) \leq t</math>, we get <math>\mathbf{x+y=uG+c_{s}=e}</math>, where <math>\mathbf{u=\left[ u_1, u_2\right] }</math> and <math>\mathbf{c_{s}=c_{s1}+c_{s2}}</math>.
 
Suppose there were two different input pairs with the same syndromes; then there would be two different strings <math>\mathbf{u^1, u^2} \in \left\{ 0,1\right\}^k</math> such that <math>\mathbf{u^1G+c_{s}=e}</math> and <math>\mathbf{u^2G+c_{s}=e}</math>. Thus we would have <math>\mathbf{(u^1-u^2)G=0}</math>. Because the minimum Hamming weight of the code <math>\mathbf{C}</math> is <math>2t+1</math>, the distance between <math>\mathbf{u^1G}</math> and <math>\mathbf{u^2G}</math> is <math>\geq 2t+1</math>. On the other hand, <math>w(\mathbf{e}) \leq t</math> together with <math>\mathbf{u^1G+c_{s}=e}</math> and <math>\mathbf{u^2G+c_{s}=e}</math> gives <math>d_H(\mathbf{u^1G, c_{s}}) \leq t</math> and <math>d_H(\mathbf{u^2G, c_{s}}) \leq t</math>, which contradicts <math>d_H(\mathbf{u^1G, u^2G}) \geq 2t+1</math>. Therefore, no two input pairs can have the same syndromes.
 
Therefore, we can successfully compress the two dependent sources with subcodes constructed from an <math>(n,k,2t+1)</math> binary linear code, with a rate pair <math>(R_1, R_2)</math> such that <math> R_1, R_2 \geq n-k</math> and <math>R_1+R_2 \geq 2n-k</math>, where <math>R_1</math> and <math>R_2</math> are integers and <math>k \leq n-\log(\sum_{i=0}^t{n \choose i})</math>. Here ''log'' denotes ''log<sub>2</sub>''.
 
====Slepian–Wolf coding example====
Using the same example as in the previous '''Asymmetric DSC vs. symmetric DSC''' section, this section presents the corresponding DSC schemes with coset codes and syndromes, covering the asymmetric case and the symmetric case. The Slepian–Wolf bound for the DSC design is given in the previous section.
 
=====Asymmetric case (<math>R_X=3</math>, <math>R_Y=7</math>)=====
In this case, the length of an input variable <math>\mathbf{y}</math> from source <math>Y</math> is 7 bits, so it can be sent losslessly with 7 bits, independent of any other bits. Based on the knowledge that <math>\mathbf{x}</math> and <math>\mathbf{y}</math> have Hamming distance at most one, for an input <math>\mathbf{x}</math> from source <math>X</math>, since the receiver already has <math>\mathbf{y}</math>, the only possible values of <math>\mathbf{x}</math> are those within Hamming distance 1 of <math>\mathbf{y}</math>. If we model the correlation between the two sources as a virtual channel with input <math>\mathbf{x}</math> and output <math>\mathbf{y}</math>, then once we have <math>\mathbf{y}</math>, all we need to successfully "decode" <math>\mathbf{x}</math> are "parity bits" with the appropriate error correction ability, treating the difference between <math>\mathbf{x}</math> and <math>\mathbf{y}</math> as a channel error. We can also model the problem with a coset partition: we want to find a channel code which partitions the space of input <math>X</math> into several cosets, where each coset has a unique syndrome associated with it. Given a coset and <math>\mathbf{y}</math>, there is only one <math>\mathbf{x}</math> that can be the input, given the correlation between the two sources.
 
In this example, we can use the <math>(7,4, 3)</math> binary [[Hamming Code]] <math>\mathbf{C}</math> with parity check matrix <math>\mathbf{H}</math>. For an input <math>\mathbf{x}</math> from source <math>X</math>, only the syndrome given by <math>\mathbf{s}=\mathbf{H}\mathbf{x}</math> is transmitted, which is 3 bits. With the received <math>\mathbf{y}</math> and <math>\mathbf{s}</math>, suppose there were two inputs <math>\mathbf{x_1}</math> and <math>\mathbf{x_2}</math> with the same syndrome <math>\mathbf{s}</math>. That means <math>\mathbf{H}\mathbf{x_1}=\mathbf{H}\mathbf{x_2}</math>, i.e. <math>\mathbf{H}(\mathbf{x_1}-\mathbf{x_2})=0</math>. Since the minimum Hamming weight of the <math>(7,4,3)</math> Hamming code is 3, <math>d_H(\mathbf{x_1}, \mathbf{x_2})\geq 3</math>. Therefore the input <math>\mathbf{x}</math> can be recovered, since <math>d_H(\mathbf{x}, \mathbf{y})\leq 1</math>.
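
A minimal sketch of this asymmetric scheme follows; the parity check matrix below is one standard choice for the <math>(7,4,3)</math> Hamming code (the article does not fix one), and the decoder locates the single differing bit by matching the residual syndrome against the columns of <math>\mathbf{H}</math>:

<syntaxhighlight lang="python">
import numpy as np

# One standard parity check matrix of the (7,4,3) Hamming code:
# its columns are the nonzero binary vectors of length 3.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(v):
    """3-bit syndrome of a length-7 word v over GF(2)."""
    return H @ v % 2

def decode_x(y, s):
    """Recover x from side information y and syndrome s = Hx, given d_H(x, y) <= 1."""
    # The difference e = x + y has weight <= 1 and syndrome He = s + Hy.
    se = (s + syndrome(y)) % 2
    e = np.zeros(7, dtype=int)
    if se.any():
        # A weight-1 difference sits at the column of H equal to se.
        e[next(j for j in range(7) if np.array_equal(H[:, j], se))] = 1
    return (y + e) % 2  # x = y + e

# The encoder sends only the 3-bit s; the decoder already holds the 7-bit y.
x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                 # y differs from x in one position
assert np.array_equal(decode_x(y, syndrome(x)), x)
</syntaxhighlight>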
 
Similarly, the bit distribution with <math>R_X=7</math>, <math>R_Y=3</math> can be achieved by reversing the roles of <math>X</math> and <math>Y</math>.
 
=====Symmetric case=====
In the symmetric case, we want equal bitrates for the two sources: 5 bits each, with separate encoders and a joint decoder. We still use linear codes for this system, as in the asymmetric case. The basic idea is similar, but now coset partitioning is needed for both sources, and for a pair of received syndromes (corresponding to a pair of cosets), only one pair of input variables is possible given the correlation between the two sources.
 
Suppose we have a pair of [[linear code]]s <math>\mathbf{C_1}</math> and <math>\mathbf{C_2}</math> and an encoder-decoder pair based on linear codes which can achieve symmetric coding. The encoder outputs are given by <math>\mathbf{s_1}=\mathbf{H_1}\mathbf{x}</math> and <math>\mathbf{s_2}=\mathbf{H_2}\mathbf{y}</math>. If two pairs of valid inputs <math>\mathbf{x_1}, \mathbf{y_1}</math> and <math>\mathbf{x_2}, \mathbf{y_2}</math> generated the same syndromes, i.e. <math>\mathbf{H_1}\mathbf{x_1} = \mathbf{H_1}\mathbf{x_2}</math> and <math>\mathbf{H_2}\mathbf{y_1} = \mathbf{H_2}\mathbf{y_2}</math>, we would get the following (<math>w(\cdot)</math> denotes Hamming weight):
 
<math>\mathbf{y_1}=\mathbf{x_1}+\mathbf{e_1}</math>, where <math>w(\mathbf{e_1}) \leq 1</math>
 
<math>\mathbf{y_2}=\mathbf{x_2}+\mathbf{e_2}</math>, where <math>w(\mathbf{e_2}) \leq 1</math>
 
Thus: <math>\mathbf{x_1}+\mathbf{x_2} \in \mathbf{C_1}</math>
 
<math>\mathbf{y_1}+\mathbf{y_2}=\mathbf{x_1}+\mathbf{x_2}+\mathbf{e_3} \in \mathbf{C_2}</math>
 
where <math>\mathbf{e_3}=\mathbf{e_2}+\mathbf{e_1}</math> and <math>w(\mathbf{e_3}) \leq 2</math>. This means that, as long as the minimum distance of the codes is at least <math>3</math> (so no nonzero codeword has weight <math>\leq 2</math>, forcing <math>\mathbf{e_3}=0</math>) and <math>\mathbf{C_1} \cap \mathbf{C_2} = \{\mathbf{0}\}</math>, we can achieve error-free decoding.
 
The two codes <math>\mathbf{C_1}</math> and <math>\mathbf{C_2}</math> can be constructed as subcodes of the <math>(7, 4, 3)</math> Hamming code and thus have minimum distance at least <math>3</math>. Given the [[generator matrix]] <math>\mathbf{G}</math> of the original Hamming code, the generator matrix <math>\mathbf{G_1}</math> for <math>\mathbf{C_1}</math> is constructed by taking any two rows of <math>\mathbf{G}</math>, and <math>\mathbf{G_2}</math> is constructed from the remaining two rows of <math>\mathbf{G}</math>. The corresponding <math>(5\times7)</math> [[Parity check matrix|parity-check matrix]] for each sub-code can be derived from its generator matrix and used to produce the 5 syndrome bits.
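
A small sketch checking this construction over GF(2); the systematic generator matrix used here is an assumed standard form of the <math>(7,4)</math> Hamming code:

<syntaxhighlight lang="python">
import numpy as np
from itertools import product

# Assumed systematic generator matrix G = [I | A] of a (7,4,3) Hamming code.
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])

G1, G2 = G[:2], G[2:]   # two rows generate C1, the remaining two generate C2

def codewords(gen):
    """All codewords of the code generated by gen, over GF(2)."""
    return [np.array(u) @ gen % 2 for u in product([0, 1], repeat=gen.shape[0])]

# Both subcodes inherit minimum distance >= 3 from the Hamming code ...
for sub in (G1, G2):
    assert min(int(c.sum()) for c in codewords(sub) if c.any()) >= 3

# ... and intersect only in the zero codeword, as the argument above requires.
in_C2 = {tuple(c) for c in codewords(G2)}
assert all(tuple(c) not in in_C2 for c in codewords(G1) if c.any())
</syntaxhighlight>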
 
===Wyner–Ziv coding – lossy distributed coding===
In general, a Wyner–Ziv coding scheme is obtained by adding a quantizer and a de-quantizer to the Slepian–Wolf coding scheme. Therefore, Wyner–Ziv coder design can focus on the quantizer and the corresponding reconstruction method. Several quantizer designs have been proposed, such as a nested lattice quantizer,<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1289429 "Nested quantization and Slepian–Wolf coding: a Wyner–Ziv coding paradigm for i.i.d. sources" by Z. Xiong, A. D. Liveris, S. Cheng and Z. Liu]</ref> a trellis-coded quantizer<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4784347 "Wyner–Ziv coding based on TCQ and LDPC codes" by Y. Yang, S. Cheng, Z. Xiong and W. Zhao]</ref> and the Lloyd quantization method.<ref>[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1193992 "Design of optimal quantizers for distributed source coding" by D. Rebollo-Monedero, R. Zhang and B. Girod]</ref>
 
===Large scale distributed quantization===
Unfortunately, the above approaches do not scale (in design or operational complexity requirements) to sensor networks of large sizes, the scenario where distributed compression is most helpful. If there are N sources transmitting at R bits each (with some distributed coding scheme), the number of possible reconstructions scales as <math> 2^{NR}</math>. Even for moderate values of N and R (say N = 10, R = 2), prior design schemes become impractical. An approach<ref>[http://www.scl.ece.ucsb.edu/pubs/pubs_D/d10_4.pdf "Towards large scale distributed source coding" by S. Ramaswamy, K. Viswanatha, A. Saxena and K. Rose]</ref> using ideas borrowed from fusion coding of correlated sources has been proposed in which design and operational complexity are traded against decoder performance. This has allowed distributed quantizer design for network sizes reaching 60 sources, with substantial gains over traditional approaches.
 
The central idea is a bit-subset selector which maintains a certain subset of the received bits (NR bits in the above example) for each source. Let <math> \mathcal{B}</math> be the set of all subsets of the NR bits, i.e.
 
:<math>\mathcal{B} = 2^{\{1,...,NR\}} </math>
 
Then, we define the bit-subset selector mapping to be
<br />
:<math> \mathcal{S} : \{1,...,N\} \rightarrow \mathcal{B} </math>
 
Note that each choice of the bit-subset selector imposes a storage requirement (C) that is exponential in the cardinality of the set of chosen bits.
<br />
:<math> C = \sum_{n=1}^N 2^{|\mathcal{S}(n)|} </math>
 
This allows a judicious choice of bits that minimizes the distortion, given the constraints on decoder storage. Additional limitations on the set of allowable subsets are still needed. The effective cost function to be minimized is a weighted sum of distortion and decoder storage:
<br />
:<math> J = D + \lambda C </math>
 
The system design is performed by iteratively (and incrementally) optimizing the encoders, decoder and bit-subset selector until convergence.
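
A toy sketch of this storage accounting follows; the selector, sizes and cost values below are illustrative assumptions rather than the cited design:

<syntaxhighlight lang="python">
# Toy instance: N = 4 sources at R = 2 bits each, so NR = 8 received bits;
# source n "owns" bit indices 2n and 2n+1 (an illustrative convention).
N, R = 4, 2

# Assumed bit-subset selector S: for each source, keep its own two bits
# plus one bit of the next source.
S = {n: {2 * n, 2 * n + 1, (2 * n + 2) % (N * R)} for n in range(N)}

def storage_cost(S):
    """Decoder storage C = sum over sources of 2^|S(n)| table entries."""
    return sum(2 ** len(bits) for bits in S.values())

def cost(distortion, S, lam):
    """Effective cost J = D + lambda * C that the design minimizes."""
    return distortion + lam * storage_cost(S)

print(storage_cost(S))      # 4 * 2^3 = 32
print(cost(1.5, S, 0.01))   # 1.5 + 0.32 = 1.82
</syntaxhighlight>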
 
==Non-asymmetric DSC==
{{Empty section|date=June 2010}}
 
==Non-asymmetric DSC for more than two sources==
The syndrome approach can still be used for more than two sources. Consider <math>a</math> binary sources of length <math>n</math>, <math> \mathbf{x}_1,\mathbf{x}_2,\cdots, \mathbf{x}_a \in \{0,1\}^n </math>. Let <math> \mathbf{H}_1, \mathbf{H}_2, \cdots, \mathbf{H}_a </math> be the corresponding coding matrices of sizes <math> m_1 \times n, m_2 \times n, \cdots, m_a \times n</math>. Then the input binary sources are compressed into <math> \mathbf{s}_1 = \mathbf{H}_1 \mathbf{x}_1,  \mathbf{s}_2 = \mathbf{H}_2 \mathbf{x}_2, \cdots, \mathbf{s}_a = \mathbf{H}_a \mathbf{x}_a </math>, a total of <math> m= m_1 + m_2 + \cdots + m_a </math> bits. Clearly, two source tuples cannot both be recovered if they share the same syndrome; conversely, if all source tuples of interest have different syndromes, they can be recovered losslessly.
 
A general theoretical result does not seem to exist. However, for a restricted kind of source, the so-called Hamming source,<ref name="HCMS">[http://arxiv.org/pdf/1001.4072 "Hamming Codes for Multiple Sources" by R. Ma and S. Cheng]</ref> in which at most one source differs from the rest and in at most one bit location, practical lossless DSC has been shown to exist in some cases. For the case of more than two sources, the number of source tuples in a Hamming source is <math>2^n (a n + 1)</math>. Therefore, the packing bound <math>2^m \ge 2^n (a n + 1)</math> obviously has to be satisfied. When the packing bound is satisfied with equality, we may call such a code perfect (analogous to a perfect code in error-correcting codes).<ref name="HCMS" />
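
The bound can be met with equality whenever <math>an+1</math> is a power of two; for example,

: <math>a=3,\ n=5:\quad 2^{5}(3\cdot 5+1)=2^{5}\cdot 2^{4}=2^{9},\qquad a=3,\ n=21:\quad 2^{21}(3\cdot 21+1)=2^{21}\cdot 2^{6}=2^{27},</math>

matching <math>m=9</math> and <math>m=27</math> respectively in the cases discussed below.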
 
The simplest set of <math> a, n, m</math> satisfying the packing bound with equality is <math> a=3, n=5, m=9 </math>. However, it turns out that such a syndrome code does not exist.<ref>[http://tulsagrad.ou.edu/samuel_cheng/papers/dcc10.pdf "The Non-existence of Length-5 Slepian–Wolf Codes of Three Sources" by S. Cheng and R. Ma]</ref> The simplest (perfect) syndrome code with more than two sources has <math> n = 21 </math> and <math> m = 27 </math>. Let
 
<math>
\mathbf{Q}_1 =
\begin{pmatrix}
1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \\
0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \\
0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \\
0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \\
0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \\
0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 1
\end{pmatrix},
</math>
<math>
\mathbf{Q}_2=   
\begin{pmatrix}
0 \; 0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \\
1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \\
0 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 1 \\
1 \; 0 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \\
0 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \\
0 \; 0 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0
\end{pmatrix},
</math>
<math>
\mathbf{Q}_3=   
\begin{pmatrix}
1 \; 0 \; 0 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1  \\
1 \; 1 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1  \\
0 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0  \\
1 \; 0 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1  \\
0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 1 \; 0 \; 0  \\
0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1
\end{pmatrix},
</math>
<math>
\mathbf{G} = [ \mathbf{0} | \mathbf{I}_9]
</math>,
and
<math>
\mathbf{G}=\begin{pmatrix}
\mathbf{G}_1 \\ \mathbf{G}_2 \\ \mathbf{G}_3
\end{pmatrix}
</math>
such that <math>
\mathbf{G}_1, \mathbf{G}_2, \mathbf{G}_3
</math>
are any partition of <math> \mathbf{G} </math>.
 
<math>
\mathbf{H}_1= \begin{pmatrix}
\mathbf{G}_1 \\ \mathbf{Q}_1
\end{pmatrix},
\mathbf{H}_2= \begin{pmatrix}
\mathbf{G}_2 \\ \mathbf{Q}_2
\end{pmatrix},
\mathbf{H}_3= \begin{pmatrix}
\mathbf{G}_3 \\ \mathbf{Q}_3
\end{pmatrix}
</math>
can compress a Hamming source (i.e., source tuples that differ in no more than one bit will all have different syndromes).<ref name="HCMS" />
For example, for the symmetric case, a possible set of coding matrices is
<math>
\mathbf{H}_1 =
\begin{pmatrix}
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \\
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \\
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \\
1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \\
0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \\
0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \\
0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \\
0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \\
0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 1
\end{pmatrix},
</math>
<math>
\mathbf{H}_2=   
\begin{pmatrix}
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \\
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \\
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \\
0 \; 0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \\
1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \\
0 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 1 \\
1 \; 0 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \\
0 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \\
0 \; 0 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0
\end{pmatrix},
</math>
<math>
\mathbf{H}_3=   
\begin{pmatrix}
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \\
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \\
0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \\
1 \; 0 \; 0 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1  \\
1 \; 1 \; 0 \; 0 \; 1 \; 0 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1  \\
0 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0  \\
1 \; 0 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1  \\
0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 0 \; 0 \; 1 \; 1 \; 0 \; 1 \; 0 \; 0  \\
0 \; 0 \; 1 \; 0 \; 1 \; 1 \; 0 \; 1 \; 1 \; 1 \; 0 \; 0 \; 1 \; 1 \; 1 \; 1 \; 0 \; 0 \; 0 \; 1 \; 1
\end{pmatrix}.
</math>
 
==See also==
*[[Linear code]]
*[[Syndrome decoding]]
*[[Low-density parity-check code]]
*[[Turbo Code]]
 
==References==
{{Reflist}}
 
{{DEFAULTSORT:Distributed Source Coding}}
[[Category:Information theory]]
[[Category:Coding theory]]
[[Category:Wireless sensor network]]
[[Category:Data transmission]]
