{{Expert-subject|Mathematics|date=December 2009}}
'''Sequential decoding''' is a limited-memory technique for decoding [[tree codes]]. Sequential decoding is mainly used as an approximate decoding algorithm for long constraint-length [[convolutional code]]s. This approach may not be as accurate as the [[Viterbi algorithm]] but can save a substantial amount of computer memory.


Sequential decoding explores the tree code in such a way to try to minimise the computational cost and memory requirements to store the tree.
 
There is a range of sequential decoding approaches based on choice of metric and algorithm. Metrics include:
*Fano metric
*Zigangirov metric
*Gallager metric
Algorithms include:
*Stack algorithm
*Fano algorithm
*Creeper algorithm
 
==Fano metric==
 
Given a partially explored tree (represented by the set of nodes at the current limit of exploration), we would like to know the best node from which to explore further. The Fano metric (named after [[Robert Fano]]) ranks these frontier nodes so that the most promising one can be selected for further exploration. This metric is optimal given no other constraints (e.g. memory).
 
For a [[binary symmetric channel]] (with error probability <math>p</math>) the Fano metric can be derived via [[Bayes theorem]].  We are interested in following the most likely path <math>P_i</math> given an explored state of the tree <math>X</math> and a received sequence <math>{\mathbf r}</math>. Using the language of [[probability]] and [[Bayes theorem]] we want to choose the maximum over <math>i</math> of:
:<math>\Pr(P_i|X,{\mathbf r}) \propto \Pr({\mathbf r}|P_i,X)\Pr(P_i|X)</math>
 
We now introduce the following notation:
*<math>N</math> to represent the maximum length of transmission in branches
*<math>b</math> to represent the number of bits on a branch of the code (the denominator of the [[code rate]], <math>R</math>).
*<math>d_i</math> to represent the number of bit errors on path <math>P_i</math> (the [[Hamming distance]] between the branch labels and the received sequence)
*<math>n_i</math> to be the length of <math>P_i</math> in branches.
 
We express the [[likelihood]] <math>\Pr({\mathbf r}|P_i,X)</math> as <math>p^{d_i} (1-p)^{n_ib-d_i} 2^{-(N-n_i)b}</math> (by using the binary symmetric channel likelihood for the first <math>n_ib</math> bits followed by a uniform prior over the remaining bits).
 
We express the [[prior probability|prior]] <math>\Pr(P_i|X)</math> in terms of the number of branch choices one has made, <math>n_i</math>, and the number of branches from each node, <math>2^{Rb}</math>. 
 
Therefore (dropping the factor <math>2^{-Nb}</math>, which is the same for every path):
:<math>
\begin{align}
\Pr(P_i|X,{\mathbf r}) &\propto p^{d_i} (1-p)^{n_ib-d_i} 2^{-(N-n_i)b} 2^{-n_iRb} \\
&\propto p^{d_i} (1-p)^{n_ib-d_i} 2^{n_ib} 2^{-n_iRb}
\end{align}
</math>
 
We can equivalently maximise the log of this probability, i.e.
:<math>
\begin{align}
&d_i \log_2 p + (n_ib-d_i) \log_2 (1-p) +n_ib-n_iRb
\\= &d_i(\log_2 p +1-R) + (n_ib-d_i)(\log_2 (1-p) + 1-R)
\end{align}
</math>
 
This last expression is the Fano metric. The important point to see is that we have two terms here: one based on the number of wrong bits and one based on the number of right bits. We can therefore update the Fano metric simply by adding <math> \log_2 p +1-R</math> for each non-matching bit and <math>\log_2 (1-p) + 1-R</math> for each matching bit.
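The per-bit update above can be sketched in Python (a minimal illustration; the function name and the bit-list representation are our own choices, not from the source):

```python
from math import log2

def fano_metric(received_bits, path_bits, p, R):
    """Fano metric of a candidate path over a binary symmetric channel:
    log2(1-p) + 1 - R per matching bit, log2(p) + 1 - R per mismatch."""
    match = log2(1 - p) + 1 - R      # reward for a matching bit
    mismatch = log2(p) + 1 - R       # penalty for a non-matching bit
    return sum(match if r == c else mismatch
               for r, c in zip(received_bits, path_bits))

# Rate R = 1/2, crossover probability p = 0.05, one bit error in four:
m = fano_metric([1, 0, 1, 1], [1, 0, 0, 1], p=0.05, R=0.5)
```

Note that for small <math>p</math> a single mismatch outweighs several matches, which is what drives the decoder to back up on noisy branches.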
 
==Computational cutoff rate==
 
For sequential decoding to be a good choice of decoding algorithm, the number of states explored must remain small (otherwise an algorithm which deliberately explores all states, e.g. the [[Viterbi algorithm]], may be more suitable). For a particular noise level there is a maximum coding rate <math>R_0</math>, called the computational cutoff rate, below which the expected amount of backtracking remains finite. For the binary symmetric channel:
:<math>R_0 = 1-\log_2(1+2\sqrt{p(1-p)})</math>
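This formula is a one-liner (function name is illustrative):

```python
from math import log2, sqrt

def cutoff_rate(p):
    """Computational cutoff rate R0 of a binary symmetric channel
    with crossover probability p."""
    return 1 - log2(1 + 2 * sqrt(p * (1 - p)))
```

For instance, a rate-1/2 code is below the cutoff at <math>p = 0.01</math> (where <math>R_0 \approx 0.74</math>) but above it at <math>p = 0.1</math> (where <math>R_0 \approx 0.32</math>), so sequential decoding is only attractive in the former regime.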
 
==Algorithms==
===Stack algorithm===
 
The simplest algorithm to describe is the "stack algorithm", in which the best <math>N</math> paths found so far are stored. Sequential decoding may introduce an additional error above Viterbi decoding when the correct path has <math>N</math> or more higher-scoring paths above it; at that point the correct path drops off the stack and is no longer considered.
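A minimal sketch of the stack algorithm in Python (the names <code>stack_decode</code> and <code>branch_output</code>, the toy encoder, and the pruning strategy are our own assumptions, not from the source):

```python
import heapq
from math import log2

def stack_decode(received, branch_output, p, R, stack_size):
    """Stack-algorithm sketch: repeatedly extend the best-metric path.

    `received` is a list of branch labels (lists of bits);
    `branch_output(path, bit)` encodes the code tree by returning the
    branch label produced when `bit` is appended to `path`.
    """
    match = log2(1 - p) + 1 - R        # Fano metric for a matching bit
    mismatch = log2(p) + 1 - R         # ... and for a non-matching bit
    n_branches = len(received)
    heap = [(-0.0, [])]                # max-heap via negated metrics
    while True:
        neg_metric, path = heapq.heappop(heap)
        if len(path) == n_branches:
            return path                # best path reached a leaf: done
        for bit in (0, 1):
            label = branch_output(path, bit)
            delta = sum(match if r == c else mismatch
                        for r, c in zip(received[len(path)], label))
            heapq.heappush(heap, (neg_metric - delta, path + [bit]))
        # Keep only the best `stack_size` paths, as on a bounded stack.
        heap = heapq.nsmallest(stack_size, heap)
        heapq.heapify(heap)

# Toy rate-1/2 encoder with one bit of memory: (bit, bit XOR previous bit).
enc = lambda path, bit: [bit, bit ^ (path[-1] if path else 0)]
# Input [1, 0, 1] encodes to [[1,1], [0,1], [1,1]]; flip the first bit:
decoded = stack_decode([[0, 1], [0, 1], [1, 1]], enc, p=0.05, R=0.5, stack_size=8)
```

The heap plays the role of the stack: the entry with the largest Fano metric is always extended first, and the <code>stack_size</code> bound is what can push the correct path off when too many competitors score above it.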
 
===Fano algorithm===
 
The famous Fano algorithm (named after [[Robert Fano]]) has a very low memory requirement and hence is suited to hardware implementations. This algorithm explores backward and forward from a single point on the tree.
 
# The Fano algorithm is a sequential decoding algorithm that does not require a stack.
# The Fano algorithm can only operate over a code tree because it cannot examine path merging.
# At each decoding stage, the Fano algorithm retains the information regarding three paths: the current path, its immediate predecessor path, and one of its successor paths.
# Based on this information, the Fano algorithm can move from the current path to either its immediate predecessor path or the selected successor path; hence, no stack is required for queuing all examined paths.
# The movement of the Fano algorithm is guided by a dynamic threshold <math>T</math> that is an integer multiple of a fixed step size <math>\Delta</math>.
# Only a path whose path metric is no less than <math>T</math> can be visited next. The codeword search continues to move forward along a code path as long as the Fano metric along that path remains non-decreasing.
# Once all the successor path metrics are smaller than <math>T</math>, the algorithm moves backward to the predecessor path if the predecessor path metric is no less than <math>T</math>; threshold examination is then performed on another successor path of this revisited predecessor.
# If the predecessor path metric is also less than <math>T</math>, the threshold is lowered by one step so that the algorithm is not trapped on the current path.
# If a path is revisited, the dynamic threshold at that visit is always lower than it was at the previous visit. This guarantees that the algorithm cannot loop, and that it ultimately reaches a terminal node of the code tree and stops.
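The forward/backward moves and threshold updates above can be sketched in Python. This is a simplified illustration, not a hardware-grade decoder: the function names, the <code>branch_output</code> callback, the toy encoder, and the tie-breaking order are all our own assumptions.

```python
import math

def fano_decode(received, branch_output, p, R, step):
    """Simplified Fano-algorithm sketch over a binary code tree.

    `received` is a list of branch labels (lists of bits);
    `branch_output(path, bit)` gives the branch label for appending
    `bit` to `path`.  Returns the decoded input bits.
    """
    match = math.log2(1 - p) + 1 - R       # metric for a matching bit
    mismatch = math.log2(p) + 1 - R        # metric for a non-matching bit
    L = len(received)

    def successors(path):
        """Successor branch metrics at the current node, best first."""
        depth = len(path)
        out = []
        for bit in (0, 1):
            label = branch_output(path, bit)
            m = sum(match if r == c else mismatch
                    for r, c in zip(received[depth], label))
            out.append((m, bit))
        out.sort(reverse=True)
        return out

    path = []          # input bits chosen so far
    ranks = []         # successor rank taken at each depth
    M = [0.0]          # M[d] = path metric at depth d
    T = 0.0            # dynamic threshold
    rank = 0           # successor rank to try next at the current node
    while len(path) < L:
        succ = successors(path)
        if rank < len(succ) and M[-1] + succ[rank][0] >= T:
            # Forward move; tighten T only on a first visit.
            first_visit = M[-1] < T + step
            M.append(M[-1] + succ[rank][0])
            path.append(succ[rank][1])
            ranks.append(rank)
            rank = 0
            if first_visit:
                T = step * math.floor(M[-1] / step)
        elif path and M[-2] >= T:
            # Backward move; try the next-best successor afterwards.
            path.pop()
            M.pop()
            rank = ranks.pop() + 1
        else:
            # Trapped: lower the threshold and look forward again.
            T -= step
            rank = 0
    return path

# Toy rate-1/2 encoder with one bit of memory: (bit, bit XOR previous bit).
enc = lambda path, bit: [bit, bit ^ (path[-1] if path else 0)]
decoded = fano_decode([[0, 1], [0, 1], [1, 1]], enc, p=0.05, R=0.5, step=1.0)
```

Only the current path, its metric history, and the branch ranks taken are stored, in contrast with the stack algorithm's queue of competing paths.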
 
==References==
 
*[[John Wozencraft]] and B. Reiffen, ''Sequential decoding'', ISBN 0-262-23006-2
*Rolf Johannesson and Kamil Sh. Zigangirov, ''Fundamentals of convolutional coding'' (chapter 6), ISBN 0-470-27683-5
 
==External links==
*"[http://demonstrations.wolfram.com/CorrectionTrees/ Correction trees]" - a simulator of the correction process that uses a priority queue to choose the maximum-metric node (there called the weight)
 
{{DEFAULTSORT:Sequential Decoding}}
[[Category:Error detection and correction]]

Revision as of 22:57, 26 December 2012
