# Guruswami–Sudan list decoding algorithm

In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. Using a unique decoder, one can correct up to a ${\displaystyle \delta /2}$ fraction of errors, where ${\displaystyle \delta }$ is the relative distance of the code. When the error rate exceeds ${\displaystyle \delta /2}$, a unique decoder cannot output the correct result. List decoding overcomes this issue: a list decoder can correct more than a ${\displaystyle \delta /2}$ fraction of errors.

There are many efficient algorithms that can perform list decoding. Sudan's list decoding algorithm for Reed–Solomon (RS) codes, which can correct up to a ${\displaystyle 1-{\sqrt {2R}}}$ fraction of errors, is given first. The more efficient Guruswami–Sudan list decoding algorithm, which can correct up to a ${\displaystyle 1-{\sqrt {R}}}$ fraction of errors, is discussed afterwards.

A plot of the rate ${\displaystyle R}$ against the correctable error fraction ${\displaystyle \delta }$ compares these algorithms.

## Algorithm 1 (Sudan's list decoding algorithm)

### Problem statement

Input: ${\displaystyle n,d,t}$; ${\displaystyle n}$ pairs {${\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})}$}

Output: A list of all functions ${\displaystyle f:F\mapsto F}$ satisfying

${\displaystyle f(x)}$ is a polynomial in ${\displaystyle x}$ of degree at most ${\displaystyle d}$ with ${\displaystyle |\{i\mid f(x_{i})=y_{i}\}|\geq t}$ -- (1)

To understand Sudan's algorithm better, one may first want to know the Berlekamp–Welch algorithm, which can be considered the earlier, fundamental version of the algorithms for list decoding RS codes. Welch and Berlekamp initially came up with an algorithm that can solve the problem in polynomial time with the best threshold on ${\displaystyle t}$ being ${\displaystyle t\geq (n+d+1)/2}$. The mechanism of Sudan's algorithm is almost the same as that of the Berlekamp–Welch algorithm, except that in step 1 one computes a bivariate polynomial of bounded ${\displaystyle (1,d)}$-weighted degree. Sudan's list decoding algorithm for Reed–Solomon codes, an improvement on the Berlekamp–Welch algorithm, can solve the problem with ${\displaystyle t={\sqrt {2nd}}}$. This bound is better than the unique decoding bound ${\displaystyle (1-R)/2}$ for ${\displaystyle R<0.07}$.

### Algorithm

Definition 1 (weighted degree)

For weights ${\displaystyle w_{x},w_{y}\in \mathbb {Z} ^{+}}$, the ${\displaystyle (w_{x},w_{y})}$-weighted degree of a monomial ${\displaystyle q_{ij}x^{i}y^{j}}$ is ${\displaystyle iw_{x}+jw_{y}}$. The ${\displaystyle (w_{x},w_{y})}$-weighted degree of a polynomial ${\displaystyle Q(x,y)=\sum _{i,j}q_{ij}x^{i}y^{j}}$ is the maximum, over the monomials with non-zero coefficients, of the ${\displaystyle (w_{x},w_{y})}$-weighted degrees of those monomials.

E.g.: ${\displaystyle 3xy}$ is a monomial in variables ${\displaystyle x,y}$ with a coefficient of 3; its ${\displaystyle (1,3)}$-weighted degree is ${\displaystyle 1\cdot 1+1\cdot 3=4}$.
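From the definition, the weighted degree is easy to compute for a polynomial given as a coefficient table (a small sketch; the dictionary representation is an assumption of this example):

```python
# (wx, wy)-weighted degree of a bivariate polynomial represented as a
# dict {(i, j): q_ij} for the monomials q_ij * x^i * y^j.
def weighted_degree(poly, wx, wy):
    return max(i * wx + j * wy for (i, j), c in poly.items() if c != 0)

# Q(x, y) = 3*x*y + x^3 + y
Q = {(1, 1): 3, (3, 0): 1, (0, 1): 1}
print(weighted_degree(Q, 1, 1))   # ordinary total degree: 3 (from x^3)
print(weighted_degree(Q, 1, 3))   # (1,3)-weighted degree: 4 (from 3*x*y)
```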

Algorithm:

Inputs: ${\displaystyle n,d,t}$; {${\displaystyle (x_{1},y_{1})\cdots (x_{n},y_{n})}$} /* Parameters l,m to be set later. */

Step 1: Find any function ${\displaystyle Q:F^{2}\mapsto F}$ satisfying: ${\displaystyle Q(x,y)}$ has ${\displaystyle (1,d)}$-weighted degree at most ${\displaystyle m+ld}$; for every ${\displaystyle i\in [n]}$, ${\displaystyle Q(x_{i},y_{i})=0}$; and ${\displaystyle Q}$ is not identically zero. -- (2)

Step 2. Factor the polynomial Q into irreducible factors.

Step 3. Output all the polynomials ${\displaystyle f}$ such that ${\displaystyle (y-f(x))}$ is a factor of ${\displaystyle Q}$ and ${\displaystyle f(x_{i})=y_{i}}$ for at least ${\displaystyle t}$ values of ${\displaystyle i\in [n]}$.
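To make the three steps concrete, here is a toy sketch over a small prime field. All parameter choices and names are illustrative assumptions, and the factorization step is brute-forced at this tiny size; a real decoder would factor ${\displaystyle Q}$ with a bivariate polynomial factorization algorithm.

```python
# Toy sketch of Sudan's list decoder over GF(p) (illustrative only).
p, d, t = 7, 1, 5     # field size, degree bound on f, required agreement
D = 3                 # (1,d)-weighted degree bound on Q (plays m + l*d)

xs = list(range(p))                     # evaluation points x_1..x_n (n = 7)
ys = [(2 * x + 3) % p for x in xs]      # codeword for f(x) = 2x + 3
ys[0], ys[1] = (ys[0] + 1) % p, (ys[1] + 3) % p   # introduce two errors

# Step 1 (interpolation): monomials x^k y^j with k + j*d <= D, one linear
# constraint Q(x_i, y_i) = 0 per point, solved over GF(p).
monos = [(k, j) for j in range(D // d + 1) for k in range(D - j * d + 1)]
A = [[pow(x, k, p) * pow(y, j, p) % p for (k, j) in monos]
     for x, y in zip(xs, ys)]

def nullspace_vector(rows, ncols, p):
    """Gaussian elimination mod p; return one non-zero kernel vector."""
    rows = [r[:] for r in rows]
    pivot_cols, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] % p), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], p - 2, p)          # inverse via Fermat
        rows[r] = [v * inv % p for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] % p:
                f = rows[i][c]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    free = next(c for c in range(ncols) if c not in pivot_cols)
    q = [0] * ncols
    q[free] = 1                                  # one free variable set to 1
    for row, c in zip(rows, pivot_cols):
        q[c] = (-row[free]) % p
    return q

q = nullspace_vector(A, len(monos), p)
Q = lambda x, y: sum(c * pow(x, k, p) * pow(y, j, p)
                     for c, (k, j) in zip(q, monos)) % p

# Steps 2-3 (factorization + filtering), brute-forced at this toy size:
# keep every f(x) = a*x + b with Q(x, f(x)) identically zero and
# agreement >= t with the received word.
out = [(a, b) for a in range(p) for b in range(p)
       if all(Q(x, (a * x + b) % p) == 0 for x in range(p))
       and sum((a * x + b) % p == y for x, y in zip(xs, ys)) >= t]
print(out)   # contains (2, 3), the transmitted message
```

Since ${\displaystyle t=5>3=D}$, Claim 3 below guarantees that the transmitted polynomial appears in the output list despite the two errors.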

### Analysis

One has to prove that the above algorithm runs in polynomial time and outputs the correct result. That can be done by proving the following set of claims.

Claim 1:

If a function ${\displaystyle Q:F^{2}\mapsto F}$ satisfying (2) exists, then one can find it in polynomial time.

Proof:

Note that a bivariate polynomial ${\displaystyle Q(x,y)}$ of ${\displaystyle (1,d)}$-weighted degree at most ${\displaystyle m+ld}$ can be represented as ${\displaystyle Q(x,y)=\sum _{j=0}^{l}\sum _{k=0}^{m+(l-j)d}q_{kj}x^{k}y^{j}}$. One then has to find coefficients ${\displaystyle q_{kj}}$ satisfying the constraints ${\displaystyle \sum _{j=0}^{l}\sum _{k=0}^{m+(l-j)d}q_{kj}x_{i}^{k}y_{i}^{j}=0}$ for every ${\displaystyle i\in [n]}$. This is a set of linear equations in the unknowns {${\displaystyle q_{kj}}$}, and a solution can be found using Gaussian elimination in polynomial time.

Claim 2:

If ${\displaystyle (m+1)(l+1)+d{\binom {l+1}{2}}>n}$, then there exists a function ${\displaystyle Q(x,y)}$ satisfying (2).

Proof:

To ensure that a non-zero solution exists, the number of unknowns in ${\displaystyle Q(x,y)}$ should be greater than the number of constraints. Let the maximum degree ${\displaystyle \deg _{x}(Q)}$ of ${\displaystyle x}$ in ${\displaystyle Q(x,y)}$ be ${\displaystyle m}$ and the maximum degree ${\displaystyle \deg _{y}(Q)}$ of ${\displaystyle y}$ be ${\displaystyle l}$. Then the ${\displaystyle (1,d)}$-weighted degree of ${\displaystyle Q(x,y)}$ is at most ${\displaystyle m+ld}$. Note that the linear system is homogeneous: the setting ${\displaystyle q_{kj}=0}$ satisfies all linear constraints, but it does not satisfy (2), since that solution is identically zero. To guarantee a non-zero solution, the number of unknowns in the linear system, which by the representation above equals ${\displaystyle (m+1)(l+1)+d{\binom {l+1}{2}}}$, must exceed the number of constraints ${\displaystyle n}$. When this value is greater than ${\displaystyle n}$, there are more variables than constraints and therefore a non-zero solution exists.
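The counting argument can be checked directly: enumerating the unknowns in the representation from Claim 1 reproduces the closed form above (a quick sketch; the parameter triples are arbitrary examples):

```python
from math import comb

# Count the unknown coefficients q_kj of Q(x, y): the y-degree j runs up
# to l, and for each j the x-degree k runs up to m + (l - j)*d.
def num_unknowns(m, l, d):
    return sum(m + (l - j) * d + 1 for j in range(l + 1))

# The closed form from Claim 2.
def closed_form(m, l, d):
    return (m + 1) * (l + 1) + d * comb(l + 1, 2)

for m, l, d in [(2, 3, 1), (5, 2, 4), (0, 4, 3)]:
    assert num_unknowns(m, l, d) == closed_form(m, l, d)
print(num_unknowns(2, 3, 1))   # 18
```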

Claim 3:

If ${\displaystyle Q(x,y)}$ is a function satisfying (2), ${\displaystyle f(x)}$ is a function satisfying (1), and ${\displaystyle t>m+ld}$, then ${\displaystyle (y-f(x))}$ divides ${\displaystyle Q(x,y)}$.

Proof:

Consider the function ${\displaystyle p(x)=Q(x,f(x))}$. This is a polynomial in ${\displaystyle x}$, and one can argue that it has degree at most ${\displaystyle m+ld}$. Consider any monomial ${\displaystyle q_{kj}x^{k}y^{j}}$ of ${\displaystyle Q(x,y)}$. Since ${\displaystyle Q}$ has ${\displaystyle (1,d)}$-weighted degree at most ${\displaystyle m+ld}$, one can say that ${\displaystyle k+jd\leq m+ld}$. Thus the term ${\displaystyle q_{kj}x^{k}f(x)^{j}}$ is a polynomial in ${\displaystyle x}$ of degree at most ${\displaystyle k+jd\leq m+ld}$. Thus ${\displaystyle p(x)}$ has degree at most ${\displaystyle m+ld}$.

Next, one can argue that ${\displaystyle p(x)}$ is identically zero. Since ${\displaystyle Q(x_{i},y_{i})=0}$ for every ${\displaystyle i}$, we have ${\displaystyle p(x_{i})=Q(x_{i},f(x_{i}))=0}$ whenever ${\displaystyle y_{i}=f(x_{i})}$, which happens for at least ${\displaystyle t}$ values of ${\displaystyle i}$. Hence ${\displaystyle p(x_{i})}$ is zero for strictly more than ${\displaystyle m+ld}$ points; ${\displaystyle p}$ has more zeroes than its degree and is therefore identically zero, implying ${\displaystyle Q(x,f(x))\equiv 0}$, so ${\displaystyle (y-f(x))}$ divides ${\displaystyle Q(x,y)}$.

Finally, one has to find the optimal values for ${\displaystyle m}$ and ${\displaystyle l}$. The two conditions in play are ${\displaystyle t>m+ld}$ and ${\displaystyle (m+1)(l+1)+d{\binom {l+1}{2}}>n}$. For a given value of ${\displaystyle l}$, one can compute the smallest ${\displaystyle m}$ for which the second condition holds; rearranging that condition, one can take ${\displaystyle m}$ to be at most

${\displaystyle {\frac {n+1-d{\binom {l+1}{2}}}{l+1}}-1}$

Substituting this value into the first condition, ${\displaystyle t}$ has to be at least

${\displaystyle {\frac {n+1}{l+1}}+{\frac {dl}{2}}}$

Next, minimize this expression over the unknown parameter ${\displaystyle l}$ by taking the derivative and equating it to zero, which gives

${\displaystyle l={\sqrt {\frac {2(n+1)}{d}}}-1}$

Substituting this value of ${\displaystyle l}$ back into ${\displaystyle m}$ and ${\displaystyle t}$:

${\displaystyle m={\sqrt {\frac {(n+1)d}{2}}}-{\sqrt {\frac {(n+1)d}{2}}}+{\frac {d}{2}}-1={\frac {d}{2}}-1}$

${\displaystyle t>{\sqrt {2(n+1)d}}-{\frac {d}{2}}-1}$
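The parameter choice can be sanity-checked numerically (a sketch; ${\displaystyle n}$ and ${\displaystyle d}$ are arbitrary example values, and the real-valued optimum for ${\displaystyle l}$ is rounded to an integer):

```python
from math import ceil, comb, sqrt

n, d = 1000, 20                                        # example parameters

l = round(sqrt(2 * (n + 1) / d) - 1)                   # optimized y-degree bound
m = ceil((n + 1 - d * comb(l + 1, 2)) / (l + 1)) - 1   # smallest feasible m
t = m + l * d + 1                                      # smallest t with t > m + l*d

assert (m + 1) * (l + 1) + d * comb(l + 1, 2) > n      # Claim 2: Q exists
assert t > m + l * d                                   # Claim 3 applies
print(l, m, t, round(sqrt(2 * n * d)), (n + d) // 2)
```

For these values the computed threshold ${\displaystyle t}$ comes out close to ${\displaystyle {\sqrt {2nd}}}$ and far below the unique-decoding requirement of roughly ${\displaystyle (n+d)/2}$.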

## Algorithm 2 (Guruswami–Sudan list decoding algorithm)

### Definition

Consider an ${\displaystyle (n,k)}$ Reed–Solomon code over the finite field ${\displaystyle \mathbb {F} =GF(q)}$ with evaluation set ${\displaystyle (\alpha _{1},\alpha _{2},\ldots ,\alpha _{n})}$ and a positive integer ${\displaystyle r}$. The Guruswami–Sudan list decoder accepts a vector ${\displaystyle \beta =(\beta _{1},\beta _{2},\ldots ,\beta _{n})\in \mathbb {F} ^{n}}$ as input and outputs a list of polynomials of degree ${\displaystyle \leq k}$ which are in one-to-one correspondence with codewords.

The idea is to add more restrictions on the bi-variate polynomial ${\displaystyle Q(x,y)}$: requiring zeros of higher multiplicity increases the number of linear constraints, but each agreement point then contributes correspondingly more roots.

### Multiplicity

A polynomial ${\displaystyle Q(x,y)}$ has a zero of multiplicity ${\displaystyle r}$ at the point ${\displaystyle (0,0)}$ if it contains no monomial of total degree less than ${\displaystyle r}$; it has a zero of multiplicity ${\displaystyle r}$ at a general point ${\displaystyle (\alpha ,\beta )}$ if the shifted polynomial ${\displaystyle Q(x+\alpha ,y+\beta )}$ has a zero of multiplicity ${\displaystyle r}$ at ${\displaystyle (0,0)}$.

For example: Let ${\displaystyle Q(x,y)=y-4x^{2}}$. Its monomial of lowest total degree, ${\displaystyle y}$, has degree 1. Hence, ${\displaystyle Q(x,y)}$ has a zero of multiplicity 1 at (0,0).

By contrast, let ${\displaystyle Q(x,y)=y^{2}-4x^{2}}$. Every monomial has total degree 2. Hence, ${\displaystyle Q(x,y)}$ has a zero of multiplicity 2 at (0,0).
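The definition can be turned into a small checker: shift ${\displaystyle Q}$ to the origin with the binomial theorem and read off the lowest total degree among the surviving monomials (a sketch; the dictionary representation over the integers is an assumption of this example):

```python
from math import comb

# Q is represented as a dict {(i, j): coeff} for coeff * x^i * y^j.
def shift(Q, alpha, beta):
    """Coefficients of Q(x + alpha, y + beta), via binomial expansion."""
    out = {}
    for (i, j), c in Q.items():
        for u in range(i + 1):
            for v in range(j + 1):
                term = c * comb(i, u) * comb(j, v) * alpha**(i - u) * beta**(j - v)
                out[(u, v)] = out.get((u, v), 0) + term
    return {uv: c for uv, c in out.items() if c != 0}

def multiplicity(Q, alpha, beta):
    """Multiplicity of the zero of Q at (alpha, beta) (0 if not a zero)."""
    return min(u + v for (u, v) in shift(Q, alpha, beta))

print(multiplicity({(2, 0): -4, (0, 1): 1}, 0, 0))  # y - 4x^2   -> 1
print(multiplicity({(2, 0): -4, (0, 2): 1}, 0, 0))  # y^2 - 4x^2 -> 2
print(multiplicity({(2, 0): -4, (0, 1): 1}, 1, 4))  # (1,4) lies on y = 4x^2 -> 1
```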

### Algorithm

Let the transmitted codeword be ${\displaystyle (f(\alpha _{1}),f(\alpha _{2}),\ldots ,f(\alpha _{n}))}$, let ${\displaystyle (\alpha _{1},\alpha _{2},\ldots ,\alpha _{n})}$ be the support set of the transmitted codeword, and let the received word be ${\displaystyle (\beta _{1},\beta _{2},\ldots ,\beta _{n})}$.

The algorithm is as follows:

Interpolation step

For a received vector ${\displaystyle (\beta _{1},\beta _{2},\ldots ,\beta _{n})}$, construct a non-zero bi-variate polynomial ${\displaystyle Q(x,y)}$ with ${\displaystyle (1,k)}$-weighted degree at most ${\displaystyle d}$ such that ${\displaystyle Q}$ has a zero of multiplicity ${\displaystyle r}$ at each of the points ${\displaystyle (\alpha _{i},\beta _{i})}$, ${\displaystyle 1\leq i\leq n}$; in particular,

${\displaystyle Q(\alpha _{i},\beta _{i})=0\,}$ with multiplicity ${\displaystyle r}$.

Factorization step

Find all the factors of ${\displaystyle Q(x,y)}$ of the form ${\displaystyle y-p(x)}$, where ${\displaystyle p(x)}$ is a polynomial of degree at most ${\displaystyle k}$. Recall that polynomials of degree ${\displaystyle \leq k}$ are in one-to-one correspondence with codewords; hence, this step outputs the list of codewords.

### Analysis

#### Interpolation step

Lemma: The interpolation step imposes ${\displaystyle {\binom {r+1}{2}}}$ constraints per point ${\displaystyle (\alpha _{i},\beta _{i})}$ on the coefficients ${\displaystyle a_{i,j}}$ of ${\displaystyle Q(x,y)}$.

Expansion of the shifted polynomial:

${\displaystyle Q(x+\alpha ,y+\beta )=\sum _{i,j}a_{i,j}(x+\alpha )^{i}(y+\beta )^{j}}$

Using the binomial expansion,

${\displaystyle Q(x+\alpha ,y+\beta )=\sum _{i,j}a_{i,j}{\Bigg (}\sum _{u}{\binom {i}{u}}x^{u}\alpha ^{i-u}{\Bigg )}{\Bigg (}\sum _{v}{\binom {j}{v}}y^{v}\beta ^{j-v}{\Bigg )}}$

Collecting the coefficient of each monomial ${\displaystyle x^{u}y^{v}}$,

${\displaystyle Q(x+\alpha ,y+\beta )=\sum _{u,v}x^{u}y^{v}{\Bigg (}\sum _{i,j}{\binom {i}{u}}{\binom {j}{v}}a_{i,j}\alpha ^{i-u}\beta ^{j-v}{\Bigg )}=\sum _{u,v}Q_{u,v}(\alpha ,\beta )\,x^{u}y^{v}}$

Proof of Lemma:

${\displaystyle Q(x,y)}$ has a zero of multiplicity ${\displaystyle r}$ at ${\displaystyle (\alpha ,\beta )}$ exactly when ${\displaystyle Q_{u,v}(\alpha ,\beta )=0}$ for all ${\displaystyle u,v}$ such that ${\displaystyle 0\leq u+v\leq r-1}$. For each ${\displaystyle v}$ with ${\displaystyle 0\leq v\leq r-1}$, ${\displaystyle u}$ can take ${\displaystyle r-v}$ values. Thus the total number of constraints is

${\displaystyle \sum _{v=0}^{r-1}(r-v)={\frac {r(r+1)}{2}}={\binom {r+1}{2}}}$

Thus ${\displaystyle {\binom {r+1}{2}}}$ selections can be made for ${\displaystyle (u,v)}$, and each selection imposes one linear constraint on the coefficients ${\displaystyle a_{i,j}}$.
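The count in the Lemma can be verified by direct enumeration of the pairs ${\displaystyle (u,v)}$ (a minimal sketch):

```python
from math import comb

# One linear constraint Q_{u,v}(alpha, beta) = 0 for each pair (u, v)
# with u + v <= r - 1; enumerate them and compare with binom(r+1, 2).
def num_constraints(r):
    return sum(1 for v in range(r) for u in range(r - v))

for r in range(1, 9):
    assert num_constraints(r) == comb(r + 1, 2)
print([num_constraints(r) for r in range(1, 5)])  # [1, 3, 6, 10]
```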

#### Factorization step

Proposition: ${\displaystyle (y-p(x))}$ divides ${\displaystyle Q(x,y)}$ if and only if ${\displaystyle Q(x,p(x))\equiv 0}$.

Proof:

Dividing ${\displaystyle Q(x,y)}$ by ${\displaystyle (y-p(x))}$ gives

${\displaystyle Q(x,y)=(y-p(x))L(x,y)+R(x)}$

where ${\displaystyle L(x,y)}$ is the quotient obtained when ${\displaystyle Q(x,y)}$ is divided by ${\displaystyle y-p(x)}$ and ${\displaystyle R(x)}$ is the remainder, which is free of ${\displaystyle y}$ because ${\displaystyle y-p(x)}$ is monic of degree 1 in ${\displaystyle y}$. Substituting ${\displaystyle y=p(x)}$ yields ${\displaystyle Q(x,p(x))=R(x)}$, so ${\displaystyle (y-p(x))}$ divides ${\displaystyle Q(x,y)}$ exactly when ${\displaystyle Q(x,p(x))\equiv 0}$.

Theorem: If a polynomial ${\displaystyle p(x)}$ of degree at most ${\displaystyle k}$ agrees with the received word in at least ${\displaystyle t}$ positions, with ${\displaystyle t>{\sqrt {kn}}}$, then ${\displaystyle p(x)}$ appears in the output list.

Proof:

As proved above, a non-zero ${\displaystyle Q(x,y)}$ exists whenever the number of its coefficients exceeds the total number of constraints:

${\displaystyle {\frac {D(D+2)}{2(k-1)}}>n{\binom {r+1}{2}}}$

where the left-hand side counts the coefficients of ${\displaystyle Q(x,y)}$ of bounded weighted degree ${\displaystyle D}$ and the right-hand side follows from the Lemma proved earlier, with ${\displaystyle D}$ set to

${\displaystyle D={\sqrt {knr(r-1)}}\,}$

Each of the ${\displaystyle t}$ agreement points contributes a root of multiplicity ${\displaystyle r}$ to ${\displaystyle Q(x,p(x))}$, whose degree is at most ${\displaystyle D}$, so ${\displaystyle p(x)}$ is output whenever ${\displaystyle tr>D}$, i.e.

${\displaystyle t>{\sqrt {kn\left(1-{\frac {1}{r}}\right)}}}$

and an agreement of ${\displaystyle t\geq \left\lceil {\sqrt {kn}}\right\rceil }$ always suffices.

This proves that the Guruswami–Sudan list decoding algorithm can list decode Reed–Solomon (RS) codes up to a ${\displaystyle 1-{\sqrt {R}}}$ fraction of errors.
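As a closing sanity check, the three decoding radii discussed above can be tabulated as a function of the rate (a sketch; the sample rates are arbitrary):

```python
from math import sqrt

# Correctable error fractions: unique decoding (1-R)/2, Sudan's
# algorithm 1 - sqrt(2R), and Guruswami-Sudan 1 - sqrt(R).
for R in (0.05, 0.10, 0.25, 0.50):
    unique = (1 - R) / 2
    sudan = max(0.0, 1 - sqrt(2 * R))
    gs = 1 - sqrt(R)
    assert gs >= unique and gs >= sudan   # GS dominates both
    print(f"R={R:.2f}  unique={unique:.3f}  Sudan={sudan:.3f}  GS={gs:.3f}")
```

Note that ${\displaystyle 1-{\sqrt {R}}\geq (1-R)/2}$ for every rate, since the difference reduces to ${\displaystyle (1-{\sqrt {R}})^{2}\geq 0}$.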