# Berlekamp–Welch algorithm

The Berlekamp–Welch algorithm, also known as the Welch–Berlekamp algorithm, is named for Elwyn R. Berlekamp and Lloyd R. Welch. The algorithm efficiently corrects errors in BCH codes and Reed–Solomon codes (which are a subset of BCH codes). Unlike many other decoding algorithms, and in contrast to the code-domain Berlekamp–Massey algorithm, which uses syndrome decoding and the dual of the code, the Berlekamp–Welch algorithm decodes Reed–Solomon codes using only the generator matrix, with no syndrome computation.

## History on decoding Reed–Solomon codes

1. In 1960, Peterson came up with an algorithm for decoding BCH codes. His algorithm solves the important second stage of the generalized BCH decoding procedure and is used to calculate the error locator polynomial coefficients that in turn provide the error locator polynomial. This is crucial to the decoding of BCH codes.
2. In 1963, Gorenstein and Zierler observed that BCH codes and Reed–Solomon codes have a common generalization, and that the decoding algorithm extends to this more general situation.
3. In 1968/69, Elwyn Berlekamp invented an algorithm for decoding BCH codes. James Massey recognized its application to linear feedback shift registers and simplified the algorithm. Massey termed the algorithm the LFSR Synthesis Algorithm (Berlekamp Iterative Algorithm), but it is now known as the Berlekamp–Massey algorithm.
4. In 1986, the Welch–Berlekamp algorithm was developed to solve the decoding equation of Reed–Solomon codes, using a fast method to solve a certain polynomial equation. The Berlekamp–Welch algorithm has a running time complexity of ${\mathcal {O}}(N^{3})$. In the following sections we look at Gemmell and Sudan's exposition of the Berlekamp–Welch algorithm.

## Error locator polynomial of Reed–Solomon codes

$E(X)=\prod _{\alpha _{i}\in S}(X-\alpha _{i})$, where $S=\{\alpha _{i}\mid P(\alpha _{i})\neq y_{i}\}$ is the set of evaluation points at which the received value $y_{i}$ disagrees with the message polynomial $P(X)$. Since both $E(X)$ and $P(X)$ are unknown, the main task of the decoding algorithm is to find $P(X)$. To do this we use a seemingly useless yet very powerful method and define another polynomial $Q(X)$ as $Q(X)=P(X)E(X)$. The reason is that the $n$ equations in the $e+k$ unknowns we need to solve are quadratic in nature. By treating the product of two unknowns as a single new unknown, we increase the number of unknowns but make the equations linear. This method is called linearization and is a very powerful tool.
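The usefulness of $Q(X)=P(X)E(X)$ comes from the identity $Q(\alpha _{i})=y_{i}E(\alpha _{i})$, which holds at every evaluation point, erroneous or not; this is the linear relation the decoder solves for. A minimal sketch with hypothetical toy data (the points and polynomials below are illustrative assumptions, not part of the general algorithm):

```python
# Toy data (hypothetical): P(X) = 5 - X sampled at X = 1..4, with the
# value at X = 3 corrupted (4 instead of the true value 2).
received = [(1, 4), (2, 3), (3, 4), (4, 1)]

P = lambda x: 5 - x        # message polynomial (unknown to the decoder)
E = lambda x: x - 3        # error locator: zero exactly at the bad point
Q = lambda x: P(x) * E(x)  # the linearizing product Q = P * E

# The identity Q(a_i) = y_i * E(a_i) holds at EVERY point: where y_i is
# correct, both sides equal P(a_i) * E(a_i); where y_i is wrong,
# E(a_i) = 0 makes both sides zero.
for a, y in received:
    assert Q(a) == y * E(a)
```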

## The Berlekamp–Welch decoder and algorithm

The Welch–Berlekamp decoder for Reed–Solomon codes consists of the Welch–Berlekamp algorithm augmented by some additional steps that prepare the received word for the algorithm and interpret the result of the algorithm.

The output of the decoder is either the polynomial $P(X)$, or in some cases, a failure. This decoder functions in two steps as follows:

1. Solve the system of $n$ linear equations $Q(\alpha _{i})=y_{i}E(\alpha _{i})$, $i=1,\ldots ,n$, for the coefficients of a polynomial $Q(X)$ of degree at most $e+k-1$ and a monic polynomial $E(X)$ of degree exactly $e$. If the system has no solution, output a failure.
2. Compute $P(X)=Q(X)/E(X)$. If $E(X)$ does not divide $Q(X)$, or if the resulting $P(X)$ disagrees with the received word in more than $e$ positions, output a failure; otherwise output $P(X)$.
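The two-step decoder can be sketched in code. The following is an illustrative implementation over the rationals (via `fractions.Fraction`) rather than over a finite field, which a real Reed–Solomon decoder would use; the helper names `solve_linear`, `poly_div`, and `berlekamp_welch` are our own, and the sketch assumes exactly $n=2e+k$ points so the linear system is square:

```python
from fractions import Fraction

def solve_linear(M):
    """Gaussian elimination on an augmented matrix (n rows, n+1 columns).
    Returns the unique solution, or None if the system is singular."""
    n = len(M)
    M = [row[:] for row in M]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def poly_div(num, den):
    """Exact polynomial division (coefficients listed low degree first).
    Returns the quotient, or None if a nonzero remainder is left."""
    num, quot = num[:], []
    while len(num) >= len(den):
        c = num[-1] / den[-1]
        quot.append(c)
        off = len(num) - len(den)
        for i, d in enumerate(den):
            num[off + i] -= c * d
        num.pop()
    if any(c != 0 for c in num):
        return None
    return quot[::-1]

def berlekamp_welch(points, k, e):
    """Decode n = 2e + k received points (alpha_i, y_i), assuming at most
    e errors and deg P < k.  Returns the coefficients of P (low degree
    first), or None on failure."""
    rows = []
    for a, y in points:
        a, y = Fraction(a), Fraction(y)
        # Unknowns: q_0..q_{e+k-1}, then e_0..e_{e-1} (E is monic of degree e).
        row = [a**j for j in range(e + k)]    # coefficients of Q(a)
        row += [-y * a**j for j in range(e)]  # -y * (unknown part of E(a))
        row.append(y * a**e)                  # known term, moved to the RHS
        rows.append(row)
    sol = solve_linear(rows)
    if sol is None:
        return None
    Q = sol[:e + k]
    E = sol[e + k:] + [Fraction(1)]           # re-attach the monic leading 1
    return poly_div(Q, E)                     # P = Q / E, or None on remainder
```

On the four-point example used later in this article, `berlekamp_welch([(1, 4), (2, 3), (3, 4), (4, 1)], k=2, e=1)` returns `[5, -1]`, the coefficients of $P(x)=5-x$.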

Whenever the algorithm does not output a failure, the polynomial $P(X)$ it outputs is the correct, desired polynomial. To prove this, we need to establish a few claims made while describing the algorithm. Let us do so now.

Claim 1: There exists a pair of polynomials $E(X)$, of degree $e$, and $Q(X)$, of degree at most $e+k-1$, satisfying $Q(\alpha _{i})=y_{i}E(\alpha _{i})$ for all $i$: take $E(X)$ to be the error locator polynomial and $Q(X)=P(X)E(X)$, so that $P(X)=Q(X)/E(X)$.

This claim, however, only establishes that such a pair of polynomials exists. It does not guarantee that the algorithm discussed above will output that particular pair. We therefore move on to another claim that, building on the one above, establishes the correctness of the algorithm.

Claim 2: Any two distinct solutions $(E_{1}(X),Q_{1}(X))\neq (E_{2}(X),Q_{2}(X))$ that satisfy the first step of the Berlekamp–Welch algorithm given above also satisfy

$${Q_{1}(X) \over E_{1}(X)}={Q_{2}(X) \over E_{2}(X)}.$$

Thus, based on the above claims, we can safely state that whenever the Berlekamp–Welch algorithm outputs a polynomial $P(X)$, that output is correct.
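Claim 2 follows from a standard degree-counting argument (a sketch, under the usual assumption that the number of errors satisfies $e\leq (n-k)/2$, so that $n\geq 2e+k$). Both solutions satisfy $Q_{j}(\alpha _{i})=y_{i}E_{j}(\alpha _{i})$ at every point, hence for each of the $n$ points $\alpha _{i}$:

$$Q_{1}(\alpha _{i})E_{2}(\alpha _{i})=y_{i}E_{1}(\alpha _{i})E_{2}(\alpha _{i})=Q_{2}(\alpha _{i})E_{1}(\alpha _{i}).$$

Since $\deg(Q_{1}E_{2})$ and $\deg(Q_{2}E_{1})$ are both at most $2e+k-1<n$, the polynomial $Q_{1}E_{2}-Q_{2}E_{1}$ has more roots than its degree and is therefore identically zero, which gives the claimed equality of the quotients.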

## Example

The error locator polynomial serves to "neutralize" errors in $P$ by making $Q$ zero at those points, so that the system of linear equations is not affected by the inaccuracy in the input.

Consider a simple example where a redundant set of points are used to represent the line $y=5-x$ , and one of the points is incorrect. The points that the algorithm gets as an input are $(1,4),(2,3),(3,4),(4,1)$ , where $(3,4)$ is the defective point. The algorithm must solve the following system of equations:

$$\begin{aligned}Q(1)&=4\cdot E(1)\\Q(2)&=3\cdot E(2)\\Q(3)&=4\cdot E(3)\\Q(4)&=1\cdot E(4)\end{aligned}$$

Writing $Q(x)=q_{0}+q_{1}x+q_{2}x^{2}$ and $E(x)=e_{0}+x$ (monic of degree one, since a single error is assumed), this becomes:

$$\begin{aligned}q_{0}+q_{1}+q_{2}-4e_{0}-4&=0\\q_{0}+2q_{1}+4q_{2}-3e_{0}-6&=0\\q_{0}+3q_{1}+9q_{2}-4e_{0}-12&=0\\q_{0}+4q_{1}+16q_{2}-e_{0}-4&=0\end{aligned}$$

This system can be solved through Gaussian elimination, and gives the values:

$$q_{0}=-15,\quad q_{1}=8,\quad q_{2}=-1,\quad e_{0}=-3$$

Hence $E(x)=x-3$ and $Q(x)=-x^{2}+8x-15=-(x-3)(x-5)$, so

$${Q(x) \over E(x)}=P(x)=5-x.$$

$5-x$ fits three of the four points given, so it is the most likely candidate for the original polynomial.
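As a sanity check (a sketch in exact arithmetic via `fractions.Fraction`; the variable names are ours), one can verify that these values satisfy all four equations and that the quotient really is $5-x$:

```python
from fractions import Fraction

q = [Fraction(-15), Fraction(8), Fraction(-1)]  # Q(x) = -15 + 8x - x^2
e = [Fraction(-3), Fraction(1)]                 # E(x) = -3 + x

def ev(coeffs, x):
    """Horner evaluation; coefficients are listed low degree first."""
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Every equation Q(a) = y * E(a) of the system above holds:
for a, y in [(1, 4), (2, 3), (3, 4), (4, 1)]:
    assert ev(q, a) == y * ev(e, a)

# And Q(x)/E(x) = -(x - 3)(x - 5)/(x - 3) = 5 - x wherever E(x) != 0:
for x in range(-5, 10):
    if ev(e, x) != 0:
        assert ev(q, x) / ev(e, x) == 5 - x
```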