# Lagrange polynomial


In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of distinct points ${\displaystyle x_{j}}$ and numbers ${\displaystyle y_{j}}$, the Lagrange polynomial is the polynomial of least degree that assumes at each point ${\displaystyle x_{j}}$ the corresponding value ${\displaystyle y_{j}}$ (i.e. the functions coincide at each point). Because the interpolating polynomial of least degree is unique, it is more appropriate to speak of "the Lagrange form" of that unique polynomial rather than "the Lagrange interpolation polynomial": the same polynomial can be arrived at through several different methods. Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring and is also an easy consequence of a formula published in 1783 by Leonhard Euler.[1]

Lagrange interpolation is susceptible to Runge's phenomenon, and the fact that changing the interpolation points requires recalculating the entire interpolant can make Newton polynomials easier to use. Lagrange polynomials are used in the Newton–Cotes method of numerical integration and in Shamir's secret sharing scheme in cryptography.

This image shows, for the four points (−9, 5), (−4, 2), (−1, −2), (7, 9), the (cubic) interpolation polynomial L(x) (in black), which is the sum of the scaled basis polynomials ${\displaystyle y_{0}\ell _{0}(x)}$, ${\displaystyle y_{1}\ell _{1}(x)}$, ${\displaystyle y_{2}\ell _{2}(x)}$ and ${\displaystyle y_{3}\ell _{3}(x)}$. The interpolation polynomial passes through all four control points, and each scaled basis polynomial passes through its respective control point and is 0 where x corresponds to the other three control points.

## Definition

Given a set of k + 1 data points

${\displaystyle (x_{0},y_{0}),\ldots ,(x_{j},y_{j}),\ldots ,(x_{k},y_{k})}$

where no two ${\displaystyle x_{j}}$ are the same, the interpolation polynomial in the Lagrange form is a linear combination

${\displaystyle L(x):=\sum _{j=0}^{k}y_{j}\ell _{j}(x)}$

of Lagrange basis polynomials

${\displaystyle \ell _{j}(x):=\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{k})}{(x_{j}-x_{k})}},}$

where ${\displaystyle 0\leq j\leq k}$. Note how, given the initial assumption that no two ${\displaystyle x_{i}}$ are the same, ${\displaystyle x_{j}-x_{m}\neq 0}$, so this expression is always well-defined. The reason pairs ${\displaystyle x_{i}=x_{j}}$ with ${\displaystyle y_{i}\neq y_{j}}$ are not allowed is that no interpolation function ${\displaystyle L}$ such that ${\displaystyle y_{i}=L(x_{i})}$ would exist; a function can take only one value for each argument ${\displaystyle x_{i}}$. On the other hand, if also ${\displaystyle y_{i}=y_{j}}$, then those two points would actually be one single point.

For all ${\displaystyle j\neq i}$, ${\displaystyle \ell _{j}(x)}$ includes the term ${\displaystyle (x-x_{i})}$ in the numerator, so the whole product will be zero at ${\displaystyle x=x_{i}}$:

${\displaystyle \ell _{j\neq i}(x_{i})=\prod _{m\neq j}{\frac {x_{i}-x_{m}}{x_{j}-x_{m}}}={\frac {(x_{i}-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x_{i}-x_{i})}{(x_{j}-x_{i})}}\cdots {\frac {(x_{i}-x_{k})}{(x_{j}-x_{k})}}=0.}$

On the other hand,

${\displaystyle \ell _{i}(x_{i}):=\prod _{m\neq i}{\frac {x_{i}-x_{m}}{x_{i}-x_{m}}}=1}$

In other words, all basis polynomials are zero at ${\displaystyle x=x_{i}}$, except ${\displaystyle \ell _{i}(x)}$, for which it holds that ${\displaystyle \ell _{i}(x_{i})=1}$, because it lacks the ${\displaystyle (x-x_{i})}$ term.

It follows that ${\displaystyle y_{i}\ell _{i}(x_{i})=y_{i}}$, so at each point ${\displaystyle x_{i}}$, ${\displaystyle L(x_{i})=y_{i}+0+0+\dots +0=y_{i}}$, showing that ${\displaystyle L}$ interpolates the function exactly.
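The definition above translates directly into code. A minimal sketch in Python (the function names are my own):

```python
def lagrange_basis(xs, j, x):
    """Evaluate the Lagrange basis polynomial l_j(x) for the nodes xs."""
    result = 1.0
    for m, xm in enumerate(xs):
        if m != j:  # skip the m = j factor, exactly as in the definition
            result *= (x - xm) / (xs[j] - xm)
    return result

def lagrange_interpolate(xs, ys, x):
    """Evaluate L(x) = sum_j y_j * l_j(x)."""
    return sum(yj * lagrange_basis(xs, j, x) for j, yj in enumerate(ys))
```

At each node ${\displaystyle x_{i}}$ every basis polynomial vanishes except ${\displaystyle \ell _{i}}$, which equals 1, so the sum collapses to ${\displaystyle y_{i}}$.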

## Proof

The function L(x) being sought is a polynomial in ${\displaystyle x}$ of the least degree that interpolates the given data set; that is, assumes value ${\displaystyle y_{j}}$ at the corresponding ${\displaystyle x_{j}}$ for all data points ${\displaystyle j}$:

${\displaystyle L(x_{j})=y_{j}\qquad j=0,\ldots ,k}$

Observe that:

1. In ${\displaystyle \ell _{j}(x)}$ there are k factors in the product, each containing one x, so each ${\displaystyle \ell _{j}(x)}$ is a polynomial of degree k, and L(x) (a linear combination of these degree-k polynomials) is a polynomial of degree at most k.
2. ${\displaystyle \ell _{j}(x_{i})=\prod _{m=0,\,m\neq j}^{k}{\frac {x_{i}-x_{m}}{x_{j}-x_{m}}}}$

Consider what happens when this product is expanded. Because the product skips ${\displaystyle m=j}$, if ${\displaystyle i=j}$ then every factor equals ${\displaystyle {\frac {x_{j}-x_{m}}{x_{j}-x_{m}}}=1}$ (no denominator can vanish, since the nodes are distinct and ${\displaystyle m\neq j}$). If instead ${\displaystyle i\neq j}$, then the product includes the factor for ${\displaystyle m=i}$, namely ${\displaystyle {\frac {x_{i}-x_{i}}{x_{j}-x_{i}}}=0}$, which zeroes the entire product. So

${\displaystyle \ell _{j}(x_{i})=\delta _{ji}={\begin{cases}1,&{\text{if }}j=i\\0,&{\text{if }}j\neq i\end{cases}}}$

and therefore

${\displaystyle L(x_{i})=\sum _{j=0}^{k}y_{j}\ell _{j}(x_{i})=\sum _{j=0}^{k}y_{j}\delta _{ji}=y_{i}.}$

Thus the function L(x) is a polynomial of degree at most k satisfying ${\displaystyle L(x_{i})=y_{i}}$ at every data point.

Additionally, the interpolating polynomial is unique, as shown by the unisolvence theorem in the polynomial interpolation article.

## Main idea

Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using the standard monomial basis for our interpolation polynomial ${\displaystyle L(x)=\sum _{j=0}^{k}m_{j}x^{j}}$, we must invert the Vandermonde matrix ${\displaystyle (x_{i}^{\,j})}$ to solve ${\displaystyle L(x_{i})=y_{i}}$ for the coefficients ${\displaystyle m_{j}}$ of ${\displaystyle L(x)}$. By choosing a better basis, the Lagrange basis ${\displaystyle L(x)=\sum _{j=0}^{k}y_{j}\ell _{j}(x)}$, we merely get the identity matrix ${\displaystyle \delta _{ij}}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.

This construction is analogous to the Chinese remainder theorem: instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linear polynomials.
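The identity-matrix observation can be checked numerically: evaluating each Lagrange basis polynomial at each node yields exactly ${\displaystyle \delta _{ij}}$. A small sketch (the node values here are illustrative choices of my own):

```python
def lagrange_basis(xs, j, x):
    """Evaluate the Lagrange basis polynomial l_j(x) for the nodes xs."""
    result = 1.0
    for m, xm in enumerate(xs):
        if m != j:
            result *= (x - xm) / (xs[j] - xm)
    return result

# In the Lagrange basis, the matrix A[i][j] = l_j(x_i) that would have to be
# inverted to solve L(x_i) = y_i is simply the identity matrix.
xs = [1.0, 2.0, 3.0]
A = [[lagrange_basis(xs, j, xi) for j in range(len(xs))] for xi in xs]
```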

## Examples

### Example 1

The tangent function and its interpolant

Find an interpolation formula for ƒ(x) = tan(x) given this set of known values:

${\displaystyle {\begin{aligned}x_{0}&=-1.5&&&&&f(x_{0})&=-14.1014\\x_{1}&=-0.75&&&&&f(x_{1})&=-0.931596\\x_{2}&=0&&&&&f(x_{2})&=0\\x_{3}&=0.75&&&&&f(x_{3})&=0.931596\\x_{4}&=1.5&&&&&f(x_{4})&=14.1014.\end{aligned}}}$

The Lagrange basis polynomials are:

${\displaystyle \ell _{0}(x)={x-x_{1} \over x_{0}-x_{1}}\cdot {x-x_{2} \over x_{0}-x_{2}}\cdot {x-x_{3} \over x_{0}-x_{3}}\cdot {x-x_{4} \over x_{0}-x_{4}}={1 \over 243}x(2x-3)(4x-3)(4x+3)}$
${\displaystyle \ell _{1}(x)={x-x_{0} \over x_{1}-x_{0}}\cdot {x-x_{2} \over x_{1}-x_{2}}\cdot {x-x_{3} \over x_{1}-x_{3}}\cdot {x-x_{4} \over x_{1}-x_{4}}={}-{8 \over 243}x(2x-3)(2x+3)(4x-3)}$
${\displaystyle \ell _{2}(x)={x-x_{0} \over x_{2}-x_{0}}\cdot {x-x_{1} \over x_{2}-x_{1}}\cdot {x-x_{3} \over x_{2}-x_{3}}\cdot {x-x_{4} \over x_{2}-x_{4}}={3 \over 243}(2x+3)(4x+3)(4x-3)(2x-3)}$
${\displaystyle \ell _{3}(x)={x-x_{0} \over x_{3}-x_{0}}\cdot {x-x_{1} \over x_{3}-x_{1}}\cdot {x-x_{2} \over x_{3}-x_{2}}\cdot {x-x_{4} \over x_{3}-x_{4}}=-{8 \over 243}x(2x-3)(2x+3)(4x+3)}$
${\displaystyle \ell _{4}(x)={x-x_{0} \over x_{4}-x_{0}}\cdot {x-x_{1} \over x_{4}-x_{1}}\cdot {x-x_{2} \over x_{4}-x_{2}}\cdot {x-x_{3} \over x_{4}-x_{3}}={1 \over 243}x(2x+3)(4x-3)(4x+3).}$

The interpolating polynomial is then

${\displaystyle {\begin{aligned}L(x)&={1 \over 243}{\Big (}f(x_{0})x(2x-3)(4x-3)(4x+3)\\&{}\qquad {}-8f(x_{1})x(2x-3)(2x+3)(4x-3)\\&{}\qquad {}+3f(x_{2})(2x+3)(4x+3)(4x-3)(2x-3)\\&{}\qquad {}-8f(x_{3})x(2x-3)(2x+3)(4x+3)\\&{}\qquad {}+f(x_{4})x(2x+3)(4x-3)(4x+3){\Big )}\\&=4.834848x^{3}-1.477474x.\end{aligned}}}$
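The expanded cubic can be checked against the tabulated values, a quick numerical sanity check:

```python
# Verify that the expanded interpolant matches the tabulated tan values
# at the five nodes (to the precision of the rounded coefficients).
def L(x):
    return 4.834848 * x**3 - 1.477474 * x

nodes  = [-1.5, -0.75, 0.0, 0.75, 1.5]
values = [-14.1014, -0.931596, 0.0, 0.931596, 14.1014]
```

Because tan is an odd function and the nodes are symmetric about 0, the interpolant is itself odd, so the even-degree coefficients vanish.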

### Example 2

We wish to interpolate ${\displaystyle f(x)=x^{2}}$ over the range 1 ≤ x ≤ 3, given these three points:

${\displaystyle {\begin{aligned}x_{0}&=1&&&f(x_{0})&=1\\x_{1}&=2&&&f(x_{1})&=4\\x_{2}&=3&&&f(x_{2})&=9.\end{aligned}}}$

The interpolating polynomial is:

${\displaystyle {\begin{aligned}L(x)&={1}\cdot {x-2 \over 1-2}\cdot {x-3 \over 1-3}+{4}\cdot {x-1 \over 2-1}\cdot {x-3 \over 2-3}+{9}\cdot {x-1 \over 3-1}\cdot {x-2 \over 3-2}\\[10pt]&=x^{2}.\end{aligned}}}$
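Since a degree-2 polynomial that agrees with ${\displaystyle x^{2}}$ at three distinct points must equal ${\displaystyle x^{2}}$, evaluating the Lagrange form anywhere should reproduce the square. A quick check:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange form L(x) for nodes xs and values ys."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

xs, ys = [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]
```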

### Example 3

We wish to interpolate ${\displaystyle f(x)=x^{3}}$ over the range 1 ≤ x ≤ 3, given these three points:

${\displaystyle {\begin{aligned}x_{0}&=1&&&f(x_{0})&=1\\x_{1}&=2&&&f(x_{1})&=8\\x_{2}&=3&&&f(x_{2})&=27.\end{aligned}}}$

The interpolating polynomial is:

${\displaystyle {\begin{aligned}L(x)&={1}\cdot {x-2 \over 1-2}\cdot {x-3 \over 1-3}+{8}\cdot {x-1 \over 2-1}\cdot {x-3 \over 2-3}+{27}\cdot {x-1 \over 3-1}\cdot {x-2 \over 3-2}\\[8pt]&=6x^{2}-11x+6.\end{aligned}}}$
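Unlike in Example 2, the quadratic interpolant here cannot reproduce the cubic ${\displaystyle x^{3}}$; it agrees with it only at the three nodes. A quick check:

```python
def L(x):
    # The quadratic interpolant of f(x) = x**3 at the nodes x = 1, 2, 3.
    return 6 * x**2 - 11 * x + 6

# L matches x**3 at the nodes but deviates between them:
# for instance L(1.5) = 3.0, while 1.5**3 = 3.375.
```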

### Notes

Example of Lagrange polynomial interpolation divergence.

The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.

But, as can be seen from the construction, each time a node ${\displaystyle x_{k}}$ changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of Lagrange interpolation (see below) or the Newton polynomial.

Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.[2]

The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.

## Barycentric interpolation

Using

${\displaystyle \ell (x)=(x-x_{0})(x-x_{1})\cdots (x-x_{k})}$

we can rewrite the Lagrange basis polynomials as

${\displaystyle \ell _{j}(x)={\frac {\ell (x)}{x-x_{j}}}{\frac {1}{\prod _{i=0,i\neq j}^{k}(x_{j}-x_{i})}}}$

or, by defining the barycentric weights[3]

${\displaystyle w_{j}={\frac {1}{\prod _{i=0,i\neq j}^{k}(x_{j}-x_{i})}}}$

we can simply write

${\displaystyle \ell _{j}(x)=\ell (x){\frac {w_{j}}{x-x_{j}}}}$

which is commonly referred to as the first form of the barycentric interpolation formula.

The advantage of this representation is that the interpolation polynomial may now be evaluated as

${\displaystyle L(x)=\ell (x)\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}y_{j}}$

which, if the weights ${\displaystyle w_{j}}$ have been pre-computed, requires only ${\displaystyle {\mathcal {O}}(n)}$ operations (evaluating ${\displaystyle \ell (x)}$ and the weights ${\displaystyle w_{j}/(x-x_{j})}$) as opposed to ${\displaystyle {\mathcal {O}}(n^{2})}$ for evaluating the Lagrange basis polynomials ${\displaystyle \ell _{j}(x)}$ individually.

The barycentric interpolation formula can also easily be updated to incorporate a new node ${\displaystyle x_{k+1}}$ by dividing each of the ${\displaystyle w_{j}}$, ${\displaystyle j=0\dots k}$ by ${\displaystyle (x_{j}-x_{k+1})}$ and constructing the new ${\displaystyle w_{k+1}}$ as above.
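This update can be sketched as follows (the function names are my own; the weights of the existing nodes are rescaled in O(k) operations, and only the new weight is built from scratch):

```python
def barycentric_weights(xs):
    """w_j = 1 / prod_{i != j} (x_j - x_i), computed directly in O(k^2)."""
    ws = []
    for j, xj in enumerate(xs):
        w = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                w /= (xj - xi)
        ws.append(w)
    return ws

def add_node(xs, ws, x_new):
    """Incorporate a new node: divide each old w_j by (x_j - x_new),
    then construct the new node's weight from scratch."""
    ws = [w / (xj - x_new) for xj, w in zip(xs, ws)]
    w_new = 1.0
    for xj in xs:
        w_new /= (x_new - xj)
    return xs + [x_new], ws + [w_new]
```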

We can further simplify the first form by first considering the barycentric interpolation of the constant function ${\displaystyle g(x)\equiv 1}$:

${\displaystyle g(x)=\ell (x)\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}.}$

Dividing ${\displaystyle L(x)}$ by ${\displaystyle g(x)}$ does not modify the interpolation, yet yields

${\displaystyle L(x)={\frac {\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}y_{j}}{\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}}}}$

which is referred to as the second form or true form of the barycentric interpolation formula. This second form has the advantage that ${\displaystyle \ell (x)}$ need not be evaluated for each evaluation of ${\displaystyle L(x)}$.
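The second form can be sketched as below (names are my own; the explicit node check guards against division by zero when x coincides with a node):

```python
def barycentric_weights(xs):
    """w_j = 1 / prod_{i != j} (x_j - x_i)."""
    ws = []
    for j, xj in enumerate(xs):
        w = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                w /= (xj - xi)
        ws.append(w)
    return ws

def barycentric_eval(xs, ys, ws, x):
    """Second (true) form: the ratio of two weighted sums; l(x) never appears."""
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, ws):
        if x == xj:  # exactly at a node: return the data value directly
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den
```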

## Finite fields

The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
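As an illustration, here is a sketch of Lagrange interpolation at x = 0 over GF(p), which is how the constant-term secret is recovered in Shamir's scheme. The prime and the share values below are made-up example numbers, not part of any real protocol:

```python
P = 2**61 - 1  # a Mersenne prime; any prime larger than the secret works

def interpolate_at_zero(shares, p=P):
    """Evaluate the Lagrange interpolant at x = 0 in GF(p).
    shares is a list of (x_j, y_j) pairs with distinct nonzero x_j."""
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % p        # factor (0 - x_m)
                den = den * (xj - xm) % p
        # pow(den, -1, p) is the modular inverse (Python 3.8+)
        secret = (secret + yj * num * pow(den, -1, p)) % p
    return secret
```

Division is replaced by multiplication with a modular inverse, which exists because p is prime.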

## References

1. {{#invoke:citation/CS1|citation |CitationClass=citation }}.
2. {{#invoke:citation/CS1|citation |CitationClass=citation }}.
3. {{#invoke:Citation/CS1|citation |CitationClass=journal }}