# Inequality of arithmetic and geometric means

In mathematics, the inequality of arithmetic and geometric means, or more briefly the AM–GM inequality, states that the arithmetic mean of a list of non-negative real numbers is greater than or equal to the geometric mean of the same list; and further, that the two means are equal if and only if every number in the list is the same.

The simplest non-trivial case (i.e., with more than one variable), for two non-negative numbers x and y, is the statement that

${\frac {x+y}{2}}\geq {\sqrt {xy}}$ with equality if and only if x = y. This case can be seen from the fact that the square of a real number is always non-negative (greater than or equal to zero) and from the elementary case (a ± b)² = a² ± 2ab + b² of the binomial formula:

${\begin{aligned}0&\leq (x-y)^{2}\\&=x^{2}-2xy+y^{2}\\&=x^{2}+2xy+y^{2}-4xy\\&=(x+y)^{2}-4xy.\end{aligned}}$ In other words (x + y)² ≥ 4xy, with equality precisely when (x − y)² = 0, i.e. x = y. For a geometrical interpretation, consider a rectangle with sides of length x and y; it has perimeter 2x + 2y and area xy. Similarly, a square with all sides of length √(xy) has perimeter 4√(xy) and the same area as the rectangle. The simplest non-trivial case of the AM–GM inequality thus implies for the perimeters that 2x + 2y ≥ 4√(xy), so the square has the smallest perimeter amongst all rectangles of equal area.
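As a quick numeric illustration (a Python sketch; the sample values are arbitrary), the two-variable inequality and its perimeter interpretation can be checked directly:

```python
import math

def am(x, y):
    # arithmetic mean of two non-negative numbers
    return (x + y) / 2

def gm(x, y):
    # geometric mean of two non-negative numbers
    return math.sqrt(x * y)

# AM >= GM, with equality exactly when x == y
assert am(4.0, 9.0) >= gm(4.0, 9.0)   # 6.5 >= 6.0
assert am(5.0, 5.0) == gm(5.0, 5.0)   # both are 5.0

# Perimeter view: the 4-by-9 rectangle (area 36) has perimeter 26,
# while the square of the same area has perimeter 4*sqrt(36) = 24.
assert 2 * 4 + 2 * 9 > 4 * math.sqrt(4 * 9)
```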

The general AM–GM inequality corresponds to the fact that the natural logarithm, which converts multiplication to addition, is a strictly concave function; using Jensen's inequality the general proof of the inequality follows.

${\frac {\ln x+\ln y}{2}}\leq \ln \left({\frac {x+y}{2}}\right)$ Extensions of the AM–GM inequality are available to include weights or generalized means.

## Background

The arithmetic mean, or less precisely the average, of a list of n numbers x1, x2, . . . , xn is the sum of the numbers divided by n:

${\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}.$ The geometric mean is similar, except that it is only defined for a list of nonnegative real numbers, and uses multiplication and a root in place of addition and division:

${\sqrt[{n}]{x_{1}\cdot x_{2}\cdots x_{n}}}.$ If x1, x2, . . . , xn > 0, this is equal to the exponential of the arithmetic mean of the natural logarithms of the numbers:

$\exp \left({\frac {\ln {x_{1}}+\ln {x_{2}}+\cdots +\ln {x_{n}}}{n}}\right).$

## The inequality

Restating the inequality using mathematical notation, we have that for any list of n nonnegative real numbers x1, x2, . . . , xn,

${\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\geq {\sqrt[{n}]{x_{1}\cdot x_{2}\cdots x_{n}}}\,,$ and that equality holds if and only if x1 = x2 = · · · = xn.
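The general statement, and the logarithmic form of the geometric mean, can be spot-checked numerically. The following Python sketch uses arbitrary random samples:

```python
import math
import random

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # n-th root of the product; defined for non-negative inputs
    return math.prod(xs) ** (1 / len(xs))

def geometric_mean_via_logs(xs):
    # exp of the mean of the natural logs; requires strictly positive inputs
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

random.seed(0)
for _ in range(1000):
    xs = [random.uniform(0.0, 10.0) for _ in range(random.randint(1, 8))]
    # AM >= GM, up to floating-point rounding
    assert arithmetic_mean(xs) >= geometric_mean(xs) - 1e-9

xs = [1.5, 2.0, 8.0]
assert math.isclose(geometric_mean(xs), geometric_mean_via_logs(xs))
# equality case: all entries identical
assert math.isclose(arithmetic_mean([7.0] * 5), geometric_mean([7.0] * 5))
```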

## Geometric interpretation

In two dimensions, 2x1 + 2x2 is the perimeter of a rectangle with sides of length x1 and x2. Similarly, 4√(x1x2) is the perimeter of a square with the same area. Thus for n = 2 the AM–GM inequality states that the square has the smallest perimeter amongst all rectangles of equal area.

The full inequality is an extension of this idea to n dimensions. Every vertex of an n-dimensional box is connected to n edges. If these edges' lengths are x1, x2, . . . , xn, then x1 + x2 + · · · + xn is the total length of edges incident to the vertex. There are 2ⁿ vertices, so we multiply this by 2ⁿ; since each edge, however, meets two vertices, every edge is counted twice. Therefore we divide by 2 and conclude that there are 2ⁿ⁻¹n edges. There are equally many edges of each length and n lengths; hence there are 2ⁿ⁻¹ edges of each length and the total edge-length is 2ⁿ⁻¹(x1 + x2 + · · · + xn). On the other hand,

$2^{n-1}n{\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}$ is the total length of edges connected to a vertex of an n-dimensional cube of equal volume. Since the inequality says

${x_{1}+x_{2}+\cdots +x_{n} \over n}\geq {\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}},$ we get

$2^{n-1}(x_{1}+x_{2}+\cdots +x_{n})\geq 2^{n-1}n{\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}\,$ with equality if and only if x1 = x2 = · · · = xn.

Thus the AM–GM inequality states that, amongst all n-dimensional boxes with the same volume, only the n-cube has the smallest sum of lengths of edges connected to each vertex.

## Example application

Consider the function

$f(x,y,z)={\frac {x}{y}}+{\sqrt {\frac {y}{z}}}+{\sqrt[{3}]{\frac {z}{x}}}$ for all positive real numbers x, y and z. Suppose we wish to find the minimum value of this function. First we rewrite it a bit:

${\begin{aligned}f(x,y,z)&=6\cdot {\frac {{\frac {x}{y}}+{\frac {1}{2}}{\sqrt {\frac {y}{z}}}+{\frac {1}{2}}{\sqrt {\frac {y}{z}}}+{\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}+{\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}+{\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}}{6}}\\&=6\cdot {\frac {x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}}{6}}\end{aligned}}$ with

$x_{1}={\frac {x}{y}},\qquad x_{2}=x_{3}={\frac {1}{2}}{\sqrt {\frac {y}{z}}},\qquad x_{4}=x_{5}=x_{6}={\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}.$ Applying the AM–GM inequality for n = 6, we get

${\begin{aligned}f(x,y,z)&\geq 6\cdot {\sqrt[{6}]{{\frac {x}{y}}\cdot {\frac {1}{2}}{\sqrt {\frac {y}{z}}}\cdot {\frac {1}{2}}{\sqrt {\frac {y}{z}}}\cdot {\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}\cdot {\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}\cdot {\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}}}\\&=6\cdot {\sqrt[{6}]{{\frac {1}{2\cdot 2\cdot 3\cdot 3\cdot 3}}{\frac {x}{y}}{\frac {y}{z}}{\frac {z}{x}}}}\\&=2^{2/3}\cdot 3^{1/2}.\end{aligned}}$ Further, we know that the two sides are equal exactly when all the terms of the mean are equal:

$f(x,y,z)=2^{2/3}\cdot 3^{1/2}\quad {\mbox{when}}\quad {\frac {x}{y}}={\frac {1}{2}}{\sqrt {\frac {y}{z}}}={\frac {1}{3}}{\sqrt[{3}]{\frac {z}{x}}}.$ All the points (x, y, z) satisfying these conditions lie on a half-line starting at the origin and are given by

$(x,y,z)={\Bigl (}x,{\sqrt[{3}]{2}}{\sqrt {3}}\,x,{\frac {3{\sqrt {3}}}{2}}\,x{\Bigr )}\quad {\mbox{with}}\quad x>0.$

## Practical applications

An important practical application in financial mathematics is the computation of the rate of return: the annualized return, computed via the geometric mean, is less than the average annual return, computed via the arithmetic mean (or equal if all returns are equal). This is important in analyzing investments, as the average return overstates the cumulative effect.
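A minimal Python sketch of this effect, with made-up returns: a +50% year followed by a −50% year has an arithmetic average return of 0%, yet loses a quarter of the investment.

```python
import math

# hypothetical yearly growth factors: +50% then -50%
growth_factors = [1.5, 0.5]

arithmetic_avg_return = sum(growth_factors) / len(growth_factors) - 1
# annualized (geometric-mean) return
annualized_return = math.prod(growth_factors) ** (1 / len(growth_factors)) - 1

assert arithmetic_avg_return == 0.0
assert annualized_return < arithmetic_avg_return   # about -13.4% per year
# cumulative effect: 1.5 * 0.5 = 0.75, i.e. a 25% overall loss
assert math.prod(growth_factors) == 0.75
```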

## Proofs of the AM–GM inequality

There are several ways to prove the AM–GM inequality; for example, it can be inferred from Jensen's inequality, using the concave function ln(x). It can also be proven using the rearrangement inequality. Considering its length and the required prerequisites, the elementary proof by induction given below is probably the best choice for a first reading.

### Idea of the first two proofs

We have to show that

${\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\geq {\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}$ with equality only when all numbers are equal. If xi ≠ xj, then replacing both xi and xj by (xi + xj)/2 will leave the arithmetic mean on the left-hand side unchanged, but will increase the geometric mean on the right-hand side because

${\Bigl (}{\frac {x_{i}+x_{j}}{2}}{\Bigr )}^{2}-x_{i}x_{j}={\Bigl (}{\frac {x_{i}-x_{j}}{2}}{\Bigr )}^{2}>0.$ Thus the right-hand side will be largest (so the idea goes) when all the xi are equal to the arithmetic mean

$\alpha ={\frac {x_{1}+x_{2}+\ldots +x_{n}}{n}},$ thus as this is then the largest value of right-hand side of the expression, we have

${\frac {x_{1}+x_{2}+\ldots +x_{n}}{n}}=\alpha ={\sqrt[{n}]{\alpha \alpha \ldots \alpha }}\geq {\sqrt[{n}]{x_{1}x_{2}\ldots x_{n}}}.$ This is a valid proof for the case n = 2, but the procedure of iteratively taking pairwise averages may fail to produce n equal numbers in the case n ≥ 3. An example of this case is x1 = x2 ≠ x3: averaging two different numbers produces two equal numbers, but the third one is still different. Therefore, we never actually get an inequality involving the geometric mean of three equal numbers.

Hence, an additional trick or a modified argument is necessary to turn the above idea into a valid proof for the case n ≥ 3.

### Proof by induction

With the arithmetic mean

$\alpha ={\frac {\ x_{1}+\cdots +x_{n}}{n}}$ of the non-negative real numbers x1, . . . , xn, the AM–GM statement is equivalent to

$\alpha ^{n}\geq x_{1}x_{2}\cdots x_{n}\,$ with equality if and only if α = xi for all i ∈ {1, . . . , n}.

For the following proof we apply mathematical induction and only well-known rules of arithmetic.

Induction basis: For n = 1 the statement is true with equality.

Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers.

Induction step: Consider n + 1 non-negative real numbers x1, . . . , xn+1. Their arithmetic mean α satisfies

$(n+1)\alpha =\ x_{1}+\cdots +x_{n}+x_{n+1}.\,$ If all numbers are equal to α, then we have equality in the AM–GM statement and we are done. Otherwise we may find one number that is greater than α and one that is smaller than α, say xn > α and xn+1 < α. Then

$(x_{n}-\alpha )(\alpha -x_{n+1})>0\,.\qquad (*)$ Now consider the n numbers x1, . . . , xn−1, y with

$y:=x_{n}+x_{n+1}-\alpha \geq x_{n}-\alpha >0\,,$ which are also non-negative. Since

$n\alpha =x_{1}+\cdots +x_{n-1}+\underbrace {x_{n}+x_{n+1}-\alpha } _{=\,y},$ α is also the arithmetic mean of the n numbers x1, . . . , xn−1, y, and the induction hypothesis implies

$\alpha ^{n+1}=\alpha ^{n}\cdot \alpha \geq x_{1}x_{2}\cdots x_{n-1}y\cdot \alpha .\qquad (**)$ Due to (*) we know that

$(\underbrace {x_{n}+x_{n+1}-\alpha } _{=\,y})\alpha -x_{n}x_{n+1}=(x_{n}-\alpha )(\alpha -x_{n+1})>0,$ hence

$y\alpha >x_{n}x_{n+1}\,,\qquad ({*}{*}{*})$ in particular α > 0. Therefore, if at least one of the numbers x1, . . . , xn–1 is zero, then we already have strict inequality in (**). Otherwise the right-hand side of (**) is positive and strict inequality is obtained by using the estimate (***) to get a lower bound of the right-hand side of (**). Thus, in both cases we get

$\alpha ^{n+1}>x_{1}x_{2}\cdots x_{n-1}x_{n}x_{n+1}\,,$ which completes the proof.
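The key replacement in the induction step can be illustrated numerically. In this Python sketch the sample list is arbitrary, chosen so that one entry exceeds the mean and another falls below it:

```python
# Replacing the pair (x_n, x_{n+1}) that straddles the mean alpha by the
# single number y = x_n + x_{n+1} - alpha keeps alpha as the mean of the
# remaining n numbers, and satisfies y * alpha > x_n * x_{n+1}.
xs = [1.0, 2.0, 9.0, 0.0]          # n + 1 = 4 sample numbers
alpha = sum(xs) / len(xs)          # alpha = 3.0
xn, xn1 = 9.0, 0.0                 # xn > alpha, xn1 < alpha
y = xn + xn1 - alpha               # y = 6.0

reduced = [1.0, 2.0, y]            # n = 3 numbers with the same mean
assert sum(reduced) / len(reduced) == alpha
assert y * alpha > xn * xn1        # the estimate (***)
```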

### Proof by Cauchy using forward–backward induction

The following proof by cases relies directly on well-known rules of arithmetic but employs the rarely used technique of forward–backward induction. It is essentially due to Augustin Louis Cauchy and can be found in his Cours d'analyse.

#### The case where all the terms are equal

If all the terms are equal:

$x_{1}=x_{2}=\cdots =x_{n},$ then their sum is nx1, so their arithmetic mean is x1; and their product is x1ⁿ, so their geometric mean is x1; therefore, the arithmetic mean and geometric mean are equal, as desired.

#### The case where not all the terms are equal

It remains to show that if not all the terms are equal, then the arithmetic mean is greater than the geometric mean. Clearly, this is only possible when n > 1.

This case is significantly more complex, and we divide it into subcases.

##### The subcase where n = 2

If n = 2, then we have two terms, x1 and x2, and since (by our assumption) not all terms are equal, we have:

${\begin{aligned}{\Bigl (}{\frac {x_{1}+x_{2}}{2}}{\Bigr )}^{2}-x_{1}x_{2}&={\frac {1}{4}}(x_{1}^{2}+2x_{1}x_{2}+x_{2}^{2})-x_{1}x_{2}\\&={\frac {1}{4}}(x_{1}^{2}-2x_{1}x_{2}+x_{2}^{2})\\&={\Bigl (}{\frac {x_{1}-x_{2}}{2}}{\Bigr )}^{2}>0,\end{aligned}}$ hence

${\frac {x_{1}+x_{2}}{2}}>{\sqrt {x_{1}x_{2}}}$ as desired.

##### The subcase where n = 2ᵏ

Consider the case where n = 2ᵏ, where k is a positive integer. We proceed by mathematical induction.

In the base case, k = 1, so n = 2. We have already shown that the inequality holds when n = 2, so we are done.

Now, suppose that for a given k > 1, we have already shown that the inequality holds for n = 2ᵏ⁻¹, and we wish to show that it holds for n = 2ᵏ. To do so, we apply the inequality twice for 2ᵏ⁻¹ numbers and once for 2 numbers to obtain:

${\begin{aligned}{\frac {x_{1}+x_{2}+\cdots +x_{2^{k}}}{2^{k}}}&{}={\frac {{\frac {x_{1}+x_{2}+\cdots +x_{2^{k-1}}}{2^{k-1}}}+{\frac {x_{2^{k-1}+1}+x_{2^{k-1}+2}+\cdots +x_{2^{k}}}{2^{k-1}}}}{2}}\\[7pt]&\geq {\frac {{\sqrt[{2^{k-1}}]{x_{1}x_{2}\cdots x_{2^{k-1}}}}+{\sqrt[{2^{k-1}}]{x_{2^{k-1}+1}x_{2^{k-1}+2}\cdots x_{2^{k}}}}}{2}}\\[7pt]&\geq {\sqrt {{\sqrt[{2^{k-1}}]{x_{1}x_{2}\cdots x_{2^{k-1}}}}{\sqrt[{2^{k-1}}]{x_{2^{k-1}+1}x_{2^{k-1}+2}\cdots x_{2^{k}}}}}}\\[7pt]&={\sqrt[{2^{k}}]{x_{1}x_{2}\cdots x_{2^{k}}}}\end{aligned}}$ where in the first inequality, the two sides are equal only if

$x_{1}=x_{2}=\cdots =x_{2^{k-1}}$ and

$x_{2^{k-1}+1}=x_{2^{k-1}+2}=\cdots =x_{2^{k}}$ (in which case the first arithmetic mean and first geometric mean are both equal to x1, and similarly with the second arithmetic mean and second geometric mean); and in the second inequality, the two sides are only equal if the two geometric means are equal. Since not all 2ᵏ numbers are equal, it is not possible for both inequalities to be equalities, so we know that:

${\frac {x_{1}+x_{2}+\cdots +x_{2^{k}}}{2^{k}}}>{\sqrt[{2^{k}}]{x_{1}x_{2}\cdots x_{2^{k}}}}$ as desired.

##### The subcase where n < 2ᵏ

If n is not a natural power of 2, then it is certainly less than some natural power of 2, since the sequence 2, 4, 8, . . . , 2ᵏ, . . . is unbounded above. Therefore, without loss of generality, let m be some natural power of 2 that is greater than n.

So, if we have n terms, then let us denote their arithmetic mean by α, and expand our list of terms thus:

$x_{n+1}=x_{n+2}=\cdots =x_{m}=\alpha .$ We then have:

${\begin{aligned}\alpha &={\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\\[6pt]&={\frac {{\frac {m}{n}}\left(x_{1}+x_{2}+\cdots +x_{n}\right)}{m}}\\[6pt]&={\frac {x_{1}+x_{2}+\cdots +x_{n}+{\frac {m-n}{n}}\left(x_{1}+x_{2}+\cdots +x_{n}\right)}{m}}\\[6pt]&={\frac {x_{1}+x_{2}+\cdots +x_{n}+\left(m-n\right)\alpha }{m}}\\[6pt]&={\frac {x_{1}+x_{2}+\cdots +x_{n}+x_{n+1}+\cdots +x_{m}}{m}}\\[6pt]&>{\sqrt[{m}]{x_{1}x_{2}\cdots x_{n}x_{n+1}\cdots x_{m}}}\\[6pt]&={\sqrt[{m}]{x_{1}x_{2}\cdots x_{n}\alpha ^{m-n}}}\,,\end{aligned}}$ so

$\alpha ^{m}>x_{1}x_{2}\cdots x_{n}\alpha ^{m-n}$ and

$\alpha >{\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}$ as desired.
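The padding step can be sketched in Python (sample numbers arbitrary): for n = 3 we pad the list with the arithmetic mean itself up to m = 4, a power of 2, and recover the n = 3 inequality.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.prod(xs) ** (1 / len(xs))

xs = [1.0, 2.0, 4.0]                      # n = 3 terms
alpha = arithmetic_mean(xs)               # 7/3
padded = xs + [alpha] * (4 - len(xs))     # pad with alpha up to m = 4

# padding leaves the arithmetic mean unchanged ...
assert math.isclose(arithmetic_mean(padded), alpha)
# ... and the power-of-two case gives alpha >= GM of the padded list,
# i.e. alpha**4 >= x1*x2*x3*alpha, hence alpha**3 >= x1*x2*x3
assert alpha >= geometric_mean(padded)
assert alpha ** 3 >= math.prod(xs)
```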

### Proof by induction using basic calculus

The following proof uses mathematical induction and some basic differential calculus.

Induction basis: For n = 1 the statement is true with equality.

Induction hypothesis: Suppose that the AM–GM statement holds for all choices of n non-negative real numbers.

Induction step: In order to prove the statement for n + 1 non-negative real numbers x1, . . . , xn, xn+1, we need to prove that

${\frac {x_{1}+\cdots +x_{n}+x_{n+1}}{n+1}}-({x_{1}\cdots x_{n}x_{n+1}})^{\frac {1}{n+1}}\geq 0$ with equality only if all the n + 1 numbers are equal.

If all numbers are zero, the inequality holds with equality. If some but not all numbers are zero, we have strict inequality. Therefore, we may assume in the following that all n + 1 numbers are positive.

We consider the last number xn+1 as a variable and define the function

$f(t)={\frac {x_{1}+\cdots +x_{n}+t}{n+1}}-({x_{1}\cdots x_{n}t})^{\frac {1}{n+1}},\qquad t>0.$ Proving the induction step is equivalent to showing that f(t) ≥ 0 for all t > 0, with f(t) = 0 only if x1, . . . , xn and t are all equal. This can be done by analyzing the critical points of f using some basic calculus.

The first derivative of f is given by

$f'(t)={\frac {1}{n+1}}-{\frac {1}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n+1}}t^{-{\frac {n}{n+1}}},\qquad t>0.$ A critical point t0 has to satisfy f′(t0) = 0, which means

$({x_{1}\cdots x_{n}})^{\frac {1}{n+1}}t_{0}^{-{\frac {n}{n+1}}}=1.$ After a small rearrangement we get

$t_{0}^{\frac {n}{n+1}}=({x_{1}\cdots x_{n}})^{\frac {1}{n+1}},$ and finally

$t_{0}=({x_{1}\cdots x_{n}})^{\frac {1}{n}},$ which is the geometric mean of x1, . . . , xn. This is the only critical point of f. Since f′′(t) > 0 for all t > 0, the function f is strictly convex and has a strict global minimum at t0. Next we compute the value of the function at this global minimum:

${\begin{aligned}f(t_{0})&={\frac {x_{1}+\cdots +x_{n}+({x_{1}\cdots x_{n}})^{1/n}}{n+1}}-({x_{1}\cdots x_{n}})^{\frac {1}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n(n+1)}}\\&={\frac {x_{1}+\cdots +x_{n}}{n+1}}+{\frac {1}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n}}-({x_{1}\cdots x_{n}})^{\frac {1}{n}}\\&={\frac {x_{1}+\cdots +x_{n}}{n+1}}-{\frac {n}{n+1}}({x_{1}\cdots x_{n}})^{\frac {1}{n}}\\&={\frac {n}{n+1}}{\Bigl (}{\frac {x_{1}+\cdots +x_{n}}{n}}-({x_{1}\cdots x_{n}})^{\frac {1}{n}}{\Bigr )}\geq 0,\end{aligned}}$ where the final inequality holds due to the induction hypothesis. The hypothesis also says that we can have equality only when x1, . . . , xn are all equal. In this case, their geometric mean t0 has the same value. Hence, unless x1, . . . , xn, xn+1 are all equal, we have f(xn+1) > 0. This completes the proof.
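The behaviour of f can be checked numerically; this Python sketch uses arbitrary sample values for x1, . . . , xn:

```python
import math

xs = [1.0, 4.0, 6.0]     # sample values x1, ..., xn (not all equal)
n = len(xs)
p = math.prod(xs)

def f(t):
    # f(t) = (x1 + ... + xn + t)/(n+1) - (x1 * ... * xn * t)**(1/(n+1))
    return (sum(xs) + t) / (n + 1) - (p * t) ** (1 / (n + 1))

t0 = p ** (1 / n)        # the geometric mean, the unique critical point

# the minimum value equals n/(n+1) * (AM - GM) of x1, ..., xn
assert math.isclose(f(t0), n / (n + 1) * (sum(xs) / n - t0))
# f(t0) > 0 here since the sample values are not all equal
assert f(t0) > 0
# nearby points give larger values, consistent with a strict minimum
assert f(t0) < f(t0 - 0.5) and f(t0) < f(t0 + 0.5)
```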

This technique can be used in the same manner to prove the generalized AM–GM inequality and Cauchy–Schwarz inequality in Euclidean space Rn.

### Proof by Pólya using the exponential function

George Pólya provided a proof similar to what follows. Let f(x) = eˣ⁻¹ − x for all real x, with first derivative f′(x) = eˣ⁻¹ − 1 and second derivative f′′(x) = eˣ⁻¹. Observe that f(1) = 0, f′(1) = 0 and f′′(x) > 0 for all real x, hence f is strictly convex with the absolute minimum at x = 1. Hence x ≤ eˣ⁻¹ for all real x with equality only for x = 1.

Consider a list of non-negative real numbers x1, x2, . . . , xn. If they are all zero, then the AM–GM inequality holds with equality. Hence we may assume in the following that their arithmetic mean satisfies α > 0. By n-fold application of the above inequality, we obtain that

${\begin{aligned}{{\frac {x_{1}}{\alpha }}{\frac {x_{2}}{\alpha }}\cdots {\frac {x_{n}}{\alpha }}}&\leq {e^{{\frac {x_{1}}{\alpha }}-1}e^{{\frac {x_{2}}{\alpha }}-1}\cdots e^{{\frac {x_{n}}{\alpha }}-1}}\\&=\exp {\Bigl (}{\frac {x_{1}}{\alpha }}-1+{\frac {x_{2}}{\alpha }}-1+\cdots +{\frac {x_{n}}{\alpha }}-1{\Bigr )},\qquad (*)\end{aligned}}$ with equality if and only if xi = α for every i ∈ {1, . . . , n}. The argument of the exponential function can be simplified:

${\begin{aligned}{\frac {x_{1}}{\alpha }}-1+{\frac {x_{2}}{\alpha }}-1+\cdots +{\frac {x_{n}}{\alpha }}-1&={\frac {x_{1}+x_{2}+\cdots +x_{n}}{\alpha }}-n\\&=n-n\\&=0.\end{aligned}}$ Returning to (*),

${\frac {x_{1}x_{2}\cdots x_{n}}{\alpha ^{n}}}\leq e^{0}=1,$ which produces x1x2 · · · xn ≤ αⁿ, hence the result

${\sqrt[{n}]{x_{1}x_{2}\cdots x_{n}}}\leq \alpha .$

## Generalizations

### Weighted AM–GM inequality

There is a similar inequality for the weighted arithmetic mean and weighted geometric mean. Specifically, let the nonnegative numbers x1, x2, . . . , xn and the nonnegative weights w1, w2, . . . , wn be given. Set w = w1 + w2 + · · · + wn. If w > 0, then the inequality

${\frac {w_{1}x_{1}+w_{2}x_{2}+\cdots +w_{n}x_{n}}{w}}\geq {\sqrt[{w}]{x_{1}^{w_{1}}x_{2}^{w_{2}}\cdots x_{n}^{w_{n}}}}$ holds with equality if and only if all the xk with wk > 0 are equal. Here the convention 0⁰ = 1 is used.

If all wk = 1, this reduces to the above inequality of arithmetic and geometric means.
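A Python sketch of the weighted statement (the weights and values are arbitrary sample data):

```python
import math

def weighted_am(xs, ws):
    w = sum(ws)
    return sum(wk * xk for xk, wk in zip(xs, ws)) / w

def weighted_gm(xs, ws):
    # w-th root of the product of xk**wk; Python evaluates 0.0**0.0 as 1.0,
    # matching the convention used above
    w = sum(ws)
    return math.prod(xk ** wk for xk, wk in zip(xs, ws)) ** (1 / w)

xs = [2.0, 8.0, 3.0]
ws = [1.0, 3.0, 0.0]      # a zero weight simply drops that entry
assert weighted_am(xs, ws) >= weighted_gm(xs, ws)

# with unit weights this reduces to the ordinary AM-GM inequality
assert math.isclose(weighted_gm(xs, [1.0, 1.0, 1.0]),
                    math.prod(xs) ** (1 / 3))
```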

### Proof using Jensen's inequality

Using the finite form of Jensen's inequality for the natural logarithm, we can prove the inequality between the weighted arithmetic mean and the weighted geometric mean stated above.

Since an xk with weight wk = 0 has no influence on the inequality, we may assume in the following that all weights are positive. If all xk are equal, then equality holds. Therefore, it remains to prove strict inequality if they are not all equal, which we will assume in the following, too. If at least one xk is zero (but not all), then the weighted geometric mean is zero, while the weighted arithmetic mean is positive, hence strict inequality holds. Therefore, we may also assume that all xk are positive.

Since the natural logarithm is strictly concave, the finite form of Jensen's inequality and the functional equations of the natural logarithm imply

${\begin{aligned}\ln {\Bigl (}{\frac {w_{1}x_{1}+\cdots +w_{n}x_{n}}{w}}{\Bigr )}&>{\frac {w_{1}}{w}}\ln x_{1}+\cdots +{\frac {w_{n}}{w}}\ln x_{n}\\&=\ln {\sqrt[{w}]{x_{1}^{w_{1}}x_{2}^{w_{2}}\cdots x_{n}^{w_{n}}}}.\end{aligned}}$ Since the natural logarithm is strictly increasing,

${\frac {w_{1}x_{1}+\cdots +w_{n}x_{n}}{w}}>{\sqrt[{w}]{x_{1}^{w_{1}}x_{2}^{w_{2}}\cdots x_{n}^{w_{n}}}}.$

### Other generalizations

Other generalizations of the inequality of arithmetic and geometric means include: