In mathematics, a quadratic irrational (also known as a quadratic irrationality or quadratic surd) is an irrational number that is the solution to some quadratic equation with rational coefficients.[1] Since fractions in the coefficients of a quadratic equation can be cleared by multiplying both sides by their common denominator, a quadratic irrational is an irrational root of some quadratic equation whose coefficients are integers. The quadratic irrationals form the real algebraic numbers of degree 2 and can, therefore, be expressed in this form:

${\displaystyle {a+b{\sqrt {c}} \over d}}$

for integers a, b, c, d; with b and d non-zero, and with c > 1 and square-free. This implies that the quadratic irrationals have the same cardinality as ordered quadruples of integers, and are therefore countable.
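Isolating the square root and squaring shows that any number of this form satisfies a quadratic with integer coefficients: from dx − a = b√c one gets d²x² − 2adx + (a² − b²c) = 0. A minimal sketch in Python (the helper `quadratic_for` is illustrative, not standard notation):

```python
import math

def quadratic_for(a, b, c, d):
    """Integer coefficients (A, B, C) of a quadratic satisfied by
    x = (a + b*sqrt(c)) / d: isolating sqrt(c) and squaring gives
    (d*x - a)**2 = b**2 * c, i.e. d**2*x**2 - 2*a*d*x + (a**2 - b**2*c) = 0."""
    return d * d, -2 * a * d, a * a - b * b * c

# Example: x = (1 + 2*sqrt(3)) / 5 satisfies 25x^2 - 10x - 11 = 0.
A, B, C = quadratic_for(1, 2, 3, 5)
x = (1 + 2 * math.sqrt(3)) / 5
assert abs(A * x * x + B * x + C) < 1e-9
```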

The rational numbers together with all quadratic irrationals with a given c form a field, called a real quadratic field. In particular, their inverses are of the same form, since

${\displaystyle {d \over a+b{\sqrt {c}}}={ad-bd{\sqrt {c}} \over a^{2}-b^{2}c}.\,}$

This field is often called the field obtained by adjoining √c to the rational numbers, and denoted Q(√c).
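The closure under inverses can be checked numerically; this short sketch (values chosen arbitrarily for illustration) verifies the rationalization identity above:

```python
import math

a, b, c, d = 3, 2, 5, 7              # x = (3 + 2*sqrt(5)) / 7, c = 5 square-free
x = (a + b * math.sqrt(c)) / d

# Multiply numerator and denominator by the conjugate a - b*sqrt(c):
# d / (a + b*sqrt(c)) = (a*d - b*d*sqrt(c)) / (a**2 - b**2 * c)
inv = (a * d - b * d * math.sqrt(c)) / (a * a - b * b * c)

assert abs(x * inv - 1) < 1e-12      # the inverse has the same (a + b*sqrt(c))/d shape
```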

Quadratic irrationals have useful properties, especially in relation to continued fractions: the numbers whose continued fraction expansions are eventually periodic are exactly the quadratic irrationals. For example,

${\displaystyle {\sqrt {3}}=1.732\ldots =[1;1,2,1,2,1,2,\ldots ]}$
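The periodic expansion can be computed with the standard integer recurrence for the continued fraction of a square root; a self-contained sketch:

```python
import math

def sqrt_cf(n, terms=8):
    """First `terms` partial quotients of the continued fraction of sqrt(n),
    using the standard exact integer recurrence (no floating point)."""
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return [a0]                      # perfect square: expansion terminates
    m, d, a = 0, 1, a0
    out = [a0]
    while len(out) < terms:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

# sqrt(3) = [1; 1, 2, 1, 2, ...] -- the block (1, 2) repeats forever.
assert sqrt_cf(3) == [1, 1, 2, 1, 2, 1, 2, 1]
```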

## Square root of non-square is irrational

The definition of quadratic irrationals requires them to satisfy two conditions: they must satisfy a quadratic equation and they must be irrational. The solutions to the quadratic equation ax² + bx + c = 0 are

${\displaystyle {\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.}$

Thus quadratic irrationals are precisely those numbers in this form that are not rational. Since b and 2a are both integers, asking when the above quantity is irrational is the same as asking when the square root of an integer is irrational. The answer to this is that the square root of any natural number that is not a square number is irrational.
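This criterion reduces to a perfect-square test, which can be done exactly with integer arithmetic; a minimal sketch (the helper name is hypothetical):

```python
import math

def is_irrational_sqrt(n):
    """sqrt(n) is irrational exactly when the natural number n
    is not a perfect square."""
    r = math.isqrt(n)     # exact integer square root
    return r * r != n

# The non-squares up to 17 -- exactly the cases Theodorus is said to have covered.
irrational = [k for k in range(1, 18) if is_irrational_sqrt(k)]
assert irrational == [2, 3, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 17]
```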

The square root of 2 was the first such number to be proved irrational. Theodorus of Cyrene proved the irrationality of the square roots of whole numbers up to 17, but stopped there, probably because the algebra he used could not be applied to the square roots of numbers greater than 17. Euclid's Elements Book 10 is dedicated to the classification of irrational magnitudes. The original proof of the irrationality of the non-square natural numbers depends on Euclid's lemma.

Many proofs of the irrationality of the square roots of non-square natural numbers implicitly assume the fundamental theorem of arithmetic, which was first proven by Carl Friedrich Gauss in his Disquisitiones Arithmeticae. This asserts that every integer has a unique factorization into primes. For any rational non-integer in lowest terms there must be a prime in the denominator which does not divide into the numerator. When the numerator is squared that prime will still not divide into it because of the unique factorization. Therefore the square of a rational non-integer is always a non-integer; by contrapositive, the square root of an integer is always either another integer, or irrational.
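The key step, that squaring a rational non-integer in lowest terms cannot clear its denominator, can be seen in exact rational arithmetic:

```python
from fractions import Fraction

# 3/2 is in lowest terms; the prime 2 in its denominator does not divide
# the numerator 3, and by unique factorization it still does not divide
# 3**2 = 9, so the square (9/4) is again a non-integer.
x = Fraction(3, 2)
assert x.denominator > 1           # 3/2 is not an integer
assert (x * x) == Fraction(9, 4)
assert (x * x).denominator > 1     # its square is not an integer either
```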

Euclid used a restricted version of the fundamental theorem and some careful argument to prove the theorem. His proof is in Euclid's Elements Book X Proposition 9.[2]

The fundamental theorem of arithmetic is not actually required to prove the result though. There are self-contained proofs by Richard Dedekind,[3] among others. The following proof was adapted by Colin Richard Hughes from a proof of the irrationality of the square root of two found by Theodor Estermann in 1975.[4][5]

Assume D is a non-square natural number; then there is a natural number n such that:

n² < D < (n + 1)²,

so in particular

0 < √D − n < 1.

Assume the square root of D is a rational number p/q, and take q to be the smallest positive integer for which this holds, hence the smallest positive integer for which q√D is also an integer. Then:

(√D − n)q√D = qD − nq√D

is also an integer. But 0 < √D − n < 1, so (√D − n)q < q. Hence (√D − n)q is a positive integer smaller than q such that (√D − n)q·√D is also an integer. This is a contradiction, since q was defined to be the smallest positive integer with this property; hence √D cannot be rational.
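The descent step sends the hypothetical representation p/q to (qD − np)/(p − nq), with a strictly smaller denominator. While no rational p/q actually equals √D, applying the same map to good rational approximations of √D illustrates how the denominators shrink; a sketch (the examples use convergents of √2, chosen for illustration):

```python
import math

def descent(p, q, D):
    """Estermann-style descent step: if sqrt(D) equalled p/q, it would also
    equal (q*D - n*p) / (p - n*q), where n = floor(sqrt(D)) -- a representation
    with a strictly smaller positive denominator, which is the contradiction."""
    n = math.isqrt(D)
    return q * D - n * p, p - n * q

# Illustration (not part of the proof): starting from the continued-fraction
# convergent 99/70 of sqrt(2), the map keeps producing earlier convergents
# with smaller denominators.
assert descent(99, 70, 2) == (41, 29)
assert descent(41, 29, 2) == (17, 12)
```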