[[File:Mona Lisa eigenvector grid.png|thumb|270px|In this [[shear mapping]] the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping, and since its length is unchanged its eigenvalue is 1.]]
 
An '''eigenvector''' of a [[square matrix]] <math>A</math> is a non-zero [[vector (mathematics)|vector]] <math>v</math> that, when the matrix is [[matrix multiplication|multiplied]] by <math>v</math>, yields a constant multiple of <math>v</math>, the multiplier being commonly denoted by <math>\lambda</math>. That is:
 
<math>A v = \lambda v</math>
 
(Because this equation uses [[Matrix multiplication#All_matrices|post-multiplication]] by <math>v</math>, it describes a [[Eigenvalues_and_eigenvectors#Left_and_right_eigenvectors|right eigenvector]].)
 
The number <math>\lambda</math> is called the '''eigenvalue''' of <math>A</math> corresponding to <math>v</math>.<ref name=WolframEigenvector>
  Wolfram Research, Inc. (2010) [http://mathworld.wolfram.com/Eigenvector.html ''Eigenvector'']. Accessed on 2010-01-29.
</ref>
 
In [[analytic geometry]], for example, a three-element vector may be seen as an arrow in three-dimensional space starting at the origin. In that case, an eigenvector <math>v</math> is an arrow whose direction is either preserved or exactly reversed after multiplication by <math>A</math>. The corresponding eigenvalue determines how much the length of the arrow is changed by the operation, and its sign determines whether the arrow's direction is preserved (positive eigenvalue) or reversed (negative eigenvalue).
 
In abstract [[linear algebra]], these concepts are naturally extended to more general situations, where the set of real scalar factors is replaced by any [[field (mathematics)|field]] of [[scalar (mathematics)|scalars]] (such as [[algebraic numbers|algebraic]] or complex numbers); the set of [[Cartesian coordinates|Cartesian]] vectors <math>\mathbb{R}^n</math> is replaced by any [[vector space]] (such as the [[continuous function]]s, the [[polynomial]]s or the [[trigonometric series]]), and matrix multiplication is replaced by any [[linear operator]] that maps vectors to vectors (such as the [[derivative (calculus)|derivative]] from [[calculus]]). In such cases, the "vector" in "eigenvector" may be replaced by a more specific term, such as "[[eigenfunction]]", "[[eigenmode]]", "[[eigenface]]", or "eigenstate". Thus, for example, the exponential function <math>f(x) = a^x</math> is an eigenfunction of the derivative operator " <math>{}'</math> ", with eigenvalue <math>\lambda = \ln a</math>, since its derivative is <math>f'(x) = (\ln a)a^x = \lambda f(x)</math>.
 
The set of all eigenvectors of a matrix (or linear operator), each paired with its corresponding eigenvalue, is called the '''eigensystem''' of that matrix.<ref>William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery (2007), [http://www.nr.com/ ''Numerical Recipes: The Art of Scientific Computing''], Chapter 11: ''Eigensystems'', pp. 563–597. Third edition, Cambridge University Press. ISBN 9780521880688</ref>  Any non-zero multiple of an eigenvector is also an eigenvector, with the same eigenvalue. An '''eigenspace''' of a matrix <math>A</math> is the set of all eigenvectors with the same eigenvalue, together with the [[zero vector]].<ref name=WolframEigenvector/>  An '''eigenbasis''' for <math>A</math> is any [[basis (linear algebra)|basis]] of the whole vector space that consists of linearly independent eigenvectors of <math>A</math>.  Not every matrix has an eigenbasis, but every [[symmetric matrix]] does.
 
The terms '''characteristic vector''', '''characteristic value''', and '''characteristic space''' are also used for these concepts. The prefix '''[[wikt:eigen|eigen-]]''' is adopted from the [[German language|German]] word ''eigen'' for "self-" or "unique to", "peculiar to", or "belonging to."
 
Eigenvalues and eigenvectors have many applications in both pure and applied mathematics. They are used in [[matrix factorization]], in [[quantum mechanics]], and in many other areas.
 
==Definition==
 
===Eigenvectors and eigenvalues of a real matrix===
[[File:Eigenvalue equation.svg|thumb|right|250px|Matrix <math>A</math> acts by stretching the vector <math>x</math>, not changing its direction, so <math>x</math> is an eigenvector of <math>A</math>.]]
{{see also|Euclidean vector|Matrix (mathematics)}}
<!--No need to mention "linear operators" or "linear algebra" here. There is a section for that below.-->
In many contexts, a vector can be assumed to be a list of real numbers (called ''elements''), written vertically with brackets around the entire list, such as the vectors ''u'' and ''v'' below. Two vectors are said to be [[scalar multiplication|scalar multiples]] of each other (also called [[Parallel (geometry)|parallel]] or [[collinearity|collinear]]) if they have the same number of elements, and if every element of one vector is obtained by multiplying each corresponding element in the other vector by the same number (known as a ''scaling factor'', or a ''scalar''). For example, the vectors
:<math>u = \begin{bmatrix}1\\3\\4\end{bmatrix}\quad\quad\quad</math> and <math>\quad\quad\quad v = \begin{bmatrix}-20\\-60\\-80\end{bmatrix}</math>
are scalar multiples of each other, because each element of <math>v</math> is −20 times the corresponding element of <math>u</math>.
 
A vector with three elements, like <math>u</math> or <math>v</math> above, may represent a point in three-dimensional space, relative to some [[Cartesian coordinates|Cartesian coordinate system]]. It helps to think of such a vector as the tip of an arrow whose tail is at the origin of the coordinate system. In this case, the condition "<math>u</math> is parallel to <math>v</math>" means that the two arrows lie on the same straight line, and may differ only in length and direction along that line.
 
If we [[matrix multiplication|multiply]] any square matrix <math>A</math> with <math>n</math> rows and <math>n</math> columns by such a vector <math>v</math>, the result will be another vector <math>w = A v </math>, also with <math>n</math> rows and one column.  That is,
:<math>\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \quad\quad</math> is mapped to <math>
\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix}  \;=\;
\begin{bmatrix} A_{1,1} & A_{1,2} & \ldots & A_{1,n} \\
A_{2,1} & A_{2,2} & \ldots & A_{2,n} \\
\vdots &  \vdots &  \ddots &  \vdots \\
A_{n,1} & A_{n,2} & \ldots & A_{n,n} \\
\end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
</math>
where, for each index <math>i</math>,
:<math> w_i = A_{i,1} v_1 + A_{i,2} v_2 + \cdots + A_{i,n} v_n = \sum_{j = 1}^{n} A_{i,j} v_j</math>
In general, if <math>v</math> is a non-zero vector, the vectors <math>v</math> and <math>A v</math> will not be parallel. When they ''are'' parallel (that is, when there is some real number <math>\lambda</math> such that <math>A v = \lambda v</math>), we say that <math>v</math> is an '''eigenvector''' of <math>A</math>.  In that case, the scale factor <math>\lambda</math> is said to be the '''eigenvalue''' corresponding to that eigenvector.
 
In particular, multiplication by a 3×3 matrix <math>A</math> may change both the direction and the magnitude of an arrow <math>v</math> in three-dimensional space.  However, if <math>v</math> is an eigenvector of <math>A</math> with eigenvalue <math>\lambda</math>, the operation may only change its length, and either keep its direction or [[point reflection|flip]] it (make the arrow point in the exact opposite direction). Specifically, the length of the arrow will increase if <math>|\lambda| > 1</math>,  remain the same if <math>|\lambda| = 1</math>, and decrease if <math>|\lambda|< 1</math>.  Moreover, the direction will be precisely the same if <math>\lambda > 0</math>, and flipped if <math>\lambda < 0</math>.  If <math>\lambda = 0</math>, then the length of the arrow becomes zero.
 
====An example====
[[File:Eigenvectors.gif|right|frame|The transformation matrix <math>\bigl[ \begin{smallmatrix} 2 & 1\\ 1 & 2 \end{smallmatrix} \bigr]</math> preserves the direction of vectors parallel to <math>\bigl[ \begin{smallmatrix} 1 \\ 1 \end{smallmatrix} \bigr]</math> (in blue) and <math>\bigl[ \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \bigr]</math> (in violet). The points that lie on the line through the origin, parallel to an eigenvector, remain on the line after the transformation. The vectors in red are not eigenvectors, so their direction is altered by the transformation. See also: [[:File:Eigenvectors-extended.gif|An extended version, showing all four quadrants]].]]
 
For the transformation matrix
:<math>A = \begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix},</math>
the vector
:<math>v = \begin{bmatrix} 4 \\ -4 \end{bmatrix}</math>
is an eigenvector with eigenvalue 2. Indeed,
:<math>A v = \begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ -4 \end{bmatrix} = \begin{bmatrix} 3 \cdot 4 + 1 \cdot (-4) \\ 1 \cdot 4 + 3 \cdot (-4) \end{bmatrix}</math>
::<math> = \begin{bmatrix} 8 \\ -8 \end{bmatrix} = 2 \cdot \begin{bmatrix} 4 \\ -4 \end{bmatrix}.</math>
On the other hand, the vector
:<math>v = \begin{bmatrix} 0 \\ 1 \end{bmatrix}</math>
is ''not'' an eigenvector, since
:<math>\begin{bmatrix} 3 & 1\\1 & 3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \cdot 0 + 1 \cdot 1 \\ 1 \cdot 0 + 3 \cdot 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix},</math>
and this vector is not a multiple of the original vector <math>v</math>.
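
This computation can be checked numerically. The following sketch assumes only that NumPy is available; it verifies that the first vector is mapped to twice itself, while the second is not mapped to a multiple of itself:
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

v = np.array([4.0, -4.0])
print(A @ v)                       # [ 8. -8.], i.e. 2*v, so v is an eigenvector with eigenvalue 2
print(np.allclose(A @ v, 2 * v))   # True

w = np.array([0.0, 1.0])
print(A @ w)                       # [1. 3.], which is not a scalar multiple of w
</syntaxhighlight>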
 
====Another example====
For the matrix
:<math>A= \begin{bmatrix} 1 & 2 & 0\\0 & 2 & 0\\ 0 & 0 & 3\end{bmatrix},</math>
we have
:<math>A \begin{bmatrix} 1\\0\\0 \end{bmatrix} = \begin{bmatrix} 1\\0\\0 \end{bmatrix} = 1 \cdot \begin{bmatrix} 1\\0\\0 \end{bmatrix},\quad\quad</math>
:<math>A \begin{bmatrix} 0\\0\\1 \end{bmatrix} = \begin{bmatrix} 0\\0\\3 \end{bmatrix} = 3 \cdot \begin{bmatrix} 0\\0\\1 \end{bmatrix}.\quad\quad</math>
Therefore, the vectors <math>[1,0,0]^\mathsf{T}</math> and <math>[0,0,1]^\mathsf{T}</math> are eigenvectors of <math>A</math> corresponding to the eigenvalues 1 and 3 respectively. (Here the symbol <math>{}^\mathsf{T}</math> indicates [[transpose of a matrix|matrix transposition]], in this case turning the row vectors into column vectors.)
 
====Trivial cases====
The [[identity matrix]] <math>I</math> (whose general element <math>I_{i j}</math> is 1 if <math>i = j</math>, and 0 otherwise) maps every vector to itself.  Therefore, every vector is an eigenvector of <math>I</math>, with eigenvalue 1.
 
More generally, if <math>A</math> is a [[diagonal matrix]] (with <math>A_{i j} = 0</math> whenever <math>i \neq j</math>), and <math>v</math> is a vector parallel to axis <math>i</math> (that is, <math>v_i \neq 0</math>, and <math>v_j = 0</math> if <math>j \neq i</math>), then <math>A v = \lambda v</math> where <math>\lambda = A_{i i}</math>.  That is, the eigenvalues of a diagonal matrix are the elements of its main diagonal. This is trivially the case for ''any'' 1×1 matrix.
 
===General definition===
The concept of eigenvectors and eigenvalues extends naturally to abstract [[linear transformation]]s on abstract [[vector space]]s. Namely, let <math>V</math> be any vector space over some [[field (algebra)|field]] <math>K</math> of [[scalar (mathematics)|scalars]], and let <math>T</math> be a linear transformation mapping <math>V</math> into <math>V</math>.  We say that a non-zero vector <math>v</math> of <math>V</math> is an '''eigenvector''' of <math>T</math> if (and only if) there is a scalar <math>\lambda</math> in <math>K</math> such that
:<math>T(v)=\lambda v</math>.
This equation is called the [[eigenvalue equation]] for <math>T</math>, and the scalar <math>\lambda</math>  is the '''eigenvalue''' of <math>T</math> corresponding to the eigenvector <math>v</math>.  Note that <math>T(v)</math> means the result of applying the operator <math>T</math> to the vector <math>v</math>, while <math>\lambda v </math> means the product of the scalar <math>\lambda</math> by <math>v</math>.<ref>See {{Harvnb|Korn|Korn|2000|loc=Section 14.3.5a}}; {{Harvnb|Friedberg|Insel|Spence|1989|loc=p. 217}}</ref>
 
The matrix-specific definition is a special case of this abstract definition.  Namely, the vector space <math>V</math> is the set of all column vectors of a certain size <math>n</math>×1, and <math>T</math> is the linear transformation that consists in multiplying a vector by the given <math>n\times n</math> matrix <math>A</math>.
 
Some authors allow <math>v</math> to be the [[zero vector]] in the definition of eigenvector.<ref>{{Citation|last=Axler|first= Sheldon |title=Linear Algebra Done Right|edition=2nd |chapter=Ch. 5|page= 77}}</ref> This is reasonable as long as we define eigenvalues and eigenvectors carefully:  If we would like the zero vector to be an eigenvector, then we must first define an eigenvalue of <math> T </math> as a scalar <math> \lambda </math> in <math>K</math> such that there is a ''nonzero'' vector <math> v </math> in <math>V</math> with <math> T(v) = \lambda v </math>.  We then define an eigenvector to be a vector <math> v </math> in <math>V</math> such that there is an eigenvalue <math> \lambda </math> in <math>K</math> with <math> T(v) = \lambda v </math>. This way, we ensure that it is not the case that every scalar is an eigenvalue corresponding to the zero vector.
 
===Eigenspace and spectrum=== <!-- Geometric multiplicity links here -->
 
If <math>v</math> is an eigenvector of <math>T</math>, with eigenvalue <math>\lambda</math>, then any [[scalar multiplication|scalar multiple]] <math>\alpha v </math> of <math>v</math> with nonzero <math>\alpha</math> is also an eigenvector with eigenvalue <math>\lambda</math>, since <math>T(\alpha v) = \alpha T(v) = \alpha(\lambda v) = \lambda(\alpha v)</math>.  Moreover, if <math>u</math> and <math>v</math> are eigenvectors with the same eigenvalue <math>\lambda</math>, then <math>u+v</math> is also an eigenvector with the same eigenvalue <math>\lambda</math>.  Therefore, the set of all eigenvectors with the same eigenvalue <math>\lambda</math>, together with the zero vector, is a [[linear subspace]] of <math>V</math>, called the '''eigenspace''' of <math>T</math> associated to <math>\lambda</math>.<ref>{{Harvnb|Shilov|1977|loc=p. 109}}</ref><ref>[[b:The Book of Mathematical Proofs/Algebra/Linear Transformations#Lemma for the eigenspace|Lemma for the eigenspace]]</ref>  If that subspace has dimension 1, it is sometimes called an '''eigenline'''.<ref>''[http://books.google.com/books?id=pkESXAcIiCQC&pg=PA111 Schaum's Easy Outline of Linear Algebra]'', p. 111</ref>
 
The '''geometric multiplicity''' <math>\gamma_T(\lambda)</math> of an eigenvalue <math>\lambda</math> is the dimension of the eigenspace associated to <math>\lambda</math>, i.e. the maximum number of [[linear independence|linearly independent]] eigenvectors with that eigenvalue.
 
The eigenspaces of ''T'' always form a direct sum (and as a consequence any family of eigenvectors for different eigenvalues is always linearly independent). Therefore the sum of the dimensions of the eigenspaces cannot exceed the dimension ''n'' of the space on which ''T'' operates, and in particular there cannot be more than ''n'' distinct eigenvalues.<ref name="Shilov_lemma">For a proof of this lemma, see {{Harvnb|Roman|2008|loc=Theorem 8.2 on p. 186}}; {{Harvnb|Shilov|1977|loc=p. 109}}; {{Harvnb|Hefferon|2001|loc=p. 364}}; {{Harvnb|Beezer|2006|loc=Theorem EDELI on p. 469}}; and [[b:Famous Theorems of Mathematics/Algebra/Linear Transformations#Lemma for linear independence of eigenvectors|Lemma for linear independence of eigenvectors]]</ref>
 
Any subspace spanned by eigenvectors of <math>T</math> is an [[invariant subspace]] of <math>T</math>, and the restriction of ''T'' to such a subspace is diagonalizable.
 
The set of eigenvalues of <math>T</math> is sometimes called the [[Spectrum of a matrix|spectrum]] of <math>T</math>.
 
===Eigenbasis===
An '''eigenbasis''' for a linear operator <math>T</math> that operates on a vector space <math>V</math> is a basis for <math>V</math> that consists entirely of eigenvectors of <math>T</math> (possibly with different eigenvalues).  Such a basis exists precisely if the direct sum of the eigenspaces equals the whole space, in which case one can take the union of bases chosen in each of the eigenspaces as eigenbasis. The matrix of ''T'' in a given basis is diagonal precisely when that basis is an eigenbasis for ''T'', and for this reason ''T'' is called '''diagonalizable''' if it admits an eigenbasis.
 
==Generalizations to infinite-dimensional spaces==
{{details|Spectral theorem}}
The definition of eigenvalue of a linear transformation <math>T</math> remains valid even if the underlying space <math>V</math> is an infinite dimensional [[Hilbert space|Hilbert]] or [[Banach space]]. Namely, a scalar <math>\lambda</math> is an eigenvalue if and only if there is some nonzero vector <math>v</math> such that <math>T(v) = \lambda v</math>.
 
===Eigenfunctions===
A widely used class of linear operators acting on infinite dimensional spaces are the [[differential operator]]s on [[function space]]s. Let <math>D</math> be a linear differential operator on the space <math>\mathbf{C^\infty}</math> of infinitely [[derivative|differentiable]] real functions of a real argument <math>t</math>. The eigenvalue equation for <math>D</math> is the [[differential equation]]
:<math>D f = \lambda f</math>
The functions that satisfy this equation are commonly called '''[[eigenfunctions]]'''. For the derivative operator <math>d/dt</math>, an eigenfunction is a function that, when differentiated, yields a constant times the original function. If <math>\lambda</math> is zero, the generic solution is a constant function.  If <math>\lambda</math> is non-zero, the solution is an [[exponential function]]
: <math>f(t) = Ae^{\lambda t}.\ </math>
Eigenfunctions are an essential tool in the solution of differential equations and many other applied and theoretical fields. For instance, the exponential functions are eigenfunctions of any [[shift invariant operator|shift invariant linear operator]].  This fact is the basis of powerful [[Fourier transform]] methods for solving all sorts of problems.
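
As an illustrative symbolic check (assuming the computer algebra package SymPy, which is not among this article's sources), one can confirm that <math>f(t) = Ae^{\lambda t}</math> satisfies the eigenvalue equation for the derivative operator:
<syntaxhighlight lang="python">
import sympy as sp

t, lam, A = sp.symbols('t lambda A')
f = A * sp.exp(lam * t)

# Apply D = d/dt and compare with lambda * f
Df = sp.diff(f, t)
print(sp.simplify(Df - lam * f))   # 0, so f is an eigenfunction of D with eigenvalue lambda
</syntaxhighlight>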
 
===Spectral theory===
If <math>\lambda</math> is an eigenvalue of <math>T</math>, then the operator <math>T-\lambda I</math> is not one-to-one, and therefore its inverse <math>(T-\lambda I)^{-1}</math> is not defined.  The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional ones. In general, the operator <math>T - \lambda I</math> may not have an inverse, even if <math>\lambda</math> is not an eigenvalue.
 
For this reason, in [[functional analysis]] one defines the [[spectrum (functional analysis)|spectrum of a linear operator]] <math>T</math> as the set of all scalars <math>\lambda</math> for which the operator <math>T-\lambda I</math> has no [[bounded operator|bounded]] inverse.  Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.
 
===Associative algebras and representation theory===
More algebraically, rather than generalizing the vector space to an infinite dimensional space, one can generalize the algebraic object that is acting on the space, replacing a single operator acting on a vector space with an [[algebra representation]] – an [[associative algebra]] acting on a [[module (mathematics)|module]]. The study of such actions is the field of [[representation theory]].
 
A closer analog of eigenvalues is given by the [[weight (representation theory)|representation-theoretical concept of weight]], with the analogs of eigenvectors and eigenspaces being ''weight vectors'' and ''weight spaces.''
 
==Eigenvalues and eigenvectors of matrices==
 
===Characteristic polynomial===
The eigenvalue equation for a matrix <math>A</math> is
: <math>A v - \lambda v = 0,</math>
which is equivalent to
: <math>(A-\lambda I)v = 0,</math>
where <math>I</math> is the <math>n\times n</math> [[identity matrix]]. It is a fundamental result of linear algebra that an equation <math>M v = 0</math> has a non-zero solution <math>v</math> if, and only if, the [[determinant]] <math>\det(M)</math> of the matrix <math>M</math> is zero.  It follows that the eigenvalues of <math>A</math> are precisely the values of <math>\lambda</math> that satisfy the equation
: <math>\det(A-\lambda I) = 0</math>
 
The left-hand side of this equation can be seen (using [[Leibniz formula for determinants|Leibniz' rule]] for the determinant) to be a [[polynomial]] function of the variable <math>\lambda</math>. The [[degree of a polynomial|degree]] of this polynomial is <math>n</math>, the order of the matrix.  Its [[coefficient]]s depend on the entries of <math>A</math>, except that its term of degree <math>n</math> is always <math>(-1)^n\lambda^n</math>. This polynomial is called the ''[[characteristic polynomial]]'' of <math>A</math>; and the above equation is called the ''characteristic equation'' (or, less often, the ''secular equation'') of <math>A</math>.
 
For example, let <math>A</math> be the matrix
:<math>A =
\begin{bmatrix}
2 & 0 & 0 \\
0 & 3 & 4 \\
0 & 4 & 9
\end{bmatrix}</math>
 
The characteristic polynomial of <math>A</math> is
:<math>\det (A-\lambda I) \;=\; \det \left(\begin{bmatrix}
2 & 0 & 0 \\
0 & 3 & 4 \\
0 & 4 & 9
\end{bmatrix} - \lambda
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}\right) \;=\;
\det \begin{bmatrix}
2 - \lambda & 0 & 0 \\
0 & 3 - \lambda & 4 \\
0 & 4 & 9 - \lambda
\end{bmatrix}</math>
which is
:<math> (2 - \lambda) \bigl[ (3 - \lambda) (9 - \lambda) - 16 \bigr] = -\lambda^3 + 14\lambda^2 - 35\lambda + 22</math>
The roots of this polynomial are 2, 1, and 11.  Indeed these are the only three eigenvalues of <math>A</math>, corresponding to the eigenvectors <math>[1,0,0]',</math> <math>[0,2,-1]',</math> and <math>[0,1,2]'</math> (or any non-zero multiples thereof).
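
The same numbers can be reproduced numerically. In the sketch below (NumPy assumed), <code>numpy.poly</code> returns the coefficients of the characteristic polynomial of a matrix (in the monic convention <math>\det(\lambda I - A)</math>) and <code>numpy.roots</code> finds its roots, which agree with the eigenvalues returned directly by <code>numpy.linalg.eigvals</code>:
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

coeffs = np.poly(A)          # coefficients of det(lambda*I - A): approximately [1, -14, 35, -22]
print(coeffs)
print(np.roots(coeffs))      # roots of the characteristic polynomial: 11, 2, 1 (in some order)
print(np.linalg.eigvals(A))  # the same values, computed directly from the matrix
</syntaxhighlight>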
 
====In the real domain====
Since the eigenvalues are roots of the characteristic polynomial, an <math>n\times n</math> matrix has at most <math>n</math> eigenvalues.  If the matrix has real entries, the coefficients of the characteristic polynomial are all real; but it may have fewer than <math>n</math> real roots, or no real roots at all.
 
For example, consider the [[permutation matrix|cyclic permutation matrix]]
:<math>A = \begin{bmatrix} 0 & 1 & 0\\0 & 0 & 1\\ 1 & 0 & 0\end{bmatrix}</math>
This matrix shifts the coordinates of the vector up by one position, and moves the first coordinate to the bottom.  Its characteristic polynomial is <math>1 - \lambda^3</math> which has one real root <math>\lambda_1 = 1</math>.  Any vector with three equal non-zero elements is an eigenvector for this eigenvalue. For example,
:<math>
    A \begin{bmatrix} 5\\5\\5 \end{bmatrix} =
    \begin{bmatrix} 5\\5\\5 \end{bmatrix} =
    1 \cdot \begin{bmatrix} 5\\5\\5 \end{bmatrix}
  </math>
 
====In the complex domain====
The [[fundamental theorem of algebra]] implies that the characteristic polynomial of an <math>n\times n</math> matrix <math>A</math>, being a polynomial of degree <math>n</math>, has exactly <math>n</math> complex [[root]]s.  More precisely, it can be [[factorization|factored]] into the product of <math>n</math> linear terms,
:<math> \det(A-\lambda I) = (\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots(\lambda_n - \lambda)</math>
where each <math>\lambda_i</math> is a complex number. The numbers <math>\lambda_1</math>, <math>\lambda_2</math>, ... <math>\lambda_n</math>,  (which may not be all distinct) are roots of the polynomial, and are precisely the eigenvalues of <math>A</math>.
 
Even if the entries of <math>A</math> are all real numbers, the eigenvalues may still have non-zero imaginary parts (and the elements of the corresponding eigenvectors will therefore also have non-zero imaginary parts).  Also, the eigenvalues may be [[irrational number]]s even if all the entries of <math>A</math> are [[rational number]]s, or all are integers. However, if the entries of <math>A</math> are [[algebraic number]]s (which include the rationals), the eigenvalues will be (complex) algebraic numbers too.
 
The non-real roots of a polynomial with real coefficients can be grouped into pairs of [[complex conjugate]] values, the two members of each pair having the same real part and imaginary parts that differ only in sign.  If the degree is odd, then by the [[intermediate value theorem]] at least one of the roots will be real. Therefore, any real matrix with odd order will have at least one real eigenvalue; whereas a real matrix with even order may have no real eigenvalues.
 
In the example of the 3×3 cyclic permutation matrix <math>A</math>, above, the characteristic polynomial <math>1 - \lambda^3</math> has two additional non-real roots, namely
:<math>\lambda_2 = -1/2 + \mathbf{i}\sqrt{3}/2\quad\quad</math> and <math>\quad\quad\lambda_3 = \lambda_2^* = -1/2 - \mathbf{i}\sqrt{3}/2</math>,
where <math>\mathbf{i}= \sqrt{-1}</math> is the imaginary unit.  Note that <math>\lambda_2\lambda_3 = 1</math>, <math>\lambda_2^2 = \lambda_3</math>, and <math>\lambda_3^2 = \lambda_2</math>. Then
:<math>
  A \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} =
  \begin{bmatrix} \lambda_2\\ \lambda_3 \\1 \end{bmatrix} =
  \lambda_2 \cdot \begin{bmatrix} 1\\ \lambda_2 \\ \lambda_3 \end{bmatrix}
  \quad\quad
  </math> and <math>
  \quad\quad
  A \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix} =
  \begin{bmatrix} \lambda_3 \\ \lambda_2 \\ 1 \end{bmatrix} =
  \lambda_3 \cdot \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix}
  </math>
Therefore, the vectors <math>[1,\lambda_2,\lambda_3]'</math> and <math>[1,\lambda_3,\lambda_2]'</math> are eigenvectors of <math>A</math>, with eigenvalues <math>\lambda_2</math> and <math>\lambda_3</math>, respectively.
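
These complex eigenvalues and eigenvectors can also be obtained numerically; a minimal sketch (NumPy assumed):
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

vals, vecs = np.linalg.eig(A)
print(vals)   # 1 and the pair -0.5 +/- i*sqrt(3)/2 (the complex cube roots of unity)

# Each column of vecs is a normalized eigenvector; check the eigenvalue equation
for k in range(3):
    print(np.allclose(A @ vecs[:, k], vals[k] * vecs[:, k]))   # True for each k
</syntaxhighlight>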
 
===Algebraic multiplicities===
Let <math>\lambda_i</math> be an eigenvalue of an <math>n\times n</math> matrix <math>A</math>.  The ''algebraic multiplicity'' <math>\mu_A(\lambda_i)</math> of <math>\lambda_i</math> is its [[multiplicity (mathematics)#Multiplicity of a root of a polynomial|multiplicity as a root]] of the characteristic polynomial, that is, the largest integer <math>k</math> such that <math>(\lambda - \lambda_i)^k</math> [[polynomial division|divides]] that polynomial.
 
Like the geometric multiplicity <math>\gamma_A(\lambda_i)</math>, the algebraic multiplicity is an integer between 1 and <math>n</math>; and the sum <math>\boldsymbol{\mu}_A</math> of <math>\mu_A(\lambda_i)</math> over all ''distinct'' eigenvalues also cannot exceed <math>n</math>.  If complex eigenvalues are considered, <math>\boldsymbol{\mu}_A</math> is exactly <math>n</math>.
 
It can be proved that the geometric multiplicity <math>\gamma_A(\lambda_i)</math> of an eigenvalue never exceeds its algebraic multiplicity <math>\mu_A(\lambda_i)</math>.  Therefore, <math>\boldsymbol{\gamma}_A</math> is at most  <math>\boldsymbol{\mu}_A</math>.
 
If  <math>\gamma_A(\lambda_i) = \mu_A(\lambda_i)</math>, then <math>\lambda_i</math> is said to be a ''semisimple eigenvalue''.
 
====Example====
For the matrix
:<math>A= \begin{bmatrix}
2 & 0 & 0 & 0 \\
1 & 2 & 0 & 0 \\
0 & 1 & 3 & 0  \\
0 & 0 & 1 & 3 
\end{bmatrix},</math>
the characteristic polynomial of <math>A</math> is
:<math>\det (A-\lambda I) \;=\;
\det \begin{bmatrix}
2- \lambda & 0 & 0 & 0 \\
1 & 2- \lambda & 0 & 0 \\
0 & 1 & 3- \lambda & 0  \\
0 & 0 & 1 & 3- \lambda 
\end{bmatrix} \;=\; (2 - \lambda)^2 (3 - \lambda)^2,</math>
since <math>A - \lambda I</math> is lower triangular and the determinant of a triangular matrix is the product of its diagonal entries.
 
The roots of this polynomial, and hence the eigenvalues, are 2 and 3.
The ''algebraic multiplicity'' of each eigenvalue is 2; in other words they are both double roots.
On the other hand, the ''geometric multiplicity'' of the eigenvalue 2 is only 1, because its eigenspace is spanned by the vector <math>[0,1,-1,1]</math>, and is therefore 1 dimensional.
Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by <math>[0,0,0,1]</math>. Hence, the total algebraic multiplicity of A, denoted <math>\mu_A</math>, is 4, which is the most it could be for a 4 by 4 matrix. The geometric multiplicity <math>\gamma_A</math> is 2, which is the smallest it could be for a matrix which has two distinct eigenvalues.
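
Since the eigenspace of <math>\lambda_i</math> is the null space of <math>A-\lambda_i I</math>, the geometric multiplicity equals <math>n</math> minus the rank of that matrix. A minimal numerical sketch (NumPy assumed):
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 3.0]])
n = A.shape[0]

for lam in (2.0, 3.0):
    gamma = n - np.linalg.matrix_rank(A - lam * np.eye(n))   # dimension of the eigenspace
    print(lam, gamma)   # 2.0 1  and  3.0 1, while each algebraic multiplicity is 2
</syntaxhighlight>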
 
===Diagonalization and eigendecomposition===
If the sum <math>\boldsymbol{\gamma}_A</math> of the geometric multiplicities of all eigenvalues is exactly <math>n</math>, then <math>A</math> has a set of <math>n</math> linearly independent eigenvectors.  Let <math>Q</math> be a square matrix whose columns are those eigenvectors, in any order. Then we will have <math>A Q = Q\Lambda </math>, where <math>\Lambda</math> is the diagonal matrix such that <math>\Lambda_{i i}</math> is the eigenvalue associated to column <math>i</math> of <math>Q</math>. Since the columns of <math>Q</math> are linearly independent, the matrix <math>Q</math> is invertible.  Premultiplying both sides by <math>Q^{-1}</math> we get <math>Q^{-1}A Q = \Lambda</math>.  By definition, therefore, the matrix <math>A</math> is [[diagonalizable matrix|diagonalizable]].
 
Conversely, if <math>A</math> is diagonalizable, let <math>Q</math> be a non-singular square matrix such that <math>Q^{-1} A Q</math> is some diagonal matrix <math>D</math>.  Multiplying both sides on the left by <math>Q</math> we get <math>A Q = Q D </math>.  Therefore each column of <math>Q</math> must be an eigenvector of <math>A</math>, whose eigenvalue is the corresponding element on the diagonal of <math>D</math>. Since the columns of <math>Q</math> must be linearly independent, it follows that <math>\boldsymbol{\gamma}_A = n</math>.  Thus <math>\boldsymbol{\gamma}_A</math> is equal to <math>n</math> if and only if <math>A</math> is diagonalizable.
 
If <math>A</math> is diagonalizable, the space of all <math>n</math>-element vectors can be decomposed into the direct sum of the eigenspaces of <math>A</math>. This decomposition is called the [[eigendecomposition of a matrix|eigendecomposition]] of <math>A</math>, and it is preserved under change of coordinates.
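
A minimal numerical sketch of this construction (NumPy assumed), reusing the 2×2 matrix from the first example above:
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

vals, Q = np.linalg.eig(A)          # columns of Q are linearly independent eigenvectors of A
Lambda = np.linalg.inv(Q) @ A @ Q   # Q^{-1} A Q should be diagonal
print(np.round(Lambda, 10))
print(np.allclose(Lambda, np.diag(vals)))   # True: the diagonal holds the eigenvalues
</syntaxhighlight>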
 
A matrix that is not diagonalizable is said to be [[defective matrix|defective]]. For defective matrices, the notion of eigenvector can be generalized to [[generalized eigenvector]]s, and that of diagonal matrix to a [[Jordan form]] matrix. Over an algebraically closed field, any matrix <math>A</math> has a [[Jordan form]] and therefore admits a basis of generalized eigenvectors, and a decomposition into [[generalized eigenspace]]s.
 
===Further properties===
Let <math>A</math> be an arbitrary <math>n\times n</math> matrix of complex numbers with eigenvalues <math>\lambda_1</math>, <math>\lambda_2</math>, ... <math>\lambda_n</math>. (Here it is understood that an eigenvalue with algebraic multiplicity <math>\mu</math> occurs <math>\mu</math> times in this list.) Then
* The [[trace (linear algebra)|trace]] of <math>A</math>, defined as the sum of its diagonal elements, is also the sum of all eigenvalues:
:<math>\operatorname{tr}(A) = \sum_{i=1}^n A_{i i} = \sum_{i=1}^n \lambda_i = \lambda_1+ \lambda_2 +\cdots+ \lambda_n</math>.
* The [[determinant]] of <math>A</math> is the product of all eigenvalues:
:<math>\operatorname{det}(A) = \prod_{i=1}^n \lambda_i=\lambda_1\lambda_2\cdots\lambda_n</math>.
* The eigenvalues of the <math>k</math>th power of <math>A</math>, i.e. the eigenvalues of <math>A^k</math>, for any positive integer <math>k</math>, are <math>\lambda_1^k,\lambda_2^k,\dots,\lambda_n^k</math>
* The matrix <math>A</math> is invertible if and only if all the eigenvalues <math>\lambda_i</math> are nonzero.
* If <math>A</math> is invertible, then the eigenvalues of  <math>A^{-1}</math> are <math>1/\lambda_1,1/\lambda_2,\dots,1/\lambda_n</math>
* If <math>A</math> is equal to its [[conjugate transpose]] <math>A^*</math> (in other words, if <math>A</math> is [[Hermitian matrix|Hermitian]]), then every eigenvalue is real.  The same is true of any [[symmetric matrix|symmetric]] real matrix. If <math>A</math> is also [[Positive-definite matrix|positive-definite]], positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
* Every eigenvalue of a [[unitary matrix]] has absolute value <math>|\lambda|=1</math>.
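
These identities are easy to verify numerically. The sketch below (NumPy assumed; the matrix is a small random complex matrix chosen only for illustration) checks the trace, determinant, power and inverse statements, and the reality of Hermitian eigenvalues:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
vals = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), vals.sum()))               # trace = sum of the eigenvalues
print(np.isclose(np.linalg.det(A), vals.prod()))         # determinant = product of the eigenvalues
# Consistent with "eigenvalues of A^k are lambda^k" and "eigenvalues of A^-1 are 1/lambda":
print(np.isclose(np.trace(A @ A @ A), (vals ** 3).sum()))
print(np.isclose(np.linalg.det(np.linalg.inv(A)), (1.0 / vals).prod()))

H = A + A.conj().T                                       # a Hermitian matrix
print(np.allclose(np.linalg.eigvals(H).imag, 0.0))       # True: its eigenvalues are real
</syntaxhighlight>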
 
=== Left and right eigenvectors ===
{{see also|left and right (algebra)}}
The use of matrices with a single column (rather than a single row) to represent vectors is traditional in many disciplines.  For that reason, the word "eigenvector" almost always means a '''right eigenvector''', namely a ''column'' vector that must be placed to the ''right'' of the matrix <math>A</math> in the defining equation
:<math>A v = \lambda v</math>.
There may also be single-''row'' vectors that are merely scaled when they occur on the ''left'' side of a product with a square matrix <math>A</math>; that is, row vectors that satisfy the equation
:<math>u A = \lambda u</math>
Any such row vector <math>u</math> is called a '''left eigenvector''' of <math>A</math>.
 
The left eigenvectors of <math>A</math> are transposes of the right eigenvectors of the transposed matrix <math>A^\mathsf{T}</math>, since their defining equation is equivalent to
:<math>A^\mathsf{T} u^\mathsf{T}  = \lambda u^\mathsf{T}</math>
 
It follows that, if <math>A</math> is [[Hermitian matrix|Hermitian]], its left and right eigenvectors are [[complex conjugate vector space|complex conjugates]]. In particular if <math>A</math> is a real symmetric matrix, they are the same except for transposition.
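
A minimal numerical sketch (NumPy assumed), using the non-symmetric matrix that reappears in the ''Computing the eigenvectors'' section below:
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

vals, vr = np.linalg.eig(A)      # columns of vr are right eigenvectors:  A v = lambda v
vals_t, vl = np.linalg.eig(A.T)  # right eigenvectors of A^T give left eigenvectors of A

u = vl[:, 0]                                            # left eigenvector for vals_t[0]
print(np.allclose(u @ A, vals_t[0] * u))                # True:  u A = lambda u
print(np.allclose(A @ vr[:, 0], vals[0] * vr[:, 0]))    # True:  A v = lambda v
</syntaxhighlight>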
 
==Calculation==
{{main|Eigenvalue algorithm}}
 
===Computing the eigenvalues===
The eigenvalues of a matrix <math>A</math> can be determined by finding the roots of the characteristic polynomial. Explicit [[algebraic solution|algebraic formulas]] for the roots of a polynomial exist only if the degree <math>n</math> is 4 or less. According to the [[Abel–Ruffini theorem]] there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more.
 
It turns out that any polynomial with degree <math>n</math> is the characteristic polynomial of some [[companion matrix]] of order <math>n</math>.  Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate [[numerical method]]s.
 
In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required [[accuracy]].<ref name=TrefethenBau/>  However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable [[round-off error]]s, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by [[Wilkinson's polynomial]]).<ref name=TrefethenBau>{{Citation|first1=Lloyd N. |last1=Trefethen |first2= David|last2= Bau|title=Numerical Linear Algebra|publisher=SIAM|year=1997}}</ref>
 
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the advent of the [[QR algorithm]] in 1961.
<ref name=TrefethenBau/> Combining the [[Householder transformation]] with the LU decomposition results in an algorithm with better convergence than the QR algorithm.{{citation needed|date=March 2013}} For large [[Hermitian matrix|Hermitian]] [[sparse matrix|sparse matrices]], the [[Lanczos algorithm]] is one example of an efficient [[iterative method]] to compute eigenvalues and eigenvectors, among several other possibilities.<ref name=TrefethenBau/>
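
In practice, therefore, one calls a library eigensolver rather than manipulating the characteristic polynomial. A minimal sketch (NumPy and SciPy assumed; <code>numpy.linalg.eigvals</code> wraps a dense LAPACK solver, while <code>scipy.sparse.linalg.eigsh</code> is an ARPACK-based Lanczos-type iterative solver for large sparse Hermitian matrices):
<syntaxhighlight lang="python">
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

# Dense case
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])
print(np.linalg.eigvals(A))      # 2, 1 and 11, in some order

# Large sparse symmetric case: a 1-D discrete Laplacian
n = 1000
T = sparse.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')
largest = spla.eigsh(T, k=3, which='LA', return_eigenvectors=False)
print(largest)                   # the three largest eigenvalues, all just below 4
</syntaxhighlight>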
 
===Computing the eigenvectors===
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be obtained by finding non-zero solutions of the eigenvalue equation, which becomes a [[linear system|system of linear equations]] with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix
:<math>A = \begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}</math>
we can find its eigenvectors by solving the equation <math>A v = 6 v</math>, that is
:<math>\begin{bmatrix} 4 & 1\\6 & 3 \end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = 6 \cdot \begin{bmatrix}x\\y\end{bmatrix}</math>
This matrix equation is equivalent to two [[linear equation]]s
:<math>
  \left\{\begin{matrix} 4x + {\ }y &{}= 6x\\6x + 3y &{}=6 y\end{matrix}\right.
  \quad\quad\quad</math> that is <math>
  \left\{\begin{matrix} -2x+ {\ }y &{}=0\\+6x-3y &{}=0\end{matrix}\right.
</math>
Both equations reduce to the single linear equation <math>y=2x</math>. Therefore, any vector of the form <math>[a,2a]'</math>, for any non-zero real number <math>a</math>, is an eigenvector of <math>A</math> with eigenvalue <math>\lambda = 6</math>.
 
The matrix <math>A</math> above has another eigenvalue <math>\lambda=1</math>. A similar calculation shows that the corresponding eigenvectors are the non-zero solutions of <math>3x+y=0</math>, that is, any vector of the form <math>[b,-3b]'</math>, for any non-zero real number <math>b</math>.
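
Equivalently, the eigenvectors for a known eigenvalue <math>\lambda</math> form the null space of <math>A - \lambda I</math>, which can be computed numerically; a minimal sketch (SciPy assumed):
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

for lam in (6.0, 1.0):
    ns = null_space(A - lam * np.eye(2))   # columns span the eigenspace of lam
    print(lam, ns.ravel())
# The eigenspace of 6 is spanned by a multiple of [1, 2],
# and the eigenspace of 1 by a multiple of [1, -3], as found above.
</syntaxhighlight>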
 
Some numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation.
 
==History==
Eigenvalues are often introduced in the context of [[linear algebra]] or [[matrix (mathematics)|matrix theory]]. Historically, however, they arose in the study of [[quadratic form]]s and [[differential equation]]s.
 
In the 18th century [[Leonhard Euler|Euler]] studied the rotational motion of a [[rigid body]] and discovered the importance of the [[Principal axis (mechanics)|principal axes]]. [[Lagrange]] realized that the principal axes are the eigenvectors of the inertia matrix.<ref>See {{Harvnb|Hawkins|1975|loc=§2}}</ref> In the early 19th century, [[Augustin Louis Cauchy|Cauchy]] saw how their work could be used to classify the [[quadric surface]]s, and generalized it to arbitrary dimensions.<ref name="hawkins3">See {{Harvnb|Hawkins|1975|loc=§3}}</ref> Cauchy also coined the term ''racine caractéristique'' (characteristic root) for what is now called ''eigenvalue''; his term survives in ''[[Characteristic polynomial#Characteristic equation|characteristic equation]]''.<ref name="kline807">See {{Harvnb|Kline|1972|loc=pp. 807–808}}</ref>
 
[[Joseph Fourier|Fourier]] used the work of Laplace and Lagrange to solve the [[heat equation]] by [[separation of variables]] in his famous 1822 book ''[[Théorie analytique de la chaleur]]''.<ref>See {{Harvnb|Kline|1972|loc=p. 673}}</ref> [[Jacques Charles François Sturm|Sturm]] developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.<ref name="hawkins3" /> This was extended by [[Charles Hermite|Hermite]] in 1855 to what are now called [[Hermitian matrix|Hermitian matrices]].<ref name="kline807" /> Around the same time, [[Francesco Brioschi|Brioschi]] proved that the eigenvalues of [[orthogonal matrix|orthogonal matrices]] lie on the [[unit circle]],<ref name="hawkins3" /> and [[Alfred Clebsch|Clebsch]] found the corresponding result for [[skew-symmetric matrix|skew-symmetric matrices]].<ref name="kline807" /> Finally, [[Karl Weierstrass|Weierstrass]] clarified an important aspect in the [[stability theory]] started by Laplace by realizing that [[defective matrix|defective matrices]] can cause instability.<ref name="hawkins3" />
 
In the meantime, [[Joseph Liouville|Liouville]] studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called ''[[Sturm–Liouville theory]]''.<ref>See {{Harvnb|Kline|1972|loc=pp. 715–716}}</ref> [[Hermann Schwarz|Schwarz]] studied the first eigenvalue of [[Laplace's equation]] on general domains towards the end of the 19th century, while [[Henri Poincaré|Poincaré]] studied [[Poisson's equation]] a few years later.<ref>See {{Harvnb|Kline|1972|loc=pp. 706–707}}</ref>
 
At the start of the 20th century, [[David Hilbert|Hilbert]] studied the eigenvalues of [[integral operator]]s by viewing the operators as infinite matrices.<ref>See {{Harvnb|Kline|1972|loc=p. 1063}}</ref> He was the first to use the [[German language|German]] word ''eigen'' to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by [[Helmholtz]]. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.<ref>See {{Harvnb|Aldrich|2006}}</ref>
 
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when [[Richard Edler von Mises|Von Mises]] published the [[power method]]. One of the most popular methods today, the [[QR algorithm]], was proposed independently by [[John G.F. Francis]]<ref>{{Citation|first=J. G. F. |last=Francis|title=The QR Transformation, I (part 1)|journal=The Computer Journal|volume= 4|issue= 3|pages =265–271 |year=1961|doi=10.1093/comjnl/4.3.265}} and {{Citation|doi=10.1093/comjnl/4.4.332|first=J. G. F. |last=Francis|title=The QR Transformation, II (part 2)|journal=The Computer Journal|volume=4|issue= 4| pages= 332–345|year=1962}}</ref> and [[Vera Kublanovskaya]]<ref>{{Citation|first=Vera N. |last=Kublanovskaya|title=On some algorithms for the solution of the complete eigenvalue problem|journal=USSR Computational Mathematics and Mathematical Physics|volume= 3| pages= 637–657 |year=1961}}. Also published in: {{Citation|journal=Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki|volume=1|issue=4| pages =555–570 |year=1961}}</ref> in 1961.<ref>See {{Harvnb|Golub|van Loan|1996|loc=§7.3}}; {{Harvnb|Meyer|2000|loc=§7.3}}</ref>
 
==Applications==
 
===Eigenvalues of geometric transformations===
The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.
{| class="wikitable" style="text-align:center; margin:1em auto 1em auto;"
|-
|
| [[Scaling (geometry)|scaling]]
| unequal scaling
| [[Rotation (geometry)|rotation]]
| [[Shear mapping|horizontal shear]]
| [[hyperbolic rotation]]
|-
|illustration
|| [[File:Homothety in two dim.svg|100px|Equal scaling ([[homothety]])]]
|| [[File:Unequal scaling.svg|100px|Vertical shrink (<math>k_2 < 1</math>) and horizontal stretch (<math>k_1 > 1</math>) of a unit square.]]
|| [[File:Rotation.png|100px|Rotation by 50 degrees]]
|| [[File:Shear.svg|100px|center|Horizontal shear mapping]]
|| [[File:Squeeze r=1.5.svg|100px|<math>e^\varphi = \frac 3 2</math>]]
|-
|matrix
| <math> \begin{bmatrix}k & 0\\0 & k\end{bmatrix}</math><br />&nbsp;<br />&nbsp;
| <math> \begin{bmatrix}k_1 & 0\\0 & k_2\end{bmatrix}</math><br />&nbsp;<br />&nbsp;
| <math> \begin{bmatrix}c & -s \\ s & c\end{bmatrix} </math><br /><math>c=\cos\theta</math><br /><math>s=\sin\theta</math>
| <math> \begin{bmatrix}1 & k\\ 0 & 1\end{bmatrix} </math><br />&nbsp;<br />&nbsp;
| <math>\begin{bmatrix} c & s \\ s & c \end{bmatrix}</math><br /><math>c=\cosh \varphi</math><br /><math>s=\sinh \varphi</math>
|-
|characteristic<br />polynomial
| <math>\ (\lambda - k)^2</math>
| <math>(\lambda - k_1)(\lambda - k_2)</math>
| <math>\lambda^2 - 2c\lambda + 1</math>
| <math>\ (\lambda - 1)^2</math>
| <math>\lambda^2 - 2c\lambda + 1</math>
|-
|eigenvalues <math>\lambda_i</math>
|<math>\lambda_1 = \lambda_2 = k</math>
|<math>\lambda_1 = k_1</math><br /><math>\lambda_2 = k_2</math>
|<math>\lambda_1 = e^{\mathbf{i}\theta}=c+s\mathbf{i}</math><br /><math>\lambda_2 = e^{-\mathbf{i}\theta}=c-s\mathbf{i}</math>
|<math>\lambda_1 = \lambda_2 = 1</math>
|<math>\lambda_1 = e^\varphi</math><br /><math>\lambda_2 = e^{-\varphi}</math>,
|-
|algebraic multipl.<br /><math>\mu_i=\mu(\lambda_i)</math>
|<math>\mu_1 = 2</math>
|<math>\mu_1 = 1</math><br /><math>\mu_2 = 1</math>
|<math>\mu_1 = 1</math><br /><math>\mu_2 = 1</math>
|<math>\mu_1 = 2</math>
|<math>\mu_1 = 1</math><br /><math>\mu_2 = 1</math>
|-
|geometric multipl.<br /><math>\gamma_i = \gamma(\lambda_i)</math>
|<math>\gamma_1 = 2</math>
|<math>\gamma_1 = 1</math><br /><math>\gamma_2 = 1</math>
|<math>\gamma_1 = 1</math><br /><math>\gamma_2 = 1</math>
|<math>\gamma_1 = 1</math>
|<math>\gamma_1 = 1</math><br /><math>\gamma_2 = 1</math>
|-
|eigenvectors
|All non-zero vectors
|<math>u_1 = \begin{bmatrix}1\\0\end{bmatrix}</math><br /><math>u_2 = \begin{bmatrix}0\\1\end{bmatrix}</math>
|<math>u_1 = \begin{bmatrix}{\ }1\\-\mathbf{i}\end{bmatrix}</math><br /><math>u_2 = \begin{bmatrix}{\ }1\\ +\mathbf{i}\end{bmatrix}</math>
|<math>u_1 = \begin{bmatrix}1\\0\end{bmatrix}</math>
|<math>u_1 = \begin{bmatrix}{\ }1\\{\ }1\end{bmatrix}</math><br /><math>u_2 = \begin{bmatrix}{\ }1\\-1\end{bmatrix}.</math>
|}
 
Note that the characteristic equation for a rotation is a [[quadratic equation]] with [[discriminant]] <math>D = -4(\sin\theta)^2</math>, which is a negative number whenever <math>\theta</math> is not an integer multiple of 180°.  Therefore, except for these special cases, the two eigenvalues are complex numbers, <math>\cos\theta \pm \mathbf{i}\sin\theta</math>; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
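
A numerical illustration (NumPy assumed) for a rotation by 50°:
<syntaxhighlight lang="python">
import numpy as np

theta = np.deg2rad(50.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals = np.linalg.eigvals(R)
print(vals)                             # cos(theta) +/- i*sin(theta)
print(np.allclose(np.abs(vals), 1.0))   # True: both eigenvalues lie on the unit circle
</syntaxhighlight>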
 
===Schrödinger equation===<!-- This section is linked from [[Eigenstate]] -->
 
[[File:HAtomOrbitals.png|thumb|271px|The [[wavefunction]]s associated with the [[bound state]]s of an [[electron]] in a [[hydrogen atom]] can be seen as the eigenvectors of the [[hydrogen atom|hydrogen atom Hamiltonian]] as well as of the [[angular momentum operator]]. They are associated with eigenvalues interpreted as their energies (increasing downward: <math>n=1,2,3,\ldots</math>) and [[angular momentum]] (increasing across: <!-- do not italicize! -->s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher [[probability density function|probability density]] for a position [[measurement in quantum mechanics|measurement]]. The center of each figure is the [[atomic nucleus]], a [[proton]].]]
 
An example of an eigenvalue equation where the transformation <math>T</math> is represented in terms of a differential operator is the time-independent [[Schrödinger equation]] in [[quantum mechanics]]:
 
: <math>H\psi_E = E\psi_E \,</math>
 
where <math>H</math>, the [[Hamiltonian (quantum mechanics)|Hamiltonian]], is a second-order [[differential operator]] and <math>\psi_E</math>, the [[wavefunction]], is one of its eigenfunctions corresponding to the eigenvalue <math>E</math>, interpreted as its [[energy]].
 
However, in the case where one is interested only in the [[bound state]] solutions of the Schrödinger equation, one looks for <math>\psi_E</math> within the space of [[Square-integrable function|square integrable]] functions. Since this space is a [[Hilbert space]] with a well-defined [[scalar product]], one can introduce a [[Basis (linear algebra)|basis set]] in which <math>\psi_E</math> and <math>H</math> can be represented as a one-dimensional array and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.
 
[[Bra-ket notation]] is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by <math>|\Psi_E\rangle</math>. In this notation, the Schrödinger equation is:
 
: <math>H|\Psi_E\rangle = E|\Psi_E\rangle</math>
 
where <math>|\Psi_E\rangle</math> is an '''eigenstate''' of <math>H</math>, and <math>H</math> is a [[self adjoint operator]], the infinite-dimensional analog of Hermitian matrices (''see [[Observable]]''). As in the matrix case, in the equation above <math>H|\Psi_E\rangle</math> is understood to be the vector obtained by application of the transformation <math>H</math> to <math>|\Psi_E\rangle</math>.
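
As a purely illustrative finite-dimensional sketch (not taken from this article's sources), the Hamiltonian of a particle in a one-dimensional box can be discretized on a grid, turning the differential eigenvalue problem into an ordinary matrix eigenvalue problem. NumPy is assumed, with units <math>\hbar = m = 1</math> and a box of unit length:
<syntaxhighlight lang="python">
import numpy as np

# H = -(1/2) d^2/dx^2 on [0, 1] with psi(0) = psi(1) = 0,
# discretized by central differences on N interior grid points.
N = 500
h = 1.0 / (N + 1)
H = (np.diag(np.full(N, 2.0))
     + np.diag(np.full(N - 1, -1.0), 1)
     + np.diag(np.full(N - 1, -1.0), -1)) / (2.0 * h * h)

E = np.linalg.eigvalsh(H)[:3]                      # three lowest energy eigenvalues
print(E)
print(np.array([1, 2, 3]) ** 2 * np.pi ** 2 / 2)   # exact values n^2 pi^2 / 2 that they approximate
</syntaxhighlight>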
 
===Molecular orbitals===
In [[quantum mechanics]], and in particular in [[atomic physics|atomic]] and [[molecular physics]], within the [[Hartree–Fock]] theory, the [[atomic orbital|atomic]] and [[molecular orbital]]s can be defined by the eigenvectors of the [[Fock operator]]. The corresponding eigenvalues are interpreted as [[ionization potential]]s via [[Koopmans' theorem]]. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an [[iteration]] procedure, called in this case the [[self-consistent field]] method. In [[quantum chemistry]], one often represents the Hartree–Fock equation in a non-[[orthogonal]] [[basis set (chemistry)|basis set]]. This particular representation is a [[generalized eigenvalue problem]] called [[Roothaan equations]].
 
===Geology and glaciology===
In [[geology]], especially in the study of [[glacial till]], eigenvectors and eigenvalues are used as a method by which a mass of information of a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of [[clasts]] in a soil sample, which can only be compared graphically such as in a Tri-Plot (Sneed and Folk) diagram,<ref>{{Citation|doi=10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C|last1=Graham|first1=D.|last2=Midgley|first2= N.|title=Graphical representation of particle shape using triangular diagrams: an Excel spreadsheet method|year= 2000|journal= Earth Surface Processes and Landforms |volume=25|pages=1473–1477|issue=13}}</ref><ref>{{Citation|doi=10.1086/626490|last1=Sneed|first1= E. D.|last2=Folk|first2= R. L.|year= 1958|title=Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis|journal= Journal of Geology|volume= 66|issue=2|pages=114–150}}</ref> or as a Stereonet on a Wulff Net.<ref>{{Citation |doi=10.1016/S0098-3004(97)00122-2 |last1=Knox-Robinson |year=1998 |first1=C |pages=243 |volume=24 |journal=Computers & Geosciences|title= GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system |issue=3 |last2=Gardoll |first2=Stephen J}}</ref>
 
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered <math>v_1, v_2, v_3</math> by their eigenvalues <math>E_1 \geq E_2 \geq E_3</math>;<ref>[http://www.ruhr-uni-bochum.de/hardrock/downloads.htm Stereo32 software]</ref> <math>v_1</math> then is the primary orientation/dip of clast, <math>v_2</math> is the secondary and <math>v_3</math> is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a [[compass rose]] of [[turn (geometry)|360°]]. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of <math>E_1</math>, <math>E_2</math>, and <math>E_3</math> are dictated by the nature of the sediment's fabric. If <math>E_1 = E_2 = E_3</math>, the fabric is said to be isotropic. If <math>E_1 = E_2 > E_3</math>, the fabric is said to be planar. If <math>E_1 > E_2 > E_3</math>, the fabric is said to be linear.<ref>{{Citation|last1=Benn|first1= D.|last2=Evans|first2=D.|year=2004|title= A Practical Guide to the study of Glacial Sediments|location= London|publisher=Arnold|pages=103–107}}</ref>
 
===Principal components analysis===
[[File:GaussianScatterPCA.png|thumb|right|PCA of the [[multivariate Gaussian distribution]] centered at <math>(1,3)</math> with a standard deviation of 3 in roughly the <math>(0.878,0.478)</math> direction and of&nbsp;1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) [[covariance matrix]] scaled by the square root of the corresponding eigenvalue. (Just as in the one-dimensional case, the square root is taken because the [[standard deviation]] is more readily visualized than the [[variance]].)]]
{{Main|Principal component analysis}}
{{See also|Positive semidefinite matrix|Factor analysis}}
 
The [[Eigendecomposition of a matrix#Symmetric matrices|eigendecomposition]] of a [[symmetric matrix|symmetric]] [[positive semidefinite matrix|positive semidefinite]] (PSD) [[positive semidefinite matrix|matrix]] yields an [[orthogonal basis]] of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in [[multivariate statistics|multivariate analysis]], where the [[sample variance|sample]] [[covariance matrix|covariance matrices]] are PSD. This orthogonal decomposition is called [[principal components analysis]] (PCA) in statistics. PCA studies [[linear relation]]s among variables. PCA is performed on the [[covariance matrix]] or the  [[correlation matrix]] (in which each variable is scaled to have its [[sample variance]] equal to one). For the covariance or correlation matrix, the eigenvectors correspond to [[principal components analysis|principal components]] and the eigenvalues to the [[explained variance|variance explained]] by the principal components. Principal component analysis of the correlation matrix provides an [[orthogonal basis|orthonormal eigen-basis]] for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal-components that are associated with most of the covariability among a number of observed data.
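
A minimal sketch of this procedure (NumPy assumed; the two-dimensional data are synthetic and chosen only for illustration):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
latent = rng.standard_normal(500)
# Synthetic 2-D data whose variance is concentrated along one direction
X = np.column_stack([3.0 * latent,
                     1.0 * latent + 0.5 * rng.standard_normal(500)])

C = np.cov(X, rowvar=False)            # 2x2 sample covariance matrix (symmetric PSD)
eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues, orthonormal eigenvectors

order = np.argsort(eigvals)[::-1]      # largest explained variance first
print(eigvals[order])                  # variance explained by each principal component
print(eigvecs[:, order])               # columns are the principal directions
</syntaxhighlight>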
 
Principal component analysis is used to study [[data mining|large]] [[data set]]s, such as those encountered in [[data mining]], [[chemometrics|chemical research]], [[psychometrics|psychology]], and in [[marketing]]. PCA is popular especially in psychology, in the field of [[psychometrics]]. In [[Q methodology]], the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of ''practical'' significance (which differs from the [[statistical significance]] of [[hypothesis testing]]; cf. [[Factor analysis#Criteria for determining the number of factors|criteria for determining the number of factors]]).  More generally, principal component analysis can be used as a method of [[factor analysis]] in [[structural equation model]]ing.
 
===Vibration analysis===
[[File:beam mode 1.gif|thumb|225px|1st lateral bending (See [[vibration]] for more types of vibration)]]
{{Main|Vibration}}
 
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many [[Degrees of freedom (mechanics)|degrees of freedom]]. The eigenvalues are used to determine the natural frequencies (or '''eigenfrequencies''') of vibration, and the eigenvectors determine the shapes of these vibrational modes. In particular, undamped vibration is governed by
:<math>m\ddot x + kx = 0</math>
or
:<math>m\ddot x = -k x</math>
that is, acceleration is proportional to position (i.e., we expect <math>x</math> to be sinusoidal in time).
 
In <math>n</math> dimensions, <math>m</math> becomes a [[mass matrix]] and <math>k</math> a [[stiffness matrix]]. Admissible solutions are then a linear combination of solutions to the [[generalized eigenvalue problem]]
:<math>k x = \omega^2 m x</math>
where <math>\omega^2</math> is the eigenvalue and <math>\omega</math> is the [[angular frequency]]. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of <math>k</math> alone. Furthermore, [[damped vibration]], governed by
:<math>m\ddot x + c \dot x + kx = 0</math>
leads to a so-called [[quadratic eigenvalue problem]],
:<math>(\omega^2 m + \omega c + k)x = 0.</math>
This can be reduced to a generalized eigenvalue problem by [[Quadratic eigenvalue problem#Methods of Solution|clever use of algebra]] at the cost of solving a larger system.
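A small sketch of the undamped case for a hypothetical two-degree-of-freedom mass–spring chain follows, solving the generalized eigenvalue problem <math>k x = \omega^2 m x</math> with SciPy; the numerical values are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF chain: masses m1, m2 and spring stiffnesses k1, k2, k3.
m1, m2 = 2.0, 1.0
k1, k2, k3 = 100.0, 50.0, 100.0

M = np.diag([m1, m2])                     # mass matrix
K = np.array([[k1 + k2, -k2],
              [-k2,      k2 + k3]])       # stiffness matrix

# Generalized symmetric-definite eigenproblem: K x = (omega^2) M x.
omega_squared, modes = eigh(K, M)

natural_frequencies = np.sqrt(omega_squared)   # rad/s
print("eigenfrequencies (rad/s):", natural_frequencies)
print("mode shapes (columns):")
print(modes)
</syntaxhighlight>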
 
The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using [[finite element analysis]], which generalizes the solution of scalar-valued vibration problems to systems with many degrees of freedom.
 
===Eigenfaces===
[[File:Eigenfaces.png|thumb|200px|[[Eigenface]]s as examples of eigenvectors]]
{{Main|Eigenface}}
In [[image processing]], processed images of [[face]]s can be seen as vectors whose components are the [[brightness]]es of each [[pixel]].<ref>{{Citation
| last=Xirouhakis
| first=A.
| first2=G.
| last2=Votsis
| first3=A.
| last3=Delopoulus
| title=Estimation of 3D motion and structure of human faces
| publisher=Online paper in PDF format, National Technical University of Athens
| url=http://www.image.ece.ntua.gr/papers/43.pdf
|format=PDF| year=2004
}}</ref> The dimension of this vector space is the number of pixels. The eigenvectors of the [[covariance matrix]] associated with a large set of normalized pictures of faces are called '''[[eigenface]]s'''; this is an example of [[principal components analysis]]. They are very useful for expressing any face image as a [[linear combination]] of some of them. In the [[Facial recognition system|facial recognition]] branch of [[biometrics]], eigenfaces provide a means of applying [[data compression]] to faces for [[Recognition of human individuals|identification]] purposes. Research on eigen vision systems for determining hand gestures has also been conducted.
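A compact sketch of the eigenface idea, assuming a stack of already-normalized grayscale images is available as a NumPy array; random data stand in for real face images here.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a normalized face data set: 200 images of 32x32 pixels.
images = rng.random((200, 32, 32))
X = images.reshape(len(images), -1)        # each row = one image as a vector

# Subtract the mean face and form the covariance matrix of the pixels.
mean_face = X.mean(axis=0)
Xc = X - mean_face
C = np.cov(Xc, rowvar=False)

# Eigenvectors of the covariance matrix are the eigenfaces.
eigenvalues, eigenvectors = np.linalg.eigh(C)
order = np.argsort(eigenvalues)[::-1]
eigenfaces = eigenvectors[:, order[:20]]   # keep the top 20

# Any centered face can be approximated as a linear combination
# of the retained eigenfaces.
weights = Xc @ eigenfaces                  # coefficients per image
reconstruction = mean_face + weights @ eigenfaces.T
</syntaxhighlight>

In practice the same decomposition is usually obtained from the singular value decomposition of the (much smaller) data matrix rather than by forming the full pixel covariance matrix.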
 
Similar to this concept, '''eigenvoices''' represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems, for speaker adaptation.
 
===Tensor of moment of inertia===
In [[mechanics]], the eigenvectors of the [[moment of inertia#The inertia tensor|moment of inertia tensor]] define the [[principal axis (mechanics)|principal axes]] of a [[rigid body]]. The [[tensor]] of moment of [[inertia]] is a key quantity required to determine the rotation of a rigid body around its [[center of mass]].
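For instance, the inertia tensor of a collection of point masses can be assembled from <math>\sum_i m_i \left(\|r_i\|^2 I - r_i r_i^{\mathsf T}\right)</math> and then diagonalized; the eigenvectors are the principal axes and the eigenvalues the principal moments of inertia. The following sketch uses made-up masses and positions.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical rigid body: point masses (kg) at positions (m).
masses = np.array([1.0, 2.0, 1.5, 0.5])
positions = np.array([[ 0.1,  0.2, -0.1],
                      [-0.2,  0.1,  0.3],
                      [ 0.3, -0.1,  0.0],
                      [-0.1, -0.3,  0.2]])

# Shift coordinates so the origin is the center of mass.
positions -= (masses[:, None] * positions).sum(axis=0) / masses.sum()

# Inertia tensor: sum_i m_i * (|r_i|^2 I - r_i r_i^T).
inertia = np.zeros((3, 3))
for m, r in zip(masses, positions):
    inertia += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

# Principal moments (eigenvalues) and principal axes (eigenvectors).
principal_moments, principal_axes = np.linalg.eigh(inertia)
print("principal moments of inertia:", principal_moments)
print("principal axes (columns):")
print(principal_axes)
</syntaxhighlight>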
 
===Stress tensor===
In [[solid mechanics]], the [[stress (mechanics)|stress]] tensor is symmetric and so can be decomposed into a [[diagonal]] tensor with the eigenvalues on the diagonal and the eigenvectors as a basis. Because it is diagonal in this orientation, the stress tensor has no [[Shear (mathematics)|shear]] components; the components it does have are the principal components.
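A brief numerical illustration, assuming a symmetric stress tensor given in an arbitrary coordinate frame (the values are made up): rotating into the eigenvector basis leaves only the principal stresses on the diagonal.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical symmetric stress tensor (e.g., in MPa) in the original frame.
sigma = np.array([[ 50.0,  30.0,  20.0],
                  [ 30.0, -20.0, -10.0],
                  [ 20.0, -10.0,  10.0]])

# Principal stresses (eigenvalues) and principal directions (eigenvectors).
principal_stresses, directions = np.linalg.eigh(sigma)

# Expressing the tensor in the principal basis removes the shear components.
sigma_principal = directions.T @ sigma @ directions
assert np.allclose(sigma_principal, np.diag(principal_stresses))

print("principal stresses:", principal_stresses)
</syntaxhighlight>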
 
===Eigenvalues of a graph===
In [[spectral graph theory]], an eigenvalue of a [[graph theory|graph]] is defined as an eigenvalue of the graph's [[adjacency matrix]] <math>A</math>, or (increasingly) of the graph's [[Laplacian matrix]] (see also [[Discrete Laplace operator]]), which is either <math>T - A</math> (sometimes called the ''combinatorial Laplacian'') or <math>I - T^{-1/2}A T^{-1/2}</math> (sometimes called the ''normalized Laplacian''), where <math>T</math> is a diagonal matrix with <math>T_{i i}</math> equal to the degree of vertex <math>v_i</math>, and in <math>T^{-1/2}</math>, the <math>i</math>th diagonal entry is <math>1/\sqrt{\operatorname{deg}(v_i)}</math>. The <math>k</math>th principal eigenvector of a graph is defined as either the eigenvector corresponding to the <math>k</math>th largest or <math>k</math>th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
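To make these definitions concrete, the following sketch builds the adjacency matrix, degree matrix, combinatorial Laplacian, and normalized Laplacian for a hypothetical four-vertex graph and computes their spectra.

<syntaxhighlight lang="python">
import numpy as np

# Adjacency matrix of a small undirected graph with edges 0-1, 1-2, 1-3, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

degrees = A.sum(axis=1)
T = np.diag(degrees)                       # degree matrix

L_combinatorial = T - A                    # combinatorial Laplacian
T_inv_sqrt = np.diag(1.0 / np.sqrt(degrees))
L_normalized = np.eye(len(A)) - T_inv_sqrt @ A @ T_inv_sqrt

# Both Laplacians are symmetric, so their eigenvalues are real.
print("combinatorial Laplacian eigenvalues:", np.linalg.eigvalsh(L_combinatorial))
print("normalized Laplacian eigenvalues:", np.linalg.eigvalsh(L_normalized))
</syntaxhighlight>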
 
The principal eigenvector is used to measure the [[eigenvector centrality|centrality]] of its vertices. An example is [[Google]]'s [[PageRank]] algorithm. The principal eigenvector of a modified [[adjacency matrix]] of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the [[stationary distribution]] of the [[Markov chain]] represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The eigenvector corresponding to the second smallest eigenvalue of the Laplacian can be used to partition the graph into clusters, via [[spectral clustering]]. Other methods are also available for clustering.
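The following is a rough sketch of the idea behind PageRank, not the production algorithm: the row-normalized adjacency matrix is modified with a damping factor so that a stationary distribution exists, and the principal (left) eigenvector is then found by power iteration. The link structure used here is invented.

<syntaxhighlight lang="python">
import numpy as np

# Adjacency matrix of a tiny directed "web": A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

n = len(A)
damping = 0.85

# Row-normalize to a transition matrix (no dangling pages in this example).
P = A / A.sum(axis=1, keepdims=True)

# Mix with a uniform jump so the chain is irreducible and a unique
# stationary distribution exists.
G = damping * P + (1.0 - damping) / n

# Power iteration for the stationary distribution pi, satisfying pi = pi G.
pi = np.full(n, 1.0 / n)
for _ in range(100):
    pi = pi @ G
    pi /= pi.sum()

print("page ranks:", pi)
</syntaxhighlight>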
 
===Basic reproduction number===
{{Main|Basic reproduction number}}
The basic reproduction number (<math>R_0</math>) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then <math>R_0</math> is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, <math>t_G</math>, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time <math>t_G</math> has passed. <math>R_0</math> is then the largest eigenvalue of the next generation matrix.<ref>{{Citation
| author = Diekmann O, Heesterbeek JAP, Metz JAJ
| year = 1990
| title = On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations
| journal = Journal of Mathematical Biology
| volume = 28
| issue = 4
| pages =365–382
| pmid = 2117040
| doi = 10.1007/BF00178324
}}</ref><ref>{{Citation
| author = Odo Diekmann and J. A. P. Heesterbeek
| title = Mathematical epidemiology of infectious diseases
| series = Wiley series in mathematical and computational biology
| publisher = John Wiley & Sons
| location = West Sussex, England
| year = 2000
}}</ref>
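A minimal sketch for a hypothetical two-group population: build a next-generation matrix (the entries are illustrative, not taken from any real disease) and take its largest eigenvalue as <math>R_0</math>.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical next-generation matrix K for two groups:
# K[i, j] = expected number of new infections in group i caused by
# one typical infected individual in group j over one generation.
K = np.array([[1.2, 0.4],
              [0.3, 0.8]])

# R0 is the largest eigenvalue (spectral radius) of K.
eigenvalues = np.linalg.eigvals(K)
R0 = max(abs(eigenvalues))

print("R0 =", R0)
</syntaxhighlight>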
 
==See also==
* [[Antieigenvalue theory]]
* [[Eigenplane]]
* [[Eigenvalue algorithm]]
* [[Introduction to eigenstates]]
* [[Jordan normal form]]
* [[List of numerical analysis software]]
* [[Nonlinear eigenproblem]]
* [[Quadratic eigenvalue problem]]
 
==Notes==
{{reflist|2}}
 
 
==External links==
{{Wikibooks|Linear Algebra|Eigenvalues and Eigenvectors}}
 
{{Wikibooks|The Book of Mathematical Proofs|Algebra/Linear Transformations}}
* [http://www.physlink.com/education/AskExperts/ae520.cfm What are Eigen Values?] – non-technical introduction from PhysLink.com's "Ask the Experts"
* [http://people.revoledu.com/kardi/tutorial/LinearAlgebra/EigenValueEigenVector.html Eigen Values and Eigen Vectors Numerical Examples] – Tutorial and Interactive Program from Revoledu.
* [http://khanexercises.appspot.com/video?v=PhfbEr2btGQ Introduction to Eigen Vectors and Eigen Values] – lecture from Khan Academy
* {{cite web|last=Hill|first=Roger|title=λ – Eigenvalues|url=http://www.sixtysymbols.com/videos/eigenvalues.htm|work=Sixty Symbols|publisher=[[Brady Haran]] for the [[University of Nottingham]]|year=2009}}
 
'''Theory'''
* {{springer|title=Eigen value|id=p/e035150}}
* {{springer|title=Eigen vector|id=p/e035180}}
* {{planetmath reference|id=4397|title=Eigenvalue (of a matrix)}}
* [http://mathworld.wolfram.com/Eigenvector.html Eigenvector] – Wolfram [[MathWorld]]
* [http://ocw.mit.edu/ans7870/18/18.06/javademo/Eigen/ Eigen Vector Examination working applet]
* [http://web.mit.edu/18.06/www/Demos/eigen-applet-all/eigen_sound_all.html Same Eigen Vector Examination as above in a Flash demo with sound]
* [http://www.sosmath.com/matrix/eigen1/eigen1.html Computation of Eigenvalues]
* [http://www.cs.utk.edu/~dongarra/etemplates/index.html Numerical solution of eigenvalue problems] Edited by Zhaojun Bai, [[James Demmel]], Jack Dongarra, Axel Ruhe, and [[Henk van der Vorst]]
* Eigenvalues and Eigenvectors on the Ask Dr. Math forums: [http://mathforum.org/library/drmath/view/55483.html], [http://mathforum.org/library/drmath/view/51989.html]
 
'''Online calculators'''
* [http://www.arndt-bruenner.de/mathe/scripts/engl_eigenwert.htm arndt-bruenner.de]
* [http://www.bluebit.gr/matrix-calculator/ bluebit.gr]
* [http://wims.unice.fr/wims/wims.cgi?session=6S051ABAFA.2&+lang=en&+module=tool%2Flinear%2Fmatrix.en wims.unice.fr]
 
'''Demonstration applets'''
* [http://scienceapplets.blogspot.com/2012/03/eigenvalues-and-eigenvectors.html Java applet about eigenvectors in the real plane]
 
{{Linear algebra}}
{{Mathematics-footer}}
 
{{DEFAULTSORT:Eigenvalues And Eigenvectors}}
[[Category:Mathematical physics]]
[[Category:Abstract algebra]]
[[Category:Linear algebra]]
[[Category:Matrix theory]]
[[Category:Singular value decomposition]]
[[Category:Articles including recorded pronunciations]]
 
{{Link FA|es}}
{{Link FA|zh}}
