In [[linear algebra]], for a [[matrix (mathematics)|matrix]] ''A'', there may not always exist a full set of linearly independent eigenvectors that form a complete basis – a matrix may not be [[diagonalizable matrix|diagonalizable]].  This happens when the [[algebraic multiplicity]] of at least one [[eigenvalue]] λ is greater than its [[geometric multiplicity]] (the [[nullity]] of the matrix <math>(A-\lambda I)</math>, or the [[dimension (vector space)|dimension]] of its [[Kernel (linear algebra)|nullspace]]). In such cases, a '''generalized eigenvector''' of ''A'' associated with an eigenvalue λ of [[algebraic multiplicity]] ''k'' ≥ 1 is a nonzero [[Euclidean space|vector]] '''v''' satisfying
 
: <math>(A-\lambda I)^k\mathbf{v} = \mathbf{0}.</math>
The span of all generalized eigenvectors for a given λ forms the '''generalized eigenspace''' for λ.
 
Ordinary [[eigenvector]]s and [[eigenspace]]s are obtained for ''k''=1.
 
==For defective matrices==
 
Generalized eigenvectors are needed to form a complete [[basis (linear algebra)|basis]] of a [[defective matrix]], which is a matrix in which there are fewer [[linearly independent]] eigenvectors than eigenvalues (counting multiplicity).  Over an algebraically closed field, the generalized eigenvectors ''do'' allow choosing a complete basis, as follows from the [[Jordan form]] of a matrix.
 
In particular, suppose that an eigenvalue ''λ'' of a matrix ''A'' has algebraic multiplicity ''m'' but fewer than ''m'' linearly independent corresponding eigenvectors.  We form a sequence of ''m'' eigenvectors and generalized eigenvectors <math>x_1, x_2, \ldots, x_m</math> that are linearly independent and satisfy
 
:<math>(A - \lambda I) x_k = \alpha_{k,1}x_1+\cdots+\alpha_{k,k-1}x_{k-1} </math>
 
for some coefficients <math>\alpha_{k,1},\ldots,\alpha_{k,k-1}</math>, for <math>k=1,\ldots,m</math>.  It follows that
 
:<math>(A - \lambda I)^k x_k = 0. \!</math>
 
The vectors <math>x_1, x_2, \ldots, x_m</math> can always be chosen, but are not uniquely determined by the above relations. If the geometric multiplicity (dimension of the eigenspace) of ''λ'' is ''p'', one can choose the first ''p'' vectors to be eigenvectors, but the remaining ''m'' − ''p'' vectors are only generalized eigenvectors.
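These chain relations can be checked numerically. Below is a minimal NumPy sketch with a hypothetical 2 × 2 defective matrix; it uses the minimum-norm least-squares solver to produce one valid choice of <math>x_2</math>, for which the coefficient <math>\alpha_{2,1}</math> happens to be 1.

```python
import numpy as np

# Hypothetical defective matrix: eigenvalue 2 with algebraic multiplicity 2
# but only one linearly independent eigenvector.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
B = A - lam * np.eye(2)

x1 = np.array([1.0, 0.0])                    # ordinary eigenvector: B x1 = 0
x2 = np.linalg.lstsq(B, x1, rcond=None)[0]   # minimum-norm solution of B x2 = x1

assert np.allclose(B @ x1, 0)
assert np.allclose(B @ x2, x1)               # x2 is a generalized eigenvector
assert np.allclose(np.linalg.matrix_power(B, 2) @ x2, 0)
```

The vectors are not unique: any multiple of `x1` could be added to `x2` and the relations would still hold.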
 
==Examples==
 
===Example 1===
Suppose
:<math> A = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}. </math>
Then there is a single eigenvalue λ = 1 with algebraic multiplicity ''m'' = 2.
 
There are several ways to see that one generalized eigenvector will be needed.  The easiest is to notice that this matrix is in [[Jordan normal form]] but is not diagonal, meaning that it is not a diagonalizable matrix.  Since there is one superdiagonal entry, there will be one generalized eigenvector (or one could note that the vector space has dimension 2, so there can be only one generalized eigenvector).  Alternatively, one can compute the dimension of the [[Kernel (linear algebra)|nullspace]] of  <math> A-I </math> to be ''p'' = 1, and thus there are ''m'' − ''p'' = 1 generalized eigenvectors.
 
Computing the ordinary eigenvector <math> v_1=\begin{bmatrix}1 \\0 \end{bmatrix}</math> is left to the reader (see the [[eigenvector]] page for examples).  Using this eigenvector, we compute the generalized eigenvector  <math> v_2 </math> by solving
 
:<math> (A-\lambda I)v_2 = v_1. </math>
Writing out the values:
:<math> \left(\begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}- \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\right)\begin{bmatrix}v_{21} \\v_{22} \end{bmatrix} = \begin{bmatrix}1 \\0 \end{bmatrix}.</math>
This simplifies to
:<math> \begin{matrix} v_{21}+v_{22}-v_{21} = 1 \\ v_{22}- v_{22} = 0. \end{matrix}</math>
which reduces to
 
:<math> v_{22}= 1. </math>
 
The component <math>v_{21}</math> has no restrictions and can be any scalar. So the generalized eigenvector is <math> v_2=\begin{bmatrix}* \\1 \end{bmatrix}</math>, where the * indicates that any value is fine.  Picking 0 is usually easiest.
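The computation can be verified numerically; a minimal NumPy sketch, choosing 0 for the free component:

```python
import numpy as np

# Matrix from Example 1: a 2x2 Jordan block with eigenvalue 1.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
I = np.eye(2)

v1 = np.array([1.0, 0.0])   # ordinary eigenvector
v2 = np.array([0.0, 1.0])   # generalized eigenvector, choosing * = 0

# v1 is killed by (A - I); v2 is mapped onto v1 and killed by (A - I)^2.
assert np.allclose((A - I) @ v1, 0)
assert np.allclose((A - I) @ v2, v1)
assert np.allclose(np.linalg.matrix_power(A - I, 2) @ v2, 0)
```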
 
===Example 2===
 
The matrix
 
:<math>A = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
3 & 1 & 0 & 0 & 0 \\
6 & 3 & 2 & 0 & 0 \\
10 & 6 & 3 & 2 & 0 \\
15 & 10 & 6 & 3 & 2
\end{bmatrix}</math>
 
has ''eigenvalues'' of 1 and 2 with ''algebraic multiplicities'' of 2 and 3, but ''geometric multiplicities'' of 1 and 1.
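These multiplicities can be checked numerically. A NumPy sketch follows; note that the eigenvalues of a defective matrix are computed only approximately by floating-point routines, hence the loose tolerance when counting them.

```python
import numpy as np

A = np.array([[1, 0, 0, 0, 0],
              [3, 1, 0, 0, 0],
              [6, 3, 2, 0, 0],
              [10, 6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)

# Algebraic multiplicities: eigenvalue 1 appears twice, eigenvalue 2 three times
# (A is triangular, so its eigenvalues are its diagonal entries).
vals = np.linalg.eigvals(A)
assert sum(abs(v - 1) < 1e-3 for v in vals) == 2
assert sum(abs(v - 2) < 1e-3 for v in vals) == 3

# Geometric multiplicities: the nullity of (A - lam I) is 1 for both eigenvalues.
for lam in (1.0, 2.0):
    nullity = 5 - np.linalg.matrix_rank(A - lam * np.eye(5))
    assert nullity == 1
```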
 
The ''generalized eigenspaces'' of <math>A</math> are calculated below.
 
:<math>(A-1 I) \begin{bmatrix}
0 \\ 1 \\ -3 \\ 3 \\ -1
\end{bmatrix} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
3 & 0 & 0 & 0 & 0 \\
6 & 3 & 1 & 0 & 0 \\
10 & 6 & 3 & 1 & 0 \\
15 & 10 & 6 & 3 & 1
\end{bmatrix}\begin{bmatrix}
0 \\ 1 \\ -3 \\ 3 \\ -1
\end{bmatrix} = \begin{bmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}</math>
 
:<math>(A - 1 I) \begin{bmatrix}
1 \\ -15 \\ 30 \\ -1 \\ -45
\end{bmatrix} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
3 & 0 & 0 & 0 & 0 \\
6 & 3 & 1 & 0 & 0 \\
10 & 6 & 3 & 1 & 0 \\
15 & 10 & 6 & 3 & 1
\end{bmatrix} \begin{bmatrix}
1 \\ -15 \\ 30 \\ -1 \\ -45
\end{bmatrix} = 3\begin{bmatrix}
0 \\ 1 \\ -3 \\ 3 \\ -1
\end{bmatrix}
</math>
 
:<math>(A - 2 I) \begin{bmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 1
\end{bmatrix} = \begin{bmatrix}
-1 & 0 & 0 & 0 & 0 \\
3 & -1 & 0 & 0 & 0 \\
6 & 3 & 0 & 0 & 0 \\
10 & 6 & 3 & 0 & 0 \\
15 & 10 & 6 & 3 & 0
\end{bmatrix} \begin{bmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 1
\end{bmatrix} = \begin{bmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix}
</math>
 
:<math>(A - 2 I) \begin{bmatrix}
0 \\ 0 \\ 0 \\ 1 \\ 0
\end{bmatrix} = \begin{bmatrix}
-1 & 0 & 0 & 0 & 0 \\
3 & -1 & 0 & 0 & 0 \\
6 & 3 & 0 & 0 & 0 \\
10 & 6 & 3 & 0 & 0 \\
15 & 10 & 6 & 3 & 0
\end{bmatrix} \begin{bmatrix}
0 \\ 0 \\ 0 \\ 1 \\ 0
\end{bmatrix} = 3 \begin{bmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 1
\end{bmatrix}
</math>
 
:<math>(A - 2 I) \begin{bmatrix}
0 \\ 0 \\ 1 \\ -2 \\ 0
\end{bmatrix} = \begin{bmatrix}
-1 & 0 & 0 & 0 & 0 \\
3 & -1 & 0 & 0 & 0 \\
6 & 3 & 0 & 0 & 0 \\
10 & 6 & 3 & 0 & 0 \\
15 & 10 & 6 & 3 & 0
\end{bmatrix} \begin{bmatrix}
0 \\ 0 \\ 1 \\ -2 \\ 0
\end{bmatrix} = 3 \begin{bmatrix}
0 \\ 0 \\ 0 \\ 1 \\ 0
\end{bmatrix}
</math>
 
This results in a basis for each of the ''generalized eigenspaces'' of <math>A</math>.
Together the bases span the space of all 5-dimensional column vectors.
 
:<math>
\left\{
\begin{bmatrix} 0 \\ 1 \\ -3 \\ 3 \\ -1 \end{bmatrix}
\begin{bmatrix} 1 \\ -15 \\ 30 \\ -1 \\ -45 \end{bmatrix}
\right\},
\left\{
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ 1 \\ -2 \\ 0 \end{bmatrix}
\right\}
</math>
 
From these chains the ''Jordan canonical form'' is obtained.
 
:<math>
T = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 \\
3 & -15 & 0 & 0 & 0 \\
-9 & 30 & 0 & 0 & 1 \\
9 & -1 & 0 & 3 & -2 \\
-3 & -45 & 9 & 0 & 0
\end{bmatrix} \quad J = \begin{bmatrix}
1 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 2 & 1 & 0 \\
0 & 0 & 0 & 2 & 1 \\
0 & 0 & 0 & 0 & 2
\end{bmatrix}
 
where
 
:<math>AT = TJ</math>
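The relation <math>AT = TJ</math> can be verified numerically. The matrix ''T'' is not unique: any matrix whose columns are Jordan chains, listed in the same order as the blocks of ''J'', works. A NumPy sketch with one such choice (columns: the chain for eigenvalue 1, then the chain for eigenvalue 2, scaled as in the calculations above):

```python
import numpy as np

A = np.array([[1, 0, 0, 0, 0],
              [3, 1, 0, 0, 0],
              [6, 3, 2, 0, 0],
              [10, 6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)

# Columns of T: eigenvector and generalized eigenvector for lambda = 1,
# then the length-3 chain for lambda = 2.
T = np.array([[0, 1, 0, 0, 0],
              [3, -15, 0, 0, 0],
              [-9, 30, 0, 0, 1],
              [9, -1, 0, 3, -2],
              [-3, -45, 9, 0, 0]], dtype=float)

J = np.array([[1, 1, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 2, 1, 0],
              [0, 0, 0, 2, 1],
              [0, 0, 0, 0, 2]], dtype=float)

assert np.allclose(A @ T, T @ J)   # the similarity relation A T = T J
```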
 
==Other meanings of the term==
 
* The term [[generalized eigenfunction]] has a different meaning; it belongs to the theory of [[rigged Hilbert space]]s, where, for a [[linear operator]] on a [[function space]], it denotes something different from the notion above.
 
* One can also use the term ''generalized eigenvector'' for an eigenvector of the ''[[generalized eigenvalue problem]]''
 
: <math> Av = \lambda B v.</math>
 
== The Nullity of (A &minus; &lambda; I)<sup>k</sup> ==
 
=== Introduction ===
 
In this section it is shown that when <math>\lambda</math> is an ''eigenvalue'' of a matrix <math>A</math> with ''algebraic multiplicity'' <math>k</math>, the ''null space'' of <math>(A - \lambda I)^k</math> has dimension <math>k</math>.
 
=== Existence of Eigenvalues ===
 
Consider an n &times; n matrix '''A'''. The ''determinant'' of '''A''' has the fundamental properties of being ''n''-linear and ''alternating''. Additionally {{nowrap|det('''I''') {{=}} 1}}, for '''I''' the n &times; n identity matrix. From the determinant's
definition it can be seen that for a ''triangular'' matrix
{{nowrap|'''T''' {{=}} (t<sub>ij</sub>)}}, {{nowrap|det('''T''') {{=}} ∏(t<sub>ii</sub>)}}.  In other words, the determinant is the product of the diagonal entries.
 
There are three ''elementary row operations'': ''scalar multiplication'', ''interchange'' of two rows, and the ''addition'' of a ''scalar multiple'' of one row to another. Multiplication of a row of '''A''' by α results in a new matrix whose determinant is
α&nbsp;det('''A'''). Interchange of two rows changes the ''sign'' of the determinant, and the addition of a scalar multiple of one row to another does not affect the determinant. The following simple theorem holds, but requires a little proof.
 
'''Theorem:''' The equation {{nowrap|'''A''' '''x''' {{=}} '''0'''}} has a solution {{nowrap|'''x''' ≠ '''0'''}}, if and only if {{nowrap|det('''A''') {{=}} 0}}.
 
''Proof:'' Given the equation {{nowrap|'''A''' '''x''' {{=}} '''0'''}} attempt to solve it using the ''elementary row operations'' of ''addition'' of a ''scalar multiple'' of one row to another and row ''interchanges'' only, until an equivalent equation {{nowrap|'''U''' '''x''' {{=}} '''0'''}} has been reached, with '''U''' an upper triangular matrix. Since {{nowrap|det('''U''') {{=}} ±det('''A''')}} and {{nowrap|det('''U''') {{=}} ∏(u<sub>ii</sub>)}} we have that {{nowrap|det('''A''') {{=}} 0}} if and only if at least one {{nowrap|u<sub>ii</sub> {{=}} 0}}.  The back substitution
procedure as performed after ''Gaussian elimination'' will allow placing at least one non-zero element in '''x''' when there is a {{nowrap|u<sub>ii</sub> {{=}} 0}}. When all {{nowrap|u<sub>ii</sub> ≠ 0}}, back substitution requires {{nowrap|'''x''' {{=}} '''0'''}}.  ''QED''
 
'''Theorem:''' The equation {{nowrap|'''A''' '''x''' {{=}} λ '''x'''}} has a solution {{nowrap|'''x''' ≠ '''0'''}}, if and only if {{nowrap|det( λ '''I''' &minus; '''A''') {{=}} 0}}.
 
''Proof:'' The equation {{nowrap|'''A''' '''x''' {{=}} λ '''x'''}} is equivalent to {{nowrap|( λ '''I''' &minus; '''A''') '''x''' {{=}} '''0'''}}. ''QED.''
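Both theorems can be illustrated numerically; a NumPy sketch with a hypothetical symmetric 2 × 2 matrix:

```python
import numpy as np

# Hypothetical example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The eigenvalues are exactly the roots of det(lambda*I - A) = 0.
for lam in np.linalg.eigvals(A):
    assert abs(np.linalg.det(lam * np.eye(2) - A)) < 1e-9

# For each eigenvalue there is a nonzero solution x of A x = lambda x.
vals, vecs = np.linalg.eig(A)
for lam, x in zip(vals, vecs.T):
    assert np.linalg.norm(x) > 0
    assert np.allclose(A @ x, lam * x)
```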
 
=== Constructive proof of Schur's triangular form ===
 
The proof of the main result of this section will rely on the ''similarity transformation'' as stated and proven next.
 
'''Theorem''': (''Schur Transformation to Triangular Form Theorem'') For any n &times; n matrix '''A''', there exists a ''triangular'' matrix '''T''' and a ''unitary'' matrix '''Q''', such that {{nowrap|'''A''' '''Q''' {{=}} '''Q''' '''T'''}}. (The transformations are not unique, but are related.)
 
''Proof:'' Let λ<sub>1</sub> be an ''eigenvalue'' of the {{nowrap|n &times; n}} matrix '''A''' and '''x''' be an associated ''eigenvector'', so that '''A''' '''x''' = λ<sub>1</sub>'''x'''. Normalize the ''length'' of '''x''' so that {{abs|'''x'''}} = 1.
 
For
 
:<math>x=\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}</math>,
 
construct a unitary matrix
 
:<math>Q=\begin{bmatrix}
x_1 & q_{1\,2} & q_{1\,3} & \cdots & q_{1\,n} \\
x_2 & q_{2\,2} & q_{2\,3} & \cdots & q_{2\,n} \\
\vdots & \vdots & \vdots & & \vdots \\
x_n & q_{n\,2} & q_{n\,3} & \cdots & q_{n\,n}
\end{bmatrix}</math>
 
'''Q''' has '''x''' as its first column, and its columns form an ''orthonormal basis'' for '''C<sup>n</sup>'''. Now, {{nowrap|'''A''' '''Q''' {{=}} '''Q''' '''U<sub>1</sub>'''}}, with '''U<sub>1</sub>''' of the form:
 
:<math>U_1=\begin{bmatrix}
\lambda_1 & \mathbf{z}^{T} \\
\mathbf{0} & U_0
\end{bmatrix}</math>

for some row vector '''z'''<sup>T</sup> with {{nowrap|n &minus; 1}} entries and some {{nowrap|(n-1) &times; (n-1)}} matrix '''U<sub>0</sub>'''.
 
Let the ''induction hypothesis'' be that the theorem holds for all {{nowrap|(n-1) &times; (n-1)}} matrices. From the construction, so far, it holds for {{nowrap|n {{=}} 2}}. Choose a unitary '''Q<sub>0</sub>''', so that {{nowrap|'''U<sub>0</sub>''' '''Q<sub>0</sub>''' {{=}} '''Q<sub>0</sub>''' '''U<sub>2</sub>'''}}, with '''U<sub>2</sub>''' of the ''upper triangular'' form.  Define '''Q<sub>1</sub>''' by:
 
:<math>Q_1=\begin{bmatrix}
1 & \mathbf{0} \\
\mathbf{0} & Q_0
\end{bmatrix}</math>
 
Now, in block form,

:<math>U_1 Q_1 = \begin{bmatrix} \lambda_1 & \mathbf{z}^{T} \\ \mathbf{0} & U_0 \end{bmatrix} \begin{bmatrix} 1 & \mathbf{0} \\ \mathbf{0} & Q_0 \end{bmatrix} = \begin{bmatrix} \lambda_1 & \mathbf{z}^{T} Q_0 \\ \mathbf{0} & U_0 Q_0 \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0} \\ \mathbf{0} & Q_0 \end{bmatrix} \begin{bmatrix} \lambda_1 & \mathbf{z}^{T} Q_0 \\ \mathbf{0} & U_2 \end{bmatrix}</math>
 
Summarizing,
 
:<math>U_1 Q_1 = Q_1 U_3</math>
 
with:
 
:<math>U_3=\begin{bmatrix}
\lambda_1 & z_{1\,2} & z_{1\,3} & \cdots & z_{1\,n} \\
0 & \lambda_2 & z_{2\,3} & \cdots & z_{2\,n} \\
0 & 0 & \lambda_3 & \cdots & z_{3\,n} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & \lambda_n
\end{bmatrix}</math>
 
Now, {{nowrap|'''A''' '''Q''' {{=}} '''Q''' '''U<sub>1</sub>'''}} and {{nowrap|'''U<sub>1</sub>''' '''Q<sub>1</sub>''' {{=}} '''Q<sub>1</sub>''' '''U<sub>3</sub>'''}}, where '''Q''' and '''Q<sub>1</sub>''' are ''unitary'' and
'''U<sub>3</sub>''' is ''upper triangular''. Thus {{nowrap|'''A''' '''Q''' '''Q<sub>1</sub>''' {{=}} '''Q''' '''Q<sub>1</sub>''' '''U<sub>3</sub>'''}}. Since the product of two unitary matrices is unitary, the proof is done. ''QED''.
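The decomposition guaranteed by the theorem is available in SciPy as the Schur decomposition; a sketch with a hypothetical 3 × 3 matrix. `scipy.linalg.schur` returns the pair (U, Q) with {{nowrap|A {{=}} Q U Q<sup>*</sup>}}, equivalently {{nowrap|A Q {{=}} Q U}} as in the theorem.

```python
import numpy as np
from scipy.linalg import schur

# Hypothetical example matrix.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])

U, Q = schur(M, output='complex')   # M = Q U Q^*, U triangular, Q unitary

assert np.allclose(M @ Q, Q @ U)               # the theorem's form: M Q = Q U
assert np.allclose(np.tril(U, -1), 0)          # U is upper triangular
assert np.allclose(Q.conj().T @ Q, np.eye(3))  # Q is unitary
```

With `output='complex'` the triangular factor is fully upper triangular even when eigenvalues are complex; the real Schur form instead allows 2 × 2 blocks on the diagonal.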
 
=== Nullity Theorem's Proof ===
 
Starting from {{nowrap|'''A Q {{=}} Q U'''}}, we can solve for '''A''' to obtain {{nowrap|'''A''' {{=}} '''Q U Q'''<sup>*</sup>}}, since
{{nowrap|'''Q Q'''<sup>*</sup> {{=}} '''I'''}}, where '''Q'''<sup>*</sup> denotes the conjugate transpose of the unitary matrix '''Q'''. Now, after subtracting both sides from x'''I''', we find
: {{nowrap|x '''I''' &minus; '''A''' {{=}} '''Q''' (x '''I''' &minus; '''U''') '''Q'''<sup>*</sup>}}
and hence
: {{nowrap|det(x '''I''' &minus; '''A''') {{=}} det(x '''I''' &minus; '''U''')}}.
So, the characteristic polynomial of '''A''' is the same as that for '''U''' and is given by
: {{nowrap|p(x) {{=}} (x &minus; λ<sub>1</sub>)(x &minus; λ<sub>2</sub>)...(x &minus; λ<sub>n</sub>)}},
where the λ<sub>i</sub>s are the eigenvalues of '''A''' and '''U'''.
 
Observe that the construction used in the proof above allows choosing any order for the eigenvalues of '''A''' that will end up as the diagonal elements of the upper triangular matrix '''U''' obtained. The ''algebraic multiplicity'' of an eigenvalue is the number of times it occurs on the diagonal.
 
Now, it can be supposed, for a given eigenvalue '''λ''' of algebraic multiplicity
'''k''', that '''U''' has been contrived so that '''λ''' occurs as the first
'''k''' diagonal elements:

:<math>U=\begin{bmatrix}
\lambda & \cdots & * & * & \cdots & * \\
 & \ddots & \vdots & \vdots & & \vdots \\
 & & \lambda & * & \cdots & * \\
 & & & \lambda_{k+1} & \cdots & * \\
 & & & & \ddots & \vdots \\
 & & & & & \lambda_n
\end{bmatrix}</math>
 
Place {{nowrap|'''U''' &minus; λ'''I'''}} in ''block form'' as below, for some {{nowrap|k &times; (n-k)}} block '''C''':

:<math>U - \lambda I = \begin{bmatrix} B & C \\ 0 & T \end{bmatrix}</math>
 
The lower left block has only elements of ''zero''. The {{nowrap|β<sub>i</sub> {{=}} λ<sub>i</sub> &minus; λ ≠ 0}}
for {{nowrap|i {{=}} k+1, ..., n}}. It is easy to verify the following.
 
:<math>(U - \lambda I)^2 = \begin{bmatrix} B^2 & BC + CT \\ 0 & T^2 \end{bmatrix} \qquad \cdots \qquad (U - \lambda I)^k = \begin{bmatrix} B^k & C_k \\ 0 & T^k \end{bmatrix}</math>

for some {{nowrap|k &times; (n-k)}} matrix '''C<sub>k</sub>'''.
 
Where '''B''' is the {{nowrap|k &times; k}} strictly upper triangular matrix, with all elements on or below the diagonal equal to 0,
and '''T''' is the  {{nowrap|(n-k) &times; (n-k)}} upper triangular matrix, taken from the blocks of {{nowrap|('''U''' &minus; λ'''I''')}}, as shown below.

:<math>B=\begin{bmatrix}
0 & u_{1\,2} & \cdots & u_{1\,k} \\
 & \ddots & \ddots & \vdots \\
 & & 0 & u_{k-1\,k} \\
 & & & 0
\end{bmatrix} \qquad T=\begin{bmatrix}
\beta_{k+1} & * & \cdots & * \\
 & \beta_{k+2} & \cdots & * \\
 & & \ddots & \vdots \\
 & & & \beta_n
\end{bmatrix}</math>
 
Now, almost trivially, <math>B^k = 0</math>, so that

:<math>(U - \lambda I)^k = \begin{bmatrix} 0 & C_k \\ 0 & T^k \end{bmatrix}.</math>
 
That is, '''B<sup>k</sup>''' has only elements of 0 and '''T<sup>k</sup>''' is triangular with all diagonal elements non-zero.
Observe that if a column vector {{nowrap|'''v''' {{=}} [v<sub>1</sub>, v<sub>2</sub>, ..., v<sub>k</sub>]<sup>T</sup>}}
is multiplied by '''B''', then after the first multiplication the last, kth, component is zero. After the second multiplication the second-to-last, (k-1)th, component is also zero, and so on.
 
The conclusion that {{nowrap|('''U''' &minus; λ '''I''')<sup>k</sup>}} has ''[[Rank (linear algebra)|rank]]'' (n-k)
and ''[[Kernel (linear algebra)|nullity]]'' k follows.
 
It only remains to observe that,
since {{nowrap|('''A''' &minus; λ'''I''')<sup>k</sup> {{=}} '''Q''' ('''U''' &minus; λ '''I''')<sup>k</sup> '''Q'''<sup>*</sup>}},
the matrix {{nowrap|('''A''' &minus; λ'''I''')<sup>k</sup>}} has ''[[Rank (linear algebra)|rank]]'' (n-k) and ''[[Kernel (linear algebra)|nullity]]'' k, as well.
A ''unitary'', or any other, similarity transformation by a non-singular matrix preserves rank.
 
The main result is now proven.
 
'''Theorem:'''<br>
If λ is an ''eigenvalue'' of a matrix '''A''' with ''algebraic multiplicity'' k, then the ''null space'' of  {{nowrap|('''A''' &minus; λ'''I''')<sup>k</sup>}} has dimension k.
 
An important observation is that raising the power of {{nowrap|('''A''' &minus; λ'''I''')}} above k will not affect the ''[[Rank (linear algebra)|rank]]'' and ''[[Kernel (linear algebra)|nullity]]'' any further.
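The theorem, and the observation about higher powers, can be checked numerically on the matrix from Example 2 above; a NumPy sketch:

```python
import numpy as np

# Matrix from Example 2: eigenvalue 1 with algebraic multiplicity 2,
# eigenvalue 2 with algebraic multiplicity 3 (n = 5).
A = np.array([[1, 0, 0, 0, 0],
              [3, 1, 0, 0, 0],
              [6, 3, 2, 0, 0],
              [10, 6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)

for lam, k in [(1.0, 2), (2.0, 3)]:
    M = np.linalg.matrix_power(A - lam * np.eye(5), k)
    assert np.linalg.matrix_rank(M) == 5 - k      # rank n - k, so nullity k
    # raising the power beyond k changes neither rank nor nullity
    M_next = np.linalg.matrix_power(A - lam * np.eye(5), k + 1)
    assert np.linalg.matrix_rank(M_next) == 5 - k
```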
 
== Motivation of the Procedure ==
 
=== Introduction ===
 
In the preceding section it was shown that when an
{{nowrap|n &times; n}} matrix '''A''' has an
''eigenvalue'' λ of ''algebraic multiplicity'' k, then the ''null space'' of {{nowrap|('''A''' &minus; λ'''I''')<sup>k</sup>}} has dimension k.
The ''Generalized Eigenspace'' of '''A''', λ will be defined to be the ''null space'' of {{nowrap|('''A''' &minus; λ'''I''')<sup>k</sup>}}.
Many authors prefer to call this the ''[[Kernel (linear algebra)|kernel]]'' of {{nowrap|('''A''' &minus; λ'''I''')<sup>k</sup>}}.
 
Notice that if an {{nowrap|n &times; n}} matrix has ''eigenvalues'' {{nowrap|λ<sub>1</sub>, λ<sub>2</sub>, ..., λ<sub>r</sub>}}
with ''algebraic multiplicities'' {{nowrap|k<sub>1</sub>, k<sub>2</sub>, ..., k<sub>r</sub>}},
then {{nowrap|k<sub>1</sub> + k<sub>2</sub> + ... + k<sub>r</sub> {{=}} n}}.
 
It will turn out that any two ''generalized eigenspaces'' of '''A''', associated with different ''eigenvalues'', will have a trivial intersection of '''{0}'''. From this it follows that the ''generalized eigenspaces'' of '''A''' combined span '''C<sup>n</sup>''', the set of all n dimensional column vectors of complex numbers.
 
The motivation for using a recursive procedure starting with the ''eigenvectors'' of '''A''' and solving for a basis of the ''generalized eigenspace'' of '''A''', λ using the matrix {{nowrap|('''A''' &minus; λ '''I''')}}, will be expounded on.
 
=== Notation ===
 
Some notation is introduced to help abbreviate statements.
 
* '''C<sup>n</sup>''' is the vector space of all n dimensional ''column'' vectors of ''complex numbers''.
* The ''Null Space'' of '''A''', {{nowrap|N('''A''') {{=}} {'''x''': '''A''' '''x''' {{=}} '''0'''}}}.
* '''V'''&nbsp;⊆&nbsp;'''W''' denotes '''V''' is a ''subset'' of '''W'''.
* '''V'''&nbsp;⊂&nbsp;'''W''' denotes '''V''' is a ''proper subset'' of '''W'''.
* The ''Range'' of '''A''' over '''V''' is {{nowrap|'''A'''('''V''') {{=}} {'''y''': '''y''' {{=}} '''A''' '''x''' for some '''x''' ∈ '''V'''}}}.
* '''W''' \ '''V''' denotes the set {'''x''': '''x''' ∈ '''W''' and '''x''' is not in '''V'''}.
* The ''Range'' of '''A''' is '''A'''('''C<sup>n</sup>''') and will be denoted by R('''A''').
* dim('''V''') denotes the ''dimension'' of '''V'''.
* '''{0}''' is the ''trivial subspace'' of '''C<sup>n</sup>'''.
 
=== Preliminary Observations ===
 
Throughout this discussion it is assumed that '''A''' is an
{{nowrap|n &times; n}} matrix of complex numbers.
 
Since {{nowrap|'''A'''<sup>m</sup> '''x''' {{=}} '''A''' ('''A'''<sup>m-1</sup> '''x''')}}, the inclusions
 
:{{nowrap|N('''A''') ⊆ N('''A'''<sup>2</sup>) ⊆ ... ⊆ N('''A'''<sup>m-1</sup>) ⊆ N('''A'''<sup>m</sup>)}},
 
are obvious. Since {{nowrap|'''A'''<sup>m</sup> '''x''' {{=}} '''A'''<sup>m-1</sup>('''A''' '''x''')}}, the inclusions
 
:{{nowrap|R('''A''') ⊇ R('''A'''<sup>2</sup>) ⊇ ... ⊇ R('''A'''<sup>m-1</sup>) ⊇ R('''A'''<sup>m</sup>)}},
 
are clear as well.
 
'''Theorem:'''
 
When the trivial case {{nowrap|N('''A'''<sup>2</sup>) {{=}} N('''A''')}} does not hold,
there exists {{nowrap|k ≥ 2}} such that the inclusions,
 
: {{nowrap|N('''A''') ⊂ N('''A'''<sup>2</sup>) ⊂ ... ⊂ N('''A'''<sup>k-1</sup>) ⊂ N('''A'''<sup>k</sup>) {{=}} N('''A'''<sup>k+1</sup>) {{=}} ...}},
 
and
 
:{{nowrap|R('''A''') ⊃ R('''A'''<sup>2</sup>) ⊃ ... ⊃ R('''A'''<sup>k-1</sup>) ⊃ R('''A'''<sup>k</sup>) {{=}} R('''A'''<sup>k+1</sup>) {{=}} ...}},
 
are proper.
 
''Proof:'' {{nowrap|0 ≤ dim(R('''A'''<sup>m+1</sup>)) ≤ dim(R('''A'''<sup>m</sup>))}}
so eventually dim(R('''A'''<sup>m+1</sup>)) = dim(R('''A'''<sup>m</sup>)),
for some m. From the inclusion {{nowrap|R('''A'''<sup>m+1</sup>) ⊆ R('''A'''<sup>m</sup>)}}
it is seen that a basis for R('''A'''<sup>m+1</sup>) is a basis for R('''A'''<sup>m</sup>) as well. That is, {{nowrap|R('''A'''<sup>m+1</sup>) {{=}} R('''A'''<sup>m</sup>)}}.
Since {{nowrap|R('''A'''<sup>m+1</sup>) {{=}} '''A'''(R('''A'''<sup>m</sup>))}}, when
{{nowrap|R('''A'''<sup>m+1</sup>) {{=}} R('''A'''<sup>m</sup>)}}, it follows that
{{nowrap|R('''A'''<sup>m+2</sup>) {{=}} '''A'''(R('''A'''<sup>m+1</sup>)) {{=}} '''A'''(R('''A'''<sup>m</sup>)) {{=}} R('''A'''<sup>m+1</sup>)}}. By the ''rank–nullity theorem'', it is also the case that {{nowrap|dim(N('''A'''<sup>m+2</sup>)) {{=}} dim(N('''A'''<sup>m+1</sup>)) {{=}} dim(N('''A'''<sup>m</sup>))}}, for the same m. From the inclusions
{{nowrap|N('''A'''<sup>m</sup>) ⊆ N('''A'''<sup>m+1</sup>) ⊆ N('''A'''<sup>m+2</sup>)}} and the equality of dimensions, it is clear that a basis for N('''A'''<sup>m</sup>) is also a basis for N('''A'''<sup>m+1</sup>) and N('''A'''<sup>m+2</sup>). So {{nowrap|N('''A'''<sup>m+2</sup>) {{=}} N('''A'''<sup>m+1</sup>) {{=}} N('''A'''<sup>m</sup>)}}. Now, k is the first m for which this happens.  ''QED''
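The stabilizing chain of null spaces can be observed numerically; a NumPy sketch with a hypothetical nilpotent 3 × 3 matrix, for which k = 3:

```python
import numpy as np

# Hypothetical example: a single 3x3 Jordan block with eigenvalue 0.
B = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)

# dim N(B^m) for m = 1, 2, 3, 4: strictly increasing until m = k = 3,
# then constant from there on.
dims = [3 - np.linalg.matrix_rank(np.linalg.matrix_power(B, m))
        for m in range(1, 5)]
assert dims == [1, 2, 3, 3]
```

By the rank–nullity theorem the ranges shrink in lockstep: dim R(B^m) is 2, 1, 0, 0 for the same powers.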
 
Since certain expressions will occur many times in the following, some more notation will be introduced.
 
* {{nowrap|'''A<sub>λ,k</sub>''' {{=}} ('''A''' &minus; λ'''I''')<sup>k</sup>}}
* {{nowrap|'''N<sub>λ,k</sub>''' {{=}} N(('''A''' &minus; λ'''I''')<sup>k</sup>) {{=}} N('''A<sub>λ,k</sub>''')}}
* {{nowrap|'''R<sub>λ,k</sub>''' {{=}} R(('''A''' &minus; λ'''I''')<sup>k</sup>) {{=}} R('''A<sub>λ,k</sub>''')}}
 
From the inclusions {{nowrap|'''N<sub>&lambda;,1</sub>''' &sub;}}
{{nowrap|'''N<sub>&lambda;,2</sub>''' &sub; ... &sub;}}
{{nowrap|'''N<sub>&lambda;,k-1</sub>''' &sub; '''N<sub>&lambda;,k</sub>'''}}
{{nowrap|{{=}} '''N<sub>&lambda;,k+1</sub>''' {{=}} ...}},
{{nowrap|'''N<sub>&lambda;,k</sub>''' &#92; '''{0}''' {{=}} &cup; ('''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>)'''}}, for {{nowrap|m {{=}} 1, ..., k}} and {{nowrap|'''N<sub>&lambda;,0</sub>''' {{=}} '''{0}'''}}, follows.
 
When λ is an eigenvalue of '''A''' in the statement above, k will not exceed the algebraic multiplicity of λ, and can be less. In fact, k is 1 exactly when there is a full set of linearly independent eigenvectors for λ. Consider now the case {{nowrap|k ≥ 2}}.
 
Now, {{nowrap|'''x''' &isin; '''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}}, if and only {{nowrap|if  '''A<sub>&lambda;,m</sub>''' '''x''' {{=}} '''0'''}}, and {{nowrap|'''A<sub>&lambda;,m-1</sub>''' '''x''' &ne; '''0'''}}.
 
Make the observation that {{nowrap|'''A<sub>&lambda;,m</sub>''' '''x''' {{=}} '''0''',}}
and {{nowrap|'''A<sub>&lambda;,m-1</sub>''' '''x''' &ne; '''0'''}}, if and only {{nowrap|if  '''A<sub>&lambda;,m-1</sub>''' '''A<sub>&lambda;,1</sub>''' '''x''' {{=}} '''0'''}},
and {{nowrap|'''A<sub>&lambda;,m-2</sub>''' '''A<sub>&lambda;,1</sub>''' '''x''' &ne; '''0'''}}.
 
So, {{nowrap|'''x''' &isin; '''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}}, if and only  {{nowrap|if '''A<sub>&lambda;,1</sub>''' '''x''' &isin; '''N<sub>&lambda;,m-1</sub>''' &#92; '''N<sub>&lambda;,m-2</sub>'''}}.
 
=== Recursive Procedure ===
 
Consider a matrix '''A''', with an ''eigenvalue'' λ of ''algebraic multiplicity'' {{nowrap|k ≥ 2}}, such that there are not k ''linearly independent eigenvectors'' associated with λ.
 
It is desired to extend the ''eigenvectors'' to a ''basis'' for {{nowrap|'''N<sub>λ,k</sub>'''}}, that is, a ''basis'' consisting of ''generalized eigenvectors'' associated with λ.
 
There exists some {{nowrap|2 ≤ r ≤ k}}, such that
 
:{{nowrap|'''N<sub>&lambda;,1</sub>''' &sub; '''N<sub>&lambda;,2</sub>''' &sub; ...}} {{nowrap|&sub; '''N<sub>&lambda;,r-1</sub>''' &sub; '''N<sub>&lambda;,r</sub>''' {{=}} '''N<sub>&lambda;,r+1</sub>''' {{=}} ...,}} {{nowrap|'''N<sub>&lambda;,r</sub>''' &#92; '''{0}''' {{=}} &cup; ('''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>''')}}, for {{nowrap|m {{=}} 1, ..., r}} and {{nowrap|'''N<sub>&lambda;, 0</sub>''' {{=}} '''{0}'''}}.
 
The ''eigenvectors'' are {{nowrap|'''N<sub>&lambda;,1</sub>''' &#92; '''{0}'''}}, so let {{nowrap|'''x<sub>1</sub>''', ..., '''x<sub>r<sub>1</sub></sub>''' }} be a basis for {{nowrap|'''N<sub>&lambda;,1</sub>''' &#92; '''{0}'''}}.
 
Note that each {{nowrap|'''N<sub>λ,m</sub>'''}} is a ''subspace'' and so a ''basis'' for {{nowrap|'''N<sub>λ,m-1</sub>'''}} can be extended to a ''basis'' for '''N<sub>λ,m</sub>'''.
 
Because of this we can expect to find some {{nowrap|r<sub>2</sub> {{=}} dim('''N<sub>&lambda;,2</sub>''') &minus; dim('''N<sub>&lambda;,1</sub>''')}} ''linearly independent'' vectors {{nowrap|'''x<sub>r<sub>1</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+r<sub>2</sub></sub>'''}} such that {{nowrap|'''x<sub>1</sub>''', ..., '''x<sub>r<sub>1</sub></sub>'''}},  '''x<sub>r<sub>1</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+r<sub>2</sub></sub>''' is a ''basis'' for '''N<sub>&lambda;,2</sub>'''.
 
Now, {{nowrap|'''x''' ∈ '''N<sub>λ,2</sub>''' \ '''N<sub>λ,1</sub>'''}}, if and only if {{nowrap|'''A<sub>λ,1</sub>''' '''x''' ∈ '''N<sub>λ,1</sub>''' \ '''{0}'''}}.
 
Thus we can expect that for each '''x''' &isin;
{{nowrap| {'''x<sub>r<sub>1</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+r<sub>2</sub></sub>'''} }},
'''A<sub>&lambda;,1</sub>''' '''x'''  = &alpha;<sub>1</sub> '''x<sub>1</sub>''' + ... + &alpha;<sub>r<sub>1</sub></sub> '''x<sub>r<sub>1</sub></sub>''', for some {{nowrap|&alpha;<sub>1</sub>, ..., &alpha;<sub>r<sub>1</sub></sub>}}, depending on '''x'''.
 
Suppose we have reached stage {{nowrap|m &minus; 1}} of the construction, so that {{nowrap|m &minus; 1}} sets,

:{'''x<sub>1</sub>''', ..., '''x<sub>r<sub>1</sub></sub>'''}, {'''x<sub>r<sub>1</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+r<sub>2</sub></sub>'''}, ..., {'''x<sub>r<sub>1</sub>+ ... + r<sub>m-2</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub></sub>'''}

have been found, such that

:'''x<sub>1</sub>''', ..., '''x<sub>r<sub>1</sub></sub>''' , '''x<sub>r<sub>1</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+r<sub>2</sub></sub>''', ..., '''x<sub>r<sub>1</sub>+ ... + r<sub>m-2</sub> + 1</sub>''', ..., '''x<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub></sub>'''

is a ''basis'' for '''N<sub>&lambda;,m-1</sub>'''.
 
We can expect to find some
 
:r<sub>m</sub> = dim('''N<sub>&lambda;,m</sub>''') &minus; dim('''N<sub>&lambda;,m-1</sub>''')
 
''linearly independent'' vectors
 
:'''x<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+ ... + r<sub>m</sub></sub>'''
 
such that
 
:'''x<sub>1</sub>''', ..., '''x<sub>r<sub>1</sub></sub>''' , '''x<sub>r<sub>1</sub>+1</sub>''', ..., '''x<sub>r<sub>1</sub>+ r<sub>2</sub></sub>''', ..., '''x<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub> + 1</sub>''', ..., '''x<sub>r<sub>1</sub>+ ... + r<sub>m</sub></sub>'''
 
is a ''basis'' for '''N<sub>&lambda;, m</sub>'''.
 
Again, {{nowrap|'''x''' ∈ '''N<sub>λ,m</sub>''' \ '''N<sub>λ,m-1</sub>'''}}, if and only if {{nowrap|'''A<sub>λ,1</sub>''' '''x''' ∈ '''N<sub>λ,m-1</sub>''' \ '''N<sub>λ,m-2</sub>'''}}.
 
Thus we can expect that for each '''x''' &isin; {'''x<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub> + 1</sub>''', ..., '''x<sub>r<sub>1</sub>+ .... + r<sub>m</sub></sub>'''}, {{nowrap|'''A<sub>&lambda;,1</sub>''' '''x'''  {{=}}}}
&alpha;<sub>1</sub> '''x<sub>1</sub>''' + ... + &alpha;<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub></sub> '''x<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub></sub>''', for some  {{nowrap|&alpha;<sub>1</sub>, ..., &alpha;<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub></sub>}},
depending on '''x'''.
 
Some of the {{nowrap|{&alpha;<sub>r<sub>1</sub>+ ... + r<sub>m-2</sub> + 1</sub>, ..., &alpha;<sub>r<sub>1</sub>+ ... + r<sub>m-1</sub></sub>}}}
will be non-zero, since {{nowrap|'''A<sub>&lambda;,1</sub>''' '''x'''}} must lie in {{nowrap|'''N<sub>&lambda;,m-1</sub>''' &#92; '''N<sub>&lambda;,m-2</sub>'''}}.
 
The procedure is continued until {{nowrap|m {{=}} r}}.
 
The α<sub>i</sub> are not truly arbitrary: they must be chosen so that the sums
α<sub>1</sub> '''x<sub>1</sub>''' + α<sub>2</sub> '''x<sub>2</sub>''' + ... lie in the range of '''A<sub>λ,1</sub>'''.
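The procedure can be illustrated with the matrix of Example 2 and λ = 2, whose chain was computed there; a NumPy sketch (the coefficient α happens to be 3 for each step of this chain):

```python
import numpy as np

A = np.array([[1, 0, 0, 0, 0],
              [3, 1, 0, 0, 0],
              [6, 3, 2, 0, 0],
              [10, 6, 3, 2, 0],
              [15, 10, 6, 3, 2]], dtype=float)
B = A - 2.0 * np.eye(5)   # the operator A_{lambda,1} for lambda = 2

# The chain from Example 2: x1 is an eigenvector, and B maps each
# further vector onto a multiple of the previous one.
x1 = np.array([0, 0, 0, 0, 1.0])
x2 = np.array([0, 0, 0, 1.0, 0])
x3 = np.array([0, 0, 1.0, -2.0, 0])

assert np.allclose(B @ x1, 0)
assert np.allclose(B @ x2, 3 * x1)
assert np.allclose(B @ x3, 3 * x2)

# x_m lies in N_{lambda,m} but not in N_{lambda,m-1}.
for m, x in enumerate([x1, x2, x3], start=1):
    assert np.allclose(np.linalg.matrix_power(B, m) @ x, 0)
    if m > 1:
        assert not np.allclose(np.linalg.matrix_power(B, m - 1) @ x, 0)
```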
 
=== Generalized Eigenspace Decomposition ===
 
As was stated in the Introduction, if an {{nowrap|n &times; n}} matrix has
''eigenvalues'' {{nowrap|λ<sub>1</sub>, λ<sub>2</sub>, ..., λ<sub>r</sub>}}
with ''algebraic multiplicities'' {{nowrap|k<sub>1</sub>, k<sub>2</sub>, ..., k<sub>r</sub>}},
then {{nowrap|k<sub>1</sub> + k<sub>2</sub> + ... + k<sub>r</sub> {{=}} n}}.
 
When '''V<sub>1</sub>''' and '''V<sub>2</sub>''' are two ''subspaces'', satisfying
{{nowrap|'''V<sub>1</sub>''' &cap; '''V<sub>2</sub>''' {{=}} '''{0}'''}},
their ''direct sum'', {{resize|'''&oplus;'''}} is defined and notated by
* {{nowrap|'''V<sub>1</sub>''' {{resize|'''&oplus;'''}} '''V<sub>2</sub>''' {{=}} {v<sub>1</sub> + v<sub>2</sub> : v<sub>1</sub> &isin; '''V<sub>1</sub>''' and v<sub>2</sub> &isin; '''V<sub>2</sub>'''} }}.
 
{{nowrap|'''V<sub>1</sub>''' {{resize|'''⊕'''}} '''V<sub>2</sub>''' }}
is also a ''subspace'' and
{{nowrap|dim('''V<sub>1</sub>''' {{resize|'''⊕'''}} '''V<sub>2</sub>''')}}
= {{nowrap|dim('''V<sub>1</sub>''') + dim('''V<sub>2</sub>''')}}.
 
Since {{nowrap|dim('''N<sub>&lambda;<sub>i</sub>,k<sub>i</sub></sub>''') {{=}} k<sub>i</sub>}},
for {{nowrap|i {{=}} 1, 2, ..., r}}, after it is shown that
{{nowrap|'''N<sub>&lambda;<sub>i</sub>,k<sub>i</sub></sub>''' &cap;}}
{{nowrap|'''N<sub>&lambda;<sub>j</sub>,k<sub>j</sub></sub>''' {{=}} '''{0}'''}},
for {{nowrap|i &ne; j}}, we have the main result.
 
'''Theorem: ''Generalized Eigenspace Decomposition Theorem'''''
 
{{nowrap|'''C<sup>n</sup>''' {{=}}}}
{{nowrap|'''N<sub>λ<sub>1</sub>,k<sub>1</sub></sub>''' {{resize|'''⊕'''}}}}
{{nowrap|'''N<sub>λ<sub>2</sub>,k<sub>2</sub></sub>''' {{resize|'''⊕'''}}}}
{{nowrap|... {{resize|'''⊕'''}} '''N<sub>λ<sub>r</sub>,k<sub>r</sub></sub>'''}}.
 
This follows easily after we prove the theorem below.
 
'''Theorem:'''<br>
Let &lambda; be an ''eigenvalue'' of '''A''' and &beta; &ne; &lambda;.
Then {{nowrap|'''A<sub>&beta;,r</sub>'''('''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>''')}} =
{{nowrap|'''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}},
for any positive integers m and r.
 
'''Proof:'''<br>
If {{nowrap|'''x''' &isin; '''N<sub>&lambda;,1</sub>''' &#92; '''{0}'''}}, then
{{nowrap|'''A<sub>&lambda;,1</sub>''' '''x''' {{=}} ('''A''' &minus; &lambda; '''I''')'''x''' {{=}} '''0'''}},
so
{{nowrap|'''A''' '''x''' {{=}} &lambda; '''x'''}} and
{{nowrap|'''A<sub>&beta;,1</sub>''' '''x''' {{=}}}}
{{nowrap|('''A''' &minus; &beta;'''I''')'''x''' {{=}} (&lambda; &minus; &beta;)'''x'''}}.
 
So {{nowrap|'''A<sub>&beta;,1</sub>''' '''x''' &isin;}}
{{nowrap|'''N<sub>&lambda;,1</sub>''' &#92; '''{0}'''}} and
{{nowrap|'''A<sub>&beta;,1</sub>''' (&lambda; &minus; &beta;)<sup>&minus;1</sup>'''x''' {{=}} '''x'''}}.
 
Hence
{{nowrap|'''A<sub>&beta;,1</sub>''' ('''N<sub>&lambda;,1</sub>''' &#92; '''{0}''') {{=}}}}
{{nowrap|'''N<sub>&lambda;,1</sub>''' &#92; '''{0}'''}}.
 
Now, {{nowrap|'''x''' &isin; '''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}},
if and only if {{nowrap|'''A<sub>&lambda;,m</sub>''' '''x'''}} =
{{nowrap|('''A''' &minus; &lambda;'''I''')'''A<sub>&lambda;,m-1</sub>''' '''x''' {{=}} '''0'''}},
and {{nowrap|'''A<sub>&lambda;,m-1</sub>''' '''x''' &ne; '''0'''}}.
 
In the case,
{{nowrap|'''x''' &isin; '''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}},
{{nowrap|'''A<sub>&lambda;,m-1</sub>''' '''x''' &isin; '''N<sub>&lambda;,1</sub>''' &#92; '''{0}'''}},
and
{{nowrap|'''A<sub>&beta;,1</sub>''' '''A<sub>&lambda;,m-1</sub>''' '''x''' {{=}}}}
{{nowrap|(&lambda; &minus; &beta;) '''A<sub>&lambda;,m-1</sub>''' '''x''' &ne; '''0'''}}.
The ''operators'' '''A<sub>&beta;,1</sub>''' and
'''A<sub>&lambda;,m-1</sub>''' commute.
Thus
{{nowrap|'''A<sub>&lambda;,m</sub>''' ('''A<sub>&beta;,1</sub>''' '''x''') {{=}} '''0'''}} and
{{nowrap|'''A<sub>&lambda;,m-1</sub>''' ('''A<sub>&beta;,1</sub>''' '''x''') &ne; '''0'''}},
which means
{{nowrap|'''A<sub>&beta;,1</sub>''' '''x'''}}
{{nowrap|&isin; '''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}}.
 
Now, let our ''induction hypothesis'' be,
{{nowrap|'''A<sub>&beta;,1</sub>'''('''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>''')}} = {{nowrap|'''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}}.
 
The relation {{nowrap| '''A<sub>&beta;,1</sub>''' '''x'''  {{=}}}}
{{nowrap|(&lambda; &minus; &beta;) '''x''' + '''A<sub>&lambda;,1</sub>''' '''x'''}} holds.
 
For {{nowrap| '''y''' &isin; '''N<sub>&lambda;,m+1</sub>''' &#92; '''N<sub>&lambda;, m</sub>'''}},
let {{nowrap| '''x''' {{=}} (&lambda; &minus; &beta;)<sup>-1</sup> '''y''' + '''z'''}}, with '''z''' to be chosen below.
 
Then
{{nowrap|'''A<sub>&beta;,1</sub>''' '''x'''}}
{{nowrap|{{=}} '''y''' + (&lambda; &minus; &beta;)<sup>-1</sup>'''A<sub>&lambda;,1</sub>''' '''y''' + (&lambda; &minus; &beta;) '''z''' + '''A<sub>&lambda;,1</sub>''' '''z'''}}
{{nowrap|{{=}} '''y''' + (&lambda; &minus; &beta;)<sup>-1</sup>'''A<sub>&lambda;,1</sub>''' '''y''' + '''A<sub>&beta;,1</sub>''' '''z'''}}.
 
Now, {{nowrap|'''A<sub>&lambda;,1</sub>''' '''y''' &isin; '''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}}
and, by the induction hypothesis, there exists
{{nowrap|'''z''' &isin; '''N<sub>&lambda;,m</sub>''' &#92; '''N<sub>&lambda;,m-1</sub>'''}} that solves
{{nowrap|'''A<sub>&beta;,1</sub>''' '''z''' {{=}} &minus;(&lambda; &minus; &beta;)<sup>-1</sup>'''A<sub>&lambda;,1</sub>''' '''y'''}}.
 
It follows {{nowrap|'''x''' &isin; '''N<sub>&lambda;,m+1</sub>''' &#92; '''N<sub>&lambda;,m</sub>'''}}
and solves {{nowrap|'''A<sub>&beta;,1</sub>''' '''x''' {{=}} '''y'''}}.
 
So {{nowrap|'''A<sub>&beta;,1</sub>'''('''N<sub>&lambda;,m+1</sub>''' &#92; '''N<sub>&lambda;,m</sub>''') {{=}}}}
{{nowrap|'''N<sub>&lambda;,m+1</sub>''' &#92; '''N<sub>&lambda;,m</sub>'''}}.
 
Repeatedly applying {{nowrap|'''A<sub>&beta;,r</sub>''' {{=}} '''A<sub>&beta;,1</sub>''' '''A<sub>&beta;,r-1</sub>'''}} finishes the proof.
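The theorem can be checked numerically on a small example (an illustrative sketch, not part of the proof): for a 2&nbsp;&times;&nbsp;2 Jordan block with eigenvalue &lambda;&nbsp;=&nbsp;2, the set {{nowrap|'''N<sub>&lambda;,2</sub>''' &#92; '''N<sub>&lambda;,1</sub>'''}} consists of the vectors whose second coordinate is nonzero, and applying {{nowrap|'''A<sub>&beta;,1</sub>'''}} with &beta;&nbsp;&ne;&nbsp;&lambda; leaves that set invariant.

```python
# Sanity check of the theorem on a 2x2 Jordan block (illustrative example).
# N_{lam,1} = span{e1}, N_{lam,2} = C^2, so x is in N_{lam,2} \ N_{lam,1}
# exactly when its second coordinate is nonzero.

def mat_vec(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

lam, beta = 2, 5
A = [[lam, 1], [0, lam]]
A_beta_1 = [[A[0][0] - beta, A[0][1]], [A[1][0], A[1][1] - beta]]  # A - beta*I

x = [7, 3]                # arbitrary vector outside N_{lam,1}
y = mat_vec(A_beta_1, x)  # y = (A - beta*I) x

print(y)                  # the second coordinate stays nonzero
assert y[1] != 0          # so y is again in N_{lam,2} \ N_{lam,1}
```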
 
 
In fact, from the theorem just proved, for {{nowrap|i ≠ j}},
{{nowrap|'''A<sub>λ<sub>i</sub>,k<sub>i</sub></sub>'''('''N<sub>λ<sub>j</sub>,k<sub>j</sub></sub>''')}}{{nowrap|{{=}} '''N<sub>λ<sub>j</sub>,k<sub>j</sub></sub>'''}}.
 
Now, suppose that
{{nowrap|'''N<sub>λ<sub>i</sub>,k<sub>i</sub></sub>''' ∩ '''N<sub>λ<sub>j</sub>,k<sub>j</sub></sub>''' ≠ '''{0}'''}},
for some {{nowrap|i ≠ j}}.
 
Choose a nonzero {{nowrap|'''x''' ∈ '''N<sub>λ<sub>i</sub>,k<sub>i</sub></sub>''' ∩
'''N<sub>λ<sub>j</sub>,k<sub>j</sub></sub>'''}}.
 
Since {{nowrap| '''x''' ∈ '''N<sub>λ<sub>i</sub>,k<sub>i</sub></sub>'''}}, it follows {{nowrap|'''A<sub>λ<sub>i</sub>,k<sub>i</sub></sub>''' '''x''' {{=}} '''0'''}}.
 
Since {{nowrap| '''x''' &isin; '''N<sub>&lambda;<sub>j</sub>,k<sub>j</sub></sub>'''}}
and {{nowrap|'''x''' &ne; '''0'''}},
it follows
{{nowrap|'''A<sub>&lambda;<sub>i</sub>,k<sub>i</sub></sub>''' '''x''' &ne; '''0'''}},
because, by the theorem above, '''A<sub>&lambda;<sub>i</sub>,k<sub>i</sub></sub>''' maps
{{nowrap|'''N<sub>&lambda;<sub>j</sub>,k<sub>j</sub></sub>''' &#92; '''{0}'''}} onto itself.
 
So it must be {{nowrap|'''N<sub>λ<sub>i</sub>,k<sub>i</sub></sub>''' ∩ '''N<sub>λ<sub>j</sub>,k<sub>j</sub></sub>''' {{=}} '''{0}'''}}, for {{nowrap|i ≠ j}}.
 
This concludes the proof of the ''Generalized Eigenspace Decomposition Theorem''.
 
=== Powers of a Matrix ===
<!-- <meta content="powers of a matrix"> -->
 
==== Using generalized eigenvectors ====
 
Assume '''A''' is an {{nowrap|n &times; n}} matrix with
''eigenvalues'' {{nowrap|'''λ<sub>1</sub>, λ<sub>2</sub>, ..., λ<sub>r</sub>'''}}
<br>of ''algebraic multiplicities'' {{nowrap|'''k<sub>1</sub>, k<sub>2</sub>, ..., k<sub>r</sub>'''}}.
 
For notational convenience
&nbsp;'''A<sub>λ,&nbsp;0</sub>&nbsp;&nbsp;=&nbsp;&nbsp;I'''.
 
Note that&nbsp;
'''A<sub>β,&nbsp;1</sub>&nbsp;&nbsp;=&nbsp;'''
&nbsp;'''(λ&nbsp;&minus;&nbsp;β)I'''
+&nbsp;'''A<sub>λ,&nbsp;1</sub>''',
and, since the two terms commute, the ''binomial theorem'' gives:
 
:<math>A_{\beta,s}=((\lambda-\beta)I+A_{\lambda,1})^s=\sum_{m=0}^s\binom{s}{m}(\lambda-\beta)^{s-m}A_{\lambda,m}</math>
 
When  '''λ''' is an ''eigenvalue'' of ''algebraic multiplicity'' &nbsp;'''k''',
and {{nowrap| '''x ∈ N<sub>λ, k</sub>''',}}<br>
&nbsp;then {{nowrap| '''A<sub>λ, m</sub> x {{=}} 0'''}}
for {{nowrap| '''m ≥ k'''}},&nbsp; so in this case:
 
:<math>A_{\beta,s}x=\sum_{m=0}^{\min(s,k-1)}\binom{s}{m}(\lambda-\beta)^{s-m}A_{\lambda,m}x</math>
 
Since &nbsp;
'''C<sup>n</sup>&nbsp;&nbsp;=&nbsp;
N<sub>&lambda;<sub>1</sub>,&nbsp;k<sub>1</sub></sub>&nbsp;{{resize|&oplus;}}&nbsp;
N<sub>&lambda;<sub>2</sub>,&nbsp;k<sub>2</sub></sub>&nbsp;{{resize|&oplus;}}&nbsp;
...&nbsp;{{resize|&oplus;}}&nbsp;
N<sub>&lambda;<sub>r</sub>,&nbsp;k<sub>r</sub></sub>''',
<br>
any '''x''' in {{nowrap| '''C<sup>n</sup>''' }} can be expressed as&nbsp;
'''x&nbsp;=&nbsp;x<sub>1</sub>&nbsp;+&nbsp;x<sub>2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;x<sub>r</sub>&nbsp;''',
<br>with each {{nowrap| '''x<sub>i</sub> &isin; N<sub>&lambda;<sub>i</sub>, k<sub>i</sub></sub>'''}}.
&nbsp;&nbsp;Hence:
 
:<math>A_{\beta,s}x=\sum_{i=1}^r\sum_{m=0}^{\min(s,k_i-1)}\binom{s}{m}(\lambda_i-\beta)^{s-m}A_{\lambda_i,m}x_i</math>
 
The ''columns'' of {{nowrap| '''A<sub>β, s</sub>''' }} are obtained by letting
&nbsp;'''x'''&nbsp; vary across the ''standard basis'' vectors.
 
The case {{nowrap| '''A<sub>0, s</sub>''' }} is the power
&nbsp;'''A<sup>s</sup>'''&nbsp;&nbsp;of&nbsp;&nbsp;'''A'''.
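The truncated sum can be sketched in Python (an illustrative example with an assumed 2&nbsp;&times;&nbsp;2 Jordan block, where &lambda; has multiplicity '''k&nbsp;=&nbsp;2''' and '''β&nbsp;=&nbsp;0''', so the formula produces '''A<sup>s</sup>''' directly):

```python
from math import comb

# Check A^s = sum_{m=0}^{min(s,k-1)} C(s,m) lam^(s-m) (A - lam I)^m
# on a 2x2 Jordan block; only the m = 0 and m = 1 terms survive,
# since (A - lam I)^2 = 0.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t]*Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, s):
    n = len(M)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(s):
        R = mat_mul(R, M)
    return R

lam, k, s = 2, 2, 5
A = [[lam, 1], [0, lam]]
I2 = [[1, 0], [0, 1]]
N = [[0, 1], [0, 0]]          # N = A - lam*I, nilpotent with N^2 = 0
terms = [I2, N]               # (A - lam I)^0 and (A - lam I)^1

S = [[sum(comb(s, m) * lam**(s-m) * terms[m][i][j]
          for m in range(min(s, k-1) + 1))
      for j in range(2)] for i in range(2)]

print(S)                      # equals A^5 by repeated multiplication
assert S == mat_pow(A, s)
```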
 
==== The minimal polynomial of a matrix ====
 
Assume '''A''' is an {{nowrap|n &times; n}} matrix with
''eigenvalues'' {{nowrap|'''λ<sub>1</sub>, λ<sub>2</sub>, ..., λ<sub>r</sub>'''}}
<br>of ''algebraic multiplicities'' {{nowrap|'''k<sub>1</sub>, k<sub>2</sub>, ..., k<sub>r</sub>'''}}.
 
For each {{nowrap| '''i''' }} define &nbsp;'''α(λ<sub>i</sub>)''',
the ''null index'' of {{nowrap| '''λ<sub>i</sub>''', }} to be the<br>smallest
positive integer {{nowrap| '''α''' }} such that
&nbsp;'''N<sub>λ<sub>i</sub>, α</sub>&nbsp;
=&nbsp;&nbsp;N<sub>λ<sub>i</sub>, k<sub>i</sub></sub>'''.
 
It is often the case that {{nowrap| '''α(λ<sub>i</sub>) &lt; k<sub>i</sub>'''}}.
 
Then&nbsp;
'''p(x)&nbsp;=&nbsp;∏&nbsp;(x&nbsp;&minus;&nbsp;λ<sub>i</sub>)<sup>α(λ<sub>i</sub>)</sup>'''
&nbsp;is the ''minimal polynomial'' for '''A'''.
 
To see this note&nbsp;
'''p(A)&nbsp;=&nbsp;∏&nbsp;A&nbsp;<sub>λ<sub>i</sub>,α(λ<sub>i</sub>)</sub>'''
&nbsp;and the factors can be commuted in any order.
 
So&nbsp;
'''p(A)&nbsp;(N<sub>λ<sub>j</sub>,&nbsp;k<sub>j</sub></sub>&nbsp;)&nbsp;=&nbsp;{0}''',
&nbsp;because&nbsp;
'''A&nbsp;<sub>λ<sub>j</sub>,α(λ<sub>j</sub>)</sub>'''
'''&nbsp;(N<sub>λ<sub>j</sub>,&nbsp;k<sub>j</sub></sub>&nbsp;)&nbsp;=&nbsp;{0}'''.
&nbsp;Since
 
'''C<sup>n</sup>&nbsp;&nbsp;=&nbsp;
N<sub>λ<sub>1</sub>,&nbsp;k<sub>1</sub></sub>&nbsp;{{resize|⊕}}&nbsp;
N<sub>λ<sub>2</sub>,&nbsp;k<sub>2</sub></sub>&nbsp;{{resize|⊕}}&nbsp;
...&nbsp;{{resize|⊕}}&nbsp;
N<sub>λ<sub>r</sub>,&nbsp;k<sub>r</sub></sub>''',
&nbsp;it is clear {{nowrap| '''p(A) {{=}} 0'''}}.
 
Now '''p(x)''' cannot be of smaller degree because&nbsp;
'''A&nbsp;<sub>β,&nbsp;1</sub>'''
'''(N<sub>λ<sub>j</sub>,&nbsp;k<sub>j</sub></sub>&nbsp;)&nbsp;=&nbsp;'''
'''N<sub>λ<sub>j</sub>,&nbsp;k<sub>j</sub></sub>&nbsp;''',
 
when {{nowrap| '''β ≠ λ<sub>j</sub>'''}},
and so
'''A&nbsp;<sub>λ<sub>j</sub>,α(λ<sub>j</sub>)</sub>'''
&nbsp;must be a factor of {{nowrap| '''p(A)''', }} for each &nbsp;'''j'''.
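A small numeric illustration (an assumed example, not from the discussion above): for the defective matrix below, the characteristic polynomial is '''(x&nbsp;&minus;&nbsp;1)<sup>3</sup>''' but the null index of the eigenvalue '''1''' is '''2''', so the minimal polynomial is '''(x&nbsp;&minus;&nbsp;1)<sup>2</sup>'''.

```python
# The minimal polynomial (x-1)^2 annihilates A while (x-1) does not.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t]*Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]
B = [[A[i][j] - (1 if i == j else 0) for j in range(3)]
     for i in range(3)]                     # B = A - I

Z = [[0]*3 for _ in range(3)]
print(B != Z, mat_mul(B, B) == Z)           # (A-I) != 0 but (A-I)^2 = 0
assert B != Z and mat_mul(B, B) == Z
```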
 
==== Using confluent Vandermonde matrices ====
 
An alternative strategy is to use the ''characteristic polynomial'' of matrix '''A'''.
 
Let &nbsp;
'''p(x)&nbsp;=&nbsp;a<sub>0</sub>&nbsp;+&nbsp;a<sub>1</sub>&nbsp;x&nbsp;+&nbsp;a<sub>2</sub>&nbsp;x<sup>2</sup>&nbsp;+&nbsp;...&nbsp;+'''
'''a<sub>n-1</sub>&nbsp;x<sup>n-1</sup>&nbsp;+&nbsp;x<sup>n</sup>'''
 
be the ''characteristic polynomial'' of '''A'''.
 
The ''minimal polynomial'' of '''A''', if known and of smaller degree, can be substituted for '''p(x)''' in this
discussion<br>to reduce the degree '''n''' and the multiplicities of the eigenvalues.
 
Then {{nowrap| '''p(A) {{=}} 0'''}} &nbsp;and&nbsp; {{nowrap|'''A<sup>n</sup>  {{=}} '''}}
'''&minus;(a<sub>0</sub>&nbsp;I&nbsp;+&nbsp;a<sub>1</sub>&nbsp;A&nbsp;+&nbsp;a<sub>2</sub>&nbsp;A<sup>2</sup>&nbsp;+&nbsp;...&nbsp;+'''
'''a<sub>n-1</sub>&nbsp;A<sup>n-1</sup>)'''.
 
So {{nowrap|  '''A<sup>n+m</sup>  {{=}} '''}}
'''b<sub>m,&nbsp;0</sub>&nbsp;I&nbsp;+&nbsp;b<sub>m,&nbsp;1</sub>&nbsp;A&nbsp;+&nbsp;b<sub>m,&nbsp;2</sub>&nbsp;A<sup>2</sup>&nbsp;+&nbsp;...&nbsp;+'''
'''b<sub>m,&nbsp;n-1</sub>&nbsp;A<sup>n-1</sup>''',
 
where the &nbsp;
'''b<sub>m,&nbsp;0</sub>,&nbsp;b<sub>m,&nbsp;1</sub>,&nbsp;b<sub>m,&nbsp;2</sub>,'''
'''...,&nbsp;b<sub>m,&nbsp;n-1</sub>,&nbsp;'''&nbsp;&nbsp;satisfy the recurrence relation
 
<br>'''b<sub>m,&nbsp;0</sub>&nbsp;=&nbsp;&minus;a<sub>0</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''b<sub>m,&nbsp;1</sub>&nbsp;=&nbsp;b<sub>m-1,&nbsp;0</sub>&nbsp;&minus;&nbsp;a<sub>1</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''b<sub>m,&nbsp;2</sub>&nbsp;=&nbsp;b<sub>m-1,&nbsp;1</sub>&nbsp;&minus;&nbsp;a<sub>2</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''...,'''
<br>'''b<sub>m,&nbsp;n-1</sub>&nbsp;=&nbsp;b<sub>m-1,&nbsp;n-2</sub>&nbsp;&minus;&nbsp;a<sub>n-1</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>'''
 
with &nbsp;
'''b<sub>0,&nbsp;0</sub>&nbsp;=&nbsp;&minus;a<sub>0</sub>,&nbsp;
b<sub>0,&nbsp;1</sub>&nbsp;=&nbsp;&minus;a<sub>1</sub>,&nbsp;...,&nbsp;
b<sub>0,&nbsp;n-1</sub>&nbsp;=&nbsp;&minus;a<sub>n-1</sub>''',
so that the case {{nowrap|'''m {{=}} 0'''}} reproduces the expression for '''A<sup>n</sup>''' above.
 
This alone reduces the number of multiplications needed to calculate a higher<br>
power of '''A''' by a factor of '''n<sup>2</sup>''', as compared to computing '''A<sup>n+m</sup>''' by repeated<br>
multiplication by '''A'''.
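The recurrence can be sketched in Python (an illustrative example; the seed is taken as {{nowrap|'''b<sub>0,&nbsp;k</sub> {{=}} &minus;a<sub>k</sub>'''}}, the coefficient vector of '''A<sup>n</sup>''' itself, so that the case '''m&nbsp;=&nbsp;0''' reproduces '''A<sup>n</sup>'''):

```python
# Advance the coefficient recurrence for the 2x2 Fibonacci matrix,
# whose characteristic polynomial is p(x) = x^2 - x - 1 (a_0 = a_1 = -1),
# and compare with A^5 computed by repeated multiplication.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t]*Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [1, 0]]
a = [-1, -1]                 # a_0, a_1 of p(x) = a_0 + a_1 x + x^2
n = 2
b = [-a[0], -a[1]]           # seed: A^2 = -(a_0 I + a_1 A) = I + A

# b_{m,0} = -a_0 b_{m-1,n-1};  b_{m,k} = b_{m-1,k-1} - a_k b_{m-1,n-1}
for m in range(3):           # after the loop, b represents A^(2+3) = A^5
    b = [-a[0]*b[n-1]] + [b[k-1] - a[k]*b[n-1] for k in range(1, n)]

A5 = A
for _ in range(4):
    A5 = mat_mul(A5, A)

I2 = [[1, 0], [0, 1]]
S = [[b[0]*I2[i][j] + b[1]*A[i][j] for j in range(2)] for i in range(2)]
print(b, S)
assert S == A5               # A^5 = b_0 I + b_1 A
```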
 
In fact the &nbsp;
'''b<sub>m,&nbsp;0</sub>,&nbsp;b<sub>m,&nbsp;1</sub>,&nbsp;b<sub>m,&nbsp;2</sub>,'''
'''...,&nbsp;b<sub>m,&nbsp;n-1</sub>,&nbsp;'''&nbsp;&nbsp;can be calculated by a formula.
 
Consider first when '''A''' has ''distinct eigenvalues'' &nbsp;
'''λ<sub>1</sub>,&nbsp;λ<sub>2</sub>,&nbsp;...,&nbsp;λ<sub>n</sub>'''.
<br>Since {{nowrap|'''p(λ<sub>i</sub>) {{=}} 0''', }} for each {{nowrap| '''i''', }}
the {{nowrap| '''λ<sub>i</sub>''' }} satisfy the recurrence relation also. So:
 
:<math>\begin{bmatrix}
1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{n-1} \\
1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \lambda_n & \lambda_n^2 & \cdots & \lambda_n^{n-1}
\end{bmatrix}\begin{bmatrix}
b_{m,0} \\ b_{m,1} \\ \vdots \\ b_{m,n-1}
\end{bmatrix}=\begin{bmatrix}
\lambda_1^{n+m} \\ \lambda_2^{n+m} \\ \vdots \\ \lambda_n^{n+m}
\end{bmatrix}</math>
 
The matrix {{nowrap| '''V''' }} in the equation is the well-studied ''Vandermonde matrix'',
<br>for which formulas for the determinant and inverse are known.
 
:<math>\det(V(\lambda_1,\lambda_2,\ldots,\lambda_n))=\prod_{1\le i<j\le n}(\lambda_j-\lambda_i)</math>
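The determinant formula is easy to confirm for small cases (an illustrative check with three assumed nodes):

```python
from itertools import permutations

# Check det V = prod_{i<j} (lam_j - lam_i) for three distinct nodes.

def det(M):
    """Leibniz-formula determinant, fine for a 3x3 matrix."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):                 # count inversions for the sign
            for j in range(i+1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

lams = [1, 3, 4]
V = [[l**j for j in range(3)] for l in lams]  # rows (1, lam, lam^2)
lhs = det(V)
rhs = 1
for i in range(3):
    for j in range(i+1, 3):
        rhs *= lams[j] - lams[i]

print(lhs, rhs)
assert lhs == rhs
```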
 
In the case that {{nowrap|''' λ<sub>2</sub> {{=}} λ<sub>1</sub> '''}},
consider instead when
'''&nbsp;λ<sub>1</sub>&nbsp;''' is near {{nowrap|''' λ<sub>2</sub> '''}},
and<br>subtract row {{nowrap| '''1''' }} from row &nbsp;'''2''', which does not
affect the determinant.
 
:<math>\begin{bmatrix}
1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{n-1} \\
0 & \lambda_2-\lambda_1 & \lambda_2^2-\lambda_1^2 & \cdots & \lambda_2^{n-1}-\lambda_1^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \lambda_n & \lambda_n^2 & \cdots & \lambda_n^{n-1}
\end{bmatrix}\begin{bmatrix}
b_{m,0} \\ b_{m,1} \\ \vdots \\ b_{m,n-1}
\end{bmatrix}=
\begin{bmatrix}
\lambda_1^{n+m} \\
\lambda_2^{n+m}-\lambda_1^{n+m} \\
\vdots \\
\lambda_n^{n+m}
\end{bmatrix}</math>
 
After dividing the second row by
&nbsp;'''(λ<sub>2</sub>&nbsp;&minus;&nbsp;λ<sub>1</sub>)'''&nbsp;
the determinant will be affected by<br>the removal of this factor and still be non-zero.
 
:<math>\begin{bmatrix}
0 &
\frac{\lambda_2-\lambda_1}{\lambda_2-\lambda_1} &
\frac{\lambda_2^2-\lambda_1^2}{\lambda_2-\lambda_1} &
\cdots &
\frac{\lambda_2^{n-1}-\lambda_1^{n-1}}{\lambda_2-\lambda_1}
\end{bmatrix}
\quad \begin{bmatrix}\frac{\lambda_2^{n+m}-\lambda_1^{n+m}}{\lambda_2-\lambda_1}\end{bmatrix}</math>
 
Taking the limit as {{nowrap| '''λ<sub>1</sub>→ λ<sub>2</sub>'''}},
the new system has the second row ''differentiated''.
 
:<math>\begin{bmatrix}
1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{n-1} \\
0 & 1 & 2\lambda_2 & \cdots & (n-1)\lambda_2^{n-2} \\
1 & \lambda_3 & \lambda_3^2 & \cdots & \lambda_3^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \lambda_n & \lambda_n^2 & \cdots & \lambda_n^{n-1}
\end{bmatrix}\begin{bmatrix}
b_{m,0} \\ b_{m,1} \\ \vdots \\ b_{m,n-1}
\end{bmatrix}=
\begin{bmatrix}
\lambda_2^{n+m} \\
(n+m)\lambda_2^{n+m-1} \\
\lambda_3^{n+m} \\
\vdots \\
\lambda_n^{n+m}
\end{bmatrix}</math>
 
The new system has determinant:
 
:<math>\det(V(\lambda_2,\ldots,\lambda_n))=\prod_{3\le j\le n}(\lambda_j-\lambda_2)^2\prod_{3\le i<j\le n}(\lambda_j-\lambda_i)</math>
 
In the case that {{nowrap|''' λ<sub>3</sub> {{=}} λ<sub>2</sub> '''}},
also, consider like before when
'''&nbsp;λ<sub>2</sub>&nbsp;''' is near ''' λ<sub>3</sub>''',
and<br>subtract row {{nowrap| '''1''' }} from row '''3''', which does not
affect the determinant. Next divide<br>row three by
&nbsp;'''(λ<sub>3</sub>&nbsp;&minus;&nbsp;λ<sub>2</sub>)'''&nbsp;
and then subtract row {{nowrap| '''2''' }} from the new row '''3'''
and<br>follow by dividing the resulting row '''3''' by
&nbsp;'''(λ<sub>3</sub>&nbsp;&minus;&nbsp;λ<sub>2</sub>)'''&nbsp;
again. This will affect the<br>determinant by removing a factor of
&nbsp;'''(λ<sub>3</sub>&nbsp;&minus;&nbsp;λ<sub>2</sub>)<sup>2</sup>'''.&nbsp;
 
Each element of row '''3''' is now of the form
 
:<math>((f(\lambda_3)-f(\lambda_2))/(\lambda_3-\lambda_2)-f'(\lambda_2))/(\lambda_3-\lambda_2)</math>
and
:<math>((f(\lambda_3)-f(\lambda_2))/(\lambda_3-\lambda_2)-f'(\lambda_2))/(\lambda_3-\lambda_2)\rightarrow\tfrac{1}{2}f''(\lambda_3)\text{  as }\lambda_2\rightarrow\lambda_3</math>
 
The effect is to differentiate twice and multiply by one half.
 
:<math>\begin{bmatrix}
1 & \lambda_3 & \lambda_3^2 & \lambda_3^3 & \cdots & \lambda_3^{n-1} \\
0 & 1 & 2\lambda_3 & 3\lambda_3^2 & \cdots & (n-1)\lambda_3^{n-2} \\
0 & 0 & 1 & 3\lambda_3 & \cdots & \tfrac{1}{2}(n-1)(n-2)\lambda_3^{n-3} \\
1 & \lambda_4 & \lambda_4^2 & \lambda_4^3 & \cdots & \lambda_4^{n-1} \\
\vdots & \vdots & \vdots & \vdots & & \vdots \\
1 & \lambda_n & \lambda_n^2 & \lambda_n^3 & \cdots & \lambda_n^{n-1}
\end{bmatrix}\begin{bmatrix}
b_{m,0} \\ b_{m,1} \\ \vdots \\ b_{m,n-1}
\end{bmatrix}=
\begin{bmatrix}
\lambda_3^{n+m} \\
(n+m)\lambda_3^{n+m-1} \\
\tfrac{1}{2}(n+m)(n+m-1)\lambda_3^{n+m-2} \\
\lambda_4^{n+m} \\
\vdots \\
\lambda_n^{n+m}
\end{bmatrix}</math>
 
The new system has determinant:
 
:<math>\det(V(\lambda_3,\ldots,\lambda_n))=\prod_{4\le j\le n}(\lambda_j-\lambda_3)^3\prod_{4\le i<j\le n}(\lambda_j-\lambda_i)</math>
 
If the multiplicity of an eigenvalue were higher still, the next
row would<br>be differentiated three times and multiplied by {{nowrap| '''1/3!'''. }}
In general the progression is '''1/s!&nbsp;f<sup>(s)</sup>''', with the<br>constant coming from the
coefficients of the derivatives in the ''Taylor'' expansion. This is<br>done for
each ''eigenvalue'' of ''algebraic multiplicity'' greater than '''1'''.
 
'''Example'''
 
The matrix
<math>A=\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
3 & 1 & 0 & 0 & 0 \\
6 & 3 & 2 & 0 & 0 \\
10 & 6 & 3 & 2 & 0 \\
15 & 10 & 6 & 3 & 2
\end{bmatrix}</math>
 
has ''characteristic polynomial''&nbsp;
'''p(x)&nbsp;=&nbsp;(x&nbsp;&minus;&nbsp;1)<sup>2</sup>(x&nbsp;&minus;&nbsp;2)<sup>3</sup>'''.
 
The &nbsp;
'''b<sub>m,&nbsp;0</sub>,&nbsp;b<sub>m,&nbsp;1</sub>,&nbsp;b<sub>m,&nbsp;2</sub>,'''
'''&nbsp;b<sub>m,&nbsp;3</sub>,&nbsp;b<sub>m,&nbsp;4</sub>,&nbsp;'''&nbsp;&nbsp;
for which
 
'''A<sup>5+m</sup>&nbsp;&nbsp;=&nbsp;'''
'''b<sub>m,&nbsp;0</sub>&nbsp;I&nbsp;+&nbsp;b<sub>m,&nbsp;1</sub>&nbsp;A&nbsp;+&nbsp;b<sub>m,&nbsp;2</sub>&nbsp;A<sup>2</sup>&nbsp;+'''
'''b<sub>m,&nbsp;3</sub>&nbsp;A<sup>3</sup>&nbsp;+&nbsp;b<sub>m,&nbsp;4</sub>&nbsp;A<sup>4</sup>''',<br>
<br>satisfy the following ''confluent'' Vandermonde system.
 
:<math>\begin{bmatrix}
1 & 1 & 1^2 & 1^3 & 1^4 \\
0 & 1 & 2\cdot 1 & 3 \cdot 1^2 & 4\cdot 1^3 \\
1 & 2 & 2^2 & 2^3 & 2^4 \\
0 & 1 & 2\cdot 2 & 3\cdot 2^2 & 4\cdot 2^3 \\
0 & 0 & 1 & 3\cdot 2 & 6\cdot 2^2
\end{bmatrix}
\begin{bmatrix} b_{m,0} \\ b_{m,1} \\ b_{m,2} \\ b_{m,3} \\ b_{m,4} \end{bmatrix} =
\begin{bmatrix}
1^{5+m} \\
(5+m)\cdot 1^{5+m-1} \\
2^{5+m} \\
(5+m)\cdot 2^{5+m-1} \\
\tfrac{1}{2}(5+m)(5+m-1)\cdot 2^{5+m-2}
\end{bmatrix}</math>
 
:<math>\begin{bmatrix} b_{m,0} \\ b_{m,1} \\ b_{m,2} \\ b_{m,3} \\ b_{m,4} \end{bmatrix} =
\begin{bmatrix}
-16 & -8 & 17 & -10 & 4 \\
48 & 20 & -48 & 29 & -12 \\
-48 & -18 & 48 & -30 & 13 \\
20 & 7 & -20 & 13 & -6 \\
-3 & -1 & 3 & -2 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\
(5+m) \\
32 \cdot 2^m \\
16(5+m) \cdot 2^m \\
4(5+m)(5+m-1) \cdot 2^m
\end{bmatrix}</math>
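The example can be verified with exact arithmetic (a sketch that solves the confluent system for '''m&nbsp;=&nbsp;0''' directly, independently of the inverse displayed above):

```python
from fractions import Fraction

# Build the confluent Vandermonde system for m = 0, solve it over the
# rationals, and check that A^5 = b_0 I + b_1 A + ... + b_4 A^4.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][t]*Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def solve(M, rhs):
    """Gauss-Jordan elimination with partial pivoting over Fraction."""
    n = len(M)
    M = [[Fraction(x) for x in row] + [Fraction(r)]
         for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f*y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

V = [[1, 1, 1, 1, 1],      # row for eigenvalue 1
     [0, 1, 2, 3, 4],      # its derivative row
     [1, 2, 4, 8, 16],     # rows for eigenvalue 2 ...
     [0, 1, 4, 12, 32],
     [0, 0, 1, 6, 24]]
rhs = [1, 5, 32, 80, 80]   # [1^5, 5*1^4, 2^5, 5*2^4, (1/2)*5*4*2^3]

b = solve(V, rhs)

A = [[1, 0, 0, 0, 0], [3, 1, 0, 0, 0], [6, 3, 2, 0, 0],
     [10, 6, 3, 2, 0], [15, 10, 6, 3, 2]]
I5 = [[1 if i == j else 0 for j in range(5)] for i in range(5)]
powers = [I5]
for _ in range(5):
    powers.append(mat_mul(powers[-1], A))

S = [[sum(b[k]*powers[k][i][j] for k in range(5)) for j in range(5)]
     for i in range(5)]
assert S == powers[5]      # A^5 reconstructed from the b coefficients
print("A^5 check passed")
```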
 
====Using difference equations====
 
<p style="line-height:150%;">
Returning to the recurrence relation  for&nbsp;
'''b<sub>m,&nbsp;0</sub>,&nbsp;b<sub>m,&nbsp;1</sub>,&nbsp;b<sub>m,&nbsp;2</sub>,'''
'''...,&nbsp;b<sub>m,&nbsp;n-1</sub>,'''
<br>'''b<sub>m,&nbsp;0</sub>&nbsp;=&nbsp;&minus;a<sub>0</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''b<sub>m,&nbsp;1</sub>&nbsp;=&nbsp;b<sub>m-1,&nbsp;0</sub>&nbsp;&minus;&nbsp;a<sub>1</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''b<sub>m,&nbsp;2</sub>&nbsp;=&nbsp;b<sub>m-1,&nbsp;1</sub>&nbsp;&minus;&nbsp;a<sub>2</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''...,'''
<br>'''b<sub>m,&nbsp;n-1</sub>&nbsp;=&nbsp;b<sub>m-1,&nbsp;n-2</sub>&nbsp;&minus;&nbsp;a<sub>n-1</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>'''
<br>with &nbsp;
'''b<sub>0,&nbsp;0</sub>&nbsp;=&nbsp;&minus;a<sub>0</sub>,&nbsp;
b<sub>0,&nbsp;1</sub>&nbsp;=&nbsp;&minus;a<sub>1</sub>,&nbsp;...,&nbsp;
b<sub>0,&nbsp;n-1</sub>&nbsp;=&nbsp;&minus;a<sub>n-1</sub>'''.<br>
</p>
 
<p style="line-height:150%;">
Upon substituting the first relation into the second,
<br>'''b<sub>m,&nbsp;1</sub>&nbsp;=&nbsp;&nbsp;&minus;a<sub>0</sub>&nbsp;b<sub>m-2,&nbsp;n-1</sub>&nbsp;&minus;&nbsp;a<sub>1</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>and now this one into the {{nowrap|next  }}
'''b<sub>m,&nbsp;2</sub>&nbsp;=&nbsp;b<sub>m-1,&nbsp;1</sub>&nbsp;&minus;&nbsp;a<sub>2</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''b<sub>m,&nbsp;2</sub>&nbsp;=
&nbsp;&minus;a<sub>0</sub>&nbsp;b<sub>m-3,&nbsp;n-1</sub>&nbsp;&minus;&nbsp;a<sub>1</sub>&nbsp;b<sub>m-2,&nbsp;n-1</sub>
&minus;&nbsp;a<sub>2</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>,&nbsp;'''
<br>'''...,'''&nbsp;&nbsp;and so on, the following difference equation is found:
<br>'''b<sub>m,&nbsp;n-1</sub>&nbsp;=
&nbsp;&minus;a<sub>0</sub>&nbsp;b<sub>m-n,&nbsp;n-1</sub>&nbsp;&minus;&nbsp;a<sub>1</sub>&nbsp;b<sub>m-n+1,&nbsp;n-1</sub>
&nbsp;&minus;&nbsp;a<sub>2</sub>&nbsp;b<sub>m-n+2,&nbsp;n-1</sub>
&nbsp;&minus;&nbsp;...&nbsp;&minus;&nbsp;a<sub>n-2</sub>&nbsp;b<sub>m-2,&nbsp;n-1</sub>&nbsp;&minus;&nbsp;a<sub>n-1</sub>&nbsp;b<sub>m-1,&nbsp;n-1</sub>'''
<br>with the starting values &nbsp;
'''b<sub>m,&nbsp;n-1</sub>&nbsp;=&nbsp;0''', for {{nowrap|'''m {{=}} &minus;n, ..., &minus;2'''}},
&nbsp;and&nbsp;'''b<sub>&minus;1,&nbsp;n-1</sub>&nbsp;=&nbsp;1'''.<br>
</p>
 
See the subsection on ''linear difference equations'' for more explanation.
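The scalar difference equation can be sketched in Python (an illustrative example; the sequence is seeded at negative indices so that its value at index '''m''' is the coefficient of '''A<sup>n-1</sup>''' in '''A<sup>n+m</sup>'''):

```python
# For the 2x2 Fibonacci matrix, p(x) = x^2 - x - 1, so the scalar
# difference equation is b_{m,1} = b_{m-2,1} + b_{m-1,1}.

def mat_mul(X, Y):
    return [[sum(X[i][t]*Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [1, 0]]

seq = [0, 1]                       # b_{-2,1} = 0, b_{-1,1} = 1
for _ in range(6):
    seq.append(seq[-2] + seq[-1])  # b_{m,1} for m = 0, 1, 2, ...

# Since A^(2+m) = b_{m,0} I + b_{m,1} A and the top-right entry of I is 0
# while that of A is 1, the coefficient b_{m,1} is the top-right entry
# of A^(2+m).
P = mat_mul(A, A)                  # A^2
for m in range(4):
    assert seq[2 + m] == P[0][1]
    P = mat_mul(P, A)
print(seq[2:6])
```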
 
===Chains of generalized eigenvectors===
 
Some notation and results from previous sections are restated.
 
<ul style="margin-left:15;">
 
<li>
'''A''' is an {{nowrap|n &times; n}} matrix of complex numbers.
</li>
<li style="line-height:175%">
'''A<sub>λ,&nbsp;k</sub>&nbsp;=&nbsp;(A&nbsp;&minus;&nbsp;λ&nbsp;I)<sup>k</sup>'''
</li>
<li style="line-height:175%">
'''N<sub>λ,&nbsp;k</sub>&nbsp;=&nbsp;N((A&nbsp;&minus;&nbsp;λ&nbsp;I)<sup>k</sup>)
&nbsp;=&nbsp;N(A<sub>λ,&nbsp;k</sub>)'''
</li>
<li>
For
&nbsp;{{nowrap|'''V<sub>1</sub>''' {{resize|140%|'''∩'''}} '''V<sub>2</sub> {{=}} {0}'''}},
&nbsp;{{nowrap|'''V<sub>1</sub>''' {{resize|140%|'''⊕'''}} '''V<sub>2</sub>''' }}
'''=&nbsp;{v<sub>1</sub>&nbsp;+&nbsp;v<sub>2</sub>&nbsp;: {{nowrap|v<sub>1</sub> ∈ V<sub>1</sub>}}
'''and'''&nbsp;v<sub>2</sub>&nbsp;∈&nbsp;V<sub>2</sub>}'''.
</li>
 
</ul>
 
Assume '''A''' has
''eigenvalues'' {{nowrap|'''λ<sub>1</sub>, λ<sub>2</sub>, ..., λ<sub>r</sub>'''}}
<br>of ''algebraic multiplicities'' {{nowrap|'''k<sub>1</sub>, k<sub>2</sub>, ..., k<sub>r</sub>'''}}.
 
For each {{nowrap| '''i''' }} define &nbsp;'''α(λ<sub>i</sub>)''',
the ''null index'' of {{nowrap| '''λ<sub>i</sub>''', }} to be the<br>smallest
positive integer {{nowrap| '''α''' }} such that
&nbsp;'''N<sub>λ<sub>i</sub>, α</sub>&nbsp;
=&nbsp;&nbsp;N<sub>λ<sub>i</sub>, k<sub>i</sub></sub>'''.
 
It is always the case that {{nowrap|'''α(λ<sub>i</sub>) ≤ k<sub>i</sub>'''}}.
 
When {{nowrap|'''α(λ) ≥ 2'''}},
 
<p style="line-height:175%">
'''N<sub>&lambda;,&nbsp;1</sub>&nbsp;&sub;&nbsp;
N<sub>&lambda;,&nbsp;2</sub>&nbsp;&sub;&nbsp;...&nbsp;&sub;&nbsp;
N<sub>&lambda;,&nbsp;&nbsp;&alpha;-1</sub>&nbsp;&sub;&nbsp;N<sub>&lambda;,&nbsp;&alpha;</sub>
&nbsp;=&nbsp;N<sub>&lambda;,&nbsp;&alpha;+1</sub>&nbsp;=&nbsp;...''',<br>
'''&nbsp;N<sub>&lambda;,&nbsp;&alpha;</sub>&nbsp;&#92;&nbsp;{0}&nbsp;=&nbsp;&cup;
&nbsp;(N<sub>&lambda;,&nbsp;m</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;m-1</sub>)''',
for {{nowrap|'''m {{=}} 1, ..., &alpha;'''}} and {{nowrap|''' N<sub>&lambda;, 0</sub> {{=}} {0}'''.}}
</p>
 
For {{nowrap|'''m ≥ 2'''}},&nbsp;
'''x&nbsp;∈&nbsp;N<sub>λ,&nbsp;m</sub>&nbsp;\&nbsp;N<sub>λ,&nbsp;m-1</sub>''',
&nbsp;if and only if&nbsp;
'''A<sub>λ,&nbsp;1</sub>&nbsp;x&nbsp;∈&nbsp;N<sub>λ,&nbsp;m-1</sub>&nbsp;\&nbsp;N<sub>λ,&nbsp;m-2</sub>'''.
 
Define a &nbsp;'''''chain''''' of ''generalized eigenvectors'' to be a set<br>
'''{&nbsp;x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>m</sub>&nbsp;}'''
&nbsp;such that&nbsp;
'''x<sub>1</sub>&nbsp;∈&nbsp;N<sub>λ,&nbsp;m</sub>&nbsp;\&nbsp;N<sub>λ,&nbsp;m-1</sub>''',
&nbsp;and&nbsp;
'''x<sub>i+1</sub>&nbsp;=&nbsp;A<sub>λ,&nbsp;1</sub>&nbsp;x<sub>i</sub>'''.
 
Then {{nowrap| '''x<sub>m</sub> ≠ 0''' }} and
&nbsp;'''A<sub>λ,&nbsp;1</sub>&nbsp;x<sub>m</sub>&nbsp;=&nbsp;0'''.
 
When&nbsp;
'''x<sub>1</sub>&nbsp;∈&nbsp;N<sub>λ,&nbsp;1</sub>&nbsp;\&nbsp;{0}''',
&nbsp;the singleton &nbsp;'''{x<sub>1</sub>}'''&nbsp; can, for the sake of not requiring extra<br>
terminology, be considered ''trivially'' a ''chain''.
 
When a ''disjoint'' collection of ''chains'', combined, forms a ''basis''
for {{nowrap|'''N<sub>λ, α(λ)</sub>'''}},<br>the ''chains'' are often referred
to as ''Jordan chains'', and their vectors are used for<br> the columns of a
''transformation'' matrix in the ''Jordan canonical form''.
 
When a ''disjoint'' collection of ''chains'' that, combined, forms a ''basis''
<br>is needed whose members satisfy&nbsp;
'''β<sub>i+1</sub>x<sub>i+1</sub>&nbsp;=&nbsp;A<sub>λ,&nbsp;1</sub>&nbsp;x<sub>i</sub>''',
for some scalars {{nowrap| '''β<sub>i</sub>'''}}, ''chains''<br>as already defined
can be scaled for this purpose.
 
What will be proven here is that such a ''disjoint''  collection of ''chains''
<br>can always be constructed.
 
<p style="line-height:175%">
Before the proof is started, recall a few facts about ''direct sums''.<br>
When the notation {{nowrap|'''V<sub>1</sub>''' &oplus; '''V<sub>2</sub>'''}}
is used, it is assumed
{{nowrap|'''V<sub>1</sub>''' {{resize|140%|'''&cap;'''}} '''V<sub>2</sub> {{=}} {0}'''.}}
<br>For&nbsp;
'''x&nbsp;=&nbsp;v<sub>1</sub>&nbsp;+&nbsp;v<sub>2</sub>&nbsp;'''&nbsp;with&nbsp;'''v<sub>1</sub>&nbsp;&isin;&nbsp;V<sub>1</sub>
'''and'''&nbsp;v<sub>2</sub>&nbsp;&isin;&nbsp;V<sub>2</sub>'''&nbsp;,&nbsp;
then {{nowrap| '''x {{=}} 0''',<br>if}} and only if&nbsp;
'''v<sub>1</sub>&nbsp;=&nbsp;v<sub>2</sub>&nbsp;=&nbsp;0'''.
</p>
 
In the discussion below<br>
'''δ<sub>i</sub>&nbsp;&nbsp;='''
'''&nbsp;dim(N<sub>λ,&nbsp;i</sub>)&nbsp;&nbsp;&minus;'''
'''&nbsp;dim(N<sub>λ,&nbsp;i&minus;1</sub>)''',&nbsp;&nbsp;with&nbsp;
'''δ<sub>1</sub>&nbsp;&nbsp;='''
'''&nbsp;dim(N<sub>λ,&nbsp;1</sub>)'''.
 
<p style="line-height:175%">
First consider when
'''&nbsp;N<sub>&lambda;,&nbsp;2</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;1</sub>
&ne;&nbsp;{0}'''. Then a ''basis'' for
'''&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;''' can be<br>''extended'' to a ''basis'' for
'''&nbsp;N<sub>&lambda;,&nbsp;2</sub>'''.&nbsp; If&nbsp;
'''&delta;<sub>2</sub>&nbsp;&nbsp;=&nbsp;1''', then there exists
'''x<sub>1</sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;2</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;1</sub>''',
<br>such that
'''&nbsp;N<sub>&lambda;,&nbsp;2</sub> ='''
'''&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;&oplus;&nbsp;span{x<sub>1</sub>}'''.
&nbsp;Let&nbsp;
'''x<sub>2</sub>&nbsp;=&nbsp;A<sub>&lambda;,&nbsp;1</sub>&nbsp;x<sub>1</sub>'''.
&nbsp;Then<br>
'''x<sub>2</sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;&#92;&nbsp;{0}''',
&nbsp;with '''x<sub>1</sub>''' and '''x<sub>2</sub>''' ''linearly independent''.
If {{nowrap|'''dim(N<sub>&lambda;, 2</sub>) {{=}} 2'''}},<br>then, since
'''{x<sub>1</sub>,&nbsp;x<sub>2</sub>}'''&nbsp; is a ''chain'', we are through.
Otherwise {{nowrap|'''x<sub>1</sub>, x<sub>2</sub>''' }} can be extended<br>to a ''basis''
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>, ...,&nbsp;x<sub>&delta;<sub>1</sub>+1</sub>'''
for {{nowrap| '''N<sub>&lambda;, 2</sub>'''. }} The sets
'''{x<sub>1</sub>,&nbsp;x<sub>2</sub>},&nbsp;{x<sub>3</sub>},&nbsp;...,&nbsp;{x<sub>&delta;<sub>1</sub>+1</sub>}'''
<br>form a ''disjoint'' collection of ''chains''.
In the case that
'''&delta;<sub>2</sub>&nbsp;&nbsp;&gt;&nbsp;1''', then there exist<br>
''linearly independent''&nbsp;
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;2</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;1</sub>''',
&nbsp;such that<br>
'''&nbsp;N<sub>&lambda;,&nbsp;2</sub> ='''
'''&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;&oplus;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>}'''.
&nbsp;Let&nbsp;
'''y<sub>i</sub>&nbsp;=&nbsp;A<sub>&lambda;,&nbsp;1</sub>&nbsp;x<sub>i</sub>'''.
<br>Then &nbsp;
'''y<sub>i</sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;&#92;&nbsp;{0}''',
&nbsp;for &nbsp;'''i = {{nowrap|1, 2, ..., &delta;<sub>2</sub>'''}}.
&nbsp;To see the {{nowrap| '''y<sub>1</sub>, y<sub>2</sub>, ..., y<sub>&delta;<sub>2</sub></sub>'''}}
<br>are ''linearly independent'', assume that for some
&nbsp;'''&beta;<sub>1</sub>,&nbsp;&beta;<sub>2</sub>,&nbsp;...,&nbsp;&beta;<sub>&delta;<sub>2</sub></sub>''',
<br>that
&nbsp;'''&beta;<sub>1</sub>y<sub>1</sub>&nbsp;+&nbsp;&beta;<sub>2</sub>y<sub>2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;&beta;<sub>&delta;<sub>2</sub></sub>y<sub>&delta;<sub>2</sub></sub>&nbsp;=&nbsp;0'''.
Then, for
&nbsp;'''x&nbsp;=&nbsp;&beta;<sub>1</sub>x<sub>1</sub>&nbsp;+&nbsp;&beta;<sub>2</sub>x<sub>2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;&beta;<sub>&delta;<sub>2</sub></sub>x<sub>&delta;<sub>2</sub></sub>''',
<br>'''x&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;''',
&nbsp;and
'''x&nbsp;&isin;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>}''',
which implies that {{nowrap|'''x {{=}} 0'''}}, &nbsp;and<br>
&nbsp;'''&beta;<sub>1</sub>=&nbsp;&beta;<sub>2</sub>=&nbsp;...&nbsp;=&nbsp;&beta;<sub>&delta;<sub>2</sub></sub>&nbsp;=&nbsp;0'''.
&nbsp;Since&nbsp;'''span{y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>2</sub></sub>}'''
'''&sube;&nbsp;N<sub>&lambda;,&nbsp;1</sub>''', the vectors<br>
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>'''
''',&nbsp;y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>2</sub></sub>'''
&nbsp;are a ''linearly independent'' set.<br>
If {{nowrap| '''&delta;<sub>2</sub> {{=}} &delta;<sub>1</sub>''', }}
then the sets
'''{x<sub>1</sub>,&nbsp;y<sub>1</sub>},&nbsp;{x<sub>2</sub>,&nbsp;y<sub>2</sub>},&nbsp;...,&nbsp;{x<sub>&delta;<sub>2</sub></sub>,&nbsp;y<sub>&delta;<sub>2</sub></sub>}'''
&nbsp;form a<br>''disjoint'' collection of ''chains'' that when combined are a ''basis set''
for {{nowrap|''' N<sub>&lambda;, 2</sub>'''.}}<br>
If {{nowrap| '''&delta;<sub>1</sub> &gt; &delta;<sub>2</sub>''', }}
then&nbsp;
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>'''
''',&nbsp;y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>2</sub></sub>'''
&nbsp;can be extended to a ''basis''<br>for {{nowrap|''' N<sub>&lambda;, 2</sub>''' }}
by some vectors
'''x<sub>&delta;<sub>2</sub>+1</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>1</sub></sub>'''
&nbsp;in {{nowrap|''' N<sub>&lambda;, 1</sub>''', }} so that<br>
'''{x<sub>1</sub>,&nbsp;y<sub>1</sub>},&nbsp;{x<sub>2</sub>,&nbsp;y<sub>2</sub>},&nbsp;...,&nbsp;{x<sub>&delta;<sub>2</sub></sub>,&nbsp;y<sub>&delta;<sub>2</sub></sub>}'''
''',&nbsp;{x<sub>&delta;<sub>2</sub>+1</sub>},&nbsp;...,&nbsp;{x<sub>&delta;<sub>1</sub></sub>}'''
<br>forms a ''disjoint'' collection of ''chains''.
</p>
 
To reduce redundancy, in the next paragraph, when {{nowrap| '''δ {{=}} 1''' }}
the notation<br>
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>δ</sub>'''
will be understood simply to mean just '''x<sub>1</sub>''' and when
&nbsp;'''δ&nbsp;=&nbsp;2'''&nbsp;<br>to mean
&nbsp;'''x<sub>1</sub>,&nbsp;x<sub>2</sub>'''.
 
<p style="line-height:175%">
So far it has been shown that, if ''linearly independent''<br>
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;2</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;1</sub>''',
&nbsp;are chosen, such that<br>
'''&nbsp;N<sub>&lambda;,&nbsp;2</sub> ='''
'''&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;&oplus;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>}''',
&nbsp;then there exists a ''disjoint''<br>collection of ''chains'' with each of the
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>2</sub></sub>&nbsp;'''
being the first member or ''top''<br>of one of the ''chains''. Furthermore, this
collection of ''vectors'', when combined,<br>forms a ''basis'' for
'''&nbsp;N<sub>&lambda;,&nbsp;2</sub>'''.
</p>
 
<p style="line-height:175%">
Now, let the ''induction hypothesis'' be that, if ''linearly independent''<br>
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m</sub></sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;m&minus;1</sub>''',
&nbsp;are chosen, such that<br>
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;='''
'''&nbsp;N<sub>&lambda;,&nbsp;m&minus;1</sub>&nbsp;&oplus;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m</sub></sub>}''',
&nbsp;then there exists a ''disjoint''<br>collection of ''chains'' with each of the
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m</sub></sub>&nbsp;'''
being the first member or ''top''<br>of one of the ''chains''. Furthermore, this
collection of ''vectors'', when combined,<br>forms a ''basis'' for
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>'''.
</p>
 
<p style="line-height:175%">
Consider {{nowrap| '''m &lt; &alpha;(&lambda;)'''. }} A ''basis'' for
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>''' can always be ''extended'' to a ''basis''
for<br>'''&nbsp;N<sub>&lambda;,&nbsp;m+1</sub>'''. So ''linearly independent''&nbsp;
'''x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m+1</sub></sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;m+1</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;m</sub>''',
&nbsp;such that<br>
'''&nbsp;N<sub>&lambda;,&nbsp;m+1</sub>&nbsp;='''
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;&oplus;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m+1</sub></sub>}''',
&nbsp;can be chosen.
&nbsp;Let&nbsp;
'''y<sub>i</sub>&nbsp;=&nbsp;A<sub>&lambda;,&nbsp;1</sub>&nbsp;x<sub>i</sub>'''.
<br>Then &nbsp;
'''y<sub>i</sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;m&minus;1</sub>''',
&nbsp;for &nbsp;'''i = {{nowrap|1, 2, ..., &delta;<sub>m+1</sub>'''}}.
&nbsp;To see the {{nowrap| '''y<sub>1</sub>, y<sub>2</sub>, ..., y<sub>&delta;<sub>m+1</sub></sub>'''}}
<br>are ''linearly independent'', assume that for some
&nbsp;'''&beta;<sub>1</sub>,&nbsp;&beta;<sub>2</sub>,&nbsp;...,&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>''',
<br>that
&nbsp;'''&beta;<sub>1</sub>y<sub>1</sub>&nbsp;+&nbsp;&beta;<sub>2</sub>y<sub>2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>y<sub>&delta;<sub>m+1</sub></sub>&nbsp;=&nbsp;0'''.
Then for<br>
&nbsp;'''x&nbsp;=&nbsp;&beta;<sub>1</sub>x<sub>1</sub>&nbsp;+&nbsp;&beta;<sub>2</sub>x<sub>2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>x<sub>&delta;<sub>m+1</sub></sub>''',
&nbsp;&nbsp;'''x&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;1</sub>&nbsp;''',
&nbsp;and<br>
'''x&nbsp;&isin;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m+1</sub></sub>}''',
which implies that {{nowrap|'''x {{=}} 0'''}}, &nbsp;and<br>
&nbsp;'''&beta;<sub>1</sub>=&nbsp;&beta;<sub>2</sub>=&nbsp;...&nbsp;=&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>&nbsp;=&nbsp;0'''.
&nbsp;In addition,
'''&nbsp;span{y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>m+1</sub></sub>}'''
'''&cap;&nbsp;N<sub>&lambda;,&nbsp;m&minus;1</sub>&nbsp;=&nbsp;{0}'''.
<br>To see this assume that for some
&nbsp;'''&beta;<sub>1</sub>,&nbsp;&beta;<sub>2</sub>,&nbsp;...,&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>''',
<br>that
&nbsp;'''&beta;<sub>1</sub>y<sub>1</sub>&nbsp;+&nbsp;&beta;<sub>2</sub>y<sub>2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>y<sub>&delta;<sub>m+1</sub></sub>&nbsp;&isin;&nbsp;'''
'''N<sub>&lambda;,&nbsp;m&minus;1</sub>'''.
Then for<br>
&nbsp;'''x&nbsp;=&nbsp;&beta;<sub>1</sub>x<sub>1</sub>&nbsp;+&nbsp;&beta;<sub>2</sub>x<sub>2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>x<sub>&delta;<sub>m+1</sub></sub>''',
&nbsp;&nbsp;'''x&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;''',
&nbsp;and<br>
'''x&nbsp;&isin;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m+1</sub></sub>}''',
which implies that {{nowrap|'''x {{=}} 0'''}}, &nbsp;and<br>
&nbsp;'''&beta;<sub>1</sub>=&nbsp;&beta;<sub>2</sub>=&nbsp;...&nbsp;=&nbsp;&beta;<sub>&delta;<sub>m+1</sub></sub>&nbsp;=&nbsp;0'''.
&nbsp;The proof is nearly done.
</p>
 
<p style="line-height:175%">
At this point suppose that&nbsp;
'''b<sub>1</sub>, {{nowrap|b<sub>2</sub>, ..., b<sub>d<sub>m&minus;1</sub></sub>'''}}
&nbsp;is any ''basis'' for '''N<sub>&lambda;,&nbsp;m&minus;1</sub>'''.<br>
Then&nbsp;&nbsp;'''''B''&nbsp;=&nbsp;'''
'''span{b<sub>1</sub>,&nbsp;b<sub>2</sub>,&nbsp;...,&nbsp;b<sub>d<sub>m&minus;1</sub></sub>}&nbsp;&oplus;'''
'''span{y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>m+1</sub></sub>}'''
<br>is a ''subspace'' of '''N<sub>&lambda;,&nbsp;m</sub>'''.
&nbsp;If&nbsp;&nbsp;'''''B''&nbsp;&ne;&nbsp;N<sub>&lambda;,&nbsp;m</sub>''',
&nbsp;then<br>
'''b<sub>1</sub>,&nbsp;b<sub>2</sub>,&nbsp;...,&nbsp;b<sub>d<sub>m&minus;1</sub></sub>,&nbsp;'''
'''y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>m+1</sub></sub>'''
can be ''extended'' to a ''basis'' for '''N<sub>&lambda;,&nbsp;m</sub>''',
<br>by some set of vectors&nbsp;
'''z<sub>1</sub>,&nbsp;z<sub>2</sub>,&nbsp;...,&nbsp;z<sub>(&delta;<sub>m</sub>&minus;&nbsp;&delta;<sub>m+1</sub>)</sub>'''&nbsp;,
in which case<br>
'''N<sub>&lambda;,&nbsp;m</sub>&nbsp;='''
'''&nbsp;N<sub>&lambda;,&nbsp;m&minus;1</sub>&nbsp;&oplus;&nbsp;span{y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>m+1</sub></sub>}'''
'''&oplus;&nbsp;span{z<sub>1</sub>,&nbsp;z<sub>2</sub>,&nbsp;...,&nbsp;z<sub>(&delta;<sub>m</sub>&minus;&nbsp;&delta;<sub>m+1</sub>)</sub>}'''.
</p>
 
<p style="line-height:175%">
If {{nowrap| '''&delta;<sub>m</sub> {{=}} &delta;<sub>m+1</sub>''', }} then<br>
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;='''
'''&nbsp;N<sub>&lambda;,&nbsp;m&minus;1</sub>&nbsp;&oplus;&nbsp;span{y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>m+1</sub></sub>}'''
<br>or if {{nowrap| '''&delta;<sub>m</sub> &gt; &delta;<sub>m+1</sub>''', }} then<br>
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;='''
'''&nbsp;N<sub>&lambda;,&nbsp;m&minus;1</sub>&nbsp;&oplus;&nbsp;span{z<sub>1</sub>,&nbsp;z<sub>2</sub>,&nbsp;...,&nbsp;z<sub>(&delta;<sub>m</sub>&minus;&nbsp;&delta;<sub>m+1</sub>)</sub>'''
''',&nbsp;y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>m+1</sub></sub>}'''
<br>In either case apply the ''induction hypothesis'' to get that
there exists a ''disjoint''<br>collection of ''chains'' with each of the
'''y<sub>1</sub>,&nbsp;y<sub>2</sub>,&nbsp;...,&nbsp;y<sub>&delta;<sub>m+1</sub></sub>&nbsp;'''
being the first member or ''top''<br>of one of the ''chains''. Furthermore, this
collection of ''vectors'', when combined,<br>forms a ''basis'' for
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>'''. Now,&nbsp;
'''y<sub>i</sub>&nbsp;=&nbsp;A<sub>&lambda;,&nbsp;1</sub>&nbsp;x<sub>i</sub>''',
&nbsp;for &nbsp;'''i = {{nowrap|1, 2, ..., &delta;<sub>m+1</sub>'''}}, &nbsp;so each of
the<br>''chains'' beginning with {{nowrap| '''y<sub>i</sub>''' }} can be extended upwards into
'''&nbsp;N<sub>&lambda;,&nbsp;m+1</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;m</sub>'''
to a ''chain''<br>beginning with {{nowrap| '''x<sub>i</sub>'''. }} Since
'''&nbsp;N<sub>&lambda;,&nbsp;m+1</sub>&nbsp;='''
'''&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;&oplus;&nbsp;span{x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>&delta;<sub>m+1</sub></sub>}''',
<br>the ''combined vectors'' of the ''new chains'' form a ''basis'' for
'''&nbsp;N<sub>&lambda;,&nbsp;m+1</sub>'''.
</p>
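The dimensions of the nested nullspaces used in this induction can be checked numerically. The following is a minimal ''numpy'' sketch; the 4&times;4 matrix and its eigenvalue are a hypothetical illustration, not part of the proof:

```python
import numpy as np

# Hypothetical example: a 4x4 matrix with the single eigenvalue 2,
# whose Jordan structure is one chain of length 3 plus one of length 1.
A = np.array([[2., 1., 0., 0.],
              [0., 2., 1., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 2.]])
lam = 2.0
n = A.shape[0]

# dim N_{lam, m} = n - rank((A - lam I)^m); the dimensions grow until
# they stabilize at the algebraic multiplicity of lam.
dims = [int(n - np.linalg.matrix_rank(
            np.linalg.matrix_power(A - lam*np.eye(n), m)))
        for m in range(1, n + 1)]
print(dims)
```

The successive differences of these dimensions are the &delta;<sub>m</sub> of the argument above: here two chains start, and one of them extends through three levels.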
 
====Differential equations ''y&prime;= Ay''====
 
<p style="line-height:175%;">
Let {{nowrap| '''A''' }} be a '''n&times;n''' matrix of complex numbers and
&nbsp;'''&lambda;'''&nbsp; an ''eigenvalue'' of {{nowrap| '''A''' }},
with<br>''associated eigenvector'' {{nowrap| '''x''' }}.
Suppose {{nowrap| '''y(t)''' }} is a &nbsp;'''n''' ''dimensional vector valued<br>
function'', sufficiently smooth, so that {{nowrap| '''y&prime;(t)''' }} is continuous. The restriction that &nbsp;'''y(t)'''<br> be ''smooth'' can be relaxed somewhat, but is not the main focus of this discussion.
</p>
 
<p style="line-height:150%;">
The solutions to the equation &nbsp;'''''y&prime;(t) = Ay(t)'''''&nbsp; are sought.
The first observation
is that<br>&nbsp;'''y(t) = e<sup>&lambda;t</sup>x'''&nbsp; will be a solution.
When {{nowrap| '''A''' }} does not have {{nowrap| '''n''' }} ''linearly independent''
<br>''eigenvectors'', solutions of this kind will not provide the
total of {{nowrap| '''n''' }} needed for a<br>''fundamental basis set''.
</p>
 
<p style="line-height:175%;">
In view of the existence of ''chains'' of ''generalized eigenvectors'', seek a
solution of<br>the form
&nbsp;'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>'''&nbsp;,
then<br>
&nbsp;'''y&prime;(t) =
&lambda; e<sup>&lambda;t</sup>x<sub>1</sub> + e<sup>&lambda;t</sup>x<sub>2</sub>
+&nbsp;&lambda; t e<sup>&lambda;t</sup>x<sub>2</sub>&nbsp;=
e<sup>&lambda;t</sup>(&lambda;&nbsp;x<sub>1</sub>&nbsp;+&nbsp;x<sub>2</sub>)
+&nbsp;t&nbsp;e<sup>&lambda;t</sup>(&lambda;&nbsp;x<sub>2</sub>)'''&nbsp;
<br>and<br>
&nbsp;'''Ay(t) = e<sup>&lambda;t</sup>A x<sub>1</sub> + t e<sup>&lambda;t</sup>A x<sub>2</sub>'''&nbsp;.
</p>
 
<p style="line-height:175%;">
In view of this, {{nowrap| '''y(t)''' }} will be a solution to &nbsp;'''y&prime;(t) = Ay(t)'''&nbsp;, when
&nbsp;'''A x<sub>1</sub> = &lambda; x<sub>1</sub> + x<sub>2</sub>'''&nbsp; and<br>
&nbsp;'''A x<sub>2</sub> = &lambda; x<sub>2</sub>'''&nbsp;.
That is when
&nbsp;'''(A &minus; &lambda; I)x<sub>1</sub> = x<sub>2</sub>'''&nbsp; and
&nbsp;'''(A &minus; &lambda; I)x<sub>2</sub> = 0'''&nbsp;.
Equivalently,<br>when &nbsp;'''{x<sub>1</sub>, x<sub>2</sub>}'''&nbsp; is a ''chain'' of
''generalized eigenvectors''.
</p>
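This two-term solution can be sketched numerically. The following ''numpy'' check uses a hypothetical 2&times;2 Jordan block and compares a central-difference derivative of '''y(t)''' against '''Ay(t)''':

```python
import numpy as np

lam = 3.0
# Hypothetical 2x2 Jordan block for lam.
A = np.array([[lam, 1.0],
              [0.0, lam]])
# Chain {x1, x2}: (A - lam I) x1 = x2 and (A - lam I) x2 = 0.
x1 = np.array([0.0, 1.0])
x2 = (A - lam*np.eye(2)) @ x1
assert np.allclose((A - lam*np.eye(2)) @ x2, 0.0)

def y(t):
    return np.exp(lam*t)*x1 + t*np.exp(lam*t)*x2

# Compare a numerical derivative of y with A y at a sample point.
t, h = 0.5, 1e-6
dy = (y(t + h) - y(t - h)) / (2*h)
residual = np.linalg.norm(dy - A @ y(t))
print(residual)
```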
 
<p style="line-height:175%;">
Continuing with this reasoning seek a solution of
the form<br>
&nbsp;'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub>'''&nbsp;,
then<br>
&nbsp;'''y&prime;(t) =
&lambda; e<sup>&lambda;t</sup>x<sub>1</sub> + e<sup>&lambda;t</sup>x<sub>2</sub>
+ &lambda; t e<sup>&lambda;t</sup>x<sub>2</sub>
+ 2 t e<sup>&lambda;t</sup>x<sub>3</sub>
+ &lambda; t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub>
<br> =
e<sup>&lambda;t</sup>(&lambda; x<sub>1</sub> + x<sub>2</sub>)
+ t e<sup>&lambda;t</sup>(&lambda; x<sub>2</sub> + 2 x<sub>3</sub>)
+ t<sup>2</sup> e<sup>&lambda;t</sup>(&lambda; x<sub>3</sub>)'''&nbsp;
and<br>
&nbsp;'''Ay(t) = e<sup>&lambda;t</sup>A x<sub>1</sub> + t e<sup>&lambda;t</sup>A x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>A x<sub>3</sub>'''&nbsp;.
</p>
 
<p style="line-height:150%;">
As before, {{nowrap| '''y(t)''' }} will be a solution to
&nbsp;'''y&prime;(t) = Ay(t)'''&nbsp;, when
&nbsp;'''A x<sub>1</sub> = &lambda; x<sub>1</sub> + x<sub>2</sub>'''&nbsp;,<br>
&nbsp;'''A x<sub>2</sub> = &lambda; x<sub>2</sub> + 2 x<sub>3</sub>'''&nbsp;, and
&nbsp;'''A x<sub>3</sub> = &lambda; x<sub>3</sub>'''&nbsp;.
That is when
'''(A &minus; &lambda; I)x<sub>1</sub> = x<sub>2</sub>'''&nbsp;,<br>
&nbsp;'''(A &minus; &lambda; I)x<sub>2</sub> = 2 x<sub>3</sub>'''&nbsp;, and
&nbsp;'''(A &minus; &lambda; I)x<sub>3</sub> = 0'''&nbsp;.
Since &nbsp;'''(A &minus; &lambda; I)(2 x<sub>3</sub>) = 0'''&nbsp; also holds,<br>
this is equivalent to &nbsp;'''{x<sub>1</sub>, x<sub>2</sub>, 2 x<sub>3</sub>}'''&nbsp;
being a ''chain'' of ''generalized eigenvectors''.
</p>
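The length-three case admits the same numerical sketch. Here a hypothetical 3&times;3 Jordan block supplies the chain, with '''x<sub>3</sub>''' scaled by the factor of 2 just derived:

```python
import numpy as np

lam = -1.0
# Hypothetical 3x3 Jordan block for lam; N = A - lam I is nilpotent.
A = lam*np.eye(3) + np.diag([1.0, 1.0], k=1)
N = A - lam*np.eye(3)

x1 = np.array([0.0, 0.0, 1.0])
x2 = N @ x1            # (A - lam I) x1 = x2
x3 = (N @ x2) / 2.0    # (A - lam I) x2 = 2 x3, and (A - lam I) x3 = 0

def y(t):
    e = np.exp(lam*t)
    return e*x1 + t*e*x2 + t**2*e*x3

# Check y'(t) = A y(t) with a central difference at a sample point.
t, h = 0.7, 1e-6
dy = (y(t + h) - y(t - h)) / (2*h)
residual = np.linalg.norm(dy - A @ y(t))
print(residual)
```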
 
<p style="line-height:175%;">
More generally, to find the progression, seek a solution of the form<br>
&nbsp;'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub>
+ t<sup>3</sup> e<sup>&lambda;t</sup>x<sub>4</sub> + ...
+ t<sup>m&minus;2</sup> e<sup>&lambda;t</sup>x<sub>m&minus;1</sub>
+ t<sup>m&minus;1</sup> e<sup>&lambda;t</sup>x<sub>m</sub>'''&nbsp;,
<br>then<br>
&nbsp;'''y&prime;(t) =
&lambda; e<sup>&lambda;t</sup>x<sub>1</sub> + e<sup>&lambda;t</sup>x<sub>2</sub>
+ &lambda; t e<sup>&lambda;t</sup>x<sub>2</sub>
+ 2 t e<sup>&lambda;t</sup>x<sub>3</sub>
+ &lambda; t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub>
+ 3 t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>4</sub>
+ &lambda; t<sup>3</sup> e<sup>&lambda;t</sup>x<sub>4</sub><br>
+ ...
+ (m&minus;2)t<sup>m&minus;3</sup>e<sup>&lambda;t</sup>x<sub>m&minus;1</sub>
+ &lambda; t<sup>m&minus;2</sup> e<sup>&lambda;t</sup>x<sub>m&minus;1</sub>
+ (m&minus;1)t<sup>m&minus;2</sup> e<sup>&lambda;t</sup>x<sub>m</sub>
+ &lambda; t<sup>m&minus;1</sup> e<sup>&lambda;t</sup>x<sub>m</sub>
<br> =
e<sup>&lambda;t</sup>(&lambda; x<sub>1</sub> + x<sub>2</sub>)
+ t e<sup>&lambda;t</sup>(&lambda; x<sub>2</sub> + 2 x<sub>3</sub>)
+ t<sup>2</sup> e<sup>&lambda;t</sup>(&lambda; x<sub>3</sub> + 3 x<sub>4</sub>)
+ t<sup>3</sup> e<sup>&lambda;t</sup>(&lambda; x<sub>4</sub> + 4 x<sub>5</sub>)
<br>+ ...<br>
+ t<sup>m&minus;3</sup> e<sup>&lambda;t</sup>(&lambda; x<sub>m&minus;2</sub> + (m&minus;2) x<sub>m&minus;1</sub>)
+ t<sup>m&minus;2</sup> e<sup>&lambda;t</sup>(&lambda; x<sub>m&minus;1</sub> + (m&minus;1) x<sub>m</sub>)
+ t<sup>m&minus;1</sup> e<sup>&lambda;t</sup>(&lambda; x<sub>m</sub>)'''&nbsp;
<br>and<br>
'''Ay(t) =<br>&nbsp; e<sup>&lambda;t</sup>A x<sub>1</sub> + t e<sup>&lambda;t</sup>A x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>A x<sub>3</sub>
+ t<sup>3</sup> e<sup>&lambda;t</sup>A x<sub>4</sub> + ...
+ t<sup>m&minus;2</sup> e<sup>&lambda;t</sup>A x<sub>m&minus;1</sub>
+ t<sup>m&minus;1</sup> e<sup>&lambda;t</sup>A x<sub>m</sub>'''&nbsp;.
</p>
 
<p style="line-height:175%;">
Again, {{nowrap| '''y(t)''' }} will be a solution to &nbsp;'''y&prime;(t) = Ay(t)'''&nbsp;, when<br>
&nbsp;'''A x<sub>1</sub> = &lambda; x<sub>1</sub> + x<sub>2</sub>'''&nbsp;,
&nbsp;'''A x<sub>2</sub> = &lambda; x<sub>2</sub> + 2 x<sub>3</sub>'''&nbsp;,
&nbsp;'''A x<sub>3</sub> = &lambda; x<sub>3</sub> + 3 x<sub>4</sub>'''&nbsp;,
&nbsp;'''A x<sub>4</sub> = &lambda; x<sub>4</sub> + 4 x<sub>5</sub>'''&nbsp;,<br>
&nbsp;'''...,'''&nbsp;
&nbsp;'''A x<sub>m&minus;2</sub> = &lambda; x<sub>m&minus;2</sub> + (m&minus;2) x<sub>m&minus;1</sub>'''&nbsp;,
&nbsp;'''A x<sub>m&minus;1</sub> = &lambda; x<sub>m&minus;1</sub> + (m&minus;1) x<sub>m</sub>'''&nbsp;,
<br>and &nbsp;'''A x<sub>m</sub> = &lambda; x<sub>m</sub>'''&nbsp;.
<br>That is when<br>
&nbsp;'''(A &minus; &lambda; I)x<sub>1</sub> = x<sub>2</sub>'''&nbsp;,
&nbsp;'''(A &minus; &lambda; I)x<sub>2</sub> = 2 x<sub>3</sub>'''&nbsp;,
&nbsp;'''(A &minus; &lambda; I)x<sub>3</sub> = 3 x<sub>4</sub>'''&nbsp;,
&nbsp;'''(A &minus; &lambda; I)x<sub>4</sub> = 4 x<sub>5</sub>'''&nbsp;,<br>
&nbsp;'''...,'''&nbsp;<br>
&nbsp;'''(A &minus; &lambda; I)x<sub>m&minus;2</sub> = (m&minus;2) x<sub>m&minus;1</sub>'''&nbsp;,
&nbsp;'''(A &minus; &lambda; I)x<sub>m&minus;1</sub> = (m&minus;1) x<sub>m</sub>'''&nbsp;, and<br>
&nbsp;'''(A &minus; &lambda; I)x<sub>m</sub> = 0'''&nbsp;.
<br>Since &nbsp;'''(A &minus; &lambda; I)((m&minus;1)! x<sub>m</sub>) = 0'''&nbsp; also holds,
this is equivalent to saying that<br>
&nbsp;'''{x<sub>1</sub>, 1! x<sub>2</sub>, 2! x<sub>3</sub>, 3! x<sub>4</sub>, ...,
(m&minus;2)! x<sub>m&minus;1</sub>, (m&minus;1)! x<sub>m</sub>}'''&nbsp;
<br>is a ''chain'' of ''generalized eigenvectors''.
</p>
 
Now, the ''basis set'' for all solutions will be found through a
''disjoint collection<br>of chains of generalized eigenvectors'' of
the matrix '''A'''.
 
<p style="line-height:150%;">
Assume '''A''' has
''eigenvalues'' {{nowrap|'''&lambda;<sub>1</sub>, &lambda;<sub>2</sub>, ..., &lambda;<sub>r</sub>'''}}
<br>of ''algebraic multiplicities'' {{nowrap|'''k<sub>1</sub>, k<sub>2</sub>, ..., k<sub>r</sub>'''}}.
</p>
 
<p style="line-height:175%;">
For a given ''eigenvalue'' '''&lambda;<sub>i</sub>''' there is a ''collection'' of
'''s''', with '''s''' depending on '''i''',<br>''disjoint chains'' of
''generalized eigenvectors''<br>
'''
''C<sub>i,1</sub>'' = {<sup>1</sup>z<sub>1</sub>, <sup>1</sup>z<sub>2</sub>, ...,<sup>1</sup>z<sub>j1</sub>},
''C<sub>i,2</sub>'' = {<sup>2</sup>z<sub>1</sub>, <sup>2</sup>z<sub>2</sub>, ...,<sup>2</sup>z<sub>j2</sub>},
...,
''C<sub>i,s</sub>'' = {<sup>s</sup>z<sub>1</sub>, <sup>s</sup>z<sub>2</sub>, ...,<sup>s</sup>z<sub>js</sub>},
'''
<br>that when ''combined'' form a ''basis set'' for '''N<sub>&lambda;<sub>i</sub>, k<sub>i</sub></sub>'''.
The total number of ''vectors''<br>in this set will be
'''j1 + j2 + ... + js = k<sub>i</sub>'''.
Sets in this collection may have only one<br>or two members, so in this discussion understand that
the notation
'''{<sup>&beta;</sup>z<sub>1</sub>, <sup>&beta;</sup>z<sub>2</sub>, ...,<sup>&beta;</sup>z<sub>j&beta;</sub>}'''
<br>will mean '''{<sup>&beta;</sup>z<sub>1</sub>}''' when '''j&beta; = 1''', and
'''{<sup>&beta;</sup>z<sub>1</sub>, <sup>&beta;</sup>z<sub>2</sub>}''' when '''j&beta; = 2''',
and so forth.<br>
</p>
 
<p style="line-height:150%;">
Being that this ''notation'' is cumbersome with many ''indices'', in the next
paragraphs<br> any particular '''''C<sub>i,&beta;</sub>''''', when more explanation
is not needed, may just be notated as<br>
'''
''C'' = {z<sub>1</sub>, z<sub>2</sub>, ..., z<sub>j</sub>}.
'''
</p>
 
<p style="line-height:150%;">
For each of these ''chain'' sets,
'''''C'' = {z<sub>1</sub>, z<sub>2</sub>, ..., z<sub>j</sub>}'''<br>
the sets '''{z<sub>j</sub>}''', '''{z<sub>j&minus;1</sub>, z<sub>j</sub>},'''
'''{z<sub>j&minus;2</sub>, z<sub>j&minus;1</sub>, z<sub>j</sub>}, ...,'''
'''{z<sub>2</sub>, z<sub>3</sub>, ..., z<sub>j</sub>},'''
'''{z<sub>1</sub>, z<sub>2</sub>, ..., z<sub>j</sub>}'''<br>
are also ''chains''. This notation being understood to mean when<br>
'''''C'' = {z<sub>1</sub>}''' just '''{z<sub>1</sub>}''',
when '''''C'' = {z<sub>1</sub>, z<sub>2</sub>}''' just '''{z<sub>2</sub>},'''
'''{z<sub>1</sub>, z<sub>2</sub>}''' and when<br>
'''''C'' = {z<sub>1</sub>, z<sub>2</sub>, z<sub>3</sub>}''' just '''{z<sub>3</sub>},'''
'''{z<sub>2</sub>, z<sub>3</sub>},''' '''{z<sub>1</sub>, z<sub>2</sub>, z<sub>3</sub>}''',
and so on.
</p>
 
<p style="line-height:175%;">
The conclusion of the top of the discussion was that<br>
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub>''', is a solution when '''{x<sub>1</sub>}'''
is a ''chain''.<br>
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>''',
is a solution when '''{x<sub>1</sub>, 1! x<sub>2</sub>}'''
is a ''chain''.<br>
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub>''',
is a solution when '''{x<sub>1</sub>, 1! x<sub>2</sub>, 2! x<sub>3</sub>}'''
is a ''chain''.<br>
The progression continues to<br>
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub>
+ t<sup>3</sup> e<sup>&lambda;t</sup>x<sub>4</sub> + ...
+ t<sup>m&minus;2</sup> e<sup>&lambda;t</sup>x<sub>m&minus;1</sub>
+ t<sup>m&minus;1</sup> e<sup>&lambda;t</sup>x<sub>m</sub>''',<br>
is a solution when
'''{x<sub>1</sub>, 1! x<sub>2</sub>, 2! x<sub>3</sub>, 3! x<sub>4</sub>, ...,
(m&minus;2)! x<sub>m&minus;1</sub>, (m&minus;1)! x<sub>m</sub>}''',
<br>is a ''chain'' of ''generalized eigenvectors''.
</p>
 
<p style="line-height:175%;">
In light of the preceding calculations, all that must be done is to provide
the proper<br> ''scaling'' for each of the ''chains'' arising from the
set '''''C'' = {z<sub>1</sub>, z<sub>2</sub>, ..., z<sub>j</sub>}'''.
<br>The progression for the ''solutions'' is given by<br>
'''y(t) = e<sup>&lambda;t</sup>z<sub>j</sub>''', for ''chain'' '''{z<sub>j</sub>}'''<br>
'''y(t) = e<sup>&lambda;t</sup>z<sub>j&minus;1</sub> + (1 &frasl; 1!) t e<sup>&lambda;t</sup>z<sub>j</sub>''',
for ''chain'' '''{z<sub>j&minus;1</sub>, 1!(1 &frasl; 1!) z<sub>j</sub>}'''<br>
'''y(t) = e<sup>&lambda;t</sup>z<sub>j&minus;2</sub> + (1 &frasl; 1!) t e<sup>&lambda;t</sup>z<sub>j&minus;1</sub>
+ (1 &frasl; 2!) t<sup>2</sup> e<sup>&lambda;t</sup>z<sub>j</sub>''',
<br>for ''chain'' '''{z<sub>j&minus;2</sub>, 1!(1 &frasl; 1!) z<sub>j&minus;1</sub>, 2!(1 &frasl; 2!) z<sub>j</sub>}'''<br>
'''y(t) = e<sup>&lambda;t</sup>z<sub>j&minus;3</sub> + (1 &frasl; 1!) t e<sup>&lambda;t</sup>z<sub>j&minus;2</sub>
+ (1 &frasl; 2!) t<sup>2</sup> e<sup>&lambda;t</sup>z<sub>j&minus;1</sub> + (1 &frasl; 3!) t<sup>3</sup> e<sup>&lambda;t</sup>z<sub>j</sub>''',
<br>for ''chain'' '''{z<sub>j&minus;3</sub>, 1!(1 &frasl; 1!) z<sub>j&minus;2</sub>, 2!(1 &frasl; 2!) z<sub>j&minus;1</sub>, 3!(1 &frasl; 3!) z<sub>j</sub>}''',<br>
and so on until,<br>
'''y(t) = e<sup>&lambda;t</sup>z<sub>1</sub> + (1 &frasl; 1!) t e<sup>&lambda;t</sup>z<sub>2</sub>
+ (1 &frasl; 2!) t<sup>2</sup> e<sup>&lambda;t</sup>z<sub>3</sub> + ... + (1 &frasl; (j&minus;1)!) t<sup> j&minus;1</sup> e<sup>&lambda;t</sup>z<sub>j</sub>''',
<br>for the ''chain'' of ''generalized eigenvectors'',
<br>
'''{z<sub>1</sub>, 1!(1 &frasl; 1!) z<sub>2</sub>, 2!(1 &frasl; 2!) z<sub>3</sub>, ...,
(j&minus;2)!(1 &frasl; (j&minus;2)!) z<sub>j&minus;1</sub>, (j&minus;1)!(1 &frasl; (j&minus;1)!) z<sub>j</sub>}'''.
</p>
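The factorial scaling in this progression can be checked numerically. A minimal ''numpy'' sketch, assuming a hypothetical 4&times;4 Jordan block whose chain is generated by repeated application of '''A &minus; &lambda;I''':

```python
import numpy as np
from math import factorial

lam, j = 0.5, 4
# Hypothetical j x j Jordan block; N = A - lam I is nilpotent of index j.
A = lam*np.eye(j) + np.diag(np.ones(j - 1), k=1)
N = A - lam*np.eye(j)

# Chain {z1, ..., zj} with z_{i+1} = (A - lam I) z_i and (A - lam I) z_j = 0.
z = [np.zeros(j)]
z[0][-1] = 1.0
for _ in range(j - 1):
    z.append(N @ z[-1])

def y(t):
    # y(t) = sum_k (t^k / k!) e^{lam t} z_{k+1}, the scaled progression.
    e = np.exp(lam*t)
    return sum(t**k / factorial(k) * e * z[k] for k in range(j))

# Check y'(t) = A y(t) with a central difference at a sample point.
t, h = 1.0, 1e-6
dy = (y(t + h) - y(t - h)) / (2*h)
residual = np.linalg.norm(dy - A @ y(t))
print(residual)
```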
 
What is left to show is that when all the ''solutions'' constructed from the
''chain sets'',<br>as described, are considered, they form a ''fundamental set''
of ''solutions''.<br> To do this it has to be shown that there are '''n''' of them
and that they are<br>''linearly independent''.
 
<p style="line-height:175%;">
Reiterating, for a given ''eigenvalue'' '''&lambda;<sub>i</sub>''' there is a ''collection'' of
'''s''', with '''s''' depending on '''i''',<br>''disjoint chains'' of
''generalized eigenvectors''<br>
'''
''C<sub>i,1</sub>'' = {<sup>1</sup>z<sub>1</sub>, <sup>1</sup>z<sub>2</sub>, ...,<sup>1</sup>z<sub>j1(i)</sub>},
''C<sub>i,2</sub>'' = {<sup>2</sup>z<sub>1</sub>, <sup>2</sup>z<sub>2</sub>, ...,<sup>2</sup>z<sub>j2(i)</sub>},
<br>...,
''C<sub>i,s(i)</sub>'' = {<sup>s(i)</sup>z<sub>1</sub>, <sup>s(i)</sup>z<sub>2</sub>, ...,<sup>s(i)</sup>z<sub>js(i)</sub>},
'''
<br>that when ''combined'' form a ''basis set'' for '''N<sub>&lambda;<sub>i</sub>, k<sub>i</sub></sub>'''.
The total number of ''vectors''<br>in this set will be
'''j1(i) + j2(i) + ... + js(i) = k<sub>i</sub>'''.
</p>
 
<p style="line-height:150%;">
Thus the total number of all such ''basis vectors'' and so ''solutions'' is
<br>'''k<sub>1</sub> + k<sub>2</sub> + ... + k<sub>r</sub> = n'''.
</p>
 
<p style="line-height:175%;">
Each solution is one of the forms
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub>''',
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>''',<br>
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub>''',
'''y(t) = e<sup>&lambda;t</sup>x<sub>1</sub> + t e<sup>&lambda;t</sup>x<sub>2</sub>
+ t<sup>2</sup> e<sup>&lambda;t</sup>x<sub>3</sub> + ...'''.<br>
Now each ''basis vector''  '''v<sub>j</sub>''', for '''j = 1, 2, ..., n''',
of the ''combined'' set of<br>''generalized eigenvectors'', occurs as
'''x<sub>1</sub>''' in one of the expressions immediately<br>
above '''''precisely once'''''. That is, for each '''j''', there is one
'''y<sub>j</sub>(t) = e<sup>&lambda;t</sup>v<sub>j</sub> + ...'''<br>
Since '''y<sub>j</sub>(0) = e<sup>&lambda;0</sup>v<sub>j</sub> = v<sub>j</sub>''',
the set of ''solutions'' is ''linearly independent'' at '''t = 0'''.<br>
</p>
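The independence argument at '''t = 0''' can be illustrated with a small ''numpy'' example (a hypothetical 2&times;2 Jordan block; the two solutions come from the chains '''{x<sub>2</sub>}''' and '''{x<sub>1</sub>, x<sub>2</sub>}'''):

```python
import numpy as np

lam = 2.0
A = np.array([[lam, 1.0], [0.0, lam]])
x1 = np.array([0.0, 1.0])           # top of the chain {x1, x2}
x2 = (A - lam*np.eye(2)) @ x1       # ordinary eigenvector

sol1 = lambda t: np.exp(lam*t)*x2                       # from chain {x2}
sol2 = lambda t: np.exp(lam*t)*x1 + t*np.exp(lam*t)*x2  # from chain {x1, x2}

# At t = 0 the solutions reduce to the basis vectors x2 and x1, so the
# matrix of solution values at 0 is invertible.
W0 = np.column_stack([sol1(0.0), sol2(0.0)])
det = np.linalg.det(W0)
print(det)
```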
 
====Revisiting the powers of a matrix====
 
As a notational convenience {{nowrap| '''A<sub>λ, 0</sub> {{=}} I'''}}.
 
Note that&nbsp;
'''A&nbsp;=&nbsp;&lambda; I&nbsp;+&nbsp;A<sub>&lambda;,&nbsp;1</sub>'''
&nbsp;and apply the ''binomial theorem''.
 
:<math>A^s=(\lambda I+A_{\lambda,1})^s=\sum_{r=0}^s\binom{s}{r}\lambda^{s-r}A_{\lambda,r}</math>
 
Assume '''&lambda;''' is an ''eigenvalue'' of {{nowrap| '''A''', }} and let&nbsp;
'''{&nbsp;x<sub>1</sub>,&nbsp;x<sub>2</sub>,&nbsp;...,&nbsp;x<sub>m</sub>&nbsp;}'''
<br>be a &nbsp;'''''chain''''' of ''generalized eigenvectors'' such that&nbsp;
'''x<sub>1</sub>&nbsp;&isin;&nbsp;N<sub>&lambda;,&nbsp;m</sub>&nbsp;&#92;&nbsp;N<sub>&lambda;,&nbsp;m-1</sub>''',
<br>&nbsp;'''x<sub>i+1</sub>&nbsp;=&nbsp;A<sub>&lambda;,&nbsp;1</sub>&nbsp;x<sub>i</sub>''',
&nbsp;'''x<sub>m</sub>&nbsp;&ne;&nbsp;0''',&nbsp; and
&nbsp;'''A<sub>&lambda;,&nbsp;1</sub>&nbsp;x<sub>m</sub>&nbsp;=&nbsp;0'''.
 
Then&nbsp; {{nowrap|'''x<sub>r+1</sub> {{=}} A<sub>λ, r</sub> x<sub>1</sub>'''}},
&nbsp; for&nbsp; '''r = 0, 1, ..., m-1'''.
 
:<math>A^s x_1=\sum_{r=0}^s\binom{s}{r}\lambda^{s-r}A_{\lambda,r}x_1=\sum_{r=0}^s\binom{s}{r}\lambda^{s-r}x_{r+1}</math>
 
So for '''s ≤ m &minus; 1'''
 
:<math>A^s x_1=\sum_{r=0}^s\binom{s}{r}\lambda^{s-r}x_{r+1}</math>
 
and for '''s ≥ m &minus; 1''',&nbsp; since&nbsp;
'''A<sub>λ,&nbsp;m</sub>&nbsp;x<sub>1</sub> = 0''',
 
:<math>A^s x_1=\sum_{r=0}^{m-1}\binom{s}{r}\lambda^{s-r}x_{r+1}</math>
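This truncated binomial expansion is easy to verify numerically. A ''numpy'' sketch, assuming a hypothetical 3&times;3 Jordan block so that the chain is '''x<sub>r+1</sub> = (A &minus; &lambda;I)<sup>r</sup> x<sub>1</sub>''':

```python
import numpy as np
from math import comb

lam, m = 2.0, 3
# Hypothetical m x m Jordan block; N = A - lam I.
A = lam*np.eye(m) + np.diag(np.ones(m - 1), k=1)
N = A - lam*np.eye(m)

# Chain with x_{r+1} = N^r x_1 and N^m x_1 = 0.
x = [np.zeros(m)]
x[0][-1] = 1.0
for _ in range(m - 1):
    x.append(N @ x[-1])

# Compare A^s x_1 with the binomial sum, truncated at r = m - 1.
s = 5
lhs = np.linalg.matrix_power(A, s) @ x[0]
rhs = sum(comb(s, r) * lam**(s - r) * x[r] for r in range(min(s + 1, m)))
err = np.linalg.norm(lhs - rhs)
print(err)
```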
 
===Ordinary linear difference equations===
 
<p style="line-height:150%;">
Ordinary ''linear difference equations'' are equations of the sort:<br>
'''
y<sub>n</sub>&nbsp;=&nbsp;a&nbsp;y<sub>n&minus;1</sub>&nbsp;+&nbsp;&nbsp;b<br>
'''
'''
y<sub>n</sub>&nbsp;=&nbsp;a&nbsp;y<sub>n&minus;1</sub>&nbsp;+&nbsp;b  y<sub>n&minus;2</sub> {{nowrap| +  c}}<br>
'''
or more generally,<br>
'''
y<sub>n</sub>&nbsp;=&nbsp;a<sub>m</sub>y<sub>n&minus;1</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>y<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;
a<sub>2</sub>y<sub>n&minus;m&nbsp;+&nbsp;1</sub>&nbsp;+&nbsp;a<sub>1</sub>y<sub>n&minus;m</sub>&nbsp;+&nbsp;a<sub>0</sub><br>
'''
with initial conditions<br>
'''
y<sub>0</sub>,&nbsp;&nbsp;y<sub>1</sub>,&nbsp;&nbsp;y<sub>2</sub>,&nbsp;&nbsp;...,&nbsp;&nbsp;y<sub>m&minus;2</sub>,&nbsp;&nbsp;y<sub>m&minus;1</sub>.
'''
</p>
 
A case with {{nowrap| '''a<sub>1</sub> {{=}} 0''' }} can be excluded, since
it represents an equation of lower degree.
 
<p style="line-height:175%;">
They have a characteristic polynomial<br>
'''
p(x)&nbsp;=&nbsp;
x<sup>m</sup>&nbsp;&minus;&nbsp;a<sub>m</sub>x<sup>m&minus;1</sup>&nbsp;&minus;&nbsp;a<sub>m&minus;1</sub>x<sup>m&minus;2</sup>&nbsp;&minus;&nbsp;... &minus;
a<sub>2</sub>x&nbsp;&minus;&nbsp;a<sub>1</sub>.<br>
'''
To solve a ''difference equation'' it is first observed,&nbsp;if
&nbsp;'''y<sub>n</sub>'''&nbsp; and {{nowrap| '''z<sub>n</sub>''' }} are both solutions,<br>then
&nbsp;'''(y<sub>n</sub>&nbsp;&minus;&nbsp;z<sub>n</sub>)'''&nbsp; is a solution of the ''homogeneous'' equation:<br>
'''
y<sub>n</sub>&nbsp;=&nbsp;a<sub>m</sub>y<sub>n&minus;1</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>y<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;
a<sub>2</sub>y<sub>n&minus;m&nbsp;+&nbsp;1</sub>&nbsp;+&nbsp;a<sub>1</sub>y<sub>n&minus;m</sub>.
'''
</p>
<p style="line-height:150%;">
So a ''particular'' solution to the ''difference equation'' must be
found together with<br>all solutions of the ''homogeneous'' equation to get the
''general solution'' for the<br>''difference equation''.
Another observation to make is that,&nbsp;if {{nowrap|''' y<sub>n</sub>''' }} is a solution to<br>
the ''inhomogeneous'' equation,&nbsp;then<br>
'''z<sub>n</sub>&nbsp;=&nbsp;y<sub>n+1</sub>&nbsp;&minus;&nbsp;y<sub>n</sub>'''<br>
is also a solution to the ''homogeneous'' equation.<br>
So all solutions of the ''homogeneous'' equation will be found first.
</p>
<p style="line-height:175%;">
When {{nowrap| '''&beta;''' }} is a root of {{nowrap| '''p(x) {{=}} 0''', }} it is easily seen that<br>
'''y<sub>n</sub>&nbsp;=&nbsp;&beta;<sup>n</sup>'''
&nbsp;is a solution to the ''homogeneous'' equation since<br>
'''
y<sub>n</sub>&nbsp;&minus;&nbsp;a<sub>m</sub>y<sub>n&minus;1</sub>&nbsp;&minus;&nbsp;a<sub>m&minus;1</sub>y<sub>n&minus;2</sub>&nbsp;&minus;&nbsp;...&nbsp;&minus;&nbsp;
a<sub>2</sub>y<sub>n&minus;m&nbsp;+&nbsp;1</sub>&nbsp;&minus;&nbsp;a<sub>1</sub>y<sub>n&minus;m</sub>,<br>
'''
becomes upon the substitution {{nowrap| '''y<sub>n</sub> {{=}} &beta;<sup>n</sup>''',}}<br>
'''
&beta;<sup>n</sup>&nbsp;&minus;&nbsp;a<sub>m</sub>&beta;<sup>n&minus;1</sup>&nbsp;&minus;&nbsp;a<sub>m&minus;1</sub>&beta;<sup>n&minus;2</sup>&nbsp;&minus;&nbsp;... &minus;
a<sub>2</sub>&beta;<sup>n&minus;m&nbsp;+&nbsp;1</sup>&nbsp;&minus;&nbsp;a<sub>1</sub>&beta;<sup>n&minus;m</sup><br>
'''
'''
=&nbsp;&beta;<sup>n&minus;m</sup>(&beta;<sup>m</sup>&nbsp;&minus;&nbsp;a<sub>m</sub>&beta;<sup>m&minus;1</sup>&nbsp;&minus;&nbsp;a<sub>m&minus;1</sub>&beta;<sup>m&minus;2</sup>&nbsp;&minus;&nbsp;... &minus;
a<sub>2</sub>&beta;&nbsp;&minus;&nbsp;a<sub>1</sub>)<br>
'''
'''
&nbsp;=&nbsp;&beta;<sup>n&minus;m</sup>p(&beta;)&nbsp;=&nbsp;0.
</p>
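This substitution can be sketched concretely. A hypothetical second-order example, with '''p(x) = x<sup>2</sup> &minus; 5x + 6 = (x &minus; 2)(x &minus; 3)''' and the corresponding recurrence:

```python
# Hypothetical example: p(x) = x^2 - 5x + 6, i.e. y_n = 5 y_{n-1} - 6 y_{n-2},
# in the article's notation a2 = 5, a1 = -6.
a2, a1 = 5, -6
beta = 2  # a root of p

# Generate y_n = beta^n and check it satisfies the homogeneous recurrence.
y = [beta**n for n in range(10)]
ok = all(y[n] == a2*y[n-1] + a1*y[n-2] for n in range(2, 10))
print(ok)  # True
```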
<p style="line-height:175%;">
When {{nowrap| '''&beta;''' }} is a repeated root of {{nowrap| '''p(x) {{=}} 0''', }} then<br>
'''y<sub>n</sub>&nbsp;=&nbsp;n&beta;<sup>n&minus;1</sup>'''
&nbsp;is a solution to the ''homogeneous'' equation since<br>
'''
n&beta;<sup>n&minus;1</sup>&nbsp;&minus;&nbsp;a<sub>m</sub>(n&minus;1)&beta;<sup>n&minus;2</sup>&nbsp;&minus;&nbsp;a<sub>m&minus;1</sub>(n&minus;2)&beta;<sup>n&minus;3</sup>&nbsp;&minus;&nbsp;... &minus;
a<sub>2</sub>(n&minus;m&nbsp;+&nbsp;1)&beta;<sup>n&minus;m</sup>&nbsp;&minus;&nbsp;a<sub>1</sub>(n&minus;m)&beta;<sup>n&minus;m&nbsp;&minus;&nbsp;1</sup><br>
'''
'''
=&nbsp;(n&minus;m)&beta;<sup>n&minus;m&nbsp;&minus;&nbsp;1</sup>(&beta;<sup>m</sup>&nbsp;&minus;&nbsp;a<sub>m</sub>&beta;<sup>m&minus;1</sup>&nbsp;&minus;&nbsp;a<sub>m&minus;1</sub>&beta;<sup>m&minus;2</sup>&nbsp;&minus;&nbsp;... &minus;
a<sub>2</sub>&beta;&nbsp;&minus;&nbsp;a<sub>1</sub>)<br>
'''
'''
+&nbsp;&beta;<sup>n&minus;m</sup>(m&beta;<sup>m&minus;1</sup>&nbsp;&minus;&nbsp;(m&minus;1)a<sub>m</sub>&beta;<sup>m&minus;2</sup>&nbsp;&minus;&nbsp;(m&minus;2)a<sub>m&minus;1</sub>&beta;<sup>m&minus;3</sup>&nbsp;&minus;&nbsp;... &minus;
2a<sub>3</sub>&beta;&nbsp;&minus;&nbsp;a<sub>2</sub>)<br>
'''
'''
=&nbsp;(n&minus;m)&beta;<sup>n&minus;m&nbsp;&minus;&nbsp;1</sup>p(&beta;)&nbsp;+&nbsp;&beta;<sup>n&minus;m</sup>p&prime;(&beta;)&nbsp;=&nbsp;0.
'''
</p>
<p style="line-height:175%;">
After reaching this point in the calculation the ''mystery'' is solved. Just notice when<br>
'''&beta;'''&nbsp; is a root of {{nowrap| '''p(x) {{=}} 0'''}} with ''multiplicity'' &nbsp;'''k''',
&nbsp;then for {{nowrap| '''s {{=}} 1, 2, ..., k&minus;1'''}}<br>
'''
d<sup>s</sup>(&beta;<sup>n&minus;m</sup>p(&beta;))/d&beta;<sup>s</sup>&nbsp;=&nbsp;0.<br>
'''
Referring this back to the original equation<br>
'''
&beta;<sup>n</sup>&nbsp;&minus;&nbsp;a<sub>m</sub>&beta;<sup>n&minus;1</sup>&nbsp;&minus;&nbsp;a<sub>m&minus;1</sub>&beta;<sup>n&minus;2</sup>&nbsp;&minus;&nbsp;... &minus;
a<sub>2</sub>&beta;<sup>n&minus;m&nbsp;+&nbsp;1</sup>&nbsp;&minus;&nbsp;a<sub>1</sub>&beta;<sup>n&minus;m</sup><br>
'''
it is seen that<br>
'''
y<sub>n</sub>&nbsp;=&nbsp;d<sup>s</sup>(&beta;<sup>n</sup>)/d&beta;<sup>s</sup><br>
'''
are solutions to the ''homogeneous'' equation. For example,&nbsp;if '''&beta;''' is a root
of<br>''multiplicity'' '''3''',  then '''y<sub>n</sub>&nbsp;=&nbsp;n(n&minus;1)&beta;<sup>n&minus;2</sup>'''
&nbsp;is a solution. In any case this gives&nbsp;
'''m'''<br>''linearly independent'' solutions to the ''homogeneous'' equation.
</p>
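The derivative solutions for a repeated root can be checked directly. A hypothetical triple-root example with '''p(x) = (x &minus; 2)<sup>3</sup>''', giving the recurrence '''y<sub>n</sub> = 6y<sub>n&minus;1</sub> &minus; 12y<sub>n&minus;2</sub> + 8y<sub>n&minus;3</sub>''':

```python
# Hypothetical example: p(x) = (x - 2)^3 = x^3 - 6x^2 + 12x - 8,
# so a3 = 6, a2 = -12, a1 = 8 and beta = 2 has multiplicity 3.
a3, a2, a1 = 6, -12, 8
beta = 2

def solves(seq):
    # True when seq satisfies y_n = a3 y_{n-1} + a2 y_{n-2} + a1 y_{n-3}.
    return all(seq[n] == a3*seq[n-1] + a2*seq[n-2] + a1*seq[n-3]
               for n in range(3, len(seq)))

M = 12
s0 = [beta**n for n in range(M)]                 # beta^n
s1 = [n*beta**(n-1) for n in range(M)]           # d(beta^n)/d(beta)
s2 = [n*(n-1)*beta**(n-2) for n in range(M)]     # second derivative
results = [solves(s0), solves(s1), solves(s2)]
print(results)  # [True, True, True]
```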
<p style="line-height:175%;">
To look for a ''particular solution'' first consider the simplest equation:<br>
'''
y<sub>n</sub>&nbsp;=&nbsp;a&nbsp;y<sub>n&minus;1</sub>&nbsp;+&nbsp;&nbsp;b.<br>
'''
It has a ''particular'' solution {{nowrap| '''y<sub>p,n</sub>''' }} given by<br>
'''
y<sub>p,0</sub>&nbsp;=&nbsp;0,&nbsp;y<sub>p,1</sub>&nbsp;=&nbsp;b,&nbsp;y<sub>p,2</sub>&nbsp;=&nbsp;(1&nbsp;+&nbsp;a)b,&nbsp;...,
y<sub>p,n</sub>&nbsp;=&nbsp;(1&nbsp;+&nbsp;a&nbsp;+&nbsp;a<sup>2</sup>&nbsp;+&nbsp;...&nbsp;+&nbsp;a<sup>n&minus;1</sup>)b,&nbsp;... .<br>
'''
Its ''homogeneous'' equation {{nowrap| '''y<sub>n</sub> {{=}} a y<sub>n&minus;1</sub>''' }} has
solutions {{nowrap| '''y<sub>n</sub> {{=}} a<sup>n</sup>y<sub>0</sub>'''.}}<br>
So {{nowrap| '''z<sub>n</sub> {{=}} y<sub>n+1</sub> &minus; y<sub>n</sub> {{=}} a<sup>n</sup>b'''}}<br>
can be ''telescoped'' to get<br>
'''
y<sub>n</sub>&nbsp;=&nbsp;(y<sub>n</sub>&nbsp;&minus;&nbsp;y<sub>n&minus;1</sub>)&nbsp;+&nbsp;(y<sub>n&minus;1</sub>&nbsp;&minus;&nbsp;y<sub>n&minus;2</sub>)
+&nbsp;...&nbsp;+&nbsp;(y<sub>2</sub>&nbsp;&minus;&nbsp;y<sub>1</sub>)&nbsp;+&nbsp;(y<sub>1</sub>&nbsp;&minus;&nbsp;y<sub>0</sub>)&nbsp;+&nbsp;y<sub>0</sub>
'''
<br>
'''
=&nbsp;z<sub>n&minus;1</sub>&nbsp;+&nbsp;z<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>&nbsp;+&nbsp;y<sub>0</sub><br>
=&nbsp;(1&nbsp;+&nbsp;a&nbsp;+&nbsp;a<sup>2</sup>&nbsp;+&nbsp;...&nbsp;+&nbsp;a<sup>n&minus;1</sup>)b
''',
<br>
the ''particular'' solution with {{nowrap|'''y<sub>0</sub> {{=}} 0'''}}.
</p>
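The telescoped particular solution can be checked against direct recursion. A hypothetical first-order example:

```python
# Hypothetical example: y_n = a*y_{n-1} + b with y_0 = 0.
a, b = 3, 5

# Direct recursion.
M = 10
y = [0]
for n in range(1, M):
    y.append(a*y[-1] + b)

# Closed form from telescoping: y_n = (1 + a + ... + a^{n-1}) b.
closed = [sum(a**k for k in range(n))*b for n in range(M)]
print(y == closed)  # True
```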
<p style="line-height:150%;">
Now,&nbsp;returning to the general problem,&nbsp;consider the equation<br>
'''
y<sub>n</sub>&nbsp;=&nbsp;a<sub>m</sub>y<sub>n&minus;1</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>y<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;
a<sub>2</sub>y<sub>n&minus;m&nbsp;+&nbsp;1</sub>&nbsp;+&nbsp;a<sub>1</sub>y<sub>n&minus;m</sub>&nbsp;+&nbsp;a<sub>0</sub>.<br>
'''
When '''y<sub>p,n</sub>''' is a ''particular'' solution  with
'''y<sub>p,0</sub>&nbsp;=&nbsp;0''',&nbsp; then<br>
&nbsp;'''z<sub>n</sub>&nbsp;=&nbsp;y<sub>p,n+1</sub>&nbsp;&minus;&nbsp;y<sub>p,n</sub>&nbsp;'''<br>
is a solution to the ''homogeneous'' equation with
&nbsp;'''z<sub>0</sub>&nbsp;=&nbsp;y<sub>p,1</sub>&nbsp;'''.<br>
So {{nowrap| '''z<sub>n</sub> {{=}} y<sub>p,n+1</sub> &minus; y<sub>p,n</sub> '''}}<br>
can be ''telescoped'' to get<br>
'''
y<sub>p,n</sub>&nbsp;=&nbsp;(y<sub>p,n</sub>&nbsp;&minus;&nbsp;y<sub>p,n&minus;1</sub>)&nbsp;+&nbsp;(y<sub>p,n&minus;1</sub>&nbsp;&minus;&nbsp;y<sub>p,n&minus;2</sub>)
+&nbsp;...&nbsp;+&nbsp;(y<sub>p,2</sub>&nbsp;&minus;&nbsp;y<sub>p,1</sub>)&nbsp;+&nbsp;(y<sub>p,1</sub>&nbsp;&minus;&nbsp;y<sub>p,0</sub>)&nbsp;+&nbsp;y<sub>p,0</sub>
'''
<br>
'''
=&nbsp;z<sub>n&minus;1</sub>&nbsp;+&nbsp;z<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>&nbsp;<br>
'''
Considering<br>
'''
y<sub>p,m</sub>&nbsp;=&nbsp;a<sub>m</sub>y<sub>p,m&minus;1</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>y<sub>p,m&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;
a<sub>2</sub>y<sub>p,1</sub>&nbsp;+&nbsp;a<sub>1</sub>y<sub>p,0</sub>&nbsp;+&nbsp;a<sub>0</sub>.<br>
'''
and rewriting the equation in terms of the&nbsp;'''z<sub>i</sub>''' gives<br>
'''
z<sub>m&minus;1</sub>&nbsp;+&nbsp;z<sub>m&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>&nbsp;<br>
'''
'''
&nbsp;=&nbsp;&nbsp;(a<sub>m</sub>)&nbsp;(
z<sub>m&minus;2</sub>&nbsp;+&nbsp;z<sub>m&minus;3</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>)&nbsp;
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>m&minus;1</sub>)&nbsp;(
z<sub>m&minus;3</sub>&nbsp;+&nbsp;z<sub>m&minus;4</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>)&nbsp;<br>
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>m&minus;2</sub>)&nbsp;(
z<sub>m&minus;4</sub>&nbsp;+&nbsp;z<sub>m&minus;5</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>)&nbsp;<br>
'''
'''
&nbsp;+&nbsp;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;<br>
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>3</sub>)&nbsp;(&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>)
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>2</sub>)&nbsp;(&nbsp;z<sub>0</sub>)
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>0</sub>)<br>
'''
and<br>
'''
z<sub>m&minus;1</sub><br>
'''
'''
&nbsp;=&nbsp;&nbsp;(a<sub>m</sub>&nbsp;&minus;&nbsp;1)&nbsp;z<sub>m&minus;2</sub>
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>m</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>&nbsp;&minus;&nbsp;1)&nbsp;z<sub>m&minus;3</sub>
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>m</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>&nbsp;+
a<sub>m&minus;2</sub>&nbsp;&minus;&nbsp;1)&nbsp;z<sub>m&minus;4</sub><br>
'''
'''
&nbsp;+&nbsp;&nbsp;&middot;&nbsp;&middot;&nbsp;&middot;<br>
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>m</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>&nbsp;+&nbsp;...&nbsp;+
a<sub>4</sub>&nbsp;+&nbsp;a<sub>3</sub>&nbsp;&minus;&nbsp;1)&nbsp;z<sub>1</sub>
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>m</sub>&nbsp;+&nbsp;a<sub>m&minus;1</sub>&nbsp;+&nbsp;...&nbsp;+
a<sub>3</sub>&nbsp;+&nbsp;a<sub>2</sub>&nbsp;&minus;&nbsp;1)&nbsp;z<sub>0</sub><br>
'''
'''
&nbsp;+&nbsp;&nbsp;(a<sub>0</sub>).<br>
'''
</p>
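The rearrangement above can be verified numerically. The following sketch (with arbitrary random test coefficients) picks '''a<sub>0</sub>,&nbsp;...,&nbsp;a<sub>m</sub>''' and '''z<sub>0</sub>,&nbsp;...,&nbsp;z<sub>m&minus;2</sub>''', computes '''z<sub>m&minus;1</sub>''' from the solved-for formula, and confirms that the resulting partial sums satisfy the recurrence at '''n&nbsp;=&nbsp;m'''.

```python
import random

# z_j carries the coefficient (a_m + a_{m-1} + ... + a_{j+2} - 1), and
# the constant a_0 is added at the end, as in the displayed formula.
random.seed(1)
m = 5
a = [random.randint(-4, 4) for _ in range(m + 1)]   # a_0, a_1, ..., a_m
z = [random.randint(-4, 4) for _ in range(m - 1)]   # z_0, ..., z_{m-2}

z_last = a[0] + sum((sum(a[j + 2:]) - 1) * z[j] for j in range(m - 1))
z.append(z_last)

# y_{p,0} = 0 and y_{p,n} = z_{n-1} + ... + z_1 + z_0
y = [sum(z[:n]) for n in range(m + 1)]

# the recurrence y_{p,m} = a_m y_{p,m-1} + ... + a_1 y_{p,0} + a_0 holds
rhs = sum(a[m - i] * y[m - 1 - i] for i in range(m)) + a[0]
assert y[m] == rhs
```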
<p style="line-height:150%;">
Since a solution of the ''homogeneous'' equation can be found for any ''initial conditions''<br>
'''
z<sub>0</sub>,&nbsp;&nbsp;z<sub>1</sub>,&nbsp;&nbsp;z<sub>2</sub>,&nbsp;&nbsp;...,&nbsp;&nbsp;z<sub>m&minus;2</sub>,&nbsp;&nbsp;z<sub>m&minus;1</sub>.
'''
<br>one can reason ''conversely'':&nbsp;find '''z<sub>i</sub>''' satisfying the
equation just above, and define '''y<sub>p,n</sub>''' by the relation<br>
'''
&nbsp;y<sub>p,0</sub>&nbsp;=&nbsp;0,&nbsp;&nbsp;y<sub>p,n</sub>&nbsp;
=&nbsp;z<sub>n&minus;1</sub>&nbsp;+&nbsp;z<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>&nbsp;<br>
'''
</p>
 
<p style="line-height:150%;">
One choice is, for example,&nbsp;
'''
z<sub>m&minus;1</sub>&nbsp;=&nbsp;a<sub>0</sub>,&nbsp;&nbsp;
z<sub>0</sub>&nbsp;&nbsp;=&nbsp;&nbsp;z<sub>1</sub>&nbsp;&nbsp;=&nbsp;&nbsp;z<sub>2</sub>&nbsp;&nbsp;=&nbsp;...&nbsp;=&nbsp;&nbsp;z<sub>m&minus;2</sub>&nbsp;&nbsp;=&nbsp;&nbsp;0.
'''
<br>This choice solves the problem with all ''initial values'' equal to ''zero''.
</p>
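This construction can be sketched directly (the coefficients '''a<sub>0</sub>&nbsp;=&nbsp;7,&nbsp;a<sub>1</sub>&nbsp;=&nbsp;2,&nbsp;a<sub>2</sub>&nbsp;=&nbsp;&minus;1,&nbsp;a<sub>3</sub>&nbsp;=&nbsp;3''' below are arbitrary test choices, so '''m&nbsp;=&nbsp;3'''): seed the '''z<sub>i</sub>''' with zeros and '''z<sub>m&minus;1</sub>&nbsp;=&nbsp;a<sub>0</sub>''', extend by the ''homogeneous'' recurrence, and form '''y<sub>p,n</sub>''' as partial sums.

```python
a = [7, 2, -1, 3]
m = len(a) - 1

z = [0] * (m - 1) + [a[0]]                     # z_0, ..., z_{m-1}
for n in range(m, 20):                         # homogeneous extension
    z.append(sum(a[m - i] * z[n - 1 - i] for i in range(m)))

y = [sum(z[:n]) for n in range(len(z) + 1)]    # y_{p,n} = z_{n-1}+...+z_0

# y_p solves the inhomogeneous equation and all its initial values vanish
assert y[:m] == [0] * m
for n in range(m, len(z)):
    assert y[n] == sum(a[m - i] * y[n - 1 - i] for i in range(m)) + a[0]
```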
 
<p style="line-height:150%;">
The ''general solution'' to the ''inhomogeneous'' equation is given by<br>
'''y<sub>n</sub>&nbsp;=&nbsp;y<sub>p,n</sub>&nbsp;+&nbsp;&gamma;<sub>1</sub>&nbsp;w(1)<sub>n</sub>
+&nbsp;&gamma;<sub>2</sub>&nbsp;w(2)<sub>n</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;&gamma;<sub>m&minus;1</sub>&nbsp;w(m&minus;1)<sub>n</sub>
+&nbsp;&gamma;<sub>m</sub>&nbsp;w(m)<sub>n</sub>'''<br>
where<br>
'''w(1)<sub>n</sub>,&nbsp;&nbsp;w(2)<sub>n</sub>,&nbsp;&nbsp;...,&nbsp;&nbsp;w(m&minus;1)<sub>n</sub>,&nbsp;w(m)<sub>n</sub>'''<br>
are a ''basis'' for the ''homogeneous'' equation,&nbsp;and<br>
'''&gamma;<sub>1</sub>,&nbsp;&nbsp;&gamma;<sub>2</sub>,&nbsp;&nbsp;...,&nbsp;&nbsp;&gamma;<sub>m&minus;1</sub>,&nbsp;&nbsp;&gamma;<sub>m</sub>'''<br>
are ''scalars''.
</p>
 
'''Example'''
 
<p style="line-height:150%;">
'''
y<sub>n</sub>&nbsp;=&nbsp;8&nbsp;y<sub>n&minus;1</sub>&nbsp;&minus;&nbsp;25&nbsp;y<sub>n&minus;2</sub>&nbsp;+
38&nbsp;y<sub>n&minus;3</sub>&nbsp;&minus;&nbsp;28&nbsp;y<sub>n&minus;4</sub>
&nbsp;+&nbsp;8&nbsp;y<sub>n&minus;5</sub>&nbsp;+&nbsp;&nbsp;1<br>
'''
with initial conditions<br>
'''
y<sub>0</sub>&nbsp;=&nbsp;0,&nbsp;&nbsp;y<sub>1</sub>&nbsp;=&nbsp;0,&nbsp;&nbsp;y<sub>2</sub>&nbsp;=&nbsp;0,&nbsp;
&nbsp;y<sub>3</sub>&nbsp;=&nbsp;0,&nbsp;&nbsp;'''and'''&nbsp;&nbsp;y<sub>4</sub>&nbsp;=&nbsp;0.
'''
</p>
 
<p style="line-height:175%;">
The ''characteristic polynomial'' for the equation is<br>
'''
p(x)&nbsp;=&nbsp;
x<sup>5</sup>&nbsp;&minus;&nbsp;8x<sup>4</sup>&nbsp;+&nbsp;25x<sup>3</sup>&nbsp;&minus;
38x<sup>2</sup>&nbsp;+&nbsp;28x&nbsp;&minus;&nbsp;8&nbsp;
&nbsp;=&nbsp;&nbsp;(x&nbsp;&minus;&nbsp;1)<sup>2</sup>(x&nbsp;&minus;&nbsp;2)<sup>3</sup>.<br>
'''
</p>
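The stated factorization can be verified by multiplying out the factors (coefficients below are listed from the constant term upward):

```python
# Multiply out (x - 1)^2 (x - 2)^3 and compare with the expanded
# characteristic polynomial x^5 - 8x^4 + 25x^3 - 38x^2 + 28x - 8.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

p = [1]
for root, mult in ((1, 2), (2, 3)):
    for _ in range(mult):
        p = poly_mul(p, [-root, 1])      # multiply by (x - root)

assert p == [-8, 28, -38, 25, -8, 1]
```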
 
<p style="line-height:175%;">
The ''homogeneous'' equation has ''linearly independent'' solutions<br>
'''
w1<sub>n</sub>&nbsp;&nbsp;=&nbsp;&nbsp;1<sup>n</sup>&nbsp;=&nbsp;1,&nbsp;&nbsp;
w2<sub>n</sub>&nbsp;&nbsp;=&nbsp;&nbsp;n&middot;1<sup>n&minus;1</sup>&nbsp;=&nbsp;n,
&nbsp;&nbsp;and<br>
w3<sub>n</sub>&nbsp;&nbsp;=&nbsp;&nbsp;2<sup>n</sup>,&nbsp;&nbsp;
w4<sub>n</sub>&nbsp;&nbsp;=&nbsp;&nbsp;n&middot;2<sup>n&minus;1</sup>,&nbsp;&nbsp;
w5<sub>n</sub>&nbsp;&nbsp;=&nbsp;&nbsp;n(n&minus;1)&middot;2<sup>n&minus;2</sup>.<br>
'''
The solution to the ''homogeneous'' equation<br>
'''
z<sub>n</sub>&nbsp;=&nbsp;&minus;3&nbsp;w1<sub>n</sub>&nbsp;&minus;&nbsp;w2<sub>n</sub>&nbsp;+
&nbsp;3&nbsp;w3<sub>n</sub>&nbsp;&minus;&nbsp;2&nbsp;w4<sub>n</sub>&nbsp;+&nbsp;&frac12;&nbsp;w5<sub>n</sub>&nbsp;&nbsp;
'''
<br>satisfies the ''initial conditions''<br>
'''
z<sub>4</sub>&nbsp;=&nbsp;1,&nbsp;&nbsp;
z<sub>0</sub>&nbsp;&nbsp;=&nbsp;&nbsp;z<sub>1</sub>&nbsp;&nbsp;=&nbsp;&nbsp;z<sub>2</sub>&nbsp;&nbsp;=&nbsp;&nbsp;z<sub>3</sub>&nbsp;&nbsp;=&nbsp;&nbsp;0.
'''
<br>A ''particular solution'' can be found by<br>
'''y<sub>p,0</sub>&nbsp;=&nbsp;0,&nbsp;&nbsp;&nbsp;y<sub>p,n</sub>&nbsp;'''
'''
=&nbsp;z<sub>n&minus;1</sub>&nbsp;+&nbsp;z<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;z<sub>1</sub>&nbsp;+&nbsp;z<sub>0</sub>&nbsp;.<br>
'''
</p>
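That this combination of basis solutions has the required initial data and solves the ''homogeneous'' recurrence can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# z_n = -3*w1_n - w2_n + 3*w3_n - 2*w4_n + (1/2)*w5_n; Fraction powers
# keep 2^(n-1) and 2^(n-2) exact even for small n.
def z(n):
    two = Fraction(2)
    return (-3 - n + 3 * two**n - 2 * n * two**(n - 1)
            + Fraction(1, 2) * n * (n - 1) * two**(n - 2))

assert [z(n) for n in range(5)] == [0, 0, 0, 0, 1]
for n in range(5, 15):
    assert z(n) == (8*z(n-1) - 25*z(n-2) + 38*z(n-3)
                    - 28*z(n-4) + 8*z(n-5))
```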
 
<p style="line-height:175%;">
Calculating sums:<br>
'''
&sum;w1&nbsp;&nbsp;=&nbsp;&nbsp;w1<sub>n&minus;1</sub>&nbsp;+&nbsp;w1<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;w1<sub>1</sub>&nbsp;+&nbsp;w1<sub>0</sub>&nbsp;
&nbsp;=&nbsp;&nbsp;n&nbsp;.<br>
'''
'''
&sum;w2&nbsp;&nbsp;=&nbsp;&nbsp;w2<sub>n&minus;1</sub>&nbsp;+&nbsp;w2<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;w2<sub>1</sub>&nbsp;+&nbsp;w2<sub>0</sub>&nbsp;
&nbsp;=&nbsp;&nbsp;(n&minus;1)n&nbsp;/&nbsp;2&nbsp;.<br>
'''
'''
&sum;w3&nbsp;&nbsp;=&nbsp;&nbsp;w3<sub>n&minus;1</sub>&nbsp;+&nbsp;w3<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;w3<sub>1</sub>&nbsp;+&nbsp;w3<sub>0</sub>&nbsp;
&nbsp;=&nbsp;&nbsp;2<sup>n</sup>&nbsp;&minus;&nbsp;1&nbsp;.<br>
'''
Sums of these kinds are found by differentiating {{nowrap|'''(x<sup>n</sup> &minus; 1) / (x &minus; 1)'''}}.<br>
'''
&sum;w4&nbsp;&nbsp;=&nbsp;&nbsp;w4<sub>n&minus;1</sub>&nbsp;+&nbsp;w4<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;w4<sub>1</sub>&nbsp;+&nbsp;w4<sub>0</sub>&nbsp;
&nbsp;=&nbsp;&nbsp;(n&minus;2)2<sup>n&minus;1</sup>&nbsp;+&nbsp;1&nbsp;.<br>
'''
'''
&sum;w5&nbsp;&nbsp;=&nbsp;&nbsp;w5<sub>n&minus;1</sub>&nbsp;+&nbsp;w5<sub>n&minus;2</sub>&nbsp;+&nbsp;...&nbsp;+&nbsp;w5<sub>1</sub>&nbsp;+&nbsp;w5<sub>0</sub>&nbsp;
&nbsp;=&nbsp;&nbsp;(n<sup>2</sup>&nbsp;&minus;&nbsp;5n&nbsp;+&nbsp;8)2<sup>n&minus;2</sup>&nbsp;&minus;&nbsp;2&nbsp;.<br>
'''
</p>
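Each closed form can be spot-checked against direct summation:

```python
# Compare the five closed-form sums with brute-force partial sums of the
# basis solutions w1..w5.
def partial(f, n):                       # f(n-1) + ... + f(1) + f(0)
    return sum(f(k) for k in range(n))

w1 = lambda k: 1
w2 = lambda k: k
w3 = lambda k: 2**k
w4 = lambda k: k * 2**(k - 1) if k >= 1 else 0
w5 = lambda k: k * (k - 1) * 2**(k - 2) if k >= 2 else 0

for n in range(2, 12):
    assert partial(w1, n) == n
    assert partial(w2, n) == (n - 1) * n // 2
    assert partial(w3, n) == 2**n - 1
    assert partial(w4, n) == (n - 2) * 2**(n - 1) + 1
    assert partial(w5, n) == (n*n - 5*n + 8) * 2**(n - 2) - 2
```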
 
<p style="line-height:175%;">
Now,<br>
'''
y<sub>p,n</sub>&nbsp;=&nbsp;&minus;3&nbsp;&sum;w1<sub>n</sub>&nbsp;&minus;&nbsp;&sum;w2<sub>n</sub>&nbsp;+
&nbsp;3&nbsp;&sum;w3<sub>n</sub>&nbsp;&minus;&nbsp;2&nbsp;&sum;w4<sub>n</sub>&nbsp;+&nbsp;&frac12;&nbsp;&sum;w5<sub>n</sub>
'''
<br>solves the ''initial value problem'' of this example.
</p>
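Assembling '''y<sub>p,n</sub>''' from the closed-form sums, one can confirm that it has zero initial values and satisfies the ''inhomogeneous'' recurrence:

```python
from fractions import Fraction

# y_p,n = -3*sum(w1) - sum(w2) + 3*sum(w3) - 2*sum(w4) + (1/2)*sum(w5),
# using the closed forms for the partial sums.
def yp(n):
    two = Fraction(2)
    s1 = n
    s2 = Fraction((n - 1) * n, 2)
    s3 = two**n - 1
    s4 = (n - 2) * two**(n - 1) + 1
    s5 = (n*n - 5*n + 8) * two**(n - 2) - 2
    return -3*s1 - s2 + 3*s3 - 2*s4 + Fraction(1, 2)*s5

assert [yp(n) for n in range(5)] == [0, 0, 0, 0, 0]
for n in range(5, 15):
    assert yp(n) == (8*yp(n-1) - 25*yp(n-2) + 38*yp(n-3)
                     - 28*yp(n-4) + 8*yp(n-5) + 1)
```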
 
<p style="line-height:175%;">
At this point it is worthwhile to notice that all the ''terms'' that are
<br>''scalar multiples'' of ''basis elements'' can be dropped,&nbsp;since they can be absorbed into the ''homogeneous'' part of the ''general solution''.  These are any multiples of<br>
'''
1,&nbsp;&nbsp;n,&nbsp;&nbsp;2<sup>n</sup>,&nbsp;&nbsp;n&middot;2<sup>n&minus;1</sup>,'''
&nbsp;and&nbsp;&nbsp;'''n<sup>2</sup>&middot;2<sup>n&minus;2</sup>.
'''
<br>So the following simpler ''particular'' solution may be preferred.<br>
'''
y<sub>p,n</sub>&nbsp;&nbsp;=&nbsp;&nbsp;&minus;&frac12;&nbsp;n<sup>2</sup>&nbsp;.<br>
'''
This solution has ''non-zero'' initial values, which must be taken into account.<br>
'''
y<sub>0</sub>&nbsp;=&nbsp;0,&nbsp;&nbsp;y<sub>1</sub>&nbsp;=&nbsp;&minus;1&nbsp;&frasl;&nbsp;2,&nbsp;&nbsp;y<sub>2</sub>&nbsp;=&nbsp;&minus;2,&nbsp;
&nbsp;y<sub>3</sub>&nbsp;=&nbsp;&minus;9&nbsp;&frasl;&nbsp;2,&nbsp;&nbsp;'''and'''&nbsp;&nbsp;y<sub>4</sub>&nbsp;=&nbsp;&minus;8.
'''
</p>
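A direct check confirms both the listed initial values and that {{nowrap|'''y<sub>n</sub> {{=}} &minus;n<sup>2</sup>&nbsp;&frasl;&nbsp;2'''}} satisfies the ''inhomogeneous'' recurrence:

```python
from fractions import Fraction

# y_n = -n^2 / 2, kept exact with rational arithmetic.
def y(n):
    return Fraction(-n * n, 2)

assert [y(n) for n in range(5)] == [0, Fraction(-1, 2), -2,
                                    Fraction(-9, 2), -8]
for n in range(5, 15):
    assert y(n) == (8*y(n-1) - 25*y(n-2) + 38*y(n-3)
                    - 28*y(n-4) + 8*y(n-5) + 1)
```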
 
== References ==
*{{Cite book
  | last = Axler
  | first = Sheldon
  | title = Linear Algebra Done Right
  | publisher = Springer
  | year = 1997
  | edition = 2nd
  | isbn = 978-0-387-98258-8}}
 
{{DEFAULTSORT:Generalized Eigenvector}}
[[Category:Linear algebra]]

Revision as of 23:09, 13 February 2014
