Proofs involving the Moore–Penrose pseudoinverse
Let A be an m-by-n matrix over a field K, where K is either the field ℝ of real numbers or the field ℂ of complex numbers. Then there is a unique n-by-m matrix A^{+} over K such that:
- A A^{+}A = A
- A^{+}A A^{+} = A^{+}
- (AA^{+})^{*} = AA^{+}
- (A^{+}A)^{*} = A^{+}A
A^{+} is called the Moore–Penrose pseudoinverse of A. Notice that A is also the Moore–Penrose pseudoinverse of A^{+}; that is, (A^{+})^{+} = A.
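As an informal illustration (not part of the proofs below), the four criteria and the relation (A^{+})^{+} = A can be checked numerically. The following minimal NumPy sketch assumes np.linalg.pinv as the pseudoinverse; the matrix size and random seed are arbitrary.

```python
# Numerical sanity check of the four Moore-Penrose criteria (illustration only).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
Ap = np.linalg.pinv(A)  # NumPy's Moore-Penrose pseudoinverse

assert np.allclose(A @ Ap @ A, A)              # A A+ A = A
assert np.allclose(Ap @ A @ Ap, Ap)            # A+ A A+ = A+
assert np.allclose((A @ Ap).conj().T, A @ Ap)  # (A A+)* = A A+
assert np.allclose((Ap @ A).conj().T, Ap @ A)  # (A+ A)* = A+ A
assert np.allclose(np.linalg.pinv(Ap), A)      # (A+)+ = A
```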
Useful lemmas
In the following lemmas, A is a matrix with complex elements and n columns. B and C are matrices with complex elements and n rows. These results are used in the proofs below.
Lemma 1: A*A = 0 ⇒ A = 0
The assumption says that all elements of A*A are zero. Therefore,

0 = Tr(A*A) = Σ_{j} (A*A)_{jj} = Σ_{j} Σ_{i} (A*)_{ji}A_{ij} = Σ_{i,j} |A_{ij}|^{2}.

Therefore all A_{ij} equal 0, i.e. A = 0.
Lemma 2: A*AB = 0 ⇒ AB = 0
0 = A*AB
⇒ 0 = B*A*AB
⇒ 0 = (AB)*(AB)
⇒ 0 = AB (by Lemma 1)
Lemma 3: BAA* = 0 ⇒ BA = 0
This is proved in a manner similar to the argument of Lemma 2 (or by simply taking the Hermitian conjugate).
Existence and uniqueness
Proof of uniqueness
Suppose that B and C are two n-by-m matrices over K satisfying the Moore–Penrose criteria. Observe then that
AB = (AB)* = B*A* = B*(ACA)* = B*A*C*A* = (AB)*(AC)* = ABAC = AC.
Analogously we conclude that BA=CA. The proof is completed by observing that then
B = BAB = BAC = CAC = C.
Proof of existence
The proof proceeds in stages.
1-by-1 matrices
It is easy to see that x^{+} = x^{−1} for x ≠ 0, and x^{+} = 0 for x = 0, is a pseudoinverse of x (interpreted as a 1-by-1 matrix).
Square diagonal matrices
Let D be an n-by-n matrix over K with zeros off the diagonal. We define D^{+} as the n-by-n matrix over K with (D^{+})_{ii} = (D_{ii})^{+}, using the scalar pseudoinverse defined above. We write simply D^{+}_{ii} for (D^{+})_{ii}.
Notice that D^{+} is also a matrix with zeros off the diagonal.
We now show that D^{+} is a pseudoinverse of D. Since all the matrices involved are diagonal, it suffices to check the four criteria on the diagonal entries:
(DD^{+}D)_{ii} = D_{ii}D^{+}_{ii}D_{ii} = D_{ii}, so DD^{+}D = D
(D^{+}DD^{+})_{ii} = D^{+}_{ii}D_{ii}D^{+}_{ii} = D^{+}_{ii}, so D^{+}DD^{+} = D^{+}
(DD^{+})_{ii} = D_{ii}D^{+}_{ii} equals 1 if D_{ii} ≠ 0 and 0 otherwise, so DD^{+} is real and diagonal, hence (DD^{+})* = DD^{+}
Similarly, (D^{+}D)* = D^{+}D
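As an informal illustration of the diagonal case (not part of the proof), the entry-wise construction can be checked numerically; the diagonal below is an arbitrary choice that includes a zero entry.

```python
# Pseudoinverse of a square diagonal matrix: invert nonzero entries, keep zeros.
import numpy as np

d = np.array([2.0, 0.0, -0.5])
D = np.diag(d)
d_plus = np.divide(1.0, d, out=np.zeros_like(d), where=(d != 0))
Dp = np.diag(d_plus)

assert np.allclose(D @ Dp @ D, D)    # D D+ D = D
assert np.allclose(Dp @ D @ Dp, Dp)  # D+ D D+ = D+
assert np.allclose(Dp, np.linalg.pinv(D))
```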
General diagonal matrices
Now let D be an m-by-n matrix over K with zeros off the main diagonal, where m and n need not be equal. We define D^{+} as the n-by-m matrix over K with (D^{+})_{ii} = (D_{ii})^{+} on the main diagonal and zeros elsewhere. The four criteria are verified entry-wise exactly as in the square case, so D^{+} is a pseudoinverse of D.
Arbitrary matrices
The singular value decomposition theorem states that there exists a factorization of the form

A = UΣV*

where:
where:
- U is an m-by-m unitary matrix over K.
- Σ is an m-by-n matrix over K with nonnegative numbers on the diagonal and zeros off the diagonal.
- V is an n-by-n unitary matrix over K.^{[1]}
We now show that A^{+} = VΣ^{+}U* is a pseudoinverse of A, where Σ^{+} is the pseudoinverse of the general diagonal matrix Σ constructed above:
AA^{+}A = UΣV*VΣ^{+}U*UΣV* = UΣΣ^{+}ΣV* = UΣV* = A
A^{+}AA^{+} = VΣ^{+}U*UΣV*VΣ^{+}U* = VΣ^{+}ΣΣ^{+}U* = VΣ^{+}U* = A^{+}
(AA^{+})* = (UΣΣ^{+}U*)* = U(ΣΣ^{+})*U* = UΣΣ^{+}U* = AA^{+}
(A^{+}A)* = (VΣ^{+}ΣV*)* = V(Σ^{+}Σ)*V* = VΣ^{+}ΣV* = A^{+}A
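As an informal illustration (not part of the proof), the SVD construction can be compared with NumPy's built-in pseudoinverse; the matrix size, seed, and the 1e-12 cutoff for treating a singular value as zero are arbitrary choices.

```python
# Build A+ = V Sigma+ U* from an SVD and compare with np.linalg.pinv.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
U, s, Vh = np.linalg.svd(A)  # A = U @ Sigma @ Vh

# Sigma+ has the transposed shape, with nonzero singular values inverted.
Sp = np.zeros((A.shape[1], A.shape[0]))
np.fill_diagonal(Sp, np.divide(1.0, s, out=np.zeros_like(s), where=(s > 1e-12)))

Ap = Vh.conj().T @ Sp @ U.conj().T  # A+ = V Sigma+ U*
assert np.allclose(Ap, np.linalg.pinv(A))
```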
Basic properties
(A*)^{+} = A^{+*}
The proof works by showing that A^{+*} satisfies the four criteria for the pseudoinverse of A*. Since this amounts to just substitution, it is not shown here.
The proof of this relation is also given as Exercise 1.18c in the reference cited below.^{[2]}
Identities
A^{+} = A^{+} A^{+*} A*
A^{+} = A^{+}AA^{+} and AA^{+} = (AA^{+})* imply that A^{+} = A^{+}(A A^{+})* = A^{+}A^{+*}A*.
A^{+} = A* A^{+*} A^{+}
A^{+} = A^{+}AA^{+} and A^{+}A = (A^{+}A)* imply that A^{+} = (A^{+}A)*A^{+} = A*A^{+*}A^{+}.
A = A^{+*} A* A
A = A A^{+} A and A A^{+} = (A A^{+})* imply that A = (A A^{+})* A = A^{+*} A* A.
A = A A* A^{+*}
A = A A^{+} A and A^{+} A = (A^{+} A)* imply that A = A (A^{+} A)* = A A* A^{+*}.
A* = A* A A^{+}
This is the conjugate transpose of A = A^{+*} A* A above.
A* = A^{+} A A*
This is the conjugate transpose of A = A A* A^{+*} above.
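As an informal check (not part of the proofs), all six identities above can be verified numerically on an arbitrary random complex matrix:

```python
# Numerical check of the six identities (illustration only).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
Ap = np.linalg.pinv(A)
Ah, Aph = A.conj().T, Ap.conj().T  # A* and A+*

assert np.allclose(Ap, Ap @ Aph @ Ah)  # A+ = A+ A+* A*
assert np.allclose(Ap, Ah @ Aph @ Ap)  # A+ = A* A+* A+
assert np.allclose(A, Aph @ Ah @ A)    # A  = A+* A* A
assert np.allclose(A, A @ Ah @ Aph)    # A  = A A* A+*
assert np.allclose(Ah, Ah @ A @ Ap)    # A* = A* A A+
assert np.allclose(Ah, Ap @ A @ Ah)    # A* = A+ A A*
```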
Reduction to the Hermitian case
The results of this section show that the computation of the pseudoinverse is reducible to its construction in the Hermitian case. It suffices to show that the putative constructions satisfy the defining criteria.
A^{+} = A* (A A*)^{+}
This relation is given as Exercise 18(d) in the reference cited below,^{[2]} for the reader to prove, "for every matrix A". Write D = A*(AA*)^{+}. Observe that
AA* = AA*(AA*)^{+}AA*
⇔ AA* = ADAA*
⇔ 0 = (AD − I)AA*
⇔ 0 = ADA − A (by Lemma 3)
⇔ A = ADA.
Similarly, (AA*)^{+}AA*(AA*)^{+} = (AA*)^{+} implies that A*(AA*)^{+}AA*(AA*)^{+} = A*(AA*)^{+} i.e. DAD = D.
Additionally, AD = AA*(AA*)^{+}, which is Hermitian by the third Moore–Penrose criterion applied to AA*, so AD = (AD)*.
Finally, DA = A*(AA*)^{+}A implies that (DA)* = A* ((AA*)^{+})*A = A* ((AA*)^{+})A = DA.
Therefore D = A^{+}.
A^{+} = (A* A)^{+}A*
This is proved in an analogous manner to the case above.
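As an informal check (not part of the proofs), both reductions can be verified numerically; the matrix size and seed are arbitrary.

```python
# A+ = A* (A A*)+ and A+ = (A* A)+ A* (illustration only).
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
Ah = A.conj().T

assert np.allclose(np.linalg.pinv(A), Ah @ np.linalg.pinv(A @ Ah))  # A+ = A* (A A*)+
assert np.allclose(np.linalg.pinv(A), np.linalg.pinv(Ah @ A) @ Ah)  # A+ = (A* A)+ A*
```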
Products
For the first three proofs, we consider products C = AB.
A has orthonormal columns
If A has orthonormal columns, i.e. A*A = I, then A^{+} = A*. Write D = B^{+}A^{+} = B^{+}A*. We show that D satisfies the Moore–Penrose criteria.
CDC = ABB^{+}A*AB = ABB^{+}B = AB = C.
DCD = B^{+}A*ABB^{+}A* = B^{+}BB^{+}A* = B^{+}A* = D
(CD)* = D*B*A* = A(B^{+})*B*A* = A(BB^{+})*A* = ABB^{+}A* = CD
(DC)* = B*A*D* = B*A*A(B^{+})* = (B^{+}B)* = B^{+}B = B^{+}A*AB = DC
Therefore D = C^{+}.
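As an informal check (not part of the proof), a matrix with orthonormal columns can be obtained from a thin QR factorization; the sizes and seed below are arbitrary.

```python
# (AB)+ = B+ A* when A has orthonormal columns (illustration only).
import numpy as np

rng = np.random.default_rng(4)
A, _ = np.linalg.qr(rng.standard_normal((6, 4)))  # A*A = I
B = rng.standard_normal((4, 5))

C = A @ B
assert np.allclose(np.linalg.pinv(C), np.linalg.pinv(B) @ A.conj().T)
```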
B has orthonormal rows
If B has orthonormal rows, i.e. BB* = I, then B^{+} = B*. Write D = B^{+}A^{+} = B*A^{+}. We show that D satisfies the Moore–Penrose criteria.
CDC = ABB*A^{+}AB = AA^{+}AB = AB = C.
DCD = B*A^{+}ABB*A^{+} = B*A^{+}AA^{+} = B*A^{+} = D
(CD)* = D*B*A* = (A^{+})*BB*A* = (A^{+})*A* = (AA^{+})* = AA^{+} = ABB*A^{+} = CD
(DC)* = B*A*D* = B*A*(A^{+})*B = B*(A^{+}A)*B = B*A^{+}AB = DC
Therefore D = C^{+}.
A has full column rank and B has full row rank
Since A has full column rank, A*A is invertible so (A*A)^{+} = (A*A)^{−1}. Similarly, since B has full row rank, BB* is invertible so (BB*)^{+} = (BB*)^{−1}.
Write D = B^{+}A^{+} = B*(BB*)^{−1}(A*A)^{−1}A*. We show that D satisfies the Moore–Penrose criteria.
CDC = ABB*(BB*)^{−1}(A*A)^{−1}A*AB = AB = C.
DCD = B*(BB*)^{−1}(A*A)^{−1}A*ABB*(BB*)^{−1}(A*A)^{−1}A* = B*(BB*)^{−1}(A*A)^{−1}A* = D
CD = ABB*(BB*)^{−1}(A*A)^{−1}A* = A(A*A)^{−1}A* = (A(A*A)^{−1}A*)* ⇒ (CD)* = CD.
DC = B*(BB*)^{−1}(A*A)^{−1}A*AB = B*(BB*)^{−1}B = (B*(BB*)^{−1}B)* ⇒ (DC)* = DC.
Therefore D = C^{+}.
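As an informal check (not part of the proof), random rectangular matrices of the shapes below have full column rank and full row rank almost surely:

```python
# (AB)+ = B+ A+ when A has full column rank and B has full row rank.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3))  # full column rank (almost surely)
B = rng.standard_normal((3, 5))  # full row rank (almost surely)

assert np.allclose(np.linalg.pinv(A @ B),
                   np.linalg.pinv(B) @ np.linalg.pinv(A))
```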
Conjugate transpose
Here, B = A*, and thus C = AA* and D = A^{+*}A^{+}. We show that D indeed satisfies the four Moore–Penrose criteria, using the relation A*A^{+*} = (A^{+}A)* = A^{+}A and the identities proved above.
CDC = AA*A^{+*}A^{+}AA* = A(A^{+}A)(A^{+}A)A* = A(A^{+}A)A* = AA* = C
DCD = A^{+*}A^{+}AA*A^{+*}A^{+} = A^{+*}(A^{+}A)(A^{+}A)A^{+} = A^{+*}A^{+}AA^{+} = A^{+*}A^{+} = D
CD = AA*A^{+*}A^{+} = A(A^{+}A)A^{+} = AA^{+}, which is Hermitian, so (CD)* = CD
DC = A^{+*}A^{+}AA* = A^{+*}A* = (AA^{+})* = AA^{+}, using A^{+}AA* = A* above, so (DC)* = DC
Therefore D = C^{+}.
Projectors and subspaces
Define P = AA^{+} and Q = A^{+}A. Observe that P^{2} = AA^{+}AA^{+} = AA^{+} = P. Similarly Q^{2} = Q, and finally, P = P* and Q = Q*. Thus P and Q are orthogonal projection operators. Orthogonality follows from the relations P = P* and Q = Q*. Indeed, consider the operator P: any vector x decomposes as
x = Px + (I-P)x
and for all vectors x and y satisfying Px=x and (I-P)y = y, we have
x*y = (Px)*(I-P)y = x*P*(I-P)y = x*P(I-P)y = 0.
It follows that PA = AA^{+}A = A and A^{+}P = A^{+}AA^{+} = A^{+}. Similarly, QA^{+} = A^{+} and AQ = A. The orthogonal components are now readily identified.
If y belongs to the range of A then for some x, y = Ax and Py = PAx = Ax = y. Conversely, if Py = y then y = AA^{+}y so that y belongs to the range of A. It follows that P is the orthogonal projector onto the range of A. I - P is then the orthogonal projector onto the orthogonal complement of the range of A, which equals the kernel of A*.
A similar argument using the relation Q A* = A* establishes that Q is the orthogonal projector onto the range of A* and (I-Q) is the orthogonal projector onto the kernel of A.
Using the relations P(A^{+})* = P*(A^{+})* = (A^{+}P)* = (A^{+})* and P = P* = (A^{+})*A* it follows that the range of P equals the range of (A^{+})*, which in turn implies that the range of I - P equals the kernel of A^{+}. Similarly QA^{+} = A^{+} implies that the range of Q equals the range of A^{+}. Therefore we find:
- P = AA^{+} is the orthogonal projector onto the range of A, which equals the range of (A^{+})*.
- Q = A^{+}A is the orthogonal projector onto the range of A*, which equals the range of A^{+}.
- I - P is the orthogonal projector onto the kernel of A*, which equals the kernel of A^{+}.
- I - Q is the orthogonal projector onto the kernel of A.
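As an informal check (not part of the proofs), the projector properties can be verified numerically; the size and seed are arbitrary.

```python
# P = A A+ and Q = A+ A are Hermitian idempotents; P fixes range(A).
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))
Ap = np.linalg.pinv(A)
P, Q = A @ Ap, Ap @ A

assert np.allclose(P @ P, P) and np.allclose(P.conj().T, P)
assert np.allclose(Q @ Q, Q) and np.allclose(Q.conj().T, Q)

y = A @ rng.standard_normal(3)  # an arbitrary vector in the range of A
assert np.allclose(P @ y, y)    # P acts as the identity on range(A)
```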
Additional properties
Least-squares minimization
In the general case, it is shown here for any m-by-n matrix A that ‖Ax − b‖ ≥ ‖Az − b‖ for all x, where z = A^{+}b. This lower bound need not be zero, as the system Ax = b may not have a solution (e.g. when the matrix A does not have full rank or the system is overdetermined).
To prove this, we first note that (stating the complex case), using the fact that P = AA^{+} satisfies PA = A and P = P* = P^{2}, we have

(Az − b)*A = ((P − I)b)*A = b*(P − I)A = b*(PA − A) = 0,

so that Az − b is orthogonal to the range of A. Hence, for any x,

‖Ax − b‖^{2} = ‖A(x − z) + (Az − b)‖^{2} = ‖A(x − z)‖^{2} + ‖Az − b‖^{2} ≥ ‖Az − b‖^{2},

as claimed.
If A is injective, i.e. one-to-one (which implies m ≥ n), then the bound is attained uniquely at x = z, since equality forces A(x − z) = 0 and hence x = z.
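As an informal check (not part of the proof), the pseudoinverse solution can be compared with NumPy's least-squares solver on an arbitrary overdetermined system:

```python
# z = A+ b minimizes ||Ax - b||; it matches np.linalg.lstsq (illustration only).
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 3))  # overdetermined system
b = rng.standard_normal(8)

z = np.linalg.pinv(A) @ b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(z, x_ls)

x = rng.standard_normal(3)  # any other candidate has a residual at least as large
assert np.linalg.norm(A @ x - b) >= np.linalg.norm(A @ z - b)
```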
Minimum-norm solution to a linear system
The proof above also shows that if the system Ax = b is satisfiable, i.e. has a solution, then necessarily z = A^{+}b is a solution (not necessarily unique). We show here that z is the smallest such solution (its Euclidean norm is uniquely minimum).
To see this, note first, with Q = A^{+}A as above, that Qz = A^{+}AA^{+}b = A^{+}b = z, and that Q = Q*. Therefore, assuming that Ax = b, we have Qx = A^{+}Ax = A^{+}b = z. Thus

‖x‖^{2} = ‖Qx‖^{2} + ‖(I - Q)x‖^{2} = ‖z‖^{2} + ‖(I - Q)x‖^{2} ≥ ‖z‖^{2},

with equality if and only if (I - Q)x = 0, i.e. x = z, as was to be shown.
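As an informal check (not part of the proof), for a solvable underdetermined system every other solution differs from z = A^{+}b by a kernel vector and is at least as long; the sizes and seed are arbitrary.

```python
# z = A+ b is the minimum-norm solution of a solvable system (illustration only).
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 6))  # underdetermined, full row rank almost surely
b = rng.standard_normal(3)

z = np.linalg.pinv(A) @ b
assert np.allclose(A @ z, b)         # z solves the system

w = rng.standard_normal(6)
n = w - np.linalg.pinv(A) @ (A @ w)  # (I - A+ A) w lies in the kernel of A
assert np.allclose(A @ n, 0)
assert np.linalg.norm(z + n) >= np.linalg.norm(z)
```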
References
- ↑ Some authors use slightly different dimensions for the factors. The two definitions are equivalent.
- ↑ ^{2.0} ^{2.1} Adi Ben-Israel; Thomas N.E. Greville (2003). Generalized Inverses: Theory and Applications (2nd ed.). New York: Springer-Verlag. ISBN 0-387-00293-6.