{{Refimprove|date=December 2009}}

In [[mathematics]], a '''block matrix''' or a '''partitioned matrix''' is a [[matrix (mathematics)|matrix]] which is ''interpreted'' as having been broken into sections called '''blocks''' or '''submatrices'''.<ref>{{cite book |last=Eves |first=Howard |authorlink=Howard Eves |title=Elementary Matrix Theory |year=1980 |publisher=Dover |location=New York |isbn=0-486-63946-0 |page=37 |url=http://books.google.com/books?id=ayVxeUNbZRAC&lpg=PA40&dq=block%20multiplication&pg=PA37#v=onepage&q&f=false |edition=reprint |accessdate=24 April 2013 |quote=We shall find that it is sometimes convenient to subdivide a matrix into rectangular blocks of elements. This leads us to consider so-called ''partitioned'', or ''block'', ''matrices''.}}</ref> Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines which break it up, or [[Partition of a set|partition]] it, into a collection of smaller matrices.<ref>{{cite book |last=Anton |first=Howard |title=Elementary Linear Algebra |year=1994 |publisher=John Wiley |location=New York |isbn=0-471-58742-7 |page=30 |edition=7th |quote=A matrix can be subdivided or '''''partitioned''''' into smaller matrices by inserting horizontal and vertical rules between selected rows and columns.}}</ref> Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.

This notion can be made more precise for an <math>n</math> by <math>m</math> matrix <math>M</math> by partitioning <math>n</math> into a collection <math>\text{rowgroups}</math> and then partitioning <math>m</math> into a collection <math>\text{colgroups}</math>. The original matrix is then considered as the "total" of these groups, in the sense that the <math>(i,j)</math> entry of the original matrix corresponds in a [[Bijection|one-to-one and onto]] way to some <math>(s,t)</math> [[Offset (computer science)|offset]] entry of some <math>(x,y)</math>, where <math>x \in \text{rowgroups}</math> and <math>y \in \text{colgroups}</math>.

==Example==
[[File:BlockMatrix168square.png|thumb|A 168×168 element block matrix with 12×12, 12×24, and 24×24 sub-matrices. Non-zero elements are in blue, zero elements are grayed.]]

The matrix

:<math>\mathbf{P} = \begin{bmatrix}
1 & 1 & 2 & 2\\
1 & 1 & 2 & 2\\
3 & 3 & 4 & 4\\
3 & 3 & 4 & 4\end{bmatrix}</math>

can be partitioned into four 2×2 blocks

:<math>\mathbf{P}_{11} = \begin{bmatrix}
1 & 1 \\
1 & 1 \end{bmatrix}, \mathbf{P}_{12} = \begin{bmatrix}
2 & 2\\
2 & 2\end{bmatrix}, \mathbf{P}_{21} = \begin{bmatrix}
3 & 3 \\
3 & 3 \end{bmatrix}, \mathbf{P}_{22} = \begin{bmatrix}
4 & 4\\
4 & 4\end{bmatrix}.</math>

The partitioned matrix can then be written as

:<math>\mathbf{P} = \begin{bmatrix}
\mathbf{P}_{11} & \mathbf{P}_{12}\\
\mathbf{P}_{21} & \mathbf{P}_{22}\end{bmatrix}.</math>
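
The same partitioning can be carried out computationally. The following sketch (illustrative only; it assumes [[NumPy]] is available) slices <math>\mathbf{P}</math> into its four blocks and reassembles them:

<syntaxhighlight lang="python">
import numpy as np

# The 4x4 matrix P from the example above.
P = np.array([[1, 1, 2, 2],
              [1, 1, 2, 2],
              [3, 3, 4, 4],
              [3, 3, 4, 4]])

P11, P12 = P[:2, :2], P[:2, 2:]   # top-left and top-right blocks
P21, P22 = P[2:, :2], P[2:, 2:]   # bottom-left and bottom-right blocks

# np.block stitches the submatrices back into the original matrix.
assert np.array_equal(np.block([[P11, P12], [P21, P22]]), P)
</syntaxhighlight>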

==Block matrix multiplication==
A block partitioned matrix product can sometimes be used that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however, and requires "conformable partitions"<ref>{{cite book |last=Eves |first=Howard |authorlink=Howard Eves |title=Elementary Matrix Theory |year=1980 |publisher=Dover |location=New York |isbn=0-486-63946-0 |page=37 |url=http://books.google.com/books?id=ayVxeUNbZRAC&lpg=PA40&dq=block%20multiplication&pg=PA39#v=onepage&q&f=false |edition=reprint |accessdate=24 April 2013 |quote=A partitioning as in Theorem 1.9.4 is called a ''conformable partition'' of ''A'' and ''B''.}}</ref> between two matrices <math>A</math> and <math>B</math> such that all submatrix products that will be used are defined.<ref>{{cite book |last=Anton |first=Howard |title=Elementary Linear Algebra |year=1994 |publisher=John Wiley |location=New York |isbn=0-471-58742-7 |page=36 |edition=7th |quote=...provided the sizes of the submatrices of A and B are such that the indicated operations can be performed.}}</ref> Given an <math>(m \times p)</math> matrix <math>\mathbf{A}</math> with <math>q</math> row partitions and <math>s</math> column partitions

:<math>
\mathbf{A} = \begin{bmatrix}
\mathbf{A}_{11} & \mathbf{A}_{12} & \cdots &\mathbf{A}_{1s}\\
\mathbf{A}_{21} & \mathbf{A}_{22} & \cdots &\mathbf{A}_{2s}\\
\vdots & \vdots & \ddots &\vdots \\
\mathbf{A}_{q1} & \mathbf{A}_{q2} & \cdots &\mathbf{A}_{qs}\end{bmatrix}</math>

and a <math>(p\times n)</math> matrix <math>\mathbf{B}</math> with <math>s</math> row partitions and <math>r</math> column partitions

:<math>
\mathbf{B} = \begin{bmatrix}
\mathbf{B}_{11} & \mathbf{B}_{12} & \cdots &\mathbf{B}_{1r}\\
\mathbf{B}_{21} & \mathbf{B}_{22} & \cdots &\mathbf{B}_{2r}\\
\vdots & \vdots & \ddots &\vdots \\
\mathbf{B}_{s1} & \mathbf{B}_{s2} & \cdots &\mathbf{B}_{sr}\end{bmatrix},</math>

that are compatible with the partitions of <math>A</math>, the matrix product

:<math>
\mathbf{C}=\mathbf{A}\mathbf{B}
</math>

can be formed blockwise, yielding <math>\mathbf{C}</math> as an <math>(m\times n)</math> matrix with <math>q</math> row partitions and <math>r</math> column partitions. The blocks of the resulting matrix <math>\mathbf{C}</math> are calculated by multiplying:

:<math>
\mathbf{C}_{\alpha \beta} = \sum^s_{\gamma=1}\mathbf{A}_{\alpha \gamma}\mathbf{B}_{\gamma \beta}.
</math>

Or, using the [[Einstein notation]] that implicitly sums over repeated indices:

:<math>
\mathbf{C}_{\alpha \beta} = \mathbf{A}_{\alpha \gamma}\mathbf{B}_{\gamma \beta}.
</math>
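
The blockwise rule can be checked against the ordinary matrix product. The following sketch is illustrative only; the helper <code>block_multiply</code> is not a standard routine, and the partitions are assumed conformable:

<syntaxhighlight lang="python">
import numpy as np

def block_multiply(A_blocks, B_blocks):
    """Blockwise product: C[a][b] = sum over g of A[a][g] @ B[g][b].

    A_blocks is a q x s nested list of submatrices and B_blocks is s x r;
    conformable partitions are assumed, so every block product is defined.
    """
    q, s, r = len(A_blocks), len(B_blocks), len(B_blocks[0])
    return [[sum(A_blocks[a][g] @ B_blocks[g][b] for g in range(s))
             for b in range(r)]
            for a in range(q)]

# Random factors, partitioned so that the column partition of A (3 + 4 columns)
# matches the row partition of B (3 + 4 rows).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 7))
B = rng.standard_normal((7, 4))
A_blocks = [[A[:2, :3], A[:2, 3:]], [A[2:, :3], A[2:, 3:]]]
B_blocks = [[B[:3, :1], B[:3, 1:]], [B[3:, :1], B[3:, 1:]]]

C = np.block(block_multiply(A_blocks, B_blocks))
assert np.allclose(C, A @ B)   # blockwise product agrees with the direct product
</syntaxhighlight>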

==Block diagonal matrices {{anchor|Block diagonal matrix}}==
A '''block diagonal matrix''' is a block matrix that is a [[square matrix]] whose [[main diagonal]] blocks are square matrices and whose off-diagonal blocks are zero matrices. A block diagonal matrix '''A''' has the form

:<math>
\mathbf{A} = \begin{bmatrix}
\mathbf{A}_{1} & 0 & \cdots & 0 \\
0 & \mathbf{A}_{2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \mathbf{A}_{n}
\end{bmatrix}
</math>

where '''A'''<sub>''k''</sub> is a square matrix; in other words, it is the [[Direct sum of matrices|direct sum]] of '''A'''<sub>1</sub>, …, '''A'''<sub>''n''</sub>. It can also be indicated as '''A'''<sub>1</sub> <math>\oplus</math> '''A'''<sub>2</sub> <math>\oplus\,\ldots\,\oplus</math> '''A'''<sub>''n''</sub> or diag('''A'''<sub>1</sub>, '''A'''<sub>2</sub>, <math>\ldots</math>, '''A'''<sub>''n''</sub>) (the latter being the same formalism used for a [[diagonal matrix]]).

Any square matrix can trivially be considered a block diagonal matrix with only one block.

For the [[determinant]] and [[trace (linear algebra)|trace]], the following properties hold:

:<math> \operatorname{det} \mathbf{A} = \operatorname{det} \mathbf{A}_1 \times \cdots \times \operatorname{det} \mathbf{A}_n,</math>
:<math> \operatorname{tr} \mathbf{A} = \operatorname{tr} \mathbf{A}_1 + \cdots + \operatorname{tr} \mathbf{A}_n.</math>

The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverse of each block, as follows:

:<math>\begin{pmatrix}
\mathbf{A}_{1} & 0 & \cdots & 0 \\
0 & \mathbf{A}_{2} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \mathbf{A}_{n}
\end{pmatrix}^{-1} = \begin{pmatrix}
\mathbf{A}_{1}^{-1} & 0 & \cdots & 0 \\
0 & \mathbf{A}_{2}^{-1} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \mathbf{A}_{n}^{-1}
\end{pmatrix}.
</math>

The [[eigenvalue]]s and [[eigenvector]]s of <math>\mathbf{A}</math> are simply those of <math>\mathbf{A}_{1}, \mathbf{A}_{2}, \ldots, \mathbf{A}_{n}</math> combined.
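
These properties can be checked numerically. The sketch below is illustrative only; it uses two randomly generated blocks and SciPy's <code>block_diag</code> to verify the determinant, trace, inverse and eigenvalue statements:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
A1 = rng.standard_normal((2, 2))
A2 = rng.standard_normal((3, 3))
A = block_diag(A1, A2)                       # the direct sum of A1 and A2

# det(A) = det(A1) det(A2) and tr(A) = tr(A1) + tr(A2)
assert np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A2))
assert np.isclose(np.trace(A), np.trace(A1) + np.trace(A2))

# The inverse is block diagonal, with the blockwise inverses on the diagonal.
assert np.allclose(np.linalg.inv(A),
                   block_diag(np.linalg.inv(A1), np.linalg.inv(A2)))

# The eigenvalues of A are those of the blocks, combined.
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A1),
                                             np.linalg.eigvals(A2)]))
assert np.allclose(eig_A, eig_blocks)
</syntaxhighlight>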

==Block tridiagonal matrices==
A '''block tridiagonal matrix''' is another special block matrix, which, like the block diagonal matrix, is a [[square matrix]] with square matrices (blocks) on the lower diagonal, the [[main diagonal]] and the upper diagonal, and with all other blocks being zero matrices.
It is essentially a [[tridiagonal matrix]], but with submatrices in place of scalars. A block tridiagonal matrix '''A''' has the form

:<math>
\mathbf{A} = \begin{bmatrix}
\mathbf{B}_{1} & \mathbf{C}_{1} & & & \cdots & & 0 \\
\mathbf{A}_{2} & \mathbf{B}_{2} & \mathbf{C}_{2} & & & & \\
& \ddots & \ddots & \ddots & & & \vdots \\
& & \mathbf{A}_{k} & \mathbf{B}_{k} & \mathbf{C}_{k} & & \\
\vdots & & & \ddots & \ddots & \ddots & \\
& & & & \mathbf{A}_{n-1} & \mathbf{B}_{n-1} & \mathbf{C}_{n-1} \\
0 & & \cdots & & & \mathbf{A}_{n} & \mathbf{B}_{n}
\end{bmatrix}
</math>

where '''A'''<sub>''k''</sub>, '''B'''<sub>''k''</sub> and '''C'''<sub>''k''</sub> are square sub-matrices of the lower, main and upper diagonal, respectively.

Block tridiagonal matrices are often encountered in numerical solutions of engineering problems (e.g., [[computational fluid dynamics]]). Optimized numerical methods for [[LU factorization]] are available, and hence efficient solution algorithms exist for equation systems with a block tridiagonal matrix as coefficient matrix. The [[Thomas algorithm]], used for the efficient solution of equation systems involving a [[tridiagonal matrix]], can also be applied using matrix operations to block tridiagonal matrices (see also [[Block LU decomposition]]), as in the sketch below.
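
A minimal sketch of such a blockwise Thomas solve is given below. It is illustrative only: <code>block_thomas</code> is not a library routine, no pivoting is performed, and the diagonal blocks produced during elimination are assumed to be invertible.

<syntaxhighlight lang="python">
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block tridiagonal system by blockwise forward elimination
    and back substitution (the Thomas algorithm applied to blocks).

    A[i], B[i], C[i] are the lower-, main- and upper-diagonal blocks of
    block row i (A[0] and C[-1] are unused); d[i] is the right-hand side.
    """
    n = len(B)
    Bp, dp = [None] * n, [None] * n
    Bp[0], dp[0] = B[0], d[0]
    for i in range(1, n):                        # forward elimination
        m = A[i] @ np.linalg.inv(Bp[i - 1])
        Bp[i] = B[i] - m @ C[i - 1]
        dp[i] = d[i] - m @ dp[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return np.concatenate(x)

# Check against a dense solve on the assembled block tridiagonal matrix.
rng = np.random.default_rng(2)
k, n = 2, 4                                      # four blocks of size 2x2
B = [rng.standard_normal((k, k)) + 5 * np.eye(k) for _ in range(n)]
A = [None] + [rng.standard_normal((k, k)) for _ in range(n - 1)]
C = [rng.standard_normal((k, k)) for _ in range(n - 1)] + [None]
d = [rng.standard_normal(k) for _ in range(n)]

M = np.zeros((k * n, k * n))
for i in range(n):
    M[i*k:(i+1)*k, i*k:(i+1)*k] = B[i]
    if i > 0:
        M[i*k:(i+1)*k, (i-1)*k:i*k] = A[i]
    if i < n - 1:
        M[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = C[i]

assert np.allclose(block_thomas(A, B, C, d), np.linalg.solve(M, np.concatenate(d)))
</syntaxhighlight>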

==Block Toeplitz matrices==
A '''block Toeplitz matrix''' is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix, just as a [[Toeplitz matrix]] has elements repeated down its diagonals. The individual blocks need not themselves be Toeplitz matrices.

A block Toeplitz matrix '''A''' has the form

:<math>
\mathbf{A} = \begin{bmatrix}
\mathbf{A}_{(1,1)} & \mathbf{A}_{(1,2)} & & & \cdots & \mathbf{A}_{(1,n-1)} & \mathbf{A}_{(1,n)} \\
\mathbf{A}_{(2,1)} & \mathbf{A}_{(1,1)} & \mathbf{A}_{(1,2)} & & & & \mathbf{A}_{(1,n-1)} \\
& \ddots & \ddots & \ddots & & & \vdots \\
& & \mathbf{A}_{(2,1)} & \mathbf{A}_{(1,1)} & \mathbf{A}_{(1,2)} & & \\
\vdots & & & \ddots & \ddots & \ddots & \\
\mathbf{A}_{(n-1,1)} & & & & \mathbf{A}_{(2,1)} & \mathbf{A}_{(1,1)} & \mathbf{A}_{(1,2)} \\
\mathbf{A}_{(n,1)} & \mathbf{A}_{(n-1,1)} & \cdots & & & \mathbf{A}_{(2,1)} & \mathbf{A}_{(1,1)}
\end{bmatrix}.
</math>
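
Such a matrix is determined by one block per block diagonal. The following sketch is illustrative only; <code>block_toeplitz</code> is a hypothetical helper that assembles the matrix from blocks indexed by their diagonal offset:

<syntaxhighlight lang="python">
import numpy as np

def block_toeplitz(blocks, n):
    """Assemble an n x n block Toeplitz matrix; blocks[d] is the block placed
    at every block position (i, j) with i - j == d."""
    return np.block([[blocks[i - j] for j in range(n)] for i in range(n)])

k, n = 2, 3
rng = np.random.default_rng(3)
blocks = {d: rng.standard_normal((k, k)) for d in range(-(n - 1), n)}
T = block_toeplitz(blocks, n)                  # a (k*n) x (k*n) matrix

# Every block diagonal repeats the same k x k block, e.g. the first subdiagonal:
assert np.array_equal(T[2:4, 0:2], T[4:6, 2:4])
</syntaxhighlight>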

==Direct sum==
For arbitrary matrices '''A''' (of size ''m'' × ''n'') and '''B''' (of size ''p'' × ''q''), the '''direct sum''' of '''A''' and '''B''', denoted by '''A''' <math>\oplus</math> '''B''', is defined as

:<math>
\mathbf{A} \oplus \mathbf{B} =
\begin{bmatrix}
a_{11} & \cdots & a_{1n} & 0 & \cdots & 0 \\
\vdots & \cdots & \vdots & \vdots & \cdots & \vdots \\
a_{m 1} & \cdots & a_{mn} & 0 & \cdots & 0 \\
0 & \cdots & 0 & b_{11} & \cdots & b_{1q} \\
\vdots & \cdots & \vdots & \vdots & \cdots & \vdots \\
0 & \cdots & 0 & b_{p1} & \cdots & b_{pq}
\end{bmatrix}.
</math>

For instance,

:<math>
\begin{bmatrix}
1 & 3 & 2 \\
2 & 3 & 1
\end{bmatrix}
\oplus
\begin{bmatrix}
1 & 6 \\
0 & 1
\end{bmatrix}
=
\begin{bmatrix}
1 & 3 & 2 & 0 & 0 \\
2 & 3 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 6 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}.
</math>
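
The same computation can be carried out with SciPy's <code>block_diag</code>, which places its arguments along the diagonal and fills the remaining entries with zeros (illustrative sketch):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1, 3, 2],
              [2, 3, 1]])
B = np.array([[1, 6],
              [0, 1]])

print(block_diag(A, B))
# [[1 3 2 0 0]
#  [2 3 1 0 0]
#  [0 0 0 1 6]
#  [0 0 0 0 1]]
</syntaxhighlight>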

This operation generalizes naturally to arbitrarily dimensioned arrays (provided that '''A''' and '''B''' have the same number of dimensions).

Note that any element in the [[direct sum of vector spaces|direct sum]] of two [[vector space]]s of matrices can be represented as a direct sum of two matrices.

==Direct product==
{{main|Kronecker product}}

==Application==
In [[linear algebra]] terms, the use of a block matrix corresponds to having a [[linear mapping]] thought of in terms of corresponding 'bunches' of [[basis vector]]s. That again matches the idea of having distinguished direct sum decompositions of the [[domain (mathematics)|domain]] and [[range (mathematics)|range]]. It is always particularly significant if a block is the zero matrix; that carries the information that a summand maps into a sub-sum.

Given the interpretation ''via'' linear mappings and direct sums, there is a special type of block matrix that occurs for square matrices (the case ''m'' = ''n''). For those we can assume an interpretation as an [[endomorphism]] of an ''n''-dimensional space ''V''; the block structure in which the bunching of rows and columns is the same is of importance because it corresponds to having a single direct sum decomposition on ''V'' (rather than two). In that case, for example, the [[diagonal]] blocks in the obvious sense are all square. This type of structure is required to describe the [[Jordan normal form]].

Block matrix techniques are used to simplify calculations with matrices and column-row expansions, and they appear in many [[computer science]] applications, including [[VLSI]] chip design. Examples are the [[Strassen algorithm]] for fast [[matrix multiplication]] and the [[Hamming(7,4)]] encoding for error detection and recovery in data transmissions.

==References==
{{Reflist}}

*{{Cite web |last=Strang |first=Gilbert |authorlink=Gilbert Strang |url=http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-3-multiplication-and-inverse-matrices |title=Lecture 3: Multiplication and inverse matrices |publisher=MIT OpenCourseWare |at=18:30–21:10 |date=1999}}

{{Linear algebra}}

{{DEFAULTSORT:Block Matrix}}
[[Category:Matrices]]
[[Category:Sparse matrices]]