  {{multiple image
  | align    = right
  | footer    =
  | width1    = 290
  | image1    = 3d basis transformation.svg
  | caption1  = A [[linear combination]] of one basis set of vectors (purple) obtains new vectors (red). If they are [[linearly independent]], these form a new basis set. The linear combinations relating the first set to the other extend to a [[linear transformation]], called the change of basis.
  | width2    = 122
  | image2    = 3d two bases same vector.svg
  | caption2  = A vector can be represented in two different bases (purple and red arrows).
  }}
 
In [[linear algebra]], a [[basis (linear algebra)|basis]] for a [[vector space]] of [[dimension (linear algebra)|dimension]] ''n'' is a sequence of ''n'' vectors {{nowrap|(&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>)}} with the property that every vector in the space can be expressed uniquely as a [[linear combination]] of the basis vectors. The [[Transformation matrix|matrix representations]] of [[linear transformation|operators]] are also determined by the chosen basis. Since it is often desirable to work with more than one basis for a vector space, it is of fundamental importance in linear algebra to be able to easily transform coordinate-wise representations of vectors and operators taken with respect to one basis to their equivalent representations with respect to another basis. Such a transformation is called a '''change of basis'''.
 
Although the terminology of vector spaces is used below and the symbol ''R'' can be taken to mean the [[field (mathematics)|field]] of [[real number]]s, the results discussed hold whenever ''R'' is a [[commutative ring]] and ''vector space'' is everywhere replaced with ''[[free module|free]] [[module (mathematics)|R-module]]''.
 
== Preliminary notions ==
The [[standard basis]] for ''R<sup>n</sup>'' is the ordered sequence {{nowrap|('''e'''<sub>1</sub>, …, '''e'''<sub>''n''</sub>)}}, where '''e'''<sub>''j''</sub> is the element of ''R<sup>n</sup>'' with 1 in the ''j''th place and 0s elsewhere.
 
If {{nowrap|''T'' : ''R<sup>n</sup>'' → ''R<sup>m</sup>''}} is a [[linear transformation]], the {{nowrap|''m'' × ''n''}} '''[[matrix (mathematics)|matrix]] of''' ''T'' is the matrix '''t''' whose ''j''th column is ''T''('''e'''<sub>''j''</sub>) for {{nowrap|1=''j'' = 1, …, ''n''}}.  In this case we have {{nowrap|1=''T''('''x''') = '''tx'''}} for all '''x''' in ''R<sup>n</sup>'', where we regard '''x''' as a column vector and the multiplication on the right side is [[matrix multiplication]]. It is a basic fact in linear algebra that the vector space {{nowrap|Hom(''R<sup>n</sup>'', ''R<sup>m</sup>'')}} of all linear transformations from ''R<sup>n</sup>'' to ''R<sup>m</sup>'' is naturally [[isomorphism|isomorphic]] to the space {{nowrap|''R''<sup>''m'' × ''n''</sup>}} of {{nowrap|''m'' × ''n''}} matrices over ''R''; that is, a linear transformation {{nowrap|''T'' : ''R<sup>n</sup>'' → ''R<sup>m</sup>''}} is completely determined by its matrix '''t'''.
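As a small numeric sketch of this fact (the map ''T'' below is a made-up example, not from the text above), the matrix of a linear map can be assembled column by column from the images of the standard basis vectors:

```python
import numpy as np

# A hypothetical linear map T : R^3 -> R^2, T(x, y, z) = (x + 2z, 3y - z).
def T(x):
    return np.array([x[0] + 2 * x[2], 3 * x[1] - x[2]])

# The j-th column of the matrix t is T(e_j).
e = np.eye(3)
t = np.column_stack([T(e[:, j]) for j in range(3)])

# T(x) = t @ x for every x in R^3.
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(T(x), t @ x)
```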
 
We will also make use of the following simple observation.
 
'''Theorem''' Let ''V'' and ''W'' be vector spaces, let {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}<nowiki/>}} be a basis for ''V'', and let {{nowrap|{&gamma;<sub>1</sub>, …, &gamma;<sub>''n''</sub>}<nowiki/>}} be any ''n'' vectors in ''W''.  Then there exists a unique linear transformation {{nowrap|''T'' : ''V'' → ''W''}} with {{nowrap|1=''T''(&alpha;<sub>''j''</sub>) = &gamma;<sub>''j''</sub>}} for {{nowrap|1=''j'' = 1, …, ''n''}}.
 
This unique ''T'' is defined by {{nowrap|1=''T''(''x''<sub>1</sub>&alpha;<sub>1</sub> + … + ''x<sub>n</sub>''&alpha;<sub>''n''</sub>) = ''x''<sub>1</sub>&gamma;<sub>1</sub> + … + ''x<sub>n</sub>''&gamma;<sub>''n''</sub>}}.  Of course, if {{nowrap|{&gamma;<sub>1</sub>, …, &gamma;<sub>''n''</sub>}<nowiki/>}} happens to be a basis for ''W'', then ''T'' is [[bijective]] as well as linear; in other words, ''T'' is an [[isomorphism]].  If in this case we also have {{nowrap|1=''W'' = ''V''}}, then ''T'' is said to be an [[automorphism]].
 
Now let ''V'' be a vector space over ''R'' and suppose {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}<nowiki/>}} is a basis for ''V''.  By definition, if &xi; is a vector in ''V'' then {{nowrap|1=&xi; = ''x''<sub>1</sub>&alpha;<sub>1</sub> + … + ''x<sub>n</sub>''&alpha;<sub>''n''</sub>}} for a unique choice of [[scalar (mathematics)|scalars]] {{nowrap|''x''<sub>1</sub>, …, ''x<sub>n</sub>''}} in ''R'' called the '''coordinates of''' &xi; '''relative to the ordered basis''' {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}.}}  The vector {{nowrap|1='''x''' = (''x''<sub>1</sub>, …, ''x<sub>n</sub>'')}} in ''R<sup>n</sup>'' is called the '''coordinate tuple of''' &xi; (relative to this basis).  The unique linear map {{nowrap|&phi; : ''R<sup>n</sup>'' → ''V''}} with {{nowrap|1=&phi;('''e'''<sub>''j''</sub>) = &alpha;<sub>''j''</sub>}} for {{nowrap|1=''j'' = 1, …, ''n''}} is called the '''coordinate isomorphism''' for ''V'' and the basis {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}.}}  Thus {{nowrap|1=&phi;('''x''') = &xi;}} [[if and only if]]  {{nowrap|1=&xi; = ''x''<sub>1</sub>&alpha;<sub>1</sub> + … + ''x<sub>n</sub>''&alpha;<sub>''n''</sub>}}.
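Concretely, when ''V'' = ''R''<sup>2</sup> the coordinate tuple of a vector &xi; relative to a basis can be found by solving a linear system whose coefficient matrix has the basis vectors as columns (the basis below is a hypothetical example):

```python
import numpy as np

# Hypothetical basis of R^2, as columns of A: alpha_1 = (1, 0), alpha_2 = (1, 2).
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

xi = np.array([3.0, 4.0])

# Coordinates x of xi relative to (alpha_1, alpha_2): solve A x = xi.
x = np.linalg.solve(A, xi)

# Check: xi = x_1 * alpha_1 + x_2 * alpha_2.
assert np.allclose(A @ x, xi)
```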
 
===Matrix of a set of vectors===
 
A set of vectors can be represented by a matrix whose columns are the components of each vector of the set. Since a basis is a set of vectors, a basis can be given by a matrix of this kind. It will be shown below that the change of basis of any object of the space is related to this matrix. For example, vectors change with its inverse (and are therefore called contravariant objects).
 
== Change of coordinates of a vector==
First we examine the question of how the coordinates of a vector &xi;, in the vector space ''V'', change when we select another basis.
 
===Two dimensions===
Given a matrix ''M'' whose columns are the vectors of the new basis of the space (the new basis matrix), the new coordinates for a column vector ''v'' are given by the matrix product ''M''<sup>−1</sup>''v''. For this reason, ordinary vectors are said to be [[Covariance and contravariance of vectors|contravariant]] objects.
 
Any finite set of vectors can be represented by a matrix whose columns are the coordinates of the given vectors. As an example in dimension 2, consider the pair of vectors obtained by rotating the standard basis counterclockwise by 45°. The matrix whose columns are the coordinates of these vectors is
 
:<math>M=\begin{pmatrix}
1/\sqrt{2} & -1/\sqrt{2} \\
1/\sqrt{2} & 1/\sqrt{2}
\end{pmatrix}</math>
 
To express any vector of the space in this new basis, we only need to left-multiply its coordinate tuple by the inverse of this matrix.
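A numeric check of this 45° example (the test vector is arbitrary): since ''M'' is a rotation matrix, its inverse is its transpose, but the code below uses a linear solve, which works for any invertible basis matrix.

```python
import numpy as np

# Columns of M: the standard basis rotated counterclockwise by 45 degrees.
s = 1 / np.sqrt(2)
M = np.array([[s, -s],
              [s,  s]])

v = np.array([1.0, 0.0])       # coordinates in the standard basis
v_new = np.linalg.solve(M, v)  # same as M^{-1} v, but numerically safer

# Both coordinate tuples describe the same geometric vector.
assert np.allclose(M @ v_new, v)
```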
 
===Three dimensions===
 
For example, let a new basis be given by its [[Euler angles]]. The matrix of the basis has the components of each basis vector as its columns. Therefore, this matrix is (see the [[Euler angles]] article):
 
:<math>[\mathbf{R}] = \begin{bmatrix}
\mathrm{c}_\alpha \, \mathrm{c}_\gamma - \mathrm{s}_\alpha \, \mathrm{c}_\beta \, \mathrm{s}_\gamma &
-\mathrm{c}_\alpha \, \mathrm{s}_\gamma - \mathrm{s}_\alpha \, \mathrm{c}_\beta \, \mathrm{c}_\gamma & 
\mathrm{s}_\beta  \, \mathrm{s}_\alpha \\
\mathrm{s}_\alpha \, \mathrm{c}_\gamma + \mathrm{c}_\alpha \, \mathrm{c}_\beta \, \mathrm{s}_\gamma &
-\mathrm{s}_\alpha \, \mathrm{s}_\gamma + \mathrm{c}_\alpha \, \mathrm{c}_\beta \, \mathrm{c}_\gamma & 
-\mathrm{s}_\beta  \, \mathrm{c}_\alpha  \\
\mathrm{s}_\beta  \, \mathrm{s}_\gamma &
\mathrm{s}_\beta  \, \mathrm{c}_\gamma & 
\mathrm{c}_\beta
\end{bmatrix}
.</math>
 
Again, any vector of the space can be changed to this new basis by left-multiplying its components by the inverse of this matrix.
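This can be checked numerically. The angle values below are arbitrary; since the Euler-angle matrix '''R''' is orthogonal, its inverse is simply its transpose:

```python
import numpy as np

# Z-X-Z Euler angles (hypothetical values, in radians).
a, b, g = 0.3, 0.4, 0.5
ca, sa = np.cos(a), np.sin(a)
cb, sb = np.cos(b), np.sin(b)
cg, sg = np.cos(g), np.sin(g)

# Columns of R are the new basis vectors (same formula as in the text).
R = np.array([
    [ca*cg - sa*cb*sg, -ca*sg - sa*cb*cg,  sb*sa],
    [sa*cg + ca*cb*sg, -sa*sg + ca*cb*cg, -sb*ca],
    [sb*sg,             sb*cg,             cb   ],
])

# R is orthogonal, so R^{-1} = R^T.
assert np.allclose(R.T @ R, np.eye(3))

v = np.array([1.0, 2.0, 3.0])
v_new = R.T @ v           # components of v in the Euler-angle basis
assert np.allclose(R @ v_new, v)
```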
 
===General case===
Suppose {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}<nowiki/>}} and {{nowrap|{&alpha;′<sub>1</sub>, …, &alpha;′<sub>''n''</sub>}<nowiki/>}} are two ordered bases for ''V''. Let &phi;<sub>1</sub> and &phi;<sub>2</sub> be the corresponding coordinate isomorphisms ([[linear map]]s) from ''R<sup>n</sup>'' to ''V'', i.e. {{nowrap|1=&phi;<sub>1</sub>('''e'''<sub>''j''</sub>) = &alpha;<sub>''j''</sub>}} and {{nowrap|1=&phi;<sub>2</sub>('''e'''<sub>''j''</sub>) = &alpha;′<sub>''j''</sub>}} for {{nowrap|1=''j'' = 1, …, ''n''}}.
 
If {{nowrap|1='''x''' = (''x''<sub>1</sub>, …, ''x''<sub>n</sub>)}} is the coordinate ''n''-tuple of &xi; with respect to the first basis, so that {{nowrap|1=&xi; = &phi;<sub>1</sub>('''x''')}}, then the coordinate tuple of &xi; with respect to the second basis is {{nowrap|1=&phi;<sub>2</sub><sup>−1</sup>(&xi;) = &phi;<sub>2</sub><sup>−1</sup>(&phi;<sub>1</sub>('''x'''))}}. Now the map {{nowrap|&phi;<sub>2</sub><sup>−1</sup> ∘ &phi;<sub>1</sub>}} is an automorphism on ''R<sup>n</sup>'' and therefore has a matrix '''p'''. Moreover, the ''j''th column of '''p''' is  {{nowrap|1=&phi;<sub>2</sub><sup>−1</sup> ∘ &phi;<sub>1</sub>('''e'''<sub>j</sub>) = &phi;<sub>2</sub><sup>−1</sup>(&alpha;<sub>j</sub>)}}, that is, the coordinate ''n''-tuple of &alpha;<sub>''j''</sub> with respect to the second basis {{nowrap|{&alpha;′<sub>1</sub>, …, &alpha;′<sub>''n''</sub>}.}} Thus {{nowrap|1='''y''' = &phi;<sub>2</sub><sup>−1</sup>(&phi;<sub>1</sub>('''x''')) = '''px'''}} is the coordinate ''n''-tuple of &xi; with respect to the basis {{nowrap|{&alpha;′<sub>1</sub>, …, &alpha;′<sub>''n''</sub>}.}}
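Working in coordinates, a sketch of this construction (the two bases below are hypothetical examples, stored as columns of matrices):

```python
import numpy as np

# Two hypothetical bases of R^2, as columns of A1 and A2.
A1 = np.array([[1.0, 1.0],
               [0.0, 1.0]])
A2 = np.array([[2.0, 0.0],
               [1.0, 1.0]])

# Matrix p of phi_2^{-1} o phi_1 is A2^{-1} A1: its columns are the
# coordinate tuples of the first basis vectors relative to the second basis.
p = np.linalg.solve(A2, A1)

# A vector xi with coordinates x in the first basis...
x = np.array([3.0, 4.0])
xi = A1 @ x
# ...has coordinates y = p x in the second basis.
y = p @ x
assert np.allclose(A2 @ y, xi)
```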
 
== The matrix of a linear transformation ==
Now suppose {{nowrap|''T'' : ''V'' → ''W''}} is a linear transformation, {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}<nowiki/>}} is a basis for ''V'' and {{nowrap|{&beta;<sub>1</sub>, …, &beta;<sub>''m''</sub>}<nowiki/>}} is a basis for ''W''.  Let &phi; and &psi; be the coordinate isomorphisms for ''V'' and ''W'', respectively, relative to the given bases.  Then the map {{nowrap|1=''T''<sub>1</sub> = &psi;<sup>−1</sup> ∘ ''T'' ∘ &phi;}} is a linear transformation from ''R<sup>n</sup>'' to ''R<sup>m</sup>'', and therefore has a matrix '''t'''; its ''j''th column is {{nowrap|&psi;<sup>−1</sup>(''T''(&alpha;<sub>''j''</sub>))}} for {{nowrap|1=''j'' = 1, …, ''n''}}.  This matrix is called the matrix of ''T'' with respect to the ordered bases {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}<nowiki/>}} and {{nowrap|{&beta;<sub>1</sub>, …, &beta;<sub>''m''</sub>}.}}  If {{nowrap|1=&eta; = ''T''(&xi;)}} and '''y''' and '''x''' are the coordinate tuples of &eta; and &xi;, then {{nowrap|1='''y''' = &psi;<sup>−1</sup>(T(&phi;('''x'''))) = '''tx'''}}.  Conversely, if &xi; is in ''V'' and {{nowrap|1='''x''' = &phi;<sup>−1</sup>(&xi;)}} is the coordinate tuple of &xi; with respect to {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>},}} and we set {{nowrap|1='''y''' = '''tx'''}} and {{nowrap|1=&eta; = &psi;('''y''')}}, then {{nowrap|1=&eta; = &psi;(''T''<sub>1</sub>('''x''')) = ''T''(&xi;)}}.  That is, if &xi; is in ''V'' and &eta; is in ''W'' and '''x''' and '''y''' are their coordinate tuples, then {{nowrap|1='''y''' = '''tx'''}} [[if and only if]] {{nowrap|1=&eta; = ''T''(&xi;)}}.
 
'''Theorem''' Suppose ''U'', ''V'' and ''W'' are vector spaces of finite dimension and an ordered basis is chosen for each.  If {{nowrap|''T'' : ''U'' → ''V''}} and {{nowrap|''S'' : ''V'' → ''W''}} are linear transformations with matrices '''s''' and '''t''', then the matrix of the linear transformation {{nowrap|''S'' ∘ ''T'' : ''U'' → ''W''}} (with respect to the given bases) is '''st'''.
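A quick numeric illustration of the theorem for the standard bases (the matrices '''s''' and '''t''' are hypothetical examples):

```python
import numpy as np

# Hypothetical matrices of T : R^2 -> R^3 and S : R^3 -> R^2,
# relative to the standard bases.
t = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])
s = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])

# The matrix of the composition S o T is the product s t.
st = s @ t
x = np.array([1.0, 2.0])
assert np.allclose(st @ x, s @ (t @ x))
```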
 
=== Change of basis ===
Now we ask what happens to the matrix of {{nowrap|''T'' : ''V'' → ''W''}} when we change bases in ''V'' and ''W''.  Let {{nowrap|{&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}<nowiki/>}} and {{nowrap|{&beta;<sub>1</sub>, …, &beta;<sub>''m''</sub>}<nowiki/>}} be ordered bases for ''V'' and ''W'' respectively, and suppose we are given a second pair of bases {{nowrap|{&alpha;′<sub>1</sub>, …, &alpha;′<sub>''n''</sub>}<nowiki/>}} and {{nowrap|{&beta;′<sub>1</sub>, …, &beta;′<sub>''m''</sub>}.}}  Let &phi;<sub>1</sub> and &phi;<sub>2</sub> be the coordinate isomorphisms taking the usual basis in ''R<sup>n</sup>'' to the first and second bases for ''V'', and let &psi;<sub>1</sub> and &psi;<sub>2</sub> be the isomorphisms taking the usual basis in ''R<sup>m</sup>'' to the first and second bases for ''W''.
 
Let {{nowrap|1=''T''<sub>1</sub> = &psi;<sub>1</sub><sup>−1</sup> ∘ ''T'' ∘ &phi;<sub>1</sub>}}, and {{nowrap|1=''T''<sub>2</sub> = &psi;<sub>2</sub><sup>−1</sup> ∘ ''T'' ∘ &phi;<sub>2</sub>}} (both maps taking ''R<sup>n</sup>'' to ''R<sup>m</sup>''), and let '''t'''<sub>1</sub> and '''t'''<sub>2</sub> be their respective matrices.  Let '''p''' and '''q''' be the matrices of the change-of-coordinates automorphisms {{nowrap|&phi;<sub>2</sub><sup>−1</sup> ∘ &phi;<sub>1</sub>}} on ''R<sup>n</sup>'' and  {{nowrap|&psi;<sub>2</sub><sup>−1</sup> ∘ &psi;<sub>1</sub>}} on ''R<sup>m</sup>''.
 
The relationships of these various maps to one another are illustrated in the following [[commutative diagram]].
 
<!--insert change-of-basis diagram here-->
 
Since we have {{nowrap|1=''T''<sub>2</sub> = &psi;<sub>2</sub><sup>−1</sup> ∘ ''T'' ∘ &phi;<sub>2</sub> = (&psi;<sub>2</sub><sup>−1</sup> ∘ &psi;<sub>1</sub>) ∘ ''T''<sub>1</sub> ∘ (&phi;<sub>1</sub><sup>−1</sup> ∘ &phi;<sub>2</sub>)}}, and since composition of linear maps corresponds to matrix multiplication, it follows that
: '''t'''<sub>2</sub> = '''q''' '''t'''<sub>1</sub> '''p'''<sup>−1</sup>.
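A numeric sanity check of this formula, taking the first bases to be the standard ones and picking random new bases (all matrices below are hypothetical; a random Gaussian matrix is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# t1: matrix of a map R^3 -> R^2 in the standard bases.
t1 = rng.standard_normal((2, 3))
A = rng.standard_normal((3, 3))  # new basis of R^3, as columns
B = rng.standard_normal((2, 2))  # new basis of R^2, as columns

# Change-of-coordinates matrices: p on the domain, q on the codomain.
p = np.linalg.inv(A)
q = np.linalg.inv(B)

# Matrix of the same map relative to the new bases.
t2 = q @ t1 @ np.linalg.inv(p)

# Check on a sample vector: feeding in new-basis coordinates p x
# must produce the new-basis coordinates q (t1 x) of the image.
x = rng.standard_normal(3)
assert np.allclose(t2 @ (p @ x), q @ (t1 @ x))
```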
 
Because the change-of-basis formula involves the basis matrix once and its inverse once, such objects are said to be [[Covariance and contravariance of vectors|1-co, 1-contravariant]].
 
== The matrix of an endomorphism ==
An important case of the matrix of a linear transformation is that of an [[endomorphism]], that is, a linear map from a vector space ''V'' to itself: the case {{nowrap|1=''W'' = ''V''}}.
We can naturally take {{nowrap|1={&beta;<sub>1</sub>, …, &beta;<sub>''n''</sub>} = {&alpha;<sub>1</sub>, …, &alpha;<sub>''n''</sub>}<nowiki/>}} and {{nowrap|1={&beta;′<sub>1</sub>, …, &beta;′<sub>''n''</sub>} = {&alpha;′<sub>1</sub>, …, &alpha;′<sub>''n''</sub>}.}}  The matrix of the linear map ''T'' is necessarily square.
 
===Change of basis===
 
We apply the same change of basis, so that {{nowrap|1='''q''' = '''p'''}} and the change of basis formula becomes
: '''t'''<sub>2</sub> = '''p''' '''t'''<sub>1</sub> '''p'''<sup>−1</sup>.
 
In this situation the [[invertible matrix|invertible]] matrix '''p''' is called a change-of-basis matrix for the vector space ''V'', and the equation above says that the matrices '''t'''<sub>1</sub> and '''t'''<sub>2</sub> are [[similar (linear algebra)|similar]].
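Similar matrices represent the same endomorphism, so basis-independent quantities such as the trace, determinant and eigenvalues agree. A short check (the matrices below are hypothetical examples):

```python
import numpy as np

# Hypothetical endomorphism of R^2 in the standard basis.
t1 = np.array([[2.0, 1.0],
               [0.0, 3.0]])

# New basis as columns of A; change-of-basis matrix p = A^{-1}.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
p = np.linalg.inv(A)

# Matrix of the same endomorphism in the new basis.
t2 = p @ t1 @ np.linalg.inv(p)

# Similar matrices share trace, determinant and eigenvalues.
assert np.isclose(np.trace(t2), np.trace(t1))
assert np.isclose(np.linalg.det(t2), np.linalg.det(t1))
```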
 
== The matrix of a bilinear form ==
A ''[[bilinear form]]'' on a vector space ''V'' over a [[field (mathematics)|field]] ''R'' is a mapping {{nowrap|''V'' × ''V'' → ''R''}} which is [[linear transformation|linear]] in both arguments. That is, {{nowrap|''B'' : ''V'' × ''V'' → ''R''}} is bilinear if the maps
:<math>v \mapsto B(v, w)</math>
:<math>v \mapsto B(w, v)</math>
are linear for each ''w'' in ''V''. This definition applies equally well to [[module (mathematics)|module]]s over a [[commutative ring]] with linear maps being [[module homomorphism]]s.
 
The [[Gram matrix]] ''G'' attached to a basis <math>\alpha_1,\dots, \alpha_n</math> is defined by
 
:<math>G_{i,j} = B(\alpha_i,\alpha_j) .</math>
 
If <math>v = \sum_i x_i \alpha_i</math> and <math>w = \sum_i y_i \alpha_i</math> are the expressions of vectors ''v'', ''w'' with respect to this basis, then the bilinear form is given by
 
:<math>B(v,w) = v^\mathsf{T} G w .</math>
 
The matrix will be [[symmetric (matrix)|symmetric]] if the bilinear form ''B'' is a [[symmetric bilinear form]].
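A numeric sketch of the Gram matrix construction (the bilinear form ''B'' and the basis below are hypothetical examples; here '''x''' and '''y''' denote the coordinate tuples of ''v'' and ''w''):

```python
import numpy as np

# Hypothetical symmetric bilinear form on R^2:
# B(u, v) = u1*v1 + 2*u2*v2 + u1*v2 + u2*v1.
def B(u, v):
    return u[0]*v[0] + 2*u[1]*v[1] + u[0]*v[1] + u[1]*v[0]

# Hypothetical basis of R^2, as columns of A.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
alphas = [A[:, 0], A[:, 1]]

# Gram matrix G_ij = B(alpha_i, alpha_j).
G = np.array([[B(ai, aj) for aj in alphas] for ai in alphas])

# For coordinate tuples x, y of v, w: B(v, w) = x^T G y.
x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
v, w = A @ x, A @ y
assert np.isclose(x @ G @ y, B(v, w))
assert np.allclose(G, G.T)  # B is symmetric, so G is symmetric
```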
 
===Change of basis===
If ''P'' is the invertible matrix representing a change of basis from
<math>\alpha_1,\dots, \alpha_n</math> to <math>\alpha'_1,\dots, \alpha'_n</math>
then the Gram matrix transforms by the [[matrix congruence]]
 
:<math>G' = P^\mathsf{T} G P .</math>
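The congruence formula can be verified numerically: evaluating the form on a vector gives the same number whichever basis is used (the Gram matrix and change of basis below are hypothetical examples):

```python
import numpy as np

# Hypothetical Gram matrix G in the old basis, and a change of basis P
# whose columns are the new basis vectors in old coordinates.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# Gram matrix in the new basis, by matrix congruence.
G_new = P.T @ G @ P

# The value of the form is basis-independent:
x_new = np.array([2.0, 3.0])  # coordinates in the new basis
x_old = P @ x_new             # same vector in old coordinates
assert np.isclose(x_new @ G_new @ x_new, x_old @ G @ x_old)
```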
 
==Important instances==
In abstract vector space theory the change of basis concept is innocuous; it seems to add little. Yet there are cases in [[associative algebra]]s where a change of basis is sufficient to turn a caterpillar into a butterfly, figuratively speaking:
*In the [[split-complex number]] plane there is an alternative "diagonal basis". The standard hyperbola {{nowrap|1=''xx'' − ''yy'' = 1}} becomes {{nowrap|1=''xy'' = 1}} after the change of basis. Transformations of the plane that leave the hyperbolae in place correspond to each other, [[modulo (jargon)|modulo]] a change of basis. The contextual difference is profound enough to then separate [[Lorentz boost]] from [[squeeze mapping]]. A panoramic view of the literature of these mappings can be taken using the underlying change of basis.
 
*With the [[2 × 2 real matrices]] one finds the beginning of a catalogue of linear algebras due to [[Arthur Cayley]]. His associate [[James Cockle]] put forward in 1849 his algebra of ''coquaternions'' or [[split-quaternion]]s, which are the same algebra as the {{nowrap|2 × 2}} real matrices, just laid out on a different matrix basis. Once again it is the concept of change of basis that synthesizes Cayley’s matrix algebra and Cockle’s coquaternions.
 
*A change of basis turns a {{nowrap|2 × 2}} complex matrix into a [[biquaternion#Linear representation|biquaternion]].
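The first bullet can be checked with a small computation: passing to the diagonal basis (taken here as the hypothetical vectors (1, 1) and (1, −1)) removes the squared terms from the form ''x''<sup>2</sup> − ''y''<sup>2</sup>, leaving a multiple of the cross term, so the hyperbola {{nowrap|1=''xx'' − ''yy'' = 1}} becomes a hyperbola of the form {{nowrap|1=''xy'' = const}}.

```python
import numpy as np

# Gram matrix of the quadratic form x^2 - y^2 in the standard basis.
G = np.array([[1.0,  0.0],
              [0.0, -1.0]])

# Diagonal basis of the split-complex plane: (1, 1) and (1, -1) as columns.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])

G_new = P.T @ G @ P

# In the new coordinates (u, v) there is no u^2 or v^2 term, only a
# cross term: the form is a multiple of u*v.
assert G_new[0, 0] == 0 and G_new[1, 1] == 0
assert G_new[0, 1] != 0
```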
 
== See also ==
* [[Coordinate vector]]
* [[Integral transform]], the continuous analogue of change of basis.
* [[Active and passive transformation]]
 
==External links==
* [http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-31-change-of-basis-image-compression/ MIT Linear Algebra Lecture on Change of Bases], from MIT OpenCourseWare
 
*[http://www.youtube.com/watch?v=1j5WnqwMdCk Khan Academy Lecture on Change of Basis], from Khan Academy
[[Category:Linear algebra]]
[[Category:Matrix theory]]

Revision as of 20:18, 23 February 2014
