# Talk:Transpose

Another note -- should mention that the generalization of the notion of transpose to complex matrices is to make element (i,j) equal to the conjugate of the (j,i) element. I agree with Chris W that the notation A' should be mentioned as an alternative. Wile E. Heresiarch 08:39, 11 Mar 2004 (UTC)
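The conjugate-transpose generalization described above is easy to check numerically; a small sketch with NumPy (the matrix `A` is arbitrary example data):

```python
import numpy as np

# Conjugate transpose: element (i, j) of A^H is the conjugate of element (j, i) of A.
A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

AH = A.conj().T   # conjugate transpose

# (i, j) entry of AH equals the conjugate of the (j, i) entry of A:
assert AH[0, 1] == np.conj(A[1, 0])
assert AH[1, 0] == np.conj(A[0, 1])
```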

## Differentiation of transposed matrices

I was trying to understand what the derivative of a transposed matrix is with respect to that matrix, i.e. something like ∂*A*^{T}/∂*A*, where both are matrices.

yanneman 13:26, 20 November 2006 (UTC)
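One common answer (a sketch, assuming element-wise differentiation) is that the derivative is the fourth-order tensor d(*A*^{T})_{ij}/d*A*_{kl} = δ_{il} δ_{jk}, which can be verified by finite differences:

```python
import numpy as np

# Finite-difference check that d(A^T)_{ij} / dA_{kl} = delta_{il} * delta_{jk}.
def dtranspose(A, eps=1e-7):
    n, m = A.shape
    D = np.zeros((m, n, n, m))            # index order (i, j, k, l)
    for k in range(n):
        for l in range(m):
            E = np.zeros_like(A)
            E[k, l] = eps                 # perturb one entry of A
            D[:, :, k, l] = ((A + E).T - A.T) / eps
    return D

A = np.arange(6.0).reshape(2, 3)
D = dtranspose(A)

# Matches the Kronecker-delta formula:
expected = np.einsum('il,jk->ijkl', np.eye(3), np.eye(2))
assert np.allclose(D, expected)
```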

## Transpose of linear maps

In the **Transpose of linear maps** section, the article had read:

- If *f* : V → W is a linear map between vector spaces V and W with dual spaces W* and V*, we define the *transpose* of *f* to be the linear map ^{t}*f* : W* → V* with

An anonymous user changed that to

- If *f* : V → W is a linear map between vector spaces V and W with dual spaces W* and V*, we define the *transpose* of *f* to be the linear map *f*^{T} : W* → V* with

Now, I think that's confusing (it's *f* that's transposed, not *f*(*φ*)), and it would be better to write

But even that seems ambiguous; is this "transpose *f*" or "*f* to the T power"? But I'm unfamiliar with this notation, and I certainly don't have the same objection to ^{T} used with matrices. So, are there any experts who could weigh in with the most common usage in this area? (I'm not proposing to change the ^{T} notation used in the earlier sections, just to keep the ^{t} notation in this one section.) --Quuxplusone 21:14, 8 August 2005 (UTC)

- Putting the T in front sounds like a worthy experiment. In general, though, you have to realise that mathematical notation is never anywhere near as rigorous as people would like to think. A classic example is that the superscript −1 designates both the inverse and the reciprocal. Cesiumfrog (talk) 04:52, 11 May 2010 (UTC)

- What about using ? —Ben FrantzDale (talk) 12:40, 11 May 2010 (UTC)

## hermitian transpose?

hermitian transpose = conjugate transpose? --Moala 09:23, 20 December 2005 (UTC)

## Transpose on tensors...

I'm confused. It seems like much of linear algebra glosses over the meaning of transposition and simply uses it as a mechanism for manipulating numbers, for example, defining the norm of *v* as √(*v*^{T}*v*).

In some linear-algebra topics, however, it appears that column and row vectors have different meanings (that appear to have something to do with covariance and contravariance of vectors). The transpose of a column vector, *c*, gives you a row vector -- a vector in the dual space of *c*. I think the idea is that column vectors would be indexed with raised indices and row vectors with lowered indices with tensors.

Here's my confusion: If row vectors and column vectors are in distinct spaces (and they certainly are, in that you can't just add them), then taking the transpose of a vector isn't just some notational convenience; it is an application of a nontrivial function from the space to its dual. To do something like this in general, we can use any bilinear form, but that involves more structure than the vector space alone.

So:

- Is it correct that there are two things going on here: (1) using transpose for numerical convenience and (2) using rows versus columns for indicating co- versus contravariance?
- Isn't the conventional Euclidean metric defined with a contravariant metric tensor? Doesn't that avoid any transposition, in that both *v*s have raised indices?

Thanks. —Ben FrantzDale (talk) 05:00, 11 November 2009 (UTC)
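The two readings the question distinguishes can be made concrete in NumPy (a sketch; the vectors are arbitrary example data):

```python
import numpy as np

# Reading 1, "numerical convenience": with v a column vector, ||v||^2 = v^T v.
v = np.array([[3.0], [4.0]])          # 2x1 column vector
sq_norm = (v.T @ v).item()            # v.T is 1x2, so the product is 1x1
assert sq_norm == 25.0

# Reading 2, "dual space": v.T is a row vector, i.e. a linear functional.
# Applying it to a column vector w is evaluation of a covector on a vector:
w = np.array([[1.0], [2.0]])
evaluation = (v.T @ w).item()         # the covector v^T applied to w
assert evaluation == 11.0

# Note: v + v.T raises no error in NumPy only because of broadcasting;
# mathematically the 2x1 and 1x2 shapes live in different spaces,
# as the question observes.
```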

### As asked on the Math Reference Desk

#### Transpose and tensors

I posed a question on Talk:Transpose that didn't get any responses there. Perhaps this is a better audience, since it's a bit of an esoteric question for such an elementary topic. Here's the question again:

- I'm confused. It seems like much of linear algebra glosses over the meaning of transposition and simply uses it as a mechanism for manipulating numbers, for example, defining the norm of *v* as √(*v*^{T}*v*).

- In some linear-algebra topics, however, it appears that column and row vectors have different meanings (that appear to have something to do with covariance and contravariance of vectors). In that context, the transpose of a column vector, *c*, gives you a row vector -- a vector in the dual space of *c*. I think the idea is that column vectors would be indexed with raised indices and row vectors with lowered indices with tensors.

- Here's my confusion: If row vectors and column vectors are in distinct spaces (and they certainly are, even in elementary linear algebra, in that you can't just add a column to a row vector because they have different shapes), then taking the transpose of a vector isn't just some notational convenience; it is an application of a nontrivial function from the space to its dual. To do something like this in general, we can use any bilinear form, but that involves more structure than the vector space alone.

- So:
- Is it correct that there are two things going on here: (1) using transpose for numerical convenience and (2) using rows versus columns for indicating co- versus contravariance?
- Isn't the conventional Euclidean metric defined with a contravariant metric tensor? Doesn't that avoid any transposition, in that both *v*s have raised indices?

- Thanks. —Ben FrantzDale (talk) 14:16, 23 November 2009 (UTC)

- I guess it depends on how we define vectors. If we consider a vector as just being an *n* by *m* matrix with either *n* = 1 or *m* = 1, then transposition is just what it is with any other matrix - a map from the space of *n* by *m* matrices to the space of *m* by *n* matrices. --Tango (talk) 14:38, 23 November 2009 (UTC)
  - Sure. I'm asking because I get the sense that there are some unwritten rules going on. At one extreme is the purely mechanical notion of transpose that you describe, which I'm happy with. In that context, transpose is just used along with matrix operations to simplify the expression of some operations. At the other extreme, rows and columns correspond to co- and contravariant vectors, in which case transpose is completely non-trivial.
  - My hunch is that the co- and contravariance convention is useful for some limited cases in which all transformations are mixed tensors of type (1,1) and all (co-)vectors are either of type (0,1) or (1,0). But that usage doesn't extend to problems involving things like type-(0,2) or type-(2,0) tensors, since usual linear algebra doesn't allow for a row vector of row vectors. My hunch is that in this case, transpose is used as a kludge to allow such expressions to be represented with matrices. Does that sound right, or am I jumping to conclusions? If this is right, it could do with some explanation somewhere. —Ben FrantzDale (talk) 15:13, 23 November 2009 (UTC)

Using an orthonormal basis, the two readings coincide. That "usual linear algebra doesn't allow for a row vector of row vectors" is the reason why tensor notation is used when a row vector of row vectors is needed. Bo Jacoby (talk) 16:53, 23 November 2009 (UTC).

- Also note that there is no canonical isomorphism between V and V* if V is a plain real vector space of finite dimension >1, with no additional structure. What is of course canonical is the pairing V × V* → **R**. Fixing a basis on V is the same as fixing an isomorphism with **R**^{n}, hence produces a special isomorphism V → V*, because **R**^{n} does possess a preferred isomorphism with its dual, that is the transpose, if we represent n-vectors with columns and forms with rows. Fixing an isomorphism V → V* is the same as giving V a scalar product (check the equivalence), which is a tensor of type (0,2), that eats pairs of vectors and produces scalars. --pma (talk) 18:46, 23 November 2009 (UTC)
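The point that a scalar product fixes the isomorphism V → V* can be sketched numerically (the matrix `G` below is an arbitrary example of a symmetric positive-definite bilinear form, not anything from the discussion):

```python
import numpy as np

# A scalar product (a (0,2) tensor, here a symmetric positive-definite matrix G)
# fixes an isomorphism V -> V*: v maps to the covector  w |-> v^T G w.
G = np.array([[2.0, 0.0],
              [0.0, 1.0]])            # a non-Euclidean inner product, chosen arbitrarily
v = np.array([1.0, 3.0])

v_dual = G @ v                        # "lowering the index": the covector representing v
w = np.array([4.0, 5.0])

# The pairing of v_dual with w equals the G-inner product of v and w:
assert np.isclose(v_dual @ w, v @ G @ w)

# With G = I (the standard basis / Euclidean product), v_dual is just v itself,
# i.e. the isomorphism reduces to the plain transpose of the column vector.
assert np.allclose(np.eye(2) @ v, v)
```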

- Those are great answers! They clarify some things that have been nagging me for a long time. It is particularly helpful to think that conventional matrix notation doesn't provide notation for a row of row vectors or the like. I will probably copy the above discussion to Talk:Transpose for posterity and will probably add explanation along these lines to appropriate articles.
- I haven't worked much with complex tensors, but your use of conjugate transpose reminds me that I've also long been suspicious of its "meaning" (and simply that of complex conjugate) for the same reasons. Could you comment on that? In some sense, conjugation on a complex number is the same operation as conjugate transposition on a vector, using conjugate transpose as a mechanism to compute a squared magnitude. For a complex number, I'm not sure what would generalize to "row vector" or "column vector"... I'm not sure what I'm asking, but I feel like there's a little more that could be said connecting the above great explanations to conjugate transpose. :-) —Ben FrantzDale (talk) 19:19, 23 November 2009 (UTC)
- A complex number (just as a real number) is a 1-D vector, so rows and columns are the same thing. The modulus on **C** can be thought of as a special case of the norm on **C**^{n} (i.e. for *n* = 1). From an algebraic point of view, complex conjugation is the unique (non-trivial) automorphism on **C** that keeps **R** fixed. Such automorphisms are central to Galois theory. I'm not really sure what the importance and meaning is from a geometrical or analytic point of view... --Tango (talk) 19:41, 23 November 2009 (UTC)

Let *V* and *W* be two vector spaces, and ƒ : *V* → *W* be a linear map. Let *F* be the matrix representation of ƒ with respect to some bases {*v*_{i}} and {*w*_{j}}. I seem to recall, please do correct me if I'm wrong, that *F* : *V* → *W* and *F*^{T} : *W** → *V** where *V** and *W** are the dual spaces of *V* and *W* respectively. In this setting *v*^{T} is dual to *v*. So the quantity *v*^{T}*v* is the evaluation of the vector *v* by the covector *v*^{T}. ~~ Dr Dec (Talk) ~~ 23:26, 23 November 2009 (UTC)
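The defining property described in the preceding comment, that *F*^{T} acts on covectors so that (*F*^{T}*φ*)(*v*) = *φ*(*Fv*), can be verified numerically (a sketch; the matrix and vectors are arbitrary example data):

```python
import numpy as np

# F : R^3 -> R^2, with F^T mapping covectors on R^2 to covectors on R^3.
F = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0]])
phi = np.array([2.0, -1.0])           # a covector on W = R^2 (row-vector coefficients)
v = np.array([1.0, 0.0, 2.0])         # a vector in V = R^3

lhs = (F.T @ phi) @ v                 # F^T phi is a covector on V, evaluated at v
rhs = phi @ (F @ v)                   # phi evaluated at F v
assert np.isclose(lhs, rhs)
```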

## Orthogonal Matrices

In the "Special transpose matrices" section, the writing implies that an orthogonal matrix **G** is *defined* as one for which **G**^{T}=**G**^{-1}. Thus I was going to change the "if" in "...that is, **G** is orthogonal if..." to "iff" but I was unsure if this was really a fundamental definition. The "Orthogonal Matrix" page does the same as this one.

It seems like a decent definition of an orthogonal matrix could be a **G** such that **GG**^{T} and **G**^{T}**G** are (one or both) diagonal or something. Not necessarily that one, but it's enough to make me suspect there's a more general definition some people might use.

Hopefully someone better versed in (multi-)linear algebra literature comes along and knows if there's a more general definition. If there isn't, or if it's still fully compatible, let's change the "if" to "iff" here and possibly in the "Orthogonal Matrix" page too. --Horn.imh (talk) 19:20, 16 June 2011 (UTC)

- I'm pretty sure you are right and that it is iff. Suppose *G* is not orthogonal. Then two columns of *G* aren't orthogonal (or a column doesn't have norm one). Then *G*^{T}*G* will not be diagonal (because off-diagonal terms are inner products of columns with different columns) in the case that the columns aren't orthogonal, and it will have something other than one on the diagonal in the case that any column doesn't have norm one (because the diagonals are the inner products of columns with themselves). QED. —Ben FrantzDale (talk) 20:05, 16 June 2011 (UTC)
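The argument above can be illustrated numerically: a rotation matrix satisfies *G*^{T}*G* = *I*, and rescaling one column perturbs exactly the diagonal entry the argument predicts (the angle and scale factor are arbitrary examples):

```python
import numpy as np

# An orthogonal matrix: a 2-D rotation.
theta = 0.3
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(G.T @ G, np.eye(2))    # columns orthonormal, so G^T = G^{-1}

# Break orthonormality: double the second column's length.
H = G.copy()
H[:, 1] *= 2.0
P = H.T @ H
assert np.isclose(P[0, 1], 0.0)           # columns still orthogonal: off-diagonal stays 0
assert np.isclose(P[1, 1], 4.0)           # diagonal entry is the squared column norm, not 1
```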

## Notation in 'Transpose of Linear Maps'

is a terrible notation for anything because it looks like a zero. We should change this to or something. Is the author trying to suggest an ? Because then they should just use the ...

Also, we need to be consistent for our transpose notation. Should it be or ? We use three of the four possibilities here.

129.32.11.206 (talk) 19:16, 10 October 2012 (UTC)

## Transpose of linear maps: why defined in terms of a bilinear form?

In the section Transpose of linear maps, the abstract definition of a transpose is in principle independent of any bilinear form. This was stated in this way until changed by this edit (which may have been taken from *Linear Algebra Quick Study Guide for Smartphones and Mobile Devices*). This fundamentally changes the definition of a transpose in the abstract context. It would make more sense to me if it were defined primarily in the metric-free context, and (if desired) related to the concept defined in the section at present when suitable bilinear forms are available. I suggest reverting this section to the earlier form, with the approach using bilinear forms omitted. Does anyone with more familiarity of the area know what the most generally accepted definition is? — *Quondum* 14:02, 1 June 2013 (UTC)

- I would call what is described in that section of the article the adjoint rather than the transpose, although I'm not sure whether there is a universally accepted definition. It would make sense to me to define the transpose in a metric-free setting and define the adjoint as a generalization. I'm a little surprised that we don't already have an article on the adjoint (except for the special case of the Hermitian adjoint). Sławomir Biały (talk) 14:59, 1 June 2013 (UTC)

- Thanks. It is a pity that the definitions seem to be a little variable (gauging from the few references I've browsed). I'll make a change along these lines in the next week or so, any comment from other editors being welcome. — *Quondum* 11:07, 2 June 2013 (UTC)
- I've made some comprehensive changes to the section, criticism welcome. I also removed a misguided association of the transpose of a coordinate vector and the more abstract concept of a transpose from Dual basis. — *Quondum* 02:00, 5 June 2013 (UTC)