Talk:Dual space



Discussion from 2004

This is fine math, but no one has even defined a basis for a Vector Space, given the standard geometric interpretation of R3 as a Vector Space (a good concrete example), etc. Let's try to fill in the more elementary material before we soar to these heights...:-). I guess that includes me, too. RoseParks

I moved Rose's comment here, partly because there is at least an entry for basis of a vector space now. DMD

I've seen what you call the "continuous dual" denoted by X* as well. Often the distinction is entirely based on context: if you're just working with vector spaces, X* is the algebraic dual; if you're working with normed spaces, it's the continuous one. It might be worth noting this, because I for one have been confused by texts working with what they call X* when they mean what you call X'. cfp 22:37, 4 Apr 2004 (UTC)

I read elsewhere that a square matrix can be thought of as a tensor with one covariant index and one contravariant index. So it seems to me that row and column vectors correspond to tensors with a single (covariant or contravariant) index. If dual spaces have to do with the relation between row and column vectors, does it have something to do with tensors?

Well, yes. General tensors on a vector space V are built up from the tensor product of some copies of V and its dual space. Charles Matthews 09:56, 11 May 2004 (UTC)

continuous dual need not be Banach

The article claims that the continuous dual of a normed vector space is something stronger: a Banach space. It seems to me this cannot be so. For example, what if my normed vector space had the rationals as its underlying field? The dual will also have Q as its field, and so cannot be complete. -Lethe | Talk 17:40, 3 October 2005 (UTC)

It's probably assuming that the base field of a normed vector space must be R or C. The normed vector space article used to require this. --Zundark 19:32, 3 October 2005 (UTC)

linear discontinuous function on a TVS

Oleg, I think this should be an example of a linear map on a topological vector space which is not continuous. Let X be a space which is not locally compact and consider the space of real functions on it with the compact open topology. The Dirac delta functional sends each function to its value at 0. Since X is not locally compact, this is not continuous. Since X is not compact, this is not a metric space. -lethe talk 15:12, 27 December 2005 (UTC)

Another: the article on locally convex topological vector spaces gives an example of a space which is not locally convex. This space has no continuous functionals other than 0. -lethe talk 15:27, 27 December 2005 (UTC)

double dual and transpose

I took the liberty of adding that transposing gives 'more or less' the same map when you associate vectors in V with vectors in V** by the connection explained in the bidual part. However, in order to avoid confusion I had to place the transpose section after the bidual section.

A few questions: is what I said correct, or is finiteness of dimension needed somewhere? And why does my expression at the end of the transpose part look 'not nice'? Evilbu 22:38, 14 February 2006 (UTC)


I have already posted elsewhere, but I should mention it here too.

Annihilators on spaces are very important when studying projective spaces, as they help you classify all correlations and thus also the polarities.

Now I almost made an annihilator article (note there is already a disambiguation page linking to annihilator in ring theory, etc.), but then I wondered if it shouldn't be defined here?

Another thing that bothers me is my inexperience with infinite dimensional vector spaces.

dim (V*) and dim(V)

One of the authors seemed to believe that for dim(V) infinite, dim(V*) > dim(V). The easiest counterexample is in another part of the article, where for any L^2 space V = V* and thus dim(V) = dim(V*). jbolden1517Talk 19:35, 17 May 2006 (UTC)

For the algebraic dual it is always true that dim(V*) > dim(V) if dim(V) is infinite. The other part of the article you mention is talking about the continuous dual (which is a subspace of the algebraic dual, and often has smaller dimension). --Zundark 20:24, 17 May 2006 (UTC)
Then I think you may want to mention that in the article. This doesn't seem to make any sense. I'm a little rusty but, if you want to be purely algebraic then why use vector spaces and not speak in terms of modules? Why talk in terms of functional analysis (where you need continuity)...
Also, can you give me a quick sketch of the proof of the greater cardinality? I'm still not seeing why that's true. For example, I'm thinking of F_2^Z, where F_2 is the field with 2 elements and Z is the integers. Why is the dual space actually larger? jbolden1517Talk 21:00, 17 May 2006 (UTC)
Assuming the axiom of choice, a vector space of dimension k is isomorphic to the coproduct of k copies of the underlying field, while the dual space is isomorphic to the product of k copies. Finite coproducts and products coincide in abelian categories, but an infinite product (which is allowed to have infinitely many nonzero terms) is strictly larger than an infinite coproduct (which may only have finitely many nonzero terms). So for example, the vector space over F2 with countably infinite basis contains things like (1,1,1,1,0,0,0,0,....) but not (1,1,1,1,1....). The dual space contains both, and the latter is independent of the former (remembering that independence is decided by finite linear combinations only. Infinite linear combinations are only allowed in TVSes). -lethe talk + 21:07, 17 May 2006 (UTC)
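A small Python sketch of lethe's point (the representations here are my own illustration, not from the discussion): elements of the direct sum have finite support, while dual functionals may be arbitrary sequences.

```python
# An element of the direct sum (coproduct) of countably many copies of
# F2 has finite support, so we can store it as the set of indices where
# it equals 1.  A dual functional, by contrast, may be ANY 0/1 sequence,
# here given as a predicate on indices.

def pair(functional, vector_support):
    """Apply a functional (index -> 0 or 1) to a finite-support vector
    over F2; the sum is finite, hence always defined."""
    return sum(functional(k) for k in vector_support) % 2

v = {0, 1, 2, 3}                 # the vector (1,1,1,1,0,0,0,0,...)

# (1,1,1,1,1,...) is NOT in the direct sum -- every finite linear
# combination of basis vectors has finite support -- but it is a
# perfectly good element of the dual space:
all_ones = lambda k: 1

print(pair(all_ones, v))         # four 1s, mod 2 -> 0

e0_star = lambda k: 1 if k == 0 else 0   # a coordinate functional
print(pair(e0_star, v))          # -> 1
```

The point of the sketch: `all_ones` is a well-defined functional precisely because every vector it is applied to has finite support, even though `all_ones` itself corresponds to no vector in the direct sum.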
Where is it stated that "the vector space over F2 with countably infinite basis contains things like (1,1,1,1,0,0,0,0,....) but not (1,1,1,1,1....)."? My understanding of things like Hilbert spaces and Banach spaces is that there is no requirement that all but a finite number of coefficients vanish. I'll crawl through the product and coproduct articles, but from the naive point of view, these statements are head-scratchers. linas 00:42, 15 October 2006 (UTC)
Thank you! Do you mind if I put the first two sentences in the article? You actually give a description of the algebraic dual which wasn't in the article. If you are OK with that, mind if I make one change: I would use "direct sum" rather than "coproduct", since that doesn't assume category theory, which your argument doesn't need. Thanks again either way. jbolden1517Talk 21:31, 17 May 2006 (UTC)
I would certainly agree that the term "direct sum" is preferable to "coproduct" in an article about linear algebra. -lethe talk + 22:27, 17 May 2006 (UTC)

I have rewritten your section. Comments welcome. I mostly added stuff, didn't remove anything you wrote, just shuffled and rephrased. The one thing I did remove was the examples of tuples from the product and coproduct. I'm not sure they're needed. -lethe talk + 03:54, 22 May 2006 (UTC)

I like your version better than mine. Really good job! A few comments:
  1. I don't like the way you phrase the basis issue. You make it seem like choosing a basis allows us to decompose, rather than a decomposition being a basis (by definition). In terms of logical implication it reads to me like basis -> decomposition, which is misleading since basis <-> decomposition.
  2. I think I'd drop the choice issue; it's not needed. The basis article covers this issue (and in fact includes the proof). It's a side issue about picking a basis, not about describing dual spaces.
  3. "The structure of the algebraic dual space depends on whether the space is finite dimensional or infinite dimensional." This as phrased is false. Nowhere do we use finite dimensionality in our description of the structure of the dual. Everything we say is true of the finite case....
jbolden1517Talk 04:34, 22 May 2006 (UTC)
I think I've fixed your number 3, how's it look? The sentence could probably be excised altogether; it was just meant to be an introduction/summary of the section. Number 2, I am pathologically compelled to mention AC every time I use it, so I can't remove it. As for point 1: It's true that an isomorphism between a vector space and a coproduct of the field is equivalent to a basis. It just seems to me that the existence of a basis is more axiomatic (in the original sense of the word, i.e. it's "obviously true"), and therefore a convenient starting point of the argument. We could just as well have started with the existence of an isomorphism, it's true. I like it this way, but if you want to change it, well go ahead, and I'll consider your version. -lethe talk + 15:17, 22 May 2006 (UTC)
I gave an example of the 3 points in the article. I made them easy to separate so you can revert / modify. jbolden1517Talk 16:35, 22 May 2006 (UTC)

I don't think this article is an appropriate place to provide a proof that the dual of the coproduct is the product. That should go in one of those articles. -lethe talk + 19:30, 22 May 2006 (UTC)

That's not what I'm proving. I'm proving that you can pull the * inside the parenthesis by turning sums into products. jbolden1517Talk 19:40, 22 May 2006 (UTC)

category of dual vector spaces

I'm pulling this out since it may turn into a longer conversation.

What you said and what I said are the same thing. The dual of a coproduct is a product. The * indicates dual space. This article is about dual spaces, not about coproducts, which has its own article. -lethe talk + 19:47, 22 May 2006 (UTC)

Good point. But then what we haven't shown is that dual vector spaces form the dual category to the category of vector spaces. I'll clean up and then we'll talk more. jbolden1517Talk 19:54, 22 May 2006 (UTC)
uh, I'm not exactly sure what you mean here. The vector dual space is in the category of vector spaces, not the dual category. -lethe talk + 20:08, 22 May 2006 (UTC)
Nope. If you want to use the fact that the coproduct is the dual of the product, then you have to think of * as a contravariant functor from the category of vector spaces to its dual category (which just so happens to be the category of vector spaces). We never proved any of this. That is, * : Vect -> Vect^op. jbolden1517Talk 20:40, 22 May 2006 (UTC)
Your comment that "the dual of the category of vector spaces is the category of vector spaces" is incorrect. For a category to be self dual, it must contain only isomorphisms. Vect contains arrows that are not isomorphisms. The dual space functor is a contravariant functor from Vect to Vect. That is, it takes linear maps to linear maps with their arrows reversed. Note that the arrows in Vect^op are not linear maps. Now, the stuff that you're spending all this time doing is showing that the hom(-,F) functor turns coproducts into products. This is the dual space functor. It is a functor from Vect to Vect. Hom-functors usually take objects to hom-sets, but the hom-sets in Vect happen to also be vector spaces. What you're proving is the well-known fact that hom-functors are continuous. The covariant hom-functor takes limits to limits (products to products) and the contravariant hom-functor takes colimits to limits (coproducts to products). This well-known fact does not belong in this article. It is already mentioned in the article on limits, but it might deserve to be expanded on in the article on coproducts. It can't stay here. -lethe talk + 21:00, 22 May 2006 (UTC)
OK that's the way to phrase it. I'll take care of this soon. Not much more time tonight. jbolden1517Talk 21:15, 22 May 2006 (UTC)
Well now that you know the correct phrasing (the dual of a coproduct is a product is the linear algebra language version, the hom-functor is continuous is the category theoretic phrasing), perhaps you will consider my point that this proof does not belong in this article? I'm going to revert it for now. We should probably find a new home for the proof. -lethe talk + 21:30, 22 May 2006 (UTC)

I have added the assertion that the dual of the coproduct is the product to the articles coproduct and direct sum. Those are the right articles for this stuff. -lethe talk + 01:47, 23 May 2006 (UTC)

canonical injection

I was trying to prove that the canonical injection from a vector space to its double dual is indeed an injection. Seems to me that I couldn't do it without choosing a basis (and therefore invoking the axiom of choice). I don't like that. Is there a way to check the injectivity without a basis? -lethe talk + 03:54, 22 May 2006 (UTC)

Let v, w ∈ V, and let v**, w** be their images under the map. Then v** = w** means that f(v) = f(w), i.e. f(v − w) = 0, for every f in V*.

But then v − w is in every hyperplane of V, that is v − w = 0. jbolden1517Talk 04:49, 22 May 2006 (UTC)

The last step went a little too fast for my taste. How do you show that 0 is the only vector annihilated by all of V*? -lethe talk + 15:22, 22 May 2006 (UTC)
Well, if v ∈ V is nonzero, then there certainly exists a linear functional taking v to 1 (since {v} can be extended to a basis and linear functionals can be specified on bases), so not all linear functionals can kill v. -- Fropuff 16:09, 22 May 2006 (UTC)
But I think you've used choice to extend {v} to a basis, which is what I'm hoping to be able to avoid. -lethe talk + 16:11, 22 May 2006 (UTC)

(You got the same answer twice; I was in the midst of typing the below and....)
Well, there are a bunch of proofs. I'd recommend using the Hahn–Banach theorem: define f on the subspace span{v − w}, with f(v − w) = 1, and then extend f as a linear functional to all of V. Then v** ≠ w**. Though you do end up picking a basis, at least you can just cite the result rather than prove it in this article.
As for your concern about choice: you do need to use choice. Being able to construct linear functionals requires choice. jbolden1517Talk 16:22, 22 May 2006 (UTC)
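Written out (a sketch; the hat notation for the canonical map V → V** is my own), the argument above runs:

```latex
\text{Given } v \neq w \text{ in } V,\ \text{define } f \text{ on } \operatorname{span}\{v-w\}
\text{ by } f\bigl(t(v-w)\bigr) = t,
\text{ then extend } f \text{ linearly to all of } V.\ \text{For the images } \hat v, \hat w \in V^{**}:
\hat v(f) - \hat w(f) = f(v) - f(w) = f(v-w) = 1 \neq 0,
\text{ so } \hat v \neq \hat w .
```

The extension step (by Hahn–Banach or by extending to a basis) is exactly where choice enters.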
So in the end, your answer is that choice is unavoidable? In other words, in models where choice does not hold, the map from a space to its double dual need not be injective. I'm not pleased with that, but if that's the answer then I guess I have to accept it. -lethe talk + 16:41, 22 May 2006 (UTC)
Here is hopefully a way of justifying it. You need choice to ensure there are "enough" linear functionals (i.e. that V* is big). Without choice you could have a tiny V* even for big V, which would mean that V** isn't "big enough" to hold V. jbolden1517Talk 16:56, 22 May 2006 (UTC)
This is weird. Vector spaces with bases always have bigger dual spaces, while vector spaces without bases may have smaller dual spaces. -lethe talk + 17:33, 22 May 2006 (UTC)
To put it even more starkly: without choice, V* could even be the zero vector space. Geometry guy 14:56, 12 February 2007 (UTC)

reasons for reversion

  1. We do not need to repeat things. The text mentions that the dual of the underlying field is itself. Writing it a second time with more esoteric notation doesn't help anything.
  2. This is an article about linear algebra. We should not mention anything to do with category theory unless we have to. Scares off readers.
  3. The appropriate place for discussion of why the dual of a coproduct is a product is in those articles. These facts are mentioned there.
  4. In my opinion, Jbolden's text makes the discussion longer without adding anything at all.

I've used my 3 reverts for the day. It's out of my hands. -lethe talk + 03:13, 23 May 2006 (UTC)

Thank you for discussing rather than just reverting. Remember how this whole section started: I was questioning why it was "obvious" that dim V* > dim V for infinite-dimensional V. I think we've proven it's not obvious. Similarly, I don't think that F(sum x) = prod F(x) should be seen as obvious. There are two ways we can prove it. The first is constructive, which is what I was doing earlier today (and was my preference). I can do that without any category theory. You then objected that it followed from category theory. So now we have a proof using category theory and the duality.
But we have not shown anywhere that just because the functors are "dual" we can use "dual" in the linear algebra sense. If you want to replace "dual" with "transpose" (since we've picked a basis anyway) we can do that, but it's going to be even longer and more constructive than my first version. Anyway, you can't assert both that something is too complicated to explain and that it is too trivial to be worth explaining.
I want you to tell me how a sophomore is supposed to understand why that first equation holds true. jbolden1517Talk 03:39, 23 May 2006 (UTC)
The text you keep adding (lately) does not explain why the dual of the coproduct is the product. What is the purpose of the text then? It only repeats things which are already said, but in a categorical language. What end does that serve? -lethe talk + 03:48, 23 May 2006 (UTC)
You are equivocating on "dual". I'm not trying to explain that the dual of the product is the coproduct. I'm going to use new language to help you see what the problem is. I'll use the word "star spaces" for spaces of maps from a vector space to its underlying field. I'll use the word "reverse arrows" for categorical duals. Your assertion is that the star space of the sum is the reverse arrow of the sum over star spaces. Now, written that way, is it at all clear that it's true, much less that it is so obvious it isn't worth explaining? Nowhere in the article do we establish that a star space is a categorical dual.
Now if you don't like category theory then what was your objection to the constructive proof the last time? jbolden1517Talk 04:00, 23 May 2006 (UTC)
I like category theory. As I have said, I don't think the language of categories should be used in an article about linear algebra unless it has to. In this case, it does not have to. I cannot understand what you're talking about in the other paragraph above. -lethe talk + 04:11, 23 May 2006 (UTC)

I have looked at the most recent versions. Although the difference is small, I prefer Lethe's version. I do not like category theory. Referring to it in this article is disturbing and makes me not want to read the article. JRSpriggs 08:02, 23 May 2006 (UTC)

Then how about addressing the real point. Which of the following options do you prefer:
  1. No proof / no justification (which is Lethe's argument)
  2. Element based proof (original from a few days ago)
  3. Function based proof (from early yesterday)
  4. Categorical proof
No one disputes that #4 is the easiest and the shortest if you have the right background (which is a lot). #2 is pretty short and pretty clear but still involves some hand waving. #3 is the longest but is a full proof. I just find #1 unacceptable. jbolden1517Talk 11:09, 23 May 2006 (UTC)
Here you are arguing again for the inclusion of a proof of the fact that the dual of the direct sum is the direct product of the duals. I will once again point out that the text you're reverting to does not contain such a proof (though an earlier revision did). Anyway, let me say regarding your 4 points, let me again say that #1 is the right option; this is not a textbook, we do not have to prove every theorem. Moreover, even if we did want a proof (which I am not at all convinced we do), this is the wrong article to contain it. -lethe talk + 12:39, 23 May 2006 (UTC)

OK, I have elaborated on the proof at coproduct in painful detail. I have added a brief indication of the idea in direct sum, and I have added to this article a description of the fact that the dual space is the product, i.e. all infinite tuples. I have the feeling that there is too much discussion of this fact. I wonder what others think. -lethe talk + 05:31, 24 May 2006 (UTC)

jbolden1517, if you want me to respond to your list of suggested proofs, please provide pointers to the specific historical versions and which section they were in. JRSpriggs 06:44, 24 May 2006 (UTC)

A question I'd love answered

Let V be the vector space of all sequences of real numbers in which only a finite number of elements are non-zero. This vector space is not finite dimensional, and its image under any injection into its dual space is a strict subspace (as we'd expect). That is, the dual space of V is isomorphic to the set of all sequences of reals, without the condition that almost all entries be zero. So what is its double dual, V^**? Why is this set strictly "larger" than V^*? The argument in the article relies on there being a basis of V. Can "V^* > V <=> V is not finite dimensional" be proved without using bases? Or does this boil down to some Zorn's lemma argument?


I do not believe that a reader who is unfamiliar with the concept of dual space will find this article very helpful, even though its content as far as I can see is correct.

  • The connection to row and column vectors mentioned in the introduction is not the right way to think about dual spaces. A dual space has nothing to do with the graphical representation of vectors. This connection is perhaps helpful once the reader has a better understanding of what a dual space really is.
  • The central idea which should be stated here is that any vector space always has an accompanying space of linear functionals: its dual space. Gradients of multi-variable functions are a good example of dual space elements. Also, the dual space is distinct from the original space; e.g., the coordinates of its elements change in the "opposite" or dual way compared to elements in the original space as a consequence of basis transformations. This will (at best) also give a motivation for why this concept is defined and has to be taken into account in certain calculations.
  • The article presents two types of dual spaces: algebraic dual spaces and continuous dual spaces. There is no discussion about their relations or differences. Is the latter only a special case of the former or is it something completely different? Some text in the introduction about this would be helpful.

--KYN 20:46, 15 August 2006 (UTC)
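The "gradient as dual vector" idea in the bullets above can be made concrete; here is a minimal Python sketch (the function f and all names are my own illustration, not from the discussion):

```python
# The differential of f at a point is the LINEAR FUNCTIONAL
# v |-> grad f(x) . v, i.e. an element of the dual space of R^2.

def f(x, y):
    return x * x + 3.0 * y        # example function f(x, y) = x^2 + 3y

def differential_at(x, y):
    gx, gy = 2.0 * x, 3.0         # gradient of f at (x, y), computed by hand
    # return the linear functional, not the gradient vector itself:
    return lambda vx, vy: gx * vx + gy * vy

df = differential_at(1.0, 0.0)    # an element of the dual space (R^2)*
print(df(1.0, 0.0))               # rate of change in the x-direction -> 2.0
print(df(0.0, 1.0))               # rate of change in the y-direction -> 3.0

# sanity check against a finite difference of f itself:
h = 1e-6
assert abs((f(1.0 + h, 0.0) - f(1.0, 0.0)) / h - df(1.0, 0.0)) < 1e-4
```

Note that `differential_at` returns a function, not a vector: that is exactly the distinction between an element of the dual space and an element of the original space.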

After doing some more research, it seems that you are confusing the relationship of the dual space with the relationship between covariant and contravariant vectors. Your introduction is completely extraneous to the concept of a dual space. I'm going to revert most of your editing as soon as I find an easy way to do it. 19:21, 2 October 2006 (UTC)
The word "motivation" is not used in the section heading to imply that gradients, etc., are the main motivation for dual spaces. The section is intended to provide an example of when dual spaces appear in a natural way, i.e., as a consequence of rather simple and common calculations. I still believe that the article should provide such an example. Otherwise, it will define "dual space" without providing an explanation of WHY this concept is needed. Also, many vector spaces (including all finite dimensional spaces) are isomorphic to their dual spaces, which may lead to the conclusion that in these cases we can treat elements of the vector space and elements of its dual space on equal terms. This is not true; coordinates of vectors and dual vectors do not transform in the same way when the basis changes in the vector space. This is a general observation which doesn't have to be related to gradients. The section which is under discussion here is intended to address both these issues: provide an example of a dual space which can be constructed from a practical problem, and demonstrate that elements of the vector space and the dual space must not be treated on equal terms, even if the two spaces are isomorphic. --KYN 22:10, 2 October 2006 (UTC)
Your example and changes to the introduction were not needed, and serve only to increase the conceptual burden on the reader. The best course of action would be to revert your changes to this article.
I suggest you look at the articles on wikipedia which reference the dual space article. You will find that such references require the dual space to be a space of linear functionals, not "a space of things that transform a certain way." This transformation example you provide is not needed to understand the space of linear functionals and is unrelated to their effective use. Published literature is the same way. If you disagree, please cite some reference which discusses the dual space as a space of vectors which transform in a certain way. If you can not cite such a reference, then your material does not belong on wikipedia.
Your terminology and example are seething with contradictions. For one, as I have already pointed out, the gradient of a scalar function is not in the dual space because it is not a linear functional. It is not a function from a linear space to a field. I don't think I can be clearer on that point. Worse than this, the transformation you use is pathological. No one expects even scalar functions to be invariant under the transformation you used (if you do, you need to check yourself); why should anyone familiar with calculus be surprised at the effect of your transformation? Furthermore, the apparent effect of a coordinate transformation (under an appropriate transformation) should be practically invisible when dealing with cartesian coordinates. Why should you expect someone new to linear functionals to have dealt with transformations of any other coordinate system?
Essentially, you are trying to motivate newcomers to understand a linear space of linear functionals by giving a faulty example of covariant transformations. The introduction which you replaced is much better. Please return the former introduction. 16:06, 3 October 2006 (UTC)

How can the gradient be in the dual space?

I cannot understand this... The space which the gradient ∇f belongs to is the dual space relative to the space which x or dx belong to.

The gradient of a function f : R^n → R is a map ∇f : R^n → R^n, not R^n → R. So the gradient is not a linear functional and cannot be in the dual space. Although you could map the gradient to the dual space by ∇f ↦ (v ↦ ∇f·v).

20:38, 21 September 2006 (UTC)

Yes, you are right. I did some modifications to the text. Have a look and see if it makes sense. --KYN 21:43, 21 September 2006 (UTC)
I don't understand what it means to "interpret the gradient as a linear functional." It seems to me that the gradient ∇f(x) is a vector in R^n, and the associated function v ↦ ∇f(x)·v is the associated dual space vector. I see that you could interpret v ↦ ∇f(x)·v as a linear functional, but not ∇f(x) by itself. I added arguments to the gradient function here to make it clear that I was referring to the gradient "at" a specific point as being in R^n.
14:55, 22 September 2006 (UTC)
If you see ∇f as an element in R^n, then this implies that it should transform in the same way as other elements of this space, like dx. The example demonstrates that ∇f and dx transform in "opposite" ways, hence they cannot be elements of the same space. Is this part OK with you? --KYN 19:02, 22 September 2006 (UTC)
I don't follow this either. If u and v are two elements in R^n and T is a linear transformation from R^n to R^n, then u and v will both transform "in the same way" under the transformation T. Maybe you could be more explicit about what you mean by "transform" and "in the same way." 22:39, 22 September 2006 (UTC)
Looking at your example more closely I am even more confused. What exactly do you mean by df? Since f is a multivariable function, the change in f will be different depending on which direction you move. One way I would write this is to say df(x; u), which represents the differential change in f at x when I move in the direction of u. You are suppressing both x and u. Now, x is redundant if we are talking about a global effect on f (i.e., we are using an arbitrary x), so I don't mind hiding that. But if we let u represent the direction, then I would expect df(2u) = 2·df(u), since I expect f to change twice as much if I move twice the distance. I ask again that you be more explicit in the details of your calculation. 04:40, 23 September 2006 (UTC)
You are right that the discussion presented in the example takes place at a specific point, which we can call x0; the gradient is evaluated at this point and the displacement is relative to that point. However, if we make a change of variables y = a·x, then the chain rule gives
df/dy = (df/dx)·(dx/dy) = (1/a)·(df/dx), and it follows that the gradient transforms with the factor 1/a while the displacement transforms with the factor a. This is a standard result which you can find in any textbook on calculus.
A less abstract example is as follows: consider an electric field in a 1D space which increases linearly with 1 Volt per meter. This means that the potential gradient (or derivative with respect to position) is 1 V/m. If A and B are two points at 1 meter distance, we get a potential difference between the two points which is 1 [Volt/meter] times 1 [meter] = 1 Volt. Now, if we instead measure the position in inches, the distance between A and B is (approximately) 40 inches (y = 40 x) and the potential gradient is 0.025 Volt per inch (df/dy = 0.025 df/dx). Consequently, the potential difference between A and B is 0.025 [Volt/inch] times 40 [inch] = 1 Volt, which is the same as before and also what we expect. If the potential gradient transformed in the same way as the displacement, i.e., with a multiplicative factor of 40, then the potential difference between A and B would have to be 40 times 40 = 1600 Volt. This is clearly not correct.
The example given in the article is only a generalization of this example. We consider a general multi-variable and non-linear (but differentiable) function f and therefore have to restrict ourself to consider the infinitesimal change in f (denoted df) when we move an infinitesimal displacement (denoted dx). --KYN 20:33, 24 September 2006 (UTC)
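The arithmetic in the volts-per-metre example above can be checked mechanically; a small Python sketch (the factor of 40 and the values are from the example, the variable names are mine):

```python
# Change coordinates by y = a*x (metres -> "inches", using the text's
# approximate factor of 40) and check that the pairing of gradient and
# displacement -- the potential difference -- is invariant.

grad_x = 1.0                 # potential gradient, V/m
dx = 1.0                     # displacement A -> B, metres
a = 40.0                     # change of coordinates y = a*x

dy = a * dx                  # displacement transforms WITH the factor a
grad_y = grad_x / a          # gradient transforms with the INVERSE factor

print(grad_y)                # 0.025 V/inch, as in the example
print(grad_x * dx, grad_y * dy)   # the pairing is invariant: 1.0 1.0
```

This is the "opposite" (covariant vs. contravariant) transformation behaviour the example is describing: the two factors cancel exactly.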
I agree that works. But you still haven't made clear what you mean by "transform". I thought you were trying to say something which clearly is not true. Instead, however, you are saying that I made this error because I did not understand what you meant by "transform" (and I still don't). But now that I know which permutation of variable substitution led you to that conclusion, I can ask why you think I would expect those two functions to transform the same way. They are seemingly arbitrary functions; what do they have to do with the dual space?
Take for example and Under the "transformation," I find
Should I be surprised that these vectors don't transform in the same way? Does this make the existence of a dual space obvious to me? Please elaborate on what you mean by "transform" and what this has to do with "dual space." 01:36, 25 September 2006 (UTC)
Agree with anon here -- dual spaces have nothing to do with gradients; and differential one-forms are not exactly an "easy" example, as they carry all sorts of other baggage. linas 00:28, 15 October 2006 (UTC)
This discussion is back-to-front, and "gradient" is the wrong word here. The correct word is (total) derivative or differential (see also derivative (generalizations)). The differential of a differentiable function from R2 to R at a point x in R2 is the "best linear approximation" to this function near x and is a linear map from R2 to R, hence an element of the dual space of R2. It is well-defined, independent of the choice of inner product on R2 (it uses only the induced topology, which is the same for any norm). The gradient is a way of interpreting the differential as a vector in R2, and makes essential use of the standard dot product (or some other choice of inner product). Hence it is backwards to "interpret the gradient as a linear functional". To put it another way, if V is an abstract finite dimensional real vector space, then the gradient of a real-valued function on V at a point x is not defined, but the differential is, and is an element of the dual space. It is therefore, in my opinion, a very good example of why dual spaces are needed, and you do not need any baggage from differential forms to say this because you are only differentiating a function at a single point. I hope that helps! Geometry guy 15:23, 12 February 2007 (UTC)

Confusing/wrong example

The "examples" section currently states the following:

If V is infinite-dimensional, then the above construction of ei does not produce a basis for V* and the dimension of V* is greater than that of V. Consider for instance the space R(ω), whose elements are those sequences of real numbers which have only finitely many non-zero entries (dimension is countably infinite). The dual of this space is Rω, the space of all sequences of real numbers (dimension is uncountably infinite). Such a sequence (an) is applied to an element (xn) of R(ω) to give the number ∑nanxn.

The statement about uncountability is certainly wrong, or at least confusing. To make this more concrete, and easier to talk about, consider the space Z_2^ω, where Z_2 is the set with two elements. Then Z_2^ω has a countable basis, and thus its dimension is countable. As a set, it is uncountable: taken as the set of all possible strings in two digits, it is homomorphic to the set of all reals on the unit interval, and thus clearly uncountable. The fact that there are an uncountable number of elements in the set should not be confused with the fact that, as a vector space, it has a countable basis, and thus a countable dimension. (Here, by "countable" I mean countably infinite; I do not mean "finite"). Yet the above example seems to be making exactly this confusion, with R standing in the place of Z_2. Can someone please fix this to say what should have been said? linas 00:09, 15 October 2006 (UTC)

Hello linas, long time no see. Unless I'm confused by the notation, I don't think Z_2^ω has a countable basis. But you are right, this is confusing.--CSTAR 01:31, 15 October 2006 (UTC)
I've been busy doing other things. I presume ω is countable infinity, and so I envision a basis of vectors with a 1 in the k'th position and k an integer. So wouldn't that be a countable basis? It's tantamount to saying "the binary expansion of any irrational number has a countably infinite number of binary digits". linas 16:13, 15 October 2006 (UTC)
It's not an algebraic basis for all of the space. That set certainly is linearly independent. However, the only vectors you will get in the algebraic span are those whose coordinates vanish outside a finite set.--CSTAR 16:34, 15 October 2006 (UTC)
Ahh, I'm starting to get it. This is the same argument as below. This set is linearly independent, but, as a Hamel basis, it fails to span the entire space. Ergo it is not an algebraic basis. Notice that algebraic basis is a red link, and yet its definition is central to the argument. I presume that there must be a proof that no algebraic basis for such a space can ever be countable; does such a theorem have some famous name? linas 20:03, 15 October 2006 (UTC)

Similarly, the set W of strings with only a finite number of 1's in them is a countable set (it's homomorphic to the rationals), and it has a countable dimension. Thus, I'd conclude that both W and Z_2^ω have countable dimension, but W has a countable number of elements, while Z_2^ω has an uncountable number of elements. I think this is what the example is trying to say, right? linas 00:28, 15 October 2006 (UTC)

Algebraic structure

Similarly, we have the section:

Structure of the dual space
The structure of the algebraic dual space is simply related to the structure of the vector space. If the space is finite dimensional then the space and its dual are isomorphic, while if the space is infinite dimensional then the dual space always has larger dimension.
Given a basis {eα} for V indexed by A, one may construct the linearly independent set of dual vectors {σα}, as defined above. If V is infinite-dimensional however, the dual vectors do not form a basis for V*; the span of {σα} consists of all finite linear combinations of the dual vectors, but any infinite ordered tuple of dual vectors (thought of informally as an infinite sum) defines an element of the dual space. Because every vector of the vector space may be written as a finite linear combination of basis vectors {eα}, an infinite tuple of dual vectors evaluates to nonzero scalars only finitely many times.

Coming from a rather naive background, I find the above paragraphs utterly confusing. For example, the statement: the span of {σα} consists of all finite linear combinations of the dual vectors... ... Huh? Where is it required that the combination be finite? From the articles on Hamel basis and Schauder basis, I am typically used to using a Schauder basis for countably-dimensional vector spaces, so this appearance of a Hamel basis out of thin air is confusing. I suspect that this has something to do with the categorical notions of product and coproduct, based on the discussions above from May 2006, but if this is the case, it is quite opaque.

I am also unable to parse the sentence ... an infinite tuple of dual vectors evaluates to nonzero scalars only finitely many times. because I can't figure out what "evaluates" means in this context. Please fix. linas 01:08, 15 October 2006 (UTC)

The "structure of the dual space" subsection is in the section on the algebraic dual. So we do not have any topology, and therefore linear combinations must be finite and the only basis we have is the Hamel basis. (The whole algebraic/continuous thing seems to confuse a lot of people. I wonder if it might be best to have separate articles for the algebraic dual and the continuous dual, and use the dual space article to give a general overview.) --Zundark 12:20, 15 October 2006 (UTC)
I agree.--CSTAR 14:54, 15 October 2006 (UTC)
Ahh, of course, ... if by "no topology", you mean no vector norm and no metric, so that the convergence of Cauchy sequences cannot be discussed, then yes, I now realize that of course, one can only take finite sums. This just didn't come through in reading the article. I think the algebraic/continuous confusion could be remedied quite simply just by harping on why only the Hamel basis can be used when constructing the algebraic duals. Just state the ground rules, and I think the confusion will go away. I might try "being bold" later on if I get the chance. linas 16:13, 15 October 2006 (UTC)
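
In symbols, Zundark's point can be sketched as follows (my notation; take a Hamel basis {e_α} of V indexed by α in A, with dual vectors σ^α). Any scalar family (c_α), even one with infinitely many nonzero entries, defines an element f of the algebraic dual, because every vector of V has only finitely many nonzero coordinates:

```latex
f\Bigl(\sum_{\alpha \in F} v^{\alpha} e_{\alpha}\Bigr)
  \;=\; \sum_{\alpha \in F} c_{\alpha}\, v^{\alpha},
\qquad F \subseteq A \ \text{finite},\qquad c_{\alpha} := f(e_{\alpha}).
```

This also seems to be what "an infinite tuple of dual vectors evaluates to nonzero scalars only finitely many times" was getting at: on any single vector, only finitely many of the σ^α give a nonzero value.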

Useful examples?

The first example given here is pretty much useless. It requires no thought and does not give a general method for finding the dual basis of a finite-dimensional vector space. For anyone learning about dual spaces, the example could confuse them even more.


I suggested above that the article be split into continuous dual and algebraic dual, with dual space being used for an overview article. CSTAR agreed, but there has been no further discussion of this. I am prompted to make the suggestion again after finding that someone inserted the claim that Hilbert spaces are isomorphic to their duals in the section on algebraic duals. I think less confusion would arise if both concepts had their own entries. --Zundark 09:51, 19 November 2006 (UTC)

Introduction again

I'm still not satisfied with the current introduction. I tried to rewrite it some time ago, but it was reverted on unclear grounds. Here are my concerns.

In mathematics, the existence of a dual vector space reflects in an abstract way the relationship between row vectors (1×n) and column vectors (n×1). This statement is at best correct for some specific examples. It is not a general characterization of a dual space. Furthermore, this statement is not very informative for someone not familiar with the concept, and it is not properly explained in the rest of the article. The construction can also take place for infinite-dimensional spaces and gives rise to important ways of looking at measures, distributions, and Hilbert space. The use of the dual space in some fashion is thus characteristic of functional analysis. It is also inherent in the Fourier transform. This could probably be OK, but since the unfamiliar reader has not been told what a dual space is, this information falls flat. Also, the connection to the Fourier transform is never explained in the article.
Dual vector spaces defined on finite-dimensional vector spaces can be used for defining tensors which are studied in tensor algebra. OK When applied to vector spaces of functions (which typically are infinite dimensional) dual spaces are employed for defining and studying concepts like measures, distributions, and Hilbert spaces. Consequently, dual space is an important concept in the study of functional analysis. Why are these statements from the previous section repeated here?
There are two types of dual space: the algebraic dual space, and the continuous dual space. The algebraic dual space is defined for all vector spaces. The continuous dual space is a subspace of the algebraic dual space, and is only defined for topological vector spaces. Is "subspace" really the correct word here?

Instead I propose the following introduction:

In mathematics it can be shown that any vector space V has a corresponding dual vector space (or just dual space for short) consisting of all linear functionals on V. Presents a formal definition already in the first sentence which later can be extended and explained. Dual vector spaces defined on finite-dimensional vector spaces can be used for defining tensors which are studied in tensor algebra. When applied to vector spaces of functions (which typically are infinite dimensional) dual spaces are employed for defining and studying concepts like measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in the study of functional analysis. A concatenation of the first two paragraphs in the current intro.
There are two types of dual spaces: the algebraic dual space, and the continuous dual space. The algebraic dual space is defined for all vector spaces. The continuous dual space is a particular or certain type of algebraic dual space which is defined only for topological vector spaces. Same as before, but "subspace" is replaced by "particular/certain type".

--KYN 16:27, 5 January 2007 (UTC)

Your previous edits were reverted because they introduced a number of errors, and were generally confusing. And "subspace" is certainly the correct word. I agree that the current introduction is not good. Part of the problem is that it's trying to introduce two different concepts (algebraic dual and continuous dual). I think it would be easier to write good introductions if the article were first split as I suggested above. --Zundark 19:14, 5 January 2007 (UTC)

My problem with the current introduction's presentation of the continuous dual space as a subspace of the algebraic dual space is that the context in which this is done suggests that "algebraic dual space" refers to a concept rather than (I believe?) to a specific instance of a dual space related to some specific topological space. Maybe it can be written something like:

There are two types of dual spaces: the algebraic dual space, and the continuous dual space. The algebraic dual space is defined for all vector spaces. When defined for a topological vector space there is a subspace of this dual space, corresponding to continuous linear functionals, which constitutes a continuous dual space.

--KYN 20:12, 5 January 2007 (UTC)

I agree with most of your changes from a style perspective. I think the new "two types of dual spaces" resolves one major issue I had. Another is that I consider the row and column idea to be vital. The most common informal "definition" for the dual space is about column vs. row vectors and dot products.
Tensor algebras are defined over modules (that is, they can be infinite-dimensional), so I think the paragraph as written, with its restriction to the finite-dimensional case, is incorrect. jbolden1517Talk 21:16, 5 January 2007 (UTC)

Splitting the article into one part devoted to the algebraic dual space and one to the continuous dual space is either way to me. The two concepts appear to be reasonably well related, one is a specialization of the other, and to have them in the same article is not a problem. But since the stuff in the continuous dual space section is rather focused on the various properties of such a space it can also be in a separate article.

About the row and column idea, I agree that this can be used to illustrate how a practical implementation of a dual space may be set up for a particular choice of vector space. But I do not see the row-column idea as a vital thing in order to understand what a dual space is, in particular for the general case. I would like to remove the current wording in the intro since I believe that it does not convey anything that the "unfamiliar" reader would be able to grasp at that point. I'd rather see that this idea is further developed in the example section.

OK, let's remove tensor algebras from the intro but keep the tensors. Note that the current and proposed text does not exclude the possibility that tensor also may be defined on infinite dimensional spaces.

--KYN 22:57, 8 January 2007 (UTC)

About the row and column idea, I agree that this can be used to illustrate how a practical implementation of a dual space may be set up for a particular choice of vector space.
I think you are missing something. This is a linear algebra article. The bulk of the readers will have never heard or understood the definition of an abstract vector space. For them there are three vector spaces: the plane R2, Euclidean 3-space R3, and coordinate patches on space-time. For them, the basis is the vector space and thus the dual basis is well defined.
I certainly think we need to cover more advanced topics and more advanced definitions, but "column vectors" is simply too key to far too many people to drop that. Frankly, if I had to drop one or the other I'd treat the abstract definition as an aside.
jbolden1517Talk 21:01, 9 January 2007 (UTC)


As far as I can see, we agree that the article needs a simple example which can introduce the reader to the concept of a dual space. However, this does not mean that "reflects in an abstract way the relationship between row vectors and column vectors" will make sense to the reader who is not already familiar with the concept. I'd rather see this ambiguous and imprecise statement replaced by an example (early in the article) of a space and its dual space, possibly in the form of row and column vectors. Such an example can then provide the "key" which you want. --KYN 23:00, 16 January 2007 (UTC)

I could go for a few examples. In fact a separate "examples of dual vector spaces" article might make sense. The key problem is that there is an absolutely huge range of readership. We need to cover everything from the dual of R3 to dual spaces as categorical duals. jbolden1517Talk 01:03, 17 January 2007 (UTC)

I think the example near the beginning with the arrows and parallel lines is wrong: the result of counting lines crossed by the arrow is a natural number, and the natural numbers do not form a field. --Vapniks (talk) 17:11, 9 July 2008 (UTC)

Structure of the dual space

I was somewhat hesitant to edit this section, as I can see from the discussion and history that it has been the subject of lively debate in the past. However, I think I've been able to both clarify and shorten the treatment. There is still one thing missing here, however: although the dual space is seen to be "bigger", that does not prove it does not have a basis of the same cardinality (although the axiom of choice is needed to see that it has any basis at all!). It would be enough to show that the dual space itself has strictly larger cardinality than the original vector space, but I'm not able to see why that is. Geometry guy 17:39, 15 February 2007 (UTC)

The cardinality is not always strictly larger, but the dimension is (assuming you're talking about the infinite-dimensional case). The proof is not trivial - see either of the books listed in the reference section of the PlanetMath entry dimension formulae for vector spaces. --Zundark 19:29, 15 February 2007 (UTC)
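
For the record, the dimension formula Zundark refers to is (if I recall it correctly) the Erdős–Kaplansky theorem: for an infinite-dimensional vector space V over a field K,

```latex
\dim_K V^{*} \;=\; |K|^{\dim_K V} \;>\; \dim_K V .
```

The strict inequality holds because |K|^{dim V} ≥ 2^{dim V} > dim V by Cantor's theorem.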

GA nomination

I have reviewed the article having regard to the guidelines set out in Wikipedia:What is a good article. I have reached the following conclusions:

  1. The article is largely well written. In particular, the first half dealing with algebraic duals is clear and should be understandable also to a reader with limited background in abstract algebra provided that he/she is willing to follow the links to the basic definitions. However, the second half dealing with continuous duals is not quite yet up to the same standard. Partly this issue has to do with the unclear scope of the subject matter covered in this section (referred to in 3.2 below). Specifically the subsection Further properties is still more of a stub than a well-written section. In addition, there are undefined technical terms (e.g., anti-isomorphism is neither defined nor linked elsewhere).
  2. The article is mostly factually accurate. There are no issues with the first half dealing with algebraic duals (except for a slightly misleading statement in the note: weaker axioms than the full axiom of choice suffice to show that R^N has a basis; while this may already be splitting hairs, I would propose wording like "the (usual) proof makes use of the axiom of choice"). However, somewhat misleading statements are made in the second paragraph of the Further properties subsection of the second section: the statements therein appear to apply to all topological vector spaces, while they are not true in such generality (e.g., for a non-Hausdorff space the canonical mapping to the bidual is not injective). They would hold for normed spaces, and perhaps the intention was to continue discussing only Hilbert spaces. This should be easy to fix. A worse issue is that the article does not cite any references. Pointers to at least a few good textbooks and references should be given.
  3. The article is not broad in its coverage. This is largely due to its ambitious scope to cover both algebraic and topological duals. My observations are as follows:
    1. The first section fares quite alright as an encyclopaedia article on algebraic duals of vector spaces. More context for the duality (such as role of algebraic duality in topological (cohomological) duality theorems) could still be added. However, duality for vector spaces is a special case of duality for modules, and treating them in two separate articles would lead to duplication of same constructs, results (partially) and proofs. Keeping duality for vector spaces separate could be justifiable by (i) willingness to keep it together with topological duals or (ii) to avoid introducing too much additional terminology to the vector space case. Either way, the fact that this is a special case should be mentioned and elaborated here or linked to a new article.
    2. The second section is not ready in terms of coverage even for a short encyclopaedia article. There is clearly an issue of how broad a category of topological vector spaces to cover — the article begins by encompassing all topological vector spaces, then moves to normed spaces and Hilbert spaces without warning. The article does not mention the question of which topology to choose for the dual if the space is not normed. And even for normed spaces there are alternatives to the strong topology, which are not mentioned (although the weak dual is mentioned briefly at the very end). A problem from the viewpoint of editing the article is that duality for general topological vector spaces is not entirely straightforward, but leads to rich structures and quite a lot of theory. There could be a case for a smaller scope for this article, but at least that would require that the general (and by no means "academic"!) case be mentioned and linked.
  4. Neutrality, stability and images are in good shape.

In summary, I have found the article to be good in many respects. However, the issues above are such that I am led to fail the GA nomination for now. I considered "On hold" status instead, but I find it hard to assume that the issues relating to the broadness of coverage on topological duals could be resolved in a week. One option would be to split the article into algebraic and topological duals, in which case the half on algebraic duals could be developed to the Good article level fairly quickly. However, the topic appears to have been discussed above, and reaching consensus on such a decision could take time and likely not be completed in a stable manner in one week either.

As a final note, I think this article is clearly capable of growing into a Good article status and beyond with some more editing.

Stca74 20:48, 11 May 2007 (UTC)

Wow, excellent review! That's a very good to do list and I'm hard pressed to disagree with any of it. jbolden1517Talk 20:53, 11 May 2007 (UTC)

Duality over the complex field and sesquilinear forms

I notice that the physicists' bracket notation is included, with the claim that it is a bilinear form. That's not quite true, as it is sesquilinear. More generally, I see no mention in the article of the picky issues that occur when working over C, e.g. the fact that a nondegenerate sesquilinear form yields an isomorphism of the dual with the conjugate space, not the space itself. Does anyone think that should be included, or at least mentioned? Or is that more suitable elsewhere? -- Spireguy (talk) 03:27, 25 July 2008 (UTC)

I have removed the offending passage — or at least I hope it was the passage you were referring to. Bilinear forms and sesquilinear forms deserve mention, but I believe it should be done in a separate section, say "Representing linear functionals". I would start with a non-degenerate bilinear form on a finite-dimensional vector space (bilinear over the ground field, not sesquilinear), and then mention the "representation theorem" that a non-degenerate bilinear form gives a linear isomorphism of V with its dual space. Sesquilinear forms can then be introduced (still in finite dimensions), and now with the assertion that they define an anti-isomorphism with the dual space. Finally, this can be tied in with the usual Riesz representation theorem and the infinite dimensional case of interest to physicists. Thoughts? siℓℓy rabbit (talk) 03:41, 25 July 2008 (UTC)
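
To make the planned "anti-isomorphism" point concrete (my notation; take the form conjugate-linear in the second argument): a nondegenerate sesquilinear form ⟨·,·⟩ on a finite-dimensional complex space V induces

```latex
\Phi : V \to V^{*},\qquad \Phi(w) = \langle \,\cdot\,, w \rangle ,
\qquad \Phi(\lambda w) = \overline{\lambda}\,\Phi(w),
```

so Φ is a bijection onto the dual but only conjugate-linear, i.e. an anti-isomorphism of V with V*, not an isomorphism.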

"Bilinear products and dual spaces" section all finite dimensional

The section called "Bilinear products and dual spaces" starts with the sentence: "If V is finite-dimensional, then V is isomorphic to V*." Further on, the paragraph reads: "If the bilinear form is assumed to be nondegenerate, then this is an isomorphism.", no longer referencing the finite dimensionality. I guess it's needed. Should we change the first sentence to something like: "This paragraph concerns itself with finite-dimensional vector spaces V. In this case V is isomorphic to V*. ...."? -- JanCK (talk) 15:47, 7 August 2008 (UTC)

Thanks for spotting the problem. I have tried to clear up matters. Tell me if you think this is an improvement. siℓℓy rabbit (talk) 16:03, 7 August 2008 (UTC)

Axiom of Choice

Why does a footnote say that the axiom of choice is needed to show that $\mathbb{R}^n$ has a basis? Infinite dimensional vector space, sure, but finite dimensional? That's wrong, isn't it? —Preceding unsigned comment added by (talk) 04:57, 20 December 2008 (UTC)

It's talking about $\mathbb{R}^{\mathbb{N}}$, not $\mathbb{R}^n$. --Zundark (talk) 09:03, 20 December 2008 (UTC)
Then the fonts need work! I was also fooled and I had to come all this way to find that out. Why isn't this in "image-y" font, like you used? I know it needs to be small, but it also needs to be clear. If changing the font is not an option, then it needs clarification in the form " ... where N is the natural numbers". Episanty (talk) 06:13, 30 January 2010 (UTC)

It has been suggested that Dual basis article be merged into Dual space.

I disagree. If a man wants to know what a dual basis is, it will not be logical to send him to dual space, as dual basis is simpler to understand; only a link to dual space should be given at the beginning. I think I will write some more information about dual basis in the next month, before 2009-06-25, as I am a student myself. Q0k (talk) 00:56, 29 May 2009 (UTC)

I agree with the proposed move and disagree with the previous comment. Since a dual basis is in particular a basis of the dual space, and no reasonable definition of dual basis can circumvent the concept of dual space, it is hard to see how the latter could be deemed more complicated than the former. Indeed, the first sentence of the Dual basis article bears witness to this logical dependency. Stca74 (talk) 18:53, 29 May 2009 (UTC)
Disagree--kmath (talk) 00:16, 30 May 2009 (UTC)
The reason that dual space is more complicated is because there is a lot more to talk about regarding dual spaces than bases. In general we cover this primarily in terms of the infinite dimensional case. Moreover the dual basis article might want to have more detail on Dual_space#The_infinite_dimensional_case where we cover the nature of the basis. Also there are topics like constructing bases for and subspaces like which are too specialized for this article. I'm not sure what the merge gets either article at this point. I think the merge people need to make an affirmative case. jbolden1517Talk 11:35, 30 May 2009 (UTC)
Well, the proposal being discussed is about merging dual basis into dual space, not the other way round. Obviously a well-rounded article on dual space covers much more than a reasonable article on dual basis. Indeed, the present dual space article covers essentially everything there is in Dual basis. This is in my view the key reason for supporting the proposed merger: it does not make sense to replicate the concise section on dual bases in Dual space#The finite dimensional case in another article; instead, it is better to provide a redirect to this section in Dual space, thus giving at the same time the relevant context for dual basis. The merger would not make sense if either there were a clear reason to exclude discussion of dual bases from Dual space, or if there were significant material worth including in Dual basis while too detailed for Dual space. However, any good article on dual spaces should include a short section on dual bases (as the present article does). On the other hand, the present Dual basis article does not include any significant material not in Dual space. Whether there is something to add is debatable; the only topic that comes to mind is expanding the definition to finite-rank free modules (over base rings more general than fields), but then this is a larger issue about how to best expand Wikipedia's articles on linear algebra to cover modules and not only vector spaces. Stca74 (talk) 19:29, 30 May 2009 (UTC)

example and calculation

Somewhere above people have asked for more examples and I recall one person asking for details on calculating a dual basis. Assuming the following is mathematically correct, do people think it would be useful to add as a further example in the finite-dimensional case?

"Another basis of R2 is . From the biorthogonality conditions, it is clear that the dual basis of B is given by the rows of ."

digfarenough (talk) 20:36, 22 December 2009 (UTC)
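
For what it's worth, the general recipe behind this kind of example can be checked numerically. A sketch (the specific basis is my own made-up choice, and numpy is assumed): if the basis vectors are the columns of an invertible matrix B, then the dual-basis covectors are the rows of B⁻¹, since biorthogonality says exactly that B⁻¹B = I.

```python
import numpy as np

# Hypothetical basis of R^2: basis vectors are the COLUMNS of B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])           # b1 = (1, 0), b2 = (1, 2)

D = np.linalg.inv(B)                  # dual-basis covectors are the ROWS of D
print(np.allclose(D @ B, np.eye(2)))  # biorthogonality: row i of D on b_j gives δ_ij
```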

Injection into the double dual: Clumsy wording

(Infinite-dimensional Hilbert spaces are not a counterexample to this, as they are isomorphic to their continuous duals, not to their algebraic duals.)

I'm guessing the intended meaning is: "While this isn't necessarily the case for an infinite-dimensional vector space, a Hilbert space is isomorphic to its continuous double dual." (This statement appears in a section entitled Algebraic dual space, so it's natural to understand an unqualified reference to "dual" as meaning "algebraic dual".) Fomentalist (talk) 15:00, 9 July 2011 (UTC)

Double Dual

There should be more emphasis made that the double dual is only "interesting" in infinite dimensional spaces.... — Preceding unsigned comment added by (talk) 01:15, 25 December 2011 (UTC)

Annihilators in the Infinite Dimensional Case

Presently, the article states that:

Moreover, if A and B are two subsets of V, then

(A ∩ B)^0 ⊇ A^0 + B^0,

and equality holds provided V is finite-dimensional.

It's tacitly implied that equality generally does not hold. However, it seems to be an easy exercise to prove that equality always holds, by just using the fact that short exact sequences of vector spaces split, i.e., complements to subspaces always exist. In particular, one can decompose V as

V = (A ∩ B) ⊕ A′ ⊕ B′ ⊕ C, where A = (A ∩ B) ⊕ A′ and B = (A ∩ B) ⊕ B′.

Then given any linear functional f ∈ (A ∩ B)^0, we can define functionals g and h by

g = f on B′ ⊕ C and g = 0 on (A ∩ B) ⊕ A′; h = f on A′ and h = 0 on (A ∩ B) ⊕ B′ ⊕ C.

Then it's clear that f = g + h, that g ∈ A^0, and that h ∈ B^0. (talk) 20:31, 14 May 2012 (UTC)
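
A finite-dimensional sanity check of that equality (my own sketch, assuming numpy): take V = R³ with A = span{e1, e2} and B = span{e2, e3}, compute annihilators as null spaces of the spanning matrices, and compare the span of A⁰ + B⁰ with (A ∩ B)⁰.

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Rows spanning the null space of M, computed via SVD."""
    u, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:]

# Rows of each matrix span the subspace.
A = np.array([[1.0, 0, 0], [0, 1, 0]])   # A = span{e1, e2}
B = np.array([[0.0, 1, 0], [0, 0, 1]])   # B = span{e2, e3}

annA = null_space(A)                      # A^0 = span{ε3}
annB = null_space(B)                      # B^0 = span{ε1}
annAB = null_space(np.array([[0.0, 1, 0]]))  # (A ∩ B)^0, since A ∩ B = span{e2}

# Equality of spans: ranks agree and stacking adds nothing new.
lhs = annAB
rhs = np.vstack([annA, annB])
print(np.linalg.matrix_rank(lhs) == np.linalg.matrix_rank(rhs) ==
      np.linalg.matrix_rank(np.vstack([lhs, rhs])))
```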

That seems right to me. The stated result remains true of the continuous dual space, where subspaces need not have complements, but the section under discussion is about the algebraic dual. Sławomir Biały (talk) 21:26, 14 May 2012 (UTC)

"Continuous dual space"

Excuse me, the term "continuous dual space", where is it from? In the theory of topological vector spaces people usually say just "dual space".

I would also like to insert here some facts about this notion. Eozhik (talk) 13:43, 10 November 2012 (UTC)

See, for instance, the textbook by Kadison and Ringrose on operator algebras, or Steven Roman's textbook "Advanced linear algebra". The term "topological dual space" is probably more common in analysis, though both are definitely used, especially if there is a need (as in this article) to distinguish between the two meanings of the term "dual space". Sławomir Biały (talk) 13:27, 21 April 2013 (UTC)

I am sorry, I forgot about my question of November 2012 (and I did not see your answer of April 2013). I've just changed the title of this section "Continuous dual space" to "Dual space of a topological vector space", because this term -- continuous dual space -- is indeed very rarely used in this theory. I gave references (and I am a specialist in this field).Eozhik (talk) 16:01, 24 July 2013 (UTC)

"amigo space"

Really? I get no hits on Google books. —Quondum 15:25, 2 December 2013 (UTC)

Removed, no longer relevant. —Quondum 19:22, 14 January 2014 (UTC)

Rename to "Dual vector space"?

In the context of an encyclopaedia, surely the full name "Dual vector space" should be used as the name of the article, and not the abbreviated form "Dual space"? In the context of a discussion of vector spaces, "dual space" is natural and a redirect with the shortened form should be given, but an encyclopaedia is not only about vector spaces. This is especially significant for this article, since "dual" and "space" individually have multiple meanings even when restricted to mathematics, so to combine them could produce multiple sensible meanings, and this article is about only one of these meanings. —Quondum 19:33, 14 January 2014 (UTC)