{{Dablink|For the Hilbert space-filling curve, see [[Hilbert curve]].}}
{{pp-move-indef}}
[[File:Standing waves on a string.gif|thumb|A [[vibrating string]] can be modeled as a point in a Hilbert space. The decomposition of a vibrating string into its vibrations in distinct [[overtone]]s is given by the projection of the point onto the coordinate axes in the space.]]
 
The [[mathematics|mathematical]] concept of a '''Hilbert space''', named after [[David Hilbert]], generalizes the notion of [[Euclidean space]]. It extends the methods of [[linear algebra|vector algebra]] and [[calculus]] from the two-dimensional [[plane (geometry)|Euclidean plane]] and three-dimensional space to spaces with any finite or infinite number of dimensions.  A Hilbert space is an abstract [[vector space]] possessing the [[mathematical structure|structure]] of an [[inner product space|inner product]] that allows length and angle to be measured. Furthermore, Hilbert spaces are [[Complete metric space|complete]]: there are enough [[limit (mathematics)|limits]] in the space to allow the techniques of calculus to be used.
 
Hilbert spaces arise naturally and frequently in [[mathematics]] and [[physics]], typically as infinite-dimensional [[function space]]s. The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by [[David Hilbert]], [[Erhard Schmidt]], and [[Frigyes Riesz]]. They are indispensable tools in the theories of [[partial differential equation]]s, [[mathematical formulation of quantum mechanics|quantum mechanics]], [[Fourier analysis]] (which includes applications to [[signal processing]] and heat transfer), and [[ergodic theory]], which forms the mathematical underpinning of [[thermodynamics]]. [[John von Neumann]] coined the term ''Hilbert space'' for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for [[functional analysis]]. Apart from the classical Euclidean spaces, examples of Hilbert spaces include [[Lp space|spaces of square-integrable functions]], [[Sequence space|spaces of sequences]], [[Sobolev space]]s consisting of [[generalized function]]s, and [[Hardy space]]s of [[holomorphic function]]s.
 
Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the [[Pythagorean theorem]] and [[parallelogram law]] hold in a Hilbert space. At a deeper level, perpendicular projection onto a subspace (the analog of "[[altitude (triangle)|dropping the altitude]]" of a triangle) plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to a set of [[coordinate axes]] (an [[orthonormal basis]]), in analogy with Cartesian coordinates in the plane. When that set of axes is [[countably infinite]], this means that the Hilbert space can also usefully be thought of in terms of [[infinite sequence]]s that are [[Lp space#Lp spaces|square-summable]]. [[Linear operator]]s on a Hilbert space are likewise fairly concrete objects: in good cases, they are simply transformations that stretch the space by different factors in mutually perpendicular directions in a sense that is made precise by the study of their [[spectral theory|spectrum]].
 
==Definition and illustration==
 
===Motivating example: Euclidean space===
One of the most familiar examples of a Hilbert space is the [[Euclidean space]] consisting of three-dimensional [[Euclidean vector|vectors]], denoted by '''R'''<sup>3</sup>, and equipped with the [[dot product]].  The dot product takes two vectors '''x''' and '''y''', and produces a real number '''x'''·'''y'''.  If '''x''' and '''y''' are represented in [[Cartesian coordinates]], then the dot product is defined by
:<math>(x_1,x_2,x_3)\cdot (y_1,y_2,y_3) = x_1y_1+x_2y_2+x_3y_3.</math>
The dot product satisfies the properties:
#It is symmetric in '''x''' and '''y''': '''x'''&nbsp;·&nbsp;'''y'''&nbsp;=&nbsp;'''y'''&nbsp;·&nbsp;'''x'''.
#It is [[linear function|linear]] in its first argument: (''a'''''x'''<sub>1</sub>&nbsp;+&nbsp;''b'''''x'''<sub>2</sub>)&nbsp;·&nbsp;'''y'''&nbsp;=&nbsp;''a'''''x'''<sub>1</sub>&nbsp;·&nbsp;'''y'''&nbsp;+&nbsp;''b'''''x'''<sub>2</sub>&nbsp;·&nbsp;'''y''' for any scalars ''a'', ''b'', and vectors '''x'''<sub>1</sub>, '''x'''<sub>2</sub>, and '''y'''.
#It is [[Definite bilinear form|positive definite]]: for all vectors '''x''', '''x'''&nbsp;·&nbsp;'''x'''&nbsp;≥&nbsp;0, with equality [[if and only if]] '''x'''&nbsp;=&nbsp;0.
 
An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as a (real) [[inner product]]. A [[vector space]] equipped with such an inner product is known as a (real) [[inner product space]].  Every finite-dimensional inner product space is also a Hilbert space. The basic feature of the dot product that connects it with Euclidean geometry is that it is related to both the length (or [[norm (mathematics)|norm]]) of a vector, denoted ||'''x'''||, and to the angle θ between two vectors '''x''' and '''y''' by means of the formula
 
:<math>\mathbf{x}\cdot\mathbf{y} = \|\mathbf{x}\|\,\|\mathbf{y}\|\,\cos\theta.</math>
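The formula can be checked numerically. The following is a small illustrative sketch in Python with NumPy (the particular vectors are arbitrary examples, not taken from the text above):
<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

dot = x @ y                                   # x . y = 1*3 + 2*0 + 2*4 = 11
norm_x = np.sqrt(x @ x)                       # ||x|| = 3
norm_y = np.sqrt(y @ y)                       # ||y|| = 5
theta = np.arccos(dot / (norm_x * norm_y))    # angle satisfying x . y = ||x|| ||y|| cos(theta)

print(dot, norm_x, norm_y, np.degrees(theta))
</syntaxhighlight>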
 
[[File:Completeness in Hilbert space.png|thumb|right|Completeness means that if a particle moves along the broken path (in blue) travelling a finite total distance, then the particle has a [[Well defined|well-defined]] net displacement (in orange).]]
[[Multivariable calculus]] in Euclidean space relies on the ability to compute [[limit (mathematics)|limits]], and to have useful criteria for concluding that limits exist.  A [[series (mathematics)|mathematical series]]
:<math>\sum_{n=0}^\infty \mathbf{x}_n</math>
consisting of vectors in '''R'''<sup>3</sup> is [[absolute convergence|absolutely convergent]] provided that the sum of the lengths converges as an ordinary series of real numbers:<ref>{{harvnb|Marsden|1974|loc=§2.8}}</ref>
:<math>\sum_{k=0}^\infty \|\mathbf{x}_k\| < \infty.</math>
Just as with a series of scalars, a series of vectors that converges absolutely also converges to some limit vector '''L''' in the Euclidean space, in the sense that
:<math>\left\|\mathbf{L}-\sum_{k=0}^N\mathbf{x}_k\right\|\to 0\quad\text{as }N\to\infty.</math>
This property expresses the ''completeness'' of Euclidean space: that a series that converges absolutely also converges in the ordinary sense.
 
===Definition===
A '''Hilbert space''' ''H'' is a [[real number|real]] or [[complex number|complex]] [[inner product space]] that is also a [[complete metric space]] with respect to the distance function induced by the inner product.<ref name="General">The mathematical material in this section can be found in any good textbook on functional analysis, such as {{Harvtxt|Dieudonné|1960}}, {{Harvtxt|Hewitt|Stromberg|1965}}, {{Harvtxt|Reed|Simon|1980}} or {{Harvtxt|Rudin|1980}}.</ref>  To say that ''H'' is a complex inner product space means that ''H'' is a complex vector space on which there is an inner product <math>\langle x,y\rangle</math> associating a complex number to each pair of elements ''x'',''y'' of ''H'' that satisfies the following properties:
* The inner product of a pair of elements is equal to the [[complex conjugate]] of the inner product of the swapped elements:
::<math>\langle y,x\rangle = \overline{\langle x, y\rangle}.</math>
* The inner product is [[linear functional|linear]] in its first argument.<ref>In some conventions, inner products are linear in their second arguments instead.</ref>  For all complex numbers ''a'' and ''b'',
::<math>\langle ax_1+bx_2, y\rangle = a\langle x_1, y\rangle + b\langle x_2, y\rangle.</math>
* The inner product of an element with itself is [[Definite bilinear form|positive definite]]:
::<math>\langle x,x\rangle \ge 0</math>
:where the case of equality holds precisely when ''x''&nbsp;=&nbsp;0.
It follows from properties 1 and 2 that a complex inner product is [[Antilinear map|antilinear]] in its second argument, meaning that
:<math>\langle x, ay_1+by_2\rangle = \bar{a}\langle x, y_1\rangle + \bar{b}\langle x, y_2\rangle.</math>
A real inner product space is defined in the same way, except that ''H'' is a real vector space and the inner product takes real values.  Such an inner product will be bilinear: that is, linear in each argument.
 
The [[norm (mathematics)|norm]] is the real-valued function
:<math>\|x\| = \sqrt{\langle x,x \rangle},</math>
and the distance ''d'' between two points ''x'',''y'' in ''H'' is defined in terms of the norm by
:<math>d(x,y)=\|x-y\| = \sqrt{\langle x-y,x-y \rangle}.</math>
That this function is a distance function means (1) that it is symmetric in ''x'' and ''y'', (2) that the distance between ''x'' and itself is zero, and otherwise the distance between ''x'' and ''y'' must be positive, and (3) that the [[triangle inequality]] holds, meaning that the length of one leg of a triangle ''xyz'' cannot exceed the sum of the lengths of the other two legs:
:<math>d(x,z) \le d(x,y) + d(y,z).</math>
[[File:Triangle inequality in a metric space.svg|300px|center]]
 
This last property is ultimately a consequence of the more fundamental [[Cauchy–Schwarz inequality]], which asserts
:<math>|\langle x, y\rangle| \le \|x\|\,\|y\|</math>
with equality if and only if ''x'' and ''y'' are [[Linear independence|linearly dependent]].
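As an illustrative check of the inequality (a Python/NumPy sketch with randomly chosen complex vectors; not part of the formal development):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def inner(x, y):
    # complex inner product, linear in the first argument
    return np.sum(x * np.conj(y))

for _ in range(1000):
    x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    lhs = abs(inner(x, y))
    rhs = np.sqrt(inner(x, x).real) * np.sqrt(inner(y, y).real)
    assert lhs <= rhs + 1e-12                 # |<x, y>| <= ||x|| ||y||

# equality occurs when x and y are linearly dependent
y = (2 - 3j) * x
print(abs(inner(x, y)), np.sqrt(inner(x, x).real) * np.sqrt(inner(y, y).real))
</syntaxhighlight>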
 
Relative to a distance function defined in this way, any inner product space is a [[metric space]], and sometimes is known as a '''pre-Hilbert space'''.<ref>{{harvnb|Dieudonné|1960|loc=§6.2}}</ref> Any pre-Hilbert space that is additionally also a [[complete space|complete]] space is a Hilbert space.  Completeness is expressed using a form of the [[Cauchy criterion]] for sequences in ''H'': a pre-Hilbert space ''H'' is complete if every [[Cauchy sequence]] [[limit (mathematics)|converges with respect to this norm]] to an element in the space. Completeness can be characterized by the following equivalent condition: if a series of vectors <math>\textstyle{\sum_{k=0}^\infty u_k}</math> [[absolute convergence|converges absolutely]] in the sense that
:<math>\sum_{k=0}^\infty\|u_k\| < \infty,</math>
then the series converges in ''H'', in the sense that the partial sums converge to an element of ''H''.
 
As complete normed spaces, Hilbert spaces are by definition also [[Banach space]]s.  As such they are [[topological vector space]]s, in which [[topology|topological]] notions like the [[open set|openness]] and [[closed set|closedness]] of subsets are well-defined.  Of special importance is the notion of a closed [[linear subspace]] of a Hilbert space that, with the inner product induced by restriction, is also complete (being a closed set in a complete metric space) and therefore a Hilbert space in its own right.
 
===Second example: sequence spaces===
The [[sequence space]] ''ℓ'' <sup>2</sup> consists of all [[sequence (mathematics)|infinite sequences]] '''z'''&nbsp;=&nbsp;(''z''<sub>1</sub>,''z''<sub>2</sub>,...) of complex numbers such that the [[series (mathematics)|series]]
:<math>\sum_{n=1}^\infty |z_n|^2</math>
[[convergent series|converges]].  The inner product on ''ℓ'' <sup>2</sup> is defined by
:<math>\langle \mathbf{z},\mathbf{w}\rangle = \sum_{n=1}^\infty z_n\overline{w_n},</math>
with the latter series converging as a consequence of the Cauchy–Schwarz inequality.
 
Completeness of the space holds provided that whenever a series of elements from ''ℓ'' <sup>2</sup> converges absolutely (in norm), then it converges to an element of ''ℓ'' <sup>2</sup>. The proof is basic in [[mathematical analysis]], and permits mathematical series of elements of the space to be manipulated with the same ease as series of complex numbers (or vectors in a finite-dimensional Euclidean space).<ref>{{harvnb|Dieudonné|1960}}</ref>
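As a concrete illustration (a Python/NumPy sketch; the sequences are arbitrary examples), the inner product of the square-summable sequence ''z''<sub>''n''</sub>&nbsp;=&nbsp;1/''n'' with itself can be approximated by truncation, and it converges to π<sup>2</sup>/6:
<syntaxhighlight lang="python">
import numpy as np

N = 100_000
n = np.arange(1, N + 1)
z = 1.0 / n                          # (1, 1/2, 1/3, ...) lies in l^2 since sum 1/n^2 converges
w = 1.0 / n

partial = np.sum(z * np.conj(w))     # truncated inner product <z, w>
print(partial, np.pi**2 / 6)         # approaches pi^2/6 = 1.6449...
</syntaxhighlight>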
 
==History==
[[File:Hilbert.jpg|thumb|right|[[David Hilbert]]]]
Prior to the development of Hilbert spaces, other generalizations of Euclidean spaces were known to [[mathematician]]s and physicists. In particular, the idea of an [[vector space|abstract linear space]] had gained some traction towards the end of the 19th century:<ref>Largely from the work of [[Hermann Grassmann]], at the urging of [[August Ferdinand Möbius]] {{harv|Boyer|Merzbach|1991|pp=584–586}}. The first modern axiomatic account of abstract vector spaces ultimately appeared in [[Giuseppe Peano]]'s 1888 account ({{harvnb|Grattan-Guinness|2000|loc=§5.2.2}}; {{harvnb|O'Connor|Robertson|1996}}).</ref> this is a space whose elements can be added together and multiplied by scalars (such as [[real numbers|real]] or [[complex numbers]]) without necessarily identifying these elements with [[vector (geometric)|"geometric" vectors]], such as position and momentum vectors in physical systems. Other objects studied by mathematicians at the turn of the 20th century, in particular spaces of [[sequence (mathematics)|sequences]] (including [[series (mathematics)|series]]) and spaces of functions,<ref>A detailed account of the history of Hilbert spaces can be found in {{harvnb|Bourbaki|1987}}.</ref> can naturally be thought of as linear spaces. Functions, for instance, can be added together or multiplied by constant scalars, and these operations obey the algebraic laws satisfied by addition and scalar multiplication of spatial vectors.
 
In the first decade of the 20th century, parallel developments led to the introduction of Hilbert spaces. The first of these was the observation, which arose during [[David Hilbert]] and [[Erhard Schmidt]]'s study of [[integral equations]],<ref>{{harvnb|Schmidt|1908}}</ref> that two [[square-integrable]] real-valued functions ''f'' and ''g'' on an interval [''a'',''b''] have an ''inner product''
 
:<math>\langle f,g \rangle = \int_a^b f(x)g(x)\,dx</math>
 
which has many of the familiar properties of the Euclidean dot product. In particular, the idea of an [[orthogonality|orthogonal]] family of functions has meaning. Schmidt exploited the similarity of this inner product with the usual dot product to prove an analog of the [[spectral decomposition]] for an operator of the form
 
:<math>f(x) \mapsto \int_a^b K(x,y) f(y)\, dy</math>
 
where ''K'' is a continuous function symmetric in ''x'' and ''y''. The resulting [[eigenfunction expansion]] expresses the function ''K'' as a series of the form
 
:<math>K(x,y) = \sum_n \lambda_n\varphi_n(x)\varphi_n(y)\,</math>
 
where the functions ''φ''<sub>''n''</sub> are orthogonal in the sense that {{nowrap|⟨''φ''<sub>''n''</sub>,''φ''<sub>''m''</sub>⟩ {{=}} 0}} for all {{nowrap|''n'' ≠ ''m''}}.  The individual terms in this series are sometimes referred to as elementary product solutions.  However, there are eigenfunction expansions that fail to converge in a suitable sense to a square-integrable function: the missing ingredient, which ensures convergence, is completeness.<ref>{{harvnb|Titchmarsh|1946|loc=§IX.1}}</ref>
 
The second development was the [[Lebesgue integral]], an alternative to the [[Riemann integral]] introduced by [[Henri Lebesgue]] in 1904.<ref>{{harvnb|Lebesgue|1904}}. Further details on the history of integration theory can be found in {{harvtxt|Bourbaki|1987}} and {{harvtxt|Saks|2005}}.</ref> The Lebesgue integral made it possible to integrate a much broader class of functions. In 1907, [[Frigyes Riesz]] and [[Ernst Sigismund Fischer]] independently proved that the space ''L''<sup>2</sup> of square Lebesgue-integrable functions is a [[complete metric space]].<ref>{{harvnb|Bourbaki|1987}}.</ref> As a consequence of the interplay between geometry and completeness, the 19th century results of [[Joseph Fourier]], [[Friedrich Bessel]] and [[Marc-Antoine Parseval]] on [[trigonometric series]] easily carried over to these more general spaces, resulting in a geometrical and analytical apparatus now usually known as the [[Riesz–Fischer theorem]].<ref>{{harvnb|Dunford|Schwartz|1958|loc=§IV.16}}</ref>
 
Further basic results were proved in the early 20th century. For example, the [[Riesz representation theorem]] was independently established by [[Maurice Fréchet]] and [[Frigyes Riesz]] in 1907.<ref>In {{harvtxt|Dunford|Schwartz|1958|loc=§IV.16}}, the result that every linear functional on L<sup>2</sup>[0,1] is represented by integration is jointly attributed to {{harvtxt|Fréchet|1907}} and {{harvtxt|Riesz|1907}}. The general result, that the dual of a Hilbert space is identified with the Hilbert space itself, can be found in {{harvtxt|Riesz|1934}}.</ref> [[John von Neumann]] coined the term ''abstract Hilbert space'' in his work on unbounded [[Self-adjoint operator|Hermitian operators]].<ref>{{Harvnb|von Neumann|1929}}.</ref> Although other mathematicians such as [[Hermann Weyl]] and [[Norbert Wiener]] had already studied particular Hilbert spaces in great detail, often from a physically motivated point of view, von Neumann gave the first complete and axiomatic treatment of them.<ref>{{harvnb|Kline|1972|p=1092}}</ref> Von Neumann later used them in his seminal work on the foundations of quantum mechanics,<ref>{{Harvnb|Hilbert|Nordheim|von Neumann|1927}}.</ref> and in his continued work with [[Eugene Wigner]]. The name "Hilbert space" was soon adopted by others, for example by Hermann Weyl in his book on quantum mechanics and the theory of groups.<ref name="Weyl31">{{Harvnb|Weyl|1931}}.</ref>
 
The significance of the concept of a Hilbert space was underlined with the realization that it offers one of the best [[mathematical formulation of quantum mechanics|mathematical formulations of quantum mechanics]].<ref>{{harvnb|Prugovečki|1981|pp=1–10}}.</ref> In short, the states of a quantum mechanical system are vectors in a certain Hilbert space, the observables are [[hermitian operator]]s on that space, the [[symmetry|symmetries]] of the system are [[unitary operator]]s, and [[quantum measurement|measurements]] are [[orthogonal projection]]s. The relation between quantum mechanical symmetries and unitary operators provided an impetus for the development of the [[unitary representation|unitary]] [[representation theory]] of [[group (mathematics)|groups]], initiated in the 1928 work of Hermann Weyl.<ref name="Weyl31" /> On the other hand, in the early 1930s it became clear that classical mechanics can be described in terms of Hilbert space ([[Koopman–von Neumann classical mechanics]]) and that certain properties of classical [[dynamical systems]] can be analyzed using Hilbert space techniques in the framework of [[ergodic theory]].<ref name="von Neumann 1932">{{harvnb|von Neumann|1932}}</ref>
 
The algebra of [[observable]]s in quantum mechanics is naturally an algebra of operators defined on a Hilbert space, according to [[Werner Heisenberg]]'s [[matrix mechanics]] formulation of quantum theory.  Von Neumann began investigating [[operator algebra]]s in the 1930s, as [[ring (mathematics)|rings]] of operators on a Hilbert space. The kind of algebras studied by von Neumann and his contemporaries are now known as [[von Neumann algebra]]s.  In the 1940s, [[Israel Gelfand]], [[Mark Naimark]] and [[Irving Segal]] gave a definition of a kind of operator algebras called [[C*-algebra]]s that on the one hand made no reference to an underlying Hilbert space, and on the other extrapolated many of the useful features of the operator algebras that had previously been studied.  In particular, the spectral theorem for self-adjoint operators that underlies much of the existing Hilbert space theory was generalized to C*-algebras.  These techniques are now basic in abstract harmonic analysis and representation theory.
 
==Examples==
 
===Lebesgue spaces===
{{Main|Lp space|l1=L<sup>p</sup> space}}
 
Lebesgue spaces are [[function space]]s associated to [[measure (mathematics)|measure spaces]] (''X'', ''M'', ''μ''), where ''X'' is a set, ''M'' is a [[Sigma-algebra|σ-algebra]] of subsets of ''X'', and ''μ'' is a [[countably additive measure]] on ''M''.  Let ''L''<sup>2</sup>(''X'', μ) be the space of those complex-valued measurable functions on ''X'' for which the [[Lebesgue integration|Lebesgue integral]] of the square of the [[absolute value]] of the function is finite, i.e., for a function ''f'' in ''L''<sup>2</sup>(''X'',μ),
 
:<math> \int_X |f|^2 d \mu  < \infty, </math>
 
and where functions are identified if and only if they differ only on a [[null set|set of measure zero]].
 
The inner product of functions ''f'' and ''g'' in ''L''<sup>2</sup>(''X'', μ) is then defined as
:<math>\langle f,g\rangle=\int_X f(t) \overline{g(t)} \ d \mu(t).</math>
 
For ''f'' and ''g'' in ''L''<sup>2</sup>, this integral exists because of the Cauchy&ndash;Schwarz inequality, and defines an inner product on the space. Equipped with this inner product, ''L''<sup>2</sup> is in fact complete.<ref>{{Harvnb|Halmos|1957|loc=Section 42}}.</ref> The Lebesgue integral is essential to ensure completeness: on domains of real numbers, for instance, not enough functions are [[Riemann integral|Riemann integrable]].<ref>{{Harvnb|Hewitt|Stromberg|1965}}.</ref>
 
The Lebesgue spaces appear in many natural settings.  The spaces ''L''<sup>2</sup>('''R''') and ''L''<sup>2</sup>([0,1]) of square-integrable functions with respect to the [[Lebesgue measure]] on the real line and unit interval, respectively, are natural domains on which to define the Fourier transform and Fourier series.  In other situations, the measure may be something other than the ordinary Lebesgue measure on the real line.  For instance, if ''w'' is any positive measurable function, the space of all measurable functions ''f'' on the interval [0, 1] satisfying
:<math>\int_0^1 |f(t)|^2w(t)\,dt < \infty</math>
is called the [[Lp space#Weighted Lp spaces|weighted ''L''<sup>2</sup> space]] ''L''{{su|p=2|b=''w''}}([0,1]), and ''w'' is called the weight function.  The inner product is defined by
:<math>\langle f,g\rangle=\int_0^1 f(t) \overline{g(t)} w(t) \, dt.</math>
The weighted space ''L''{{su|p=2|b=''w''}}([0,1]) is identical with the Hilbert space ''L''<sup>2</sup>([0,1],μ) where the measure μ of a Lebesgue-measurable set ''A'' is defined by
:<math>\mu(A) = \int_A w(t)\,dt.</math>
Weighted ''L''<sup>2</sup> spaces like this are frequently used to study orthogonal polynomials, because different families of orthogonal polynomials are orthogonal with respect to different weighting functions.
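For example, the [[Chebyshev polynomials]] are orthogonal with respect to the weight ''w''(''x'')&nbsp;= (1&nbsp;&minus;&nbsp;''x''<sup>2</sup>)<sup>&minus;1/2</sup> on [&minus;1,&nbsp;1]. The following Python/NumPy sketch (illustrative only; it uses Gauss–Chebyshev quadrature on [&minus;1,&nbsp;1] rather than the interval [0,&nbsp;1] used above) checks this numerically:
<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import chebyshev as C

# Gauss-Chebyshev quadrature: nodes and weights for the weight w(x) = 1/sqrt(1 - x^2)
nodes, weights = C.chebgauss(64)

def weighted_inner(f, g):
    # <f, g> = integral over [-1, 1] of f(x) g(x) / sqrt(1 - x^2) dx
    return np.sum(weights * f(nodes) * g(nodes))

T2 = lambda x: C.chebval(x, [0, 0, 1])       # Chebyshev polynomial T_2
T3 = lambda x: C.chebval(x, [0, 0, 0, 1])    # Chebyshev polynomial T_3

print(weighted_inner(T2, T3))   # ~0: T_2 and T_3 are orthogonal in this weighted space
print(weighted_inner(T2, T2))   # pi/2: the squared norm of T_2
</syntaxhighlight>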
 
===Sobolev spaces===
[[Sobolev space]]s, denoted by ''H''<sup>''s''</sup> or {{nowrap|''W''<sup> ''s'', 2</sup>}}, are Hilbert spaces. These are a special kind of [[function space]] in which [[derivative|differentiation]] may be performed, but that (unlike other [[Banach spaces]] such as the [[Hölder space]]s) support the structure of an inner product. Because differentiation is permitted, Sobolev spaces are a convenient setting for the theory of [[partial differential equations]].<ref name="BeJoSc81" /> They also form the basis of the theory of [[Direct method in calculus of variations|direct methods in the calculus of variations]].<ref>{{Harvnb|Giusti|2003}}.<!--Find a reference more specific to the case p=2--></ref>
 
For ''s'' a non-negative integer and {{nowrap|Ω ⊂ '''R'''<sup>''n''</sup>}}, the Sobolev space ''H''<sup>''s''</sup>(Ω) contains L<sup>2</sup> functions whose [[weak derivative]]s of order up to ''s'' are also L<sup>2</sup>. The inner product in ''H''<sup>s</sup>(Ω) is
 
:<math>\langle f,g\rangle = \int_\Omega f(x)\bar{g}(x)\,dx + \int_\Omega D f(x)\cdot D\bar{g}(x)\,dx + \cdots + \int_\Omega D^s f(x)\cdot D^s \bar{g}(x)\, dx</math>
 
where the dot indicates the dot product in the Euclidean space of partial derivatives of each order. Sobolev spaces can also be defined when ''s'' is not an integer.
 
Sobolev spaces are also studied from the point of view of spectral theory, relying more specifically on the Hilbert space structure.  If Ω is a suitable domain, then one can define the Sobolev space ''H''<sup>''s''</sup>(Ω) as the space of [[Bessel potential]]s;<ref>{{harvnb|Stein|1970}}</ref> roughly,
:<math>H^s(\Omega) = \{ (1-\Delta)^{-s/2}f | f\in L^2(\Omega)\}.</math>
Here Δ is the Laplacian and (1&nbsp;&minus;&nbsp;Δ)<sup>&minus;''s''/2</sup> is understood in terms of the [[spectral mapping theorem]].  Apart from providing a workable definition of Sobolev spaces for non-integer ''s'', this definition also has particularly desirable properties under the [[Fourier transform]] that make it ideal for the study of [[pseudodifferential operator]]s.  Using these methods on a [[compact space|compact]] [[Riemannian manifold]], one can obtain for instance the [[Hodge decomposition]], which is the basis of [[Hodge theory]].<ref>Details can be found in {{harvtxt|Warner|1983}}.</ref>
 
===Spaces of holomorphic functions===
;Hardy spaces
The [[Hardy space]]s are function spaces, arising in [[complex analysis]] and [[harmonic analysis]], whose elements are certain [[holomorphic function]]s in a complex domain.<ref>A general reference on Hardy spaces is the book {{harvtxt|Duren|1970}}.</ref> Let ''U'' denote the [[unit disc]] in the complex plane. Then the Hardy space ''H''<sup>2</sup>(''U'') is defined as the space of holomorphic functions ''f'' on ''U'' such that the means
 
:<math>M_r(f) = \frac{1}{2\pi}\int_0^{2\pi}|f(re^{i\theta})|^2\,d\theta</math>
 
remain bounded for {{nowrap|''r'' < 1}}. The norm on this Hardy space is defined by
 
:<math>\|f\|_2 = \lim_{r\to 1} \sqrt{M_r(f)}.</math>
 
Hardy spaces in the disc are related to Fourier series. A function ''f'' is in ''H''<sup>2</sup>(''U'') if and only if
 
:<math>f(z) = \sum_{n=0}^\infty a_nz^n</math>
 
where
 
:<math>\sum_{n=0}^\infty\,|a_n|^2 < \infty.</math>
 
Thus ''H''<sup>2</sup>(''U'') consists of those functions that are L<sup>2</sup> on the circle, and whose negative frequency Fourier coefficients vanish.
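As an illustration (a Python/NumPy sketch; the chosen function is an arbitrary example), the function ''f''(''z'')&nbsp;= 1/(1&nbsp;&minus;&nbsp;''z''/2) has Taylor coefficients ''a''<sub>''n''</sub>&nbsp;= 2<sup>&minus;''n''</sup>, so it belongs to ''H''<sup>2</sup>(''U''), and the means ''M''<sub>''r''</sub>(''f'') stay bounded by Σ|''a''<sub>''n''</sub>|<sup>2</sup>&nbsp;= 4/3:
<syntaxhighlight lang="python">
import numpy as np

f = lambda z: 1.0 / (1.0 - z / 2)          # holomorphic on the unit disc, a_n = 2**(-n)

theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
for r in (0.5, 0.9, 0.99, 0.999):
    M_r = np.mean(np.abs(f(r * np.exp(1j * theta)))**2)   # mean of |f|^2 over the circle |z| = r
    print(r, M_r)

print(sum(4.0**(-n) for n in range(60)))   # sum of |a_n|^2 = 4/3, the limiting value
</syntaxhighlight>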
 
;Bergman spaces
The [[Bergman space]]s are another family of Hilbert spaces of holomorphic functions.<ref>{{harvnb|Krantz|2002|loc=§1.4}}</ref>  Let ''D'' be a bounded open set in the [[complex plane]] (or a higher dimensional complex space) and let ''L''<sup>2,''h''</sup>(''D'') be the space of holomorphic functions ''f'' in ''D'' that are also in ''L''<sup>2</sup>(''D'') in the sense that
:<math>\|f\|^2 = \int_D |f(z)|^2\,d\mu(z) < \infty,</math>
where the integral is taken with respect to the Lebesgue measure in ''D''.  Clearly ''L''<sup>2, ''h''</sup>(''D'') is a subspace of ''L''<sup>2</sup>(''D''); in fact, it is a [[closed set|closed]] subspace, and so a Hilbert space in its own right.  This is a consequence of the estimate, valid on [[compact space|compact]] subsets ''K'' of ''D'', that
:<math>\sup_{z\in K} |f(z)| \le C_K \|f\|_2,</math>
which in turn follows from [[Cauchy's integral formula]].  Thus convergence of a sequence of holomorphic functions in ''L''<sup>2</sup>(''D'') implies also [[compact convergence]], and so the limit function is also holomorphic.  Another consequence of this inequality is that the linear functional that evaluates a function ''f'' at a point of ''D'' is actually continuous on ''L''<sup>2,''h''</sup>(''D'').  The Riesz representation theorem implies that the evaluation functional can be represented as an element of ''L''<sup>2,''h''</sup>(''D'').  Thus, for every ''z''&nbsp;∈&nbsp;''D'', there is a function η<sub>''z''</sub>&nbsp;∈&nbsp;''L''<sup>2,''h''</sup>(''D'') such that
:<math>f(z) = \int_D f(\zeta)\overline{\eta_z(\zeta)}\,d\mu(\zeta)</math>
for all ''f'' ∈ ''L''<sup>2,''h''</sup>(''D''). The integrand
:<math>K(\zeta,z) = \overline{\eta_z(\zeta)}</math>
is known as the [[Bergman kernel]] of ''D''.  This [[integral kernel]] satisfies a reproducing property
:<math>f(z) = \int_D f(\zeta)K(\zeta,z)\,d\mu(\zeta).</math>
 
A Bergman space is an example of a [[reproducing kernel Hilbert space]], which is a Hilbert space of functions along with a kernel ''K''(ζ,''z'') that verifies a reproducing property analogous to this one.  The Hardy space ''H''<sup>2</sup>(''D'') also admits a reproducing kernel, known as the [[Szegő kernel]].<ref>{{harvnb|Krantz|2002|loc=§1.5}}</ref>  Reproducing kernels are common in other areas of mathematics as well.  For instance, in [[harmonic analysis]] the [[Poisson kernel]] is a reproducing kernel for the Hilbert space of square-integrable [[harmonic function]]s in the [[unit ball]].  That the latter is a Hilbert space at all is a consequence of the mean value theorem for harmonic functions.
 
==Applications==
Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like [[projection operator|projection]] and [[change of basis]] from their usual finite dimensional setting. In particular, the [[spectral theory]] of [[continuous function|continuous]] [[self-adjoint operator|self-adjoint]] [[linear operator]]s on a Hilbert space generalizes the usual [[spectral decomposition]] of a [[matrix (mathematics)|matrix]], and this often plays a major role in applications of the theory to other areas of mathematics and physics.
 
===Sturm–Liouville theory===
{{Main|Sturm–Liouville theory|Spectral theory of ordinary differential equations}}
[[File:Harmonic partials on strings.svg|right|thumb|The [[overtone]]s of a vibrating string. These are [[eigenfunction]]s of an associated Sturm–Liouville problem. The eigenvalues 1,1/2,1/3,… form the (musical) [[harmonic series (music)|harmonic series]].]]
In the theory of [[ordinary differential equation]]s, spectral methods on a suitable Hilbert space are used to study the behavior of eigenvalues and eigenfunctions of differential equations. For example, the [[Sturm–Liouville theory|Sturm–Liouville problem]] arises in the study of the harmonics of waves in a violin string or a drum, and is a central problem in [[ordinary differential equations]].<ref>{{harvnb|Young|1988|loc=Chapter 9}}.</ref> The problem is a differential equation of the form
:<math> -\frac{d}{dx}\left[p(x)\frac{dy}{ dx}\right]+q(x)y=\lambda w(x)y</math>
for an unknown function ''y'' on an interval [''a'',''b''], satisfying general homogeneous [[Robin boundary conditions]]
:<math>\begin{cases}
\alpha y(a)+\alpha' y'(a)=0\\
\beta y(b) + \beta' y'(b)=0.
\end{cases}</math>
The functions ''p'', ''q'', and ''w'' are given in advance, and the problem is to find the function ''y'' and constants λ for which the equation has a solution. The problem only has solutions for certain values of λ, called eigenvalues of the system, and this is a consequence of the spectral theorem for [[compact operator]]s applied to the [[integral operator]] defined by the [[Green's function]] for the system. Furthermore, another consequence of this general result is that the eigenvalues λ of the system can be arranged in an increasing sequence tending to infinity.<ref>The eigenvalues of the Fredholm kernel are 1/λ, which tend to zero.</ref>
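A simple numerical illustration of this behaviour (a Python/NumPy sketch, not part of the theory above) takes ''p''&nbsp;= ''w''&nbsp;= 1 and ''q''&nbsp;= 0 on [0,&nbsp;π] with Dirichlet conditions, for which the eigenvalues are exactly ''n''<sup>2</sup>; discretizing the equation by finite differences recovers them approximately:
<syntaxhighlight lang="python">
import numpy as np

# Discretize -y'' = lambda * y on [0, pi] with y(0) = y(pi) = 0
# (p = 1, q = 0, w = 1 in the general Sturm-Liouville form above)
N = 500
h = np.pi / N
main = 2.0 * np.ones(N - 1) / h**2
off = -1.0 * np.ones(N - 2) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigvals = np.linalg.eigvalsh(A)
print(eigvals[:5])        # approximately 1, 4, 9, 16, 25 = n^2, increasing to infinity
</syntaxhighlight>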
 
===Partial differential equations===
Hilbert spaces form a basic tool in the study of [[partial differential equations]].<ref name="BeJoSc81">{{harvnb|Bers|John|Schechter|1981}}.</ref> For many classes of partial differential equations, such as linear [[elliptic partial differential equation|elliptic equations]], it is possible to consider a generalized solution (known as a [[weak derivative|weak]] solution) by enlarging the class of functions.  Many weak formulations involve the class of [[Sobolev space|Sobolev functions]], which is a Hilbert space.  A suitable weak formulation reduces to a geometrical problem the analytic problem of finding a solution or, often what is more important, showing that a solution exists and is unique for given boundary data.  For linear elliptic equations, one geometrical result that ensures unique solvability for a large class of problems is the [[Lax–Milgram theorem]].  This strategy forms the rudiment of the [[Galerkin method]] (a [[finite element method]]) for numerical solution of partial differential equations.<ref>More detail on finite element methods from this point of view can be found in {{harvtxt|Brenner|Scott|2005}}.</ref>
 
A typical example is the [[Poisson equation]] {{nowrap|−Δ''u'' {{=}} ''g''}} with [[Dirichlet boundary conditions]] in a bounded domain Ω in '''R'''<sup>2</sup>.  The weak formulation consists of finding a function ''u'' such that, for all continuously differentiable functions ''v'' in Ω vanishing on the boundary:
:<math>\int_\Omega \nabla u\cdot\nabla v = \int_\Omega gv.</math>
 
This can be recast in terms of the Hilbert space ''H''{{su|p=1|b=0}}(Ω) consisting of functions ''u'' such that ''u'', along with its weak partial derivatives, are square integrable on  Ω, and vanish on the boundary. The question then reduces to finding ''u'' in this space such that for all ''v'' in this space
:<math>a(u,v) = b(v)</math>
 
where ''a'' is a continuous [[bilinear form]], and ''b'' is a continuous [[linear functional]], given respectively by
:<math>a(u,v) = \int_\Omega \nabla u\cdot\nabla v,\quad b(v)= \int_\Omega gv.</math>
 
Since the Poisson equation is [[elliptic partial differential equation|elliptic]], it follows from Poincaré's inequality that the bilinear form ''a'' is [[coercive function|coercive]]. The Lax–Milgram theorem then ensures the existence and uniqueness of solutions of this equation.
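A one-dimensional analogue illustrates the Galerkin idea (a Python/NumPy sketch with an arbitrary right-hand side; not the general method): solving {{nowrap|&minus;''u''&Prime; {{=}} ''g''}} on (0,&nbsp;1) with ''u''(0)&nbsp;= ''u''(1)&nbsp;= 0 using the trial functions sin(''k''π''x''), which are orthogonal in the relevant inner products:
<syntaxhighlight lang="python">
import numpy as np

# Galerkin sketch for -u'' = g on (0,1), u(0) = u(1) = 0,
# using the orthogonal trial functions e_k(x) = sin(k*pi*x).
g = lambda x: np.ones_like(x)               # take g = 1; exact solution is u(x) = x(1 - x)/2

M = 4000
dx = 1.0 / M
xm = (np.arange(M) + 0.5) * dx              # midpoint quadrature nodes on (0, 1)

K = 50
u_half = 0.0
for k in range(1, K + 1):
    a_kk = (k * np.pi)**2 / 2.0             # a(e_k, e_k) = integral of (e_k')^2
    b_k = np.sum(g(xm) * np.sin(k * np.pi * xm)) * dx   # b(e_k) = integral of g * e_k
    u_half += (b_k / a_kk) * np.sin(k * np.pi * 0.5)    # Galerkin approximation at x = 1/2

print(u_half, 0.5 * 0.5 * (1 - 0.5))        # exact value u(1/2) = 1/8
</syntaxhighlight>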
 
Hilbert spaces allow for many elliptic partial differential equations to be formulated in a similar way, and the Lax–Milgram theorem is then a basic tool in their analysis.  With suitable modifications, similar techniques can be applied to [[parabolic partial differential equation]]s and certain [[hyperbolic partial differential equation]]s.
 
===Ergodic theory===
[[File:BunimovichStadium.svg|thumb|right|The path of a [[dynamical billiards|billiard]] ball in the [[Bunimovich stadium]] is described by an ergodic [[dynamical system]].]]
The field of [[ergodic theory]] is the study of the long-term behavior of [[chaos theory|chaotic]] [[dynamical system]]s.  The prototypical field to which ergodic theory applies is [[thermodynamics]], in which&mdash;though the microscopic state of a system is extremely complicated (it is impossible to understand the ensemble of individual collisions between particles of matter)&mdash;the average behavior over sufficiently long time intervals is tractable. The [[laws of thermodynamics]] are assertions about such average behavior.  In particular, one formulation of the [[zeroth law of thermodynamics]] asserts that over sufficiently long timescales, the only functionally independent measurement that one can make of a thermodynamic system in equilibrium is its total energy, in the form of [[temperature]].
 
An ergodic dynamical system is one for which, apart from the energy&mdash;measured by the [[Hamiltonian (quantum mechanics)|Hamiltonian]]&mdash;there are no other functionally independent [[conserved quantities]] on the [[phase space]].  More explicitly, suppose that the energy ''E'' is fixed, and let Ω<sub>''E''</sub> be the subset of the phase space consisting of all states of energy ''E'' (an energy surface), and let ''T''<sub>''t''</sub> denote the evolution operator on the phase space. The dynamical system is ergodic if there are no continuous non-constant functions on Ω<sub>''E''</sub> such that
:<math>f(T_tw) = f(w)\,</math>
for all ''w'' on Ω<sub>''E''</sub> and all time ''t''.  [[Liouville's theorem (Hamiltonian)|Liouville's theorem]] implies that there exists a [[measure theory|measure]] μ on the energy surface that is invariant under the time translation.  As a result, time translation is a [[unitary transformation]] of the Hilbert space ''L''<sup>2</sup>(Ω<sub>''E''</sub>,μ) consisting of square-integrable functions on the energy surface Ω<sub>''E''</sub> with respect to the inner product
:<math>\langle f,g\rangle_{L^2(\Omega_E,\mu)} = \int_{\Omega_E} f\bar{g}\,d\mu.</math>
 
The von Neumann mean ergodic theorem<ref name="von Neumann 1932"/> states the following:
* If ''U''<sub>''t''</sub> is a (strongly continuous) one-parameter semigroup of unitary operators on a Hilbert space ''H'', and ''P'' is the orthogonal projection onto the space of common fixed points of ''U''<sub>''t''</sub>, {''x''∈''H''&nbsp;|&nbsp;''U''<sub>''t''</sub>''x''&nbsp;=&nbsp;''x'' for all ''t''&nbsp;>&nbsp;0}, then
::<math>Px = \lim_{T\to\infty}\frac{1}{T}\int_0^TU_tx\,dt.</math>
 
For an ergodic system, the fixed set of the time evolution consists only of the constant functions, so the ergodic theorem implies the following:<ref>{{harvnb|Reed|Simon|1980}}</ref> for any function ''f'' ∈ ''L''<sup>2</sup>(Ω<sub>''E''</sub>,μ),
:<math>\underset{T\to\infty}{L^2\!-\!\lim} \frac{1}{T}\int_0^T f(T_tw)\,dt = \int_{\Omega_E} f(y)\,d\mu(y).</math>
That is, the long time average of an observable ''f'' is equal to its expectation value over an energy surface.
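A discrete-time illustration of this principle (a Python/NumPy sketch; it uses the irrational rotation of the circle, a standard ergodic example, rather than a Hamiltonian flow) compares the long time average of an observable with its space average:
<syntaxhighlight lang="python">
import numpy as np

# The irrational rotation w -> w + alpha (mod 1) on the circle is ergodic,
# so the long-time average of an observable f matches its space average.
alpha = np.sqrt(2)                    # irrational rotation number
f = lambda w: np.cos(2 * np.pi * w)**2

w0 = 0.1                              # arbitrary starting point on the circle [0, 1)
T = 200_000
orbit = (w0 + alpha * np.arange(T)) % 1.0

time_average = np.mean(f(orbit))
space_average = 0.5                   # integral of cos^2(2*pi*w) dw over [0, 1)
print(time_average, space_average)
</syntaxhighlight>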
 
===Fourier analysis===
[[File:Sawtooth Fourier Analysys.svg|thumb|right|Superposition of sinusoidal wave basis functions (bottom) to form a sawtooth wave (top)]]
[[File:Harmoniki.png|thumb|right|[[Spherical harmonics]], an orthonormal basis for the Hilbert space of square-integrable functions on the sphere, shown graphed along the radial direction]]
One of the basic goals of [[Fourier analysis]] is to decompose a function into a (possibly infinite) [[linear combination]] of given basis functions: the associated [[Fourier series]].  The classical Fourier series associated to a function ''f'' defined on the interval [0, 1] is a series of the form
:<math>\sum_{n=-\infty}^\infty a_n e^{2\pi in\theta}</math>
where
:<math>a_n = \int_0^1f(\theta)e^{-2\pi in\theta}\,d\theta.</math>
 
The example of adding up the first few terms in a Fourier series for a sawtooth function is shown in the figure. The basis functions are sine waves with wavelengths λ/''n'' (''n''=integer) shorter than the wavelength λ of the sawtooth itself (except for ''n''=1, the ''fundamental'' wave). All basis functions have nodes at the nodes of the sawtooth, but all but the fundamental have additional nodes. The oscillation of the summed terms about the sawtooth is called the [[Gibbs phenomenon]].
 
A significant problem in classical Fourier series asks in what sense the Fourier series converges, if at all, to the function ''f''. Hilbert space methods provide one possible answer to this question.<ref>A treatment of Fourier series from this point of view is available, for instance, in {{harvtxt|Rudin|1987}} or {{harvtxt|Folland|2009}}.</ref>  The functions ''e<sub>n</sub>''(θ) = e<sup>2πi''n''θ</sup> form an orthogonal basis of the Hilbert space ''L''<sup>2</sup>([0,1]).  Consequently, any square-integrable function can be expressed as a series
:<math>f(\theta) = \sum_n a_n e_n(\theta),\quad a_n = \langle f,e_n\rangle</math>
 
and, moreover, this series converges in the Hilbert space sense (that is, in the [[mean convergence|''L''<sup>2</sup> mean]]).
 
The problem can also be studied from the abstract point of view: every Hilbert space has an [[orthonormal basis]], and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements.  The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space.<ref>{{harvnb|Halmos|1957|loc=§5}}</ref> The abstraction is especially useful when it is more natural to use different basis functions for a space such as ''L''<sup>2</sup>([0,1]).  In many circumstances, it is desirable not to decompose a function into trigonometric functions, but rather into [[orthogonal polynomials]] or [[wavelet]]s for instance,<ref>{{harvnb|Bachman|Narici|Beckenstein|2000}}</ref> and in higher dimensions into [[spherical harmonics]].<ref>{{harvnb|Stein|Weiss|1971|loc=§IV.2}}.</ref>
 
For instance, if ''e''<sub>''n''</sub> are any orthonormal basis functions of ''L''<sup>2</sup>[0,1], then a given function in ''L''<sup>2</sup>[0,1] can be approximated as a finite linear combination<ref>{{harvnb|Lanczos|1988|pp=212&ndash;213}}</ref>
:<math>f(x) \approx f_n (x) = a_1 e_1 (x) + a_2 e_2(x) + \cdots + a_n e_n (x)</math>
The coefficients {''a''<sub>''j''</sub>} are selected to make the magnitude of the difference ||{{nowrap|''&fnof;'' − ''&fnof;''<sub>''n''</sub>}}||<sup>2</sup> as small as possible.  Geometrically, the [[#Best approximation|best approximation]] is the [[#Orthogonal complements and projections|orthogonal projection]] of ''&fnof;'' onto the subspace consisting of all linear combinations of the {''e''<sub>''j''</sub>}, and can be calculated by<ref>{{harvnb|Lanczos|1988|loc=Equation 4-3.10}}</ref>
:<math>a_j = \int_0^1 \overline{e_j(x)}f (x) \, dx.</math>
That this formula minimizes the difference ||{{nowrap|''&fnof;'' − ''&fnof;''<sub>''n''</sub>}}||<sup>2</sup> is a consequence of [[#Bessel's inequality and Parseval's formula|Bessel's inequality and Parseval's formula]].
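The following Python/NumPy sketch (illustrative; the sawtooth-like function and truncation levels are arbitrary choices) computes these coefficients for the trigonometric basis and shows the ''L''<sup>2</sup> error of the best approximation decreasing as more terms are kept:
<syntaxhighlight lang="python">
import numpy as np

# Orthogonal projection onto the trigonometric functions e_n(t) = exp(2*pi*i*n*t)
# for a sawtooth-like function f on [0, 1).
M = 8192
t = (np.arange(M) + 0.5) / M
f = t - 0.5                                     # a simple sawtooth with mean zero

def coefficient(n):
    # a_n = integral of conj(e_n) * f, approximated by a Riemann sum
    return np.mean(np.exp(-2j * np.pi * n * t) * f)

for N in (1, 4, 16, 64):
    f_N = sum(coefficient(n) * np.exp(2j * np.pi * n * t)
              for n in range(-N, N + 1))
    err = np.sqrt(np.mean(np.abs(f - f_N)**2))  # L^2([0,1]) norm of f - f_N
    print(N, err)
</syntaxhighlight>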
 
In various applications to physical problems, a function can be decomposed into physically meaningful [[eigenfunction]]s of a [[differential operator]] (typically the [[Laplace operator]]): this forms the foundation for the spectral study of functions, in reference to the [[spectral theorem|spectrum]] of the differential operator.<ref>The classic reference for spectral methods is {{harvnb|Courant|Hilbert|1953}}.  A more up-to-date account is {{harvnb|Reed|Simon|1975}}.</ref> A concrete physical application involves the problem of [[hearing the shape of a drum]]: given the fundamental modes of vibration that a drumhead is capable of producing, can one infer the shape of the drum itself?<ref>{{harvnb|Kac|1966}}</ref>  The mathematical formulation of this question involves the [[Dirichlet eigenvalue]]s of the Laplace equation in the plane, that represent the fundamental modes of vibration in direct analogy with the integers that represent the fundamental modes of vibration of the violin string.
 
[[Spectral theory]] also underlies certain aspects of the [[Fourier transform]] of a function.  Whereas Fourier analysis decomposes a function defined on a [[compact set]] into the discrete spectrum of the Laplacian (which corresponds to the vibrations of a violin string or drum), the Fourier transform of a function is the decomposition of a function defined on all of Euclidean space into its components in the [[continuous spectrum]] of the Laplacian.  The Fourier transformation is also geometrical, in a sense made precise by the [[Plancherel theorem]], that asserts that it is an [[isometry]] of one Hilbert space (the "time domain") with another (the "frequency domain").  This isometry property of the Fourier transformation is a recurring theme in abstract [[harmonic analysis]], as evidenced for instance by the [[Plancherel theorem for spherical functions]] occurring in [[noncommutative harmonic analysis]].
 
===Quantum mechanics===
[[File:HAtomOrbitals.png|right|thumb|The [[atomic orbital|orbitals]] of an [[electron]] in a [[hydrogen atom]] are [[eigenfunction]]s of the [[energy (physics)|energy]].]]
In the mathematically rigorous formulation of quantum mechanics, developed by [[John von Neumann]],<ref>{{harvnb|von Neumann|1955}}</ref> the possible states (more precisely, the [[pure state]]s) of a quantum mechanical system are represented by [[unit vector]]s (called ''state vectors'') residing in a complex separable Hilbert space, known as the [[State space (physics)|state space]], well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the [[projective space|projectivization]] of a Hilbert space, usually called the [[complex projective space]]. The exact nature of this Hilbert space is dependent on the system; for example, the position and momentum states for a single non-relativistic spin zero particle is the space of all [[square-integrable]] functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of [[spinors in three dimensions|spinors]]. Each observable is represented by a [[self-adjoint operator|self-adjoint]] [[linear operator]] acting on the state space. Each eigenstate of an observable corresponds to an [[eigenvector]] of the operator, and the associated [[eigenvalue]] corresponds to the value of the observable in that eigenstate.
 
The time evolution of a quantum state is described by the [[Schrödinger equation]], in which the [[Hamiltonian (quantum mechanics)|Hamiltonian]], the [[Operator (physics)|operator]] corresponding to the [[total energy]] of the system, generates time evolution.
 
The inner product between two state vectors is a complex number known as a [[probability amplitude]]. During an ideal measurement of a quantum mechanical system, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the [[absolute value]] of the probability amplitudes between the initial and final states. The possible results of a measurement are the eigenvalues of the operator&mdash;which explains the choice of self-adjoint operators, for all the eigenvalues must be real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator.
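A minimal two-dimensional illustration (a Python/NumPy sketch for a spin-½ system; the chosen observable and state are arbitrary examples) computes measurement probabilities from probability amplitudes and the expectation value from the spectral decomposition:
<syntaxhighlight lang="python">
import numpy as np

# Two-dimensional state space (spin-1/2): observable sigma_x, initial state |0>.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eigh(sigma_x)    # eigenvalues -1, +1 with orthonormal eigenvectors
psi = np.array([1.0, 0.0])                    # a pure state: a unit vector in the state space

# Born rule: probability of each outcome is the squared modulus of the amplitude
probs = np.abs(eigvecs.conj().T @ psi)**2
print(eigvals, probs)                         # outcomes -1 and +1, each with probability 1/2

expectation = psi.conj() @ sigma_x @ psi      # <psi, sigma_x psi>
print(expectation, np.sum(eigvals * probs))   # equals sum of eigenvalue * probability
</syntaxhighlight>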
 
For a general system, states are typically not pure, but instead are represented as statistical mixtures of pure states, or mixed states, given by [[density matrix|density matrices]]: self-adjoint operators of [[trace of a matrix|trace]] one on a Hilbert space. Moreover, for general quantum mechanical systems, the effects of a single measurement can influence other parts of a system in a manner that is described instead by a [[positive operator valued measure]].  Thus the structure both of the states and observables in the general theory is considerably more complicated than the idealization for pure states.
 
Heisenberg's [[uncertainty principle]] is represented by the statement that the operators corresponding to certain observables do not commute, and gives a specific form that the [[commutator]] must have.
 
==Properties==
 
===Pythagorean identity===
Two vectors ''u'' and ''v'' in a Hilbert space ''H'' are orthogonal when <math>\langle u, v\rangle</math>&nbsp;= 0. The notation for this is {{nowrap|''u'' ⊥ ''v''}}. More generally, when ''S'' is a subset in ''H'', the notation {{nowrap|''u'' ⊥ ''S''}} means that ''u'' is orthogonal to every element from ''S''.<br>
When ''u'' and ''v'' are orthogonal, one has
 
:<math>\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + 2 \, \mathrm{Re} \langle u, v \rangle + \langle v, v \rangle= \|u\|^2 + \|v\|^2.</math>
 
By induction on ''n'', this is extended to any family ''u''<sub>1</sub>,...,''u<sub>n</sub>'' of ''n'' orthogonal vectors,
 
:<math>\|u_1 + \cdots + u_n\|^2 = \|u_1\|^2 + \cdots + \|u_n\|^2.</math>
 
Whereas the Pythagorean identity as stated is valid in any inner product space, completeness is required for the extension of the Pythagorean identity to series. A series Σ&nbsp;''u<sub>k</sub>'' of ''orthogonal'' vectors converges in ''H''&thinsp; if and only if the series of squares of norms converges, and
:<math>\bigl\|\sum_{k=0}^\infty u_k \bigr\|^2 = \sum_{k=0}^\infty \|u_k\|^2.</math>
Furthermore, the sum of a series of orthogonal vectors is independent of the order in which it is taken.
 
===Parallelogram identity and polarization===
[[File:Color parallelogram.svg|right|thumb|Geometrically, the parallelogram identity asserts that {{nowrap|AC<sup>2</sup> + BD<sup>2</sup> {{=}} 2(AB<sup>2</sup> + AD<sup>2</sup>)}}. In words, the sum of the squares of the diagonals is twice the sum of the squares of any two adjacent sides.]]
By definition, every Hilbert space is also a [[Banach space]]. Furthermore, in every Hilbert space the following [[parallelogram identity]] holds:
: <math>\|u+v\|^2+\|u-v\|^2=2(\|u\|^2+\|v\|^2).</math>
Conversely, every Banach space in which the parallelogram identity holds is a Hilbert space, and the inner product is uniquely determined by the norm by the [[polarization identity]].<ref>{{harvnb|Young|1988|p=23}}.</ref> For real Hilbert spaces, the polarization identity is
:<math>\langle u,v\rangle = \frac{1}{4}\left(\|u+v\|^2-\|u-v\|^2\right).</math>
For complex Hilbert spaces, it is
:<math>\langle u,v\rangle = \frac{1}{4}\left(\|u+v\|^2-\|u-v\|^2+i\|u+iv\|^2-i\|u-iv\|^2\right).</math>
The parallelogram law implies that any Hilbert space is a [[uniformly convex Banach space]].<ref>{{harvnb|Clarkson|1936}}.</ref>
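Both identities are easy to verify numerically (a Python/NumPy sketch with random complex vectors; the inner product is taken linear in the first argument, as above):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

norm2 = lambda x: np.sum(np.abs(x)**2)            # squared norm ||x||^2

# parallelogram identity
print(norm2(u + v) + norm2(u - v), 2 * (norm2(u) + norm2(v)))

# complex polarization identity recovers <u, v>
polar = 0.25 * (norm2(u + v) - norm2(u - v)
                + 1j * norm2(u + 1j * v) - 1j * norm2(u - 1j * v))
print(polar, np.sum(u * np.conj(v)))
</syntaxhighlight>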
 
===Best approximation===
If ''C'' is a non-empty closed convex subset of a Hilbert space ''H'' and ''x'' a point in ''H'', there exists a unique point ''y''&nbsp;∈ ''C'' that minimizes the distance between ''x'' and points in ''C'',<ref>{{harvnb|Rudin|1987|loc=Theorem 4.10}}</ref>
 
:<math> y \in C, \ \ \ \|x - y\| = \mathrm{dist}(x, C) = \min \{ \|x - z\| : z \in C \}.</math>
 
This is equivalent to saying that there is a point with minimal norm in the translated convex set ''D''&nbsp;= {{nowrap|''C'' − ''x''}}. The proof consists in showing that every minimizing sequence (''d<sub>n</sub>'')&nbsp;⊂ ''D'' is Cauchy (using the parallelogram identity) hence converges (using completeness) to a point in ''D'' that has minimal norm. More generally, this holds in any uniformly convex Banach space.<ref>{{harvnb|Dunford|Schwartz|1958|loc=II.4.29}}</ref>
 
When this result is applied to a closed subspace ''F'' of ''H'', it can be shown that the point ''y'' ∈ ''F'' closest to ''x'' is characterized by<ref>{{harvnb|Rudin|1987|loc=Theorem 4.11}}</ref>
 
:<math> y \in F, \ \ x - y \perp F.</math>
 
This point ''y'' is the ''orthogonal projection'' of ''x'' onto ''F'', and the mapping ''P<sub>F</sub>''&nbsp;: {{nowrap|''x'' → ''y''}} is linear (see [[#Orthogonal complements and projections|Orthogonal complements and projections]]).  This result is especially significant in [[applied mathematics]], especially [[numerical analysis]], where it forms the basis of [[least squares]] methods {{Citation needed|date=August 2012}}.
 
In particular, when ''F'' is not equal to ''H'', one can find a non-zero vector ''v'' orthogonal to ''F'' (select ''x'' not in ''F'' and ''v''&nbsp;= {{nowrap|''x'' − ''y''}}). A very useful criterion is obtained by applying this observation to the closed subspace ''F'' generated by a subset ''S'' of ''H''.
:A subset ''S'' of ''H'' spans a dense vector subspace if (and only if) the vector 0 is the sole vector ''v'' ∈ ''H'' orthogonal to ''S''.
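The characterization {{nowrap|''x'' − ''y'' ⊥ ''F''}} is what makes least squares computations work. A small Python/NumPy sketch (illustrative; the subspace is spanned by the columns of a random matrix) computes the orthogonal projection and checks the orthogonality of the residual:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))       # columns span a 3-dimensional closed subspace F of R^6
x = rng.standard_normal(6)

# best approximation of x from F: the orthogonal projection y = P_F(x), via least squares
coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
y = A @ coeffs

print(np.linalg.norm(x - y))          # dist(x, F)
print(A.T @ (x - y))                  # ~0: the residual x - y is orthogonal to F
</syntaxhighlight>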
 
===Duality===
The [[continuous dual space|dual space]] ''H*'' is the space of all [[continuous function (topology)|continuous]] linear functions from the space ''H'' into the base field.  It carries a natural norm, defined by
:<math>\|\varphi\| = \sup_{\|x\|=1, x\in H} |\varphi(x)|.</math>
This norm satisfies the parallelogram law, and so the dual space is also an inner product space.  The dual space is also complete, and so it is a Hilbert space in its own right.
 
The [[Riesz representation theorem]] affords a convenient description of the dual.  To every element ''u'' of ''H'', there is a unique element φ<sub>''u''</sub> of ''H*'', defined by
:<math>\varphi_u(x) = \langle x,u\rangle.</math>
The mapping <math>u\mapsto \varphi_u</math> is an [[antilinear map]]ping from ''H'' to ''H*''.  The Riesz representation theorem states that this mapping is an antilinear isomorphism.<ref>{{harvnb|Weidmann|1980|loc=Theorem 4.8}}</ref>  Thus to every element ''φ'' of the dual ''H*'' there exists one and only one ''u''<sub>φ</sub> in ''H'' such that
:<math>\langle x, u_\varphi\rangle = \varphi(x)</math>
for all ''x''&nbsp;∈&nbsp;''H''.  The inner product on the dual space ''H*'' satisfies
:<math> \langle \varphi, \psi \rangle = \langle u_\psi, u_\varphi \rangle.</math>
The reversal of order on the right-hand side restores linearity in φ from the antilinearity of ''u''<sub>φ</sub>.  In the real case, the antilinear isomorphism from ''H'' to its dual is actually an isomorphism, and so real Hilbert spaces are naturally isomorphic to their own duals.
 
The representing vector ''u''<sub>φ</sub> is obtained in the following way. When ''φ''&nbsp;≠ 0, the [[Kernel (algebra)|kernel]] ''F'' = Ker(φ) is a closed vector subspace of ''H'', not equal to ''H'', hence there exists a non-zero vector ''v'' orthogonal to ''F''. The vector ''u'' is a suitable scalar multiple ''λv'' of ''v''. The requirement that φ(''v'') = ⟨''v'',&nbsp;''u''⟩ yields
:<math> u = \langle v, v \rangle^{-1} \, \overline{\varphi (v)} \, v.</math>
 
This correspondence ''φ'' ↔ ''u'' is exploited by the [[bra-ket notation]] popular in [[physics]]. It is common in physics to assume that the inner product, denoted by ⟨''x''|''y''⟩, is linear on the right,
:<math>\langle x| y \rangle = \langle y, x \rangle.</math>
The result ⟨''x''|''y''⟩ can be seen as the action of the linear functional ⟨''x''| (the ''bra'') on the vector&nbsp; |''y''⟩ (the ''ket'').
 
The Riesz representation theorem relies fundamentally not just on the presence of an inner product, but also on the completeness of the space.  In fact, the theorem implies that the [[Banach space|topological dual]] of any inner product space can be identified with its completion.  An immediate consequence of the Riesz representation theorem is also that a Hilbert space ''H'' is [[reflexive space|reflexive]], meaning that the natural map from ''H'' into its [[dual space|double dual space]] is an isomorphism.
 
===Weakly convergent sequences===
{{main|Weak convergence (Hilbert space)}}
In a Hilbert space ''H'', a sequence {''x''<sub>''n''</sub>} is [[Weak topology#Weak convergence|weakly convergent]] to a vector ''x''&nbsp;∈ ''H'' when
 
:<math>\lim_n \langle x_n, v \rangle = \langle x, v \rangle</math>
 
for every {{nowrap|''v'' ∈ ''H''}}.
 
For example, any orthonormal sequence {''f''<sub>''n''</sub>} converges weakly to&nbsp;0, as a consequence of [[#Bessel's inequality|Bessel's inequality]]. Every weakly convergent sequence {''x''<sub>''n''</sub>} is bounded, by the [[uniform boundedness principle]].
 
Conversely, every bounded sequence in a Hilbert space admits weakly convergent subsequences ([[Alaoglu's theorem]]).<ref>{{harvnb|Weidmann|1980|loc=§4.5}}</ref> This fact may be used to prove minimization results for continuous [[convex function]]als, in the same way that the [[Bolzano–Weierstrass theorem]] is used for continuous functions on '''R'''<sup>''d''</sup>. Among several variants, one simple statement is as follows:<ref>{{harvnb|Buttazzo|Giaquinta|Hildebrandt|1998|loc=Theorem 5.17}}</ref>
 
:If ''f'': {{nowrap|''H'' → '''R'''}} is a convex continuous function such that ''f''(''x'') tends to +∞ when ||''x''|| tends to ∞, then ''f'' admits a minimum at some point {{nowrap|''x''<sub>0</sub> ∈ ''H''}}.
 
This fact (and its various generalizations) is fundamental for [[direct method in the calculus of variations|direct method]]s in the [[calculus of variations]].  Minimization results for convex functionals are also a direct consequence of the slightly more abstract fact that closed bounded convex subsets in a Hilbert space ''H'' are [[Weak topology|weakly compact]], since ''H'' is reflexive. The existence of weakly convergent subsequences is a special case of the [[Eberlein–Šmulian theorem]].
 
===Banach space properties===
Any general property of [[Banach space]]s continues to hold for Hilbert spaces. The [[open mapping theorem (functional analysis)|open mapping theorem]] states that a [[continuous function|continuous]] [[surjective]] linear transformation from one Banach space to another is an [[open mapping]] meaning that it sends open sets to open sets.  A corollary is the [[bounded inverse theorem]], that a continuous and [[bijective]] linear function from one Banach space to another is an isomorphism (that is, a continuous linear map whose inverse is also continuous).  This theorem is considerably simpler to prove in the case of Hilbert spaces than in general Banach spaces.<ref>{{harvnb|Halmos|1982|loc=Problem 52, 58}}</ref>  The open mapping theorem is equivalent to the [[closed graph theorem]], which asserts that a function from one Banach space to another is continuous if and only if its graph is a [[closed set]].<ref>{{harvnb|Rudin|1973}}</ref>  In the case of Hilbert spaces, this is basic in the study of [[unbounded operator]]s (see [[closed operator]]).
 
The (geometrical) [[Hahn–Banach theorem]] asserts that a closed convex set can be separated from any point outside it by means of a [[hyperplane]] of the Hilbert space.  This is an immediate consequence of the [[#Best approximation|best approximation]] property: if ''y'' is the element of a closed convex set ''F'' closest to ''x'', then the separating hyperplane is the plane perpendicular to the segment ''xy'' passing through its midpoint.<ref>{{harvnb|Trèves|1967|loc=Chapter 18}}</ref>
 
==Operators on Hilbert spaces==
 
===Bounded operators===
The [[continuous function (topology)|continuous]] [[linear operator]]s ''A'' : ''H''<sub>1</sub> → ''H''<sub>2</sub> from a Hilbert space ''H''<sub>1</sub> to a second Hilbert space ''H''<sub>2</sub> are ''bounded'' in the sense that they map [[bounded set]]s to bounded sets. Conversely, if an operator is bounded, then it is continuous. The space of such [[bounded linear operator]]s has a [[norm (mathematics)|norm]], the [[operator norm]] given by
 
:<math>\lVert A \rVert = \sup \left\{\,\lVert Ax \rVert : \lVert x \rVert \leq 1\,\right\}.</math>
 
The sum and the composite of two bounded linear operators are again bounded and linear. For ''y'' in ''H''<sub>2</sub>, the map that sends ''x''&nbsp;∈ ''H''<sub>1</sub> to ⟨''Ax'', ''y''⟩ is linear and continuous, and according to the [[Riesz representation theorem]] can therefore be represented in the form
 
:<math>\langle x, A^* y \rangle = \langle Ax, y \rangle</math>
for some vector ''A*''&thinsp;''y'' in ''H''<sub>1</sub>. This defines another bounded linear operator ''A*'': ''H''<sub>2</sub> → ''H''<sub>1</sub>, the [[Hermitian adjoint|adjoint]] of ''A''. One can see that {{nowrap|''A**'' {{=}} ''A''}}.
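
In the finite-dimensional case an operator is a matrix and its adjoint is the conjugate transpose. The following sketch (Python with NumPy; the matrix and vectors are randomly generated, purely for illustration) checks the defining identity:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: for a matrix A : C^2 -> C^3 the adjoint A* is the conjugate transpose,
# and <x, A* y> = <A x, y> with <a, b> = sum_i a_i * conj(b_i).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
A_star = A.conj().T

x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

lhs = np.sum(x * np.conj(A_star @ y))    # <x, A* y>
rhs = np.sum((A @ x) * np.conj(y))       # <A x, y>
assert np.allclose(lhs, rhs)
</syntaxhighlight>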
 
The set B(''H'') of all bounded linear operators on ''H'', together with the addition and composition operations, the norm and the adjoint operation, is a [[C*-algebra]], which is a type of [[operator algebra]].
 
An element ''A''&thinsp; of B(''H'') is called ''self-adjoint'' or ''Hermitian'' if ''A*''= ''A''.  If ''A''&thinsp; is Hermitian and {{nowrap|⟨''Ax'', ''x''⟩ ≥}} 0 for every ''x'', then ''A''&thinsp; is called ''non-negative'', written ''A''&nbsp;≥&nbsp;0; if equality holds only when ''x''&nbsp;=&nbsp;0, then ''A''&thinsp; is called ''positive''.  The set of self-adjoint operators admits a [[partial order]], in which ''A'' ≥ ''B'' if ''A''&nbsp;&minus;&nbsp;''B''&nbsp;≥&nbsp;0.  If ''A''&thinsp; has the form ''B*''&thinsp;''B''&thinsp; for some ''B'', then ''A'' is non-negative; if ''B'' is invertible, then ''A''&thinsp; is positive.  A converse is also true in the sense that, for a non-negative operator ''A'', there exists a unique non-negative [[Square root of a matrix|square root]] ''B'' such that
 
:<math>A = B^2=B^*B.\,</math>
 
In a sense made precise by the [[#Spectral theorem|spectral theorem]], self-adjoint operators can usefully be thought of as operators that are "real".  An element ''A'' of B(''H'') is called ''normal'' if ''A*''&thinsp;''A'' = ''A''&thinsp;''A*''.  Normal operators decompose into the sum of a self-adjoint operator and an imaginary multiple of a self-adjoint operator
:<math>A = \frac{A+A^*}{2} + i\frac{A-A^*}{2i}</math>
that commute with each other.  Normal operators can also usefully be thought of in terms of their real and imaginary parts.
 
An element ''U''&thinsp; of B(''H'') is called [[unitary operator|unitary]] if ''U''&thinsp; is invertible and its inverse is given by ''U*''. This can also be expressed by requiring that ''U''&thinsp; be onto {{nowrap|and ⟨''Ux'', ''Uy''⟩ {{=}} ⟨''x'', ''y''⟩}} for all ''x'' and ''y'' in ''H''. The unitary operators form a [[group (mathematics)|group]] under composition, which is the [[isometry group]] of ''H''.
 
An element of B(''H'') is [[compact operator|compact]] if it sends bounded sets to [[relatively compact]] sets.  Equivalently, a bounded operator ''T'' is compact if, for any bounded sequence {''x<sub>k</sub>''}, the sequence {''Tx<sub>k</sub>''} has a convergent subsequence.  Many [[integral operator]]s are compact, and in fact define a special class of operators known as [[Hilbert–Schmidt operator]]s that are especially important in the study of [[integral equation]]s.  [[Fredholm operator]]s are the operators that are invertible modulo the compact operators; they are equivalently characterized as operators with a finite dimensional [[kernel (linear operator)|kernel]] and [[cokernel]].  The index of a Fredholm operator ''T'' is defined by
:<math>\operatorname{index}\, T = \dim\ker T - \dim\operatorname{coker}\, T.</math>
The index is [[homotopy]] invariant, and plays a deep role in [[differential geometry]] via the [[Atiyah–Singer index theorem]].
 
===Unbounded operators===
[[Unbounded operator]]s are also tractable in Hilbert spaces, and have important applications to [[quantum mechanics]].<ref>See {{harvtxt|Prugovečki|1981}}, {{harvtxt|Reed|Simon|1980|loc=Chapter VIII}} and {{harvtxt|Folland|1989}}.</ref>  An unbounded operator ''T'' on a Hilbert space ''H'' is defined as a linear operator whose domain ''D''(''T'') is a linear subspace of ''H''.  Often the domain ''D''(''T'') is a dense subspace of ''H'', in which case ''T'' is known as a [[densely defined operator]].
 
The adjoint of a densely defined unbounded operator is defined in essentially the same manner as for bounded operators. [[Self-adjoint operator|Self-adjoint unbounded operators]] play the role of the ''observables'' in the mathematical formulation of quantum mechanics. Examples of self-adjoint unbounded operators on the Hilbert space ''L''<sup>2</sup>('''R''') are:<ref>{{harvnb|Prugovečki|1981|loc=III, §1.4}}</ref>
* A suitable extension of the differential operator
 
:: <math> (A f)(x) = i \frac{d}{dx} f(x), \, </math>
 
: where ''i'' is the imaginary unit and ''f'' is a differentiable function of compact support.
* The multiplication-by-''x'' operator:
 
:: <math> (B f) (x) = x f(x).\, </math>
 
These correspond to the [[momentum]] and [[position operator|position]] observables, respectively. Note that neither ''A'' nor ''B'' is defined on all of ''H'', since in the case of ''A'' the derivative need not exist, and in the case of ''B'' the product function need not be square integrable. In both cases, the set of possible arguments forms a dense subspace of ''L''<sup>2</sup>('''R''').
 
==Constructions==
 
===Direct sums===
Two Hilbert spaces ''H''<sub>1</sub> and ''H''<sub>2</sub> can be combined into another Hilbert space, called the [[direct sum of modules#Direct sum of Hilbert spaces|(orthogonal) direct sum]],<ref>{{harvnb|Dunford|Schwartz|1958|loc=IV.4.17-18}}</ref> and denoted
:<math>H_1\oplus H_2,</math>
consisting of the set of all [[ordered pair]]s (''x''<sub>1</sub>,&nbsp;''x''<sub>2</sub>) where {{nowrap|''x''<sub>''i''</sub> ∈ ''H''<sub>''i''</sub>}}, {{nowrap|''i'' {{=}} 1,2}}, and inner product defined by
:<math>\langle (x_1,x_2), (y_1,y_2)\rangle_{H_1\oplus H_2} = \langle x_1,y_1\rangle_{H_1} + \langle x_2,y_2\rangle_{H_2}.</math>
 
More generally, if ''H''<sub>''i''</sub> is a family of Hilbert spaces indexed by {{nowrap|''i'' ∈ ''I''}}, then the direct sum of the ''H''<sub>''i''</sub>, denoted
:<math>\bigoplus_{i\in I}H_i</math>
consists of the set of all indexed families
:<math>x=(x_i\in H_i|i\in I) \in \prod_{i\in I}H_i</math>
in the [[Cartesian product]] of the ''H''<sub>''i''</sub> such that
:<math>\sum_{i\in I} \|x_i\|^2 < \infty.</math>
The inner product is defined by
:<math>\langle x, y\rangle = \sum_{i\in I} \langle x_i, y_i\rangle_{H_i}.</math>
 
Each of the ''H''<sub>''i''</sub> is included as a closed subspace in the direct sum of all of the ''H''<sub>''i''</sub>. Moreover, the ''H''<sub>''i''</sub> are pairwise orthogonal. Conversely, if there is a system of closed subspaces, ''V''<sub>''i''</sub>, {{nowrap|''i'' ∈ ''I''}}, in a Hilbert space ''H'', that are pairwise orthogonal and whose union is dense in ''H'', then ''H'' is canonically isomorphic to the direct sum of ''V''<sub>''i''</sub>. In this case, ''H'' is called the internal direct sum of the ''V''<sub>''i''</sub>. A direct sum (internal or external) is also equipped with a family of orthogonal projections ''E''<sub>''i''</sub> onto the ''i''th direct summand ''H''<sub>i</sub>. These projections are bounded, self-adjoint, [[idempotent]] operators that satisfy the orthogonality condition
:<math>E_iE_j = 0,\quad i\not= j.</math>
 
The [[spectral theorem]] for [[compact operator|compact]] self-adjoint operators on a Hilbert space ''H'' states that ''H'' splits into an orthogonal direct sum of the eigenspaces of an operator, and also gives an explicit decomposition of the operator as a sum of projections onto the eigenspaces. The direct sum of Hilbert spaces also appears in quantum mechanics as the [[Fock space]] of a system containing a variable number of particles, where each Hilbert space in the direct sum corresponds to an additional [[degrees of freedom (mechanics)|degree of freedom]] for the quantum mechanical system. In [[representation theory]], the [[Peter–Weyl theorem]] guarantees that any [[unitary representation]] of a [[compact group]] on a Hilbert space splits as the direct sum of finite-dimensional representations.
 
===Tensor products===
:{{Main|Tensor product of Hilbert spaces}}
If ''H''<sub>1</sub> and ''H''<sub>2</sub> are two Hilbert spaces, then one defines an inner product on the (ordinary) [[tensor product]] as follows. On [[simple tensor]]s, let
:<math> \langle x_1 \otimes x_2, \, y_1 \otimes y_2 \rangle = \langle x_1, y_1 \rangle \, \langle x_2, y_2 \rangle.</math>
This formula then extends by [[Sesquilinear form|sesquilinearity]] to an inner product on ''H''<sub>1</sub> ⊗ ''H''<sub>2</sub>. The Hilbertian tensor product of ''H''<sub>1</sub> and ''H''<sub>2</sub>, sometimes denoted by <math>H_1\widehat{\otimes}H_2</math>, is the Hilbert space obtained by completing ''H''<sub>1</sub> ⊗ ''H''<sub>2</sub> for the metric associated to this inner product.<ref>{{harvnb|Weidmann|1980|loc=§3.4}}</ref>
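
For finite-dimensional spaces the simple tensors can be realised with the Kronecker product, and the factorisation of the inner product can be checked directly. A sketch (Python with NumPy; the vectors are randomly generated, purely for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Sketch: simple tensors realised via the Kronecker product satisfy
# <x1 (x) x2, y1 (x) y2> = <x1, y1> <x2, y2>.
rng = np.random.default_rng(2)
def rand_vec(n):
    return rng.standard_normal(n) + 1j * rng.standard_normal(n)

x1, y1 = rand_vec(3), rand_vec(3)
x2, y2 = rand_vec(4), rand_vec(4)

inner = lambda a, b: np.sum(a * np.conj(b))       # inner product, linear in the first slot
lhs = inner(np.kron(x1, x2), np.kron(y1, y2))
rhs = inner(x1, y1) * inner(x2, y2)
assert np.allclose(lhs, rhs)
</syntaxhighlight>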
 
An example is provided by the Hilbert space ''L''<sup>2</sup>([0,&nbsp;1]). The Hilbertian tensor product of two copies of ''L''<sup>2</sup>([0,&nbsp;1]) is isometrically and linearly isomorphic to the space ''L''<sup>2</sup>([0,&nbsp;1]<sup>2</sup>) of square-integrable functions on the square [0,&nbsp;1]<sup>2</sup>. This isomorphism sends a simple tensor <math>f_1 \otimes f_2</math> to the function
:<math> (s, t) \mapsto f_1(s) \, f_2(t) </math>
on the square.
 
This example is typical in the following sense.<ref>{{harvnb|Kadison|Ringrose|1983|loc=Theorem 2.6.4}}</ref> Associated to every simple tensor ''x''<sub>1</sub> ⊗ ''x''<sub>2</sub> is the rank one operator from ''H''<sub>1</sub><sup>∗</sup> to ''H''<sub>2</sub> that maps a given <math>x^*\in H^*_1</math> according to
:<math> x^* \mapsto x^*(x_1) \, x_2.</math>
This mapping defined on simple tensors extends to a linear identification between ''H''<sub>1</sub> ⊗ ''H''<sub>2</sub> and the space of finite rank operators from ''H*''<sub>1</sub> to ''H''<sub>2</sub>. This extends to a linear isometry of the Hilbertian tensor product <math>H_1\widehat{\otimes}H_2</math> with the Hilbert space ''HS''(''H*''<sub>1</sub>, ''H''<sub>2</sub>) of [[Hilbert–Schmidt operator]]s from ''H*''<sub>1</sub> to ''H''<sub>2</sub>.
 
==Orthonormal bases==
The notion of an [[orthonormal basis]] from linear algebra generalizes over to the case of Hilbert spaces.<ref>{{harvnb|Dunford|Schwartz|1958|loc=§IV.4}}.</ref> In a Hilbert space ''H'', an orthonormal basis is a family {''e''<sub>''k''</sub>}{{nowrap|<sub>''k'' ∈ ''B''</sub>}} of elements of ''H'' satisfying the conditions:
# ''Orthogonality'': Every two different elements of ''B'' are orthogonal: {{nowrap|⟨''e''<sub>''k''</sub>, ''e''<sub>''j''</sub>⟩{{=}} 0}} for all ''k'', ''j'' in ''B'' with {{nowrap|''k'' ≠ ''j''}}.
# ''Normalization'': Every element of the family has norm 1:||''e''<sub>''k''</sub>||&nbsp;=&nbsp;1 for all ''k'' in ''B''.
# ''Completeness'': The [[linear span]] of the family ''e''<sub>''k''</sub>, {{nowrap|''k'' ∈ ''B''}}, is [[dense set|dense]] in ''H''.
 
A system of vectors satisfying the first two conditions is called an orthonormal system or an orthonormal set (or an orthonormal sequence if ''B'' is [[countable set|countable]]). Such a system is always [[linearly independent]]. Completeness of an orthonormal system of vectors of a Hilbert space can be equivalently restated as:
 
: if {{nowrap|⟨''v'', ''e''<sub>''k''</sub>⟩ {{=}} 0}} for all {{nowrap|''k'' ∈ ''B''}} and some {{nowrap|''v'' ∈ ''H''}} then {{nowrap|''v'' {{=}} '''0'''}}.
 
This is related to the fact that the only vector orthogonal to a dense linear subspace is the zero vector, for if ''S'' is any orthonormal set and ''v'' is orthogonal to ''S'', then ''v'' is orthogonal to the closure of the linear span of ''S'', which is the whole space.
 
Examples of orthonormal bases include:
* the set {(1,0,0), (0,1,0), (0,0,1)} forms an orthonormal basis of '''R'''<sup>3</sup> with the dot product;
* the sequence {''f''<sub>''n''</sub> : ''n'' ∈ '''Z'''} with ''f''<sub>''n''</sub>(''x'') = [[exponential function|exp]](2π''inx'') forms an orthonormal basis of the complex space ''L''<sup>2</sup>([0,1]).
 
In the infinite-dimensional case, an orthonormal basis will not be a basis in the sense of [[linear algebra]]; to distinguish the two, the latter basis is also called a [[Hamel basis]]. That the span of the basis vectors is dense implies that every vector in the space can be written as the sum of an infinite series, and the orthogonality implies that this decomposition is unique.
 
===Sequence spaces===
The space ''ℓ''<sup>&nbsp;2</sup> of square-summable sequences of complex numbers is the set of infinite sequences
 
: <math> (c_1, c_2, c_3, \dots) \, </math>
 
of complex numbers such that
 
: <math> |c_1|^2 + |c_2|^2 + |c_3|^2 + \cdots < \infty. \, </math>
 
This space has an orthonormal basis:
 
:<math>\begin{align}
e_1 &= (1,0,0,\dots)\\
e_2 &= (0,1,0,\dots)\\
& \ \  \vdots
\end{align}
</math>
 
More generally, if ''B'' is any set, then one can form a Hilbert space of sequences with index set ''B'', defined by
 
:<math> \ell^2(B) =\big\{ x : B \xrightarrow{x} \mathbb{C} \mid \sum_{b \in B} \left|x (b)\right|^2 < \infty \big\}.</math>
 
The summation over ''B'' is here defined by
 
:<math>\sum_{b \in B} \left|x (b)\right|^2 = \sup \sum_{n=1}^N |x(b_n)|^2</math>
 
the [[supremum]] being taken over all finite subsets of&nbsp;''B''.  It follows that every element of ''ℓ''<sup>&nbsp;2</sup>(''B'') has only countably many nonzero terms, since otherwise this sum could not be finite. This space becomes a Hilbert space with the inner product
:<math>\langle x, y \rangle = \sum_{b \in B} x(b)\overline{y(b)}</math>
for all ''x'' and ''y'' in ''ℓ''<sup>&nbsp;2</sup>(''B'').  Here the sum also has only countably many nonzero terms, and is unconditionally convergent by the Cauchy–Schwarz inequality.
 
An orthonormal basis of ''ℓ''<sup>&nbsp;2</sup>(''B'') is indexed by the set ''B'', given by
:<math>e_b(b') = \begin{cases}
1&\text{if } b=b'\\
0&\text{otherwise.}
\end{cases}</math>
 
{{Anchor|Bessel's inequality}}
{{Anchor|Parseval's formula}}
 
===Bessel's inequality and Parseval's formula===
Let {{nowrap|''f''<sub>1</sub>, &hellip;, ''f''<sub>''n''</sub>}} be a finite orthonormal system in&nbsp;''H''. For an arbitrary vector ''x'' in ''H'', let
 
:<math>y = \sum_{j=1}^n \, \langle x, f_j \rangle \, f_j.</math>
 
Then {{nowrap|⟨''x'', ''f''<sub>''k''</sub>⟩}}&nbsp;= {{nowrap|⟨''y'', ''f''<sub>''k''</sub>⟩}} for every ''k'' = {{nowrap|1, &hellip;, ''n''}}. It follows that {{nowrap|''x'' − ''y''}} is orthogonal to each ''f''<sub>''k''</sub>, hence {{nowrap|''x'' − ''y''}} is orthogonal to&nbsp;''y''. Using the Pythagorean identity twice, it follows that
 
:<math>\|x\|^2 = \|x - y\|^2 + \|y\|^2 \ge \|y\|^2 = \sum_{j=1}^n|\langle x, f_j \rangle|^2.</math>
 
Let {{nowrap|{''f''<sub>''i''</sub>&thinsp;}, ''i'' ∈ ''I''}}, be an arbitrary orthonormal system in&nbsp;''H''. Applying the preceding inequality to every finite subset ''J'' of ''I'' gives the ''Bessel inequality''<ref>For the case of finite index sets, see, for instance, {{harvnb|Halmos|1957|loc=§5}}.  For infinite index sets, see {{harvnb|Weidmann|1980|loc=Theorem 3.6}}.</ref>
 
:<math>\sum_{i \in I}|\langle x, f_i \rangle|^2 \le \|x\|^2, \quad x \in H</math>
 
(according to the definition of the [[Series (mathematics)#Summations over arbitrary index sets|sum of an arbitrary family]] of non-negative real numbers).
 
Geometrically, Bessel's inequality implies that the orthogonal projection of ''x'' onto the linear subspace spanned by the ''f''<sub>''i''</sub> has norm that does not exceed that of ''x''.  In two dimensions, this is the assertion that the length of the leg of a right triangle may not exceed the length of the hypotenuse.
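
Numerically, Bessel's inequality can be observed for any finite orthonormal system. The following sketch (Python with NumPy; a QR factorisation is used only to manufacture an orthonormal system in '''R'''<sup>5</sup>, an arbitrary choice) illustrates both the equality ||''y''||<sup>2</sup> = Σ<sub>''j''</sub> |⟨''x'', ''f''<sub>''j''</sub>⟩|<sup>2</sup> and the inequality:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: Bessel's inequality for a two-element orthonormal system {f_1, f_2} in R^5.
rng = np.random.default_rng(3)
F, _ = np.linalg.qr(rng.standard_normal((5, 2)))   # columns form an orthonormal system
x = rng.standard_normal(5)

coeffs = F.T @ x                                   # the coefficients <x, f_j>
y = F @ coeffs                                     # y = sum_j <x, f_j> f_j
assert np.allclose(np.linalg.norm(y) ** 2, np.sum(coeffs ** 2))
assert np.sum(coeffs ** 2) <= np.linalg.norm(x) ** 2 + 1e-12
</syntaxhighlight>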
 
Bessel's inequality is a stepping stone to the more powerful [[Parseval identity]], which governs the case when Bessel's inequality is actually an equality.  If {''e''<sub>''k''</sub>}<sub>''k'' ∈ ''B''</sub> is an orthonormal basis of ''H'', then every element ''x'' of ''H'' may be written as
 
:<math>x = \sum_{k \in B} \, \langle x, e_k \rangle \, e_k. </math>
 
Even if ''B'' is uncountable, Bessel's inequality guarantees that the expression is well-defined and consists only of countably many nonzero terms. This sum is called the ''Fourier expansion'' of ''x'', and the individual coefficients ⟨''x'',''e''<sub>''k''</sub>⟩ are the ''Fourier coefficients'' of ''x''.  Parseval's formula is then
:<math>\|x\|^2 = \sum_{k\in B}|\langle x, e_k\rangle|^2.</math>
 
Conversely, if {''e''<sub>''k''</sub>} is an orthonormal set such that Parseval's identity holds for every ''x'', then {''e''<sub>''k''</sub>} is an orthonormal basis.
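
In the finite-dimensional case Parseval's identity and the Fourier expansion can be verified directly. The following sketch (Python with NumPy) uses the columns of the normalised discrete Fourier matrix as an orthonormal basis of '''C'''<sup>4</sup>, an arbitrary illustrative choice:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: Parseval's identity and the Fourier expansion in C^4, with the
# orthonormal basis given by the columns of the unitary matrix DFT / sqrt(4).
n = 4
E = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
x = np.array([1.0, -2.0 + 1.0j, 0.5, 3.0j])

coeffs = E.conj().T @ x                          # the Fourier coefficients <x, e_k>
assert np.allclose(np.sum(np.abs(coeffs) ** 2), np.linalg.norm(x) ** 2)   # Parseval
assert np.allclose(E @ coeffs, x)                # x = sum_k <x, e_k> e_k
</syntaxhighlight>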
 
===Hilbert dimension===
As a consequence of [[Zorn's lemma]], ''every'' Hilbert space admits an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same [[cardinal number|cardinality]], called the Hilbert dimension of the space.<ref>{{harvnb|Levitan|2001}}.  Many authors, such as {{harvtxt|Dunford|Schwartz|1958|loc=§IV.4}}, refer to this just as the dimension. Unless the Hilbert space is finite dimensional, this is not the same thing as its dimension as a linear space (the cardinality of a Hamel basis).</ref> For instance, since ''ℓ''<sup>2</sup>(''B'') has an orthonormal basis indexed by ''B'', its Hilbert dimension is the cardinality of ''B'' (which may be a finite integer, or a countable or uncountable [[cardinal number]]).
 
As a consequence of Parseval's identity, if {''e''<sub>''k''</sub>}<sub>''k'' ∈ ''B''</sub> is an orthonormal basis of ''H'', then the map {{nowrap|Φ : ''H'' →}} &nbsp;ℓ<sup>2</sup>(''B'') defined by {{nowrap|Φ(''x'') {{=}} (⟨''x'', ''e''<sub>''k''</sub>⟩)<sub>''k''∈''B''</sub>}} is an isometric isomorphism of Hilbert spaces: it is a bijective linear mapping such that
:<math>\langle \Phi \left(x\right), \Phi\left(y\right) \rangle_{\ell^2(B)} = \langle x, y \rangle_H</math>
for all ''x'' and ''y'' in ''H''. The [[cardinal number]] of ''B'' is the Hilbert dimension of ''H''.  Thus every Hilbert space is isometrically isomorphic to a sequence space &nbsp;ℓ<sup>2</sup>(''B'') for some set ''B''.
 
===Separable spaces===
A Hilbert space is [[separable space|separable]] if and only if it admits a [[countable]] orthonormal basis. All infinite-dimensional separable Hilbert spaces are therefore isometrically isomorphic to ''ℓ''<sup>2</sup>.
 
In the past, Hilbert spaces were often required to be separable as part of the definition.<ref>{{harvnb|Prugovečki|1981|loc=I, §4.2}}</ref> Most spaces used in physics are separable, and since these are all isomorphic to each other, one often refers to any infinite-dimensional separable Hilbert space as "''the'' Hilbert space" or just "Hilbert space".<ref>{{harvtxt|von Neumann|1955}} defines a Hilbert space via a countable Hilbert basis, which amounts to an isometric isomorphism with ''ℓ''<sup>2</sup>.  The convention still persists in most rigorous treatments of quantum mechanics; see for instance {{harvnb|Sobrino|1996|loc=Appendix B}}.</ref> Even in [[quantum field theory]], most of the Hilbert spaces are in fact separable, as stipulated by the [[Wightman axioms]].  However, it is sometimes argued that non-separable Hilbert spaces are also important in quantum field theory, roughly because the systems in the theory possess an infinite number of [[degrees of freedom (mechanics)|degrees of freedom]] and any infinite [[Hilbert tensor product]] (of spaces of dimension greater than one) is non-separable.<ref name="Streater">{{harvnb|Streater|Wightman|1964|pp=86&ndash;87}}</ref>  For instance, a [[bosonic field]] can be naturally thought of as an element of a tensor product whose factors represent harmonic oscillators at each point of space. From this perspective, the natural state space of a boson might seem to be a non-separable space.<ref name="Streater"/>  However, it is only a small separable subspace of the full tensor product that can contain physically meaningful fields (on which the observables can be defined).  Another non-separable Hilbert space models the state of an infinite collection of particles in an unbounded region of space. An orthonormal basis of the space is indexed by the density of the particles, a continuous parameter, and since the set of possible densities is uncountable, the basis is not countable.<ref name="Streater"/>
 
==Orthogonal complements and projections==
If ''S'' is a subset of a Hilbert space ''H'', the set of vectors orthogonal to ''S'' is defined by
:<math>S^\perp = \left\{ x \in H : \langle x, s \rangle = 0\ \forall s \in S \right\}.</math>
''S''<sup>⊥</sup> is a [[closed set|closed]] subspace of ''H'' (as can be shown easily using the linearity and continuity of the inner product) and so is itself a Hilbert space. If ''V'' is a closed subspace of ''H'', then ''V''<sup>⊥</sup> is called the ''orthogonal complement'' of ''V''. In fact, every ''x'' in ''H'' can then be written uniquely as ''x'' = ''v'' + ''w'', with ''v'' in ''V'' and ''w'' in ''V''<sup>⊥</sup>. Therefore, ''H'' is the internal Hilbert direct sum of ''V'' and ''V''<sup>⊥</sup>.
 
The linear operator P<sub>''V''</sub> : ''H'' → ''H'' that maps ''x'' to ''v'' is called the ''orthogonal projection'' onto ''V''. There is a [[natural transformation|natural]] one-to-one correspondence between the set of all closed subspaces of ''H'' and the set of all bounded self-adjoint operators ''P'' such that ''P''<sup>2</sup>&nbsp;=&nbsp;''P''. Specifically,
 
:'''Theorem'''. The orthogonal projection P<sub>''V''</sub> is a self-adjoint linear operator on ''H'' of norm ≤ 1 with the property P<sup>2</sup><sub>''V''</sub> = P<sub>''V''</sub>. Moreover, any self-adjoint linear operator ''E'' such that ''E''<sup>2</sup> = ''E'' is of the form P<sub>''V''</sub>, where ''V'' is the range of ''E''. For every ''x'' in ''H'', P<sub>''V''</sub>(''x'') is the unique element ''v'' of ''V'' that minimizes the distance ||''x'' − ''v''||.
 
This provides the geometrical interpretation of ''P<sub>V</sub>''(''x''): it is the best approximation to ''x'' by elements of ''V''.<ref>{{harvnb|Young|1988|loc=Theorem 15.3}}</ref>
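
A finite-dimensional sketch of the orthogonal projection (Python with NumPy; the subspace ''V'' is the column space of an arbitrarily chosen matrix, used only for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Sketch: the orthogonal projection onto V = span of the columns of a matrix,
# built here as P_V = Q Q^T with Q an orthonormal basis of V from a QR factorisation.
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 2))
Q, _ = np.linalg.qr(A)
P = Q @ Q.T

assert np.allclose(P, P.T)          # self-adjoint
assert np.allclose(P @ P, P)        # idempotent

x = rng.standard_normal(6)
v = P @ x
# P_V(x) is at least as close to x as any other element of V we try
for _ in range(100):
    w = A @ rng.standard_normal(2)  # a random element of V
    assert np.linalg.norm(x - v) <= np.linalg.norm(x - w) + 1e-12
</syntaxhighlight>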
 
Projections ''P<sub>U</sub>'' and ''P<sub>V</sub>'' are called mutually orthogonal if ''P''<sub>''U''</sub>''P''<sub>''V''</sub> = 0.  This is equivalent to ''U'' and ''V'' being orthogonal as subspaces of ''H''. The sum of the two projections ''P''<sub>''U''</sub> and ''P''<sub>''V''</sub> is a projection only if ''U'' and ''V'' are orthogonal to each other, and in that case ''P''<sub>''U''</sub>&nbsp;+&nbsp;''P''<sub>''V''</sub>&nbsp;=&nbsp;''P''<sub>''U''+''V''</sub>.  The composite ''P''<sub>''U''</sub>''P''<sub>''V''</sub> is generally not a projection; in fact, the composite is a projection if and only if the two projections commute, and in that case ''P''<sub>''U''</sub>''P''<sub>''V''</sub> = ''P''<sub>''U''∩''V''</sub>.
 
By restricting the codomain to the Hilbert space ''V'', the orthogonal projection ''P''<sub>''V''</sub> gives rise to a projection mapping π: ''H'' → ''V''; it is the adjoint of the [[inclusion mapping]]
 
:<math>i: V \to H,</math>
 
meaning that
 
:<math>\langle i x, y\rangle_H = \langle x, \pi y\rangle_V</math>
 
for all ''x''&nbsp;∈&nbsp;''V'' and ''y''&nbsp;∈&nbsp;''H''.
 
The operator norm of the orthogonal projection ''P''<sub>''V''</sub> onto a non-zero closed subspace ''V'' is equal to one:
:<math>\|P_V\| = \sup_{x\in H, x\not=0} \frac{\|P_V x\|}{\|x\|}=1.</math>
Every closed subspace ''V'' of a Hilbert space is therefore the image of an operator ''P'' of norm one such that ''P''<sup>2</sup>&nbsp;=&nbsp;''P''. The property of possessing appropriate projection operators characterizes Hilbert spaces:<ref>{{harvnb|Kakutani|1939}}</ref>
 
*A Banach space of dimension higher than 2 is (isometrically) a Hilbert space if and only if, for every closed subspace ''V'', there is an operator ''P''<sub>''V''</sub> of norm one whose image is ''V'' such that <math>P_V^2=P_V.</math>
 
While this result characterizes the metric structure of a Hilbert space, the structure of a Hilbert space as a [[topological vector space]] can itself be characterized in terms of the presence of complementary subspaces:<ref>{{harvnb|Lindenstrauss|Tzafriri|1971}}</ref>
*A Banach space ''X'' is topologically and linearly isomorphic to a Hilbert space if and only if, to every closed subspace ''V'', there is a closed subspace ''W'' such that ''X'' is equal to the internal direct sum ''V'' ⊕ ''W''.
 
The orthogonal complement satisfies some more elementary results.  It is a [[monotone function]] in the sense that if ''U'' ⊂ ''V'', then <math>V^\perp\subseteq U^\perp</math> with equality holding if and only if ''V'' is contained in the [[closure (topology)|closure]] of ''U''.  This result is a special case of the [[Hahn–Banach theorem]].  The closure of a subspace can be completely characterized in terms of the orthogonal complement: If ''V'' is a subspace of ''H'', then the closure of ''V'' is equal to <math>V^{\bot\bot}</math>. The orthogonal complement is thus a [[Galois connection]] on the [[partial order]] of subspaces of a Hilbert space.  In general, the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements:<ref>{{harvnb|Halmos|1957|loc=§12}}</ref> <math>\textstyle{\left(\sum_i V_i\right)^\perp = \bigcap_i V_i^\perp}</math>.  If the ''V''<sub>''i''</sub> are in addition closed, then <math>\textstyle{\overline{\sum_i V_i^\perp} = \left(\bigcap_i V_i\right)^\perp}</math>.
 
==Spectral theory==
There is a well-developed [[spectral theory]] for self-adjoint operators in a Hilbert space that is roughly analogous to the study of [[symmetric matrix|symmetric matrices]] over the reals or self-adjoint matrices over the complex numbers.<ref>A general account of spectral theory in Hilbert spaces can be found in {{harvtxt|Riesz|Sz Nagy|1990}}.  A more sophisticated account in the language of C<sup>&lowast;</sup>-algebras is in {{harvtxt|Rudin|1973}} or {{harvtxt|Kadison|Ringrose|1997}}.</ref> In the same sense, one can obtain a "diagonalization" of a self-adjoint operator as a suitable sum (actually an integral) of orthogonal projection operators.
 
The [[spectrum of an operator]] ''T'', denoted σ(''T''), is the set of complex numbers λ such that ''T''&nbsp;&minus;&nbsp;λ lacks a continuous inverse.  If ''T'' is bounded, then the spectrum is always a [[compact set]] in the complex plane, and lies inside the disc <math>\scriptstyle{|z|\le\|T\|.}</math>  If ''T'' is self-adjoint, then the spectrum is real.  In fact, it is contained in the interval [''m'',''M''] where
:<math>m=\inf_{\|x\|=1}\langle Tx, x\rangle,\quad M=\sup_{\|x\|=1}\langle Tx, x\rangle.</math>
Moreover, ''m'' and ''M'' are both actually contained within the spectrum.
 
The eigenspaces of an operator ''T'' are given by
:<math>H_\lambda = \ker(T-\lambda).\ </math>
Unlike with finite matrices, not every element of the spectrum of ''T'' must be an eigenvalue: the linear operator ''T''&nbsp;&minus;&nbsp;λ may only lack an inverse because it is not surjective.  Elements of the spectrum of an operator in the general sense are known as ''spectral values''.  Since spectral values need not be eigenvalues, the spectral decomposition is often more subtle than in finite dimensions.
 
However, the [[spectral theorem]] of a self-adjoint operator ''T'' takes a particularly simple form if, in addition, ''T'' is assumed to be a [[compact operator]].  The [[Compact operator on Hilbert space#Spectral theorem|spectral theorem for compact self-adjoint operators]] states:<ref>See, for instance, {{harvtxt|Riesz|Sz Nagy|1990|loc=Chapter VI}} or {{harvnb|Weidmann|1980|loc=Chapter 7}}.  This result was already known to {{harvtxt|Schmidt|1908}} in the case of operators arising from integral kernels.</ref>
* A compact self-adjoint operator ''T'' has only countably (or finitely) many spectral values.  The spectrum of ''T'' has no [[limit point]] in the complex plane except possibly zero.  The eigenspaces of ''T'' decompose ''H'' into an orthogonal direct sum:
*:<math>H=\bigoplus_{\lambda\in\sigma(T)}H_\lambda.</math>
:Moreover, if ''E''<sub>&lambda;</sub> denotes the orthogonal projection onto the eigenspace ''H''<sub>&lambda;</sub>, then
::<math>T = \sum_{\lambda\in\sigma(T)} \lambda E_\lambda,</math>
:where the sum converges with respect to the norm on B(''H'').
This theorem plays a fundamental role in the theory of [[integral equation]]s, as many integral operators are compact, in particular those that arise from [[Hilbert–Schmidt operator]]s.
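
The finite-dimensional analogue is the eigendecomposition of a symmetric matrix. The following sketch (Python with NumPy; the matrix is randomly generated, so its eigenvalues are almost surely simple and each eigenspace is one-dimensional) reconstructs ''T'' as a sum of scaled projections onto its eigenspaces:

<syntaxhighlight lang="python">
import numpy as np

# Sketch of the finite-dimensional analogue: a real symmetric matrix T decomposes
# as T = sum_lambda lambda * E_lambda, with E_lambda the orthogonal projection
# onto the eigenspace for lambda.
rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
T = (M + M.T) / 2                      # a self-adjoint (symmetric) operator on R^4

eigvals, eigvecs = np.linalg.eigh(T)
reconstructed = np.zeros_like(T)
for lam, v in zip(eigvals, eigvecs.T):
    E = np.outer(v, v)                 # projection onto the (here one-dimensional) eigenspace
    reconstructed += lam * E
assert np.allclose(reconstructed, T)
</syntaxhighlight>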
 
The general spectral theorem for self-adjoint operators involves a kind of operator-valued [[Riemann–Stieltjes integral]], rather than an infinite summation.<ref>{{harvnb|Riesz|Sz Nagy|1990|loc=§§107–108}}</ref>  The ''spectral family'' associated to ''T'' assigns to each real number λ an operator ''E''<sub>λ</sub>, which is the projection onto the nullspace of the operator (''T'' − λ)<sup>+</sup>, where the positive part of a self-adjoint operator is defined by
:<math>A^+ = \frac{1}{2}\left(\sqrt{A^2}+A\right).</math>
The operators ''E''<sub>λ</sub> are monotone increasing relative to the partial order defined on self-adjoint operators; the eigenvalues correspond precisely to the jump discontinuities.  One has the spectral theorem, which asserts
:<math>T = \int_\mathbb{R} \lambda\, dE_\lambda.</math>
The integral is understood as a Riemann–Stieltjes integral, convergent with respect to the norm on B(''H'').  In particular, one has the ordinary scalar-valued integral representation
:<math>\langle Tx, y\rangle = \int_{\mathbb{R}} \lambda\,d\langle E_\lambda x,y\rangle.</math>
A somewhat similar spectral decomposition holds for normal operators, although because the spectrum may now contain non-real complex numbers, the operator-valued Stieltjes measure ''dE''<sub>λ</sub> must instead be replaced by a [[resolution of the identity]].
 
A major application of spectral methods is the [[spectral mapping theorem]], which allows one to apply to a self-adjoint operator ''T'' any continuous complex function ''f'' defined on the spectrum of ''T'' by forming the integral
:<math>f(T) = \int_{\sigma(T)} f(\lambda)\,dE_\lambda.</math>
The resulting [[continuous functional calculus]] has applications in particular to [[pseudodifferential operators]].<ref>{{harvnb|Shubin|1987}}</ref>
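
In finite dimensions the functional calculus amounts to applying ''f'' to the eigenvalues. The following sketch (Python with NumPy; the polynomial ''f'' is a hypothetical choice made so the result can be checked by direct matrix arithmetic) illustrates this:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: the functional calculus in the finite-dimensional case, where
# f(T) = sum_lambda f(lambda) E_lambda; checked here for f(t) = t**3 + 2*t.
rng = np.random.default_rng(6)
M = rng.standard_normal((4, 4))
T = (M + M.T) / 2                                 # a symmetric operator on R^4

eigvals, eigvecs = np.linalg.eigh(T)
f = lambda t: t ** 3 + 2 * t
f_T = (eigvecs * f(eigvals)) @ eigvecs.T          # sum_k f(lambda_k) v_k v_k^T
assert np.allclose(f_T, T @ T @ T + 2 * T)
</syntaxhighlight>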
 
The spectral theory of ''unbounded'' self-adjoint operators is only marginally more difficult than for bounded operators.  The spectrum of an unbounded operator is defined in precisely the same way as for bounded operators: λ is a spectral value if the [[resolvent operator]]
:<math>R_\lambda = (T-\lambda)^{-1}</math>
fails to be a well-defined continuous operator.  The self-adjointness of ''T'' still guarantees that the spectrum is real.  Thus the essential idea of working with unbounded operators is to look instead at the resolvent ''R''<sub>λ</sub> where λ is non-real.  This is a ''bounded'' normal operator, which admits a spectral representation that can then be transferred to a spectral representation of ''T'' itself.  A similar strategy is used, for instance, to study the spectrum of the Laplace operator: rather than address the operator directly, one instead looks at an associated resolvent such as a [[Riesz potential]] or [[Bessel potential]].
 
A precise version of the spectral theorem in this case is:<ref>{{harvnb|Rudin|1973|loc=Theorem 13.30}}.</ref>
:Given a densely defined self-adjoint operator ''T'' on a Hilbert space ''H'', there corresponds a unique [[resolution of the identity]] ''E'' on the Borel sets of '''R''', such that
::<math>\langle Tx, y\rangle = \int_\mathbb{R} \lambda\,dE_{x,y}(\lambda)</math>
:for all ''x''&nbsp;&isin;&nbsp;''D''(''T'') and ''y''&nbsp;&isin;&nbsp;''H''.  The spectral measure ''E'' is concentrated on the spectrum of ''T''.
There is also a version of the spectral theorem that applies to unbounded normal operators.
 
==See also==
{{portal|Mathematics}}
* [[Hilbert C*-module]]
* [[Hilbert algebra (disambiguation)|Hilbert algebra]]
* [[Hilbert manifold]]
* [[Rigged Hilbert space]]
* [[Topologies on the set of operators on a Hilbert space]]
* [[Operator theory]]
* [[Hadamard space]]
 
==Notes==
{{reflist|colwidth=30em}}
 
==References==
{{Refbegin|colwidth=30em}}
*{{Citation | last1=Bachman | first1=George | last2=Narici | first2=Lawrence | last3=Beckenstein | first3=Edward | title=Fourier and wavelet analysis | publisher=[[Springer-Verlag]] | location=Berlin, New York | series=Universitext | isbn=978-0-387-98899-3 | mr=1729490 | year=2000}}.
* {{citation|title=Partial differential equations|first=Lipman|last= Bers|authorlink=Lipman Bers|first2=Fritz|last2= John|authorlink2=Fritz John|first3= Martin|last3= Schechter|publisher= American Mathematical Society|year=1981|isbn=0-8218-0049-3}}.
* {{citation|first=Nicolas|last=Bourbaki|authorlink=Nicolas Bourbaki|title=Spectral theories|series=Elements of mathematics|publisher= Springer-Verlag|publication-place=Berlin|year=1986|isbn=0-201-00767-3}}.
* {{citation|first=Nicolas|last=Bourbaki|authorlink=Nicolas Bourbaki|title=Topological vector spaces|series=Elements of mathematics|publisher= Springer-Verlag|publication-place=Berlin|year=1987|isbn=978-3-540-13627-9}}.
*{{citation| authorlink1 = Carl Benjamin Boyer|last1=Boyer|first1=Carl Benjamin|last2=Merzbach|first2=Uta C| year = 1991| title = A History of Mathematics| edition= 2nd| publisher = John Wiley & Sons, Inc.|isbn=0-471-54397-7}}.
* {{citation|first1=S.|last1=Brenner|first2=R. L.|last2=Scott|title=The Mathematical Theory of Finite Element Methods|edition=2nd|publisher=Springer|year=2005|isbn=0-387-95451-1}}.
* {{Citation | last1=Buttazzo | first1=Giuseppe | last2=Giaquinta | first2=Mariano | last3=Hildebrandt | first3=Stefan | title=One-dimensional variational problems | publisher=The Clarendon Press Oxford University Press | series=Oxford Lecture Series in Mathematics and its Applications | isbn=978-0-19-850465-8 | mr=1694383 | year=1998 | volume=15}}.
* {{citation|first=J. A.|last=Clarkson|title=Uniformly convex spaces|journal=Trans. Amer. Math. Soc.|volume=40|year=1936|pages=396–414|doi=10.2307/1989630|issue=3|jstor=1989630}}.
* {{citation|first=Richard|last=Courant|authorlink=Richard Courant|first2=David|last2=Hilbert|authorlink2=David Hilbert|title=Methods of Mathematical Physics, Vol. I|publisher=Interscience|year=1953}}.
* {{citation| first = Jean| last = Dieudonné| authorlink=Jean Dieudonné|title= Foundations of Modern Analysis|publisher = Academic Press| year= 1960}}.
* {{citation|first=P.A.M.|last=Dirac|authorlink=Paul Dirac|title=[[Principles of Quantum Mechanics]] |publisher=Clarendon Press|publication-place=Oxford|year=1930}}.
* {{citation|first1=N.|last1=Dunford|first2=J.T.|last2=Schwartz|authorlink2=Jacob T. Schwartz|title=Linear operators, Parts I and II|publisher=Wiley-Interscience|year=1958}}.
* {{citation|first=P.|last= Duren|title=Theory of H<sup>p</sup>-Spaces|year=1970|publisher= Academic Press|publication-place= New York}}.
*{{citation |title=Fourier analysis and its application  |first=Gerald B.|last=Folland |url=http://books.google.com/books?as_isbn=0-8218-4790-2 |isbn=0-8218-4790-2 |publisher=American Mathematical Society Bookstore |year=2009 |edition=Reprint of Wadsworth and Brooks/Cole 1992}}.
* {{citation|first=Gerald B.|last=Folland|title=Harmonic analysis in phase space|series=Annals of Mathematics Studies|volume= 122|publisher=Princeton University Press|year= 1989|isbn= 0-691-08527-7}}.
* {{citation|first=Maurice|last=Fréchet|title=Sur les ensembles de fonctions et les opérations linéaires|journal=C. R. Acad. Sci. Paris|volume=144|pages=1414–1416|year=1907}}.
* {{citation|first=Maurice|last=Fréchet|title=Sur les opérations linéaires|year=1904–1907}}.
* {{citation|first=Enrico|last=Giusti|title=Direct Methods in the Calculus of Variations|publisher=World Scientific|year=2003|isbn=981-238-043-4}}.
* {{Citation | last1=Grattan-Guinness | first1=Ivor | title=The search for mathematical roots, 1870–1940 | publisher=[[Princeton University Press]] | series=Princeton Paperbacks | isbn=978-0-691-05858-0 | mr=1807717 | year=2000}}.
* {{citation| last =Halmos| first =Paul| authorlink=Paul Halmos|title=Introduction to Hilbert Space and the Theory of Spectral Multiplicity|year=1957|publisher=Chelsea Pub. Co}}.
* {{citation| last=Halmos|first=Paul|authorlink=Paul Halmos|title=A Hilbert Space Problem Book|year=1982|publisher=Springer-Verlag|isbn=0-387-90685-1}}.
* {{citation| last = Hewitt| first = Edwin| last2 = Stromberg| first2 = Karl| title = Real and Abstract Analysis| year = 1965| publisher = Springer-Verlag| location = New York}}.
* {{citation| last1=Hilbert| first1=David| authorlink1 = David Hilbert| last2=Nordheim| first2 = Lothar (Wolfgang)| authorlink2= Lothar Nordheim| last3= von Neumann| first3= John|authorlink3= John von Neumann| title = Über die Grundlagen der Quantenmechanik| url=http://dz-srv1.sub.uni-goettingen.de/sub/digbib/loader?ht=VIEW&did=D27779| journal = Mathematische Annalen| volume = 98| pages = 1–30| year = 1927| doi=10.1007/BF01451579}}.
* {{citation|first=Mark|last=Kac|authorlink=Mark Kac|title=Can one hear the shape of a drum?|journal=[[American Mathematical Monthly]]|volume=73|issue=4, part 2|year=1966|pages=1&ndash;23|doi=10.2307/2313748|jstor=2313748}}.
*{{Citation | last1=Kadison | first1=Richard V. | last2=Ringrose | first2=John R. | title=Fundamentals of the theory of operator algebras. Vol. I | publisher=[[American Mathematical Society]] | location=Providence, R.I. | series=Graduate Studies in Mathematics | isbn=978-0-8218-0819-1 | mr=1468229 | year=1997 | volume=15}}.
*{{Citation | last1=Kakutani | first1=Shizuo | author1-link=Shizuo Kakutani | title=Some characterizations of Euclidean space | mr=0000895 | year=1939 | journal=Japanese Journal of Mathematics | volume=16 | pages=93–97}}.
*{{Citation | last1=Kline | first1=Morris | author1-link=Morris Kline | title=Mathematical thought from ancient to modern times, Volume 3 | year=1972 | publisher=[[Oxford University Press]] | edition=3rd | isbn=978-0-19-506137-6 | publication-date=1990}}.
* {{citation| title = Introductory Real Analysis| last1 = Kolmogorov| first1 = Andrey| authorlink1 = Andrey Kolmogorov| last2= Fomin|first2= Sergei V.| authorlink2 = Sergei Fomin| year = 1970| edition = Revised English edition, trans. by Richard A. Silverman (1975)| publisher = Dover Press| isbn= 0-486-61226-0}}.
* {{Citation | last1=Krantz | first1=Steven G. | authorlink=Steven Krantz|title=Function Theory of Several Complex Variables | publisher=[[American Mathematical Society]] | location=Providence, R.I. | isbn=978-0-8218-2724-6 | year=2002}}.
* {{citation |title=Applied analysis |first=Cornelius|last=Lanczos|isbn=0-486-65656-X |publisher=Dover Publications |year=1988 |edition=Reprint of 1956 Prentice-Hall |url=http://books.google.com/books?as_isbn=0-486-65656-X}}.
*{{Citation | last1=Lindenstrauss | first1=J. | last2=Tzafriri | first2=L. | title=On the complemented subspaces problem | mr=0276734 | year=1971 | journal=Israel Journal of Mathematics | issn=0021-2172 | volume=9 | pages=263–269 | doi=10.1007/BF02771592 | issue=2}}.
*{{MacTutor|class=HistTopics|id=Abstract_linear_spaces|title=Abstract linear spaces|date=1996}}.
*{{citation|first=Henri|last=Lebesgue|title=Leçons sur l'intégration et la recherche des fonctions primitives|url=http://books.google.com/?id=VfUKAAAAYAAJ&dq=%22Lebesgue%22%20%22Le%C3%A7ons%20sur%20l'int%C3%A9gration%20et%20la%20recherche%20des%20fonctions%20...%22&pg=PA1#v=onepage&q=|year=1904|publisher=Gauthier-Villars}}.
* {{springer|id=H/h047380|title=Hilbert space|author=B.M. Levitan}}.
* {{Citation | last1=Marsden | first1=Jerrold E. | author1-link=Jerrold E. Marsden | title=Elementary classical analysis | publisher=W. H. Freeman and Co. | mr=0357693 | year=1974}}.
* {{citation| last=Prugovečki|first=Eduard|title=Quantum mechanics in Hilbert space|publisher=Dover|edition=2nd|year=1981|publication-date=2006|isbn=978-0-486-45327-9}}.
* {{citation|first=Michael|last=Reed|authorlink=Michael Reed|first2=Barry|last2=Simon|authorlink2=Barry Simon|series=Methods of Modern Mathematical Physics|title=Functional Analysis|publisher=Academic Press|year=1980|isbn= 0-12-585050-6}}.
* {{citation|first=Michael|last=Reed|authorlink=Michael Reed|first2=Barry|last2=Simon|authorlink2=Barry Simon|title=Fourier Analysis, Self-Adjointness|series=Methods of Modern Mathematical Physics|publisher=Academic Press|year=1975|isbn=0-12-585002-6}}.
* {{citation| first=Frigyes|last=Riesz|authorlink=Frigyes Riesz|title=Sur une espèce de Géométrie analytique des systèmes de fonctions sommables|journal=C. R. Acad. Sci. Paris|volume=144|pages=1409–1411|year=1907}}.
* {{citation|first=Frigyes|last=Riesz|authorlink=Frigyes Riesz|title=Zur Theorie des Hilbertschen Raumes|journal=Acta Sci. Math. Szeged|volume=7|pages=34–38|year=1934}}.
* {{citation|first=Frigyes|last=Riesz|authorlink=Frigyes Riesz|first2=Béla|last2=Sz.-Nagy|authorlink2=Béla Szőkefalvi-Nagy|title=Functional analysis|publisher=Dover|year=1990|isbn= 0-486-66289-6}}.
* {{citation| first=Walter|last=Rudin|authorlink=Walter Rudin|title=Functional analysis|publisher=Tata McGraw-Hill|year=1973}}.
*{{citation|first=Walter|last=Rudin|authorlink=Walter Rudin|title=Real and Complex Analysis|year=1987|publisher=McGraw-Hill|isbn=0-07-100276-6}}.
* {{citation|first=Stanisław|last=Saks|authorlink=Stanisław Saks|title=Theory of the integral|publisher=Dover|year=2005|edition=2nd Dover|isbn=978-0-486-44648-6}}; originally published ''Monografje Matematyczne'', vol. 7, Warszawa, 1937.
* {{citation| last=Schmidt| first=Erhard|authorlink=Erhard Schmidt|title=Über die Auflösung linearer Gleichungen mit unendlich vielen Unbekannten|journal=Rend. Circ. Mat. Palermo|volume=25|pages=63–77|year=1908| doi=10.1007/BF03029116}}.
*{{Citation | last1=Shubin | first1=M. A. | title=Pseudodifferential operators and spectral theory | publisher=[[Springer-Verlag]] | location=Berlin, New York | series=Springer Series in Soviet Mathematics | isbn=978-3-540-13621-7 | mr=883081 | year=1987}}.
*{{Citation | last1=Sobrino | first1=Luis | title=Elements of non-relativistic quantum mechanics | publisher=World Scientific Publishing Co. Inc. | location=River Edge, NJ | isbn=978-981-02-2386-1 | mr=1626401 | year=1996}}.
* {{citation|title=Calculus: Concepts and Contexts|edition=3rd|first=James|last=Stewart|publisher=Thomson/Brooks/Cole|year=2006}}.
*{{citation|last=Stein|first=E|title=Singular Integrals and Differentiability Properties of Functions|publisher=Princeton Univ. Press|year=1970| isbn= 0-691-08079-8}}.
*{{citation|last1=Stein|first1=Elias|authorlink1=Elias Stein|first2=Guido|last2=Weiss|authorlink2=Guido Weiss|title=Introduction to Fourier Analysis on Euclidean Spaces|publisher=Princeton University Press|year=1971|isbn=978-0-691-08078-9|location=Princeton, N.J.}}.
* {{citation|last1=Streater|first1=Ray|authorlink1=Ray Streater|last2=Wightman|first2=Arthur|authorlink2=Arthur Wightman|title= PCT, Spin and Statistics and All That|year=1964|publisher=W. A. Benjamin, Inc}}.
* {{citation| last=Titchmarsh|first=Edward Charles|authorlink=Edward Charles Titchmarsh|title=Eigenfunction expansions, part 1|year=1946|publisher=Clarendon Press|publication-place=Oxford University}}.
*{{citation|first=François|last=Trèves|title=Topological Vector Spaces, Distributions and Kernels|publisher=Academic Press|year=1967}}.
* {{citation| last=von Neumann| first=John| authorlink=John von Neumann| title=Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren| journal=Mathematische Annalen| volume = 102| pages = 49–131| year = 1929| doi=10.1007/BF01782338}}.
* {{citation|first=John|last=von Neumann|authorlink=John von Neumann|title=Physical Applications of the Ergodic Hypothesis|year=1932|journal=Proc Natl Acad Sci USA|volume=18|pages=263–266|doi=10.1073/pnas.18.3.263|pmid=16587674|issue=3|pmc=1076204|jstor=86260|bibcode = 1932PNAS...18..263N }}.
*{{Citation | last1=von Neumann | first1=John | author1-link=John von Neumann | title=Mathematical foundations of quantum mechanics | publisher=[[Princeton University Press]] | series=Princeton Landmarks in Mathematics | isbn=978-0-691-02893-4 | mr=1435976 | publication-date=1996|year=1955}}.
*{{Citation | last1=Warner | first1=Frank | title=Foundations of Differentiable Manifolds and Lie Groups | publisher=[[Springer-Verlag]] | location=Berlin, New York | isbn=978-0-387-90894-6 | year=1983}}.
*{{Citation | last1=Weidmann | first1=Joachim | title=Linear operators in Hilbert spaces | publisher=[[Springer-Verlag]] | location=Berlin, New York | series=Graduate Texts in Mathematics | isbn=978-0-387-90427-6 | mr=566954 | year=1980 | volume=68}}.
* {{citation| last=Weyl| first=Hermann| authorlink=Hermann Weyl| title = The Theory of Groups and Quantum Mechanics| year = 1931| publisher = Dover Press| edition = English 1950| isbn= 0-486-60269-9}}.
* {{citation| last=Young|first=Nicholas|title=An introduction to Hilbert space|publisher=Cambridge University Press|year=1988|zbl=0645.46024|isbn=0-521-33071-8}}.
{{Refend}}
 
==External links==
{{Wikibooks|Functional Analysis/Hilbert spaces}}
* {{springer|title=Hilbert space|id=p/h047380}}
* [http://mathworld.wolfram.com/HilbertSpace.html Hilbert space at Mathworld]
* [http://terrytao.wordpress.com/2009/01/17/254a-notes-5-hilbert-spaces/ 245B, notes 5: Hilbert spaces] by [[Terence Tao]]
 
{{good article}}
 
{{Functional Analysis}}
 
{{DEFAULTSORT:Hilbert Space}}
[[Category:Concepts in physics]]
[[Category:Hilbert space|*]]
[[Category:Linear algebra]]
[[Category:Operator theory]]
[[Category:Quantum mechanics]]
