# Operator (physics)

In physics, an operator is a function over the space of physical states: applied to one physical state, it yields another physical state, often together with some additional relevant information.

The simplest example of the utility of operators is the study of symmetry. Because of this, they are a very useful tool in classical mechanics. In quantum mechanics, on the other hand, they are an intrinsic part of the formulation of the theory.

## Operators in classical mechanics

In classical mechanics, the motion of a particle (or system of particles) is completely determined by the Lagrangian ${\displaystyle L(q,{\dot {q}},t)}$ or equivalently the Hamiltonian ${\displaystyle H(q,p,t)}$, functions of the generalized coordinates q, the generalized velocities ${\displaystyle {\dot {q}}=\mathrm {d} q/\mathrm {d} t}$ and their conjugate momenta:

${\displaystyle p={\frac {\partial L}{\partial {\dot {q}}}}}$

If either L or H is independent of a generalized coordinate q (meaning that L and H, and hence the dynamics of the particle, do not change when q changes), then the momentum conjugate to that coordinate is conserved. This is part of Noether's theorem: the invariance of the motion with respect to the coordinate q is a symmetry. Operators in classical mechanics are related to these symmetries.

More technically, when H is invariant under the action of a certain group of transformations G:

${\displaystyle S\in G,H(S(q,p))=H(q,p)}$.

the elements of G are physical operators, which map physical states among themselves.

### Table of classical mechanics operators

Here ${\displaystyle R({\hat {\boldsymbol {n}}},\theta )}$ denotes the rotation matrix about an axis defined by the unit vector ${\displaystyle {\hat {\boldsymbol {n}}}}$ through the angle θ.

## Concept of generator

If the transformation is infinitesimal, the operator action should be of the form

${\displaystyle I+\epsilon A}$

where ${\displaystyle I}$ is the identity operator, ${\displaystyle \epsilon }$ is a parameter with a small value, and ${\displaystyle A}$ will depend on the transformation at hand, and is called a generator of the group. Again, as a simple example, we will derive the generator of the space translations on 1D functions.

The translation operator ${\displaystyle T_{a}}$ acts on functions as ${\displaystyle T_{a}f(x)=f(x-a)}$. If ${\displaystyle a=\epsilon }$ is infinitesimal, then we may write

${\displaystyle T_{\epsilon }f(x)=f(x-\epsilon )\approx f(x)-\epsilon f'(x).}$

This formula may be rewritten as

${\displaystyle T_{\epsilon }f(x)=(I-\epsilon D)f(x)}$

where ${\displaystyle D}$ is the generator of the translation group, which in this case happens to be the derivative operator. Thus, it is said that the generator of translations is the derivative.
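As a quick numerical illustration of this first-order statement (a sketch; the test function sin, the point x0 and the step eps are arbitrary choices), one can compare (I − εD)f with the exact shift f(x − ε):

```python
import numpy as np

# First-order action of the infinitesimal translation (I - eps*D) on f = sin,
# compared with the exact shift f(x - eps).
eps = 1e-3
x0 = 0.7

approx = np.sin(x0) - eps * np.cos(x0)   # (I - eps D) f, with D f = f' = cos
exact = np.sin(x0 - eps)                 # T_eps f

# Agreement is first order: the error is O(eps^2)
assert abs(approx - exact) < eps**2
```

The residual error comes from the second-order term of the Taylor expansion, as expected for an infinitesimal approximation.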

## The exponential map

The whole group may, under normal circumstances, be recovered from its generators via the exponential map. In the case of translations the idea works as follows.

The translation for a finite value of ${\displaystyle a}$ may be obtained by repeated application of the infinitesimal translation:

${\displaystyle T_{a}f(x)=\lim _{N\to \infty }T_{a/N}\cdots T_{a/N}f(x)}$

with the ${\displaystyle \cdots }$ standing for the application ${\displaystyle N}$ times. If ${\displaystyle N}$ is large, each of the factors may be considered to be infinitesimal:

${\displaystyle T_{a}f(x)=\lim _{N\to \infty }(I-(a/N)D)^{N}f(x).}$

But this limit may be rewritten as an exponential:

${\displaystyle T_{a}f(x)=\exp(-aD)f(x).}$

To be convinced of the validity of this formal expression, we may expand the exponential in a power series:

${\displaystyle T_{a}f(x)=\left(I-aD+{a^{2}D^{2} \over 2!}-{a^{3}D^{3} \over 3!}+\cdots \right)f(x).}$

The right-hand side may be rewritten as

${\displaystyle f(x)-af'(x)+{a^{2} \over 2!}f''(x)-{a^{3} \over 3!}f'''(x)+\cdots }$

which is just the Taylor expansion of ${\displaystyle f(x-a)}$, which was our original value for ${\displaystyle T_{a}f(x)}$.
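This series can be checked numerically (a sketch; the test function sin, the displacement a and the truncation order are arbitrary choices), using the fact that the n-th derivative of sin x is sin(x + nπ/2):

```python
import numpy as np
from math import factorial

# Partial sums of exp(-a D) f(x) = sum_n (-a)^n f^(n)(x) / n! for f = sin.
# The n-th derivative of sin is sin(x + n*pi/2).
a, x0 = 0.5, 1.2
series = sum((-a)**n * np.sin(x0 + n * np.pi / 2) / factorial(n)
             for n in range(20))

# The series converges to the translated value f(x0 - a)
assert abs(series - np.sin(x0 - a)) < 1e-12
```

Twenty terms already reproduce the exact translation to near machine precision, since the remainder falls off like a^n/n!.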

The mathematical properties of physical operators are a topic of great importance in itself. For further information, see C*-algebra and Gelfand-Naimark theorem.

## Operators in quantum mechanics

The mathematical formulation of quantum mechanics (QM) is built upon the concept of an operator.

The wavefunction of a system represents the probability amplitude for finding the system in a given state. The terms "wavefunction" and "state" are usually used interchangeably in the QM context.

Physical pure states in quantum mechanics are represented as unit-norm vectors (probabilities are normalized to one) in a special complex vector space: a Hilbert space. Time evolution in this vector space is given by the application of the evolution operator.

Any observable, i.e., any quantity which can be measured in a physical experiment, should be associated with a self-adjoint linear operator. The operators must yield real eigenvalues, since they are values which may come up as the result of the experiment. Mathematically this means the operators must be Hermitian.[1] The probability of each eigenvalue is related to the projection of the physical state on the subspace related to that eigenvalue. See below for mathematical details.

In the wave mechanics formulation of QM, the wavefunction varies with space and time, or equivalently momentum and time (see position and momentum space for details), so observables are differential operators.

In the matrix mechanics formulation, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices. Any other symmetry, mapping a physical state into another, should keep this restriction.

### Wavefunction


The wavefunction must be square-integrable (see Lp spaces), meaning:

${\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi ({\mathbf {r}})|^{2}{\rm {d}}^{3}{\mathbf {r}}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\psi ({\mathbf {r}})^{*}\psi ({\mathbf {r}}){\rm {d}}^{3}{\mathbf {r}}<\infty }$

and normalizable, so that:

${\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi ({\mathbf {r}})|^{2}{\rm {d}}^{3}{\mathbf {r}}=1}$

Two cases of eigenstates (and eigenvalues) are:

• a discrete basis of eigenstates ${\displaystyle |\phi _{i}\rangle }$:

${\displaystyle |\psi \rangle =\sum _{i}c_{i}|\phi _{i}\rangle }$

where the ci are complex numbers such that |ci|2 = ci*ci is the probability of measuring the state ${\displaystyle |\phi _{i}\rangle }$; the corresponding set of eigenvalues ai is also discrete, either finite or countably infinite;

• a continuum of eigenstates ${\displaystyle |\phi \rangle }$:

${\displaystyle |\psi \rangle =\int c(\phi )|\phi \rangle {\rm {d}}\phi }$

where c(φ) is a complex function such that |c(φ)|2 = c(φ)*c(φ) is the probability density of measuring the state ${\displaystyle |\phi \rangle }$; the corresponding set of eigenvalues a is uncountably infinite.
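The discrete case can be illustrated numerically (a sketch; the three-dimensional standard basis and the particular coefficients are arbitrary choices). The squared magnitudes of the expansion coefficients are the measurement probabilities, and for a normalized state they sum to one:

```python
import numpy as np

# A state |psi> expanded in a discrete orthonormal eigenbasis {|phi_i>}.
# Here the basis is the standard basis of C^3, and c_i are the components.
c = np.array([0.6, 0.0, 0.8j])       # expansion coefficients c_i
probs = np.abs(c)**2                  # |c_i|^2 = probability of measuring |phi_i>

assert np.isclose(probs.sum(), 1.0)   # normalization: probabilities sum to one
```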

### Linear operators in wave mechanics


Let ψ be the wavefunction for a quantum system, and ${\displaystyle {\hat {A}}}$ be any linear operator for some observable A (such as position, momentum, energy, angular momentum etc.), then

${\displaystyle {\hat {A}}\psi =a\psi ,}$

where:

• a is the eigenvalue of the operator, corresponding to the measured value of the observable,
• ψ is the eigenfunction of ${\displaystyle {\hat {A}}}$ for that eigenvalue.

If ψ is an eigenfunction of a given operator ${\displaystyle {\hat {A}}}$, then a definite quantity (the eigenvalue a) will be observed if a measurement of the observable A is made on the state ψ. Conversely, if ψ is not an eigenfunction of ${\displaystyle {\hat {A}}}$, then it has no eigenvalue for ${\displaystyle {\hat {A}}}$, and the observable does not have a single definite value in that case. Instead, measurements of the observable A will yield each eigenvalue with a certain probability, related to the decomposition of ψ with respect to the orthonormal eigenbasis of ${\displaystyle {\hat {A}}}$.

In bra–ket notation the above can be written:

${\displaystyle {\begin{aligned}&{\hat {A}}\psi ={\hat {A}}\psi (\mathbf {r} )={\hat {A}}\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \mid {\hat {A}}\mid \psi \right\rangle \\&a\psi =a\psi (\mathbf {r} )=a\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \mid a\mid \psi \right\rangle \\\end{aligned}}}$

Due to linearity, vector operators can be defined in any number of dimensions, since each component of the vector acts on the function separately. One mathematical example is the del operator, which is itself a vector (useful in momentum-related quantum operators, in the table below).

An operator in n-dimensional space can be written:

${\displaystyle \mathbf {\hat {A}} =\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}}$

where ej are basis vectors corresponding to each component operator Aj. Each component will yield a corresponding eigenvalue. Acting this on the wave function ψ:

${\displaystyle \mathbf {\hat {A}} \psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\sum _{j=1}^{n}\left(\mathbf {e} _{j}{\hat {A}}_{j}\psi \right)=\sum _{j=1}^{n}\left(\mathbf {e} _{j}a_{j}\psi \right)}$

in which

${\displaystyle {\hat {A}}_{j}\psi =a_{j}\psi .}$

In bra–ket notation:

${\displaystyle {\begin{aligned}&\mathbf {\hat {A}} \psi =\mathbf {\hat {A}} \psi (\mathbf {r} )=\mathbf {\hat {A}} \left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \mid \mathbf {\hat {A}} \mid \psi \right\rangle \\&\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi =\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\psi (\mathbf {r} )=\left(\sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\right)\left\langle \mathbf {r} \mid \psi \right\rangle =\left\langle \mathbf {r} \mid \sum _{j=1}^{n}\mathbf {e} _{j}{\hat {A}}_{j}\mid \psi \right\rangle \\\end{aligned}}\,\!}$

### Commutation of operators on Ψ


If two observables A and B have linear operators ${\displaystyle {\hat {A}}}$ and ${\displaystyle {\hat {B}}}$, the commutator is defined by,

${\displaystyle \left[{\hat {A}},{\hat {B}}\right]={\hat {A}}{\hat {B}}-{\hat {B}}{\hat {A}}}$

The commutator is itself a (composite) operator. Acting the commutator on ψ gives:

${\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi ={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi .}$

If ψ is an eigenfunction with eigenvalues a and b for observables A and B respectively, and if the operators commute:

${\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi =0,}$

then the observables A and B can be measured simultaneously with arbitrary precision, i.e. with uncertainties ${\displaystyle \Delta A=0}$ and ${\displaystyle \Delta B=0}$ at the same time. ψ is then said to be a simultaneous eigenfunction of A and B. To illustrate this:

${\displaystyle {\begin{aligned}\left[{\hat {A}},{\hat {B}}\right]\psi &={\hat {A}}{\hat {B}}\psi -{\hat {B}}{\hat {A}}\psi \\&=a(b\psi )-b(a\psi )\\&=0.\\\end{aligned}}}$

This shows that measuring A and B causes no shift of the state, i.e. the initial and final states are the same (there is no disturbance due to the measurement). Suppose we measure A and obtain the value a. We then measure B and obtain the value b. If we measure A again, we still obtain the same value a. Clearly the state ψ of the system is not destroyed, so we can measure A and B simultaneously with arbitrary precision.
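Simultaneous diagonalizability can be demonstrated with finite-dimensional operators (a sketch using NumPy; the 3×3 matrices, eigenvalues and random seed are arbitrary choices). Two Hermitian matrices built to be diagonal in the same rotated basis commute, and the eigenvectors of one diagonalize the other:

```python
import numpy as np

# Two commuting Hermitian (here real symmetric) matrices: both diagonal
# in the same basis after a common orthogonal rotation Q.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix

A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
B = Q @ np.diag([5.0, 7.0, 9.0]) @ Q.T

# They commute ...
assert np.allclose(A @ B, B @ A)

# ... and the eigenvectors of A also diagonalize B
_, V = np.linalg.eigh(A)
D = V.T @ B @ V
assert np.allclose(D, np.diag(np.diag(D)))     # off-diagonal entries vanish
```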

If the operators do not commute:

${\displaystyle \left[{\hat {A}},{\hat {B}}\right]\psi \neq 0,}$

they cannot be prepared simultaneously to arbitrary precision, and there is an uncertainty relation between the observables,

${\displaystyle \Delta A\,\Delta B\geq {\frac {1}{2}}\left|\langle [{\hat {A}},{\hat {B}}]\rangle \right|}$

which for position and momentum reduces to the familiar ${\displaystyle \Delta x\,\Delta p\geq {\frac {\hbar }{2}}}$. Notable non-commuting pairs are position and momentum, energy and time (the uncertainty relations), and the angular momentum components (spin, orbital and total) about any two orthogonal axes (such as Lx and Ly, or sy and sz).[2]
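The canonical example is the position–momentum commutator [x̂, p̂] = iħ, which can be verified numerically (a sketch; ħ is set to 1, and the grid, Gaussian test function and tolerance are arbitrary choices):

```python
import numpy as np

# Numerical check of the canonical commutator [x, p] = i*hbar (hbar = 1):
# apply X then P, and P then X, to a test wavefunction on a grid.
x = np.linspace(-5, 5, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2)                      # smooth test function

def ddx(f):
    # central finite difference, interior points only
    return (f[2:] - f[:-2]) / (2 * dx)

P_of_X = -1j * ddx(x * psi)              # P(X psi)
X_of_P = x[1:-1] * (-1j * ddx(psi))      # X(P psi)

comm = X_of_P - P_of_X                   # [X, P] psi on interior points

# [X, P] psi = i * psi  (with hbar = 1), up to finite-difference error
assert np.allclose(comm, 1j * psi[1:-1], atol=1e-4)
```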

### Expectation values of operators on Ψ

The expectation value (equivalently the average or mean value) is the average of the measured values of an observable, for a particle in region R. The expectation value ${\displaystyle \langle {\hat {A}}\rangle }$ of the operator ${\displaystyle {\hat {A}}}$ is calculated from:[3]

${\displaystyle \langle {\hat {A}}\rangle =\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\langle \psi |{\hat {A}}|\psi \rangle .}$

This can be generalized to any function F of an operator:

${\displaystyle \langle F({\hat {A}})\rangle =\int _{R}\psi (\mathbf {r} )^{*}\left[F({\hat {A}})\psi (\mathbf {r} )\right]\mathrm {d} ^{3}\mathbf {r} =\langle \psi |F({\hat {A}})|\psi \rangle .}$

An example of F is the 2-fold action of ${\displaystyle {\hat {A}}}$ on ψ, i.e. applying the operator twice (squaring it):

${\displaystyle {\begin{aligned}&F({\hat {A}})={\hat {A}}^{2}\\&\Rightarrow \langle {\hat {A}}^{2}\rangle =\int _{R}\psi ^{*}\left(\mathbf {r} \right){\hat {A}}^{2}\psi \left(\mathbf {r} \right)\mathrm {d} ^{3}\mathbf {r} =\langle \psi \vert {\hat {A}}^{2}\vert \psi \rangle \\\end{aligned}}\,\!}$
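These integrals can be evaluated numerically in one dimension (a sketch; the Gaussian packet, grid and Riemann-sum quadrature are arbitrary choices). For the position operator, ⟨x⟩ and ⟨x²⟩ follow directly from the definition:

```python
import numpy as np

# Numerical expectation values for a normalized Gaussian wave packet
# psi(x) = pi^(-1/4) * exp(-x^2/2), using a simple Riemann sum.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)

norm = np.sum(psi * psi) * dx            # <psi|psi>
mean_x = np.sum(psi * x * psi) * dx      # <x>
mean_x2 = np.sum(psi * x**2 * psi) * dx  # <x^2>, i.e. F(A) = A^2 with A = x

assert np.isclose(norm, 1.0)     # wavefunction is normalized
assert np.isclose(mean_x, 0.0)   # symmetric packet: <x> = 0
assert np.isclose(mean_x2, 0.5)  # <x^2> = 1/2 for this Gaussian
```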

### Hermitian operators


The definition of a Hermitian operator is:[1]

${\displaystyle {\hat {A}}={\hat {A}}^{\dagger }}$

Following from this, in bra–ket notation:

${\displaystyle \langle \phi _{i}|{\hat {A}}|\phi _{j}\rangle =\langle \phi _{j}|{\hat {A}}|\phi _{i}\rangle ^{*}.}$

Important properties of Hermitian operators include:

• real eigenvalues,
• eigenvectors with different eigenvalues are orthogonal,
• the eigenvectors can be chosen to form a complete orthonormal basis.
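These properties can be checked numerically for a randomly generated Hermitian matrix (a sketch using NumPy; the matrix size and seed are arbitrary choices):

```python
import numpy as np

# A random Hermitian matrix: real eigenvalues, orthonormal eigenvectors.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2              # A = A^dagger by construction

vals, vecs = np.linalg.eigh(A)

assert np.allclose(A, A.conj().T)                      # Hermitian
assert np.allclose(vals.imag, 0.0)                     # real eigenvalues
assert np.allclose(vecs.conj().T @ vecs, np.eye(4))    # orthonormal eigenvectors
```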

### Operators in matrix mechanics

An operator can be written in matrix form to map one basis vector to another. Since the operator and the basis vectors are linear, the matrix is a linear transformation (also known as a transition matrix) between bases. Each basis element ${\displaystyle \phi _{j}}$ can be connected to another,[3] by the expression:

${\displaystyle A_{ij}=\langle \phi _{i}|{\hat {A}}|\phi _{j}\rangle ,}$

which is a matrix element:

${\displaystyle {\hat {A}}={\begin{pmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{pmatrix}}}$

As noted above, eigenfunctions of a Hermitian operator corresponding to different eigenvalues are orthogonal.[1] In matrix form, this means that real eigenvalues, corresponding to measurements, can be found, and that the orthogonal eigenvectors provide a suitable basis set to represent the state of the quantum system. The eigenvalues of the operator are evaluated in the same way as for a square matrix, by solving the characteristic polynomial:

${\displaystyle \det \left({\hat {A}}-a{\hat {I}}\right)=0,}$

where I is the n × n identity matrix; as an operator it corresponds to the identity operator. For a discrete basis:

${\displaystyle {\hat {I}}=\sum _{i}|\phi _{i}\rangle \langle \phi _{i}|}$

while for a continuous basis:

${\displaystyle {\hat {I}}=\int |\phi \rangle \langle \phi |d\phi }$
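For the discrete case, both the resolution of the identity and the matrix elements A_ij = ⟨φi|Â|φj⟩ can be verified numerically (a sketch; the standard basis of C³ and the particular symmetric matrix are arbitrary choices):

```python
import numpy as np

# Resolution of the identity in a discrete orthonormal basis:
# I = sum_i |phi_i><phi_i|, using the standard basis of C^3.
basis = [np.eye(3)[:, i] for i in range(3)]

I_hat = sum(np.outer(phi, phi.conj()) for phi in basis)
assert np.allclose(I_hat, np.eye(3))

# Matrix elements A_ij = <phi_i| A |phi_j> reproduce the matrix itself
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
A_ij = np.array([[phi_i.conj() @ A @ phi_j for phi_j in basis]
                 for phi_i in basis])
assert np.allclose(A_ij, A)
```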

### Inverse of an operator

A non-singular operator ${\displaystyle {\hat {A}}}$ has an inverse ${\displaystyle {\hat {A}}^{-1}}$ defined by:

${\displaystyle {\hat {A}}{\hat {A}}^{-1}={\hat {A}}^{-1}{\hat {A}}={\hat {I}}}$

If an operator has no inverse, it is a singular operator. In a finite-dimensional space, the determinant of a non-singular operator is non-zero:

${\displaystyle \det({\hat {A}})\neq 0}$

and hence it is zero for a singular operator.
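In finite dimensions these conditions are easy to verify directly (a sketch; the particular 2×2 matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Non-singular: det != 0, so the inverse exists
assert abs(np.linalg.det(A)) > 1e-12

A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))   # A A^-1 = I
assert np.allclose(A_inv @ A, np.eye(2))   # A^-1 A = I
```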

### Table of QM operators

The operators used in quantum mechanics are collected in the table below (see for example[1][4]). The bold-face vectors with circumflexes are not unit vectors; they are 3-vector operators, with all three spatial components taken together.

### Examples of applying quantum operators

The procedure for extracting information from a wave function is as follows. Consider the momentum p of a particle as an example. The momentum operator in one dimension is:

${\displaystyle {\hat {p}}=-i\hbar {\frac {\partial }{\partial x}}}$

Letting this act on ψ we obtain:

${\displaystyle {\hat {p}}\psi =-i\hbar {\frac {\partial }{\partial x}}\psi .}$

If ψ is an eigenfunction of ${\displaystyle {\hat {p}}}$, then the momentum eigenvalue p is the value of the particle's momentum, found from:

${\displaystyle -i\hbar {\frac {\partial }{\partial x}}\psi =p\psi .}$

For three dimensions the momentum operator uses the nabla operator to become:

${\displaystyle \mathbf {\hat {p}} =-i\hbar \nabla .}$

In Cartesian coordinates (using the standard Cartesian basis vectors ex, ey, ez) this can be written:

${\displaystyle \mathbf {e} _{\mathrm {x} }{\hat {p}}_{x}+\mathbf {e} _{\mathrm {y} }{\hat {p}}_{y}+\mathbf {e} _{\mathrm {z} }{\hat {p}}_{z}=-i\hbar \left(\mathbf {e} _{\mathrm {x} }{\frac {\partial }{\partial x}}+\mathbf {e} _{\mathrm {y} }{\frac {\partial }{\partial y}}+\mathbf {e} _{\mathrm {z} }{\frac {\partial }{\partial z}}\right),}$

that is:

${\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {\partial }{\partial x}},\quad {\hat {p}}_{y}=-i\hbar {\frac {\partial }{\partial y}},\quad {\hat {p}}_{z}=-i\hbar {\frac {\partial }{\partial z}}\,\!}$

The process of finding eigenvalues is the same. Since this is a vector operator equation, if ψ is an eigenfunction, then each component of the momentum operator has an eigenvalue corresponding to that component of momentum. Acting with ${\displaystyle \mathbf {\hat {p}} }$ on ψ gives:

${\displaystyle {\begin{aligned}{\hat {p}}_{x}\psi &=-i\hbar {\frac {\partial }{\partial x}}\psi =p_{x}\psi \\{\hat {p}}_{y}\psi &=-i\hbar {\frac {\partial }{\partial y}}\psi =p_{y}\psi \\{\hat {p}}_{z}\psi &=-i\hbar {\frac {\partial }{\partial z}}\psi =p_{z}\psi \\\end{aligned}}\,\!}$
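The one-dimensional eigenvalue equation can be illustrated numerically with a plane wave (a sketch; ħ is set to 1, and the wavenumber, grid and tolerance are arbitrary choices). A plane wave e^{ikx} is an eigenfunction of p̂ with eigenvalue ħk:

```python
import numpy as np

# A plane wave psi(x) = exp(i k x) is an eigenfunction of
# p = -i hbar d/dx with eigenvalue hbar*k (here hbar = 1).
k = 2.5
x = np.linspace(0, 2 * np.pi, 1001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x)

# Central finite difference for d/dx on interior points
dpsi = (psi[2:] - psi[:-2]) / (2 * dx)
p_psi = -1j * dpsi                      # apply -i d/dx

# p_psi ~= k * psi on the interior grid points
assert np.allclose(p_psi, k * psi[1:-1], atol=1e-3)
```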