# Rotation group SO(3)

In mechanics and geometry, the 3D rotation group is the group of all rotations about the origin of three-dimensional Euclidean space R3 under the operation of composition.[1] By definition, a rotation about the origin is a transformation that preserves the origin, Euclidean distance (so it is an isometry), and orientation (i.e. handedness of space). A distance-preserving transformation which reverses orientation is an improper rotation, that is a reflection or, in the general position, a rotoreflection. The origin in Euclidean space establishes a one-to-one correspondence between points and their coordinate vectors. Rotations about the origin can be thought of as magnitude-preserving linear transformations of Euclidean 3-dimensional vectors (whose vector space is also denoted as R3).

Composing two rotations results in another rotation; every rotation has a unique inverse rotation; and the identity map satisfies the definition of a rotation. Owing to the above properties (along with the associative property, which rotations obey), the set of all rotations is a group under composition. Moreover, the rotation group has a natural manifold structure for which the group operations are smooth; so it is in fact a Lie group. The rotation group is often denoted SO(3) (or, less ambiguously, SO(3, R)) for reasons explained below.

## Length and angle

Besides just preserving length, rotations also preserve the angles between vectors. This follows from the fact that the standard dot product between two vectors u and v can be written purely in terms of length:

${\displaystyle \mathbf {u} \cdot \mathbf {v} ={\tfrac {1}{2}}\left(\|\mathbf {u} +\mathbf {v} \|^{2}-\|\mathbf {u} \|^{2}-\|\mathbf {v} \|^{2}\right).}$

It follows that any length-preserving transformation in R3 preserves the dot product, and thus the angle between vectors. Rotations are often defined as linear transformations that preserve the inner product on R3, which is equivalent to requiring them to preserve length. See classical group for a treatment of this more general approach, where SO(3) appears as a special case.
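As a quick numerical sanity check (a sketch using NumPy; the particular rotation and vectors are arbitrary choices), one can verify that a rotation preserves the dot product, and that the polarization identity above recovers the dot product from lengths alone:

```python
import numpy as np

# A sample rotation: 40 degrees about the z-axis.
t = np.deg2rad(40.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([-2.0, 0.5, 1.0])

# Rotations preserve the dot product, hence lengths and angles.
assert np.isclose(np.dot(R @ u, R @ v), np.dot(u, v))

# The dot product is recoverable from lengths alone via the
# polarization identity quoted above.
lhs = 0.5 * (np.linalg.norm(u + v)**2 - np.linalg.norm(u)**2 - np.linalg.norm(v)**2)
assert np.isclose(lhs, np.dot(u, v))
```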

## Orthogonal and rotation matrices

Every rotation maps an orthonormal basis of R3 to another orthonormal basis. Like any linear transformation of finite-dimensional vector spaces, a rotation can always be represented by a matrix. Let R be a given rotation. With respect to the standard basis e1, e2, e3 of R3 the columns of R are given by (Re1, Re2, Re3). Since the standard basis is orthonormal, and since R preserves angles and length, the columns of R form another orthonormal basis. This orthonormality condition can be expressed in the form

${\displaystyle R^{\mathsf {T}}R=I,}$

where RT denotes the transpose of R and I is the 3 × 3 identity matrix. Matrices for which this property holds are called orthogonal matrices. The group of all 3 × 3 orthogonal matrices is denoted O(3), and consists of all proper and improper rotations.

In addition to preserving length, proper rotations must also preserve orientation. A matrix will preserve or reverse orientation according to whether the determinant of the matrix is positive or negative. For an orthogonal matrix R, note that det RT = det R together with RTR = I implies (det R)2 = 1, so that det R = ±1. The subgroup of orthogonal matrices with determinant +1 is called the special orthogonal group, denoted SO(3).

Thus every rotation can be represented uniquely by an orthogonal matrix with unit determinant. Moreover, since composition of rotations corresponds to matrix multiplication, the rotation group is isomorphic to the special orthogonal group SO(3).

Improper rotations correspond to orthogonal matrices with determinant −1, and they do not form a group because the product of two improper rotations is a proper rotation.
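These facts are easy to confirm numerically. The following sketch (using NumPy; the particular matrices are arbitrary choices) checks the orthogonality condition and the sign of the determinant for a proper rotation and a reflection, and that the product of two improper rotations is proper:

```python
import numpy as np

t = np.deg2rad(30.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])   # a proper rotation

P = np.diag([1.0, 1.0, -1.0])    # reflection in the xy-plane: improper

assert np.allclose(R.T @ R, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # det +1: in SO(3)
assert np.allclose(P.T @ P, np.eye(3))      # also orthogonal
assert np.isclose(np.linalg.det(P), -1.0)   # det -1: improper

# The product of two improper rotations is a proper rotation.
assert np.isclose(np.linalg.det(P @ P), 1.0)
```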

## Group structure

The rotation group is a group under function composition (or equivalently the product of linear transformations). It is a subgroup of the general linear group consisting of all invertible linear transformations of the real 3-space R3.[2]

Furthermore, the rotation group is nonabelian. That is, the order in which rotations are composed makes a difference. For example, a quarter turn around the positive x-axis followed by a quarter turn around the positive y-axis is a different rotation than the one obtained by first rotating around y and then x.
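The noncommutativity is easy to exhibit concretely. In this NumPy sketch (the helper names Rx and Ry are ours), two quarter turns are composed in both orders and the resulting matrices differ:

```python
import numpy as np

def Rx(t):
    """Rotation by angle t about the positive x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], dtype=float)

def Ry(t):
    """Rotation by angle t about the positive y-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], dtype=float)

q = np.pi / 2  # a quarter turn
# Rotating about x then y differs from rotating about y then x.
assert not np.allclose(Ry(q) @ Rx(q), Rx(q) @ Ry(q))
```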

The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.

## Axis of rotation

Every nontrivial proper rotation in 3 dimensions fixes a unique 1-dimensional linear subspace of R3 which is called the axis of rotation (this is Euler's rotation theorem). Each such rotation acts as an ordinary 2-dimensional rotation in the plane orthogonal to this axis. Since every 2-dimensional rotation can be represented by an angle φ, an arbitrary 3-dimensional rotation can be specified by an axis of rotation together with an angle of rotation about this axis. (Technically, one needs to specify an orientation for the axis and whether the rotation is taken to be clockwise or counterclockwise with respect to this orientation).

For example, counterclockwise rotation about the positive z-axis by angle φ is given by

${\displaystyle R_{z}(\varphi )={\begin{bmatrix}\cos \varphi &-\sin \varphi &0\\\sin \varphi &\cos \varphi &0\\0&0&1\end{bmatrix}}.}$

Given a unit vector n in R3 and an angle φ, let R(φ, n) represent a counterclockwise rotation about the axis through n (with orientation determined by n). Then

• R(0, n) is the identity transformation for any n
• R(φ, n) = R(−φ, −n)
• R(π + φ, n) = R(π − φ, −n).

Using these properties one can show that any rotation can be represented by a unique angle φ in the range 0 ≤ φ ≤ π and a unit vector n such that

• n is arbitrary if φ = 0
• n is unique if 0 < φ < π
• n is unique up to a sign if φ = π (that is, the rotations R(π, ±n) are identical).
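In the generic case 0 < φ < π, axis and angle can be recovered from a rotation matrix: the angle from the trace, via tr R = 1 + 2 cos φ, and the axis from the antisymmetric part of R, which equals sin φ times the skew-symmetric matrix built from n. A NumPy sketch (the helper name axis_angle is ours; the degenerate cases φ = 0 and φ = π are deliberately not handled):

```python
import numpy as np

def axis_angle(R):
    """Recover (n, phi) with 0 < phi < pi from a rotation matrix R.
    The degenerate cases phi = 0 and phi = pi are not handled here."""
    phi = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    A = (R - R.T) / 2.0   # equals sin(phi) times the skew matrix of n
    n = np.array([A[2, 1], A[0, 2], A[1, 0]]) / np.sin(phi)
    return n, phi

t = 1.0
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0, 0.0, 1.0]])
n, phi = axis_angle(Rz)
assert np.isclose(phi, 1.0)
assert np.allclose(n, [0.0, 0.0, 1.0])
```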

## Topology

The Lie group SO(3) is diffeomorphic to the real projective space RP3.

Consider the solid ball in R3 of radius π (that is, all points of R3 of distance π or less from the origin). Given the above, for every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The identity rotation corresponds to the point at the center of the ball. Rotations through angles between 0 and −π correspond to points on the same axis at the same distance from the origin, but on the opposite side of the origin. The one remaining issue is that the two rotations through π and through −π are the same. So we identify (or "glue together") antipodal points on the surface of the ball. After this identification, we arrive at a topological space homeomorphic to the rotation group.

Indeed, the ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space RP3, so the latter can also serve as a topological model for the rotation group.

These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with antipodal surface points identified, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the z-axis starting and ending at the identity rotation (i.e. a series of rotations through an angle φ where φ runs from 0 to 2π).

Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems. The Balinese plate trick and similar tricks demonstrate this practically.

The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin-statistics theorem.

The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary group SU(2); it is also diffeomorphic to the unit 3-sphere S3 and can be understood as the group of versors (quaternions with absolute value 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotations. The map from S3 onto SO(3) that identifies antipodal points of S3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a two-to-one covering map.
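The two-to-one nature of the covering can be checked concretely with quaternions: a unit quaternion q acts on a vector v by q v q*, and q and −q give the same rotation. A NumPy sketch (the helper names qmul and rotate are ours):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    """Rotate vector v by the unit quaternion q via q v q*."""
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate
    return qmul(qmul(q, np.concatenate([[0.0], v])), qc)[1:]

# Unit quaternion for a rotation by angle t about the z-axis.
t = 1.2
q = np.array([np.cos(t / 2), 0.0, 0.0, np.sin(t / 2)])

v = np.array([1.0, 0.0, 0.0])
assert np.allclose(rotate(q, v), [np.cos(t), np.sin(t), 0.0])
# Antipodal quaternions give the same rotation: the covering is 2:1.
assert np.allclose(rotate(-q, v), rotate(q, v))
```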

## Connection between SO(3) and SU(2)

*(Figure: the sphere S of radius 1/2 and the plane M, coordinatized by (ξ, η), shown in cross section.)*

The points P on the sphere

${\displaystyle \mathbf {S} =\left\{(x,y,z)\in \mathbb {R} ^{3}:x^{2}+y^{2}+z^{2}={\tfrac {1}{4}}\right\}}$

can, barring the north pole N = (0, 0, 1/2), be put into one-to-one correspondence with points S(P) = P′ on the plane M defined by z = −1/2, see figure. The map S is called stereographic projection. Let the coordinates on M be (ξ, η). The line L passing through N and P can be written

${\displaystyle L=N+t(N-P)=\left(0,0,{\tfrac {1}{2}}\right)+t\left(\left(0,0,{\tfrac {1}{2}}\right)-(x,y,z)\right),\quad t\in \mathbb {R} .}$

Demanding that the z-coordinate of N + t(N − P) equals −1/2, one finds t = 1/(z − 1/2), hence

${\displaystyle S:\mathbf {S} \rightarrow M;P\mapsto P';(x,y,z)\mapsto (\xi ,\eta )=\left({\frac {x}{{\frac {1}{2}}-z}},{\frac {y}{{\frac {1}{2}}-z}}\right)\equiv \zeta =\xi +i\eta ,}$

where, for later convenience, the plane M is identified with the complex plane ℂ. For the inverse, write L as

${\displaystyle L=N+s(P'-N)=\left(0,0,{\tfrac {1}{2}}\right)+s\left(\left(\xi ,\eta ,-{\tfrac {1}{2}}\right)-\left(0,0,{\tfrac {1}{2}}\right)\right),}$

and demand x2 + y2 + z2 = 1/4 to find s = 1/(1 + ξ2 + η2) and thus

${\displaystyle S^{-1}:M\rightarrow \mathbf {S} ;P'\mapsto P;(\xi ,\eta )\mapsto (x,y,z)=\left({\frac {\xi }{1+\xi ^{2}+\eta ^{2}}},{\frac {\eta }{1+\xi ^{2}+\eta ^{2}}},{\frac {-1+\xi ^{2}+\eta ^{2}}{2+2\xi ^{2}+2\eta ^{2}}}\right).}$

If g ∈ SO(3) is a rotation, then it will take points on S to points on S by its standard action Πs(g) on the embedding space ℝ3. By composing this action with S one obtains a transformation S ∘ Πs(g) ∘ S−1 of M,

${\displaystyle \zeta \mapsto P=S^{-1}(\zeta )\mapsto \Pi _{s}(g)P=gP\mapsto S(gP)\equiv \Pi _{u}(g)\zeta =\zeta '.}$

Thus Πu(g) is a transformation of ℂ associated to the transformation Πs(g) of ℝ3.
It turns out that g ∈ SO(3) represented in this way by Πu(g) can be expressed as a matrix Πu(g) ∈ SU(2) (where the notation is recycled to use the same name for the matrix as for the transformation of ℂ it represents). To identify this matrix, consider first a rotation gφ about the z-axis through an angle φ,

${\displaystyle {\begin{aligned}x'&=x\cos \varphi -y\sin \varphi ,\\y'&=x\sin \varphi +y\cos \varphi ,\\z'&=z.\end{aligned}}}$

Hence

${\displaystyle \zeta '={\frac {x'+iy'}{{\frac {1}{2}}-z'}}={\frac {e^{i\varphi }(x+iy)}{{\frac {1}{2}}-z}}=e^{i\varphi }\zeta ={\frac {e^{i\varphi }\zeta +0}{0\zeta +1}},}$

which, unsurprisingly, is a rotation in the complex plane. In an analogous way, if gθ is a rotation about the x-axis through an angle θ, then

${\displaystyle w'=e^{i\theta }w,\quad w={\frac {y+iz}{{\frac {1}{2}}-x}},}$

which, after a little algebra, becomes

${\displaystyle \zeta '={\frac {\cos {\frac {\theta }{2}}\zeta +i\sin {\frac {\theta }{2}}}{i\sin {\frac {\theta }{2}}\zeta +\cos {\frac {\theta }{2}}}}.}$

These two rotations, gφ, gθ, thus correspond to bilinear transforms of ℝ2 ≃ ℂ ≃ M, namely, they are examples of Möbius transformations. A general Möbius transformation is given by

${\displaystyle \zeta '={\frac {\alpha \zeta +\beta }{\gamma \zeta +\delta }},\quad \alpha \delta -\beta \gamma \neq 0.}$

The rotations gφ, gθ generate all of SO(3), and the composition rules of the Möbius transformations show that any composition of gφ, gθ translates to the corresponding composition of Möbius transformations. The Möbius transformations can be represented by matrices

${\displaystyle \left({\begin{matrix}\alpha &\beta \\\gamma &\delta \end{matrix}}\right),\quad \alpha \delta -\beta \gamma =1,}$

since a common factor of α, β, γ, δ cancels. For the same reason, the matrix is not uniquely defined, since multiplication by −I has no effect on either the determinant or the Möbius transformation.
The composition law of Möbius transformations follows that of the corresponding matrices. The conclusion is that each Möbius transformation corresponds to two matrices g, −g ∈ SL(2, ℂ). Using this correspondence one may write

${\displaystyle {\begin{aligned}\Pi _{u}(g_{\varphi })&=\Pi _{u}\left[\left({\begin{matrix}\cos \varphi &-\sin \varphi &0\\\sin \varphi &\cos \varphi &0\\0&0&1\end{matrix}}\right)\right]=\pm \left({\begin{matrix}e^{i{\frac {\varphi }{2}}}&0\\0&e^{-i{\frac {\varphi }{2}}}\end{matrix}}\right),\\\Pi _{u}(g_{\theta })&=\Pi _{u}\left[\left({\begin{matrix}1&0&0\\0&\cos \theta &-\sin \theta \\0&\sin \theta &\cos \theta \end{matrix}}\right)\right]=\pm \left({\begin{matrix}\cos {\frac {\theta }{2}}&i\sin {\frac {\theta }{2}}\\i\sin {\frac {\theta }{2}}&\cos {\frac {\theta }{2}}\end{matrix}}\right).\end{aligned}}}$

These matrices are unitary, and thus Πu(SO(3)) ⊂ SU(2) ⊂ SL(2, ℂ). In terms of Euler angles,[nb 1] writing a general rotation as the composition

${\displaystyle g(\varphi ,\theta ,\psi )=g_{\varphi }g_{\theta }g_{\psi },}$

one has[3]

${\displaystyle \Pi _{u}(g(\varphi ,\theta ,\psi ))=\pm \left({\begin{matrix}\cos {\frac {\theta }{2}}e^{i{\frac {\varphi +\psi }{2}}}&i\sin {\frac {\theta }{2}}e^{i{\frac {\varphi -\psi }{2}}}\\i\sin {\frac {\theta }{2}}e^{-i{\frac {\varphi -\psi }{2}}}&\cos {\frac {\theta }{2}}e^{-i{\frac {\varphi +\psi }{2}}}\end{matrix}}\right).}$

For the converse, consider a general matrix

${\displaystyle \pm \Pi _{u}(g_{\alpha ,\beta })=\pm \left({\begin{matrix}\alpha &\beta \\-{\overline {\beta }}&{\overline {\alpha }}\end{matrix}}\right)\in \mathrm {SU} (2).}$

Make the substitutions

${\displaystyle {\begin{aligned}\cos {\frac {\theta }{2}}&=|\alpha |,\quad \sin {\frac {\theta }{2}}=|\beta |,\quad (0\leq \theta \leq \pi ),\\{\frac {\varphi +\psi }{2}}&=\arg \alpha ,\quad {\frac {\psi -\varphi }{2}}=\arg \beta .\end{aligned}}}$

With the substitutions, Π(gα, β) assumes the Euler-angle form displayed above, which corresponds under Πu to a rotation g(φ, θ, ψ) with the same φ, θ, ψ.
In terms of the complex parameters α, β,

${\displaystyle g_{\alpha ,\beta }=\left({\begin{matrix}{\frac {1}{2}}(\alpha ^{2}-\beta ^{2}+{\overline {\alpha ^{2}}}-{\overline {\beta ^{2}}})&{\frac {i}{2}}(-\alpha ^{2}-\beta ^{2}+{\overline {\alpha ^{2}}}+{\overline {\beta ^{2}}})&-\alpha \beta -{\overline {\alpha }}{\overline {\beta }}\\{\frac {i}{2}}(\alpha ^{2}-\beta ^{2}-{\overline {\alpha ^{2}}}+{\overline {\beta ^{2}}})&{\frac {1}{2}}(\alpha ^{2}+\beta ^{2}+{\overline {\alpha ^{2}}}+{\overline {\beta ^{2}}})&-i(\alpha \beta -{\overline {\alpha }}{\overline {\beta }})\\\alpha {\overline {\beta }}+{\overline {\alpha }}\beta &i(-\alpha {\overline {\beta }}+{\overline {\alpha }}\beta )&\alpha {\overline {\alpha }}-\beta {\overline {\beta }}\end{matrix}}\right).}$

To verify this, substitute for α, β their expressions in the Euler angles given above. After some manipulation, the matrix assumes the Euler-angle form of the rotation g(φ, θ, ψ). It is clear from the explicit form in terms of Euler angles that the map

p: SU(2) → SO(3); ±Πu(gα,β) ↦ gα,β

just described is a smooth, 2:1 and onto group homomorphism. It is hence an explicit description of the universal covering map of SO(3) from the universal covering group SU(2).

### Quaternions of unit norm

SU(2) is isomorphic to the quaternions of unit norm via a map given by

${\displaystyle q=a\mathrm {1} +b\mathrm {i} +c\mathrm {j} +d\mathrm {k} =\alpha +j\beta \leftrightarrow {\begin{bmatrix}\alpha &-{\overline {\beta }}\\\beta &{\overline {\alpha }}\end{bmatrix}}=U,\quad q\in \mathbb {H} ,\quad a,b,c,d\in \mathbb {R} ,\quad \alpha ,\beta \in \mathbb {C} ,\quad U\in \mathrm {SU} (2).}$[4]

This means that there is a 2:1 homomorphism from quaternions of unit norm to SO(3).
Concretely, a unit quaternion q, with

${\displaystyle {\begin{aligned}q&=w+\mathbf {i} x+\mathbf {j} y+\mathbf {k} z,\\1&=w^{2}+x^{2}+y^{2}+z^{2},\end{aligned}}}$

is mapped to the rotation matrix

${\displaystyle Q={\begin{bmatrix}1-2y^{2}-2z^{2}&2xy-2zw&2xz+2yw\\2xy+2zw&1-2x^{2}-2z^{2}&2yz-2xw\\2xz-2yw&2yz+2xw&1-2x^{2}-2y^{2}\end{bmatrix}}.}$

This is a rotation around the vector (x, y, z) by an angle 2θ, where cos θ = w and |sin θ| = ‖(x, y, z)‖. The proper sign for sin θ is implied, once the signs of the axis components are fixed. The 2:1-nature is apparent since both q and −q map to the same Q.

## Lie algebra

Associated with every Lie group is its Lie algebra, a linear space of the same dimension as the Lie group, closed under a bilinear alternating product called the Lie bracket. The Lie algebra of SO(3) is denoted by so(3) and consists of all skew-symmetric 3 × 3 matrices. This may be seen by differentiating the orthogonality condition, ATA = I, A ∈ SO(3).[nb 2] The Lie bracket of two elements of so(3) is, as for the Lie algebra of every matrix group, given by the matrix commutator, [A1, A2] = A1A2 − A2A1, which is again a skew-symmetric matrix. The Lie algebra bracket captures the essence of the Lie group product in a sense made precise by the Baker–Campbell–Hausdorff formula.
A convenient basis for so(3) as a 3-dimensional vector space is

${\displaystyle L_{\mathbf {x} }={\begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix}},\quad L_{\mathbf {y} }={\begin{bmatrix}0&0&1\\0&0&0\\-1&0&0\end{bmatrix}},\quad L_{\mathbf {z} }={\begin{bmatrix}0&-1&0\\1&0&0\\0&0&0\end{bmatrix}}.}$

The commutation relations of these basis elements are

${\displaystyle [L_{\mathbf {x} },L_{\mathbf {y} }]=L_{\mathbf {z} },\quad [L_{\mathbf {z} },L_{\mathbf {x} }]=L_{\mathbf {y} },\quad [L_{\mathbf {y} },L_{\mathbf {z} }]=L_{\mathbf {x} }.}$

One can conveniently identify any matrix in this Lie algebra with a vector in ℝ3,[5]

${\displaystyle {\begin{aligned}{\boldsymbol {\omega }}&=(x,y,z)\in \mathbb {R} ^{3},\\{\boldsymbol {\tilde {\omega }}}&={\boldsymbol {\omega \cdot L}}=xL_{\mathbf {x} }+yL_{\mathbf {y} }+zL_{\mathbf {z} }={\begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix}}\in {\mathfrak {so}}(3).\end{aligned}}}$

This identification is sometimes called the hat-map.[6] Under this identification, the so(3) bracket corresponds in ℝ3 to the cross product,

${\displaystyle [{\tilde {\mathbf {u} }},{\tilde {\mathbf {v} }}]={\widetilde {\mathbf {u} \times \mathbf {v} }},}$

hence ℝ3 is itself a Lie algebra with the Lie bracket being the cross product. The matrix identified with a vector u has the property that

${\displaystyle {\tilde {\mathbf {u} }}\mathbf {v} =\mathbf {u} \times \mathbf {v} ,}$

where ordinary matrix multiplication is implied on the left hand side. This implies that u is in the null space of the skew-symmetric matrix with which it is identified, because u × u = 0.

### Isomorphism with su(2)

The Lie algebras so(3) and su(2) are isomorphic.
One basis for su(2) is given by

${\displaystyle t_{1}={\frac {1}{2}}{\begin{bmatrix}0&-i\\-i&0\end{bmatrix}},\quad t_{2}={\frac {1}{2}}{\begin{bmatrix}0&-1\\1&0\end{bmatrix}},\quad t_{3}={\frac {1}{2}}{\begin{bmatrix}-i&0\\0&i\end{bmatrix}}.}$

These are related to the Pauli matrices by ti ↔ (1/2i)σi. The Pauli matrices abide by the physicists' convention for Lie algebras. In that convention, Lie algebra elements are multiplied by i, the exponential map (below) is defined with an extra factor of i in the exponent, and the structure constants remain the same, but the definition of them acquires a factor of −i. Likewise, commutation relations acquire a factor of i. The commutation relations for the ti are

${\displaystyle [t_{i},t_{j}]=\epsilon _{ijk}t_{k},}$

where εijk is the totally anti-symmetric symbol with ε123 = 1. The isomorphism between so(3) and su(2) can be set up in several ways. For later convenience, so(3) and su(2) are identified by mapping

${\displaystyle L_{x}\leftrightarrow t_{1},\quad L_{y}\leftrightarrow t_{2},\quad L_{z}\leftrightarrow t_{3},}$

and extending by linearity.
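Both identifications are easy to verify numerically. The following NumPy sketch (the helper name hat is ours) checks that the hat-map turns the matrix commutator into the cross product, that hat(u) acts on vectors as u × (·), and that the su(2) basis ti obeys the same brackets as Lx, Ly, Lz:

```python
import numpy as np

def hat(w):
    """Hat-map: identify w = (x, y, z) in R^3 with a matrix in so(3)."""
    x, y, z = w
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

u = np.array([0.3, -1.0, 2.0])
v = np.array([1.5, 0.2, -0.7])

# The so(3) bracket corresponds to the cross product under the hat-map;
# hat(u) acts on vectors as u x (.), and u lies in its own null space.
assert np.allclose(hat(u) @ hat(v) - hat(v) @ hat(u), hat(np.cross(u, v)))
assert np.allclose(hat(u) @ v, np.cross(u, v))
assert np.allclose(hat(u) @ u, np.zeros(3))

# The su(2) basis t_i satisfies the same brackets as L_x, L_y, L_z.
t1 = 0.5 * np.array([[0, -1j], [-1j, 0]])
t2 = 0.5 * np.array([[0, -1], [1, 0]], dtype=complex)
t3 = 0.5 * np.array([[-1j, 0], [0, 1j]])
assert np.allclose(t1 @ t2 - t2 @ t1, t3)
assert np.allclose(t2 @ t3 - t3 @ t2, t1)
```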

## Exponential map

Since SO(3) is a matrix Lie group, its exponential map is defined using the standard matrix exponential series,

${\displaystyle \exp \colon {\mathfrak {so}}(3)\to SO(3);A\mapsto e^{A}=\sum _{k=0}^{\infty }{\frac {1}{k!}}A^{k}=I+A+{\tfrac {1}{2}}A^{2}+\cdots +{\tfrac {1}{k!}}A^{k}+\cdots .}$

For any skew-symmetric matrix A ∈ so(3), eA is always in SO(3). The level of difficulty of the proof depends on how the Lie algebra of a matrix group is defined. One definition takes the Lie algebra to be the set of matrices {A ∈ Mn(ℝ) : etA ∈ SO(3) for all t}, in which case it is trivial. Another uses as a definition derivatives of smooth curve segments in SO(3) through the identity, taken at the identity, in which case it is harder.[7]

For a fixed A ≠ 0, etA, −∞ < t < ∞ is a one-parameter subgroup along a geodesic in SO(3). That this gives a one-parameter subgroup follows directly from properties of the exponential map.[8]

The exponential map provides a diffeomorphism between a neighborhood of the origin in so(3) and a neighborhood of the identity in SO(3).[9] For a proof, see Closed subgroup theorem.

The exponential map is surjective. This follows from the fact that every R ∈ SO(3) leaves an axis fixed (Euler's rotation theorem), and is conjugate to a block diagonal matrix of the form

${\displaystyle D=\left({\begin{matrix}\cos \theta &-\sin \theta &0\\\sin \theta &\cos \theta &0\\0&0&1\end{matrix}}\right)=e^{\theta L_{z}},}$

such that R = BDB−1, and that

${\displaystyle Be^{\theta L_{z}}B^{-1}=e^{B\theta L_{z}B^{-1}},}$

together with the fact that so(3) is closed under the adjoint action of SO(3), meaning that BθLzB−1so(3).

As shown above, every element A ∈ so(3) is associated with a vector ω = θ u, where u = (x,y,z) is a unit magnitude vector. Since u is in the null space of A, if one now rotates to a new basis, through some other orthogonal matrix O, with u as the z-axis, the final column and row of the rotation matrix in the new basis will be zero.

Thus, we know in advance from the formula for the exponential that exp(OAOT) must leave u fixed. It is mathematically impossible to supply a straightforward formula for such a basis as a function of u, because its existence would violate the hairy ball theorem; but direct exponentiation is possible, and yields

{\displaystyle {\begin{aligned}\exp({\tilde {\boldsymbol {\omega }}})&{}=\exp \left({\begin{bmatrix}0&-z\theta &y\theta \\z\theta &0&-x\theta \\-y\theta &x\theta &0\end{bmatrix}}\right)\\&{}={\boldsymbol {I}}+2cs~{\boldsymbol {u\cdot L}}+2s^{2}~({\boldsymbol {u\cdot L}})^{2}\\&{}={\begin{bmatrix}2(x^{2}-1)s^{2}+1&2xys^{2}-2zcs&2xzs^{2}+2ycs\\2xys^{2}+2zcs&2(y^{2}-1)s^{2}+1&2yzs^{2}-2xcs\\2xzs^{2}-2ycs&2yzs^{2}+2xcs&2(z^{2}-1)s^{2}+1\end{bmatrix}},\end{aligned}}}

where c = cos(θ/2), s = sin(θ/2). This is recognized as a matrix for a rotation around axis u by the angle θ: cf. Rodrigues' rotation formula.
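The closed form can be checked against the defining series. In this NumPy sketch (the helper names are ours; expm_series is a truncated exponential series, adequate for the small matrices used here), the formula above reproduces the series exponential and leaves the axis u fixed:

```python
import numpy as np

def hat(w):
    """Skew matrix u.L of the vector w = (x, y, z)."""
    x, y, z = w
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def expm_series(A, terms=30):
    """Truncated matrix exponential series (fine for small inputs)."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def exp_so3(u, theta):
    """Closed form above: I + 2cs (u.L) + 2s^2 (u.L)^2."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    U = hat(u)
    return np.eye(3) + 2 * c * s * U + 2 * s**2 * (U @ U)

u = np.array([1.0, 2.0, 2.0]) / 3.0       # unit axis
theta = 0.8
R = exp_so3(u, theta)
assert np.allclose(R, expm_series(theta * hat(u)))
assert np.allclose(R @ u, u)              # the axis is fixed
assert np.isclose(np.trace(R), 1 + 2 * np.cos(theta))
```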

## Logarithm map

Given R ∈ SO(3), let

${\displaystyle A={\frac {R-R^{\mathrm {T} }}{2}}}$

denote the antisymmetric part. Then the logarithm of R is given by[10]

${\displaystyle \log R={\frac {\sin ^{-1}||A||}{||A||}}A.}$

The proof is found by examination of Rodrigues' formula on the mixed form

${\displaystyle e^{X}=I+{\frac {\sin \theta }{\theta }}X+2{\frac {\sin ^{2}{\frac {\theta }{2}}}{\theta ^{2}}}X^{2},\quad \theta =||X||,}$

observing that the first and last term on the right hand side are symmetric.
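A NumPy sketch of the logarithm map (the helper names are ours; the arcsin-based formula is applied only for rotation angles below π/2, where the principal branch of arcsin is the right one):

```python
import numpy as np

def hat(w):
    x, y, z = w
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def exp_so3(u, theta):
    """Rodrigues closed form for exp(theta * hat(u)), u a unit vector."""
    U = hat(u)
    return np.eye(3) + np.sin(theta) * U + (1.0 - np.cos(theta)) * (U @ U)

def log_so3(R):
    """log R = (arcsin||A|| / ||A||) A with A the antisymmetric part of R.
    Here ||A|| is the spectral norm, equal to sin(theta); the arcsin branch
    is correct only for rotation angles below pi/2."""
    A = (R - R.T) / 2.0
    nA = np.sqrt(-np.trace(A @ A) / 2.0)   # spectral norm of A = sin(theta)
    return (np.arcsin(nA) / nA) * A

u = np.array([2.0, -1.0, 2.0]) / 3.0
theta = 0.7
assert np.allclose(log_so3(exp_so3(u, theta)), theta * hat(u))
```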

## Baker–Campbell–Hausdorff formula

Suppose X and Y in the Lie algebra are given. Their exponentials, exp(X) and exp(Y), are rotation matrices, which can be multiplied. Since the exponential map is a surjection, for some Z in the Lie algebra, exp(Z) = exp(X) exp(Y), and one may tentatively write

${\displaystyle Z=C(X,Y),}$

for C(X, Y) some expression in X and Y. When exp(X) and exp(Y) commute, then Z = X + Y, mimicking the behavior of complex exponentiation.

The general case is given by the more elaborate BCH formula, a series expansion of nested Lie brackets.[11] For matrices, the Lie bracket is the same operation as the commutator, which monitors lack of commutativity in multiplication. This general expansion unfolds as follows,[nb 3]

${\displaystyle Z=C(X,Y)=X+Y+{\tfrac {1}{2}}[X,Y]+{\tfrac {1}{12}}[X,[X,Y]]-{\tfrac {1}{12}}[Y,[X,Y]]+\cdots ~.}$
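For small X and Y the truncated series above is already an excellent approximation, which can be checked numerically (a NumPy sketch; the helper names and the tolerance are our choices):

```python
import numpy as np

def hat(w):
    x, y, z = w
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def expm(A, terms=30):
    """Truncated matrix exponential series (fine for small inputs)."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def comm(A, B):
    return A @ B - B @ A

# Two small rotations about different axes.
X = 0.02 * hat([1.0, 0.0, 0.0])
Y = 0.02 * hat([0.0, 1.0, 0.0])

# BCH series truncated after the terms displayed above.
Z = (X + Y + comm(X, Y) / 2
     + comm(X, comm(X, Y)) / 12 - comm(Y, comm(X, Y)) / 12)

# exp(Z) agrees with exp(X) exp(Y) up to the neglected 4th-order terms.
assert np.allclose(expm(Z), expm(X) @ expm(Y), atol=1e-6)
```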

The infinite expansion in the BCH formula for SO(3) reduces to a compact form,

${\displaystyle Z=\alpha X+\beta Y+\gamma [X,Y],}$

for suitable trigonometric function coefficients (α, β, γ). The (α, β, γ) are given by

${\displaystyle \alpha =\phi \cot(\phi /2)~\gamma ,\qquad \beta =\theta \cot(\theta /2)~\gamma ,\qquad \gamma ={\frac {\sin ^{-1}d}{d}}{\frac {c}{\theta \phi }}~,}$

where

{\displaystyle {\begin{aligned}c&={\frac {1}{2}}\sin \theta \sin \phi -2\sin ^{2}{\frac {\theta }{2}}\sin ^{2}{\frac {\phi }{2}}\cos(\angle (u,v)),\quad a=c\cot(\phi /2),\quad b=c\cot(\theta /2),\\d&={\sqrt {a^{2}+b^{2}+2ab\cos(\angle (u,v))+c^{2}\sin ^{2}(\angle (u,v))}}~~,\end{aligned}}}

for

${\displaystyle \theta ={\frac {1}{\sqrt {2}}}||X||~,\quad \phi ={\frac {1}{\sqrt {2}}}||Y||~,\quad \angle (u,v)=\cos ^{-1}{\frac {\langle X,Y\rangle }{||X||||Y||}}~.}$

The inner product is the Hilbert–Schmidt inner product and the norm is the associated norm. Under the hat-isomorphism,

${\displaystyle \langle u,v\rangle ={\frac {1}{2}}\operatorname {Tr} X^{\mathrm {T} }Y,}$

which explains the factors 1/√2 in the definitions of θ and φ. This factor drops out in the expression for the angle.

It is worthwhile to write this composite rotation generator as

${\displaystyle \alpha X+\beta Y+\gamma [X,Y]~{\underset {{\mathfrak {so}}(3)}{=}}~X+Y+{\tfrac {1}{2}}[X,Y]+{\tfrac {1}{12}}[X,[X,Y]]-{\tfrac {1}{12}}[Y,[X,Y]]+\cdots ,}$

to emphasize that this is a Lie algebra identity.

The above identity holds for all faithful representations of so(3). The kernel of a Lie algebra homomorphism is an ideal, but so(3), being simple, has no nontrivial ideals and all nontrivial representations are hence faithful. It holds in particular in the doublet or spinor representation. The same explicit formula thus follows in a simpler way through Pauli matrices, cf. the 2×2 derivation for SU(2).

The Pauli vector version of the same BCH formula is the somewhat simpler group composition law of SU(2),

${\displaystyle e^{ia'({\hat {u}}\cdot {\vec {\sigma }})}e^{ib'({\hat {v}}\cdot {\vec {\sigma }})}=\exp \left({\frac {c'}{\sin c'}}\sin a'\sin b'~\left((i\cot b'{\hat {u}}+i\cot a'{\hat {v}})\cdot {\vec {\sigma }}+{\frac {1}{2}}[i{\hat {u}}\cdot {\vec {\sigma }},i{\hat {v}}\cdot {\vec {\sigma }}]\right)\right)~,}$

where

${\displaystyle \cos c'=\cos a'\cos b'-{\hat {u}}\cdot {\hat {v}}\sin a'\sin b'~,}$

the spherical law of cosines. (Note that a′, b′, c′ are angles, not the a, b, c above.)

This is manifestly of the same format as above,

${\displaystyle Z=\alpha 'X+\beta 'Y+\gamma '[X,Y],}$

with

${\displaystyle X=ia'{\hat {u}}\cdot \mathbf {\sigma } ,\quad Y=ib'{\hat {v}}\cdot \mathbf {\sigma } ~\in {\mathfrak {su}}(2),}$

so that

{\displaystyle {\begin{aligned}\alpha '&={\frac {c'}{\sin c'}}{\frac {\sin a'}{a'}}\cos b'\\\beta '&={\frac {c'}{\sin c'}}{\frac {\sin b'}{b'}}\cos a'\\\gamma '&={\frac {1}{2}}{\frac {c'}{\sin c'}}{\frac {\sin a'}{a'}}{\frac {\sin b'}{b'}}~.\end{aligned}}}

For uniform normalization of the generators in the Lie algebra involved, express the Pauli matrices in terms of the t-matrices, σ → 2i t, so that

${\displaystyle a'\mapsto -{\frac {\theta }{2}},\quad b'\mapsto -{\frac {\phi }{2}}.}$

To verify that these are the same coefficients as above, compute the ratios of the coefficients,

{\displaystyle {\begin{aligned}{\frac {\alpha '}{\gamma '}}&={\theta }\cot {\frac {\theta }{2}}&={\frac {\alpha }{\gamma }}\\{\frac {\beta '}{\gamma '}}&=\phi \cot {\frac {\phi }{2}}&={\frac {\beta }{\gamma }}~.\end{aligned}}}

Finally, γ = γ′ given the identity d = sin 2c′.

For the general n × n case, one might use Ref.[12]

## Infinitesimal rotations

The matrices in the Lie algebra are not themselves rotations; the skew-symmetric matrices are derivatives. An actual "differential rotation", or infinitesimal rotation matrix has the form

${\displaystyle I+A\,d\theta ~,}$

where dθ is vanishingly small and A ∈ so(3).

These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals.[13] To understand what this means, one considers

${\displaystyle dA_{\mathbf {x}}={\begin{bmatrix}1&0&0\\0&1&-d\theta \\0&d\theta &1\end{bmatrix}}~.}$

First, test the orthogonality condition, QTQ = I. The product is

${\displaystyle dA_{\mathbf {x}}^{T}\,dA_{\mathbf {x}}={\begin{bmatrix}1&0&0\\0&1+d\theta ^{2}&0\\0&0&1+d\theta ^{2}\end{bmatrix}},}$

differing from an identity matrix by second order infinitesimals, discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix.

Next, examine the square of the matrix,

${\displaystyle dA_{\mathbf {x}}^{2}={\begin{bmatrix}1&0&0\\0&1-d\theta ^{2}&-2d\theta \\0&2d\theta &1-d\theta ^{2}\end{bmatrix}}~.}$

Again discarding second order effects, note that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation,

${\displaystyle dA_{\mathbf {y}}={\begin{bmatrix}1&0&d\phi \\0&1&0\\-d\phi &0&1\end{bmatrix}}.}$

Compare the products dAxdAy to dAydAx,

{\displaystyle {\begin{aligned}dA_{\mathbf {x}}\,dA_{\mathbf {y}}&{}={\begin{bmatrix}1&0&d\phi \\d\theta \,d\phi &1&-d\theta \\-d\phi &d\theta &1\end{bmatrix}}\\dA_{\mathbf {y}}\,dA_{\mathbf {x}}&{}={\begin{bmatrix}1&d\theta \,d\phi &d\phi \\0&1&-d\theta \\-d\phi &d\theta &1\end{bmatrix}}.\\\end{aligned}}}

Since dθ dφ is second order, we discard it: thus, to first order, multiplication of infinitesimal rotation matrices is commutative. In fact,

${\displaystyle dA_{\mathbf {x}}\,dA_{\mathbf {y}}=dA_{\mathbf {y}}\,dA_{\mathbf {x}},\,\!}$

again to first order. In other words, the order in which infinitesimal rotations are applied is irrelevant.
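This first-order commutativity is easy to see numerically (a NumPy sketch using the matrices dAx and dAy above, with arbitrarily chosen small angles):

```python
import numpy as np

dt, dp = 1e-6, 1e-6  # d_theta, d_phi: vanishingly small angles
dAx = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, -dt], [0.0, dt, 1.0]])
dAy = np.array([[1.0, 0.0, dp], [0.0, 1.0, 0.0], [-dp, 0.0, 1.0]])

# Orthogonal up to second-order terms:
assert np.allclose(dAx.T @ dAx, np.eye(3), atol=1e-11)

# The two products differ only in second-order entries of size dt*dp:
diff = dAx @ dAy - dAy @ dAx
assert np.abs(diff).max() <= 2 * dt * dp
```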

This useful fact makes, for example, derivation of rigid body rotation relatively simple. But one must always be careful to distinguish (the first order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from Lie algebra elements. When contrasting the behavior of finite rotation matrices in the BCH formula above with that of infinitesimal rotation matrices, where all the commutator terms are second order infinitesimals, one finds a bona fide vector space. Technically, this dismissal of any second order terms amounts to group contraction.

## Realizations of rotations

{{#invoke:main|main}}

We have seen that there are a variety of ways to represent rotations: as orthogonal matrices with unit determinant, by an axis and an angle of rotation, as elements of the Lie algebra via the exponential map, and as unit quaternions.

Another method is to specify an arbitrary rotation by a sequence of rotations about some fixed axes. See Euler angles.

## A note on representations

The Lie group SO(3) is compact and simple of rank 1, and so it has a single independent Casimir element, a quadratic invariant function of the three generators which commutes with all of them. The Killing form for the rotation group is just the Kronecker delta, and so this Casimir invariant is simply the sum of the squares of the generators, ${\displaystyle J_{x},\,J_{y},\,J_{z}}$, of the algebra

${\displaystyle [J_{\mathbf {x}},J_{\mathbf {y}}]=J_{\mathbf {z}},\quad [J_{\mathbf {z}},J_{\mathbf {x}}]=J_{\mathbf {y}},\quad [J_{\mathbf {y}},J_{\mathbf {z}}]=J_{\mathbf {x}}.}$

That is, the Casimir invariant is given by

${\displaystyle J^{2}\equiv {\boldsymbol {J\cdot J}}=J_{x}^{2}+J_{y}^{2}+J_{z}^{2}\propto I~.}$

For unitary irreducible representations Dj, the eigenvalues of this invariant are real and discrete, and characterize each representation, which is finite dimensional, of dimensionality 2j + 1. That is, the eigenvalues of this Casimir operator are

${\displaystyle J^{2}=-j(j+1)~I_{2j+1}~,}$

where j is integer or half-integer, and referred to as the spin or angular momentum.

So, above, the 3×3 generators L displayed act on the triplet (spin 1) representation, while the 2×2 ones (t) act on the doublet (spin-1/2) representation. By taking Kronecker products of D1/2 with itself repeatedly, one may construct all higher irreducible representations Dj. That is, the resulting generators for higher spin systems in three spatial dimensions, for arbitrarily large j, can be calculated using these spin operators and ladder operators.

For every unitary irreducible representation Dj there is an equivalent one, D−j−1. All infinite-dimensional irreducible representations must be non-unitary, since the group is compact.

In quantum mechanics, the Casimir invariant is the "angular-momentum-squared" operator; integer values of spin j characterize bosonic representations, while half-integer values characterize fermionic representations. The antihermitian matrices used above are utilized as spin operators, after they are multiplied by i, so they are now hermitian (like the Pauli matrices). Thus, in this language,

${\displaystyle [J_{\mathbf {x}},J_{\mathbf {y}}]=iJ_{\mathbf {z}},\quad [J_{\mathbf {z}},J_{\mathbf {x}}]=iJ_{\mathbf {y}},\quad [J_{\mathbf {y}},J_{\mathbf {z}}]=iJ_{\mathbf {x}}.}$

and hence

${\displaystyle J^{2}=j(j+1)~I_{2j+1}~.}$

Explicit expressions for these Dj are,

${\displaystyle {\begin{aligned}\left(J_{z}^{(j)}\right)_{ba}&=(j+1-a)~\delta _{b,a}\\\left(J_{x}^{(j)}\right)_{ba}&={\frac {1}{2}}(\delta _{b,a+1}+\delta _{b+1,a}){\sqrt {(j+1)(a+b-1)-ab}}\\\left(J_{y}^{(j)}\right)_{ba}&={\frac {1}{2i}}(\delta _{b,a+1}-\delta _{b+1,a}){\sqrt {(j+1)(a+b-1)-ab}}\\&1\leq a,b\leq 2j+1~,\end{aligned}}}$

for arbitrary Template:Mvar.

For example, the resulting spin matrices for spin 1 (j = 1) are:

{\displaystyle {\begin{aligned}J_{x}&={\frac {1}{\sqrt {2}}}{\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix}}\\J_{y}&={\frac {1}{\sqrt {2}}}{\begin{pmatrix}0&-i&0\\i&0&-i\\0&i&0\end{pmatrix}}\\J_{z}&={\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix}}\end{aligned}}}

(Note, however, that these are in an equivalent, but different, basis than the i L basis above.)
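The explicit matrix elements above can be implemented directly. This NumPy sketch (the helper name spin_matrices is ours, and it reads the subscript a as the row index and b as the column index) reproduces the spin-1 matrices shown and verifies the commutation relation [Jx, Jy] = iJz and the Casimir eigenvalue j(j + 1) for integer and half-integer j:

```python
import numpy as np

def spin_matrices(j):
    """Spin-j generators J_x, J_y, J_z from the explicit matrix elements
    above (reading subscript a as the row index, b as the column index)."""
    n = int(round(2 * j)) + 1
    Jx = np.zeros((n, n), dtype=complex)
    Jy = np.zeros((n, n), dtype=complex)
    Jz = np.zeros((n, n), dtype=complex)
    for a in range(1, n + 1):
        Jz[a - 1, a - 1] = j + 1 - a
        for b in (a - 1, a + 1):          # only the two off-diagonals
            if 1 <= b <= n:
                r = np.sqrt((j + 1) * (a + b - 1) - a * b)
                sign = 1.0 if b == a + 1 else -1.0
                Jx[a - 1, b - 1] = r / 2
                Jy[a - 1, b - 1] = sign * (-0.5j) * r   # 1/(2i) = -i/2
    return Jx, Jy, Jz

Jx, Jy, Jz = spin_matrices(1)
assert np.allclose(Jz, np.diag([1, 0, -1]))
assert np.allclose(Jx, np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2))
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)
assert np.allclose(Jx @ Jx + Jy @ Jy + Jz @ Jz, 1 * 2 * np.eye(3))

# Half-integer spin works the same way, e.g. j = 3/2:
Jx, Jy, Jz = spin_matrices(1.5)
assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)
assert np.allclose(Jx @ Jx + Jy @ Jy + Jz @ Jz, 1.5 * 2.5 * np.eye(4))
```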