# Descent direction

In optimization, a descent direction is a vector ${\displaystyle \mathbf {p} \in \mathbb {R} ^{n}}$ that, in the sense below, moves us closer to a local minimum ${\displaystyle \mathbf {x} ^{*}}$ of our objective function ${\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }$.
Suppose we are computing ${\displaystyle \mathbf {x} ^{*}}$ by an iterative method, such as line search. We define a descent direction at the ${\displaystyle k}$th iterate to be any ${\displaystyle \mathbf {p} _{k}\in \mathbb {R} ^{n}}$ such that ${\displaystyle \langle \mathbf {p} _{k},\nabla f(\mathbf {x} _{k})\rangle <0}$, where ${\displaystyle \langle \cdot ,\cdot \rangle }$ denotes the inner product. The motivation for such an approach is that, by Taylor's theorem, sufficiently small steps along ${\displaystyle \mathbf {p} _{k}}$ are guaranteed to reduce ${\displaystyle f}$.
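The Taylor argument can be made explicit. Expanding ${\displaystyle f}$ to first order about ${\displaystyle \mathbf {x} _{k}}$ along ${\displaystyle \mathbf {p} _{k}}$ with step size ${\displaystyle \alpha >0}$ gives:

```latex
f(\mathbf{x}_k + \alpha \mathbf{p}_k)
  = f(\mathbf{x}_k)
  + \alpha \, \langle \mathbf{p}_k, \nabla f(\mathbf{x}_k) \rangle
  + o(\alpha), \qquad \alpha \to 0^{+}.
```

Since ${\displaystyle \langle \mathbf {p} _{k},\nabla f(\mathbf {x} _{k})\rangle <0}$, the linear term is negative and dominates the ${\displaystyle o(\alpha )}$ remainder for all sufficiently small ${\displaystyle \alpha >0}$, so ${\displaystyle f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})<f(\mathbf {x} _{k})}$.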
Using this definition, the negative of a non-zero gradient is always a descent direction, as ${\displaystyle \langle -\nabla f(\mathbf {x} _{k}),\nabla f(\mathbf {x} _{k})\rangle =-\langle \nabla f(\mathbf {x} _{k}),\nabla f(\mathbf {x} _{k})\rangle <0}$.
More generally, if ${\displaystyle P}$ is a positive definite matrix, then ${\displaystyle d=-P\nabla f(x)}$ is a descent direction [1] at ${\displaystyle x}$, since ${\displaystyle \langle -P\nabla f(x),\nabla f(x)\rangle =-\nabla f(x)^{\top }P\nabla f(x)<0}$ whenever ${\displaystyle \nabla f(x)\neq \mathbf {0} }$. This generality is used in preconditioned gradient descent methods.
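A minimal numerical sketch of this fact, using a hypothetical quadratic objective ${\displaystyle f(x)=x^{\top }Ax}$ and a positive definite preconditioner chosen only for illustration (the objective, the matrices, and the test point are all assumptions, not part of any standard method):

```python
import numpy as np

# Hypothetical objective f(x) = x^T A x with A symmetric positive definite.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def grad_f(x):
    # Gradient of f(x) = x^T A x is 2 A x.
    return 2.0 * A @ x

# A positive definite preconditioner P; here P = A^{-1}, as in
# Newton-like preconditioning, but any SPD matrix would do.
P = np.linalg.inv(A)

x = np.array([1.0, -2.0])   # an arbitrary non-stationary point
g = grad_f(x)
d = -P @ g                  # preconditioned direction

# Descent-direction condition: <d, grad f(x)> < 0 when grad f(x) != 0.
print(d @ g < 0)  # True
```

The printed condition holds for any SPD ${\displaystyle P}$, by the inequality above; the choice ${\displaystyle P=A^{-1}}$ merely makes the direction point straight at the minimizer of the quadratic.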