In mathematics, the '''Hessian matrix''' or '''Hessian''' is a [[square matrix]] of second-order [[partial derivative]]s of a [[function (mathematics)|function]]. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician [[Ludwig Otto Hesse]] and later named after him. Hesse originally used the term "functional determinants".


Given the [[real number|real]]-valued function

:<math>f(x_1, x_2, \dots, x_n),\,\!</math>
 
if all second [[partial derivative]]s of ''f'' exist and are continuous over the domain of the function, then the Hessian matrix of ''f'' is
 
:<math>H(f)_{ij}(\mathbf x) = D_i D_j f(\mathbf x)\,\!</math>
 
where '''''x''''' = (''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x''<sub>n</sub>) and ''D''<sub>i</sub> is the differentiation operator with respect to the ''i''th argument. Thus
 
:<math>H(f) = \begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[2.2ex]
\dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\[2.2ex]
\vdots & \vdots & \ddots & \vdots \\[2.2ex]
\dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}.</math>
 
Because ''f'' is often clear from context, <math>H(f)(\mathbf x)</math> is frequently abbreviated to <math>H(\mathbf x)</math>.
 
The Hessian matrix is related to the [[Jacobian matrix]] by <math>H(f)(\mathbf x)</math> = <math>J(\nabla \! f)(\mathbf x)</math>.
 
The [[determinant]] of the above matrix is also sometimes referred to as the Hessian.<ref>{{cite book |last1=Binmore |first1=Ken |authorlink1=Kenneth Binmore |last2=Davies |first2=Joan |year=2007 |title=Calculus Concepts and Methods |oclc=717598615 |isbn=9780521775410 |publisher=Cambridge University Press|page=190}}</ref>
 
Hessian matrices are used in large-scale [[Optimization (mathematics)|optimization]] problems within [[Newton's method in optimization|Newton]]-type methods because they are the coefficient of the quadratic term of a local [[Taylor expansion]] of a function. That is,
:<math>y=f(\mathbf{x}+\Delta\mathbf{x})\approx f(\mathbf{x}) + J(\mathbf{x})\Delta \mathbf{x} +\frac{1}{2} \Delta\mathbf{x}^\mathrm{T} H(\mathbf{x}) \Delta\mathbf{x}</math>
where ''J'' is the [[Jacobian matrix]], which is a vector (the [[gradient]]) for scalar-valued functions. The full Hessian matrix can be difficult to compute in practice; in such situations, [[quasi-Newton method|quasi-Newton]] algorithms have been developed that use approximations to the Hessian. The best-known quasi-Newton algorithm is the [[Broyden–Fletcher–Goldfarb–Shanno algorithm|BFGS]] algorithm.{{Citation needed|date=April 2013}}
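
For illustration, the gradient and Hessian in this expansion can be approximated by finite differences. The following is a minimal sketch (using [[NumPy]]; the example function, step sizes and helper names are arbitrary choices for illustration, not part of any particular optimization library) that compares the quadratic model with the true function value:

<syntaxhighlight lang="python">
import numpy as np

def gradient_fd(f, x, h=1e-5):
    """Central-difference approximation of the gradient of f at x."""
    g = np.empty(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian_fd(f, x, h=1e-4):
    """Central-difference approximation of the Hessian of f at x."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

def f(x):
    # example function f(x1, x2) = x1**2 * x2 + x2**3
    return x[0] ** 2 * x[1] + x[1] ** 3

x0 = np.array([1.0, 2.0])
dx = np.array([0.01, -0.02])
quadratic_model = (f(x0) + gradient_fd(f, x0) @ dx
                   + 0.5 * dx @ hessian_fd(f, x0) @ dx)
print(quadratic_model, f(x0 + dx))  # the quadratic model closely matches the true value
</syntaxhighlight>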
 
==Mixed derivatives and symmetry of the Hessian==
The '''mixed derivatives''' of ''f'' are the entries off the [[main diagonal]] in the Hessian. Assuming that they are continuous, the order of differentiation does not matter ([[Symmetry_of_second_derivatives#Clairaut.27s_theorem|Clairaut's theorem]]). For example,
 
:<math>\frac {\partial}{\partial x} \left( \frac { \partial f }{ \partial y} \right) =
      \frac {\partial}{\partial y} \left( \frac { \partial f }{ \partial x} \right).</math>
 
This can also be written
 
:<math>f_{yx} = f_{xy}. \,</math>
 
In a formal statement: if the second derivatives of ''f'' are all [[continuous function|continuous]] in a [[Neighbourhood (mathematics)|neighborhood]] ''D'', then the Hessian of ''f'' is a [[symmetric matrix]] throughout ''D''; see [[symmetry of second derivatives]].
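
This symmetry can be checked on a concrete example. A minimal sketch (using the [[SymPy]] computer algebra system; the sample function is an arbitrary choice):

<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * sp.sin(y) + sp.exp(x * y)

f_xy = sp.diff(f, x, y)   # differentiate with respect to x, then y
f_yx = sp.diff(f, y, x)   # differentiate with respect to y, then x
print(sp.simplify(f_xy - f_yx))   # 0: the mixed partial derivatives agree
</syntaxhighlight>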
 
==Critical points==
{{unreferenced|section|date=April 2013}}
If the [[gradient]] (the vector of the partial derivatives) of  a function ''f''  is zero at some point ''x'', then ''f'' has a ''[[critical point (mathematics)|critical point]]'' (or ''[[stationary point]]'') at ''x''. The [[determinant]] of the Hessian at ''x'' is then called the [[discriminant]]. If this determinant is zero then ''x'' is called a ''degenerate critical point'' of ''f'', or a ''non-Morse critical point'' of ''f''. Otherwise it is non-degenerate, and called a ''Morse critical point'' of ''f''.
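
For illustration, a minimal sketch (using [[SymPy]]; the sample function, the so-called monkey saddle, is an arbitrary choice) that locates the critical points and evaluates the discriminant at each:

<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x*y**2                      # the "monkey saddle"

grad = [sp.diff(f, v) for v in (x, y)]
critical_points = sp.solve(grad, (x, y), dict=True)   # here only the origin
H = sp.hessian(f, (x, y))
for p in critical_points:
    print(p, H.subs(p).det())   # discriminant 0: a degenerate critical point
</syntaxhighlight>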
 
The Hessian matrix plays an important role in [[Morse theory]], because its [[kernel of a matrix|kernel]] and [[eigenvalue]]s allow classification of the critical points.
 
==Second derivative test==
{{Main|Second partial derivative test}}
The following test can be applied at a non-degenerate critical point ''x''.  If the Hessian is [[Positive-definite matrix|positive definite]] at ''x'', then ''f'' attains a local minimum at ''x''. If the Hessian is [[Positive-definite matrix#Negative-definite, semidefinite and indefinite matrices|negative definite]] at ''x'', then ''f'' attains a local maximum at ''x''. If the Hessian has both positive and negative [[eigenvalue]]s then ''x'' is a [[saddle point]] for ''f'' (this is true even if ''x'' is degenerate). Otherwise the test is inconclusive.
 
Note that for positive semidefinite and negative semidefinite Hessians the test is inconclusive (yet a conclusion can be made that ''f'' is locally [[convex function|convex]] or [[concave function|concave]] respectively). However, more can be said from the point of view of [[Morse theory]].
 
The [[second derivative test]] for functions of one and two variables is simple. In one variable, the Hessian contains just one second derivative; if it is positive then ''x'' is a local minimum, and if it is negative then ''x'' is a local maximum; if it is zero then the test is inconclusive. In two variables, the [[determinant]] can be used, because the determinant is the product of the eigenvalues. If it is positive then the eigenvalues are both positive, or both negative. If it is negative then the two eigenvalues have different signs. If it is zero, then the second derivative test is inconclusive.
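
In the general case the eigenvalue criterion can be applied directly. A minimal sketch (using [[NumPy]]; the helper name and tolerance are illustrative only):

<syntaxhighlight lang="python">
import numpy as np

def classify_critical_point(H, tol=1e-10):
    """Classify a critical point from the eigenvalues of its (symmetric) Hessian."""
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > tol):
        return "local minimum"
    if np.all(eig < -tol):
        return "local maximum"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point"
    return "inconclusive (some eigenvalue is zero)"

# f(x, y) = x**2 - y**2 has Hessian diag(2, -2) at its critical point (0, 0)
print(classify_critical_point(np.diag([2.0, -2.0])))   # saddle point
</syntaxhighlight>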
 
More generally, the second-order conditions that are sufficient for a local minimum or maximum can be expressed in terms of the sequence of principal (upper-leftmost) [[minor (linear algebra)|minors]] (determinants of sub-matrices) of the Hessian; these conditions are a special case of those given in the next section for bordered Hessians for constrained optimization—the case in which the number of constraints is zero.
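
A minimal sketch of the unconstrained minor conditions (using [[NumPy]]; by Sylvester's criterion, all leading principal minors positive means the Hessian is positive definite, hence a local minimum):

<syntaxhighlight lang="python">
import numpy as np

def leading_principal_minors(H):
    """Determinants of the upper-left k-by-k sub-matrices of H, for k = 1..n."""
    return [np.linalg.det(H[:k, :k]) for k in range(1, H.shape[0] + 1)]

H = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
minors = leading_principal_minors(H)
print(minors)                        # [2.0, 3.0] (up to rounding)
print(all(m > 0 for m in minors))    # True: all minors positive, so H is positive definite
</syntaxhighlight>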
 
==Bordered Hessian==
A '''bordered Hessian''' is used for the second-derivative test in certain constrained optimization problems.  Given the function as before:
 
:<math>f(x_1, x_2, \dots, x_n),</math>
 
but adding a constraint function such that:
 
:<math>g(x_1, x_2, \dots, x_n) = c,</math>
 
the bordered Hessian appears as
 
:<math>H(f,g) = \begin{bmatrix}
0 & \dfrac{\partial g}{\partial x_1} & \dfrac{\partial g}{\partial x_2} & \cdots & \dfrac{\partial g}{\partial x_n} \\[2.2ex]
\dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[2.2ex]
\dfrac{\partial g}{\partial x_2} & \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\[2.2ex]
\vdots & \vdots & \vdots & \ddots & \vdots \\[2.2ex]
\dfrac{\partial g}{\partial x_n} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}</math>
 
If there are, say, ''m'' constraints then the zero in the north-west corner is an ''m'' × ''m'' block of zeroes, and there are ''m'' border rows at the top and ''m'' border columns at the left.
 
The above rules stating that extrema are characterized by a positive definite or negative definite Hessian cannot apply here, since a bordered Hessian cannot be definite: we have <math>\mathbf{z}^\mathrm{T} H \mathbf{z} = 0</math> whenever the vector <math>\mathbf{z}</math> has a non-zero first entry followed by zeroes.
 
The second derivative test here consists of sign restrictions on the determinants of a certain set of ''n'' − ''m'' submatrices of the bordered Hessian.<ref>{{Citation | last1=Neudecker | first1=Heinz | last2=Magnus | first2=Jan R. | title=Matrix differential calculus with applications in statistics and econometrics | publisher=[[John Wiley & Sons]] | location=New York | isbn=978-0-471-91516-4 | year=1988}}, page 136</ref> Intuitively, one can think of the ''m'' constraints as reducing the problem to one with ''n'' − ''m'' free variables. (For example, the maximization of <math>f(x_1,x_2,x_3)</math> subject to the constraint <math>x_1+x_2+x_3=1</math> can be reduced to the maximization of <math>f(x_1,x_2,1-x_1-x_2)</math> without constraint.)
 
Specifically,<ref>Chiang, Alpha C., ''Fundamental Methods of Mathematical Economics'', McGraw-Hill, third edition, 1984: p. 386.</ref> sign conditions are imposed on the sequence of principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian, the smallest minor consisting of the truncated first 2''m''+1 rows and columns, the next consisting of the truncated first 2''m''+2 rows and columns, and so on, with the last being the entire bordered Hessian. There are thus ''n''–''m'' minors to consider. A sufficient condition for a local ''maximum'' is that these minors alternate in sign with the smallest one having the sign of (–1)<sup>''m''+1</sup>.  A sufficient condition for a local ''minimum'' is that all of these minors have the sign of (–1)<sup>''m''</sup>. (In the unconstrained case of ''m''=0 these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite respectively.)
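
A minimal sketch of this test for a single constraint (''m'' = 1), using [[NumPy]]; the objective ''f''(''x''<sub>1</sub>, ''x''<sub>2</sub>) = ''x''<sub>1</sub>''x''<sub>2</sub>, the constraint ''x''<sub>1</sub> + ''x''<sub>2</sub> = 1 and the helper name are arbitrary choices for illustration:

<syntaxhighlight lang="python">
import numpy as np

def bordered_hessian_minors(Hf, grad_g):
    """Leading principal minors of the bordered Hessian used in the test,
    for a single constraint (m = 1): from the (2m+1) x (2m+1) minor up to the full matrix."""
    n = len(grad_g)
    B = np.zeros((n + 1, n + 1))
    B[0, 1:] = grad_g
    B[1:, 0] = grad_g
    B[1:, 1:] = Hf
    return [np.linalg.det(B[:k, :k]) for k in range(3, n + 2)]

# Maximize f(x1, x2) = x1 * x2 subject to x1 + x2 = 1; candidate point (1/2, 1/2).
Hf = np.array([[0.0, 1.0],
               [1.0, 0.0]])      # Hessian of f (constant here)
grad_g = np.array([1.0, 1.0])    # gradient of the constraint g = x1 + x2
minors = bordered_hessian_minors(Hf, grad_g)
print(minors)   # [2.0]: positive, the sign of (-1)^(m+1), so (1/2, 1/2) is a local maximum
</syntaxhighlight>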
 
==Vector-valued functions==
{{unreferenced|section|date=April 2013}}
If ''f'' is instead a function <math>f\colon \mathbb{R}^n \to \mathbb{R}^m</math>, i.e.
 
:<math>f(x_1, x_2, \dots, x_n) = (f_1, f_2, \dots, f_m),</math>
 
then the array of second partial derivatives is not a two-dimensional matrix of size <math>n \times n</math>, but rather a [[tensor]] of order 3. This can be thought of as a multi-dimensional array with dimensions <math>m \times n \times n</math>, which degenerates to the usual Hessian matrix for <math>m = 1</math>.
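
A minimal sketch of this structure (using [[SymPy]]; the map from <math>\mathbb{R}^2</math> to <math>\mathbb{R}^2</math> is an arbitrary example), computing one Hessian per component function:

<syntaxhighlight lang="python">
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x1**2 * x2, sp.sin(x1) + x2**3])   # f : R^2 -> R^2, so m = 2, n = 2

# One n x n Hessian per component function: together an m x n x n array
component_hessians = [sp.hessian(fi, (x1, x2)) for fi in f]
for H in component_hessians:
    sp.pprint(H)
</syntaxhighlight>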
 
==Generalizations to Riemannian manifolds==
 
Let <math>(M,g)</math> be a [[Riemannian manifold]] and <math>\nabla</math> its [[Levi-Civita connection]]. Let <math>f:M \to \mathbb{R}</math> be a smooth function. We may define the Hessian tensor
:<math>\displaystyle \mbox{Hess}(f) \in \Gamma(T^*M \otimes T^*M) </math> by <math>\mbox{Hess}(f):=\nabla \nabla f = \nabla df</math>,
 
where we have taken advantage of the first covariant derivative of a function being the same as its ordinary derivative.  Choosing local coordinates <math>\{x^i\}</math> we obtain the local expression for the Hessian as
 
:<math> \mbox{Hess}(f)=\nabla_i\, \partial_j f \ dx^i \!\otimes\! dx^j = \left( \frac{\partial^2 f}{\partial x^i \partial x^j}-\Gamma_{ij}^k \frac{\partial f}{\partial x^k} \right) dx^i \otimes dx^j </math>
 
where <math>\Gamma^k_{ij}</math> are the [[Christoffel symbols]] of the connection.  Other equivalent forms for the Hessian are given by
:<math>\mbox{Hess}(f)(X,Y)= \langle \nabla_X \mbox{grad}f,Y \rangle </math> and <math>\mbox{Hess}(f)(X,Y)=X(Yf)-df(\nabla_XY)</math>.
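
A minimal sketch of this coordinate expression (using [[SymPy]]; the polar-coordinate metric on the plane and the sample function are arbitrary choices), computing the Christoffel symbols from the metric and then the components of Hess(''f''):

<syntaxhighlight lang="python">
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.diag(1, r**2)          # the Euclidean metric in polar coordinates
g_inv = g.inv()
n = len(coords)

# Christoffel symbols of the Levi-Civita connection:
# Gamma^k_ij = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
Gamma = [[[sum(g_inv[k, l] * (sp.diff(g[j, l], coords[i])
                              + sp.diff(g[i, l], coords[j])
                              - sp.diff(g[i, j], coords[l]))
               for l in range(n)) / 2
           for j in range(n)]
          for i in range(n)]
         for k in range(n)]

f = r**2 * sp.cos(th)          # a sample smooth function

# Hess(f)_ij = d_i d_j f - Gamma^k_ij d_k f
Hess = sp.Matrix(n, n, lambda i, j:
                 sp.diff(f, coords[i], coords[j])
                 - sum(Gamma[k][i][j] * sp.diff(f, coords[k]) for k in range(n)))
sp.pprint(Hess.applyfunc(sp.simplify))
</syntaxhighlight>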
 
==See also==
* The determinant of the Hessian matrix is a covariant; see [[Invariant of a binary form]]
*[[Polarization identity]], useful for rapid calculations involving Hessians.
*[[Jacobian matrix]]
*The Hessian matrix is commonly used for expressing operators in [[image processing]] and [[computer vision]] (see the [[Laplacian of Gaussian|Laplacian of Gaussian (LoG) blob detector]], [[blob detection#The determinant of the Hessian|the determinant of Hessian (DoH) blob detector]] and [[scale space]]).
 
==Notes==
<references/>
 
== External links ==
*{{MathWorld|Hessian|Hessian}}
 
[[Category:Differential operators]]
[[Category:Matrices]]
[[Category:Morse theory]]
[[Category:Multivariable calculus]]
