In [[mathematics]], specifically in [[algebraic combinatorics]] and [[commutative algebra]], the '''complete homogeneous symmetric polynomials''' are a specific kind of [[symmetric polynomial]]. Every symmetric polynomial can be expressed as a polynomial expression in complete homogeneous symmetric polynomials.
 
==Definition==
 
The complete homogeneous symmetric polynomial of degree ''k'' in ''n'' variables ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>, written ''h''<sub>''k''</sub> for ''k'' = 0, 1, 2, ..., is the sum of all [[monomial]]s of total degree ''k'' in the variables. Formally,
:<math> h_k (X_1, X_2, \dots,X_n) = \sum_{1 \leq i_1 \leq i_2 \leq \cdots \leq i_k \leq n} X_{i_1} X_{i_2} \cdots X_{i_k}.</math>
 
The formula can also be written as:
:<math> h_k (X_1, X_2, \dots,X_n) =
\sum_{l_1+l_2+ \cdots + l_n=k; ~~ l_i \geq 0 }
X_{1}^{l_1} X_{2}^{l_2} \cdots X_{n}^{l_n}.</math>
Indeed, ''l''<sub>''p''</sub> is just the multiplicity of ''p'' in the sequence (''i''<sub>1</sub>, ''i''<sub>2</sub>, ..., ''i''<sub>''k''</sub>).
 
The first few of these polynomials are
:<math> h_0 (X_1, X_2, \dots,X_n) = 1,</math>
:<math> h_1 (X_1, X_2, \dots,X_n) = \sum_{1 \leq j \leq n} X_j,</math>
:<math> h_2 (X_1, X_2, \dots,X_n) = \sum_{1 \leq j \leq k \leq n} X_j X_k,</math>
:<math> h_3 (X_1, X_2, \dots,X_n) = \sum_{1 \leq j \leq k \leq l \leq n} X_j X_k X_l.</math>
 
Thus, for each nonnegative integer <math>k</math>, there exists exactly one complete homogeneous symmetric polynomial of degree <math>k</math> in <math>n</math> variables.
 
Another way of rewriting the definition is to take the summation over all sequences (''i''<sub>1</sub>, ''i''<sub>2</sub>, ..., ''i''<sub>''k''</sub>),
without the ordering condition <math> i_p \leq i_{p+1} </math>:
:<math> h_k (X_1, X_2, \dots,X_n) = \sum_{1 \leq i_1, i_2 , \ldots , i_k \leq n}
\frac{m_1! \, m_2! \cdots m_n!}{k!} X_{i_1} X_{i_2} \cdots X_{i_n},</math>
where ''m''<sub>''p''</sub> is the multiplicity of ''p'' in the sequence (''i''<sub>1</sub>, ''i''<sub>2</sub>, ..., ''i''<sub>''k''</sub>).
 
For example,
:<math> h_2 (X_1, X_2) = \frac{2!1!}{2!}X_1^2 +\frac{1!1!}{2!}X_1X_2 +\frac{1!1!}{2!}X_2X_1 + \frac{1!2!}{2!}X_2^2 = X_1^2+X_1X_2+X_2^2.</math>
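
Both formulas can be checked mechanically on small cases. The following Python sketch (illustrative only, using the SymPy library; the helper functions are written ad hoc for this check) computes ''h''<sub>''k''</sub> once as a sum over weakly increasing index sequences and once as a weighted sum over all index sequences, and verifies that the two agree.
<syntaxhighlight lang="python">
# Illustrative sketch (not part of the article): compute h_k in two ways and check they agree.
from itertools import combinations_with_replacement, product
from math import factorial, prod
import sympy as sp

def h_ordered(k, X):
    """Sum over weakly increasing index sequences i_1 <= i_2 <= ... <= i_k."""
    return sp.Add(*[sp.Mul(*[X[i] for i in idx])
                    for idx in combinations_with_replacement(range(len(X)), k)])

def h_unordered(k, X):
    """Sum over all index sequences, each weighted by m_1! m_2! ... m_n! / k!."""
    n = len(X)
    total = sp.Integer(0)
    for idx in product(range(n), repeat=k):
        mult = [idx.count(p) for p in range(n)]                 # multiplicities m_p
        weight = sp.Rational(prod(factorial(m) for m in mult), factorial(k))
        total += weight * sp.Mul(*[X[i] for i in idx])
    return sp.expand(total)

X = sp.symbols('X1:4')                                          # X1, X2, X3
for k in range(5):
    assert sp.expand(h_ordered(k, X) - h_unordered(k, X)) == 0
print(h_ordered(2, sp.symbols('X1:3')))                         # X1**2 + X1*X2 + X2**2
</syntaxhighlight>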
 
Taking all integral linear combinations of products of the complete homogeneous symmetric polynomials yields a [[commutative ring]].
 
==Examples==
The following lists the ''n'' basic (as explained below) complete homogeneous symmetric polynomials for the first three positive values of ''n''.
 
For ''n'' = 1:
:<math>h_1(X_1) = X_1\,.</math>
 
For ''n'' = 2:
:<math>\begin{align}
h_1(X_1,X_2)&= X_1 + X_2\\
h_2(X_1,X_2)&= X_1^2 + X_1X_2 + X_2^2.
\end{align}</math>
 
For ''n'' = 3:
:<math>\begin{align}
h_1(X_1,X_2,X_3) &= X_1 + X_2 + X_3\\
h_2(X_1,X_2,X_3) &= X_1^2 + X_2^2 + X_3^2 + X_1X_2 + X_1X_3 + X_2X_3\\
h_3(X_1,X_2,X_3) &= X_1^3+X_2^3+X_3^3 + X_1^2X_2+X_1^2X_3+X_2^2X_1+X_2^2X_3+X_3^2X_1+X_3^2X_2 + X_1X_2X_3.
\end{align}</math>
 
==Properties==
 
===Generating function===
The complete homogeneous symmetric polynomials are characterized by the following identity of formal power series in ''t'':
:<math>\sum_{k=0}^\infty h_k(X_1,\ldots,X_n)t^k = \prod_{i=1}^n\sum_{j=0}^\infty(X_it)^j = \prod_{i=1}^n\frac1{1-X_it}</math>
(this is called the [[generating function]], or generating series, for the complete homogeneous symmetric polynomials).
Here each fraction in the final expression is the usual way to represent the formal [[geometric series]] that is a factor in the middle expression. The identity can be justified by considering how the product of those geometric series is formed: each term of the product is obtained by multiplying one term chosen from each geometric series, and every monomial in the variables ''X''<sub>''i''</sub> is obtained for exactly one such choice of terms, and comes multiplied by a power of ''t'' equal to the degree of the monomial.
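
The identity can be checked on small cases by multiplying out truncated geometric series and comparing coefficients of ''t''. The following Python sketch (illustrative only, using the SymPy library) does this for ''n''&nbsp;=&nbsp;3.
<syntaxhighlight lang="python">
# Illustrative sketch: the coefficient of t^k in prod_i (1 + X_i t + (X_i t)^2 + ...) is h_k.
from itertools import combinations_with_replacement
import sympy as sp

def h(k, X):
    """h_k as the sum over weakly increasing index sequences."""
    return sp.Add(*[sp.Mul(*[X[i] for i in idx])
                    for idx in combinations_with_replacement(range(len(X)), k)])

t = sp.Symbol('t')
X = sp.symbols('X1:4')
N = 5                                                   # truncation order
# product of the geometric series, each truncated after t^(N-1)
gen = sp.expand(sp.Mul(*[sp.Add(*[(x * t) ** j for j in range(N)]) for x in X]))
for k in range(N):
    assert sp.expand(gen.coeff(t, k) - h(k, X)) == 0
print(gen.coeff(t, 2))                                  # h_2(X1, X2, X3)
</syntaxhighlight>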
 
The formula above is in a certain sense equivalent to the [[MacMahon master theorem]]. Indeed, the right-hand side can be interpreted as <math>1/\det(1-tM)</math> for the diagonal matrix ''M'' with ''X''<sub>''i''</sub> on the diagonal, while on the left-hand side one recognizes expressions of the kind that appear in the MacMahon master theorem. Since diagonalizable matrices are dense in the set of all matrices, this consideration proves the whole theorem.
 
===Relation with the elementary symmetric polynomials===
 
There is a fundamental relation between the [[elementary symmetric polynomial]]s and the complete homogeneous ones:
 
:<math>\sum_{i=0}^m(-1)^ie_i(X_1,\ldots,X_n)h_{m-i}(X_1,\ldots,X_n)=0,</math>
 
which is valid for all ''m''&nbsp;&gt;&nbsp;0, and any number of variables&nbsp;''n''. The easiest way to see that it holds is from an identity of formal power series in ''t'' for the elementary symmetric polynomials, analogous to the one given above for the complete homogeneous ones:
 
:<math>\sum_{k=0}^\infty e_k(X_1,\ldots,X_n)(-t)^k = \prod_{i=1}^n(1-X_it)</math>
 
(this is actually an identity of polynomials in ''t'', because after ''e''<sub>''n''</sub>(''X''<sub>1</sub>,&hellip;,''X''<sub>''n''</sub>) the elementary symmetric polynomials become zero). Multiplying this by the generating function for the complete homogeneous symmetric polynomials, one obtains the constant series&nbsp;1, and the relation between the elementary and complete homogeneous polynomials follows from comparing coefficients of ''t''<sup>''m''</sup>. A somewhat more direct way to understand that relation is to consider the contributions in the summation involving a fixed monomial ''X''<sup>&alpha;</sup> of degree ''m''. For any subset ''S'' of the variables appearing with nonzero exponent in the monomial, there is a contribution involving the product ''X''<sub>''S''</sub> of those variables as a term from ''e''<sub>''s''</sub>(''X''<sub>1</sub>,&hellip;,''X''<sub>''n''</sub>), where ''s''&nbsp;=&nbsp;#''S'', and the monomial ''X''<sup>&alpha;</sup>&nbsp;/&nbsp;''X''<sub>''S''</sub> from ''h''<sub>''m''−''s''</sub>(''X''<sub>1</sub>,&hellip;,''X''<sub>''n''</sub>); this contribution has coefficient (−1)<sup>''s''</sup>. The relation then follows from the fact that
 
:<math>\sum_{s=0}^l{l\choose s}(-1)^s=(1-1)^l=0\quad\mbox{for }l>0,</math>
 
by the [[binomial formula]], where ''l''&nbsp;&le;&nbsp;''m'' denotes the number of distinct variables occurring (with nonzero exponent) in ''X''<sup>&alpha;</sup>.
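
The relation can also be verified mechanically on small cases. In the following Python sketch (illustrative only, using SymPy), ''e''<sub>''i''</sub> and ''h''<sub>''j''</sub> are computed directly from their definitions and the alternating sum is checked to vanish for several values of ''m'' and ''n''.
<syntaxhighlight lang="python">
# Illustrative sketch: check that sum_{i=0}^m (-1)^i e_i h_{m-i} = 0 for m > 0.
from itertools import combinations, combinations_with_replacement
import sympy as sp

def h(k, X):
    """Complete homogeneous: sum over weakly increasing index sequences."""
    return sp.Add(*[sp.Mul(*[X[i] for i in idx])
                    for idx in combinations_with_replacement(range(len(X)), k)])

def e(k, X):
    """Elementary: sum over strictly increasing index sequences (zero for k > n)."""
    return sp.Add(*[sp.Mul(*[X[i] for i in idx])
                    for idx in combinations(range(len(X)), k)])

for n in range(1, 4):
    X = sp.symbols(f'X1:{n + 1}')
    for m in range(1, 6):
        s = sum((-1) ** i * e(i, X) * h(m - i, X) for i in range(m + 1))
        assert sp.expand(s) == 0
print("identity verified for n <= 3, m <= 5")
</syntaxhighlight>
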
Since ''e''<sub>0</sub>(''X''<sub>1</sub>, &hellip;, ''X''<sub>''n''</sub>) and ''h''<sub>0</sub>(''X''<sub>1</sub>, &hellip;, ''X''<sub>''n''</sub>) are both equal to&nbsp;1, one can isolate from the relation either the first or the last terms of the summation. The former gives a sequence of equations
 
:<math>\begin{align}
h_1(X_1,\ldots,X_n)&=e_1(X_1,\ldots,X_n),\\
h_2(X_1,\ldots,X_n)&=h_1(X_1,\ldots,X_n)e_1(X_1,\ldots,X_n)-e_2(X_1,\ldots,X_n),\\
h_3(X_1,\ldots,X_n)&=h_2(X_1,\ldots,X_n)e_1(X_1,\ldots,X_n)-h_1(X_1,\ldots,X_n)e_2(X_1,\ldots,X_n)+e_3(X_1,\ldots,X_n),\\
\end{align}</math>
 
and so on, which allows one to express recursively the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials; the latter gives a set of equations
 
:<math>\begin{align}
e_1(X_1,\ldots,X_n)&=h_1(X_1,\ldots,X_n),\\
e_2(X_1,\ldots,X_n)&=h_1(X_1,\ldots,X_n)e_1(X_1,\ldots,X_n)-h_2(X_1,\ldots,X_n),\\
e_3(X_1,\ldots,X_n)&=h_1(X_1,\ldots,X_n)e_2(X_1,\ldots,X_n)-h_2(X_1,\ldots,X_n)e_1(X_1,\ldots,X_n)+h_3(X_1,\ldots,X_n),\\
\end{align}</math>
 
and so forth, which allows one to do the inverse, expressing the successive elementary symmetric polynomials in terms of the complete homogeneous ones. The first ''n'' elementary and complete homogeneous symmetric polynomials play perfectly similar roles in these relations, even though the former polynomials become zero beyond degree ''n'', whereas the latter do not. This phenomenon can be understood in the setting of the [[Symmetric function#The ring of symmetric functions|ring of symmetric functions]]. It has a [[ring automorphism]] that interchanges the sequences of the ''n'' elementary and the first ''n'' complete homogeneous ''symmetric functions''.
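
In computational terms these triangular systems give a direct way to convert between the two families. The sketch below (illustrative Python/SymPy code, not a library routine) builds ''h''<sub>0</sub>, ..., ''h''<sub>''m''</sub> from the elementary symmetric polynomials by the first recursion and compares the result with the definition.
<syntaxhighlight lang="python">
# Illustrative sketch: h_m = sum_{i=1}^m (-1)^(i-1) h_{m-i} e_i, starting from h_0 = 1.
from itertools import combinations, combinations_with_replacement
import sympy as sp

def h_direct(k, X):
    return sp.Add(*[sp.Mul(*[X[i] for i in idx])
                    for idx in combinations_with_replacement(range(len(X)), k)])

def e(k, X):
    return sp.Add(*[sp.Mul(*[X[i] for i in idx])
                    for idx in combinations(range(len(X)), k)])

def h_from_e(m_max, X):
    """Compute h_0, ..., h_{m_max} recursively from the elementary symmetric polynomials."""
    hs = [sp.Integer(1)]                                   # h_0 = 1
    for m in range(1, m_max + 1):
        hs.append(sp.expand(sum((-1) ** (i - 1) * hs[m - i] * e(i, X)
                                for i in range(1, m + 1))))
    return hs

X = sp.symbols('X1:4')
hs = h_from_e(4, X)
for m in range(5):
    assert sp.expand(hs[m] - h_direct(m, X)) == 0
print(hs[2])                                               # h_2(X1, X2, X3)
</syntaxhighlight>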
 
The set of complete homogeneous symmetric polynomials of degree 1 to ''n'' in ''n'' variables [[generator (mathematics)|generates]] the [[polynomial ring|ring]] of [[symmetric polynomial]]s in ''n'' variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring <math>\mathbf Z[h_1(X_1,\ldots,X_n),\ldots,h_n(X_1,\ldots,X_n)]</math>. This can be formulated by saying that <math> h_1(X_1,\ldots,X_n),\ldots,h_n(X_1,\ldots,X_n) </math> form an ''algebraic basis'' of the ring of symmetric polynomials in ''X''<sub>1</sub>, &hellip; ''X''<sub>''n''</sub> with integral coefficients (as is also true for the elementary symmetric polynomials). The same is true with the ring '''Z''' of integers replaced by any other [[commutative ring]]. These statements follow from analogous statements for the elementary symmetric polynomials, due to the indicated possibility of expressing either kind of symmetric polynomials in terms of the other kind.
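
For example, the power sum <math>X_1^2+\cdots+X_n^2</math> is expressed in this basis as
:<math>X_1^2+\cdots+X_n^2 = 2\,h_2(X_1,\ldots,X_n) - h_1(X_1,\ldots,X_n)^2,</math>
as can be checked directly from the expressions for ''h''<sub>1</sub> and ''h''<sub>2</sub> given above.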
 
===Relation with the monomial symmetric polynomials===
 
The polynomial ''h''<sub>''k''</sub>(''X''<sub>1</sub>, …, ''X''<sub>''n''</sub>) is also the sum of '''all''' distinct
[[Symmetric_polynomial#Monomial_symmetric_polynomials|monomial symmetric polynomials]] of degree ''k'' in ''X''<sub>1</sub>, …, ''X''<sub>''n''</sub>, for instance
:<math>\begin{align}
h_3(X_1,X_2,X_3)&=m_{(3)}(X_1,X_2,X_3)+m_{(2,1)}(X_1,X_2,X_3)+m_{(1,1,1)}(X_1,X_2,X_3)\\
&=(X_1^3+X_2^3+X_3^3)+(X_1^2X_2+X_1^2X_3+X_1X_2^2+X_1X_3^2+X_2^2X_3+X_2X_3^2)+(X_1X_2X_3).\\
\end{align}</math>
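
This decomposition can likewise be checked by machine. In the Python sketch below (illustrative only; the helper ''m_lambda'' is written from scratch and is not a SymPy routine), each monomial symmetric polynomial is built from the distinct rearrangements of its exponent vector, and the sum over the partitions of ''k'' is compared with ''h''<sub>''k''</sub>.
<syntaxhighlight lang="python">
# Illustrative sketch: h_k is the sum of the monomial symmetric polynomials m_lambda,
# with lambda running over the partitions of k into at most n parts.
from itertools import combinations_with_replacement, permutations
import sympy as sp
from sympy.utilities.iterables import partitions

def h(k, X):
    return sp.Add(*[sp.Mul(*[X[i] for i in idx])
                    for idx in combinations_with_replacement(range(len(X)), k)])

def m_lambda(lam, X):
    """Monomial symmetric polynomial of the partition lam (a list of parts)."""
    exps = tuple(lam) + (0,) * (len(X) - len(lam))        # pad the exponent vector to n entries
    distinct = set(permutations(exps))                    # distinct rearrangements of the exponents
    return sp.Add(*[sp.Mul(*[x ** a for x, a in zip(X, p)]) for p in distinct])

X = sp.symbols('X1:4')
k = 3
total = sp.Integer(0)
for p in partitions(k, m=len(X)):                         # partitions given as {part: multiplicity}
    lam = sorted([part for part, mult in p.items() for _ in range(mult)], reverse=True)
    total += m_lambda(lam, X)
assert sp.expand(total - h(k, X)) == 0
print(sp.expand(total))                                   # same as h_3(X1, X2, X3)
</syntaxhighlight>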
 
===Relation with symmetric tensors===
 
Consider an ''n''-dimensional vector space ''V'' and a linear operator <math>M: V \to V </math> with eigenvalues ''X''<sub>1</sub>, ''X''<sub>2</sub>, ..., ''X''<sub>''n''</sub>.
Denote by Sym<sup>''k''</sup>(''V'') its ''k''-th symmetric tensor power and by
''M''<sup>sym(''k'')</sup> the induced operator <math>\operatorname{Sym}^k (V) \to \operatorname{Sym}^k (V)</math>.
 
'''Proposition:'''
:<math> \operatorname{Tr}_{\operatorname{Sym}^k (V)} \left( M^{\mathrm{sym}(k)} \right) = h_{k}(X_1,X_2,\dots,X_n).</math>
 
The proof is easy: consider an eigenbasis ''e<sub>i</sub>'' for ''M''. A basis of
Sym<sup>''k''</sup>(''V'') can be indexed by sequences ''i''<sub>1</sub>&nbsp;&le;&nbsp;''i''<sub>2</sub>&nbsp;&le;&nbsp;...&nbsp;&le;&nbsp;''i''<sub>''k''</sub>; indeed, consider the symmetrizations
of <math>e_{i_1} \otimes e_{i_2}\otimes \cdots \otimes e_{i_k}</math>. All such vectors are eigenvectors of ''M''<sup>sym(''k'')</sup> with eigenvalues <math>X_{i_1}X_{i_2}\cdots X_{i_k}</math>, and summing these eigenvalues over all weakly increasing index sequences gives exactly ''h''<sub>''k''</sub>(''X''<sub>1</sub>, ''X''<sub>2</sub>, ..., ''X''<sub>''n''</sub>), which proves the proposition.
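
The proposition can be illustrated numerically. The sketch below (illustrative Python/SymPy code, not part of the original argument) realizes Sym<sup>''k''</sup>(''V'') as the space of degree-''k'' monomials in ''n'' variables, lets a small integer matrix act on that space, and compares the trace of the induced operator with ''h''<sub>''k''</sub> evaluated at the eigenvalues.
<syntaxhighlight lang="python">
# Illustrative sketch: the trace of the operator induced on Sym^k(V) equals h_k(eigenvalues).
from itertools import combinations_with_replacement
import sympy as sp

def h(k, xs):
    return sp.Add(*[sp.Mul(*[xs[i] for i in idx])
                    for idx in combinations_with_replacement(range(len(xs)), k)])

def sym_power_trace(M, k):
    """Trace of M acting on degree-k monomials (a model of Sym^k(V))."""
    n = M.rows
    x = sp.symbols(f'x0:{n}')
    # the basis vector e_j maps to sum_i M[i, j] e_i
    y = [sp.Add(*[M[i, j] * x[i] for i in range(n)]) for j in range(n)]
    tr = sp.Integer(0)
    for idx in combinations_with_replacement(range(n), k):
        image = sp.expand(sp.Mul(*[y[j] for j in idx]))          # image of the basis monomial
        mono = sp.Mul(*[x[j] for j in idx])
        tr += sp.Poly(image, *x).coeff_monomial(mono)            # its diagonal matrix entry
    return tr

M = sp.Matrix([[1, 2], [0, 3]])                  # upper triangular, eigenvalues 1 and 3
eigs = [lam for lam, mult in M.eigenvals().items() for _ in range(mult)]
for k in range(1, 5):
    assert sym_power_trace(M, k) == h(k, eigs)
print(sym_power_trace(M, 2))                     # 13 = h_2(1, 3) = 1 + 3 + 9
</syntaxhighlight>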
 
 
Similarly, one can express elementary symmetric polynomials via traces over antisymmetric tensor powers. Both expressions are subsumed in expressions of [[Schur polynomial]]s
as traces over [[Schur functor]]s, which can be seen as the [[Weyl character formula]] for ''GL''(''V'').
 
==See also==
*[[Symmetric polynomial]]
*[[Elementary symmetric polynomial]]
*[[Schur polynomial]]
*[[Newton's identities]]
*[[MacMahon Master theorem]]
*[[Symmetric function]]
*[[Representation theory]]
 
==References==
* Macdonald, I.G. (1979), ''Symmetric Functions and Hall Polynomials''. Oxford Mathematical Monographs. Oxford: Clarendon Press.
* Macdonald, I.G. (1995), ''Symmetric Functions and Hall Polynomials'', second ed. Oxford: Clarendon Press. ISBN 0-19-850450-0 (paperback, 1998).
* Richard P. Stanley (1999), ''Enumerative Combinatorics'', Vol. 2. Cambridge: Cambridge University Press. ISBN 0-521-56069-1
 
[[Category:Homogeneous polynomials]]
[[Category:Symmetric functions]]
