The proposition in [[probability theory]] known as the '''law of total expectation''',<ref>Neil A. Weiss, ''A Course in Probability'', Addison-Wesley, 2005, pages 380&ndash;383.</ref> the '''law of iterated expectations''', the '''tower rule''', or the '''smoothing theorem''', among other names, states that if ''X'' is an integrable [[random variable]] (i.e., a random variable satisfying E(&nbsp;|&nbsp;''X''&nbsp;|&nbsp;)&nbsp;<&nbsp;∞) and ''Y'' is any random variable, not necessarily integrable, on the same [[probability space]], then
 
:<math>\operatorname{E} (X) = \operatorname{E}_Y ( \operatorname{E}_{X \mid Y} ( X \mid Y)),</math>
 
i.e., the [[expected value]] of the conditional expected value of ''X'' given ''Y'' is the same as the expected value of ''X''.
 
The nomenclature used here parallels the phrase ''[[law of total probability]]''. See also [[law of total variance]].
 
(The [[conditional expected value]] E(&nbsp;''X''&nbsp;|&nbsp;''Y''&nbsp;) is a random variable in its own right, whose value depends on the value of ''Y''. Notice that the conditional expected value of ''X'' given the ''event'' ''Y''&nbsp;=&nbsp;''y'' is a function of ''y''; this is where adherence to the conventional, rigidly case-sensitive [[notation in probability|notation of probability theory]] becomes important. If we write E(&nbsp;''X''&nbsp;|&nbsp;''Y''&nbsp;=&nbsp;''y''&nbsp;)&nbsp;=&nbsp;''g''(''y''), then the random variable E(&nbsp;''X''&nbsp;|&nbsp;''Y''&nbsp;) is just ''g''(''Y'').)
 
One special case states that if <math>A_1, A_2, \ldots, A_n</math> is a partition of the whole outcome space, i.e., these events are mutually exclusive and exhaustive, then
 
:<math>\operatorname{E} (X) = \sum_{i=1}^{n}{\operatorname{E}(X \mid A_i) \operatorname{P}(A_i)}.</math>
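 
As a concrete check of this special case (a made-up illustration, not from the cited sources), the following sketch partitions a fair die roll into "low" and "high" faces and recovers E(''X'')&nbsp;=&nbsp;7/2 both directly and through the partition formula:

<syntaxhighlight lang="python">
# Hypothetical example: X is a fair die roll, partitioned into
# A_1 = {1, 2, 3} ("low") and A_2 = {4, 5, 6} ("high").
from fractions import Fraction

faces = range(1, 7)
p_face = Fraction(1, 6)                               # uniform die

partition = {"low": [1, 2, 3], "high": [4, 5, 6]}     # A_1, A_2

total = Fraction(0)
for outcomes in partition.values():
    p_event = p_face * len(outcomes)                    # P(A_i)
    cond_exp = Fraction(sum(outcomes), len(outcomes))   # E(X | A_i): uniform within A_i
    total += cond_exp * p_event                         # accumulate E(X | A_i) P(A_i)

direct = sum(Fraction(x) * p_face for x in faces)       # E(X) computed directly
assert total == direct == Fraction(7, 2)
</syntaxhighlight>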
 
==Proof in the discrete case==
:<math>
\begin{align}
\operatorname{E}_Y \left( \operatorname{E}_{X\mid Y} (X \mid Y) \right) &{} = \operatorname{E}_Y \Bigg[ \sum_x x \cdot \operatorname{P}(X=x \mid Y) \Bigg] \\[6pt]
&{}=\sum_y \Bigg[ \sum_x x \cdot \operatorname{P}(X=x \mid Y=y) \Bigg] \cdot \operatorname{P}(Y=y) \\[6pt]
&{}=\sum_y \sum_x x \cdot \operatorname{P}(X=x \mid Y=y) \cdot \operatorname{P}(Y=y) \\[6pt]
&{}=\sum_x x \sum_y \operatorname{P}(X=x \mid Y=y) \cdot \operatorname{P}(Y=y) \\[6pt]
&{}=\sum_x x \sum_y \operatorname{P}(X=x, Y=y) \\[6pt]
&{}=\sum_x x \cdot \operatorname{P}(X=x) \\[6pt]
&{}=\operatorname{E}(X).
\end{align}
</math>

(The interchange of the order of summation is justified by the absolute convergence of the double series, which follows from the integrability of ''X''.)
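
The chain of equalities above can also be traced numerically. The sketch below (an arbitrary made-up joint distribution, for illustration only) computes the inner sum ''g''(''y'')&nbsp;=&nbsp;E(''X''&nbsp;|&nbsp;''Y''&nbsp;=&nbsp;''y''), then the outer sum over ''y'', and compares the result with E(''X'') computed directly:

<syntaxhighlight lang="python">
# Hypothetical joint pmf P(X = x, Y = y); the probabilities sum to 1.
from fractions import Fraction as F

joint = {(0, 'a'): F(1, 8), (1, 'a'): F(3, 8),
         (0, 'b'): F(1, 4), (2, 'b'): F(1, 4)}

xs = {x for x, _ in joint}
ys = {y for _, y in joint}

# Marginal pmf of Y: P(Y = y) = sum_x P(X = x, Y = y)
p_y = {y: sum(p for (_, y2), p in joint.items() if y2 == y) for y in ys}

# Inner sum: g(y) = E(X | Y = y) = sum_x x * P(X = x | Y = y)
g = {y: sum(x * joint.get((x, y), F(0)) for x in xs) / p_y[y] for y in ys}

# Outer sum: E_Y(g(Y)) = sum_y g(y) * P(Y = y)
iterated = sum(g[y] * p_y[y] for y in ys)

direct = sum(x * p for (x, _), p in joint.items())   # E(X) directly
assert iterated == direct == F(7, 8)
</syntaxhighlight>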
 
==Notational shortcut==
When using the expectation operator <math>\operatorname{E}</math>, precision requires always specifying the variable over which the expectation is taken. In the case of discrete variables, for instance,
 
:<math>\operatorname{E}_X (X\cdot Y) = \sum_x x\cdot P(X=x) \cdot Y,</math>
 
is usually very different from
 
:<math>\operatorname{E}_Y (X\cdot Y) = \sum_y X \cdot y \cdot P(Y=y),</math>
 
so that simply writing <math> \operatorname{E}(X\cdot Y) </math> may be ambiguous. However, adding indices to the expectation operator can lead to cumbersome notation, and in practice the indices are often omitted. One must then determine from the context, or from some convention, over which variable the expectation is taken, as the sketch below illustrates.
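 
To make the distinction concrete, the following sketch (hypothetical pmfs, with ''X'' and ''Y'' taken to be independent for simplicity) evaluates both subscripted expectations; each one is a function of the variable that was ''not'' averaged out:

<syntaxhighlight lang="python">
from fractions import Fraction as F

p_x = {1: F(1, 2), 3: F(1, 2)}   # P(X = x); E(X) = 2
p_y = {0: F(1, 4), 4: F(3, 4)}   # P(Y = y); E(Y) = 3

def e_x_of_xy(y):
    """E_X(X*Y) = sum_x x * P(X = x) * Y -- a function of the value of Y."""
    return sum(x * p for x, p in p_x.items()) * y

def e_y_of_xy(x):
    """E_Y(X*Y) = sum_y X * y * P(Y = y) -- a function of the value of X."""
    return x * sum(y * p for y, p in p_y.items())

print(e_x_of_xy(4))   # 8 = E(X) * 4
print(e_y_of_xy(1))   # 3 = 1 * E(Y)
</syntaxhighlight>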
 
Omitting the indices is especially common in the case of iterated expectations, where <math>\operatorname{E} \left( \operatorname{E} (X \mid Y) \right) </math> usually stands for <math>\operatorname{E}_Y \left( \operatorname{E}_{X\mid Y} (X \mid Y) \right) </math>. By convention, in the notation without indices, the innermost expectation is the conditional expectation of <math>X</math> given <math>Y</math>, and the outermost expectation is taken with respect to the conditioning variable <math>Y</math>. This convention is used in the rest of this article.
 
==Iterated expectations with nested conditioning sets==
The following formulation of the '''law of iterated expectations''' plays an important role in many economic and finance models:
 
:<math>\operatorname{E} (X \mid I_1) = \operatorname{E} ( \operatorname{E} ( X \mid I_2) \mid I_1),</math>
 
where the value of ''I''<sub>1</sub> is determined by that of ''I''<sub>2</sub> (that is, ''I''<sub>2</sub> contains at least as much information as ''I''<sub>1</sub>). To build intuition, imagine an investor who forecasts a random stock price ''X'' on the basis of the limited information set ''I''<sub>1</sub>. The law of iterated expectations says that the investor can never gain a more precise forecast of ''X'' by conditioning on more specific information (''I''<sub>2</sub>), ''if'' the more specific forecast must itself be forecast with the original information (''I''<sub>1</sub>).
 
This formulation is often applied in a [[time series]] context, where E<sub>''t''</sub> denotes expectations conditional on only the information observed up to and including time period&nbsp;''t''. In typical models the information set at time ''t''&nbsp;+&nbsp;1 contains all information available at time ''t'', plus additional information revealed at time ''t''&nbsp;+&nbsp;1. One can then write:<ref>{{cite book |title=Recursive Macroeconomic Theory |last1=Ljungqvist |first1=Lars |last2=Sargent |first2=Thomas J. |isbn=978-0-262-12274-0 |url=http://books.google.com/books?id=Xx-j-tYaPQUC&printsec=frontcover&dq=recursive+macro+theory#v=snippet&q=recursions%20on%20equation&f=false}}</ref>
 
:<math>\operatorname{E}_t(X) = \operatorname{E}_t ( \operatorname{E}_{t+1} ( X )).</math>
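 
A minimal numerical check of this time-series form (a two-period binary tree with made-up payoffs, not taken from the cited text) takes ''t''&nbsp;=&nbsp;0 with no information yet observed, ''t''&nbsp;+&nbsp;1&nbsp;=&nbsp;1 once the first move is known, and ''X'' revealed at time&nbsp;2:

<syntaxhighlight lang="python">
from fractions import Fraction as F

p = F(1, 2)                                   # each move "up" with probability 1/2
payoff = {('u', 'u'): 4, ('u', 'd'): 2,
          ('d', 'u'): 1, ('d', 'd'): 0}       # X as a function of the two-move path

# E_{t+1}(X): average over the second move, given the observed first move u1.
e_next = {u1: sum(F(payoff[(u1, u2)]) * p for u2 in 'ud') for u1 in 'ud'}

# E_t of that forecast: average over the first move as well.
iterated = sum(e_next[u1] * p for u1 in 'ud')

# E_t(X) computed directly: average over all four equally likely paths.
direct = sum(F(x) * p * p for x in payoff.values())

assert iterated == direct == F(7, 4)
</syntaxhighlight>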
 
==See also==
* [[Law of total probability]]
 
==References==
{{Reflist}}
*{{cite book | last=Billingsley | first=Patrick | title=Probability and measure | publisher=John Wiley & Sons, Inc | location=New York, NY | year=1995 | isbn=0-471-00710-2}} (Theorem 34.4)
*http://sims.princeton.edu/yftp/Bubbles2007/ProbNotes.pdf, especially equations (16) through (18)
 
{{DEFAULTSORT:Law Of Total Expectation}}
[[Category:Algebra of random variables]]
[[Category:Theory of probability distributions]]
[[Category:Statistical laws]]
