{{More footnotes|date=November 2009}}
{{Bayesian statistics}}

In [[Bayesian statistics]], the '''posterior probability''' of a [[random event]] or an uncertain proposition is the [[conditional probability]] that is assigned after the relevant [[Scientific evidence|evidence]] is taken into account. Similarly, the '''posterior probability distribution''' is the [[probability distribution]] of an unknown quantity, treated as a [[random variable]], [[conditional probability distribution|conditional on]] the evidence obtained from an experiment or survey. "Posterior", in this context, means after taking into account the relevant evidence related to the particular case being examined.
==Definition==

The posterior probability is the probability of the parameters <math>\theta</math> given the evidence <math>x</math>: <math>p(\theta|x)</math>.

It contrasts with the [[likelihood function]], which is the probability of the evidence given the parameters: <math>p(x|\theta)</math>.

The two are related as follows. Suppose there is a [[prior probability|prior]] belief that the [[probability distribution function]] is <math>p(\theta)</math>, and that observations <math>x</math> have the likelihood <math>p(x|\theta)</math>. The posterior probability is then defined as

:<math>p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)},</math><ref>{{cite book| title=Pattern Recognition and Machine Learning| author=Christopher M. Bishop| publisher=Springer| year=2006| isbn=978-0-387-31073-2| pages=21–24}}</ref>

where the normalizing constant <math>p(x) = \int p(x|\theta')\,p(\theta')\,d\theta'</math> is the [[marginal likelihood]] of the evidence, obtained by averaging over all parameter values.

The posterior probability can be written in the memorable form

:<math>\text{Posterior probability} \propto \text{Prior probability} \times \text{Likelihood}</math>.
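As a concrete illustration of this proportionality, the following minimal Python sketch evaluates a prior and a likelihood on a discrete grid of parameter values and normalizes their product. The coin-flip setting and all names in it are illustrative assumptions, not part of the definition above.

<syntaxhighlight lang="python">
import numpy as np

# Grid of candidate values for a coin's bias theta.
theta = np.linspace(0.0, 1.0, 101)

# Prior p(theta): uniform belief over the grid.
prior = np.full_like(theta, 1.0 / theta.size)

# Likelihood p(x|theta) of observing 7 heads in 10 flips, per candidate theta.
heads, flips = 7, 10
likelihood = theta**heads * (1.0 - theta)**(flips - heads)

# Posterior is proportional to prior times likelihood; dividing by the
# sum of the products plays the role of the normalizing constant p(x).
unnormalized = prior * likelihood
posterior = unnormalized / unnormalized.sum()

print(theta[np.argmax(posterior)])  # posterior mode lands near 0.7
</syntaxhighlight>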
==Example==

Suppose a mixed school's students are 60% boys and 40% girls. The girls wear trousers or skirts in equal numbers; the boys all wear trousers. An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers. What is the probability this student is a girl? The correct answer can be computed using Bayes' theorem.

The event <math>G</math> is that the student observed is a girl, and the event <math>T</math> is that the student observed is wearing trousers. To compute <math>P(G|T)</math>, we first need to know:

* <math>P(G)</math>, or the probability that the student is a girl regardless of any other information. Since the observer sees a random student, meaning that all students have the same probability of being observed, and the percentage of girls among the students is 40%, this probability equals 0.4.
* <math>P(B)</math>, or the probability that the student is not a girl (i.e. a boy) regardless of any other information (<math>B</math> is the complementary event to <math>G</math>). This is 60%, or 0.6.
* <math>P(T|G)</math>, or the probability of the student wearing trousers given that the student is a girl. As girls are as likely to wear skirts as trousers, this is 0.5.
* <math>P(T|B)</math>, or the probability of the student wearing trousers given that the student is a boy. This is given as 1.
* <math>P(T)</math>, or the probability of a (randomly selected) student wearing trousers regardless of any other information. Since <math>P(T) = P(T|G)P(G) + P(T|B)P(B)</math> (via the [[law of total probability]]), this is <math>P(T)= 0.5\times0.4 + 1\times0.6 = 0.8</math>.

Given all this information, the posterior probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values in the formula:

:<math>P(G|T) = \frac{P(T|G) P(G)}{P(T)} = \frac{0.5 \times 0.4}{0.8} = 0.25.</math>
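The arithmetic above can be checked directly; the short Python sketch below simply encodes the given quantities (the variable names are illustrative) and applies Bayes' theorem.

<syntaxhighlight lang="python">
# Quantities given in the example.
p_girl = 0.4           # P(G): the student is a girl
p_boy = 0.6            # P(B): the student is a boy
p_trousers_girl = 0.5  # P(T|G): girls wear trousers half the time
p_trousers_boy = 1.0   # P(T|B): boys always wear trousers

# P(T) via the law of total probability.
p_trousers = p_trousers_girl * p_girl + p_trousers_boy * p_boy

# Bayes' theorem: P(G|T) = P(T|G) P(G) / P(T).
p_girl_given_trousers = p_trousers_girl * p_girl / p_trousers

print(p_trousers, p_girl_given_trousers)  # 0.8 0.25
</syntaxhighlight>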
== Calculation ==

The posterior probability distribution of one [[random variable]] given the value of another can be calculated with [[Bayes' theorem]] by multiplying the [[prior probability distribution]] by the [[likelihood function]], and then dividing by the [[normalizing constant]], as follows:

:<math>f_{X\mid Y=y}(x)={f_X(x) L_{X\mid Y=y}(x) \over {\int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx}}</math>

gives the posterior [[probability density function]] for a random variable <math>X</math> given the data <math>Y=y</math>, where

* <math>f_X(x)</math> is the prior density of <math>X</math>,
* <math>L_{X\mid Y=y}(x) = f_{Y\mid X=x}(y)</math> is the likelihood function as a function of <math>x</math>,
* <math>\int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx</math> is the normalizing constant, and
* <math>f_{X\mid Y=y}(x)</math> is the posterior density of <math>X</math> given the data <math>Y=y</math>.
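As an illustration of this calculation, the Python sketch below assumes a standard normal prior for <math>X</math> and a single normally distributed observation (both choices, and the numbers used, are assumptions made only for the example); the normalizing integral is approximated by a Riemann sum on a grid.

<syntaxhighlight lang="python">
import numpy as np

# Grid over plausible values of the unknown quantity X.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

# Prior density f_X(x): standard normal.
prior = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

# Likelihood L(x) = f_{Y|X=x}(y) for one observation y with noise sd sigma.
y, sigma = 1.2, 0.5
likelihood = np.exp(-0.5 * ((y - x) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Posterior density: prior times likelihood over the normalizing integral,
# approximated here by a Riemann sum on the grid.
unnormalized = prior * likelihood
posterior = unnormalized / (unnormalized.sum() * dx)

print(x[np.argmax(posterior)])  # mode near 1.2 / (1 + 0.25) = 0.96
</syntaxhighlight>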
==Classification==

In [[Statistical classification|classification]], posterior probabilities reflect the uncertainty of assigning an observation to a particular class; see also [[class membership probabilities]].

While [[Statistical classification|statistical classification]] methods by definition generate posterior probabilities, machine learning methods often supply membership values that carry no probabilistic confidence. It is desirable to transform or rescale membership values into class membership probabilities, since these are comparable between classifiers and more easily applied in post-processing.
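One simple way to rescale raw scores, shown in the Python sketch below purely as an illustration (it is not the only approach, and it does not by itself calibrate the outputs; techniques such as [[Platt scaling]] address calibration proper), is the softmax transform, which maps real-valued membership scores to positive values summing to one.

<syntaxhighlight lang="python">
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    """Rescale raw membership scores into values that sum to 1."""
    shifted = scores - scores.max()  # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Illustrative raw scores for three classes from some classifier.
scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))  # approximately [0.66, 0.24, 0.10]
</syntaxhighlight>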
== See also ==
* [[Prediction interval]]
* [[Bernstein–von Mises theorem]]
* [[Monty Hall problem]]
* [[Three Prisoners problem]]
* [[Bertrand's box paradox]]

== References ==
{{reflist}}

* {{cite book| title=Bayesian Statistics: An Introduction| url=http://www-users.york.ac.uk/~pml1/bayes/book.htm| author=Peter M. Lee| publisher=[[John Wiley & Sons|Wiley]]| year=2004| edition=3rd| isbn=978-0-340-81405-5}}

[[Category:Bayesian statistics]]