Posterior probability

{{More footnotes|date=November 2009}}
 
{{Bayesian statistics}}
 
In [[Bayesian statistics]], the '''posterior probability''' of a [[random event]] or an uncertain proposition is the [[conditional probability]] that is assigned after the relevant [[Scientific evidence|evidence]] is taken into account. Similarly, the '''posterior probability distribution''' is the [[probability distribution]] of an unknown quantity, treated as a [[random variable]], [[conditional probability distribution|conditional on]] the evidence obtained from an experiment or survey. "Posterior", in this context, means after taking into account the relevant evidence related to the particular case being examined.
 
==Definition==
 
The posterior probability is the probability of the parameters <math>\theta</math> given the evidence <math>x</math>: <math>p(\theta|x)</math>.
 
It contrasts with the [[likelihood function]], which is the probability of the evidence given the parameters: <math>p(x|\theta)</math>.
 
The two are related as follows:
 
Given a [[prior probability|prior]] belief that the [[probability distribution function]] is <math>p(\theta)</math>, and observations <math>x</math> with likelihood <math>p(x|\theta)</math>, the posterior probability is defined as
:<math>p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)}.</math><ref>{{cite book| title=Pattern Recognition and Machine Learning| author=Christopher M. Bishop| publisher=Springer| year=2006| isbn=978-0-387-31073-2| pages=21–24}}</ref>
The posterior probability can be summarized in the memorable form
:<math>\text{Posterior probability} \propto \text{Prior probability} \times \text{Likelihood}</math>.
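
As an illustration (a minimal sketch, not drawn from the cited reference), the defining formula can be evaluated directly when the parameter takes finitely many values. The coin-bias setup below is a hypothetical example.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical setup: a coin whose bias theta is one of three candidate values.
theta = np.array([0.25, 0.50, 0.75])   # candidate parameter values
prior = np.array([1/3, 1/3, 1/3])      # p(theta): uniform prior belief

# Evidence x: 7 heads in 10 tosses. The likelihood p(x | theta) is binomial
# in theta; the binomial coefficient cancels on normalization, so it is omitted.
heads, tosses = 7, 10
likelihood = theta**heads * (1 - theta)**(tosses - heads)

# Bayes' theorem: p(theta | x) = p(x | theta) p(theta) / p(x),
# where p(x) = sum over theta of p(x | theta) p(theta).
posterior = prior * likelihood / np.sum(prior * likelihood)
print(posterior)  # proportional to prior * likelihood; sums to 1
</syntaxhighlight>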
 
==Example==
 
Suppose there is a mixed school having 60% boys and 40% girls as students. The girls wear trousers or skirts in equal numbers; the boys all wear trousers. An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers. What is the probability this student is a girl? The correct answer can be computed using Bayes' theorem.
 
The event <math>G</math> is that the student observed is a girl, and the event <math>T</math> is that the student observed is wearing trousers. To compute <math>P(G|T)</math>, we first need to know:
* <math>P(G)</math>, or the probability that the student is a girl regardless of any other information. Since the observer sees a random student, meaning that all students have the same probability of being observed, and the percentage of girls among the students is 40%, this probability equals 0.4.
* <math>P(B)</math>, or the probability that the student is not a girl (i.e. a boy) regardless of any other information (<math>B</math> is the complementary event to <math>G</math>). This is 60%, or 0.6.
* <math>P(T|G)</math>, or the probability of the student wearing trousers given that the student is a girl. As they are as likely to wear skirts as trousers, this is 0.5.
* <math>P(T|B)</math>, or the probability of the student wearing trousers given that the student is a boy. This is given as 1.
* <math>P(T)</math>, or the probability of a (randomly selected) student wearing trousers regardless of any other information. Since <math>P(T) = P(T|G)P(G) + P(T|B)P(B)</math> (via the [[law of total probability]]), this is <math>P(T)= 0.5\times0.4 + 1\times0.6 = 0.8</math>.
 
Given all this information, the probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values in the formula:
 
:<math>P(G|T) = \frac{P(T|G) P(G)}{P(T)} = \frac{0.5 \times 0.4}{0.8} = 0.25.</math>
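
The same arithmetic can be checked in a few lines of code; the sketch below simply restates the quantities defined above.

<syntaxhighlight lang="python">
# A minimal check of the trousers example, restating the values derived above.
P_G, P_B = 0.4, 0.6      # P(G) and P(B): shares of girls and boys
P_T_given_G = 0.5        # girls wear trousers or skirts in equal numbers
P_T_given_B = 1.0        # boys all wear trousers

# Law of total probability: P(T) = P(T|G) P(G) + P(T|B) P(B)
P_T = P_T_given_G * P_G + P_T_given_B * P_B   # 0.8

# Bayes' theorem: P(G|T) = P(T|G) P(G) / P(T)
P_G_given_T = P_T_given_G * P_G / P_T
print(P_T, P_G_given_T)  # 0.8 0.25
</syntaxhighlight>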
 
== Calculation ==
 
The posterior probability distribution of one [[random variable]] given the value of another can be calculated with [[Bayes' theorem]] by multiplying the [[prior probability distribution]] by the [[likelihood function]], and then dividing by the [[normalizing constant]], as follows:
 
:<math>f_{X\mid Y=y}(x)={f_X(x) L_{X\mid Y=y}(x) \over {\int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx}}</math>
 
gives the posterior [[probability density function]] for a random variable <math>X</math> given the data <math>Y=y</math>, where
 
* <math>f_X(x)</math> is the prior density of <math>X</math>,
 
* <math>L_{X\mid Y=y}(x) = f_{Y\mid X=x}(y)</math> is the likelihood function as a function of <math>x</math>,
 
* <math>\int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx</math> is the normalizing constant, and
 
* <math>f_{X\mid Y=y}(x)</math> is the posterior density of <math>X</math> given the data <math>Y=y</math>.
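
As a sketch of how this formula is applied numerically, the integral in the denominator can be approximated on a grid. The normal prior, normal likelihood, and observed value below are assumptions chosen for illustration, not part of the article.

<syntaxhighlight lang="python">
import numpy as np

# Assumed setup for illustration: prior X ~ Normal(0, 1),
# data Y | X = x ~ Normal(x, 1), and an observed value y = 1.2.
x = np.linspace(-6, 6, 2001)   # grid covering (effectively) the support of X
dx = x[1] - x[0]

def normal_pdf(z, mean, sd):
    return np.exp(-0.5 * ((z - mean) / sd)**2) / (sd * np.sqrt(2 * np.pi))

y = 1.2
prior = normal_pdf(x, 0.0, 1.0)        # f_X(x)
likelihood = normal_pdf(y, x, 1.0)     # L_{X|Y=y}(x) = f_{Y|X=x}(y)

# Normalizing constant: the integral of f_X(x) L_{X|Y=y}(x) dx,
# approximated here by a Riemann sum over the grid.
normalizer = np.sum(prior * likelihood) * dx
posterior = prior * likelihood / normalizer   # f_{X|Y=y}(x)

# In this conjugate case the exact posterior is Normal(y/2, 1/sqrt(2)),
# so the density should peak near y/2 = 0.6.
print(x[np.argmax(posterior)])
</syntaxhighlight>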
 
==Classification==
 
In [[Statistical classification|classification]], posterior probabilities reflect the uncertainty of assigning an observation to a particular class; see also [[Class membership probabilities]].
While [[statistical classification]] methods by definition generate posterior probabilities, machine learning methods often supply only membership values that carry no probabilistic interpretation. It is desirable to transform or re-scale such membership values into class membership probabilities, since these are comparable across classifiers and easier to use in post-processing.
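
One common re-scaling (offered here as an illustration; other calibration methods exist) is the softmax map, which sends raw membership scores to non-negative values that sum to 1 and can therefore be read as class membership probabilities.

<syntaxhighlight lang="python">
import numpy as np

# Re-scale raw membership scores into values that behave like
# class membership probabilities via the softmax map.
def softmax(scores):
    z = scores - np.max(scores)   # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

raw_scores = np.array([2.0, 1.0, -0.5])  # hypothetical classifier outputs
probs = softmax(raw_scores)
print(probs, probs.sum())                # non-negative, sums to 1
</syntaxhighlight>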
 
== See also ==
* [[Prediction interval]]
* [[Bernstein–von Mises theorem]]
* [[Monty Hall problem]]
* [[Three Prisoners problem]]
* [[Bertrand's box paradox]]
 
== References ==
 
{{reflist}}
* {{cite book| title=Bayesian Statistics, an introduction| url=http://www-users.york.ac.uk/~pml1/bayes/book.htm| author=Peter M. Lee| publisher=[[John Wiley & Sons|Wiley]]| year=2004| edition=3rd | isbn=978-0-340-81405-5}}
 
[[Category:Bayesian statistics]]
