{{Bayesian statistics}}
{{Regression bar}}
In [[statistics]], '''Bayesian linear regression''' is an approach to [[linear regression]] in which the statistical analysis is undertaken within the context of [[Bayesian inference]]. When the regression model has [[errors and residuals in statistics|errors]] that have a [[normal distribution]], and if a particular form of [[prior distribution]] is assumed, explicit results are available for the [[posterior probability distribution]]s of the model's parameters.
 
==Model setup==
 
Consider a standard [[linear regression]] problem, in which for <math>i=1,...,n</math> we specify the [[conditional probability|conditional distribution]] of ''<math>y_i</math>'' given a ''<math>k \times 1</math>'' predictor vector ''<math>\mathbf{x}_i</math>'':
 
:<math>y_{i} = \mathbf{x}_{i}^{\rm T} \boldsymbol\beta  + \epsilon_{i},</math>
 
where ''<math>\boldsymbol\beta</math>'' is a ''<math>k \times 1</math>'' vector, and the <math>\epsilon_i</math> are [[i.i.d.|independent and identically distributed]] [[normally distributed]] random variables:
 
:<math>\epsilon_{i} \sim N(0, \sigma^2).</math>
 
This corresponds to the following [[likelihood function]]:
 
:<math>\rho(\mathbf{y}|\mathbf{X},\boldsymbol\beta,\sigma^{2}) \propto (\sigma^{2})^{-n/2} \exp\left(-\frac{1}{2{\sigma}^{2}}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)\right).</math>
 
The [[ordinary least squares]] solution is to estimate the coefficient vector using the [[Moore-Penrose pseudoinverse]]:
 
:<math> \hat{\boldsymbol\beta} = (\mathbf{X}^{\rm T}\mathbf{X})^{-1}\mathbf{X}^{\rm T}\mathbf{y}</math>
 
where <math>\mathbf{X}</math> is the ''<math>n \times k</math>'' [[design matrix]], each row of which is a predictor vector <math>\mathbf{x}_{i}^{\rm T}</math>; and <math>\mathbf{y}</math> is the column <math>n</math>-vector <math>[y_1 \; \cdots \; y_n]^{\rm T}</math>.
 
This is a [[frequentist]] approach, and it assumes that there are enough measurements to say something meaningful about <math>\boldsymbol\beta</math>.  In the [[Bayesian inference|Bayesian]] approach, the data are supplemented with additional information in the form of a [[prior probability distribution]]. The prior belief about the parameters is combined with the data's likelihood function according to [[Bayes theorem]] to yield the [[posterior probability|posterior belief]] about the parameters <math>\boldsymbol\beta</math> and <math>\sigma</math>. The prior can take different functional forms depending on the domain and the information that is available a priori.
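
As a minimal numerical sketch (using NumPy; the synthetic data and coefficient values below are arbitrary illustrations, not part of the model), the ordinary least squares estimate <math>\hat{\boldsymbol\beta}</math> can be computed directly from the normal equations:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
X = rng.normal(size=(n, k))                  # n x k design matrix
beta_true = np.array([1.5, -2.0, 0.5])       # illustrative "true" coefficients
sigma = 0.3
y = X @ beta_true + rng.normal(scale=sigma, size=n)   # y_i = x_i^T beta + eps_i

# OLS estimate: beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)
</syntaxhighlight>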
 
==With conjugate priors==
 
===Conjugate prior distribution===
For an arbitrary prior distribution, there may be no analytical solution for the [[posterior distribution]]. In this section, we will consider a so-called [[conjugate prior]] for which the posterior distribution can be derived analytically.
 
A prior <math>\rho(\boldsymbol\beta,\sigma^{2})</math> is [[conjugate prior|conjugate]] to this likelihood function if it has the same functional form with respect to <math>\boldsymbol\beta</math> and <math>\sigma</math>. Since the log-likelihood is quadratic in <math>\boldsymbol\beta</math>, it is re-written so that the likelihood becomes normal in <math>(\boldsymbol\beta-\hat{\boldsymbol\beta})</math>.  Write
 
:<math>
\begin{align}
(\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)
&= (\mathbf{y}- \mathbf{X} \hat{\boldsymbol\beta})^{\rm T}(\mathbf{y}- \mathbf{X} \hat{\boldsymbol\beta}) \\
&+ (\boldsymbol\beta - \hat{\boldsymbol\beta})^{\rm T}(\mathbf{X}^{\rm T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta}).
\end{align}
</math>
 
The likelihood is now re-written as
 
:<math>
\begin{align}
\rho(\mathbf{y}|\mathbf{X},\boldsymbol\beta,\sigma^{2}) &\propto (\sigma^2)^{-v/2} \exp\left(-\frac{vs^{2}}{2{\sigma}^{2}}\right)(\sigma^2)^{-(n-v)/2} \\
&\times \exp\left(-\frac{1}{2{\sigma}^{2}}(\boldsymbol\beta - \hat{\boldsymbol\beta})^{\rm T}(\mathbf{X}^{\rm T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta})\right),
\end{align}
</math>
 
where
 
:<math>vs^{2} =(\mathbf{y}- \mathbf{X} \hat{\boldsymbol\beta})^{\rm T}(\mathbf{y}- \mathbf{X} \hat{\boldsymbol\beta}), \text{and}\; v = n-k,</math>
 
where <math>k</math> is the number of regression coefficients.
 
This suggests a form for the prior:
 
:<math>\rho(\boldsymbol\beta,\sigma^{2}) = \rho(\sigma^{2})\rho(\boldsymbol\beta|\sigma^{2}),</math>
 
where <math>\rho(\sigma^{2})</math> is an [[inverse-gamma distribution]]
 
:<math> \rho(\sigma^{2}) \propto (\sigma^2)^{-(v_{0}/2+1)} \exp\left(-\frac{v_{0}s_{0}^{2}}{2{\sigma}^{2}}\right).</math>
 
In the notation introduced in the [[inverse-gamma distribution]] article, this is the density of an <math> \text{Inv-Gamma}( a_0,b_0)</math> distribution with <math>a_0=v_0/2</math> and <math>b_0=\frac{1}{2}v_0s_0^2 </math>, where <math>v_{0}</math> and <math>s_{0}^{2}</math> are the prior values of <math>v</math> and <math>s^{2}</math>, respectively.  Equivalently, it can be described as a [[scaled inverse chi-squared distribution]], <math>\mbox{Scale-inv-}\chi^2(v_0, s_0^2).</math>
 
Further the conditional prior density <math>\rho(\boldsymbol\beta|\sigma^{2})</math> is a [[normal distribution]],
 
:<math> \rho(\boldsymbol\beta|\sigma^{2}) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2{\sigma}^{2}}(\boldsymbol\beta - \boldsymbol\mu_0)^{\rm T} \mathbf{\Lambda}_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right).</math>
 
In the notation of the [[normal distribution]], the conditional prior distribution is <math> \mathcal{N}\left(\boldsymbol\mu_0, \sigma^2\mathbf{\Lambda}_0^{-1}\right).</math>
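
The following is a hedged sketch of how this conjugate prior might be specified and sampled in NumPy; the hyperparameter values <code>mu0</code>, <code>Lambda0</code>, <code>a0</code>, <code>b0</code> are arbitrary illustrations, not prescribed by the model:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
k = 3
mu0 = np.zeros(k)              # prior mean of beta
Lambda0 = 2.0 * np.eye(k)      # prior precision matrix (covariance is sigma^2 * Lambda0^{-1})
a0, b0 = 3.0, 1.0              # inverse-gamma shape and scale

# sigma^2 ~ Inv-Gamma(a0, b0): draw as the reciprocal of a Gamma(a0, rate=b0) draw
sigma2 = 1.0 / rng.gamma(shape=a0, scale=1.0 / b0)

# beta | sigma^2 ~ N(mu0, sigma^2 * Lambda0^{-1})
beta = rng.multivariate_normal(mu0, sigma2 * np.linalg.inv(Lambda0))
</syntaxhighlight>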
 
===Posterior distribution===
 
With the prior now specified, the posterior distribution can be expressed as
 
:<math> \rho(\boldsymbol\beta,\sigma^{2}|\mathbf{y},\mathbf{X}) \propto \rho(\mathbf{y}|\mathbf{X},\boldsymbol\beta,\sigma^{2})\rho(\boldsymbol\beta|\sigma^{2})\rho(\sigma^{2})  </math>
 
::<math>  \propto  (\sigma^{2})^{-n/2} \exp\left(-\frac{1}{2{\sigma}^{2}}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta)\right)</math>
 
:::<math>  \times  (\sigma^{2})^{-k/2} \exp\left(-\frac{1}{2{\sigma}^{2}}(\boldsymbol\beta -\boldsymbol\mu_0)^{\rm T} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right)</math>
 
:::<math>  \times  (\sigma^2)^{-(a_0+1)} \exp\left(-\frac{b_0}{{\sigma}^{2}}\right).</math>
 
With some re-arrangement, the posterior can be re-written so that the posterior mean <math>\boldsymbol\mu_n</math> of the parameter vector <math>\boldsymbol\beta</math> can be expressed in terms of the least squares estimator <math>\hat{\boldsymbol\beta}</math> and the prior mean <math>\boldsymbol\mu_0</math>, with the strength of the prior indicated by the prior precision matrix <math>\boldsymbol\Lambda_0</math>
 
:<math>\boldsymbol\mu_n = (\mathbf{X}^{\rm T}\mathbf{X}+\boldsymbol\Lambda_0)^{-1}(\mathbf{X}^{\rm T}\mathbf{X}\hat{\boldsymbol\beta}+\boldsymbol\Lambda_0\boldsymbol\mu_0) .</math>
 
To justify that <math>\boldsymbol\mu_n</math> is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a [[quadratic form (statistics)|quadratic form]] in <math>\boldsymbol\beta-\boldsymbol\mu_n</math>.<ref>The intermediate steps are in Fahrmeir et al. (2009) on page 188.</ref>
 
:<math> (\mathbf{y}- \mathbf{X} \boldsymbol\beta)^{\rm T}(\mathbf{y}- \mathbf{X} \boldsymbol\beta) + (\boldsymbol\beta - \boldsymbol\mu_0)^{\rm T}\boldsymbol\Lambda_0(\boldsymbol\beta - \boldsymbol\mu_0) = (\boldsymbol\beta-\boldsymbol\mu_n)^{\rm T}(\mathbf{X}^{\rm T}\mathbf{X}+\boldsymbol\Lambda_0)(\boldsymbol\beta-\boldsymbol\mu_n)+\mathbf{y}^{\rm T}\mathbf{y}-\boldsymbol\mu_n^{\rm T}(\mathbf{X}^{\rm T}\mathbf{X}+\boldsymbol\Lambda_0)\boldsymbol\mu_n+\boldsymbol\mu_0^{\rm T}\boldsymbol\Lambda_0\boldsymbol\mu_0 .</math>
 
Now the posterior can be expressed as a [[normal distribution]] times an [[inverse-gamma distribution]]:
 
:<math>\rho(\boldsymbol\beta,\sigma^{2}|\mathbf{y},\mathbf{X}) \propto  (\sigma^{2})^{-k/2} \exp\left(-\frac{1}{2{\sigma}^{2}}(\boldsymbol\beta - \boldsymbol\mu_n)^{\rm T}(\mathbf{X}^{\rm T}\mathbf{X}+\mathbf{\Lambda}_0)(\boldsymbol\beta - \boldsymbol\mu_n)\right) </math>
 
:::::::<math>  \times  (\sigma^2)^{-(n+v_{0})/2-1} \exp\left(-\frac{2b_0+\mathbf{y}^{\rm T}\mathbf{y}-\boldsymbol\mu_n^{\rm T}(\mathbf{X}^{\rm T}\mathbf{X}+\boldsymbol\Lambda_0)\boldsymbol\mu_n+\boldsymbol\mu_0^{\rm T}\boldsymbol\Lambda_0\boldsymbol\mu_0}{2{\sigma}^{2}}\right) .</math>
 
Therefore the posterior distribution can be parametrized as follows.
 
:<math>\rho(\boldsymbol\beta,\sigma^{2}|\mathbf{y},\mathbf{X}) \propto  \rho(\boldsymbol\beta |\sigma^{2},\mathbf{y},\mathbf{X})  \rho(\sigma^{2}|\mathbf{y},\mathbf{X}), </math>
 
where the two factors correspond to the densities of <math> \mathcal{N}\left( \boldsymbol\mu_n, \sigma^2\boldsymbol\Lambda_n^{-1} \right)\,</math> and <math> \text{Inv-Gamma}\left(a_n,b_n \right) </math> distributions, with the parameters of these given by
:<math>\boldsymbol\Lambda_n=(\mathbf{X}^{\rm T}\mathbf{X}+\mathbf{\Lambda}_0), \quad \boldsymbol\mu_n = (\boldsymbol\Lambda_n)^{-1}(\mathbf{X}^{\rm T}\mathbf{X}\hat{\boldsymbol\beta}+\boldsymbol\Lambda_0\boldsymbol\mu_0) ,</math>
:<math>a_n=\frac{1}{2}(n+v_0), \qquad b_n=b_0+\frac{1}{2}(\mathbf{y}^{\rm T}\mathbf{y}+\boldsymbol\mu_0^{\rm T}\boldsymbol\Lambda_0\boldsymbol\mu_0-\boldsymbol\mu_n^{\rm T}\boldsymbol\Lambda_n\boldsymbol\mu_n) .</math>
 
This can be interpreted as Bayesian learning where the parameters are updated according to the following equations.
:<math> \boldsymbol\mu_n=(\mathbf{X}^{\rm T}\mathbf{X}+\boldsymbol\Lambda_0)^{-1} (\boldsymbol\Lambda_0\boldsymbol\mu_0+\mathbf{X}^{\rm T}\mathbf{X}\hat{\boldsymbol\beta})=(\mathbf{X}^{\rm T}\mathbf{X}+\boldsymbol\Lambda_0)^{-1} (\boldsymbol\Lambda_0\boldsymbol\mu_0+\mathbf{X}^{\rm T}\mathbf{y}) ,</math>
:<math>\boldsymbol\Lambda_n=(\mathbf{X}^{\rm T}\mathbf{X}+\boldsymbol\Lambda_0) ,</math>
:<math>a_n=a_0+\frac{n}{2} ,</math>
:<math>b_n=b_0+\frac{1}{2}(\mathbf{y}^{\rm T}\mathbf{y}+\boldsymbol\mu_0^{\rm T}\boldsymbol\Lambda_0\boldsymbol\mu_0-\boldsymbol\mu_n^{\rm T}\boldsymbol\Lambda_n\boldsymbol\mu_n) .</math>
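
These update equations translate directly into code. The following sketch (continuing the NumPy examples above; <code>X</code>, <code>y</code> and the prior hyperparameters are assumed given) computes the posterior parameters:

<syntaxhighlight lang="python">
import numpy as np

def posterior_update(X, y, mu0, Lambda0, a0, b0):
    """Conjugate update for Bayesian linear regression with a
    normal-inverse-gamma prior, following the equations above."""
    Lambda_n = X.T @ X + Lambda0
    mu_n = np.linalg.solve(Lambda_n, Lambda0 @ mu0 + X.T @ y)
    a_n = a0 + X.shape[0] / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    return mu_n, Lambda_n, a_n, b_n

mu_n, Lambda_n, a_n, b_n = posterior_update(X, y, mu0, Lambda0, a0, b0)
</syntaxhighlight>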
 
===Model evidence===
The [[model evidence]] <math>p(\mathbf{y}|m)</math> is the probability of the data given the model <math>m</math>. It is also known as the [[marginal likelihood]], and as the prior predictive density. Here, the model is defined by the likelihood function <math>p(\mathbf{y}|\mathbf{X},\boldsymbol\beta,\sigma)</math> and the prior distribution on the parameters, i.e. <math>p(\boldsymbol\beta,\sigma)</math>. The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by [[Bayesian model comparison]]. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating <math>p(\mathbf{y},\boldsymbol\beta,\sigma|\mathbf{X})</math> over all possible values of <math>\boldsymbol\beta</math> and <math>\sigma</math>.
:<math>p(\mathbf{y}|m)=\int p(\mathbf{y}|\mathbf{X},\boldsymbol\beta,\sigma)\, p(\boldsymbol\beta,\sigma)\, d\boldsymbol\beta\, d\sigma</math>
This integral can be computed analytically and the solution is given in the following equation.<ref>The intermediate steps of this computation can be found in O'Hagan (1994) on page 257.</ref>
:<math>p(\mathbf{y}|m)=\frac{1}{(2\pi)^{n/2}}\sqrt{\frac{\det(\boldsymbol\Lambda_0)}{\det(\boldsymbol\Lambda_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}</math>
 
Here <math>\Gamma</math> denotes the [[gamma function]]. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of <math>\boldsymbol\beta</math> and <math>\sigma</math>.
:<math>p(\mathbf{y}|m)=\frac{p(\boldsymbol\beta,\sigma|m)\, p(\mathbf{y}|\mathbf{X},\boldsymbol\beta,\sigma,m)}{p(\boldsymbol\beta,\sigma|\mathbf{y},\mathbf{X},m)}</math>
Note that this equation is nothing but a re-arrangement of [[Bayes theorem]]. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
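
The closed-form evidence above can likewise be evaluated numerically, preferably on the log scale to avoid overflow. A sketch (assuming the posterior parameters returned by the update step above) is:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gammaln

def log_evidence(n, Lambda0, Lambda_n, a0, b0, a_n, b_n):
    """Log of p(y|m) for the conjugate Bayesian linear regression model."""
    _, logdet0 = np.linalg.slogdet(Lambda0)
    _, logdet_n = np.linalg.slogdet(Lambda_n)
    return (-0.5 * n * np.log(2.0 * np.pi)
            + 0.5 * (logdet0 - logdet_n)
            + a0 * np.log(b0) - a_n * np.log(b_n)
            + gammaln(a_n) - gammaln(a0))

log_p_y = log_evidence(len(y), Lambda0, Lambda_n, a0, b0, a_n, b_n)
</syntaxhighlight>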
 
==Other cases==
In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an [[approximate Bayesian computation|approximate Bayesian inference]] method such as [[Monte Carlo sampling]]<ref>Carlin and Louis (2008) and Gelman et al. (2003) explain how to use sampling methods for Bayesian linear regression.</ref> or [[variational Bayes]].
 
The special case <math>\boldsymbol\mu_0=0, \mathbf{\Lambda}_0 = c\mathbf{I}</math> is called [[ridge regression]].
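
For example, substituting <math>\boldsymbol\mu_0=0</math> and <math>\mathbf{\Lambda}_0 = c\mathbf{I}</math> into the expression for the posterior mean above gives the familiar ridge estimator,

:<math>\boldsymbol\mu_n = (\mathbf{X}^{\rm T}\mathbf{X}+c\mathbf{I})^{-1}\mathbf{X}^{\rm T}\mathbf{y},</math>

so the ridge penalty <math>c</math> plays the role of the prior precision.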
 
A similar analysis can be performed for the general case of multivariate regression; part of this provides for Bayesian [[estimation of covariance matrices]]: see [[Bayesian multivariate linear regression]].
 
==See also==
 
* [[Bayes linear statistics]]
* [[Tikhonov regularization]]
 
 
==Notes==
<references/>
 
==References==
* {{cite book |authorlink=George E. P. Box |last=Box |first=G. E. P. |last2=Tiao |first2=G. C. |year=1973 |title=Bayesian Inference in Statistical Analysis |publisher=Wiley |isbn=0-471-57428-7 }}
* {{cite book |last=Carlin |first=Bradley P. |last2=Louis |first2=Thomas A. |year=2008 |title=Bayesian Methods for Data Analysis |edition=Third |location=Boca Raton, FL |publisher=Chapman and Hall/CRC |isbn=1-58488-697-8 }}
* {{cite book |last=O'Hagan |first=Anthony |year=1994 |title=Bayesian Inference |series=Kendall's Advanced Theory of Statistics |volume=2B |edition=First |publisher=Halsted |isbn=0-340-52922-9 }}
* {{cite book |last=Gelman |first=Andrew |authorlink=Andrew Gelman |last2=Carlin |first2=John B. |last3=Stern |first3=Hal S. |last4=Rubin |first4=Donald B. |year=2003 |title=Bayesian Data Analysis |edition=Second |location=Boca Raton, FL |publisher=Chapman and Hall/CRC |isbn=1-58488-388-X }}
* {{cite paper |first=Gero |last=Walter |first2=Thomas |last2=Augustin |year=2009 |url=http://epub.ub.uni-muenchen.de/11050/1/tr069.pdf |title=Bayesian Linear Regression—Different Conjugate Models and Their (In)Sensitivity to Prior-Data Conflict |work=Technical Report Number 069, Department of Statistics, University of Munich }}
* {{cite book |first=Michael |last=Goldstein |first2=David |last2=Wooff |year=2007 |title=Bayes Linear Statistics, Theory & Methods |publisher=Wiley |isbn=978-0-470-01562-9 }}
* {{cite book |last=Fahrmeir |first=L. |last2=Kneib |first2=T. |last3=Lang |first3=S. |year=2009 |title=Regression. Modelle, Methoden und Anwendungen |edition=Second |location=Heidelberg |publisher=Springer |isbn=978-3-642-01836-7 |doi=10.1007/978-3-642-01837-4 }}
* {{cite book |first=Peter E. |last=Rossi |first2=Greg M. |last2=Allenby |first3=Robert |last3=McCulloch |title=Bayesian Statistics and Marketing |publisher=John Wiley & Sons |year=2006 |isbn=0470863676 }}
* Thomas P. Minka (2001) [http://research.microsoft.com/~minka/papers/linear.html ''Bayesian Linear Regression''], Microsoft research web page
 
==External links==
* [[b:en:R_Programming/Linear_Models#Bayesian_estimation|Bayesian estimation of linear models (R programming wikibook)]]. Bayesian linear regression as implemented in [[R (programming language)|R]].
 
{{Least Squares and Regression Analysis}}
{{Statistics|correlation}}
 
[[Category:Bayesian inference|Linear regression]]
[[Category:Regression analysis]]
