In [[statistics]], '''binomial regression''' is a technique in which the [[dependent variable|response]] (often referred to as ''Y'') is the result of a series of [[Bernoulli trial]]s: a series of independent outcomes, each taking one of two disjoint values (traditionally denoted "success", or 1, and "failure", or 0).<ref name=Weisberg /> In binomial regression, the probability of a success is related to [[explanatory variable]]s: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to explanatory variables.

Binomial regression models are essentially the same as [[binary choice model]]s, one type of [[discrete choice]] model. The primary difference is in the theoretical motivation: discrete choice models are motivated using [[utility theory]] so as to handle various types of correlated and uncorrelated choices, while binomial regression models are generally described in terms of the [[generalized linear model]], an attempt to generalize various types of [[linear regression]] models.

As a result, discrete choice models are usually described primarily with a [[latent variable]] indicating the "utility" of making a choice, with randomness introduced through an [[error variable]] distributed according to a specific [[probability distribution]]. Note that the latent variable itself is not observed, only the actual choice, which is assumed to have been made if the net utility was greater than 0. Binary regression models, however, dispense with both the latent and error variables and assume that the choice itself is a [[random variable]], with a [[link function]] that transforms the expected value of the choice variable into a value that is then predicted by the linear predictor. It can be shown that the two are equivalent, at least in the case of binary choice models: the link function corresponds to the [[quantile function]] of the distribution of the error variable, and the inverse link function to its [[cumulative distribution function]] (CDF). An equivalent latent variable can be constructed by generating a uniformly distributed number between 0 and 1, subtracting from it the mean (in the form of the linear predictor transformed by the inverse link function), and inverting the sign: the resulting number is greater than 0 with exactly the probability of success in the choice variable, and can thus be thought of as a latent variable indicating whether a 0 or 1 was chosen.

==Example application==

In one published example of an application of binomial regression,<ref>Cox & Snell (1981), Example H, p91</ref> the details were as follows. The observed outcome variable was whether or not a fault occurred in an industrial process. There were two explanatory variables: the first was a simple two-case factor representing whether or not a modified version of the process was used and the second was an ordinary quantitative variable measuring the purity of the material being supplied for the process.

==Specification of model==

The results are assumed to be [[binomial distribution|binomially distributed]].<ref name=Weisberg>{{cite book|title=Applied Linear Regression|author=Sanford Weisberg|chapter=Binomial Regression|pages=253–254|publisher=Wiley-IEEE|year=2005|isbn=0-471-66379-4}}</ref> They are often fitted as a [[generalised linear model]] where the predicted values μ are the probabilities that any individual event will result in a success. The [[likelihood]] of the predictions is then given by

:<math>L(Y|\boldsymbol{\mu})=\prod_{i=1}^n \left ( 1_{y_i=1}(\mu_i) + 1_{y_i=0} (1-\mu_i) \right ), \,\!</math>

where 1<sub>A</sub> is the [[indicator function]], which takes the value one when the event ''A'' occurs and zero otherwise: in this formulation, for any given observation ''y<sub>i</sub>'', only one of the two terms inside the product contributes, according to whether ''y<sub>i</sub>'' = 0 or 1. The likelihood function is more fully specified by defining the formal parameters ''μ<sub>i</sub>'' as parameterised functions of the explanatory variables: this defines the likelihood in terms of a much reduced number of parameters. Fitting of the model is usually achieved by employing the method of [[maximum likelihood]] to determine these parameters. In practice, formulating the model as a generalised linear model allows one to exploit algorithmic ideas that apply across the whole class of more general models but not to all maximum likelihood problems.

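As a concrete illustration of how this likelihood is evaluated, here is a short Python sketch (the outcomes and fitted probabilities below are made up for illustration, and the function name is ours, not a library API):

```python
import math

def binomial_log_likelihood(y, mu):
    """Log of the likelihood above: each observation contributes
    log(mu_i) when y_i = 1 and log(1 - mu_i) when y_i = 0."""
    return sum(math.log(m if yi == 1 else 1.0 - m)
               for yi, m in zip(y, mu))

# Hypothetical outcomes and predicted success probabilities.
y  = [1, 0, 1, 1, 0]
mu = [0.8, 0.3, 0.6, 0.9, 0.2]
ll = binomial_log_likelihood(y, mu)
```

Maximum-likelihood fitting then amounts to choosing the parameters inside the ''μ<sub>i</sub>'' so as to maximise this quantity.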
Models used in binomial regression can often be extended to multinomial data.

There are many methods of generating the values of μ in systematic ways that allow for interpretation of the model; they are discussed below.

== Link functions ==

The model linking the probabilities μ to the explanatory variables must be of a form which only produces values in the range 0 to 1. Many models can be fitted into the form

:<math>\boldsymbol{\mu} = g(\boldsymbol{\eta}) \, .</math>

Here ''η'' is an intermediate variable representing a linear combination, containing the regression parameters, of the explanatory variables. The function ''g'' is the [[cumulative distribution function]] (cdf) of some [[probability distribution]]. Usually this probability distribution has support over the whole real line, so that any finite value of ''η'' is transformed by ''g'' to a value inside the range 0 to 1.

In the case of [[logistic regression]], the link function is the log of the odds ratio, or [[logit]], and its inverse ''g'' is the [[logistic function]]. In the case of [[probit model|probit]], the inverse link ''g'' is the cdf of the standard [[normal distribution]]. The [[linear probability model]] is not a proper binomial regression specification, because its predictions need not lie in the range of zero to one; it is nevertheless sometimes used for this type of data when interpretation is to take place on the probability scale, or when the analyst wishes to avoid fitting and then approximately linearising a proper probability model for interpretation.

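A minimal Python sketch of the two commonest choices of ''g'' (function names here are ours, not a library API): the logistic cdf and the standard normal cdf, each mapping any finite ''η'' into the interval (0, 1):

```python
import math

def logistic(eta):
    """Inverse link of the logit model: the logistic cdf."""
    return 1.0 / (1.0 + math.exp(-eta))

def normal_cdf(eta):
    """Inverse link of the probit model: the standard normal cdf,
    written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

# Any finite eta is squashed into the unit interval.
etas = [-5.0, -1.0, 0.0, 1.0, 5.0]
mus_logit  = [logistic(e) for e in etas]
mus_probit = [normal_cdf(e) for e in etas]
```

Both functions are strictly increasing cdfs, so predicted probabilities preserve the ordering of the linear predictor.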
== Comparison between binomial regression and binary choice models ==

A binary choice model assumes a [[latent variable]] ''U<sub>n</sub>'', the utility (or net benefit) that person ''n'' obtains from taking an action (as opposed to not taking the action). The utility the person obtains from taking the action depends on the characteristics of the person, some of which are observed by the researcher and some are not:

: <math> U_n = \boldsymbol\beta \cdot \mathbf{s_n} + \varepsilon_n </math>

where <math>\boldsymbol\beta</math> is a set of [[regression coefficient]]s and <math>\mathbf{s_n}</math> is a set of [[independent variable]]s (also known as "features") describing person ''n'', which may be either discrete "[[dummy variable]]s" or regular continuous variables. <math>\varepsilon_n</math> is a [[random variable]] specifying "noise" or "error" in the prediction, assumed to be distributed according to some distribution. Normally, if there is a mean or variance parameter in the distribution, it cannot be [[identifiability|identified]], so the parameters are set to convenient values: by convention, mean 0 and variance 1.

The person takes the action, {{nowrap|''y<sub>n</sub>'' {{=}} 1}}, if ''U<sub>n</sub>'' > 0. The unobserved term, ''ε<sub>n</sub>'', is assumed to have a [[logistic distribution]].

The specification is written succinctly as:

* {{nowrap|''U<sub>n</sub>'' {{=}} ''βs<sub>n</sub>'' + ''ε<sub>n</sub>''}}
* <math> Y_n = \begin{cases} 1, & \text{if } U_n > 0, \\ 0, & \text{if } U_n \le 0 \end{cases}</math>
* {{nowrap|''ε'' ∼ }} [[Logistic distribution|logistic]], standard [[normal distribution|normal]], etc.

Let us write it slightly differently:

* {{nowrap|''U<sub>n</sub>'' {{=}} ''βs<sub>n</sub>'' − ''e<sub>n</sub>''}}
* <math> Y_n = \begin{cases} 1, & \text{if } U_n > 0, \\ 0, & \text{if } U_n \le 0 \end{cases}</math>
* {{nowrap|''e'' ∼ }} [[Logistic distribution|logistic]], standard [[normal distribution|normal]], etc.

Here we have made the substitution ''e<sub>n</sub>'' = −''ε<sub>n</sub>''. This changes a random variable into a slightly different one, defined over a negated domain. As it happens, the error distributions usually considered (e.g. the [[logistic distribution]], the standard [[normal distribution]] and the standard [[Student's t-distribution]]) are symmetric about 0, and hence the distribution of ''e<sub>n</sub>'' is identical to the distribution of ''ε<sub>n</sub>''.

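The symmetry claim amounts to the identity F(−x) = 1 − F(x) for the cdf of the error distribution, since Pr(−ε ≤ x) = Pr(ε ≥ −x) = 1 − F(−x). A quick numerical check for the logistic and standard normal cases, as a sketch using only the standard library:

```python
import math
from statistics import NormalDist

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

std_normal = NormalDist()  # mean 0, standard deviation 1

# For a distribution symmetric about 0, F(-x) = 1 - F(x),
# so -eps has the same cdf as eps.
for x in (-2.0, -0.5, 0.0, 1.3, 4.0):
    assert abs(logistic_cdf(-x) - (1.0 - logistic_cdf(x))) < 1e-12
    assert abs(std_normal.cdf(-x) - (1.0 - std_normal.cdf(x))) < 1e-12
```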
Denote the [[cumulative distribution function]] (CDF) of <math>e</math> as <math>F_e,</math> and the [[quantile function]] (inverse CDF) of <math>e</math> as <math>F^{-1}_e .</math>

Note that

::<math>
\begin{align}
\Pr(Y_n=1) &= \Pr(U_n > 0) \\
&= \Pr(\boldsymbol\beta \cdot \mathbf{s_n} - e_n > 0) \\
&= \Pr(-e_n > -\boldsymbol\beta \cdot \mathbf{s_n}) \\
&= \Pr(e_n \le \boldsymbol\beta \cdot \mathbf{s_n}) \\
&= F_e(\boldsymbol\beta \cdot \mathbf{s_n})
\end{align}
</math>

Since ''Y<sub>n</sub>'' is a [[Bernoulli trial]], where <math>\mathbb{E}[Y_n] = \Pr(Y_n = 1),</math> we have

:<math>\mathbb{E}[Y_n] = F_e(\boldsymbol\beta \cdot \mathbf{s_n})</math>

or equivalently

:<math>F^{-1}_e(\mathbb{E}[Y_n]) = \boldsymbol\beta \cdot \mathbf{s_n} .</math>

Note that this is exactly equivalent to the binomial regression model expressed in the formalism of the [[generalized linear model]].

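The identity Pr(''Y<sub>n</sub>'' = 1) = ''F<sub>e</sub>''('''β''' · '''s'''<sub>''n''</sub>) can be checked by simulation. A sketch assuming a standard logistic error (so that ''F<sub>e</sub>'' is the logistic cdf) and a made-up value of the linear predictor:

```python
import math
import random

random.seed(0)

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_logistic():
    # Inverse-cdf sampling: the logit of a uniform draw
    # follows the standard logistic distribution.
    u = random.random()
    return math.log(u / (1.0 - u))

bs = 0.7          # hypothetical value of beta . s_n
n = 200_000
successes = sum(1 for _ in range(n) if bs - sample_logistic() > 0)

empirical = successes / n      # Monte Carlo estimate of Pr(Y_n = 1)
predicted = logistic_cdf(bs)   # F_e(beta . s_n)
```

With enough draws the two quantities agree to within sampling error.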
If <math>e_n \sim \mathcal{N}(0,1),</math> i.e. distributed as a [[standard normal distribution]], then

:<math>\Phi^{-1}(\mathbb{E}[Y_n]) = \boldsymbol\beta \cdot \mathbf{s_n}</math>

which is exactly a [[probit model]].

If <math>e_n \sim \operatorname{Logistic}(0,1),</math> i.e. distributed as a standard [[logistic distribution]] with mean 0 and [[scale parameter]] 1, then the corresponding [[quantile function]] is the [[logit function]], and

:<math>\operatorname{logit}(\mathbb{E}[Y_n]) = \boldsymbol\beta \cdot \mathbf{s_n}</math>

which is exactly a [[logit model]].

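The correspondence between links and quantile functions can be checked numerically: the logit is the inverse of the logistic cdf, and the probit link Φ<sup>−1</sup> is the inverse of the standard normal cdf. A sketch using only the standard library (function names are ours):

```python
import math
from statistics import NormalDist

std_normal = NormalDist()

def logit(p):
    """Quantile function of the standard logistic distribution."""
    return math.log(p / (1.0 - p))

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

# Round trips: each link is the quantile function of its error
# distribution, so applying the cdf to link(p) recovers p.
probs = [0.1, 0.25, 0.5, 0.75, 0.9]
logit_roundtrip  = [logistic_cdf(logit(p)) for p in probs]
probit_roundtrip = [std_normal.cdf(std_normal.inv_cdf(p)) for p in probs]
```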
Note that the two different formalisms, [[generalized linear model]]s (GLMs) and [[discrete choice]] models, are equivalent in the case of simple binary choice models, but can be extended in differing ways:
*GLMs can easily handle arbitrarily distributed [[response variable]]s ([[dependent variable]]s), not just [[categorical variable]]s or [[ordinal variable]]s, to which discrete choice models are limited by their nature. GLMs are also not limited to link functions that are [[quantile function]]s of some distribution, unlike the use of an [[error variable]], which must by assumption have a [[probability distribution]].
*On the other hand, because discrete choice models are described as types of [[generative model]]s, it is conceptually easier to extend them to complicated situations with multiple, possibly correlated, choices for each person, or other variations.

== Latent variable interpretation / derivation ==

A [[latent variable model]] involving a binomial observed variable ''Y'' can be constructed such that ''Y'' is related to the latent variable ''Y*'' via

:<math>Y = \begin{cases}
1, & \mbox{if }Y^*>0 \\
0, & \mbox{otherwise.}
\end{cases}
</math>

The latent variable ''Y*'' is then related to a set of regression variables ''X'' by the model

:<math>Y^* = X\beta + \epsilon \ .</math>

This results in a binomial regression model.

The variance of ''ϵ'' cannot be identified and, when it is not of interest, is often assumed to equal one. If ''ϵ'' is normally distributed, then a probit model is appropriate; if ''ϵ'' is [[Generalized extreme value distribution|log-Weibull]] distributed, then a logit model is appropriate; and if ''ϵ'' is uniformly distributed, then a linear probability model is appropriate.

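A simulation sketch of this latent-variable construction, assuming a standard normal ''ϵ'' (the probit case), taking ''Y'' {{=}} 1 when the latent variable is positive as in the binary choice section above, and using a made-up value of the linear predictor:

```python
import random
from statistics import NormalDist

random.seed(1)
std_normal = NormalDist()

x_beta = 1.2      # hypothetical value of X*beta for one observation
n = 100_000

# Draw the latent Y* = X*beta + eps and threshold at zero.
ys = [1 if x_beta + random.gauss(0.0, 1.0) > 0 else 0
      for _ in range(n)]

empirical = sum(ys) / n             # observed success rate
predicted = std_normal.cdf(x_beta)  # probit probability Phi(X*beta)
```

The empirical frequency of ''Y'' {{=}} 1 matches the probit probability up to Monte Carlo error.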
==See also==
*[[Linear probability model]]
*[[Poisson regression]]
*[[Predictive modelling]]

==Notes==
| <references />
== References ==
Cox, D.R., Snell, E.J. (1981) ''Applied Statistics: Principles and Examples'', Chapman and Hall. ISBN 0-412-16570-8
{{statistics}}

[[Category:Generalized linear models]]