{{distinguish|mixture model}}
{{Regression bar}}
A '''mixed model''' is a statistical model containing both [[fixed effect]]s and [[random effect]]s, that is, '''mixed effects'''. These models are useful in a wide variety of disciplines in the physical, biological and social sciences.

They are particularly useful in settings where [[repeated measures design|repeated measurements]] are made on the same [[statistical unit]]s ([[longitudinal study|longitudinal studies]]), or where measurements are made on clusters of related statistical units. Because they readily accommodate missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures ANOVA.
==History and current status==
[[Ronald Fisher]] introduced [[random effects model]]s to study the correlations of trait values between relatives.<ref>{{cite journal | last=Fisher | first=RA | title=The correlation between relatives on the supposition of Mendelian inheritance | journal=Transactions of the Royal Society of Edinburgh | year=1918 | volume=52 | pages=399–433 | doi=10.1017/S0080456800012163 | issue=2}}</ref> In the 1950s, [[Charles Roy Henderson]] provided [[Gauss–Markov theorem|best linear unbiased estimates]] (BLUE) of fixed effects and [[best linear unbiased prediction]]s (BLUP) of random effects.<ref name=GKR1991>{{cite journal | last=Robinson | first=G.K. | title=That BLUP is a Good Thing: The Estimation of Random Effects | journal=Statistical Science | volume=6 | issue=1 | year=1991 | pages=15–32 | jstor=2245695 | doi=10.1214/ss/1177011926}}</ref><ref>{{cite journal | title = The Estimation of Environmental and Genetic Trends from Records Subject to Culling | journal = Biometrics | volume = 15 | year = 1959 |pages = 192–218 | jstor=2527669 | author = C. R. Henderson, Oscar Kempthorne, S. R. Searle and C. M. von Krosigk | doi = 10.2307/2527669 | issue = 2 | publisher = International Biometric Society}}</ref><ref name=LDVV1989>{{cite web | url = http://books.nap.edu/html/biomems/chenderson.pdf | title = Charles Roy Henderson, April 1, 1911 – March 14, 1989 | author = L. Dale Van Vleck | publisher = [[United States National Academy of Sciences]]}}</ref><ref>{{cite journal | last=McLean | first=Robert A. | coauthors=Sanders, William L.; Stroup, Walter W. | title= A Unified Approach to Mixed Linear Models | journal=The American Statistician | year=1991 | volume=45 | pages=54–64 | jstor=2685241 | doi=10.2307/2685241 | issue=1 | publisher=American Statistical Association}}</ref> Subsequently, mixed modeling has become a major area of statistical research, including work on computation of maximum likelihood estimates, non-linear mixed effect models, missing data in mixed effects models, and [[Bayesian statistics|Bayesian]] estimation of mixed effects models. Mixed models are applied in many disciplines where multiple correlated measurements are made on each unit of interest. They are prominently used in research involving human and animal subjects in fields ranging from genetics to marketing, and have also been used in industrial statistics.{{Citation needed|date=November 2010}}
==Definition==
In [[Matrix_notation#Notation|matrix notation]] a mixed model can be represented as

:<math>y = X \beta + Zu + \epsilon</math>

where

*<math>y</math> is a vector of observations, with mean <math>E(y)=X\beta</math>
*<math>\beta</math> is a vector of fixed effects
*<math>u</math> is a vector of random effects with mean <math>E(u)=0</math> and variance–covariance matrix <math>\operatorname{var}(u)=G</math>
*<math>\epsilon</math> is a vector of random error terms with mean <math>E(\epsilon)=0</math> and variance–covariance matrix <math>\operatorname{var}(\epsilon)=R</math>
*<math>X</math> and <math>Z</math> are matrices of regressors relating the observations <math>y</math> to <math>\beta</math> and <math>u</math>, respectively.
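The definition above can be made concrete with a short simulation. The following sketch draws data from a mixed model with a simple grouping structure; all dimensions and variance values are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n observations, p fixed effects, q random effects
n, p, q = 12, 2, 3

X = rng.normal(size=(n, p))                    # fixed-effects regressor matrix
Z = np.kron(np.eye(q), np.ones((n // q, 1)))   # group indicators: 3 groups of 4
beta = np.array([1.0, -0.5])                   # fixed effects
G = 0.8 * np.eye(q)                            # var(u)
R = 0.3 * np.eye(n)                            # var(eps)

u = rng.multivariate_normal(np.zeros(q), G)    # random effects, mean 0
eps = rng.multivariate_normal(np.zeros(n), R)  # error terms, mean 0
y = X @ beta + Z @ u + eps                     # y = X beta + Z u + eps

# Marginal moments implied by the definition: E(y) = X beta, var(y) = Z G Z' + R
V = Z @ G @ Z.T + R
```

Note that the marginal covariance <math>V = ZGZ' + R</math> is what makes observations within the same group correlated, which is the feature that distinguishes a mixed model from ordinary regression.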
==Estimation==
Henderson's "mixed model equations" (MME) are:<ref name=GKR1991/><ref name=LDVV1989/>

:<math>\begin{pmatrix} X'R^{-1}X & X'R^{-1}Z \\ Z'R^{-1}X & Z'R^{-1}Z + G^{-1} \end{pmatrix}\begin{pmatrix} \tilde{\beta} \\ \tilde{u} \end{pmatrix}=\begin{pmatrix} X'R^{-1}y \\ Z'R^{-1}y \end{pmatrix}</math>
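Henderson's mixed model equations can be assembled and solved directly when <math>G</math> and <math>R</math> are known. The sketch below (with assumed, illustrative dimensions and variance matrices) builds the block system and cross-checks the solution against the generalized least squares estimate of <math>\beta</math> and the textbook BLUP formula for <math>u</math>, which the MME solution is known to reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small illustrative problem; dimensions and variances are assumptions
n, p, q = 12, 2, 3
X = rng.normal(size=(n, p))
Z = np.kron(np.eye(q), np.ones((n // q, 1)))   # 3 groups of 4 observations
y = rng.normal(size=n)
G = 0.8 * np.eye(q)       # var(u), taken as known
R = 0.3 * np.eye(n)       # var(eps), taken as known

Ri = np.linalg.inv(R)
Gi = np.linalg.inv(G)

# Assemble and solve the mixed model equations as one block system
A = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
sol = np.linalg.solve(A, rhs)
beta_hat, u_hat = sol[:p], sol[p:]

# Cross-check: beta_hat equals the inverse-variance weighted (GLS) estimate,
# and u_hat equals the BLUP formula G Z' V^{-1} (y - X beta_hat)
V = Z @ G @ Z.T + R
Vi = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
u_blup = G @ Z.T @ Vi @ (y - X @ beta_gls)
```

The agreement of the two routes illustrates why the MME are attractive computationally: the block system never forms the <math>n \times n</math> marginal covariance <math>V</math>, which matters when <math>n</math> is large.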
The solutions to the MME, <math>\textstyle\tilde{\beta}</math> and <math>\textstyle\tilde{u}</math>, are the best linear unbiased estimate (BLUE) of <math>\beta</math> and the best linear unbiased prediction (BLUP) of <math>u</math>, respectively. This is a consequence of the [[Gauss–Markov theorem|Gauss–Markov theorem]] when the conditional variance of the outcome is not a scalar multiple of the identity matrix. When the conditional variance is known, the inverse-variance weighted least squares estimate is BLUE. However, the conditional variance is rarely, if ever, known, so it is desirable to estimate the variance components jointly with the weighted parameter estimates when solving the MME.
One method used to fit such mixed models is that of the [[EM algorithm]],<ref>{{cite journal | title=Newton-Raphson and EM algorithms for linear mixed-effects models for repeated-measures data | last=Lindstrom | first=ML | coauthors=Bates, DM | journal=JASA | volume=83 | year=1988 | pages=1014–1021 | issue=404 }}</ref> in which the variance components are treated as unobserved [[nuisance parameter]]s in the joint likelihood. This is the method implemented in major statistical software packages such as [[R (programming language)|R]] (the lme function in the nlme package) and [[SAS (software)|SAS]] (proc mixed). The solution to the mixed model equations is a maximum likelihood estimate when the distribution of the errors is normal.<ref>{{cite journal | title=Random-Effects Models for Longitudinal Data | last=Laird | first=Nan M. |coauthors=Ware, James H. | journal=Biometrics | volume=38 | year=1982 | pages=963–974 | jstor=2529876 | doi=10.2307/2529876 | issue=4 | publisher=International Biometric Society | pmid=7168798}}</ref><ref>[[Garrett M. Fitzmaurice]], [[Nan M. Laird]], and [[James H. Ware]], 2004. ''[[Applied Longitudinal Analysis (textbook)|Applied Longitudinal Analysis]]''. John Wiley & Sons, Inc., pp. 326–328.</ref>
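The EM idea can be sketched for the simplest case, a one-way random-intercept model <math>y_{ij} = \mu + u_i + \epsilon_{ij}</math>: the random effects are treated as missing data, the E-step computes their posterior means and variances under the current parameters, and the M-step updates the mean and the two variance components. This is a minimal maximum-likelihood (not REML) sketch; all values are illustrative assumptions, not code from nlme or proc mixed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a balanced one-way random-intercept model: y_ij = mu + u_i + e_ij
q, m = 30, 8                          # groups, observations per group
mu_true, s2u_true, s2e_true = 2.0, 1.0, 0.5
u = rng.normal(0.0, np.sqrt(s2u_true), q)
y = mu_true + np.repeat(u, m) + rng.normal(0.0, np.sqrt(s2e_true), q * m)
groups = np.repeat(np.arange(q), m)

mu, s2u, s2e = 0.0, 1.0, 1.0          # starting values
for _ in range(200):
    ybar = y.reshape(q, m).mean(axis=1)
    # E-step: posterior mean and variance of each u_i given current parameters
    post_var = 1.0 / (m / s2e + 1.0 / s2u)
    post_mean = post_var * m * (ybar - mu) / s2e
    # M-step: update mu, then the variance components, using the E-step moments
    mu = np.mean(y - post_mean[groups])
    resid = y - mu - post_mean[groups]
    s2u = np.mean(post_mean**2 + post_var)   # E[u_i^2] = m_i^2 + v_i
    s2e = np.mean(resid**2 + post_var)       # E[(y - mu - u_i)^2]
```

Each iteration increases the likelihood, and the posterior-variance terms are what distinguish EM from naively plugging in point predictions of the <math>u_i</math>.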
==See also==
* [[Linear regression]]
* [[Fixed effects model]]
* [[Random effects model]]
* [[Multilevel model]]
* [[Mixed-design analysis of variance]]
* [[Repeated measures design]]
== References ==
{{Reflist}}
==Further reading==
*Milliken, G. A., & Johnson, D. E. (1992). ''Analysis of messy data: Vol. I. Designed experiments''. New York: Chapman & Hall.
*West, B. T., Welch, K. B., & Galecki, A. T. (2007). ''Linear mixed models: A practical guide to using statistical software''. New York: Chapman & Hall/CRC.
{{DEFAULTSORT:Mixed Model}}
[[Category:Statistical methods]]
[[Category:Regression analysis]]
[[Category:Analysis of variance]]