Moving-average model
In [[time series analysis]], the '''moving-average''' ('''MA''') model is a common approach for modeling [[univariate]] time series. The notation MA(''q'') refers to the moving average model of order ''q'':
:<math> X_t = \mu + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q} \,</math>
where μ is the mean of the series, the ''θ''<sub>1</sub>, ..., ''θ''<sub>''q''</sub> are the parameters of the model and the ''ε''<sub>''t''</sub>, ''ε''<sub>''t''−1</sub>,... are [[white noise]] error terms. The value of ''q'' is called the order of the MA model. This can be equivalently written in terms of the [[backshift operator]] ''B'' as
:<math>X_t = \mu + (1 + \theta_1 B + \cdots + \theta_q B^q)\varepsilon_t.</math>
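The defining equation can be illustrated with a short simulation. This is a minimal sketch, not part of the original article; the function name `simulate_ma` and the choice of standard-normal shocks are illustrative assumptions.

```python
import random

def simulate_ma(theta, mu=0.0, n=1000, seed=0):
    """Simulate X_t = mu + e_t + theta_1*e_{t-1} + ... + theta_q*e_{t-q}
    with independent standard-normal white-noise shocks e_t."""
    rng = random.Random(seed)
    q = len(theta)
    # draw q extra shocks so the first observation already has a full history
    eps = [rng.gauss(0.0, 1.0) for _ in range(n + q)]
    x = []
    for t in range(q, n + q):
        x_t = mu + eps[t] + sum(theta[j] * eps[t - 1 - j] for j in range(q))
        x.append(x_t)
    return x

# MA(2) with theta = (0.6, 0.3) and mean 5; the sample mean hovers near mu
series = simulate_ma([0.6, 0.3], mu=5.0, n=20000)
sample_mean = sum(series) / len(series)
```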
Thus, a moving-average model is conceptually a [[linear regression]] of the current value of the series against current and previous (unobserved) white noise error terms or random shocks. The random shocks at each point are assumed to be mutually independent and to come from the same distribution, typically a [[normal distribution]], with location at zero and constant scale. Fitting the MA estimates is more complicated than with [[autoregressive model]]s (AR models) because the lagged error terms are not observable. This means that iterative [[Curve fitting|non-linear fitting]] procedures need to be used in place of linear least squares.
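The non-linearity shows up even in the simplest case. For an MA(1) process, the lag-1 autocorrelation satisfies ρ<sub>1</sub> = θ/(1 + θ²), so a method-of-moments estimate of θ requires solving a quadratic rather than a linear equation. The sketch below (function name illustrative) inverts that relation for the invertible root |θ| < 1:

```python
import math

def ma1_theta_from_rho(rho1):
    """Invert rho1 = theta / (1 + theta**2) for the invertible root |theta| < 1.

    Rearranging gives rho1*theta**2 - theta + rho1 = 0; of the two roots,
    (1 - sqrt(1 - 4*rho1**2)) / (2*rho1) lies inside the unit interval.
    An MA(1) process can only produce |rho1| <= 0.5.
    """
    if rho1 == 0.0:
        return 0.0
    disc = 1.0 - 4.0 * rho1 * rho1
    if disc < 0.0:
        raise ValueError("|rho1| > 0.5 cannot arise from an MA(1) process")
    return (1.0 - math.sqrt(disc)) / (2.0 * rho1)

# round-trip check: theta = 0.5 gives rho1 = 0.4, which maps back to 0.5
theta = 0.5
rho1 = theta / (1.0 + theta ** 2)
recovered = ma1_theta_from_rho(rho1)
```

In practice, maximum-likelihood fitting of higher-order MA models iterates over the unobserved shocks numerically, which is why general-purpose routines are non-linear and iterative.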
The moving-average model is essentially a [[finite impulse response]] filter applied to white noise, with some additional interpretation placed on it. The role of the random shocks in the MA model differs from their role in the AR model in two ways. First, they are propagated to future values of the time series directly: for example, <math>\varepsilon _{t-1}</math> appears directly on the right side of the equation for <math> X_t</math>. In contrast, in an AR model <math>\varepsilon _{t-1}</math> does not appear on the right side of the <math> X_t</math> equation, but it does appear on the right side of the <math> X_{t-1}</math> equation, and <math> X_{t-1}</math> appears on the right side of the <math> X_t</math> equation, giving only an indirect effect of <math>\varepsilon_{t-1}</math> on <math> X_t</math>. Second, in the MA model a shock affects <math> X</math> values only for the current period and ''q'' periods into the future; in contrast, in the AR model a shock affects <math> X</math> values infinitely far into the future, because <math>\varepsilon _t</math> affects <math> X_t</math>, which affects <math> X_{t+1}</math>, which affects <math> X_{t+2}</math>, and so on forever.
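The finite impulse response can be checked directly: feeding a single unit shock at ''t'' = 0 through the MA recursion produces a response that is exactly zero after lag ''q''. A minimal sketch (function name illustrative):

```python
def ma_response_to_impulse(theta, horizon):
    """Feed a single unit shock at t = 0 through X_t = e_t + sum_j theta_j e_{t-j}
    (mean omitted) and return the resulting series."""
    q = len(theta)
    eps = [1.0] + [0.0] * (horizon - 1)  # unit shock at t = 0, silence afterwards
    out = []
    for t in range(horizon):
        x_t = eps[t] + sum(theta[j] * eps[t - 1 - j]
                           for j in range(q) if t - 1 - j >= 0)
        out.append(x_t)
    return out

# for theta = (0.6, 0.3) the shock affects X only at lags 0..q:
irf = ma_response_to_impulse([0.6, 0.3], horizon=6)
# irf == [1.0, 0.6, 0.3, 0.0, 0.0, 0.0]
```

An AR model run through the same experiment would instead produce a geometrically decaying but never-ending response.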
Sometimes the [[autocorrelation function]] (ACF) and [[partial autocorrelation function]] (PACF) will suggest that an MA model would be a better model choice and sometimes both AR and MA terms should be used in the same model (see [[Box-Jenkins#Identify p and q]]).
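The diagnostic signature of an MA(''q'') process is that its theoretical ACF cuts off exactly at lag ''q'': writing ψ = (1, θ<sub>1</sub>, ..., θ<sub>''q''</sub>), the autocovariance at lag ''k'' is Σ<sub>''j''</sub> ψ<sub>''j''</sub>ψ<sub>''j''+''k''</sub>, which vanishes for ''k'' > ''q''. A short sketch computing this (function name illustrative, unit shock variance assumed):

```python
def ma_acf(theta, max_lag):
    """Theoretical autocorrelations of an MA(q) process with unit shock variance.

    With psi = [1, theta_1, ..., theta_q], the lag-k autocovariance is
    sum_j psi[j] * psi[j + k], identically zero once k exceeds q.
    """
    psi = [1.0] + list(theta)
    q = len(theta)
    gamma = [sum(psi[j] * psi[j + k] for j in range(len(psi) - k))
             if k <= q else 0.0
             for k in range(max_lag + 1)]
    return [g / gamma[0] for g in gamma]  # normalize by the variance

# MA(2): nonzero autocorrelation at lags 1 and 2, exact zeros beyond
acf = ma_acf([0.6, 0.3], max_lag=5)
```

The sample ACF of real data only approximates this cut-off, which is why identification in the Box–Jenkins approach compares sample autocorrelations against their standard errors rather than looking for exact zeros.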
==See also==
*[[Autoregressive moving-average model]]
*[[Autoregressive model]]
{{noreferences|date=January 2012}}
==External links==
*[http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc444.htm Common approaches to univariate time series]
{{Stochastic processes}}
{{NIST-PD}}
[[Category:Noise]]
[[Category:Time series models]]
Revision as of 01:37, 19 April 2013