# Autoregressive model

In statistics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it describes certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic (imperfectly predictable) term. It is a special case of the more general ARMA model of time series.

## Definition

The notation AR(p) indicates an autoregressive model of order p. The AR(p) model is defined as

$X_{t}=c+\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}\,$

where $\varphi _{1},\ldots ,\varphi _{p}$ are the parameters of the model, $c$ is a constant, and $\varepsilon _{t}$ is white noise. Equivalently, using the backshift operator B, this can be written as

$X_{t}=c+\sum _{i=1}^{p}\varphi _{i}B^{i}X_{t}+\varepsilon _{t}$

so that, moving the summation term to the left side and using polynomial notation, we have

$\phi (B)X_{t}=c+\varepsilon _{t}\,.$

An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise.

Some parameter constraints are necessary for the model to remain wide-sense stationary. For example, processes in the AR(1) model with |φ1| ≥ 1 are not stationary. More generally, for an AR(p) model to be wide-sense stationary, the roots of the polynomial $\textstyle z^{p}-\sum _{i=1}^{p}\varphi _{i}z^{p-i}$ must lie within the unit circle, i.e., each root $z_{i}$ must satisfy $|z_{i}|<1$ .
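As an illustration, the stationarity condition can be checked numerically by finding the roots of this polynomial. The following is a minimal Python/NumPy sketch; the function name and the parameter values are only illustrative:

```python
import numpy as np

def is_stationary(phi):
    """Check the AR(p) stationarity condition for coefficients phi_1, ..., phi_p.

    The roots of z^p - phi_1 z^(p-1) - ... - phi_p must all lie strictly
    inside the unit circle."""
    phi = np.asarray(phi, dtype=float)
    coeffs = np.concatenate(([1.0], -phi))  # coefficients of the characteristic polynomial
    return bool(np.all(np.abs(np.roots(coeffs)) < 1))

print(is_stationary([0.5]))        # AR(1) with |phi_1| < 1  -> True
print(is_stationary([1.2]))        # AR(1) with |phi_1| >= 1 -> False
print(is_stationary([0.9, -0.8]))  # an AR(2) example        -> True
```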

## Intertemporal effect of shocks

Because each shock affects X values infinitely far into the future from the time it occurs, any given value Xt is affected by shocks occurring arbitrarily far back in time. This can also be seen by rewriting the autoregression

$\phi (B)X_{t}=\varepsilon _{t}\,$

(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as

$X_{t}={\frac {1}{\phi (B)}}\varepsilon _{t}\,.$

When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to $\varepsilon _{t}$ has an infinite order—that is, an infinite number of lagged values of $\varepsilon _{t}$ appear on the right side of the equation.
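To make this infinite-order expansion concrete, the weights attached to the lagged $\varepsilon$ terms can be generated recursively as $\psi _{0}=1$ and $\psi _{j}=\sum _{i}\varphi _{i}\psi _{j-i}$. Below is a minimal NumPy sketch; the function name is only illustrative:

```python
import numpy as np

def ar_to_ma_weights(phi, n_weights):
    """Expand 1 / phi(B) into the first n_weights MA(infinity) weights.

    This is the impulse response of the all-pole filter described above:
    psi_0 = 1 and psi_j = sum_i phi_i * psi_{j-i}."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    psi = np.zeros(n_weights)
    psi[0] = 1.0
    for j in range(1, n_weights):
        for i in range(1, min(j, p) + 1):
            psi[j] += phi[i - 1] * psi[j - i]
    return psi

# For AR(1) with phi = 0.9 the weights are 0.9**k, so a single shock decays
# geometrically but never vanishes exactly.
print(ar_to_ma_weights([0.9], 5))   # [1.0, 0.9, 0.81, 0.729, 0.6561]
```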

## Characteristic polynomial

The autocorrelation function of an AR(p) process can be expressed as

$\rho (\tau )=\sum _{k=1}^{p}a_{k}y_{k}^{-|\tau |},$

where $y_{k}$ are the roots of the polynomial

$\phi (B)=1-\sum _{k=1}^{p}\varphi _{k}B^{k}$

where B is the backshift operator, where $\phi (.)$ is the function defining the autoregression, and where $\varphi _{k}$ are the coefficients in the autoregression.

The autocorrelation function of an AR(p) process is a sum of decaying exponentials.

• Each real root contributes a component to the autocorrelation function that decays exponentially.
• Similarly, each pair of complex conjugate roots contributes an exponentially damped oscillation.

## Graphs of AR(p) processes

*Figure: AR(0); AR(1) with AR parameter 0.3; AR(1) with AR parameter 0.9; AR(2) with AR parameters 0.3 and 0.3; and AR(2) with AR parameters 0.9 and −0.8.*

The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so in the figure, AR(0) corresponds to white noise.

For an AR(1) process with a positive $\varphi$, only the previous term in the process and the noise term contribute to the output. If $\varphi$ is close to 0, then the process still looks like white noise, but as $\varphi$ approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output, similar to a low-pass filter.

For an AR(2) process, the previous two terms and the noise term contribute to the output. If both $\varphi _{1}$ and $\varphi _{2}$ are positive, the output will resemble a low-pass filter, with the high-frequency part of the noise decreased. If $\varphi _{1}$ is positive while $\varphi _{2}$ is negative, then the process favors changes in sign between terms of the process: the output oscillates. This can be likened to edge detection or detection of change in direction.

## Example: An AR(1) process

An AR(1) process is given by

$X_{t}=c+\varphi X_{t-1}+\varepsilon _{t}\,$

where $\varepsilon _{t}$ is a white noise process with zero mean and constant variance $\sigma _{\varepsilon }^{2}$. Assuming the process is wide-sense stationary (which requires $|\varphi |<1$), the mean $\mu =\operatorname {E} (X_{t})$ is the same for every t. Taking expectations of both sides,

$\operatorname {E} (X_{t})=\operatorname {E} (c)+\varphi \operatorname {E} (X_{t-1})+\operatorname {E} (\varepsilon _{t}),$

so that

$\mu =c+\varphi \mu +0,$

and hence

$\mu ={\frac {c}{1-\varphi }}.$

The variance is

${\textrm {var}}(X_{t})=\operatorname {E} (X_{t}^{2})-\mu ^{2}={\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}},$

which can be shown by noting that

${\textrm {var}}(X_{t})=\varphi ^{2}{\textrm {var}}(X_{t-1})+\sigma _{\varepsilon }^{2},$

and then by noticing that the quantity above is a stable fixed point of this relation. The autocovariance is given by

$B_{n}=\operatorname {E} (X_{t+n}X_{t})-\mu ^{2}={\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,\,\varphi ^{|n|},$

which decays with a decay time (time constant) of $\tau =-1/\ln(\varphi )$.
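These moment formulas can be checked by simulation. The following minimal sketch (arbitrary parameter values, NumPy assumed) simulates a long AR(1) path and compares sample statistics with the expressions above:

```python
import numpy as np

rng = np.random.default_rng(0)
c, phi, sigma = 2.0, 0.6, 1.0        # arbitrary AR(1) parameters with |phi| < 1
n = 200_000

eps = rng.normal(0.0, sigma, n)
x = np.empty(n)
x[0] = c / (1 - phi)                 # start at the stationary mean
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + eps[t]

print(x.mean(), c / (1 - phi))                 # both close to 5
print(x.var(), sigma**2 / (1 - phi**2))        # both close to 1.5625
# lag-1 sample autocovariance, close to phi * sigma^2 / (1 - phi^2) = 0.9375
print(np.mean((x[1:] - x.mean()) * (x[:-1] - x.mean())))
```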
The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:

$\Phi (\omega )={\frac {1}{\sqrt {2\pi }}}\,\sum _{n=-\infty }^{\infty }B_{n}e^{-i\omega n}={\frac {1}{\sqrt {2\pi }}}\,\left({\frac {\sigma _{\varepsilon }^{2}}{1+\varphi ^{2}-2\varphi \cos(\omega )}}\right).$

This expression is periodic due to the discrete nature of the $X_{j}$, which is manifested as the cosine term in the denominator. If we assume that the sampling time ($\Delta t=1$) is much smaller than the decay time ($\tau$), then we can use a continuum approximation to $B_{n}$:

$B(t)\approx {\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,\,\varphi ^{|t|}$

which yields a Lorentzian profile for the spectral density:

$\Phi (\omega )={\frac {1}{\sqrt {2\pi }}}\,{\frac {\sigma _{\varepsilon }^{2}}{1-\varphi ^{2}}}\,{\frac {\gamma }{\pi (\gamma ^{2}+\omega ^{2})}}$

where $\gamma =1/\tau$ is the angular frequency associated with the decay time $\tau$.

An alternative expression for $X_{t}$ can be obtained by repeatedly substituting the defining equation for $X_{t-1}$, $X_{t-2}$, and so on. Carrying out this substitution N times yields

$X_{t}=c\sum _{k=0}^{N-1}\varphi ^{k}+\varphi ^{N}X_{t-N}+\sum _{k=0}^{N-1}\varphi ^{k}\varepsilon _{t-k}.$

For N approaching infinity, $\varphi ^{N}$ will approach zero and

$X_{t}={\frac {c}{1-\varphi }}+\sum _{k=0}^{\infty }\varphi ^{k}\varepsilon _{t-k}.$

### Explicit mean/difference form of AR(1) process

The AR(1) model is the discrete-time analogue of the continuous Ornstein–Uhlenbeck process. It is therefore sometimes useful to understand the properties of the AR(1) model cast in an equivalent form. In this form, the AR(1) model is given by

$X_{t+1}=X_{t}+\theta (\mu -X_{t})+\epsilon _{t+1}\,,$

where $|\theta |<1\,$ and $\mu$ is the model mean. The n-step-ahead conditional mean and variance are

$\operatorname {E} (X_{t+n}|X_{t})=\mu \left[1-\left(1-\theta \right)^{n}\right]+X_{t}(1-\theta )^{n}$

and

$\operatorname {Var} (X_{t+n}|X_{t})=\sigma ^{2}{\frac {\left[1-(1-\theta )^{2n}\right]}{1-(1-\theta )^{2}}}.$

## Choosing the maximum lag

The partial autocorrelation of an AR(p) process is zero at lags greater than p, so the appropriate maximum lag is the one beyond which the partial autocorrelations are all (statistically indistinguishable from) zero.

## Calculation of the AR parameters

There are many ways to estimate the coefficients, such as the ordinary least squares procedure, method of moments (through Yule–Walker equations), or Markov chain Monte Carlo methods.
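For instance, the ordinary least squares route amounts to regressing $X_{t}$ on an intercept and its own p lags. Below is a minimal NumPy sketch of this conditional least-squares fit; the function name is only illustrative:

```python
import numpy as np

def fit_ar_ols(x, p):
    """Least-squares fit of an AR(p) model with intercept.

    Regresses X_t on (1, X_{t-1}, ..., X_{t-p}) over the usable sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Design matrix: one row per observation t = p, ..., n-1
    X = np.column_stack([np.ones(n - p)] + [x[p - i:n - i] for i in range(1, p + 1)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    c, phi = beta[0], beta[1:]
    return c, phi

# e.g. c_hat, phi_hat = fit_ar_ols(series, p=2) for an observed series
```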

The AR(p) model is given by the equation

$X_{t}=\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}.\,$

It is based on parameters $\varphi _{i}$ where i = 1, ..., p. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations.

The Yule–Walker equations, named for Udny Yule and Gilbert Walker, are the following set of equations:

$\gamma _{m}=\sum _{k=1}^{p}\varphi _{k}\gamma _{m-k}+\sigma _{\varepsilon }^{2}\delta _{m,0},$

where m = 0, ..., p, yielding p + 1 equations. Here $\gamma _{m}$ is the autocovariance function of Xt, $\sigma _{\varepsilon }$ is the standard deviation of the input noise process, and $\delta _{m,0}$ is the Kronecker delta function.

Because the last part of an individual equation is non-zero only if m = 0, the set of equations can be solved by representing the equations for m > 0 in matrix form, thus getting the equation

${\begin{bmatrix}\gamma _{1}\\\gamma _{2}\\\gamma _{3}\\\vdots \\\gamma _{p}\\\end{bmatrix}}={\begin{bmatrix}\gamma _{0}&\gamma _{-1}&\gamma _{-2}&\dots \\\gamma _{1}&\gamma _{0}&\gamma _{-1}&\dots \\\gamma _{2}&\gamma _{1}&\gamma _{0}&\dots \\\vdots &\vdots &\vdots &\ddots \\\gamma _{p-1}&\gamma _{p-2}&\gamma _{p-3}&\dots \\\end{bmatrix}}{\begin{bmatrix}\varphi _{1}\\\varphi _{2}\\\varphi _{3}\\\vdots \\\varphi _{p}\\\end{bmatrix}}$

which can be solved for all $\{\varphi _{m};m=1,2,\cdots ,p\}.$ The remaining equation for m = 0 is

$\gamma _{0}=\sum _{k=1}^{p}\varphi _{k}\gamma _{-k}+\sigma _{\varepsilon }^{2},$

which, once the $\varphi _{k}$ are known, can be solved for $\sigma _{\varepsilon }^{2}.$

An alternative formulation is in terms of the autocorrelation function. The AR parameters are determined by the first p + 1 elements $\rho (\tau )$ of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating

$\rho (\tau )=\sum _{k=1}^{p}\varphi _{k}\rho (k-\tau ).$

### Estimation of AR parameters

The above equations (the Yule–Walker equations) provide several routes to estimating the parameters of an AR(p) model, by replacing the theoretical covariances with estimated values; a small numerical sketch of this appears after the list below. Some of these variants can be described as follows:

• Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices.
• Formulation as a least squares regression problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of Xt on the p previous values of the same series. This can be thought of as a forward-prediction scheme. The normal equations for this problem can be seen to correspond to an approximation of the matrix form of the Yule–Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate.
• Formulation as an extended form of ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward prediction equations, relating to the backward representation of the AR model:
$X_{t}=c+\sum _{i=1}^{p}\varphi _{i}X_{t+i}+\varepsilon _{t}^{*}\,.$

Here predicted values of Xt would be based on the p future values of the same series. This way of estimating the AR parameters is due to Burg and is called the Burg method: Burg and later authors called these particular estimates "maximum entropy estimates", but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with maximum entropy spectral estimation.
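As a concrete illustration of the first variant above, the following minimal NumPy sketch (illustrative names) replaces the theoretical autocovariances with biased sample estimates, solves the m > 0 Yule–Walker equations for the coefficients, and uses the m = 0 equation for the noise variance:

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients and noise variance via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # work with deviations from the mean
    n = len(x)
    # Biased sample autocovariances gamma_0, ..., gamma_p
    gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # Toeplitz system gamma[1:] = R @ phi, with R[i, j] = gamma_{|i - j|}
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, gamma[1:])
    # Remaining (m = 0) equation gives the innovation variance
    sigma2 = gamma[0] - phi @ gamma[1:]
    return phi, sigma2

# e.g. phi_hat, sigma2_hat = yule_walker(series, p=2) for an observed series
```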

Other possible approaches to estimation include maximum likelihood estimation. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial p values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.

## Spectrum

The power spectral density (PSD) of an AR(p) process with noise variance $\operatorname {Var} (Z_{t})=\sigma _{Z}^{2}$ is

$S(f)={\frac {\sigma _{Z}^{2}}{|1-\sum _{k=1}^{p}\varphi _{k}e^{-2\pi ikf}|^{2}}}.$
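This formula is straightforward to evaluate on a grid of frequencies. A minimal NumPy sketch (illustrative function name; f is in cycles per sample, $0\leq f\leq 1/2$):

```python
import numpy as np

def ar_psd(phi, sigma2, freqs):
    """Evaluate the AR(p) power spectral density S(f) given above.

    S(f) = sigma2 / |1 - sum_k phi_k * exp(-2*pi*i*k*f)|**2."""
    phi = np.asarray(phi, dtype=float)
    k = np.arange(1, len(phi) + 1)
    # One value per frequency: 1 - sum_k phi_k e^{-2 pi i k f}
    denom = 1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ phi
    return sigma2 / np.abs(denom) ** 2

freqs = np.linspace(0.0, 0.5, 6)
print(ar_psd([0.9], 1.0, freqs))   # AR(1): large at f = 0, small at f = 1/2
```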

### AR(0)

For white noise (AR(0))

$S(f)=\sigma _{Z}^{2}.$

### AR(1)

For AR(1)

$S(f)={\frac {\sigma _{Z}^{2}}{|1-\varphi _{1}e^{-2\pi if}|^{2}}}={\frac {\sigma _{Z}^{2}}{1+\varphi _{1}^{2}-2\varphi _{1}\cos 2\pi f}}$

### AR(2)

AR(2) processes can be split into three groups depending on the characteristics of their roots:

$z_{1},z_{2}={\frac {1}{2}}\left(\varphi _{1}\pm {\sqrt {\varphi _{1}^{2}+4\varphi _{2}}}\right)$

• When $\varphi _{1}^{2}+4\varphi _{2}<0$, the process has a pair of complex-conjugate roots, creating a mid-frequency spectral peak at

$f^{*}={\frac {1}{2\pi }}\cos ^{-1}\left({\frac {\varphi _{1}(\varphi _{2}-1)}{4\varphi _{2}}}\right)$

Otherwise the process has real roots, and:

• When $\varphi _{1}>0$ it acts as a low-pass filter on the white noise, with a spectral peak at $f=0$
• When $\varphi _{1}<0$ it acts as a high-pass filter on the white noise, with a spectral peak at $f=1/2$

The process is non-stationary when the roots are outside the unit circle. The process is stable when the roots are within the unit circle, or equivalently when the coefficients are in the triangle $-1\leq \varphi _{2}\leq 1-|\varphi _{1}|$ .
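A short sketch tying these conditions together (illustrative names, NumPy assumed; the peak-frequency formula is the one given above):

```python
import numpy as np

def classify_ar2(phi1, phi2):
    """Classify an AR(2) process via the roots of z^2 - phi1*z - phi2 = 0.

    Complex-conjugate roots give a mid-frequency spectral peak at f*,
    otherwise the roots are real."""
    roots = np.roots([1.0, -phi1, -phi2])
    stable = bool(np.all(np.abs(roots) < 1))     # roots inside the unit circle
    if phi1**2 + 4 * phi2 < 0:                   # complex-conjugate roots
        f_star = np.arccos(phi1 * (phi2 - 1) / (4 * phi2)) / (2 * np.pi)
        return stable, "complex roots", f_star
    return stable, "real roots", None

print(classify_ar2(0.9, -0.8))   # stable, complex roots, peak near f* ~ 0.17
```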

The full PSD function can be expressed in real form as:

$S(f)={\frac {\sigma _{Z}^{2}}{1+\varphi _{1}^{2}+\varphi _{2}^{2}-2\varphi _{1}(1-\varphi _{2})\cos(2\pi f)-2\varphi _{2}\cos(4\pi f)}}$

## Implementations in statistics packages

• R, the stats package includes an ar function.
• MATLAB's Econometrics Toolbox includes autoregressive models.
• MATLAB and Octave: the TSA toolbox contains several estimation functions for univariate, multivariate and adaptive autoregressive models.

## n-step-ahead forecasting

Once the parameters of the autoregression

$X_{t}=c+\sum _{i=1}^{p}\varphi _{i}X_{t-i}+\varepsilon _{t}\,$ have been estimated, the autoregression can be used to forecast an arbitrary number of periods into the future. First use t to refer to the first period for which data is not yet available; substitute the known prior values Xt-i for i=1, ..., p into the autoregressive equation while setting the error term $\varepsilon _{t}$ equal to zero (because we forecast Xt to equal its expected value, and the expected value of the unobserved error term is zero). The output of the autoregressive equation is the forecast for the first unobserved period. Next, use t to refer to the next period for which data is not yet available; again the autoregressive equation is used to make the forecast, with one difference: the value of X one period prior to the one now being forecast is not known, so its expected value—the predicted value arising from the previous forecasting step—is used instead. Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after p predictions, all p right-side values are predicted values from prior steps.
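A minimal sketch of this recursion (illustrative names, NumPy assumed; c and the $\varphi _{i}$ are taken to have been estimated already):

```python
import numpy as np

def forecast_ar(c, phi, history, n_ahead):
    """Iterate an estimated AR(p) model forward, as described above.

    Future error terms are set to zero (their expected value), and each
    forecast is fed back in as a lagged value once real data run out."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    values = list(history)                      # observed series, oldest first
    forecasts = []
    for _ in range(n_ahead):
        lags = values[-p:][::-1]                # X_{t-1}, ..., X_{t-p}
        x_next = c + float(np.dot(phi, lags))   # error term set to zero
        forecasts.append(x_next)
        values.append(x_next)                   # reused as a lag in later steps
    return forecasts

# Hypothetical estimated AR(2) model and a short observed history
print(forecast_ar(0.5, [0.6, 0.2], [1.0, 1.3, 1.1, 1.4], n_ahead=3))
```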

There are four sources of uncertainty regarding predictions obtained in this manner: (1) uncertainty as to whether the autoregressive model is the correct model; (2) uncertainty about the accuracy of the forecasted values that are used as lagged values in the right side of the autoregressive equation; (3) uncertainty about the true values of the autoregressive coefficients; and (4) uncertainty about the value of the error term $\varepsilon _{t}\,$ for the period being predicted. Each of the last three can be quantified and combined to give a confidence interval for the n-step-ahead predictions; the confidence interval will become wider as n increases because of the use of an increasing number of estimated values for the right-side variables.
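For example, in an AR(1) model with known coefficient $\varphi$ (so ignoring sources (1) and (3)) and the constant suppressed, the n-step-ahead forecast is $\varphi ^{n}X_{t}$ and the forecast error is $\sum _{k=0}^{n-1}\varphi ^{k}\varepsilon _{t+n-k}$, whose variance $\sigma _{\varepsilon }^{2}(1-\varphi ^{2n})/(1-\varphi ^{2})$ grows with n toward the unconditional variance $\sigma _{\varepsilon }^{2}/(1-\varphi ^{2})$; this illustrates why the confidence interval widens as the horizon increases.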

## Evaluating the quality of forecasts

The predictive performance of the autoregressive model can be assessed as soon as estimation has been done if cross-validation is used. In this approach, some of the initially available data is used for parameter estimation purposes, and some (from observations later in the data set) is held back for out-of-sample testing. Alternatively, after some time has passed since the parameter estimation was conducted, more data will have become available and predictive performance can then be evaluated using the new data.

In either case, there are two aspects of predictive performance that can be evaluated: one-step-ahead and n-step-ahead performance. For one-step-ahead performance, the estimated parameters are used in the autoregressive equation along with observed values of X for all periods prior to the one being predicted, and the output of the equation is the one-step-ahead forecast; this procedure is used to obtain forecasts for each of the out-of-sample observations. To evaluate the quality of n-step-ahead forecasts, the forecasting procedure in the previous section is employed to obtain the predictions.

Given a set of predicted values and a corresponding set of actual values for X for various time periods, a common evaluation technique is to use the mean squared prediction error; other measures are also available (see Forecasting#Forecasting accuracy).
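A minimal sketch of the one-step-ahead evaluation on a hold-out sample (illustrative names, NumPy assumed; the coefficients are taken from the estimation step):

```python
import numpy as np

def one_step_mspe(c, phi, series, n_holdout):
    """Mean squared error of one-step-ahead forecasts on the last n_holdout points.

    For each held-out period the forecast uses the observed values of all
    prior periods, as described in the previous section."""
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    series = np.asarray(series, dtype=float)
    errors = []
    for t in range(len(series) - n_holdout, len(series)):
        lags = series[t - p:t][::-1]             # X_{t-1}, ..., X_{t-p}
        forecast = c + float(np.dot(phi, lags))  # error term set to zero
        errors.append(series[t] - forecast)
    return float(np.mean(np.square(errors)))
```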

The question of how to interpret the measured forecasting accuracy arises—for example, what is a "high" (bad) or a "low" (good) value for the mean squared prediction error? There are two possible points of comparison. First, the forecasting accuracy of an alternative model, estimated under different modeling assumptions or different estimation techniques, can be used for comparison purposes. Second, the out-of-sample accuracy measure can be compared to the same measure computed for the in-sample data points (that were used for parameter estimation) for which enough prior data values are available (that is, dropping the first p data points, for which p prior data points are not available). Since the model was estimated specifically to fit the in-sample points as well as possible, it will usually be the case that the out-of-sample predictive performance will be poorer than the in-sample predictive performance. But if the predictive quality deteriorates out-of-sample by "not very much" (which is not precisely definable), then the forecaster may be satisfied with the performance.