Now that you understand what the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) represent and how to plot them, let's focus on how these plots guide us in selecting appropriate time series models. Specifically, we'll look for patterns that suggest Autoregressive (AR), Moving Average (MA), or combined ARMA characteristics in our stationary time series data. Remember, ACF and PACF interpretation is typically performed after ensuring the time series is stationary, possibly through differencing (as discussed in Chapter 2).
The core idea is that pure AR and MA processes have distinct theoretical signatures on their ACF and PACF plots. By matching the patterns observed in your data's plots to these theoretical signatures, you can hypothesize the order of the model needed.
Before interpreting patterns, recall that ACF/PACF plots usually include shaded regions marking significance bounds (commonly 95% confidence intervals). Spikes extending outside these bounds are considered statistically significantly different from zero, while values inside the bounds are consistent with zero correlation. Interpretation focuses on the lags where the correlation spikes outside these bounds.
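As a quick illustration, these plots with their shaded 95% bounds can be produced with statsmodels' `plot_acf` and `plot_pacf`. This is a minimal sketch; the white-noise series here is only a stand-in for your own stationary (possibly differenced) data:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Placeholder series: white noise stands in for your own stationary data.
rng = np.random.default_rng(42)
series = rng.normal(size=500)

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
# alpha=0.05 draws the shaded 95% confidence bounds; spikes that
# extend beyond the shading are treated as significant.
plot_acf(series, lags=20, alpha=0.05, ax=axes[0])
plot_pacf(series, lags=20, alpha=0.05, ax=axes[1])
plt.tight_layout()
plt.show()
```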
An Autoregressive model of order p, denoted AR(p), suggests the current value of the series depends linearly on its previous p values.
$$Y_t = c + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \dots + \phi_p Y_{t-p} + \epsilon_t$$

where $\epsilon_t$ is white noise.
The theoretical signatures for an AR(p) process are:

- **ACF:** Tails off gradually, decaying exponentially or as a damped sine wave.
- **PACF:** Cuts off sharply after lag p, with subsequent values inside the significance bounds.

Therefore, if you observe:

- An ACF that decays slowly toward zero, and
- A PACF with a clear cut-off after some lag p,

this pattern strongly suggests an AR(p) model is appropriate. The order p is determined by the last significant lag in the PACF plot, as the example plot and simulation sketch below illustrate.
Example PACF plot for an AR(2) process. Note the significant spikes at lags 1 and 2, followed by values within the significance bounds (dashed lines/shaded area). This suggests p=2.
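To reproduce this kind of plot yourself, you can simulate an AR(2) series and examine its PACF. The sketch below uses statsmodels' `ArmaProcess` with illustrative coefficients ($\phi_1 = 0.6$, $\phi_2 = 0.25$, chosen only for demonstration); the PACF should show significant spikes at lags 1 and 2 and little beyond:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.graphics.tsaplots import plot_pacf

# AR(2): Y_t = 0.6*Y_{t-1} + 0.25*Y_{t-2} + e_t.
# ArmaProcess takes lag-polynomial coefficients, so the AR terms are negated.
ar = np.array([1, -0.6, -0.25])
ma = np.array([1])  # no MA component

np.random.seed(0)
sample = ArmaProcess(ar, ma).generate_sample(nsample=1000)

plot_pacf(sample, lags=20)
plt.show()
```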
A Moving Average model of order q, denoted MA(q), suggests the current value of the series depends linearly on the current and previous q white noise error terms.
$$Y_t = c + \epsilon_t + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \dots + \theta_q \epsilon_{t-q}$$

The theoretical signatures for an MA(q) process are essentially the mirror image of an AR(p) process:

- **ACF:** Cuts off sharply after lag q, with subsequent values inside the significance bounds.
- **PACF:** Tails off gradually, decaying exponentially or as a damped sine wave.

So, if your plots show:

- An ACF with a clear cut-off after some lag q, and
- A PACF that decays slowly toward zero,

this pattern strongly suggests an MA(q) model is appropriate. The order q is determined by the last significant lag in the ACF plot.
Example ACF plot for an MA(1) process. A significant spike occurs only at lag 1, followed by insignificant values. This suggests q=1.
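The same experiment works for the MA side. This sketch simulates an MA(1) series with an illustrative coefficient ($\theta_1 = 0.7$) and plots its ACF, which should cut off after lag 1:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.graphics.tsaplots import plot_acf

# MA(1): Y_t = e_t + 0.7*e_{t-1}.
# The MA lag polynomial keeps its signs: [1, theta1].
ar = np.array([1])  # no AR component
ma = np.array([1, 0.7])

np.random.seed(0)
sample = ArmaProcess(ar, ma).generate_sample(nsample=1000)

plot_acf(sample, lags=20)
plt.show()
```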
An Autoregressive Moving Average model, ARMA(p,q), combines both AR and MA components.
$$Y_t = c + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \dots + \theta_q \epsilon_{t-q}$$

The theoretical signatures for an ARMA(p,q) process are less distinct:

- **ACF:** Tails off gradually (beginning after lag q).
- **PACF:** Tails off gradually (beginning after lag p).

If both the ACF and PACF plots show patterns of gradual decay (either exponential or sinusoidal) rather than a sharp cut-off, an ARMA(p,q) model might be appropriate.
Identifying the specific orders p and q directly from the plots becomes more challenging compared to pure AR or MA models. The "tailing off" behavior doesn't give precise cut-off points. In practice, if both plots tail off, you might hypothesize low-order models like ARMA(1,1), ARMA(2,1), or ARMA(1,2) and then use model selection criteria (like AIC or BIC, discussed in Chapter 6) and residual analysis (Chapter 4) to choose the best fit.
Example ACF and PACF plots for an ARMA(1,1) process. Both functions exhibit a tailing-off behavior, making precise order identification difficult solely from these plots.
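When both plots tail off like this, a small grid search over low-order candidates is a practical next step. The sketch below simulates an ARMA(1,1) series (illustrative coefficients $\phi_1 = 0.5$, $\theta_1 = 0.4$) and compares candidate fits by AIC, anticipating the model selection criteria covered in Chapter 6:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Simulate an ARMA(1,1) series with illustrative coefficients.
np.random.seed(0)
process = ArmaProcess(np.array([1, -0.5]), np.array([1, 0.4]))
sample = process.generate_sample(nsample=1000)

# Fit low-order candidates and compare AIC (lower is better).
for p, q in [(1, 1), (2, 1), (1, 2)]:
    result = ARIMA(sample, order=(p, 0, q)).fit()
    print(f"ARMA({p},{q}): AIC = {result.aic:.1f}")
```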
| Model | ACF Pattern | PACF Pattern |
|---|---|---|
| AR(p) | Tails off gradually | Cuts off after lag p |
| MA(q) | Cuts off after lag q | Tails off gradually |
| ARMA(p,q) | Tails off (after lag q) | Tails off (after lag p) |
By carefully examining the ACF and PACF plots of your stationary time series, you gain valuable insights into its underlying structure, providing a strong foundation for building effective ARIMA forecasting models.