Book 2. Quantitative Analysis
FRM Part 1
QA 11. Non-stationary Time Series

Presented by: Sudhanshu
Module 1. Time Trends
Module 2. Seasonality
Module 3. Unit Roots
Module 1. Time Trends
Topic 1. Understanding Time Trends
Topic 2. Linear and Nonlinear Trends
Topic 3. Log-Polynomial and Forecasting
Topic 1. Understanding Time Trends
- Non-stationary time series have statistical properties (mean, variance) that change over time.
- Trend Components:
- Deterministic Trend: Predictable patterns such as consistent upward/downward movement.
- Stochastic Trend: Unpredictable, random walk behavior due to shocks (e.g., unit roots).
- Trends can make time series non-stationary, which violates assumptions for standard time series models (like ARMA).
- Objective: Identify and remove trend to obtain a stationary residual series for better modeling.
Practice Questions: Q1
Q1. An analyst has determined that monthly vehicle sales in the United States have been increasing over the last 10 years, but the growth rate over that period has been relatively constant. Which model is most appropriate to predict future vehicle sales?
A. Linear model.
B. Quadratic model.
C. Log-linear model.
D. Log-quadratic model.
Practice Questions: Q1 Answer
Explanation: C is correct.
A log-linear model is most appropriate for a time series that grows at a relatively
constant growth rate.
Topic 2. Linear and Nonlinear Trends
- Linear Trend:
- Constant additive change per period.
- Graph shows deviations around a straight line.
- Limitation: An extrapolated linear trend may eventually predict negative values, which is implausible for series such as prices.
- Nonlinear Trend (Polynomial):
- Models variable growth rates (acceleration or deceleration).
- Flexible, but risk of overfitting.
- Log-Linear Model:
- Implies a constant percentage growth rate over time.
- Common in economics/finance due to exponential growth in many variables.
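As a rough illustration of how these three trend specifications can be estimated, a minimal sketch in Python follows; the simulated series, its growth rate, and the use of numpy's polyfit are assumptions for illustration only, not part of the reading.

```python
import numpy as np

# Illustrative series growing at a roughly constant percentage rate (the log-linear case).
rng = np.random.default_rng(42)
t = np.arange(1, 121)                                   # 120 monthly observations
y = 100 * np.exp(0.004 * t) * np.exp(rng.normal(0, 0.02, t.size))

# Linear trend:    y_t = d0 + d1*t            (constant additive change per period)
d1, d0 = np.polyfit(t, y, 1)

# Quadratic trend: y_t = d0 + d1*t + d2*t^2   (variable growth; flexible but can overfit)
q2, q1, q0 = np.polyfit(t, y, 2)

# Log-linear:      ln(y_t) = d0 + d1*t        (constant percentage growth per period)
g1, g0 = np.polyfit(t, np.log(y), 1)

print(f"linear slope per period:       {d1:.3f}")
print(f"log-linear growth per period:  {g1:.3%}")       # slope of ln(y) ~ growth rate
```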

Practice Questions: Q2
Q2. Using data from 2001 to 2020, an analyst estimates a model for an industry’s annual output as Outputₜ = 80.163 + 4.248t + εₜ, from a regression with a residual standard deviation of 107.574. Assume t equals a given full year (e.g., 2021) and that the error term is normally distributed. A 95% confidence interval for a forecast of 2021 industry output is closest to:
A. 8,374 to 8,796.
B. 8,455 to 8,876.
C. 8,477 to 8,693.
D. 8,557 to 8,773.
Practice Questions: Q2 Answer
Explanation: B is correct.
For t = 2021, a point forecast for industry output is 80.163 + 4.248(2021) = 8,665.371. A 95% confidence interval is 8,665.371 ± 1.96(107.574) = 8,454.526 to 8,876.216.
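A quick numerical check of this arithmetic (a sketch using the figures in the question, not part of the original answer):

```python
# Point forecast for 2021 and a 95% confidence interval using the residual standard deviation.
point = 80.163 + 4.248 * 2021
half_width = 1.96 * 107.574
print(f"point forecast: {point:.3f}")                                   # 8665.371
print(f"95% CI: {point - half_width:.3f} to {point + half_width:.3f}")  # ~8454.5 to ~8876.2
```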
Topic 3. Log-Polynomial and Forecasting
- Extended Log Models:
- Useful when the growth rate itself changes over time (e.g., a log-quadratic trend).
- Model Estimation:
- Use Ordinary Least Squares (OLS), assuming εₜ is white noise.
- If εₜ is autocorrelated, OLS results may be biased/inconsistent.
- Forecasting h periods ahead (linear model): $\hat{y}_{T+h} = \delta_0 + \delta_1(T + h)$
- 95% Confidence Interval: $\hat{y}_{T+h} \pm 1.96\,\hat{\sigma}_\varepsilon$ (assumes normally distributed residuals).
- Detrending:
- Subtract the estimated trend from the original series.
- Residuals may be modeled with AR, MA, or ARMA if stationary.
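A minimal sketch of this detrend-then-model workflow, assuming statsmodels is available; the simulated data, the AR(1) choice for the residuals, and all parameter values are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.ar_model import AutoReg

# Illustrative data: a linear trend plus AR(1) noise (assumed, not from the reading).
rng = np.random.default_rng(0)
T = 80
t = np.arange(1, T + 1)
noise = np.zeros(T)
for i in range(1, T):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 1.0)
y = 50 + 2.0 * t + noise

# 1) Estimate the deterministic trend by OLS: y_t = d0 + d1*t + e_t
trend_fit = sm.OLS(y, sm.add_constant(t)).fit()
d0, d1 = trend_fit.params

# 2) Detrend: subtract the fitted trend; the residuals should be close to stationary.
resid = y - trend_fit.fittedvalues

# 3) Model the stationary residuals, e.g., with an AR(1).
ar_fit = AutoReg(resid, lags=1).fit()
print("AR(1) params (const, AR coefficient):", ar_fit.params)

# 4) h-step-ahead trend forecast with a 95% interval: point forecast +/- 1.96 * residual std.
h = 5
point = d0 + d1 * (T + h)
sigma = resid.std(ddof=2)
print(f"forecast: {point:.2f}, 95% CI: [{point - 1.96*sigma:.2f}, {point + 1.96*sigma:.2f}]")
```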
Module 2. Seasonality
Topic 1. Modeling Seasonality
Topic 2. Seasonal Differencing and Dummy Design
Topic 3. Forecasting with Seasonality
Topic 1. Modeling Seasonality
- Seasonality: Pattern that repeats periodically (e.g., monthly sales).
- Calendar Effects:
- Monthly (e.g., December retail surge)
- Quarterly (e.g., earnings season)
- Weekly (e.g., weekend sales effect)
- Dummy Variable Regression:
- Include k−1 dummies for k seasons (e.g., 3 dummies for 4 quarters).
- The intercept term (β₀) represents the base category (e.g., Q4).
- Interpretation:
- Coefficients measure the average difference from the base season.
- Example: if the Q1 dummy coefficient is 0.50, Q1 values average 0.50 above the base quarter (Q4), all else equal (see the regression sketch below).
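A minimal sketch of such a dummy-variable regression with quarterly data, assuming pandas and statsmodels; the EPS figures below are made up purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Made-up quarterly EPS for 5 years; Q4 will serve as the omitted base category.
quarters = pd.Series(["Q1", "Q2", "Q3", "Q4"] * 5)
eps = pd.Series([1.2, 0.9, 1.0, 1.6] * 5) + np.random.default_rng(1).normal(0, 0.05, 20)

# k - 1 = 3 dummies: dropping Q4 makes the intercept the average Q4 EPS.
dummies = pd.get_dummies(quarters)[["Q1", "Q2", "Q3"]].astype(float)
X = sm.add_constant(dummies)

fit = sm.OLS(eps, X).fit()
print(fit.params)   # const ~ average Q4 EPS; Q1-Q3 coefficients ~ average difference vs. Q4
```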
Topic 2. Seasonal Differencing and Dummy Design
- Seasonal Differencing:
- Take yₜ - yₜ₋ₛ (e.g., yₜ - yₜ₋₄ for quarterly data); helps when both seasonality and trend are present (a differencing sketch follows this list).
- Stabilizes the mean and variance.
- Avoiding Multicollinearity:
- Don't include all dummies (e.g., for 12 months, use only 11).
- One category acts as a reference base.
- Advanced Modeling:
- Add dummy variables for holidays/trading day effects.
- Effective for event-driven financial forecasting.
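A short sketch of seasonal differencing at lag 4 with pandas; the quarterly sales series below is simulated purely for illustration.

```python
import numpy as np
import pandas as pd

# Simulated quarterly sales with a linear trend and a repeating quarterly pattern.
rng = np.random.default_rng(7)
t = np.arange(40)
seasonal = np.tile([5.0, -2.0, 1.0, -4.0], 10)
sales = pd.Series(100 + 1.5 * t + seasonal + rng.normal(0, 1.0, 40))

# Seasonal difference: y_t - y_{t-4}. The repeating quarterly effect cancels and the
# linear trend collapses to a constant, stabilizing the mean of the series.
seasonal_diff = sales.diff(4).dropna()
print(seasonal_diff.mean(), seasonal_diff.std())   # mean close to 4 * 1.5 = 6
```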
Topic 3. Forecasting with Seasonality
- h-step-ahead Forecasting: $\hat{y}_{T+h} = \delta_0 + \delta_1(T+h) + \beta_j$
- Set the dummy variable for season j to 1 if season j occurs at T+h, else 0.
- Example: to forecast EPS for Q1 of next year, evaluate the trend at T+h and add the Q1 dummy coefficient (the other dummies are set to 0).
- Practical Tip:
- Combine time trend and dummies for better forecast accuracy.
- Can include interaction terms if trend varies across seasons.
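A sketch of the h-step-ahead forecast formula that combines a time trend with seasonal dummies; all coefficient values below are hypothetical.

```python
# Hypothetical estimated coefficients for y_hat = d0 + d1*(T + h) + beta_j (season j at T + h).
d0, d1 = 2.10, 0.05                                       # intercept and trend slope (assumed)
beta = {"Q1": 0.30, "Q2": -0.10, "Q3": 0.05, "Q4": 0.0}   # Q4 is the omitted base season

def forecast(T: int, h: int, season: str) -> float:
    """Trend value at T + h plus the dummy coefficient for the season occurring then."""
    return d0 + d1 * (T + h) + beta[season]

# e.g., forecast EPS for Q1 of next year, one step beyond the last observation T = 40.
print(forecast(T=40, h=1, season="Q1"))                   # 2.10 + 0.05*41 + 0.30 = 4.45
```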
Practice Questions: Q1
Q1. Jill Williams is an analyst in the retail industry. She is modeling a company’s sales and has noticed a quarterly seasonal pattern. If Williams includes an intercept term in her model, how many dummy variables should she use to model the seasonality component?
A. 1.
B. 2.
C. 3.
D. 4.
Practice Questions: Q1 Answer
Explanation: C is correct.
Whenever we want to distinguish between s seasons in a model that incorporates an intercept, we must use s − 1 dummy variables. For example, if we have quarterly data, s = 4, and thus we would include s − 1 = 3 seasonal dummy variables.
Practice Questions: Q2
Q2. Consider the following regression equation utilizing dummy variables for explaining quarterly EPS in terms of the quarter of their occurrence:
EPSₜ = β₀ + β₁Q1ₜ + β₂Q2ₜ + β₃Q3ₜ + εₜ
The intercept term represents the average value of EPS for the:
A. first quarter.
B. second quarter.
C. third quarter.
D. fourth quarter.

Practice Questions: Q2 Answer
Explanation: D is correct.
The intercept term represents the average value of EPS for the fourth quarter. The slope coefficient on each dummy variable estimates the difference in EPS (on average) between the respective quarter (i.e., quarter one, two, or three) and the omitted quarter (the fourth quarter, in this case).
Practice Questions: Q3
Q3. A model for the change in a retailer’s quarterly sales, using seasonal dummy variables DQ, is estimated as:
In the third quarter, sales are forecast to:
A. decrease by 3.8.
B. decrease by 1.0.
C. increase by 1.1.
D. increase by 3.8.
Practice Questions: Q3 Answer
Explanation: C is correct.
Setting the third-quarter dummy to 1 and the other dummies to 0, the fitted equation gives a forecast change of +1.1, so sales are forecast to increase by 1.1.
Module 3. Unit Roots
Topic 1. Random Walks and Unit Root Processes
Topic 2. Challenges & Testing for Unit Roots
Topic 1. Random Walks and Unit Root Processes
- Random Walk: yₜ = yₜ₋₁ + εₜ (every observation equals the prior value plus a shock).
- Not mean-reverting; variance increases over time.
- No long-term equilibrium.
- Back-substitution: yₜ = y₀ + ε₁ + ε₂ + … + εₜ, so the entire history of shocks affects the current value.
- Unit Root:
- The characteristic equation of the process has a root equal to 1.
- Special case: random walk with drift, yₜ = δ + yₜ₋₁ + εₜ.
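A sketch that simulates random walks to illustrate these properties (no mean reversion, variance growing with time); the parameters are arbitrary.

```python
import numpy as np

# Simulate many random walks y_t = y_{t-1} + e_t (with y_0 = 0) and track dispersion over time.
rng = np.random.default_rng(123)
n_paths, n_steps = 2000, 200
shocks = rng.normal(0, 1, size=(n_paths, n_steps))
paths = shocks.cumsum(axis=1)                 # back-substitution: y_t is the sum of all shocks

# Cross-sectional variance grows roughly linearly in t, so there is no long-run equilibrium level.
for step in (10, 50, 200):
    print(f"t={step:>3}  variance across paths ~ {paths[:, step - 1].var():.1f}  (theory: {step})")

# Random walk with drift: y_t = delta + y_{t-1} + e_t adds a deterministic trend to the level.
delta = 0.2
drift_paths = (delta + shocks).cumsum(axis=1)
print("mean level at t=200 with drift ~", round(drift_paths[:, -1].mean(), 1), "(theory: 40.0)")
```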
Topic 2. Challenges & Testing for Unit Roots
- Challenges:
- No mean reversion → no long-run value to revert to.
- High risk of spurious regression.
- ARMA estimates are biased under unit roots, and test statistics do not follow their usual (normal) distributions.
- Remedy:
- Use first differences: Δyₜ = yₜ - yₜ₋₁.
- If still non-stationary, apply second differencing.
- Augmented Dickey-Fuller (ADF) Test:
- Regress the change on the lagged level (plus lagged differences and, optionally, a constant or trend): Δyₜ = γyₜ₋₁ + Σδᵢ Δyₜ₋ᵢ + εₜ.
- H₀: γ = 0 (unit root); reject H₀ if γ is significantly negative.
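A sketch of applying the ADF test with statsmodels' adfuller, with first-differencing as the remedy; the simulated series is illustrative only.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# A simulated unit-root series (pure random walk).
rng = np.random.default_rng(11)
random_walk = rng.normal(0, 1, 500).cumsum()

# adfuller regresses the change on the lagged level plus lagged changes.
# H0: unit root (gamma = 0); a small p-value (significantly negative gamma) rejects H0.
stat, pvalue, *_ = adfuller(random_walk)
print(f"levels:      ADF stat = {stat:.2f}, p-value = {pvalue:.3f}  (expect: fail to reject H0)")

# Remedy: test the first differences, which should be stationary.
stat_d, pvalue_d, *_ = adfuller(np.diff(random_walk))
print(f"differences: ADF stat = {stat_d:.2f}, p-value = {pvalue_d:.3f}  (expect: reject H0)")
```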
Practice Questions: Q1
Q1. A random walk is most accurately described as a time series whose value is a function of its:
A. previous value only.
B. beginning value only.
C. previous value and a random shock.
D. beginning value and all historical shocks.
Practice Questions: Q1 Answer
Explanation: D is correct.
For a random walk, yₜ = y₀ + ε₁ + ε₂ + … + εₜ, so its value at time t is a function of its beginning value and all historical shocks, including the shock in its own period.
Practice Questions: Q2
Q2. An augmented Dickey-Fuller test will reject the hypothesis that a process is a unit root if the coefficient on the lagged value is statistically significantly:
A. less than zero.
B. equal to zero.
C. greater than zero.
D. different from zero.
Practice Questions: Q2 Answer
Explanation: A is correct.
Although the null hypothesis is that the coefficient on the lagged value is equal to zero, the rejection condition is that the coefficient is less than zero.