The Impact of Unexpected Shocks to the U.S. Economy: Impulse Response Functions (IRF) Revisited


In a previous post, the impulse response functions for German macroeconomic variables were estimated and graphically depicted using STATA. That discussion focused on the interpretation of the impulse response graphs. While that entry was concerned with the practical estimation of a model of the German economy, this post will focus on the statistical definition of impulse response functions. Once the theory is explained, a model will be estimated and impulse responses calculated to provide context. Once again, the ambitious aim of this post is to answer the following questions:

  • What are some of the assumptions behind impulse response functions and the underlying Vector Autoregressive (VAR) macroeconomic model?
  • How are impulse response functions derived from a VAR?
  • What do impulse responses tell us about the U.S. economy and where do they fall short in describing it?

Statistical Theory

The Impulse Response Functions (IRF) of a model are derived from the Vector Moving Average (VMA) representation of a stationary VAR system, so that representation is the natural starting point.
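As a minimal sketch in standard notation for a first-order VAR (the symbols are assumptions: $x_t$ is the vector of macroeconomic variables, $\mu$ its mean, $\varepsilon_t$ the vector of structural shocks, $A_1$ the reduced-form coefficient matrix, and $B$ the matrix of contemporaneous structural relationships), the VMA representation in terms of the structural shocks is:

$$x_t = \mu + \sum_{i=0}^{\infty} A_1^{i} B^{-1} \varepsilon_{t-i}$$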

The equation above is the VMA model with the structural error terms, but it is useful to write the expression in terms of the reduced form residuals.
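Under the same assumed notation, the reduced form residuals are $e_t = B^{-1}\varepsilon_t$, so the expansion can equivalently be written as:

$$x_t = \mu + \sum_{i=0}^{\infty} A_1^{i} e_{t-i}$$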

To simplify notation the matrix of coefficients within the summation sign will be written in compact form using this definition:
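One definition consistent with the sketch above is:

$$\phi_i \equiv A_1^{i} B^{-1}$$

so that $\phi_i$ collects the response coefficients at horizon $i$.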

The moving average representation now can be written more compactly in terms of the structural error terms.
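Continuing the sketch:

$$x_t = \mu + \sum_{i=0}^{\infty} \phi_i \varepsilon_{t-i}$$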

The impact multiplier, which represents the instantaneous reaction of one variable to an external shock in another, is written as:
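In the assumed notation, writing $\phi_{jk}(i)$ for the row-$j$, column-$k$ element of $\phi_i$:

$$\phi_{jk}(0) = \frac{\partial x_{j,t}}{\partial \varepsilon_{k,t}}$$

More generally, $\phi_{jk}(i) = \partial x_{j,t+i}/\partial \varepsilon_{k,t}$ traces out the response of variable $j$, $i$ periods after a one-unit shock to $\varepsilon_{k}$.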

Plotting this function on the y-axis against time on the x-axis yields an impulse response graph. Because the series is assumed to be stationary, the sum of the impulse responses remains finite as the forecast horizon approaches infinity:
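In the assumed notation:

$$\sum_{i=0}^{\infty} \phi_{jk}(i) < \infty$$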

The summation above is referred to as the long-run multiplier.

U.S. Economic Model

Using U.S. quarterly data on inflation, unemployment, and interest rates, I replicated the analysis of Stock and Watson that appeared in the Journal of Economic Perspectives (Volume 14, Number 4, Fall 2001). Consistent with their results, I found that there are significant long-term effects on the economy when there are one-standard-deviation shocks to these variables.

Using a Cholesky decomposition on a VAR model with the ordering 1) inflation, 2) unemployment, and 3) interest rates, I calculate the following impulse response functions for the U.S. unemployment rate:

A one standard deviation shock to the inflation rate increases the unemployment rate. The effect becomes statistically significant seven quarters after the shock, and unemployment returns to its previous value about 24 quarters, or six years, later.

A one standard deviation shock to unemployment causes unemployment to peak after about 2-3 quarters; it then begins to decline, eventually overshooting so that unemployment falls below its initial level about 12-16 quarters out.

A one standard deviation shock to interest rates increases unemployment. Unemployment reaches its maximum about nine quarters after the initial interest rate shock to the economy.
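For context, results like these could be produced in Stata along roughly the following lines (a sketch only; the variable names inflation, unemp, and ffr, the lag length, and the 24-quarter horizon are assumptions, not the exact specification estimated here):

* Sketch: recursive (Cholesky) impulse responses for a three-variable VAR,
* assuming quarterly series inflation, unemp, and ffr that are already tsset
var inflation unemp ffr, lags(1/4)      // reduced-form VAR; the variable order sets the Cholesky ordering
irf set usirf, replace                  // file that stores the impulse response results
irf create model1, step(24) replace     // compute responses out to 24 quarters
irf graph oirf, impulse(inflation unemp ffr) response(unemp)

Because the orthogonalized IRFs use the Cholesky factor of the residual covariance matrix, the ordering of the variables in the var command matters for the results.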


About the Author

Portrait of JJ Espinoza by Charles Ng | Time On Film Photography

JJ Espinoza is a Senior Full Stack Data Scientist, Macroeconomist, and Real Estate Investor. He has over ten years of experience working in the world’s most admired technology and entertainment companies. JJ is highly skilled in data science, computer programming, marketing, and leading teams of data scientists. He double-majored in math and economics at UCLA before going on to earn his master’s in economics, focusing on macroeconometrics and international finance.

You can follow JJ on Medium, LinkedIn, and Twitter.

 

Waiting for Consumers to Respond: The Error Correction Model for Long-Term Numerical Relationships

There is a clear relationship between income and consumption that persists across time.  An increase in income does not necessarily translate into an instant increase in consumption; there is a lag between changes in income and the corresponding changes in consumption.  An obvious example is the loss of a job, which can reduce income substantially even though it may take a while for some expenditures to change: losing a job does not make car, mortgage, and credit card payments go away.  On the other hand, a raise or promotion increases a person’s disposable income, but it may take a while before the person actually reacts to this higher income level.  Clearly the relationship between income and consumption is there, but how do lags in consumer response come into the picture?  Do these lags in consumer response to changes in income bias estimates of their correlation?

The purpose of this post is to analyze the long-term relationship between income and consumption in the United States.  Using monthly data from the Federal Reserve Bank of St. Louis covering 1980 through July 2009, I estimate a regression in levels to identify the relationship between income and consumption.  I then estimate an Error Correction Model to capture the lag in consumer response, building a better model and thus providing better insight into consumer spending as a function of income.  I find that there is a lag which dampens consumption behavior after an increase in income.

Testing for Unit Roots (Stationary Time Series) in Income and Consumption

After testing consumption and income in levels, one cannot reject the null hypothesis that each follows a unit root process.  After taking the first difference of Disposable Personal Income (DSPI) and the second difference of Personal Consumption Expenditure (PCE), we can conclude that both transformed series are stationary. Recall that a stationary time series is needed for forecasting and hypothesis testing with time series data.
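For context, tests along these lines could be run in Stata as follows (a sketch; the FRED mnemonics pce and dspi and the lag length are assumptions):

* Sketch: ADF unit root tests on levels and differences of consumption and income
* (assumes the monthly series pce and dspi are already tsset)
dfuller pce, trend lags(12)      // levels: cannot reject a unit root
dfuller dspi, trend lags(12)
dfuller d.dspi, lags(12)         // first difference of income
dfuller d2.pce, lags(12)         // second difference of consumption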

The graph above shows the second difference of Personal Consumption Expenditure, which has an ADF test statistic of -13.46.

The graph above shows the first difference of Disposable Personal Income, which is stationary given the ADF test statistic of -2.29.

Showing Cointegration of Consumption and Income

We suspect that consumption depends on income, but we believe that consumption responds with a lag.  To express this idea formally, the following equations introduce the machinery of the error correction model.

 

1.  Define a linear combination of the suspected cointegrated variables at time t.

2.  Define a linear combination of a suspected cointegrated variable at time t-1.

3. Write the original model in terms of first differences and include the correction term, which is the first lag of the error term defined in step 1; all three equations are sketched below.
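A minimal sketch of these three steps, writing $PCE_t$ for consumption, $DSPI_t$ for disposable income, and using assumed coefficient names:

$$u_t = PCE_t - \beta_0 - \beta_1 DSPI_t$$

$$u_{t-1} = PCE_{t-1} - \beta_0 - \beta_1 DSPI_{t-1}$$

$$\Delta PCE_t = \alpha_0 + \alpha_1 \Delta DSPI_t + \lambda u_{t-1} + \epsilon_t$$

Here $u_{t-1}$ is the error correction term and $\lambda$ measures how quickly consumption adjusts back toward its long-run relationship with income.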

The error correction model above is a regression on the suspected cointegrated time series that includes the lag of the error term.  This term captures the short-run deviations between changes in PCE and DSPI.

 

Expected Theoretical Results

What sign should the coefficient in front of the lag error term have?

Case 1:  Positive

The coefficient could be positive if a change in income, such as losing a job, produces a lagged response in consumption.  You might not be able to adjust instantly, so you consume some of your savings or begin to borrow.

Case 2:  Negative

A person gets a raise but does not immediately increase consumption.  The lagged error term would then serve to reduce expected consumption, reflecting this behavior of waiting for a secure and permanent source of income.

 

Empirical Results in EViews


' First and second differences of consumption (pce) and the first difference of income (dspi)
series dpce = pce - pce(-1)
series ddpce = dpce - dpce(-1)
series ddspi = dspi - dspi(-1)
' Long-run relationship between pce and dspi from the levels regression
series error = -250.90 + 0.963830*dspi
' Deviation from the long-run relationship, used as the error correction term
series u = pce - 250.90 - 0.963830*dspi
' First lag of the deviation (LAGU in the regression output)
series lagu = u(-1)
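For comparison, the same construction and the error correction regression itself could be sketched in Stata (the series names, the dependent variable, and the exact specification are assumptions rather than the regression reported below):

* Sketch: cointegrating regression and error correction model in Stata
* (assumes the monthly series pce and dspi are already tsset)
reg pce dspi                    // long-run (cointegrating) regression in levels
predict u, residuals            // residual = deviation from the long-run relationship
gen lagu = l.u                  // first lag of the residual
reg d.pce d.dspi lagu           // ECM: interest centers on the coefficient on lagu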

The coefficient on LAGU is negative and statistically significant, implying that changes in income do not translate into automatic changes in consumption.  There is a countercyclical lag in the way consumers behave after increases or decreases in income. Given rising income in the U.S., the lag in consumer response might be attributed to adjustments in spending after promotions and raises.

The Only Hope for Business/Economic Forecasting: Stationary Stochastic Processes

A stationary stochastic process generates data in a special way that makes it possible to attempt to forecast its future values.  The opposite is a non-stationary stochastic process, commonly referred to as a random walk; by definition, if a time series is a random walk, it is virtually impossible to forecast its future values based on past values alone. A stationary process tends to revert to its mean over time, a property called mean reversion.  How can one tell the difference between a stationary and a non-stationary stochastic process?  Why do only stationary stochastic processes lend themselves to forecasting with any degree of accuracy?  The objective of this post is to define these terms and to conduct an empirical test of stationarity for a well-known macroeconomic time series.  I find that the second difference of GDP is a stationary process and that this is the correct way to look at the data for forecasting.

Necessary and Sufficient Conditions for A Stationary Process

A stationary time series has an average value that does not change too dramatically over time.  The average inflation rate in the U.S. has remained fairly steady since the mid-1980s, so the inflation rate might satisfy this condition.  A second condition of a stationary time series is that the variance of the series is constant, or fairly close to constant.  If a series' values have become more volatile over time, it might violate the constant variance condition.  The final condition is that the covariance between values at time t and time t-k is constant, or close to constant, meaning that the correlation between current and past values has remained stable over the period under investigation. If any of these conditions is violated, the time series is said to be non-stationary, or to follow a random walk.

If the error term of a forecast has a mean value of zero, constant variance, and zero serial correlation, then it is said to be a "white noise" process.  A white noise process is a special case of a stationary time series, and if the errors of a forecast follow a white noise process then we can say that the forecast is unbiased.  In mathematical notation, a white noise process is defined as:
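In assumed notation, with $\epsilon_t$ denoting the process:

$$E(\epsilon_t) = 0, \qquad \operatorname{Var}(\epsilon_t) = \sigma^2, \qquad \operatorname{Cov}(\epsilon_t, \epsilon_{t-k}) = 0 \ \text{ for all } k \neq 0$$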

Integrated Processes

Several important time series are not stationary processes.  Many variables exhibit trends, changing variances, and correlation between past and future values.  What can be done with these variables if one needs to forecast their future values?  The answer is differencing: subtracting previous values from current values to create a new time series.  If the first difference of a non-stationary time series is stationary, the series is said to be integrated of order 1, or I(1).  If a variable is stationary in levels, it is said to be integrated of order zero, or I(0).  If second differencing makes a non-stationary variable stationary, then it is said to be integrated of order 2, or I(2).
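As a small worked example, using assumed notation: a pure random walk $y_t = y_{t-1} + \epsilon_t$, with $\epsilon_t$ white noise, is I(1), because its first difference

$$\Delta y_t = y_t - y_{t-1} = \epsilon_t$$

is stationary; if one difference were not enough, the second difference $\Delta^2 y_t = \Delta y_t - \Delta y_{t-1}$ would be examined, and stationarity there would make the original series I(2).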

Properties of Integrated Processes

Empirical Test of Whether Gross Domestic Product Is a Stationary Process

Using quarterly data for U.S. Gross Domestic Product starting in 1947, we can test whether the series is a stationary process.  If it is not, the next step is to difference the data until the resulting series is stationary.

First we input the data and run the commands that tell STATA to recognize the date variable as a time variable and the data set as a time series.
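One way the input step might look in Stata (the file name and format are purely illustrative assumptions):

* Sketch: read the quarterly GDP data from a delimited file (file name is an assumption)
import delimited using "us_gdp_quarterly.csv", clear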

The date is stored as a string variable; the following commands convert date to a date variable and format it accordingly:
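A sketch of that conversion, keeping the variable names date and time2 from the text (the date mask assumes strings like "1947q1"):

* Convert the string date to a Stata quarterly date and display it in quarterly format
gen time2 = quarterly(date, "YQ")
format time2 %tq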

Now that STATA recognizes time2 as a date, we can issue the command that tells it to treat the data set as a time series:
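Continuing the sketch (the GDP series is assumed to be named gdp):

* Declare the data as a quarterly time series and plot GDP over time
tsset time2
tsline gdp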

As one can see from the graph above, GDP has a definite time trend.  This suggests that GDP is non-stationary, but to check with a statistical test we can use a unit root test.  The graphics and data manipulation above were done in STATA, but just to mix it up a bit, the remainder of the analysis will be conducted in EViews.

Testing the First Difference

Selecting values for the trend and intercept terms, an Augmented Dickey-Fuller test was run to test whether GDP is a stationary process.  A non-stationary process is also called a unit root process, hence the name of the test.

The test shows that one cannot reject the hypothesis that GDP has a unit root, but could first differencing the data lead to a stationary time series? A second test (not shown) rejects the hypothesis that the first difference of GDP is a unit root process.  Still, one can see that the first-differenced series does not have time-independent variance, as shown below.

Testing the Second Difference of GDP

The second difference of GDP is a stationary process according to the Augmented Dickey-Fuller test.
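The sequence of tests described in this section could be reproduced roughly as follows (a sketch in Stata rather than EViews; the lag length and deterministic terms are assumptions):

* Sketch: ADF tests on GDP in levels, first differences, and second differences
dfuller gdp, trend lags(4)       // levels: cannot reject a unit root
dfuller d.gdp, trend lags(4)     // first difference
dfuller d2.gdp, lags(4)          // second difference: unit root rejected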

Because the ADF test rejects the null hypothesis that the second difference of GDP is non-stationary, the second difference is the appropriate transformation if one is going to use past values of GDP.   The final graph below shows the second difference of GDP, the series that was shown to be stationary by the ADF test.