## 1. Motivating Example

If you regress the current quarter’s inflation rate on the previous quarter’s rate using data from FRED over the period from Q3-1987 to Q4-2014, then you get the AR(1) point estimate,

(1)

where the number in parentheses denotes the standard error, and the inflation-rate time series has been demeaned. In other words, if the inflation rate is unexpectedly higher in Q1-2015, then on average it will still be somewhat higher in Q2-2015, higher again in Q3-2015, and so on… The function that describes the cascade of future inflation-rate changes due to an unexpected shock in the current period is known as the impulse-response function.

But many interesting time-series phenomena involve multiple variables. For example, Brunnermeier and Julliard (2008) show that the house-price appreciation rate is inversely related to the inflation rate. If you regress the current quarter’s inflation and house-price appreciation rates on the previous quarter’s rates using demeaned data from the Case-Shiller/S&P Index, then you get:

(2)

These point estimates indicate that, if the inflation rate were higher in Q1-2015, then the inflation rate would be higher in Q2-2015 and the house-price appreciation rate would be lower in Q2-2015.

Computing the impulse-response function for this vector auto-regression (VAR) is more difficult than computing the same function for the inflation-rate AR(1) because the inflation rate and house-price appreciation rate shocks are correlated:

(3)

In other words, when you see a positive shock to inflation, you also tend to see a positive shock to the house-price appreciation rate. Thus, computing the future effects of a shock to the inflation rate paired with no shock at all to the house-price appreciation rate gives you information about a unit shock that doesn’t happen in the real world. In this post, I show how to account for this sort of correlation when computing the impulse-response function for VARs. Here is the relevant code.

## 2. Impulse-Response Function

Before studying VARs, let’s first define the impulse-response function more carefully in the scalar world. Suppose we have some data generated by an AR(1),

$$x_t = \rho \cdot x_{t-1} + \epsilon_t \tag{4}$$

where $t = 1, 2, \ldots, T$, $|\rho| < 1$, and $\epsilon_t \overset{\scriptscriptstyle \mathrm{iid}}{\sim} \mathrm{N}(0, \sigma^2)$. For instance, if we’re looking at quarterly inflation data, then $\rho$ is the AR(1) point estimate from Equation (1). In this setup, what would happen if there was a sudden shock to $x_t$ in period $t$? How would we expect the level of $x_{t+1}$ to change? What about the level of $x_{t+2}$? Or, the level of any arbitrary $x_{t+h}$ for $h \geq 1$? How would a shock to the current inflation rate propagate into future quarters?

Well, it’s easy to compute the time-$t$ expectation of $x_{t+1}$:

$$\mathrm{E}_t[x_{t+1}] = \mathrm{E}_t[\rho \cdot x_t + \epsilon_{t+1}] = \rho \cdot x_t \tag{5}$$

Iterating on this same strategy then gives the time-$t$ expectation of $x_{t+2}$:

$$\mathrm{E}_t[x_{t+2}] = \rho \cdot \mathrm{E}_t[x_{t+1}] = \rho^2 \cdot x_t \tag{6}$$

So, in general, the time-$t$ expectation of any future $x_{t+h}$ will be given by the formula,

$$\mathrm{E}_t[x_{t+h}] = \rho^h \cdot x_t \tag{7}$$

and the impulse-response function for the AR(1) process will be:

$$\mathrm{IRF}(h) = \rho^h \tag{8}$$

If you knew that there was a sudden shock to $x_t$ of size $\epsilon$, then your expectation of $x_{t+h}$ would change by the amount $\rho^h \cdot \epsilon$. The figure below plots the impulse-response function implied by the AR(1) point estimate in Equation (1).
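To make this concrete, here is a short Python sketch of the AR(1) impulse-response calculation. The coefficient `rho = 0.5` is a made-up stand-in; the actual value is the point estimate in Equation (1):

```python
import numpy as np

# Hypothetical AR(1) coefficient; the real value is the point
# estimate from Equation (1).
rho = 0.5

# IRF(h) = rho**h: the change in the h-quarter-ahead forecast
# caused by a unit shock today, as in Equation (8).
horizons = np.arange(21)
irf = rho ** horizons

print(list(irf[:4]))  # first few values: 1, 0.5, 0.25, 0.125
```

Because $|\rho| < 1$, the response decays geometrically toward zero as the horizon grows.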

There’s another slightly different way you might think about an impulse-response function: namely, as the coefficients of the moving-average representation of the time series. Consider rewriting the data-generating process using lag operators,

$$(1 - \rho \cdot L)\, x_t = \epsilon_t \tag{9}$$

where $L\, x_t = x_{t-1}$, $L^2\, x_t = x_{t-2}$, and so on… Whenever the slope coefficient is smaller than $1$ in absolute value, $|\rho| < 1$, we know that $(1 - \rho \cdot L)^{-1} = 1 + \rho \cdot L + \rho^2 \cdot L^2 + \cdots$, and there exists a moving-average representation of $x_t$:

$$x_t = \sum_{h=0}^{\infty} \rho^h \cdot \epsilon_{t-h} \tag{10}$$

That is, rather than writing each $x_t$ as a function of a lagged value, $x_{t-1}$, and a contemporaneous shock, $\epsilon_t$, we can instead represent each $x_t$ as a weighted average of all the past shocks that’ve been realized, with more recent shocks weighted more heavily:

$$x_t = \epsilon_t + \rho \cdot \epsilon_{t-1} + \rho^2 \cdot \epsilon_{t-2} + \rho^3 \cdot \epsilon_{t-3} + \cdots \tag{11}$$

If we normalize all of the shocks to have unit variance, then the weights themselves will be given by the impulse-response function:

$$x_t = \sum_{h=0}^{\infty} \mathrm{IRF}(h) \cdot \epsilon_{t-h}, \qquad \mathrm{Var}[\epsilon_t] = 1 \tag{12}$$

Of course, this is exactly what you’d expect for a covariance-stationary process. The impact of past shocks on the current realized value had better be the same as the impact of current shocks on future values.
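This equivalence is easy to check numerically. The sketch below simulates an AR(1) with an assumed coefficient `rho = 0.5` and unit-variance shocks, then rebuilds the final observation as a weighted sum of all past shocks:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
rho, T = 0.5, 200  # assumed coefficient and sample length

# Simulate recursively: x_t = rho * x_{t-1} + eps_t, with x_0 = eps_0.
eps = rng.standard_normal(T)
x = np.empty(T)
x[0] = eps[0]
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]

# Rebuild the last observation from the moving-average representation,
# x_t = sum_h rho**h * eps_{t-h}, whose weights are the IRF values.
weights = rho ** np.arange(T)
x_ma = weights @ eps[::-1]

print(abs(x[-1] - x_ma) < 1e-10)  # -> True: the representations agree
```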

## 3. From ARs to VARs

We’ve just seen how to compute the impulse-response function for an AR(1) process. Let’s now examine how to extend this to the setting where there are two time series,

$$\begin{aligned} x_{1,t} &= a_{1,1} \cdot x_{1,t-1} + a_{1,2} \cdot x_{2,t-1} + \epsilon_{1,t} \\ x_{2,t} &= a_{2,1} \cdot x_{1,t-1} + a_{2,2} \cdot x_{2,t-1} + \epsilon_{2,t} \end{aligned} \tag{13}$$

instead of just one. This pair of equations can be written in matrix form as follows,

$$\boldsymbol{x}_t = \mathbf{A} \cdot \boldsymbol{x}_{t-1} + \boldsymbol{\epsilon}_t \tag{14}$$

where $\boldsymbol{x}_t = (x_{1,t},\, x_{2,t})^\top$ and $\boldsymbol{\epsilon}_t = (\epsilon_{1,t},\, \epsilon_{2,t})^\top$. For example, if you think about $x_{1,t}$ as the quarterly inflation rate and $x_{2,t}$ as the quarterly house-price appreciation rate, then the coefficient matrix $\mathbf{A}$ is given in Equation (2).

Nothing about the construction of the moving-average representation of $x_t$ demanded that $x_t$ be a scalar, so we can use the exact same tricks to write the $(2 \times 1)$-dimensional vector $\boldsymbol{x}_t$ as a moving average:

$$\boldsymbol{x}_t = \sum_{h=0}^{\infty} \mathbf{A}^h \cdot \boldsymbol{\epsilon}_{t-h} \tag{15}$$

But, it’s much less clear in this vector-valued setting how we’d recover the impulse-response function from the moving-average representation. Put differently, what’s the matrix analog of dividing each shock by its standard deviation, $\sigma^{-1} \cdot \epsilon_t$?

Let’s apply the want operator. This mystery matrix, let’s call it $\mathbf{C}^{-1}$, has to have two distinct properties. First, it’s got to rescale the vector of shocks, $\boldsymbol{\epsilon}_t$, into something that has a unit norm,

$$\mathrm{Var}[\mathbf{C}^{-1} \cdot \boldsymbol{\epsilon}_t] = \mathbf{I} \tag{16}$$

in the same way that $\mathrm{Var}[\sigma^{-1} \cdot \epsilon_t] = 1$ in the analysis above. This is why I’m writing the mystery matrix as an inverse, $\mathbf{C}^{-1}$, rather than just $\mathbf{C}$. Second, the matrix has to account for the fact that the shocks, $\epsilon_{1,t}$ and $\epsilon_{2,t}$, are correlated, so that positive shocks to the inflation rate are always accompanied by positive shocks to the house-price appreciation rate. Because the shocks to each variable might have different standard deviations, $\sigma_1 \neq \sigma_2$, the effect of a shock to the inflation rate on the house-price appreciation rate will be different than the effect of a shock to the house-price appreciation rate on the inflation rate. Thus, each variable in the vector $\boldsymbol{x}_t$ will have its own impulse-response function. This is why I write the mystery matrix as a full matrix, $\mathbf{C}^{-1}$, rather than as a diagonal matrix of inverse standard deviations.

It turns out that, if we pick $\mathbf{C}$ to be the Cholesky decomposition of the variance-covariance matrix of the shocks,

$$\mathbf{C} \cdot \mathbf{C}^\top = \boldsymbol{\Sigma} = \mathrm{Var}[\boldsymbol{\epsilon}_t] \tag{17}$$

then $\mathbf{C}^{-1}$ will have both of the properties we want, as pointed out in Sims (1980). The simple $(2 \times 2)$-dimensional case is really useful for understanding why. To start with, let’s write out the variance-covariance matrix of the shocks, $\boldsymbol{\Sigma}$, as follows,

$$\boldsymbol{\Sigma} = \begin{pmatrix} \sigma_1^2 & \gamma \cdot \sigma_1 \sigma_2 \\ \gamma \cdot \sigma_1 \sigma_2 & \sigma_2^2 \end{pmatrix} \tag{18}$$

where $\gamma = \mathrm{Cor}[\epsilon_{1,t}, \epsilon_{2,t}]$ denotes the correlation between the two shocks. The Cholesky decomposition of $\boldsymbol{\Sigma}$ can then be solved by hand:

$$\mathbf{C} = \begin{pmatrix} \sigma_1 & 0 \\ \gamma \cdot \sigma_2 & \sqrt{1 - \gamma^2} \cdot \sigma_2 \end{pmatrix} \tag{19}$$

Since we’re only working with a $(2 \times 2)$-dimensional matrix, we can also solve for $\mathbf{C}^{-1}$ by hand:

$$\mathbf{C}^{-1} = \begin{pmatrix} 1/\sigma_1 & 0 \\ -\gamma/\left(\sigma_1 \sqrt{1 - \gamma^2}\right) & 1/\left(\sigma_2 \sqrt{1 - \gamma^2}\right) \end{pmatrix} \tag{20}$$

So, for example, if there is a pair of shocks, $\boldsymbol{\epsilon}_t = (\epsilon_{1,t},\, \epsilon_{2,t})^\top$, then $\mathbf{C}^{-1}$ will convert this shock into:

$$\mathbf{C}^{-1} \cdot \boldsymbol{\epsilon}_t = \begin{pmatrix} \epsilon_{1,t}/\sigma_1 \\ \left(\epsilon_{2,t}/\sigma_2 - \gamma \cdot \epsilon_{1,t}/\sigma_1\right)/\sqrt{1 - \gamma^2} \end{pmatrix} \tag{21}$$

In other words, the matrix $\mathbf{C}^{-1}$ rescales $\boldsymbol{\epsilon}_t$ to have unit norm, $\mathrm{Var}[\mathbf{C}^{-1} \cdot \boldsymbol{\epsilon}_t] = \mathbf{I}$, and rotates the vector to account for the correlation between $\epsilon_{1,t}$ and $\epsilon_{2,t}$. To appreciate how the rotation takes this positive correlation into account, notice that the matrix turns a positive inflation shock with no accompanying house-price shock, $\epsilon_{1,t} > 0$ and $\epsilon_{2,t} = 0$, into a vector with a positive first component and a negative second component. That is, given that you’ve observed a positive inflation shock, observing no shock at all to the house-price appreciation rate would be a surprisingly low result.
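Both properties are easy to verify numerically. The standard deviations and correlation below are made-up illustrative values; the real ones would come from the estimated shocks behind Equation (3):

```python
import numpy as np

# Assumed (illustrative) shock standard deviations and correlation.
s1, s2, g = 0.5, 1.5, 0.3

# Variance-covariance matrix of the shocks, as in Equation (18).
Sigma = np.array([[s1**2,    g*s1*s2],
                  [g*s1*s2,  s2**2]])

# Lower-triangular Cholesky factor: C @ C.T reproduces Sigma.
C = np.linalg.cholesky(Sigma)
C_inv = np.linalg.inv(C)

# Property 1: C^{-1} rescales the shocks to unit variance,
# Var[C^{-1} eps] = C^{-1} Sigma (C^{-1})' = I.
unit_var = C_inv @ Sigma @ C_inv.T

# Property 2: a positive shock to variable 1 alone gets rotated into a
# vector with a negative second component, because with positive
# correlation a zero second shock is a surprisingly low draw.
z = C_inv @ np.array([1.0, 0.0])

print(np.allclose(unit_var, np.eye(2)), z[1] < 0)  # -> True True
```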

If we plug $\mathbf{C} \cdot \mathbf{C}^{-1} = \mathbf{I}$ into our moving-average representation of $\boldsymbol{x}_t$, then we get the expression below,

$$\boldsymbol{x}_t = \sum_{h=0}^{\infty} \left(\mathbf{A}^h \cdot \mathbf{C}\right) \cdot \left(\mathbf{C}^{-1} \cdot \boldsymbol{\epsilon}_{t-h}\right) \tag{22}$$

implying that the impulse-response function for $\boldsymbol{x}_t$ is given by:

$$\mathbf{IRF}(h) = \mathbf{A}^h \cdot \mathbf{C} \tag{23}$$

The figure below plots the impulse-response function for both the inflation rate and the house-price appreciation rate implied by a unit orthogonalized shock to the inflation rate, using the coefficient matrix from Equation (2).
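Putting everything together, the orthogonalized impulse-response function takes only a few lines to compute. The coefficient matrix and shock covariance below are stand-ins; the estimated values are the ones reported in Equations (2) and (3):

```python
import numpy as np

# Stand-in VAR(1) coefficients and shock covariance (illustrative only;
# the estimated values appear in Equations (2) and (3)).
A = np.array([[ 0.5, -0.1],
              [-0.2,  0.6]])
Sigma = np.array([[0.25, 0.10],
                  [0.10, 1.00]])

# Cholesky factor used to orthogonalize the shocks.
C = np.linalg.cholesky(Sigma)

# IRF(h) = A^h @ C. Entry [h, i, j] is the response of variable i,
# h periods ahead, to a one-standard-deviation orthogonalized shock
# to variable j today.
H = 20
irf = np.stack([np.linalg.matrix_power(A, h) @ C for h in range(H + 1)])

print(irf.shape)  # -> (21, 2, 2)
```

Plotting `irf[:, :, 0]` against the horizon traces out the responses of both variables to an orthogonalized shock to the first variable, which is what the figure shows.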