# Wavelet Variance

## 1. Motivation

Imagine you’re a trader who’s about to put on a position for the next month. You want to hedge away the risk in this position associated with daily fluctuations in market returns. One way that you might do this would be to short the S&P 500 by selling E-mini futures, since these contracts are among the most liquid in the world.

But… how much of the variation in the index’s returns is due to fluctuations at the daily horizon? e.g., the blue line in the figure to the right shows the minute-by-minute price of the E-mini contract on May 6th, 2010 during the flash crash. Over the course of a few minutes, the contract price fell sharply! It then rebounded back to nearly its original level over the next hour. Clearly, if most of the fluctuation in the E-mini S&P 500 contract’s value is due to shocks on the sub-hour time scale, this contract will do a poor job hedging away daily market risk.

This post demonstrates how to decompose the variance of a time series (e.g., the minute-by-minute returns on the E-mini) into horizon-specific components using wavelets. i.e., using the wavelet variance estimator allows you to ask the questions: “How much of the variance is coming from fluctuations on the scale of minutes? Hours? Days? Months?” I then investigate how this wavelet variance approach compares to other methods financial economists might employ, such as auto-regressive models and spectral analysis.

## 2. Wavelet Analysis

In order to explain how the wavelet variance estimator works, I first need to give a quick outline of how wavelets work. Wavelets allow you to decompose a signal into components that are independent in both the time and frequency domains. This outline will be as bare bones as possible. See Percival and Walden (2000) for an excellent overview of the topic.

Imagine you’ve got a time series of $T = 8$ returns:

$$
\mathbf{r} = \begin{pmatrix} r_1 & r_2 & r_3 & \cdots & r_8 \end{pmatrix}^{\top} \tag{1}
$$

and assume for simplicity that these returns have mean $\bar{r} = 0$. One thing that you might do with this time series is estimate a regression with time fixed effects: $r_t = \sum_{s=1}^{8} \mu_s \cdot 1_{\{s = t\}} + \varepsilon_t$. Here is another way to represent the same regression:

$$
\begin{pmatrix} r_1 \\ r_2 \\ \vdots \\ r_8 \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_8 \end{pmatrix} \tag{2}
$$

It’s really a trivial projection since $\hat{\mu}_t = r_t$. Call the projection matrix for “fixed effects” $\mathbf{F}$ so that $\mathbf{r} = \mathbf{F} \boldsymbol{\mu}$.

Obviously, the above time fixed-effects model would be a bit of a silly thing to estimate, but notice that the projection matrix has an interesting property. Namely, its columns are orthonormal:

$$
\sum_{t=1}^{8} F_{t,s} \cdot F_{t,s'} =
\begin{cases}
1 & \text{if } s = s' \\
0 & \text{else}
\end{cases} \tag{3}
$$

It’s *orthogonal* because $\sum_{t} F_{t,s} \cdot F_{t,s'} = 0$ unless $s = s'$. This requirement implies that each column in the projection matrix is picking up different information about $\mathbf{r}$. It’s *normal* because each column’s squared length, $\sum_{t} F_{t,s}^2$, is normalized to equal $1$. This requirement implies that the projection matrix is leaving the magnitude of $\mathbf{r}$ unchanged. The time fixed-effects projection matrix, $\mathbf{F}$, treats each time period separately, but you can also think about using other orthonormal bases.

e.g., the Haar wavelet projection matrix compares how the 1st half of the time series differs from the 2nd half, how the 1st quarter differs from the 2nd quarter, how the 3rd quarter differs from the 4th quarter, how the 1st eighth differs from the 2nd eighth, and so on… For the $8$-period return time series, let’s denote the columns of the wavelet projection matrix $\mathbf{W}$ as $\mathbf{w}_{h,k}$, where $h$ indexes the horizon and $k$ the comparison group:

$$
\begin{aligned}
\mathbf{w}_{4,1} &= \tfrac{1}{\sqrt{8}} \cdot \begin{pmatrix} 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \end{pmatrix}^{\top} \\
\mathbf{w}_{2,1} &= \tfrac{1}{2} \cdot \begin{pmatrix} 1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 \end{pmatrix}^{\top} \\
\mathbf{w}_{2,2} &= \tfrac{1}{2} \cdot \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 \end{pmatrix}^{\top} \\
\mathbf{w}_{1,1} &= \tfrac{1}{\sqrt{2}} \cdot \begin{pmatrix} 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}^{\top} \\
&\;\;\vdots \\
\mathbf{w}_{1,4} &= \tfrac{1}{\sqrt{2}} \cdot \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \end{pmatrix}^{\top}
\end{aligned} \tag{4}
$$

and simple inspection shows that the columns are orthonormal:

$$
\mathbf{w}_{h,k}^{\top} \mathbf{w}_{h',k'} =
\begin{cases}
1 & \text{if } (h,k) = (h',k') \\
0 & \text{else}
\end{cases} \tag{5}
$$
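To make this orthonormality condition concrete, here is a quick NumPy check on the full $8 \times 8$ Haar matrix; this is a sketch, and the inclusion of a constant column plus the column ordering are my own choices:

```python
import numpy as np

# Unit-norm Haar columns for T = 8: a constant column plus seven wavelet
# columns comparing halves, quarters, and successive eighths.
W = np.column_stack([
    np.ones(8) / np.sqrt(8),                                 # overall mean
    np.array([1, 1, 1, 1, -1, -1, -1, -1]) / np.sqrt(8),     # 1st vs 2nd half
    np.array([1, 1, -1, -1, 0, 0, 0, 0]) / 2,                # 1st vs 2nd quarter
    np.array([0, 0, 0, 0, 1, 1, -1, -1]) / 2,                # 3rd vs 4th quarter
    np.array([1, -1, 0, 0, 0, 0, 0, 0]) / np.sqrt(2),        # successive eighths…
    np.array([0, 0, 1, -1, 0, 0, 0, 0]) / np.sqrt(2),
    np.array([0, 0, 0, 0, 1, -1, 0, 0]) / np.sqrt(2),
    np.array([0, 0, 0, 0, 0, 0, 1, -1]) / np.sqrt(2),
])

# Orthonormal: W'W = I, so the basis neither mixes columns nor changes
# the length of the vector being projected.
assert np.allclose(W.T @ W, np.eye(8))
```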

Let’s look at a concrete example. Suppose that we want to project the mean-zero vector:

$$
\mathbf{r} = \begin{pmatrix} 2 & 0 & 1 & -1 & -2 & 0 & 1 & -1 \end{pmatrix}^{\top} \tag{6}
$$

onto the wavelet basis:

$$
\mathbf{r} = \sum_{h \in \{1,2,4\}} \sum_{k} \rho_{h,k} \cdot \mathbf{w}_{h,k} \tag{7}
$$

What would the wavelet coefficients look like? Well, a little trial and error shows that:

$$
\rho_{4,1} = \sqrt{2}, \quad
\rho_{2,1} = 1, \quad
\rho_{2,2} = -1, \quad
\rho_{1,1} = \rho_{1,2} = \sqrt{2}, \quad
\rho_{1,3} = -\sqrt{2}, \quad
\rho_{1,4} = \sqrt{2} \tag{8}
$$

since this is the only combination of coefficients that satisfies both:

$$
\mathbf{r} = \mathbf{W} \boldsymbol{\rho} \tag{9}
$$

and $\mathbf{w}_{h,k}^{\top} \mathbf{w}_{h',k'} = 0$ for all $(h,k) \neq (h',k')$.
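Rather than trial and error, the coefficients can be computed directly as scaled differences of block sums. The sketch below uses an illustrative mean-zero vector of my own choosing and verifies that the orthonormal projection preserves the vector’s squared length:

```python
import numpy as np

# An illustrative mean-zero vector; the numbers are my own, chosen so that
# the Haar coefficients come out in round multiples of sqrt(2).
r = np.array([2.0, 0.0, 1.0, -1.0, -2.0, 0.0, 1.0, -1.0])

# h = 1: compare successive single periods (four comparisons).
rho_1 = (r[0::2] - r[1::2]) / np.sqrt(2)

# h = 2: compare successive pairs of periods (two comparisons).
pair_sums = r.reshape(4, 2).sum(axis=1)
rho_2 = (pair_sums[0::2] - pair_sums[1::2]) / 2

# h = 4: compare the 1st half of the series to the 2nd half.
rho_4 = (r[:4].sum() - r[4:].sum()) / np.sqrt(8)

# Because the basis is orthonormal, the projection preserves the vector's
# squared length: the sum of squared coefficients equals r'r.
energy = (rho_1 ** 2).sum() + (rho_2 ** 2).sum() + rho_4 ** 2
assert np.isclose(energy, r @ r)
```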

What’s cool about the wavelet projection is that the coefficients represent effects that are isolated in both the frequency and time domains. The index $h$ denotes the length of the wavelet comparison groups. e.g., the wavelets with $h = 1$ compare 1-period increments: the 1st period to the 2nd period, the 3rd period to the 4th period, and so on… Similarly, the wavelets with $h = 2$ compare 2-period increments: the 1st pair of periods to the 2nd pair, and the 3rd pair to the 4th pair. Thus, the $h$ captures the location of the coefficient in the frequency domain. The index $k$ signifies which comparison group at horizon $h$ we are looking at. e.g., when $h = 1$, there are $4$ different comparisons to be made. Thus, the $k$ captures the location of the coefficient in the time domain.

## 3. Wavelet Variance

With these basics in place, it’s now easy to define the wavelet variance of a time series. First, I massage the standard representation of a series’ variance a bit. The variance of our time series is defined as:

$$
\mathrm{Var}(\mathbf{r}) = \frac{1}{T} \sum_{t=1}^{T} (r_t - \bar{r})^2 = \frac{1}{T} \cdot \mathbf{r}^{\top} \mathbf{r} \tag{10}
$$

since $\bar{r} = 0$. Using the tools from the section above, let’s rewrite $\mathbf{r} = \mathbf{W} \boldsymbol{\rho}$. This means that the variance formula becomes:

$$
\mathrm{Var}(\mathbf{r}) = \frac{1}{T} \cdot (\mathbf{W} \boldsymbol{\rho})^{\top} (\mathbf{W} \boldsymbol{\rho}) = \frac{1}{T} \cdot \boldsymbol{\rho}^{\top} \mathbf{W}^{\top} \mathbf{W} \boldsymbol{\rho} \tag{11}
$$

But I know that $\mathbf{W}^{\top} \mathbf{W} = \mathbf{I}$ since the columns of $\mathbf{W}$ are orthonormal. Thus:

$$
\mathrm{Var}(\mathbf{r}) = \frac{1}{T} \cdot \boldsymbol{\rho}^{\top} \boldsymbol{\rho} = \frac{1}{T} \sum_{h} \sum_{k} \rho_{h,k}^2 \tag{12}
$$

This representation gives the variance of a series as an average of squared wavelet coefficients.

The sum of the squared wavelet coefficients at each horizon $h$ is then an interesting object:

$$
\nu^2(h) = \frac{1}{T} \sum_{k} \rho_{h,k}^2 \tag{13}
$$

since $\nu^2(h)$ captures the share of the total variance of the time series explained by comparing successive periods of length $h$. I refer to $\nu^2(h)$ as the wavelet variance of a series at horizon $h$. The sum of the wavelet variances at each horizon gives total variance:

$$
\mathrm{Var}(\mathbf{r}) = \sum_{h} \nu^2(h) \tag{14}
$$
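Equations (13) and (14) are easy to verify numerically. The sketch below runs a Haar pyramid over a simulated mean-zero series and checks that the horizon-specific wavelet variances sum to the total variance; the series and its length are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1024
r = rng.standard_normal(T)
r -= r.mean()                      # the derivation assumes a mean-zero series

total_var = (r @ r) / T

# Haar pyramid: wavelet coefficients at each level are scaled differences of
# adjacent block averages; the smooth part carries on to the coarser level.
nu2, smooth, h = {}, r.copy(), 1   # nu2 maps horizon h -> nu^2(h)
while len(smooth) > 1:
    a, b = smooth[0::2], smooth[1::2]
    detail = (a - b) / np.sqrt(2)  # unit-norm Haar differences
    smooth = (a + b) / np.sqrt(2)  # unit-norm Haar averages
    nu2[h] = (detail ** 2).sum() / T
    h *= 2

# Equation (14): the horizon-specific pieces add up to the total variance.
assert np.isclose(sum(nu2.values()), total_var)
```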

## 4. Numerical Example

Let’s take a look at how the wavelet variance of a time series behaves out in the wild. The code I used to create the figures is available online. Specifically, let’s study the simulated data plotted below, which consists of minute-by-minute return data over many trading days with day-specific shocks:

$$
r_t = \lambda \cdot 1_{\{d(t) \in \mathcal{S}\}} + \sigma \cdot \varepsilon_t, \qquad \varepsilon_t \overset{\mathrm{iid}}{\sim} \mathrm{N}(0,1) \tag{15}
$$

where $d(t)$ denotes the day containing minute $t$, $\mathcal{S}$ is the set of shock days, the volatility of the process is given by $\sigma$, and there is a probability $p$ of realizing a shock on any given day. The days on which the data realized a shock are highlighted in red. The minute-by-minute parameters are scaled to deliver realistic annualized return and volatility figures.
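A minimal sketch of this simulation in NumPy; the parameter values here are my own illustrative choices, not the ones behind the post’s figures:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters -- my own choices, not those used for the figures.
minutes_per_day = 390        # one 6.5-hour trading session
n_days = 64
sigma = 5e-4                 # per-minute return volatility
lam = 2e-4                   # extra per-minute drift on shock days
p_shock = 0.1                # chance a given day realizes a shock

# Draw the shock days, then build the minute-by-minute return series (15).
shock_day = rng.random(n_days) < p_shock
r = sigma * rng.standard_normal(n_days * minutes_per_day) \
    + lam * np.repeat(shock_day, minutes_per_day)
```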

The figure below then plots the wavelet coefficients, $\rho_{h,k}$, at each horizon associated with this time series. A trading day is $390$ minutes long, so notice the spikes in the coefficient values near the day-specific shock dates in the panels corresponding to comparisons of successive intervals roughly a day in length. The remaining variation in the coefficient levels comes from the underlying white-noise process $\varepsilon_t$. Because the break points in the wavelet projection affect the estimated coefficients, each data point in the plot actually represents the average of the coefficient estimates at a given point in time over all possible starting dates. See Percival and Walden (2000, Ch. 5) on the maximal overlap discrete wavelet transform for details.

Finally, I plot the $\log_2$ of the wavelet variance at each horizon for both the simulated return process (red) and a white-noise process with an identical mean and variance (blue). Note that I’ve switched from time $t$ to horizon $h$ on the $x$-axis here, so a spike in the amount of variance at a given $h$ corresponds to a spike in the amount of variance explained by comparing successive $h$-minute increments. This is exactly what you’d expect for day-specific shocks, which have a duration of one trading day as indicated by the vertical gray line. The wavelet variance of an appropriately scaled white-noise process gives a nice comparison group. To see why, note that for covariance-stationary processes like white noise, the wavelet variance at a particular horizon is related to the power spectrum $S(f)$ as follows:

$$
\nu^2(h) \approx 2 \int_{1/(4h)}^{1/(2h)} S(f) \, \mathrm{d}f \tag{16}
$$

Thus, since white noise has a flat power spectrum, $S(f) = \sigma^2$, its wavelet variance should follow a power law:

$$
\nu^2(h) = \frac{\sigma^2}{2h} \propto h^{-1} \tag{17}
$$

giving a nice smooth reference point in plots.
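This power law is easy to confirm in simulation: the Haar wavelet variance of white noise should halve every time the horizon doubles. A sketch, with $\sigma = 1$ and a sample size of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
r = sigma * rng.standard_normal(2 ** 16)   # white noise

# Haar pyramid: record the wavelet variance at each level, then pass the
# smoothed (averaged) series on to the next, coarser level.
nu2, smooth, h = {}, r.copy(), 1
while len(smooth) > 1:
    a, b = smooth[0::2], smooth[1::2]
    nu2[h] = (((a - b) / np.sqrt(2)) ** 2).sum() / len(r)
    smooth = (a + b) / np.sqrt(2)
    h *= 2

# For white noise, nu^2(h) = sigma^2 / (2h): the wavelet variance halves
# every time the horizon doubles, a straight line of slope -1 in log-log.
for horizon in (1, 2, 4, 8):
    assert abs(nu2[horizon] - sigma ** 2 / (2 * horizon)) < 0.02
```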

## 5. Comparing Techniques

I conclude by considering how the wavelet variance statistic compares to other ways that a financial economist might look for horizon-specific effects in data. I consider two alternatives: auto-regressive models and spectral density estimators. First, consider estimating the auto-regressive model below with lags $\ell = 1, 2, \ldots, L$:

$$
r_t = \sum_{\ell=1}^{L} \phi_\ell \cdot r_{t-\ell} + \varepsilon_t \tag{18}
$$

The left-most panel of the figure below reports the estimated values of $\hat{\phi}_\ell$ for lags $\ell = 1, 2, \ldots, L$ using the simulated data (red) as well as a scaled white-noise process (blue). Just as before, the vertical grey line denotes the number of minutes in a trading day. There is no meaningful difference between the two sets of coefficients. The reason is that the day-specific shocks are asynchronous. They aren’t coming at regular intervals. Thus, no obvious lag structure can emerge from the data.

Next, let’s think about estimating the spectral density of $r_t$. This turns out to be the exact same exercise as the auto-regressive model estimation in different clothing. As shown in an earlier post, it’s possible to flip back and forth between the coefficients of an $\mathrm{AR}(L)$ process and its spectral density via the relationship:

$$
S(f) = \frac{\sigma_{\varepsilon}^2}{\left| 1 - \sum_{\ell=1}^{L} \phi_\ell \cdot e^{-i \cdot 2 \pi f \cdot \ell} \right|^2} \tag{19}
$$

This one-to-one mapping between the frequency domain and the time domain for covariance-stationary processes is known as the Wiener–Khinchin theorem, with $S(f) = \sum_{\ell=-\infty}^{\infty} \gamma(\ell) \cdot e^{-i \cdot 2 \pi f \cdot \ell}$ where $\gamma(\ell)$ denotes the autocovariance at lag $\ell$. Thus, the spectral density plot just reflects the same random noise as the auto-regressive model coefficients because of the same issue with asynchrony. The most interesting features of the middle panel occur at really high frequencies, which have nothing to do with the day-specific shocks.
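As a sanity check on equation (19) and the Wiener–Khinchin mapping, the sketch below integrates the spectral density of an AR(1) over all frequencies and recovers the lag-0 autocovariance, $\sigma_{\varepsilon}^2 / (1 - \phi^2)$; the parameter values are arbitrary illustrative choices:

```python
import numpy as np

# AR(1) parameters -- arbitrary illustrative values.
phi, sigma2 = 0.5, 1.0

# Spectral density from eq. (19), on a midpoint grid over [-1/2, 1/2].
N = 200_000
f = (np.arange(N) + 0.5) / N - 0.5
S = sigma2 / np.abs(1 - phi * np.exp(-2j * np.pi * f)) ** 2

# Wiener-Khinchin check: integrating S(f) across frequencies recovers the
# lag-0 autocovariance of the AR(1), sigma2 / (1 - phi^2).
var_from_spectrum = S.mean()   # midpoint rule; the interval has length 1
assert np.isclose(var_from_spectrum, sigma2 / (1 - phi ** 2), rtol=1e-4)
```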

Here’s the punchline. The wavelet variance is the only estimator of the three which can identify horizon-specific contributions to a time series’ variance that are not stationary.
