I give some intuition behind the multiplicative decomposition of the stochastic discount factor introduced in Hansen and Scheinkman (2009). The economics underlying the original Hansen and Scheinkman (2009) results was not clear to me during my initial readings. This post collects my efforts to interpret these mathematical ideas in a sensible way.
Below I formally state the decomposition.
Theorem (Hansen and Scheinkman Decomposition): Suppose that $\phi$ is a principal eigenfunction with eigenvalue $\rho$ for the extended generator of the stochastic discount factor $S = \{S_t : t \geq 0\}$. Then this multiplicative functional can be decomposed as:

$$S_t = e^{\rho t} \, \frac{\phi(X_0)}{\phi(X_t)} \, \widehat{M}_t$$

where $\widehat{M} = \{\widehat{M}_t : t \geq 0\}$ is a local martingale.
The stochastic discount factor $S_t$ dictates how to discount cashflows occurring $t$ periods in the future in state $X_t$. Roughly speaking, Hansen and Scheinkman (2009) factors $S_t$ into three different pieces: a state independent component $e^{\rho t}$, an investment horizon independent component $\phi(X_0)/\phi(X_t)$, and a white noise component $\widehat{M}_t$.
Thus, you should think about $\rho$ as a generalized time preference parameter. $\rho$ will generally be negative, so $e^{\rho t}$ is the continuous time representation of the state independent discount rate dictated by an asset pricing model. The ratio $\phi(X_0)/\phi(X_t)$ captures the rate at which I discount payments at time $t$ given the state today at time $0$ and the state at time $t$. This ratio is independent of the horizon $t$, meaning that if $X_t = X_{t'}$, then for any $t$ and $t'$ we have:

$$\frac{\phi(X_0)}{\phi(X_t)} = \frac{\phi(X_0)}{\phi(X_{t'})}$$
Finally, $\widehat{M}_t$ represents a random noise component with $\mathbb{E}[\widehat{M}_t] = 1$ and independent increments.
The Hansen and Scheinkman decomposition generalizes the binomial options pricing framework for use in standard asset pricing applications by allowing for more complicated state space features like jumps and time averaging.2 The main advantages of casting the stochastic discount factor as a multiplicative functional are the use of the binomial pricing intuition to understand more complicated asset pricing models and the streamlining of the econometrics needed to compare excess returns at different horizons.3
To illustrate the basic intuition behind this analogy, I work through the Black, Derman and Toy (1990) model.
Example (Binomial Model): Consider a discrete time, binomial world with states $\{u, d\}$ in which traders have an independent probability $\pi$ of entering state $u$ in the next period regardless of the current state. In this world, the price at time $t$ of a risk free bond that pays out $1 at time $(t+1)$ is given by the expression:

$$P_t = \frac{1}{1 + r_t} \left[ \pi \cdot 1 + (1 - \pi) \cdot 1 \right] = \frac{1}{1 + r_t}$$
This 1 step ahead pricing rule applies at each and every starting date $t$. All pricing computations at longer horizons are built up from this local relationship based on the prevailing short rate $r_t$.
To solve the model, I need to assume that the short rate process $\{r_t\}$ has independent log-normal increments. I could then use the volatility of this process to pin down the values of the short rate at each node of the binomial tree.
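To make this 1 step ahead pricing rule concrete, here is a minimal Python sketch that builds a binomial short rate tree with log-normal increments and prices a zero coupon bond by backward induction. The calibration (`r0`, `sigma`, the up probability `pi`) is hypothetical, not taken from Black, Derman and Toy (1990).

```python
import numpy as np

def bdt_tree(r0, sigma, n):
    """Short-rate tree with log-normal increments: at step t, node j carries
    rate r0 * exp(sigma * (2j - t)) for j = 0..t (a hypothetical calibration)."""
    return [[r0 * np.exp(sigma * (2 * j - t)) for j in range(t + 1)] for t in range(n)]

def bond_price(tree, pi=0.5):
    """Price a zero-coupon bond paying $1 after len(tree) periods by backward
    induction on the one-step rule P = [pi * P_up + (1 - pi) * P_down] / (1 + r)."""
    n = len(tree)
    values = [1.0] * (n + 1)  # payoff of $1 in every terminal node
    for t in range(n - 1, -1, -1):
        values = [(pi * values[j + 1] + (1 - pi) * values[j]) / (1 + tree[t][j])
                  for j in range(t + 1)]
    return values[0]

price = bond_price(bdt_tree(r0=0.05, sigma=0.2, n=3))
```

Every long-horizon price is assembled from repeated applications of the same local one-period rule, which is exactly the feature the operator approach generalizes.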
In general, models of this sort are easy to solve analytically if the short rate process has log-normal increments. The recent papers Lettau and Wachter (2007), Van Binsbergen, Brandt and Koijen (2010) and Backus, Chernov and Zin (2011) adopt similar approaches and try to extend these insights to equity markets.
Nevertheless, most asset pricing models are not log-normal and do not admit pen and paper analysis of their term structure using existing methods. Thus, in order to use cross-horizon predictions to discriminate between alternative models, we must adopt new mathematical tools.
Example (Binomial Model, Ctd…): We use operator methods to factor the discount factor process $S_{t,t+h}$, which deflates payments in state $X_{t+h}$ at time horizon $(t+h)$ back to time $t$, into three pieces, $e^{\rho h}$, $\phi(X_t)/\phi(X_{t+h})$ and $\widehat{M}_{t,t+h}$, where the first factor only depends on the investment horizon $h$, the second factor only depends on the realized states and the third factor is noise, so that:

$$S_{t,t+h} = e^{\rho h} \, \frac{\phi(X_t)}{\phi(X_{t+h})} \, \widehat{M}_{t,t+h}$$
By visual analogy to the Black, Derman and Toy (1990) model, in a binomial world we can use this decomposition to rewrite the Euler equation below, where the dependence on the state is implicit:

$$1 = \mathbb{E}_t\!\left[ e^{\rho} \, \frac{\phi(X_t)}{\phi(X_{t+1})} \, \widehat{M}_{t+1} \cdot R_{t+1} \right]$$

where $R_{t+1}$ denotes a gross asset return.
Thus, in the Hansen and Scheinkman (2009) decomposition, $e^{\rho}$ serves as a synthetic risk free rate and the increments of $\widehat{M}_t$ serve as the twisted martingale measure.
In my work with Anmol Bhandari4 we look at a class of models for which $\log S_t$ is affine5 and show how to use this decomposition to compute a cross-horizon analogue to the Hansen and Jagannathan (1991) volatility bound. This new bound can be used to discriminate between different models which make identical predictions at a particular horizon. This exponentially affine structure is useful as it permits closed form solutions for the moments of $S_t$:

$$\mathbb{E}\left[ S_t \mid X_0 = x \right] = \exp\{\alpha(t) + \beta(t) \cdot x\}$$
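As a toy illustration of why exponentially affine structure delivers closed form moments, suppose the log discount factor is conditionally Gaussian, $\log S = a + bZ$ with $Z \sim N(0,1)$; then every moment is a simple exponential. The sketch below, with hypothetical values of `a` and `b`, verifies the closed form against Gauss-Hermite quadrature.

```python
import numpy as np

# Toy stand-in for the conditionally Gaussian affine class:
#   log S = a + b * Z,  Z ~ N(0, 1)  =>  E[S**n] = exp(n*a + 0.5 * n**2 * b**2)
a, b = -0.04, 0.3

def moment_closed_form(n):
    return np.exp(n * a + 0.5 * n**2 * b**2)

def moment_quadrature(n, order=60):
    # Gauss-Hermite rule: E[g(Z)] = (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * x_i)
    x, w = np.polynomial.hermite.hermgauss(order)
    return np.sum(w * np.exp(n * (a + b * np.sqrt(2) * x))) / np.sqrt(np.pi)
```

The closed form and the quadrature agree to floating point precision, which is the computational payoff of the affine assumption.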
In the next sections, I walk through the economics governing the $e^{\rho t}$ and $\phi(X_0)/\phi(X_t)$ terms.
Where does $\rho$ come from? In the original article, the authors refer to $\rho$ as the principal eigenvalue of the extended generator of $S_t$; however, $\rho$ has a well defined meaning without ever subscribing to Perron-Frobenius theory. $\rho$ is a generalization of the time preference parameter dictated by an asset pricing model.
Consider the following thought experiment which casts the $\rho$ term as the time preference parameter plus an extra Jensen's inequality term.
Example (Generalized Time Preference): Suppose that an agent has preferences $\sum_t \beta^t u(c_t)$ over a stream of consumption $\{c_t\}$ and that in each period $t$, $c_t = c_u$ or $c_t = c_d$ with equal probability. While $\mathbb{E}[c_t] = \tfrac{1}{2}(c_u + c_d)$, the certainty equivalent is:

$$\mathrm{CE}[c_t] = u^{-1}\!\left( \tfrac{1}{2} u(c_u) + \tfrac{1}{2} u(c_d) \right) \leq \mathbb{E}[c_t]$$

In fact, with probability $1$ the agent will get a payout worth:

$$\mathrm{CE}[c_t] = \mathbb{E}[c_t] - \Delta$$

Let's call this certainty equivalent gap $\Delta$:

$$\Delta = \mathbb{E}[c_t] - \mathrm{CE}[c_t] \geq 0$$

$\rho$ should then include both time preference, $\beta$, and also the expected Jensen's inequality loss:

$$e^{\rho} \approx \beta \left( 1 - \frac{\Delta}{\mathbb{E}[c_t]} \right)$$
Thus, in a more general framework, we should expect $\rho$ to have roughly the following form:

$$\rho \approx \log \beta - f(\sigma^2)$$

where $f(\cdot)$ is an affine function. Heuristically, the $\sigma^2$ component will capture how volatile the state space is while the $f(\cdot)$ component will capture how badly I need to discount this consumption stream due to Jensen's inequality.
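The certainty equivalent gap is easy to compute directly. The sketch below uses log utility and a hypothetical two point consumption gamble to show that the certainty equivalent sits strictly below the mean.

```python
import math

# Hypothetical parameterization: consumption is c_u or c_d with equal
# probability each period, and the agent has log utility u(c) = log(c).
c_u, c_d = 1.2, 0.8

expected = 0.5 * c_u + 0.5 * c_d                                   # E[c]
cert_equiv = math.exp(0.5 * math.log(c_u) + 0.5 * math.log(c_d))   # u^{-1}(E[u(c)]) = sqrt(c_u * c_d)
gap = expected - cert_equiv                                        # Jensen's inequality loss
```

Under log utility the certainty equivalent is the geometric mean, so the gap grows with the spread between the two consumption outcomes, which is the sense in which state volatility shows up inside the generalized time preference parameter.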
Next, in order to capture the dependence of the discount factor on the current state $X_0$ and future state $X_t$, Hansen and Scheinkman (2009) downshift to continuous time and apply the Perron-Frobenius theorem to the infinitesimal generator of the discount factor. When applied to transition probability matrices, Perron-Frobenius theory implies that the largest eigen-pair dominates the behavior of a stochastic process as $t \to \infty$. Hansen and Scheinkman use this limiting result to argue that the ratio $\phi(X_0)/\phi(X_t)$, built from the principal eigen-function $\phi$ of the generator of the discount factor, is a good choice for the state dependent component of $S_t$.
It is important to note that Perron-Frobenius theory is only a modeling tool in the Hansen and Scheinkman (2009) construction, not a critical feature of their results. There may well be other reasonable choices for the state dependent component of $S_t$. In its simplest form6, the result can be written as:
Theorem (Perron-Frobenius): The largest eigen-value $\lambda^\star$ of a positive square matrix $A$ is both simple and positive and belongs to a positive eigenvector $v^\star$. All other eigen-values are smaller in absolute value.7
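The theorem is easy to see numerically: power iteration on a strictly positive matrix converges to a positive eigenvector whose eigenvalue dominates all others. The matrix below is a hypothetical example, chosen to be row stochastic so that its dominant eigenvalue is exactly $1$.

```python
import numpy as np

# A strictly positive, row-stochastic matrix (hypothetical values).
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

v = np.array([1.0, 0.0, 0.0])   # arbitrary starting vector
for _ in range(200):            # power iteration: v_{k+1} = A v_k / ||A v_k||
    v = A @ v
    v /= np.linalg.norm(v)

lam = v @ A @ v / (v @ v)       # Rayleigh quotient -> dominant eigenvalue
```

For a row-stochastic matrix the Perron eigenvector is the constant vector, and the iteration recovers it with a dominant eigenvalue of $1$; the subdominant eigenvalues (complex, with modulus well below $1$) die out geometrically, which is the "dominates as $t \to \infty$" intuition used above.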
In order to use this theorem, I need to have a positive square matrix to operate on. While strictly positive, the discount factor $S_t$ is not a square matrix; however, its infinitesimal generator is (in a finite state world). Heuristically, you can think about the infinitesimal generator as encoding the transition probability matrix under the equivalent martingale measure deflated by the time preference parameter.
Definition (Infinitesimal Generator): The infinitesimal generator $\mathcal{A}$ of an Ito diffusion $\{X_t\}$ in $\mathbb{R}^n$ is defined by:

$$(\mathcal{A} f)(x) = \lim_{h \searrow 0} \frac{\mathbb{E}\left[ f(X_h) \mid X_0 = x \right] - f(x)}{h}$$

where the set of functions $f$ such that the limit exists at $x$ is denoted by $\mathcal{D}_{\mathcal{A}}(x)$.
In words, the infinitesimal generator of the discount factor captures how my valuation of a $1 payment in, say, the up state will change if I move the payment from $t$ periods in the future to $(t + \Delta t)$ periods in the future. To get a feel for what the infinitesimal generator captures, consider the following short example using a 2 state Markov chain. First, I define the physical transition intensity matrix for the Markov process $\{X_t\}$.
Example (Markov Process w/ 2 States): Consider a 2 state Markov chain with states $\{u, d\}$. First, consider the physical evolution of the stochastic process which is governed by an intensity matrix $\Lambda$. An intensity matrix encodes all of the transition probabilities. The matrix $e^{\Lambda h}$ is the matrix of transition probabilities over a horizon $h$. Since each row of the transition probability matrix must sum to $1$, each row of the transition intensity matrix must sum to $0$.
The diagonal entries are nonpositive and represent minus the intensity of jumping from the current state to a new one. The remaining row entries, appropriately scaled, represent the conditional probabilities of jumping to the respective states. For concreteness, parameter values of the following form would suffice:

$$\Lambda = \begin{bmatrix} -\lambda_u & \lambda_u \\ \lambda_d & -\lambda_d \end{bmatrix}, \qquad \lambda_u, \lambda_d > 0$$
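The link between the intensity matrix and the transition probabilities is just the matrix exponential. The sketch below builds a hypothetical 2 state intensity matrix with jump intensities `lam_u` and `lam_d` and checks that the rows of the implied transition probability matrix sum to 1.

```python
import numpy as np

# Hypothetical 2-state intensity matrix: rows sum to 0, off-diagonals are jump
# intensities, diagonals are minus the total intensity of leaving each state.
lam_u, lam_d = 0.3, 0.5
L = np.array([[-lam_u, lam_u],
              [lam_d, -lam_d]])

def expm_taylor(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small matrices)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

P = expm_taylor(L * 0.25)   # transition probabilities over a horizon h = 0.25
```

Because each row of the intensity matrix sums to 0, each row of the exponentiated matrix sums to 1 at every horizon, which is the conservation-of-probability property the twisted generator below deliberately breaks.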
Next, I want to show how to modify this transition intensity matrix to describe the local evolution of the discount factor process $S_t$. To do this, I first need to have an asset pricing model in mind, and I use a standard CRRA power utility model with risk aversion parameter $\gamma$ as in Breeden (1979), where $\mu$ is the log of the expected consumption growth rate.
Example (Markov Process w/ 2 States, Ctd…): Intuitively, I know that every period I push the payment out into the future, I will end up discounting the payment by an additional $e^{-(\delta + \gamma \mu)}$. However, I know that I will also have to twist from the physical measure over to the risk neutral measure. Thus, the resulting generator will look something like:

$$\widetilde{\Lambda} = \Lambda - (\delta + \gamma \mu) \cdot I$$

If we (correctly) assume that $(\delta + \gamma \mu) > 0$, then we have:

$$\widetilde{\Lambda} = \begin{bmatrix} -\lambda_u - (\delta + \gamma \mu) & \lambda_u \\ \lambda_d & -\lambda_d - (\delta + \gamma \mu) \end{bmatrix}$$

Note that the rows of $\widetilde{\Lambda}$ will in general not sum to $0$ as in the physical transition intensity matrix $\Lambda$.
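Extracting the eigen-pair from a twisted generator is a one-liner numerically. The sketch below is illustrative only: it twists a hypothetical intensity matrix with state dependent discount rates (one of several ways one could build a pricing generator) and reads off the principal eigenvalue and the positive eigenvector.

```python
import numpy as np

# Hypothetical state-dependent discount rates and a 2-state intensity matrix.
delta = np.array([0.02, 0.04])
L = np.array([[-0.3, 0.3],
              [0.5, -0.5]])
L_twist = L - np.diag(delta)    # illustrative twist, not the unique construction

eigvals, eigvecs = np.linalg.eig(L_twist)
i = np.argmax(eigvals.real)     # principal eigenvalue = largest real part
rho = eigvals.real[i]
phi = eigvecs[:, i].real
phi = phi / phi[0]              # scale so the first entry is 1
```

The principal eigenvalue lands between the two state-specific discount rates and the eigenvector is strictly positive, mirroring the Perron-Frobenius theorem; with a state independent twist the eigenvector would be constant and the eigenvalue would equal minus the common discount rate.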
I conclude by working through an extended example showing how to solve for each of the terms in a simple model. Think about a Vasicek (1977) interest rate model. Let $x_t$ be a risk factor following the scalar Ito diffusion below. I choose this model so that I can verify all of my solutions by hand using existing techniques.

$$dx_t = \kappa (\bar{x} - x_t) \, dt + \sigma \, dW_t$$
Let $\phi(x) = e^{c x}$, and suppose the stochastic discount factor $S_t$ solves the following Ito diffusion:

$$\frac{dS_t}{S_t} = - x_t \, dt - \lambda \, dW_t$$
Thus $(x_t, S_t)$ are described by the parameter vector $\theta$:

$$\theta = (\kappa, \bar{x}, \sigma, \lambda)$$
We need to restrict $\kappa > 0$ to ensure stationarity. Matching coefficients to ensure that the eigenvalue equation does not move with $x$ yields the following characterization of $c$:

$$c = - \frac{1}{\kappa}$$
Substituting back into the formula for $\rho$ yields:

$$\rho = - \bar{x} + \frac{\sigma^2}{2 \kappa^2} + \frac{\lambda \sigma}{\kappa}$$
We know that $-\rho$ is exactly the long run yield in the Vasicek (1977) model:

$$-\rho = \bar{x} - \frac{\lambda \sigma}{\kappa} - \frac{\sigma^2}{2 \kappa^2}$$
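The Vasicek calculation can be checked numerically. Under the assumed dynamics $dx_t = \kappa(\bar{x} - x_t)\,dt + \sigma\,dW_t$, with short rate $r(x) = x$ and a constant price of risk $\lambda$ (all parameter values below are hypothetical), applying the discounted generator to the exponential guess $\phi(x) = e^{cx}$ should return a constant once the linear terms in $x$ cancel, and that constant is the eigenvalue.

```python
import numpy as np

# Hypothetical Vasicek parameters: dx = kappa*(xbar - x) dt + sigma dW,
# short rate r(x) = x, constant price of risk lam (assumed SDF loading).
kappa, xbar, sigma, lam = 0.5, 0.03, 0.01, 0.2

c = -1.0 / kappa   # kills the x-coefficient in the eigenvalue equation

def generator_applied(x):
    # (A phi)(x) / phi(x) for phi(x) = exp(c x), including the discounting
    # term -x and the risk adjustment -lam * sigma * c.
    return kappa * (xbar - x) * c + 0.5 * sigma**2 * c**2 - x - lam * sigma * c

grid = np.linspace(-0.05, 0.10, 7)
vals = generator_applied(grid)
rho = vals[0]   # with c = -1/kappa the x-dependence cancels: vals is constant
```

The constancy of `vals` across the grid confirms that the exponential guess is an eigenfunction, and the recovered eigenvalue matches the closed form combination of the mean reversion, volatility and price of risk parameters.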
Exercise (Offsetting Shocks): If $\delta$ is the standard time preference parameter, when would $\rho = -\delta$?
Exercise (Stochastic Volatility): Add a Feller square root term to allow for stochastic volatility à la the Cox, Ingersoll and Ross (1985) interest rate model.
What are $\phi$ and $\rho$ in this setting?
- Note: The results in this post stem from joint work I am conducting with Anmol Bhandari for our paper “Model Selection Using the Term Structure of Risk”. In this paper, we characterize the maximum Sharpe ratio allowed by an asset pricing model at each and every investment horizon. Using this cross-horizon bound, we develop a macro-finance model identification toolkit. ↩
- e.g., think of the state space needed in the Campbell and Cochrane (1999) habit model. ↩
- Investment horizon symmetry is an unexplored prediction of many asset pricing theories. Asset pricing models characterize how much a trader needs to be compensated in order to hold 1 unit of risk for 1 unit of time. The standard approach to testing these models is to fix the unit of time and then look for incorrectly priced packets of risk. e.g., Roll (1981) looked at the spread in 1 month holding period returns on 10 portfolios of NYSE firms sorted by market cap and found that small firms earned abnormal excess returns relative to the CAPM. Yet, I could just as easily ask the question: Given a model, how much more does a trader need to be compensated for her to hold the same 1 unit of risk for an extra 1 unit of time? This inversion is well defined as asset pricing models possess investment horizon symmetry. Models hold at each and every investment horizon running from 1 second to 1 year to 1 century and everywhere in between. To illustrate this point via an absurd case, John Cochrane writes in his textbook (Asset Pricing (2005), Section 9.3.) that according to the consumption CAPM ‘…if stocks go up between 12:00 and 1:00, it must be because (on average) we all decided to have a big lunch.’ ↩
- See Model Selection Using the Term Structure of Risk. ↩
- This class of models allows for features such as rare disasters, recursive preferences and habit formation among others… ↩
- Really, this is just the Oskar Perron version of the theorem. ↩
- For an introduction to Perron-Frobenius theory, see MacCluer (2000). ↩