In standard cross-sectional asset-pricing models, expected returns are governed by exposure to aggregate risk factors in a market populated by fully rational investors. Here’s how these models work. Because investors are fully rational, they correctly anticipate which assets are most likely to have low returns in especially inconvenient future states of the world—i.e., returns that are highly correlated with aggregate risk factors. They won’t be willing to pay as much for the high risk-exposure assets today. So, the price of high risk-exposure assets will drop in equilibrium, giving these assets high expected returns going forward.
With this standard framework in mind, financial economists are constantly on the lookout for assets with similar risk exposures but different average returns. For example, in a CAPM world, value and growth stocks would have similar average returns after adjusting for market beta; however, in the real world, there's a 4%-per-year value premium. If investors are fully rational, this finding suggests that they're worried about more than just aggregate market risk when pricing assets. It suggests they're also paying attention to one or more as-yet-unknown risk factors. The central challenge in this literature is to figure out which ones.
Unfortunately, after decades of work, there’s still no general consensus about which aggregate risk factors matter to real-world investors. Instead, the academic literature contains a zoo of candidate risk factors. Correlation with any of these factors will help predict an asset’s expected returns. But, it’s hard to believe that all of these aggregate risk factors actually matter to real-world investors, especially when they “have little in common economically with each other”.
Lax econometric standards are certainly one explanation for this factor zoo. The goal of this post is to suggest another: full rationality. Notice that full rationality plays two different roles in the discussion above. The first is to make sure that investors correctly anticipate the correlation between each asset’s future returns and the aggregate risk factors. If investors are fully rational, then changes in an asset’s risk exposure must be due to changes in fundamentals. The second role is to remove any logical limits on what these aggregate risk factors might be. If investors are fully rational, then they might potentially be worried about any future state of the world a researcher might dream up… and more! The whole premise of learning about the true risk factors requires real-world investors to know things that researchers haven’t yet noticed. And, if investors are fully rational, this additional knowledge might be arbitrarily subtle.
Below I show that, if researchers assume that investors are fully rational in both of the above senses, then identifying the true set of aggregate risk factors used by real-world investors is an impossible goal.
Economists regard randomized controlled trials (RCTs) as the gold standard for identification. Here's how the RCT protocol works. Imagine you're a medical researcher who's just discovered a new cancer-treatment drug. You think your new discovery has promise, but the only way to know if it actually works is to give it to cancer patients and see whether they're more likely to recover. But, how should you do this?
You could just distribute flyers advertising your new drug at the nearest hospital, give your drug to all the cancer patients who respond to the flyers, and then compare the recovery rate of the patients who took your drug to that of the remaining cancer patients. However, this is a bad idea. People try to make the best decision possible given all available knowledge about their current circumstances. So, we should expect that the cancer patients who respond to your flyer will be different from those who do not. We should expect them to be sicker, having exhausted all other treatment options. This means that any difference in recovery rates could be due to your new drug or to underlying differences in patient populations.
What’s more, if patients are optimizing based on information that’s unobservable to you (the researcher), then it doesn’t help to control for the differences in patient populations that you can see. Suppose you found two cancer patients, one who took your drug and one who decided not to, that looked identical in every conceivable way you could measure: both male, both white, both 43 years old, same height and weight, etc… If you really believed that these patients were making fully rational choices based on all the available information they had, then you must be missing something about each of their respective situations. Two identical fully rational people wouldn’t make two radically different life choices given the same information.
In short, to learn whether your new drug works, you have to break the link between drug treatment and patients’ optimal decisions based on (potentially) unobservable information. And, the RCT protocol does this by randomizing which cancer patients get your new drug and which get a sugar pill. You need to find a bunch of patients willing to participate in your study knowing that they have only a 50:50 chance of receiving the new experimental treatment. Then, with enough patients, the law of large numbers makes it very unlikely that the treated patient population will systematically differ from the untreated population. Thus, any difference in the recovery rates of these two groups must be due to your drug regimen.
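The contrast between self-selection and random assignment is easy to see in a toy simulation. In the sketch below (every number is invented for illustration), each patient has an unobserved severity that makes them both more likely to respond to the flyer and less likely to recover. The naive comparison confounds these two channels; a coin flip breaks the link:

```python
# Toy simulation: self-selection vs. randomization.
# All parameter values are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved severity: sicker patients seek out the drug AND recover less.
severity = rng.normal(size=n)

def recover_prob(severity, treated):
    # The drug raises the log-odds of recovery by 0.5 (an assumed effect size).
    return 1 / (1 + np.exp(severity - 0.5 * treated))

# Self-selection: the sicker you are, the more likely you respond to the flyer.
self_selected = rng.random(n) < 1 / (1 + np.exp(-severity))
recovered_ss = rng.random(n) < recover_prob(severity, self_selected)
naive_effect = recovered_ss[self_selected].mean() - recovered_ss[~self_selected].mean()

# RCT: a coin flip decides treatment, severing the link with severity.
randomized = rng.random(n) < 0.5
recovered_rct = rng.random(n) < recover_prob(severity, randomized)
rct_effect = recovered_rct[randomized].mean() - recovered_rct[~randomized].mean()

print(f"naive comparison: {naive_effect:+.3f}")  # biased down by selection
print(f"RCT estimate:     {rct_effect:+.3f}")    # recovers a positive effect
```

In this setup the naive comparison can even show the drug *hurting* patients, because the treated group is sicker to begin with, while the randomized comparison isolates the treatment effect.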
Now, think about what’s going on when we test a cross-sectional asset-pricing model. A model is just a list of aggregate risk factors. A fully rational investor will anticipate which assets have returns that are highly correlated with these aggregate risk factors. So, if the model is correct, differences in expected returns across assets will be explained by differences in exposure to these aggregate risk factors.
This logic suggests a straightforward empirical approach. To test a cross-sectional asset-pricing model with $K$ aggregate risk factors, first separately regress the excess returns of each asset on the aggregate risk factors:

$$R_{n,t} - r_{f,t} = \alpha_n + \sum_{k=1}^{K} \beta_{n,k} \cdot f_{k,t} + \epsilon_{n,t}$$

That is, run a time-series regression involving $T$ observations for each asset. Then, take the estimated slope coefficients from these regressions, which capture each asset's exposure to the aggregate risk factors, $\hat{\beta}_{n,k}$, and test whether differences in risk-factor exposure across assets explain differences in expected returns across assets:

$$\overline{R_n - r_f} = \hat{c} + \sum_{k=1}^{K} \hat{\lambda}_k \cdot \hat{\beta}_{n,k} + \hat{\xi}_n$$

That is, run one cross-sectional regression involving $N$ observations. If you've found the true factor model that real-world investors are using, then i) $\hat{\alpha}_n = 0$ for all $n = 1, \ldots, N$, ii) $\hat{c} = 0$, and iii) $\hat{\lambda}_k \neq 0$ for each of the $K$ risk factors.
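Here is a minimal sketch of this two-pass procedure on simulated data where a one-factor model holds by construction (all parameter values below are illustrative, not calibrated to real returns):

```python
# Two-pass test of a one-factor model on simulated data.
# Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
N, T = 50, 2_000           # number of assets, number of time periods
lam = 0.06 / 12            # assumed monthly factor risk premium

beta = rng.uniform(0.5, 1.5, size=N)      # true factor exposures
f = lam + 0.04 * rng.normal(size=T)       # factor realizations
eps = 0.08 * rng.normal(size=(T, N))      # idiosyncratic noise
excess_ret = np.outer(f, beta) + eps      # excess returns; no alpha built in

# Pass 1: one time-series regression (T observations) per asset.
X_ts = np.column_stack([np.ones(T), f])
coef_ts = np.linalg.lstsq(X_ts, excess_ret, rcond=None)[0]
alpha_hat, beta_hat = coef_ts[0], coef_ts[1]

# Pass 2: one cross-sectional regression (N observations) of average
# excess returns on the estimated betas.
X_cs = np.column_stack([np.ones(N), beta_hat])
c_hat, lam_hat = np.linalg.lstsq(X_cs, excess_ret.mean(axis=0), rcond=None)[0]

print(np.abs(alpha_hat).max())  # i) every alpha_hat should be near zero
print(c_hat)                    # ii) intercept should be near zero
print(lam_hat)                  # iii) lam_hat should be near the true premium
```

Because the simulated investors really do price this factor, all three criteria are satisfied up to sampling error; the point of the rest of the post is that passing this test is not enough.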
But, satisfying these three criteria is only a necessary condition. It's not sufficient for proving you've got the right model. Even if a cross-sectional asset-pricing model clears these hurdles, real-world investors might not be using those aggregate risk factors to price assets. Exposure to the $K$ aggregate risk factors could be the result of correlations with other omitted variables that real-world investors really care about.
This is a question about identification. And, the RCT protocol suggests we can solve it by looking for random variation in an asset’s exposure to each of the risk factors that has nothing to do with changes in fundamentals. The whole point of using an RCT is to make sure that patient decisions based on unobserved information aren’t causing a spurious link between drug treatment and recovery. And, we want to make sure that investor decisions based on unobserved fundamentals aren’t causing a spurious link between risk exposure and expected returns. We need to block any possibility of an unobserved link between risk-factor exposure and asset fundamentals.
So, imagine that investors perceive a noisy version of each asset's exposure to the $k$th risk factor:

$$\tilde{\beta}_{n,k} = \beta_{n,k} + \varepsilon_{n,k}$$

Above, $\beta_{n,k}$ denotes the $n$th asset's true risk exposure and $\varepsilon_{n,k}$ denotes noise that's unrelated to fundamentals. The only way to know that investors are using a particular set of aggregate risk factors and not some other correlated set of factors is to study how $\varepsilon_{n,k}$ predicts expected returns. After all, differences in expected returns that are associated with estimation errors, $\varepsilon_{n,k}$, can't be attributed to investors acting strategically based on unobserved information about asset fundamentals.
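A toy simulation makes this identification logic concrete. Assume (purely for illustration) that perception noise is independent of true exposures. If investors really price the factor, expected returns load on their perceived exposure, and therefore on its noise component; if they instead price a correlated confound, returns show no relation to the noise:

```python
# Sketch: identification comes from the noise in perceived exposures.
# All parameter values are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(7)
N = 5_000
lam = 0.05                               # assumed price of risk

beta = rng.normal(1.0, 0.3, size=N)      # true exposure (fundamental)
e = rng.normal(0.0, 0.1, size=N)         # perception noise, independent of beta
beta_tilde = beta + e                    # the exposure investors act on

# Case 1: investors price this factor, so returns load on beta_tilde.
# Regressing returns on e alone isolates non-fundamental variation.
exp_ret = lam * beta_tilde + 0.01 * rng.normal(size=N)
slope_on_noise = np.polyfit(e, exp_ret, 1)[0]
print(slope_on_noise)   # close to lam: investors really use this factor

# Case 2: investors actually price a correlated confound gamma, not beta.
# Returns no longer load on e, and the test correctly finds nothing.
gamma = 0.8 * beta + 0.2 * rng.normal(size=N)
exp_ret_alt = lam * gamma + 0.01 * rng.normal(size=N)
slope_alt = np.polyfit(e, exp_ret_alt, 1)[0]
print(slope_alt)        # close to zero
```

The noise component $e$ plays exactly the role that the coin flip plays in an RCT: it is variation in perceived risk exposure that cannot be traced back to fundamentals.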
By now, you probably see the logical trap that’s been laid. A fully rational investor might potentially be reacting to any piece of unobserved information about an asset’s fundamentals. So, non-fundamental variation in their perception of risk exposure is crucial to identifying the model they’re using. But, non-fundamental variation in perceived risk exposure would represent an error. And, fully rational investors don’t make errors. Thus, if we are adamant that real-world investors are fully rational, then we must give up any hope of identifying the cross-sectional asset-pricing model they’re using.
Note that this impossibility result doesn't say that investors need to be completely irrational… far from it. The true $\beta_{n,k}$ has to have some bearing on investors' perceived $\tilde{\beta}_{n,k}$. If investors aren't strategically adjusting their demand today in response to actual future risks, then cross-sectional asset-pricing models have no content. Rather, the impossibility result says that, for researchers to identify the cross-sectional asset-pricing model that real-world investors are using, these perceptions can't be perfectly accurate. For a useful analogy, think about every spy thriller with a canary trap that you've ever seen. In order for one spy to figure out what the other knows, he's got to see how his adversary reacts to planted fake intel. If his foe always sees through the ploy (i.e., if his foe is "fully rational" in the economist's sense), then there's no hope of any success.
This impossibility result also suggests a new use for many of the cognitive errors documented by behavioral economists: as tools for testing whether real-world investors care about exposure to particular risk factors. The existing behavioral-finance literature contains a ready supply of candidate $\varepsilon_{n,k}$s.