# Investigation Bandwidth

## 1. Motivation

Time is dimensionless in modern asset pricing theory. e.g., the canonical Euler equation:

(1) $p_t = \mathrm{E}_t\!\left[ m_{t+1} \left( p_{t+1} + d_{t+1} \right) \right]$

says that the price of an asset at time $t$ (i.e., $p_t$) is equal to the risk-adjusted expectation at time $t$ of the price of the asset at time $t+1$ (i.e., $\mathrm{E}_t[m_{t+1} p_{t+1}]$) plus the risk-adjusted expectation of any dividends paid out by the asset at time $t+1$ (i.e., $\mathrm{E}_t[m_{t+1} d_{t+1}]$). Yet, the theory never answers the question: “Plus what?” Should we be thinking about seconds? Hours? Days? Years? Centuries? Millennia?

Why does this matter? An algorithmic trader adjusting his position each second worries about different risks than Warren Buffett, who has a median holding period of decades. e.g., Buffett studies cash flows, dividends, and business plans. By contrast, the probability that a firm paying out a quarterly dividend happens to pay its dividend during any randomly chosen second-long time interval is . i.e., roughly the odds of picking a year at random since the time that the human and chimpanzee evolutionary lines diverged. Thus, if an algorithmic trader and Warren Buffett both looked at the exact same stock at the exact same time, then they would have to use different risk-adjusted expectation operators:

(2)

This note gives a simple economic model in which traders endogenously specialize in looking for information at a particular time scale and ignore predictability at vastly different time scales.

## 2. Simulation

I start with a simple numerical simulation that illustrates why traders at the daily horizon will ignore price patterns at vastly different frequencies. Suppose that Cisco’s stock returns are composed of a constant growth rate , a daily wobble with , and a white noise term with :

(3)

I consider a world where the clock ticks forward in minute increments so that each tick represents th of a trading day. The figure below shows a single sample path of Cisco’s return process over the course of a month.
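As a rough sketch, here is how such a minute-by-minute return process could be simulated. The growth rate, wobble amplitude, and volatility below are placeholder values of my own choosing, not the post’s exact numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only -- the exact values are assumptions here.
ticks_per_day = 390                          # one tick per minute of a 6.5-hour trading day
days = 21                                    # roughly one month of trading days
mu = 0.10 / (252 * ticks_per_day)            # constant growth rate, per minute
amp = 1e-4                                   # amplitude of the daily wobble
sigma = 0.20 / np.sqrt(252 * ticks_per_day)  # white-noise volatility, per minute

t = np.arange(days * ticks_per_day)
wobble = amp * np.sin(2 * np.pi * t / ticks_per_day)  # exactly 1 cycle per day
noise = sigma * rng.standard_normal(t.size)
returns = mu + wobble + noise                # minute-by-minute return series
```

A single draw of `returns` is the kind of sample path the figure shows: the wobble is invisible to the naked eye against the noise.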

What are the properties of this return process? First, the constant growth rate, , implies that Cisco has a per year return on average. Second, the volatility of the noise component, , implies that the annualized volatility of Cisco’s returns is . Finally, since:

(4)

the choice of means that (in a world with a riskless rate) a trading strategy which is long Cisco stock in the morning and short Cisco stock in the afternoon will generate a return over the course of a year. i.e., this is a big daily wobble! If you start with a on the morning of January 1st, you end up with on the evening of December 31st, on average, by following this trading strategy. The figure below confirms this math by simulating year-long realizations of this trading strategy’s returns.
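The long-in-the-morning, short-in-the-afternoon strategy can be sketched in a few lines. The 390-minute trading day is an assumption carried over from the simulation setup above:

```python
import numpy as np

def wobble_strategy_return(returns, ticks_per_day=390):
    """Go long for the first half of each trading day and short for the
    second half, then sum up the signed minute-by-minute returns."""
    minute_of_day = np.arange(returns.size) % ticks_per_day
    position = np.where(minute_of_day < ticks_per_day // 2, 1.0, -1.0)
    return float(np.sum(position * returns))
```

Applied to a pure daily sine wobble, the position flips exactly when the wobble changes sign, so the strategy harvests the full amplitude each day.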

Suppose you didn’t know the exact frequency of the wobble in Cisco’s returns. The wobble is equally likely to have a frequency of anywhere from cycles per day to cycles per day. Using the last month’s worth of data, suppose you estimated the regressions specified below:

(5)

and identified the frequency, , which best fit the data:

(6)

The figure below shows the empirical distribution of these best in-sample fit frequencies when the true frequency is a daily wobble. The figure reads: “A month’s worth of Cisco’s minute-by-minute returns best fits a factor with a frequency of about of the time when the true frequency is cycle a day.”
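The grid search behind this figure — regress returns on a sinusoid at each candidate frequency and keep the best in-sample fit — can be sketched as follows. The sine/cosine pair at each frequency is my own implementation detail, standing in for whatever exact specification the regressions use:

```python
import numpy as np

def best_fit_frequency(returns, freqs, ticks_per_day=390):
    """For each candidate frequency (in cycles per day), regress returns on a
    sine/cosine pair at that frequency and keep the frequency with the
    smallest in-sample sum of squared errors."""
    t = np.arange(returns.size)
    best_f, best_sse = None, np.inf
    for f in freqs:
        phase = 2 * np.pi * f * t / ticks_per_day
        X = np.column_stack([np.ones(t.size), np.sin(phase), np.cos(phase)])
        beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
        sse = float(np.sum((returns - X @ beta) ** 2))
        if sse < best_sse:
            best_f, best_sse = f, sse
    return best_f
```

Re-running this on many simulated months and histogramming the winners produces the empirical distribution in the figure.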

Suppose that you notice a wobble with a frequency of fit Cisco’s returns over the last month really well, but you also know that this is a noisy in-sample estimate. The true wobble could have a different frequency. If you can expend some cognitive effort to investigate alternate frequencies, how wide a bandwidth of frequencies should you investigate? Here’s where things get interesting. The figure above essentially says that you should never investigate frequencies outside of —i.e., plus or minus half the width of the bell. The probability that a pattern in returns with a frequency outside this range is actually driving the results is nil!

## 4. Costs and Benefits

Again, suppose you’re a trader who’s noticed that there is a daily wobble in Cisco’s returns over the past month. i.e., using the past month’s data, you’ve estimated . Just as before, it’s a big wobble. Implemented at the right time scale, , you know that this strategy of buying early and selling late will generate a return. Nevertheless, you also know that isn’t necessarily the right frequency to invest in just because it had the lowest in-sample error over the last month. You don’t want to go to your MD and pitch a strategy only to have to adjust it a month later due to poor performance. Let’s say that it costs you dollars to investigate a range of frequencies. If you investigate a particular range and is there, then you will discover with probability .

The question is then: “Which frequency buckets should you investigate?” First, are we losing anything by only searching -sized increments? Well, we can tile the entire frequency range with tiny increments as follows:

(7)

i.e., starting at frequency we can iteratively add different increments of size . If we start at a small enough frequency, , and add enough increments, , then we can tile as much of the entire domain as we like so that is as small as we like.

Next, what are the benefits of discovering the correct time scale to invest in? If denotes the returns to investing in a trading strategy at the correct time scale over the course of the next month, let:

(8)

denote the correlation between the returns of the strategy at the true frequency and the strategy at the best in-sample fit frequency. We know that and that:

(9)

i.e., as gets farther and farther away from , your realized returns over the next month from a trading strategy implemented at horizon will become less and less correlated with the returns of the strategy implemented at and, as a consequence, will shrink to . Thus, the benefit to discovering that the true frequency was not is given by .

Putting the pieces together, it’s clear that you should investigate a particular range of frequencies for a confounding explanation if the expected probability of finding there, given the realized , times the benefit of discovering the true in that range exceeds the search cost :

(10)

i.e., you’ll have a donut-shaped search pattern around . You won’t investigate frequencies that are really different from since the probability of finding there will be too low to justify the search costs. By contrast, you won’t investigate frequencies that are too similar to since the benefits of discovering this minuscule error don’t justify the costs, even though such tiny errors may be quite likely.
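The donut logic can be made concrete with a toy decision rule. Everything below is illustrative: I assume a Gaussian posterior over the true frequency and an exponentially decaying correlation (so the benefit grows with distance), neither of which is pinned down by the model above:

```python
import numpy as np

def search_buckets(f_hat, grid, cost, post_sd=0.2, corr_len=0.3, payoff=1.0):
    """Flag the frequency buckets worth investigating: the probability of the
    true frequency landing in a bucket (Gaussian posterior around f_hat,
    assumed) times the benefit of catching the error (growing with distance
    as correlation decays, assumed) must exceed the search cost."""
    width = grid[1] - grid[0]
    z = (grid - f_hat) / post_sd
    prob = np.exp(-z ** 2 / 2) / (post_sd * np.sqrt(2 * np.pi)) * width
    benefit = payoff * (1.0 - np.exp(-np.abs(grid - f_hat) / corr_len))
    return prob * benefit > cost
```

With any reasonable parameters the flagged buckets form an annulus: near `f_hat` the benefit is negligible, far from `f_hat` the probability is.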

## 5. Wrapping Up

I started with the question: “How can it be that an algorithmic trader and Warren Buffett worry about different patterns in the same price path?” In the analysis above I give one possible answer. If you see a tradable anomaly at a particular time scale (e.g., wobble per day) over the past month, then the probability that this anomaly was caused by a data generating process with a much shorter or much longer frequency is essentially . I used only sine-wave-plus-noise processes above, but it seems like this assumption can be easily relaxed via results from, say, Freidlin and Wentzell.

# The Secrets N Prices Keep

## 1. Introduction

Prices are signals about shocks to fundamentals. In a world where there are many stocks and lots of different kinds of shocks to fundamentals, traders are often more concerned with identifying exactly which shocks took place than the value of any particular asset. e.g., imagine you are a day trader. While you certainly care about changes in the fundamental value of Apple stock, you care much more about the size and location of the underlying shocks since you can profit from this information elsewhere. On one hand, if all firms based in California were hit with a positive shock, you might want to buy shares of Apple, Banana Republic, Costco, …, and Zero Skateboards stock. On the other hand, if all electronic equipment companies were hit with a positive shock, you might want to buy up Apple, Bose, Cisco Systems, …, and Zenith shares instead.

It turns out that there is a sharp phase change in traders’ ability to draw inferences about attribute-specific shocks from prices. i.e., when there have been fewer than transactions, you can’t tell exactly which shocks affected Apple’s fundamental value. Even if you knew that Apple had been hit by some shock, with fewer than observations you couldn’t tell whether it was a California-specific event or an electronic equipment-specific event. By contrast, when there have been more than transactions, you can figure out exactly which shocks have occurred. The additional transactions simply allow you to fine tune your beliefs about exactly how large the shocks were. The surprising result is that is a) independent of traders’ cognitive abilities and b) easily calculable via tools from the compressed sensing literature. See my earlier post for details.

This signal recovery bound is thus a new constraint on the amount of information that real-world traders can extract from prices. Moreover, the bound gives a concrete meaning to the term “local knowledge”. e.g., shocks that haven’t yet manifested themselves in transactions are local in the sense that no one can spot them through prices. Anyone who knows of their existence must have found out via some other channel. To build intuition, this post gives examples of this constraint in action.

First, I show where this signal recovery bound comes from. People spend lots of time looking for houses in different cities. e.g., see Trulia or my paper. Suppose you moved away from Chicago a year ago, and now you’re moving back and looking for a house. When studying a list of recent sales prices, you find yourself a bit surprised. People must have changed their preferences for of different amenities: a car garage, a 3rd bedroom, a half-circle driveway, granite countertops, energy efficient appliances, central A/C, or a walk-in closet. Having the mystery amenity raises the sale price by dollars. You would know how preferences had evolved if you had lived in Chicago the whole time; however, in the absence of this local knowledge, how many sales would you need to see in order to figure out which of the amenities mattered?

The answer is . How did I come up with this number? For ease of explanation, let’s normalize expected house prices to . Suppose you found one house with amenities , a second house with amenities , and a third house with amenities . The combination of prices for these houses would reveal exactly which amenity had been shocked. i.e., if only the first house’s price was higher than expected, , then Chicagoans must have changed their preferences for having a car garage:

(1)

By contrast, if it was the case that , , and , then you would know that people now value walk-in closets much more than they did a year ago.

Here is the key point. sales is just enough information to answer yes or no questions and rule out the possibility of no change:

(2)

sales simply narrows your error bars around the exact value of . sales only allows you to distinguish between subsets of amenities. e.g., seeing just the 1st and 2nd houses with unexpectedly high prices only tells you that people like either half-circle driveways or walk-in closets more. It doesn’t tell you which one. The problem changes character at . When you have seen fewer than sales, information about how preferences have changed is purely local knowledge. Prices can’t publicize this information. You must live and work in Chicago to learn it.
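The yes/no counting argument can be checked mechanically. Assuming the 7 amenities listed above, label them 1 through 7 and give each of 3 houses an exposure equal to one binary digit of each amenity’s label; every single-amenity shock then produces a distinct pattern of surprising prices:

```python
import numpy as np

# 7 candidate amenities, labeled 1..7. House j's exposure to an amenity is
# the j-th binary digit of that amenity's label, so 3 houses suffice: each
# single-amenity shock yields a distinct yes/no pattern of surprising
# prices, and "no change" maps to the all-zero pattern.
n_amenities = 7
exposures = np.array([[(label >> j) & 1 for label in range(1, n_amenities + 1)]
                      for j in range(3)])          # 3 houses x 7 amenities

patterns = set()
for k in range(n_amenities):
    shock = np.zeros(n_amenities)
    shock[k] = 1.0                                  # amenity k gets the shock
    patterns.add(tuple(exposures @ shock))          # which prices look high
```

Because all 7 patterns are distinct and none is all-zero, the 3 price surprises pin down exactly which amenity moved.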

Next, I illustrate how this signal recovery bound acts like a cognitive constraint for would-be arbitrageurs. Suppose you’re a petroleum industry analyst. Through long, hard, caffeine-fueled nights of research you’ve discovered that oil companies such as Schlumberger, Halliburton, and Baker Hughes who’ve invested in hydraulic fracturing (a.k.a., “fracking”) are due for a big unexpected payout. This is really valuable information affecting only a few of the major oil companies. Many companies haven’t really invested in this technology, and they won’t be affected by the shock. How aggressively should you trade Schlumberger, Halliburton, and Baker Hughes? On one hand, you want to build up a large position in these stocks to take advantage of the future price increases that you know are going to happen. On the other hand, you don’t want to allow news of this shock to spill out to the rest of the market.

The figure above gives a sense of the number of different kinds of shocks that affect the petroleum industry. It reads: “If you select a Wall Street Journal article on the petroleum industry over the period from 2011 to 2013 there is a chance that ‘Oil sands’ is a listed descriptor and a chance that ‘LNG’ (i.e., liquid natural gas) is a listed descriptor.” Thus, oil stock price changes might be due to different shocks:

(3)

where denotes stock ‘s exposure to the th attribute. e.g., in this example if the company invested in fracking (i.e., like Schlumberger, Halliburton, and Baker Hughes) and if the company didn’t. What’s more, very few of the possible attributes matter each month. e.g., the plot below reads: “Only around of all the descriptors in the Wall Street Journal articles about the petroleum industry over the period from January 2011 to December 2013 are used each month.” Thus, only of the possible attributes appear to realize shocks each period:

(4)

Note that this calculation includes terms like ‘Crude oil prices’, which occur in roughly half the articles, so the actual rate is likely much lower; ‘Crude oil prices’ is effectively a synonym for the industry itself.

For simplicity, suppose that attributes out of a possible realized a shock in the previous period, and you discovered of them. How long does your informational monopoly last? Using tools from Wainwright (2009) it’s easy to show that uninformed traders need at least:

(5)

observations to identify which of the possible payout-relevant attributes in the petroleum industry has realized a shock. If it takes you (…and other industry specialists like you) around hour to materially increase your position, then you have roughly days (i.e., around trading week) to build up a position before the rest of the market catches on, assuming an hour trading day.
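A back-of-the-envelope version of this calculation can be coded up directly. The `2*k*log(p - k) + k` scaling below is the standard shape of a Wainwright (2009)-style support recovery bound, but the constant factors are my assumptions, not the post’s exact formula:

```python
import math

def support_recovery_bound(p, k):
    """Rough sample-size requirement for uninformed traders to identify
    which k of p possible attributes realized a shock via l1 methods:
    n ~ 2 * k * log(p - k) + k. Constants here are assumptions."""
    return 2 * k * math.log(p - k) + k
```

More possible attributes, or more simultaneously shocked ones, mechanically lengthen the informed analyst’s monopoly.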

## 4. Asset Management Expertise

Finally, I show how there can be situations where you might not bother trying to learn from prices because there are too many plausible explanations to check out. In this world everyone specializes in acquiring local knowledge. Suppose you’re a wealthy investor, and I’m a broke asset manager with a trading strategy. I walk into your office, and I try to convince you to finance my strategy that has abnormal returns of per month:

(6)

where per month to make the algebra neat. For simplicity, suppose that there is no debate . In return for running the trading strategy, I ask for fees amounting to a fraction of the gross returns. Of course, I have to tell you a little bit about how the trading strategy works, so you can deduce that I’m taking on a position that is to some extent a currency carry trade and to some extent a short-volatility strategy. This narrows down the list a bit, but it still leaves a lot of possibilities. In the end, you know that I am using some combination of out of possible strategies.

You have options. On one hand, if you accept the terms of this offer and finance my strategy, you realize returns net of fees equal to:

(7)

This approach would net you an annualized Sharpe ratio of . e.g., if I asked for a fee of , and my strategy yielded a return of per month, then your annualized Sharpe ratio net of my fees would be .

On the other hand, you could always refuse my offer and try to back out which strategies I was following using the information you gained from our meeting. i.e., you know that my strategy involves using some combination of factors out of a universe of possibilities:

(8)

In order to deduce which strategies I was using as quickly as possible, you’d have to trade random portfolio combinations of these different factors for:

(9)

Your Sharpe ratio during this period would be , and afterwards you would earn the same Sharpe ratio as before without having to pay any fees to me:

(10)

However, if you have to show your investors reports every year, it may not be worth it for you to reverse engineer my trading strategy. Your average Sharpe ratio during this period would be:

(11)

which is well below the Sharpe ratio on the market portfolio. Thus, you may just want to pay my fees. Even though you could in principle back out which strategies I was using, it would take too long. Your investors would withdraw due to poor performance before you could capitalize on your newfound knowledge.
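The fee-versus-reverse-engineering comparison can be sketched with two helper functions. The numbers in the usage are hypothetical placeholders, since the post’s exact figures are not reproduced here:

```python
import math

def annualized_sharpe(alpha_m, sigma_m, fee=0.0):
    """Annualized Sharpe ratio of a strategy with monthly alpha and monthly
    volatility, net of a proportional fee charged on gross returns."""
    return (1.0 - fee) * (alpha_m / sigma_m) * math.sqrt(12)

def average_sharpe(months_learning, months_total, sharpe_after):
    """Average Sharpe over a reporting horizon: roughly zero while trading
    random factor combinations to reverse engineer the manager's strategy,
    and the full fee-free Sharpe only afterwards."""
    w = months_learning / months_total
    return (1.0 - w) * sharpe_after
```

If the learning period eats most of the reporting horizon, the blended Sharpe falls below the net-of-fee Sharpe, and paying the manager wins.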

## 5. Discussion

To cement ideas, let’s think about what this result implies for a financial econometrician. We’ve known since the 1970s that there is a strong relationship between oil shocks and the rest of the economy. e.g., see Hamilton (1983), Lamont (1997), and Hamilton (2003). Imagine you’re now an econometrician, and you go back and pinpoint the exact hour when each fracking news shock occurred over the last years. Using this information, you then run an event study which finds that petroleum stocks affected by each news shock display a positive cumulative abnormal return over the course of the following week. Would this be evidence of a market inefficiency? Are traders still under-reacting to oil shocks? No. Ex post event studies assume that traders know exactly what is and what isn’t important in real time. Non-petroleum industry specialists who didn’t lose sleep researching hydraulic fracturing have to parse out which shocks are relevant only from prices. This takes time. In the interim, this knowledge is local.

# How Quickly Can We Decipher Price Signals?

## 1. Introduction

There are many different attribute-specific shocks that might affect an asset’s fundamental value in any given period. e.g., the prices of all stocks held in model-driven long/short equity funds might suddenly plummet as happened in the Quant Meltdown of August 2007. Alternatively, new city parking regulations might raise the value of homes with a half-circle driveway. Innovations in asset prices are signals containing different kinds of information: a) which of these different shocks has taken place and b) how big each of them was.

It’s often a challenge for traders to answer question (a) in real time. e.g., Daniel (2009) notes that during the Quant Meltdown “markets appeared calm to non-quantitative investors… you could not tell that anything was happening without quant goggles.” This post asks the question: How many transactions do traders need to see in order to identify shocked attributes? The surprising result is that there is a well-defined and calculable answer to this question that is independent of traders’ cognitive abilities. Local knowledge is an unavoidable consequence of this location recovery bound.

## 2. Motivating Example

It’s easiest to see where this location recovery bound comes from via a short example. Suppose you moved away from Chicago a year ago, and now you’re moving back and looking for a house. When looking at a list of recent sales prices, you find yourself surprised. People must have changed their preferences for of different amenities: a car garage, a 3rd bedroom, a half-circle driveway, granite countertops, energy efficient appliances, central A/C, or a walk-in closet. Having the mystery amenity raises the sale price by dollars. To be sure, you would know how preferences had evolved if you had lived in Chicago the whole time; however, in the absence of this local knowledge, how many sales would you need to see in order to figure out which of the amenities mattered?

The answer is . Where does this number come from? For ease of explanation, let’s normalize the expected house prices to . Suppose you found one house with amenities , a second house with amenities , and a third house with amenities . The combination of prices for these houses would reveal exactly which amenity had been shocked. i.e., if only the first house’s price was higher than expected, , then Chicagoans must have changed their preferences for having a car garage:

(1)

By contrast, if it was the case that , , and , then you would know that people now value walk-in closets much more than they did a year ago.

Here is the key point. sales is just enough information to answer yes or no questions and rule out the possibility of no change:

(2)

sales simply narrows your error bars around the exact value of . sales only allows you to distinguish between subsets of amenities. e.g., seeing just the 1st and 2nd houses with unexpectedly high prices only tells you that people like either half-circle driveways or walk-in closets more. It doesn’t tell you which one. The problem changes character at … i.e., the location recovery bound.

## 3. Main Results

This section formalizes the intuition from the example above. Think about innovations in the price of asset as the sum of a meaningful signal, , and some noise, :

(3)

where the signal can be decomposed into different attribute-specific shocks. In Equation (3) above, denotes a shock of size to the th attribute and denotes the extent to which asset displays the th attribute. Each of the data columns is normalized so that .

In general, when there are more attributes than shocks, , picking out exactly which attributes have realized a shock is a combinatorially hard problem as discussed in Natarajan (1995). However, suppose you had an oracle which could bypass this hurdle and tell you exactly which attributes had realized a shock:

(4)

In this world, your mean squared prediction error, , is given by:

(5)

where denotes the number of observations necessary for your oracle. e.g., if each , then since there is only variation in the location of the shocks and not the size of the shocks.

It turns out that if each asset isn’t too redundant relative to the number of shocked attributes, then you can achieve a mean squared error that is within a log factor of the oracle’s mean squared error using many fewer observations than there are attributes, . e.g., suppose that you used a lasso estimator:

(6)

with . Then, Candes and Davenport (2011) show that:

(7)

with probability where is a small numerical constant. However, this paragraph is quite loose. i.e., what exactly does the condition that “each asset isn’t too redundant relative to the number of shocked attributes” mean? Exactly how many observations would you need to see if each asset’s attribute exposure is drawn as ?

Here’s where things get really interesting. Wainwright (2009) shows that there is a sharp bound on the number of observations, , that you need to observe in order for -type estimators like Lasso to succeed when attribute exposure is drawn iid Gaussian:

(8)

with , , and for some . When traders observe fewer than observations, picking out which attributes have realized a shock is an NP-hard problem; whereas, when they observe more than , there exist efficient convex optimization algorithms that solve this problem. This result shows how the location recovery bound from the motivating example generalizes to arbitrary numbers of attributes, , and shocks, .
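A small simulation makes the recovery claim tangible. The dimensions below are assumed for illustration, and a plain ISTA loop stands in for an off-the-shelf lasso solver:

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 200, 3                                  # attributes, shocked attributes (assumed sizes)
n = 80                                         # comfortably above the ~2k*log(p-k) scaling

X = rng.standard_normal((n, p)) / np.sqrt(n)   # iid Gaussian attribute exposures
support = rng.choice(p, size=k, replace=False)
beta_true = np.zeros(p)
beta_true[support] = 1.0                       # k shocks of unit size
y = X @ beta_true + 0.01 * rng.standard_normal(n)

# ISTA (proximal gradient) for the lasso: min_b 0.5*||y - X b||^2 + lam*||b||_1
lam = 0.02
step = 1.0 / np.linalg.norm(X, ord=2) ** 2     # 1 / Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(3000):
    z = b - step * (X.T @ (X @ b - y))         # gradient step on the quadratic part
    b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

recovered = set(np.argsort(np.abs(b))[-k:].tolist())
```

Even though there are far more attributes than observations, the largest coefficients land exactly on the shocked attributes.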

## 4. Just Identified

I conclude this post by discussing the non-sparse case. i.e., usually isn’t sparse in econometric textbooks a la Hayashi, Wooldridge, or Angrist and Pischke. When every one of the attributes matters, it’s easy to decide which attributes to pay attention to—i.e., all of them. In this situation, the mean squared error for an oracle is the same as the mean squared error for mere mortals:

(9)

Does the location recovery bound disappear in this setting?

No, it does not. Indeed, the attribute selection bound corresponds to the usual requirement for identification. To see why, let’s return to the motivating example in Section 2 and consider the case where any of the attributes could have realized a shock. This leaves us with different shock combinations:

(10)

so that gives just enough differences to identify which combination of shocks was realized. More generally, we have that for any number of attributes, :

(11)

This gives an interesting information-theoretic interpretation to the meaning of “just identified” that has nothing to do with linear algebra or the invertibility of a matrix.
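This counting argument is easy to verify directly. Assuming the 7-amenity setup from the motivating example, the smallest number of yes/no price observations that can distinguish all possible shock combinations is exactly the number of amenities:

```python
from math import comb

def sales_needed(p):
    """Smallest n with 2**n at least the number of possible shock
    combinations when any subset of the p attributes may shock:
    sum_k C(p, k) = 2**p, so n = p."""
    combos = sum(comb(p, k) for k in range(p + 1))   # = 2**p
    n = 0
    while 2 ** n < combos:
        n += 1
    return n
```

With 7 amenities and no sparsity, you need 7 sales — the “just identified” case, recovered by counting rather than by inverting a matrix.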

# Constraining Effort Not Bandwidth

## 1. Introduction

Imagine trying to fill up a gallon bucket using a hand-powered water pump. You might end up with a half-full bucket after hour of work for either of reasons. First, the spigot might be too narrow. i.e., even though you are doing enough work to pull gallons of water out of the ground each hour, only gallons can actually flow through the spigot during the allotted time. This is a bandwidth constraint. Alternatively, the pump handle might be too short. i.e., you have to crank the handle twice as many times to pull each gallon of water out of the ground. This is an effort constraint.

Existing information-based asset pricing models a la Grossman and Stiglitz (1980) restrict the bandwidth of arbitrageurs’ flow of information. The market just produces too much information per trading period. i.e., people’s minds have narrow spigots. However, traders also face restrictions on how much work they can do each period. Sometimes it’s hard to crank the pump handle often enough to produce the relevant information. e.g., think of a binary signal such as knowing if a cancer drug has completed a phase of testing as in Huberman and Regev (2001). It doesn’t take much bandwidth to convey this news. After all, people immediately recognized its significance when the New York Times wrote about it. Yet, arbitrageurs must have faced a restriction on the number of scientific articles they could read each year since Nature reported this exact same news months earlier and no one batted an eye! These traders left money on the table because they anticipated having to pump the handle too many times in order to uncover a really simple signal.

This post proposes algorithmic flop counts as a way of quantifying how much effort traders have to do in order to uncover profitable information. I illustrate the idea via a short example and some computations.

## 2. Effort via Flop Counts

One way of quantifying how much effort it takes to discover a piece of information is to count the number of floating-point operations (i.e., flops) that a computer has to do to estimate the number. I take my discussion of flop counts primarily from Boyd and Vandenberghe and define a flop as an addition, subtraction, multiplication, or division of floating-point numbers. e.g., I use flops as a unit of effort so that:

(1)

in the same way that the cost of a Snickers bar might be . I then count the total number of flops needed to calculate a number as a proxy for the effort needed to find out the associated piece of information. e.g., if it took flops to compute the average return of all technology stocks but flops to arrive at the median return on assets for all value stocks, then I would say that it is easier (i.e., takes less effort) to know the mean return. The key thing here is that this measure is independent of the amount of entropy that either of these calculations resolves.

I write flop counts as a polynomial function of the dimensions of the matrices and vectors involved. Moreover, I always simplify the expression by removing all but the highest order terms. e.g., suppose that an algorithm required:

(2)

In this case, I would write the flop count as:

(3)

since both these terms are of order . Finally, if I also know that , I might further simplify to flops. Below, I am going to be thinking about high-dimensional matrices and vectors (i.e., where and are big), so these simplifications are sensible.

Let’s look at a couple of examples to fix ideas. First, consider the task of matrix-to-vector multiplication. i.e., suppose that there is a matrix and we want to calculate:

(4)

where we know both and and want to figure out . This task takes an effort of flops. There are elements in the vector , and to compute each one of these elements, we have to multiply numbers together times as:

(5)

This setup is analogous to having a dataset with observations on different variables, where each variable has a linear effect on the outcome variable .

Next, let’s turn the tables and look at the case when we know the outcome variable and want to solve for when . A standard approach here would be to use the factor-solve method whereby we first factor the data matrix into the product of components, , and then use these components to iteratively compute as:

(6)

We call the process of computing the factors the factorization step and the process of solving the equations the solve step. The total flop count of a solution strategy is then the sum of the flop counts for each of these steps. In many cases the cost of the factorization step is the leading order term.

e.g., consider the Cholesky factorization method that is commonly used in statistical software. We know that for every there exists a factorization:

(7)

where is lower triangular and non-singular with positive diagonal elements. The cost of computing these Cholesky factors is flops. By contrast, the resulting solve steps of and each have flop counts of flops, bringing the total flop count to flops. In the general case, the effort involved in solving a linear system of equations for when grows with . Boyd and Vandenberghe argue that “for more than a thousand or so, generic methods… become less practical,” and financial markets definitely have more than “a thousand or so” trading opportunities to check!
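The leading-order flop counts discussed above can be written down as tiny formulas. These follow the standard Boyd-and-Vandenberghe-style accounting; only the highest-order terms are kept:

```python
def matvec_flops(m, n):
    """y = A @ x with A being m x n: m*n multiplications and roughly
    m*(n-1) additions, i.e. ~2*m*n flops at leading order."""
    return 2 * m * n

def cholesky_solve_flops(n):
    """Factor-solve for A x = b with dense A = L L': about n**3 / 3 flops
    for the factorization plus 2 * n**2 for the two triangular solves."""
    return n ** 3 / 3 + 2 * n ** 2
```

Doubling the system size roughly multiplies the factor-solve cost by eight, which is why the cubic factorization step dominates and why “more than a thousand or so” unknowns starts to hurt.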

## 3. Asset Structure

Consider constraining traders’ cognitive effort in an information-based asset pricing model a la Kyle (1985) but with many assets and attribute-specific shocks. Specifically, suppose that there are stocks that each have different payout-relevant characteristics. Every characteristic can take on distinct levels. I call a (characteristic, level) pairing an ‘attribute’ and use the indicator variable to denote whether or not a stock has an attribute. Think about attributes as sitting in a -dimensional matrix, , as illustrated in Equation (8) below:

(8)

I’ve highlighted the attributes for Micron Technology. e.g., we have that while , since Micron Technology is based in Boise, ID, whereas Western Digital is based in SoCal.

Further, suppose that each stock’s value is then the sum of a collection of attribute-specific shocks:

(9)

where the shocks are distributed according to the rule:

(10)

Each of the indicates whether or not the attribute happened to realize a shock. The term represents the amplitude of all shocks in units of dollars per share, and the term represents the probability of either a positive or negative shock to attribute each period.
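A minimal simulation of this value process might look as follows. The numbers of stocks and attributes, the shock probability, and the amplitude are all placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_attrs = 10, 40        # assumed sizes for illustration
theta, amp = 0.05, 1.0            # per-sign shock probability and dollar amplitude (assumed)

attributes = rng.integers(0, 2, size=(n_stocks, n_attrs))   # (characteristic, level) indicators
signs = rng.choice([-1.0, 1.0], size=n_attrs)
shocked = rng.random(n_attrs) < 2 * theta    # shock of either sign hits with prob 2*theta
shocks = amp * signs * shocked               # +amp w.p. theta, -amp w.p. theta, else 0
values = attributes @ shocks                 # each stock's value: sum of its attributes' shocks
```

Each period only a thin sliver of attributes actually moves, yet every stock’s value aggregates across all of its attributes.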

If value investors learn asset-specific information and Kyle (1985)-type market makers price each individual stock using only their own order flow in a dynamic setting, then each individual stock will be priced correctly:

(11)

where denotes the aggregate order flow for stock . Yet, the high dimensionality of the market means that there could still be groups of mispriced stocks:

(12)

where denotes the sample average price for stocks with a particular attribute, . This is a case of “more is different.” If an oracle told you that for some attribute , then you would know that the average price of stocks with attribute would be:

(13)

since in a dynamic Kyle (1985) model where informed traders have an incentive to trade less aggressively today (i.e., decrease and thus ) in order to act on their information again tomorrow. In this setting, will be less than its fundamental value even though it will be easy to see that as .

## 4. Arbitrageurs’ Inference Problem

So how much effort does it take to discover the set of shocked attributes, :

(14)

given their price impact? What’s stopping arbitrageurs from trading away these attribute-specific pricing errors? Well, the problem of finding the attributes in boils down to solving:

(15)

for where , , and . i.e., this is a similar problem to the linear solve in Section 3 above, but with two additional complications. First, the system is underdetermined in the sense that there are many more payout-relevant attributes than stocks, . Second, arbitrageurs don’t know exactly how many attributes are in . They know that on average, ; however, itself is a random variable.

It’s easy enough to extend the solution strategy in Section 3 to the case of an underdetermined system where a solution is a member of the set:

(16)

where is a matrix whose column vectors are a basis for the null space of . Suppose that is -dimensional and non-singular, then:

(17)

Obviously, setting is one solution. The full set of solutions defining the null space is given by:

(18)
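To make this concrete, here is a minimal numerical sketch of the solution set in Equations (16)–(18), with all dimensions chosen purely for illustration. Note that I use the pseudoinverse and an SVD-based null-space basis below rather than the LU route described in the text, simply because they are the most convenient one-liners in numpy; the parameterization being checked is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: more attributes than stocks, so the system is underdetermined.
S, A = 50, 120
X = rng.standard_normal((S, A))            # stand-in for the (stock x attribute) matrix
z_true = np.zeros(A)
z_true[rng.choice(A, size=6, replace=False)] = rng.choice([-1.0, 1.0], size=6)
v = X @ z_true                             # observed stock values

# One particular solution (the minimum-norm one, via the pseudoinverse).
z_p = np.linalg.pinv(X) @ v

# A basis for the null space of X: the right singular vectors whose singular
# values vanish. Since X is S x A with full row rank here, that leaves A - S columns.
_, _, Vt = np.linalg.svd(X)
N = Vt[S:].T                               # shape (A, A - S)

# Every z_p + N @ w solves the system, for any weight vector w.
w = rng.standard_normal(A - S)
z_alt = z_p + N @ w
max_residual = np.max(np.abs(X @ z_alt - v))
```

The point of the exercise is the size of the solution set: the arbitrageur’s search runs over the whole (A − S)-dimensional family above, not a single point.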

Thus, if it takes flops to factor into and flops to solve each linear system of the form , then the total cost of parameterizing all the solutions is:

(19)

Via the LU factorization method, I know that the factorization step will cost roughly:

(20)

Moreover, we know from Section 3 that the cost of the solve step will be on the order of . However, there is still one detail left to consider: arbitrageurs don’t know . Thus, they have to solve for both and by starting at and iterating on the above process until the columns of actually represent a basis for the null space of . The total effort needed is:

(21)

where is the convergence rate and the calculation is dominated by the effort spent searching through the null space to be sure that is correct. More broadly, this step is just one way of capturing the deeper idea that knowing where to look is hard. e.g., Warren Buffett says that he “can say no in seconds or so to or more of all the [investment opportunities] that come along.” This is great… until you consider how many investment opportunities Buffett runs across every single day. Saying no that quickly, over and over again, turns out to be quite a chore! Alternatively, as the figure above highlights, this is why traders use personalized multi-monitor computing setups that make it easy to spot patterns instead of a shared supercomputer with minimal output.

## 5. Clock Time Interpretation

Is flops a big number? Is it a small number? Flop counts were originally used when floating-point operations were the main computing bottleneck. Now, factors relating to how data are stored, such as cache boundaries and locality of reference, have first-order effects on computation time as well. Nevertheless, flop counts can still give a good back-of-the-envelope estimate of the relative amount of time it would take to execute a procedure, and such a calculation is helpful for interpreting the unit of measurement “flops” on a human scale. e.g., on one hand, arbitrageur effort would be a silly constraint to worry about if the time it took to execute real-world calculations were infinitesimally small. On the other hand, flops might be a poor unit of measure for arbitrageurs’ effort if the time it took to carry out reasonable calculations were on the order of millennia, since arbitrageurs clearly don’t wait this long to act! Actually doing a quick computation can allay these fears.

Suppose that computers can execute roughly operations per second. Millions of instructions per second (i.e., ) is a standard unit of computational speed. I can then calculate the amount of time it would take to execute a given number of flops at a speed of as:

(22)

Thus, if there are roughly characteristics that can take on different levels and out of every attributes realizes a shock each period, then, even if arbitrageurs guess the number of shocked attributes exactly right (i.e., so that ), a brute-force search would take days to complete. Clearly, a brute-force search strategy just isn’t feasible. There just isn’t enough time to physically do all of the calculations.
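As a concrete back-of-the-envelope version of this conversion, here is the arithmetic behind Equation (22). Every number below (the machine speed, the example flop count) is a placeholder of my own choosing, since the figures in the text are order-of-magnitude estimates anyway.

```python
# Convert a flop count into wall-clock time at an assumed machine speed.
MIPS = 10_000                      # hypothetical speed: 10^4 million instructions/sec
OPS_PER_SEC = MIPS * 1_000_000     # i.e., 10^10 operations per second

def flops_to_days(flops):
    """Seconds = flops / ops-per-second; then convert seconds to days."""
    return flops / OPS_PER_SEC / (60 * 60 * 24)

# e.g., a hypothetical brute-force search costing 10^17 flops:
days = flops_to_days(1e17)         # roughly 116 days
```

Changing the assumed speed just rescales the answer linearly, which is why order-of-magnitude flop counts are the object of interest rather than exact timings.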

## 6. A Persistent Problem

I conclude by addressing a common question. You might ask: “Won’t really fast computers make cognitive control irrelevant?” No. Progress in computer storage has actually outstripped progress in processing speeds by a wide margin. This is known as Kryder’s Law. Over years the cost of processing has dropped by a factor of roughly (i.e., Moore’s Law). By contrast, the cost of storage has dropped by a factor of over the same period. e.g., take a look at the figure below made using data from www.mkomo.com which shows that the cost of disk space decreases by each year. What does this mean in practice? Well, as late as 1980 a hard drive cost , implying that a hard drive would have cost upwards of . These days you can pick up a drive for about ! We have so much storage that finding things is now an important challenge. This is why we find Google so helpful. Instead of being eliminated by computational power, cognitive control turns out to be a distinctly modern problem.

# Many Assets with Attribute-Specific Shocks

## 1. Motivation and Outline

Asset pricing models tend to focus on a single stock that realizes a normally distributed value shock of undefined origins. e.g., think of Kyle (1985) as a representative example. This is a great starting point; however, massive size and dense interconnectedness are key features of financial markets. Studying a financial market without these features is like studying dry water. In this post I suggest a simple way to modify the standard payout structure to allow for many assets and attribute-specific shocks.

What do I mean by attribute-specific shocks? To illustrate, have a look at the figure below, which shows the most common topics that came into play when journalists from the Wall Street Journal wrote about Micron Technology from 2001 to 2012. The figure reads that: “If you select a Wall Street Journal article that mentioned Micron Technology in the abstract at random, then there is a chance that ‘Antitrust’ is a listed subject.” Here’s the key point. When news about Micron Technology emerged, it was never just about Micron Technology. Journalists wrote about a particular SEC investigation, or a technology shock affecting all hard disk drive makers, or the firms currently active in the mergers and acquisitions market, etc. Value shocks are physical. They are rooted in particular events affecting subsets of stocks.

A big market with attribute-specific shocks means perspective matters. Consider a real-world example. Khandani and Lo (2007) wrote about the ‘Quant Meltdown’ of 2007 that “the most remarkable aspect of these hedge-fund losses was the fact that they were confined almost exclusively to funds using quantitative strategies. With laser-like precision, model-driven long/short equity funds were hit hard on Tue Aug th and Wed Aug th, despite relatively little movement in [the average level of] fixed-income and equity markets during those days and no major losses reported in any other hedge-fund sectors.” Every individual stock was priced correctly, yet there was still a huge multi-stock price movement in a particular subset of stocks. Here’s the kicker: You would never have noticed this shock unless you knew exactly where to look!

## 2. Payout Structure

In Kyle (1985) there is a single stock with a fundamental value distributed as . Suppose that, instead, there are actually stocks that each have different payout-relevant characteristics. Every characteristic can take on distinct levels. I call a (characteristic, level) pairing an ‘attribute’ and use the indicator variable to denote whether or not a stock has an attribute. Think about attributes as sitting in a -dimensional matrix, , as illustrated in Equation (1) below:

(1)

I’ve highlighted the attributes for Micron Technology. e.g., we have that while since Micron Technology is based in Boise, ID while Western Digital is based in SoCal.

Further, suppose that each stock’s value is then the sum of a collection of attribute-specific shocks:

(2)

where the shocks are distributed according to the rule:

(3)

Each shock indicator denotes whether or not the corresponding attribute happened to realize a shock. The amplitude term represents the size of all shocks in units of dollars per share, and the probability term represents the chance of either a positive or negative shock to an attribute each period.
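A short simulation makes the bookkeeping in Equations (1)–(3) concrete. The sizes and parameter values below are hypothetical, picked only so the matrices stay small:

```python
import numpy as np

rng = np.random.default_rng(0)

S, C, L = 500, 20, 10        # stocks, characteristics, levels per characteristic
A = C * L                    # total number of (characteristic, level) attributes
delta, p = 1.0, 0.05         # shock amplitude ($/share) and shock probability

# Each stock draws one level per characteristic, giving a binary (S x A) matrix
# with exactly C ones per row: X[n, a] = 1 iff stock n has attribute a.
levels = rng.integers(0, L, size=(S, C))
X = np.zeros((S, A))
X[np.arange(S)[:, None], np.arange(C) * L + levels] = 1.0

# Attribute-specific shocks: +delta w.p. p/2, -delta w.p. p/2, and 0 otherwise.
u = rng.random(A)
z = np.where(u < p / 2, delta, np.where(u < p, -delta, 0.0))

# Each stock's value is the sum of the shocks to its own attributes.
v = X @ z
```

Each row of X plays the role of a row in the attribute matrix of Equation (1), and v stacks up the attribute-shock sums of Equation (2).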

You could also add the usual factor exposure and firm-specific shocks to the model:

(4)

I’ve excluded these terms for clarity since they are not new. You might be wondering: “Aren’t these attribute-specific shocks captured by a covariance matrix, though?” No. The covariance between any two assets in this setup is:

(5)

where the first corresponds to the number of characteristics, the term denotes the probability that both stocks have the same level for a particular characteristic, the term denotes the probability that the attribute realizes a shock, and the term denotes the squared attribute-specific shock. The takeaway from this calculation is that the covariance matrix is completely flat (i.e., it doesn’t matter which pair of stocks you compare) and arbitrarily small.
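The term-by-term reading of Equation (5) — the number of characteristics, times the probability of a shared level, times the shock probability, times the squared shock — is easy to verify by simulation. The parameter values below are again hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

C, L = 20, 10                  # characteristics and levels per characteristic
delta, p = 1.0, 0.05           # shock amplitude and shock probability
T = 200_000                    # simulated periods

def draw_shocks(shape):
    """Attribute shocks: +delta w.p. p/2, -delta w.p. p/2, 0 otherwise."""
    u = rng.random(shape)
    return np.where(u < p / 2, delta, np.where(u < p, -delta, 0.0))

# For each characteristic, two random stocks share the same level w.p. 1/L.
# Where they share, they see the same shock; elsewhere their shocks are independent.
shared = rng.random((T, C)) < 1 / L
z1 = draw_shocks((T, C))
z2 = draw_shocks((T, C))
vi = z1.sum(axis=1)
vj = np.where(shared, z1, z2).sum(axis=1)

mc_cov = np.cov(vi, vj)[0, 1]
analytic_cov = C * (1 / L) * p * delta**2      # flat and small
```

The Monte Carlo estimate lands on the analytic value, and nothing about the calculation depends on which pair of stocks you picked — which is exactly the sense in which the covariance matrix is flat.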

Lots of things that you might think of as explained by constant covariance aren’t. e.g., the figure above shows the maximum industry-specific contribution to daily return variance from January 1976 to December 2011 using the methodology in Campbell, Lettau, Malkiel, and Xu (2001). The vertical text at the bottom gives the name of the industry with the largest industry-specific contribution to daily return variance each month any time it changes from the previous month. The figure reads that: “While traders can usually expect to understand no more than of a typical firm’s variation in daily returns, there are times such as in when this figure suddenly jumps to over . What’s more, the density of the text along the base of the figure shows that the important (i.e., extremal) industry regularly changes from month to month.”

## 3. Approximation Error

One of the nice features of this reformulation of the usual normal value shocks is that, although it changes the interpretation of where each firm’s value comes from, it doesn’t alter any of the Gaussian structure of the problem. i.e., the normal approximation to the binomial distribution says that:

(6)

where the “ish” means that there is a small and easy-to-compute approximation error. e.g., consider the collection of attribute-specific shocks for asset , , with , , and , and define the normalized sum with the cumulative distribution function . Then, we know via the central limit theorem that as where is the standard normal distribution.

Moreover, the Berry-Esseen Theorem says that:

(7)

where the second equals sign applies only in the special case of the sum of binomially distributed random variables. The figure above shows how well this approximation holds as the number of payout-relevant characteristics, , increases from to in a world where . I compute the -axis on a grid of unit length . If there are firms with values that typically range over an area of , then in a world with payout-relevant characteristics only stocks will be misvalued by a mere if you use the normal approximation to the binomial distribution rather than the true distribution. Thus, less than dollar in isn’t accounted for by the approximation:

(8)
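Here is one way to see how small the approximation error actually is: compute the exact distribution of a sum of attribute-specific shocks by repeated convolution and measure its maximum CDF distance from the matching normal. The shock parameters below are hypothetical, and the Berry-Esseen bound in Equation (7) caps the distance from above.

```python
import numpy as np
from math import erf, sqrt

n, delta, p = 100, 1.0, 0.05     # number of shocks summed, amplitude, shock probability

# Exact pmf of the sum of n iid shocks, each +delta w.p. p/2, -delta w.p. p/2, 0 otherwise.
# Repeated convolution builds the pmf on the lattice -n*delta, ..., +n*delta.
kernel = np.array([p / 2, 1 - p, p / 2])
pmf = np.array([1.0])
for _ in range(n):
    pmf = np.convolve(pmf, kernel)
support = delta * np.arange(-n, n + 1)

# Normal benchmark: mean 0, variance n * p * delta^2, evaluated with a continuity
# correction at the midpoints between lattice points.
sigma = sqrt(n * p) * delta
F_exact = np.cumsum(pmf)
F_normal = np.array([0.5 * (1 + erf((x + delta / 2) / (sigma * sqrt(2)))) for x in support])
kolmogorov_distance = float(np.max(np.abs(F_exact - F_normal)))
```

The resulting distance is a few percent at most with these (assumed) parameters, which is the sense in which the normal approximation leaves very little firm value unaccounted for.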

By contrast, the figure below shows the moving average of the percent of the variance in firm-level daily returns explained by market and industry factors over the time period from January 1976 to December 2011 using the methodology from Campbell, Lettau, Malkiel, and Xu (2001). This figure reads that: “For a randomly selected stock in 1999, market and industry considerations only account for around of its daily return variation.” In other words, the usual factor models typically account for less than half of the fluctuations in firm value. i.e., they are orders of magnitude less precise than the approximation error!

## 4. Whose Perspective?

You might ask: “Why bother adding this extra structure?” In a big market with attribute-specific shocks, perspective matters. This is the punchline. Asset values and attribute-specific shocks essentially carry the same information since:

(9)

However, knowing the value of an asset tells you very little about whether any particular one of its attributes has realized a shock. Similarly, knowing whether an attribute has realized a shock is a really noisy signal about the value of any particular stock with that attribute.

To see how this duality might affect asset prices, consider a simple example. e.g., suppose that we are in a multi-period Kyle (1985)-type world where value investors know the fundamental value of a particular stock, and they place orders with a market maker who processes only the order flow for that particular stock. It could well be the case that market makers price each stock correctly on average:

(10)

Yet, the high dimensionality of the market would mean that there could still be groups of mispriced stocks:

(11)

where denotes the sample average price for stocks with a particular attribute, . This is a case of more is different. If an oracle told you that for some attribute , then you would know that the average price of stocks with attribute would be:

(12)

since value investors would have an incentive to delay trading in a dynamic model. i.e., will be less than its fundamental value even though it will be easy to see that as .
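To see the duality numerically, here is a sketch of the oracle thought experiment: one attribute realizes a known positive shock, and we compare what a single stock’s value versus an attribute-sorted average reveals about it. All sizes are hypothetical and deliberately large, so the averaging has room to work.

```python
import numpy as np

rng = np.random.default_rng(4)

S, C, L = 200_000, 20, 1_000   # stocks, characteristics, levels (hypothetical sizes)
delta, p = 1.0, 0.05           # shock amplitude and shock probability

# Each stock draws one level per characteristic; shocks hit (characteristic, level) pairs.
levels = rng.integers(0, L, size=(S, C))
u = rng.random((C, L))
z = np.where(u < p / 2, delta, np.where(u < p, -delta, 0.0))

# Oracle scenario: attribute (characteristic 0, level 0) realizes a +delta shock.
z[0, 0] = delta

# A stock's value is the sum of the shocks to its C attributes.
v = z[np.arange(C), levels].sum(axis=1)

# Any single holder's value mixes the known shock with C - 1 others, so it is a
# noisy signal of z[0, 0]. The attribute-sorted average, by contrast, sits close
# to delta because the other shocks largely wash out across the diversified holders.
holders = levels[:, 0] == 0
avg_over_holders = v[holders].mean()
```

This is exactly the sense in which the oracle’s tip pins down the attribute-sorted average without pinning down any single stock’s value.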

There are way more payout-relevant attributes than anyone could ever investigate in a single period. This is why Charlie Munger explains that it’s his job “to find a few intelligent things to do, not to keep up with every damn thing in the world.” If we think about each stock as a location in a “spatial” domain and the attribute-specific shocks as particular points in a “frequency” domain, this result takes on the flavor of a generalized uncertainty principle. i.e., it’s really hard to simultaneously estimate the price of a portfolio at both very fine scales (i.e., containing a single asset) and very low frequencies (i.e., affecting every stock with an attribute).