The Tao of Portfolio Management

“Shape clay into a vessel;
It is the space within that makes it useful.
Cut doors and windows for a room;
It is the holes which make it useful.
Therefore benefit comes from what is there;
Usefulness from what is not there.”
– Lao Tzu

“The limits of my language means the limits of my world.”
– Ludwig Wittgenstein

“The question is not what you look at, but what you see.”
– Henry David Thoreau

“A European says: I can’t understand this. What’s wrong with me?
An American says: I can’t understand this. What’s wrong with him?”
– Terry Pratchett

I want to start this note in a manner that’s sure to annoy some readers, and that’s to reference the George Zimmerman trial. If gold is the third rail of financial commentary (“How Gold Lost its Luster”), then the Zimmerman trial must be the Death Star planet-destroying laser beam of such notes. But the shaping of the post-trial Zimmerman Narrative is a precise example of the behavioral phenomenon that I want to examine this week. So with considerable trepidation, here goes …

As discussed in last week’s note (“The Market of Babel”), groups speaking different languages – whether it’s an everyday language like English or Japanese, or an investing language like Value or Growth – have both a translation friction to overcome in inter-group communications and a potential dislocation of meaning in vocabulary and grammar. This latter problem is far more insidious and injurious to joint utility functions than the former, and the post-trial “conversation” between groups that support the Zimmerman verdict and groups that are appalled by the Zimmerman verdict is a perfect example of the problem of meaning. In fact, there is no conversation possible here at all, because each group is seeing the same observable data points through very different perceptual lenses. The chasm of meaning between these two groups is formed by an ecological divide, which is also a common source of meaning disparity in market communications and languages. As such, it is well worth our attention in Epsilon Theory.

An ecological divide is a difference in perception of useful signal aggregation. In the Zimmerman case, those appalled by the verdict are seeing the broad social context of the available information. How is it possible, they ask, for a society to allow an unarmed black minor walking home from a store to be shot dead with no legal sanction against his killer? Those supportive of the verdict, on the other hand, are seeing the individual instantiation of the available information in this particular case. How is it possible, they ask, to evaluate the specific evidence presented in this specific trial against the specific legal charges and fail to deliver a Not Guilty verdict?

The Western system of jurisprudence is based on liberal notions (that’s John Stuart Mill liberalism, not Walter Mondale liberalism) of the primacy of individual rights, as opposed to communitarian notions of aggregate social utility. What this means is that the rules of the trial-by-jury game, from jury instructions to allowed evidence, are set up to focus attention on specific fact patterns relevant to a specific defendant. And as a result, it makes a lot of sense (to me, anyway) that Zimmerman was found Not Guilty by virtue of reasonable doubt regarding the specific charges levied against him. On the other hand, the rules of the game for the Western system of political representation do not give a whit about individual rights, but favor the ability to mobilize like-minded groups of citizens on the basis of widely-held social grievances. So it also makes a lot of sense to me that the political dynamics outside the courtroom treat Zimmerman-like actions (and Zimmerman individually as a member of the Zimmerman-like set) as unjust and the object of sanction.

Each perspective is entirely valid within its relevant sphere of aggregation, and each perspective is extremely problematic in the other sphere. To deny the existence of racial bias in the aggregate data regarding crime and punishment in the United States – the application of the death penalty, for example – is, in my opinion, like denying that the Earth goes around the sun. However, this does NOT mean that ANY individual death penalty case, much less every death penalty case, is necessarily racially biased or that racial bias was a meaningful cause of any death penalty decision. I know this seems counter-intuitive … how can a population have a meaningful attribute in the aggregate, but no individual member of that population demonstrate that attribute in a meaningful way? … but it’s the truth. Or rather, it’s the inescapable conclusion of a consistent application of statistical logic.

Systems that demonstrate this sort of ecological divide are much more common than you might think, and are at the heart of any tool or technology that utilizes large numbers of small signals – each of which is inconsequential in its own right – to create or observe a meaningful signal in the aggregate. For example, the gamma knife technology used to shrink inoperable cancerous tumors works in this manner, by focusing hundreds or thousands of weak radiation beams from multiple directions on a cluster of cells. No single beam is meaningfully dangerous in and of itself (it cannot be, or the healthy cells in the path of any one beam would be injured), but the combination of many of these beams is deadly to a cell. Each individual beam of radiation is “Not Guilty” of causing irreparable harm to any individual cell, and no individual beam is biased/targeted specifically to any type of cell. But the overall system is a superb killer of cells subject to the bias/targeting of the system. The effectiveness of the gamma knife technology is entirely based on statistical assessments of probabilistic outcomes of cellular damage when exposed to a burst of radiation, both at the individual and aggregate levels. Because the radiation bursts can be reduced to multiple rays with very small individual impacts (probabilistically speaking), an ecological threshold can be calculated and implemented to create a potent cancer treatment therapy.
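To put rough numbers on that ecological threshold idea, here is a minimal sketch in Python. Every figure is invented for illustration (this is not real radiosurgery dosimetry); the point is only that a dose which is trivial along any single beam’s path becomes decisive where all the beams intersect.

```python
# Toy illustration of the gamma knife idea: many individually weak signals,
# decisive only where they overlap. All numbers are made up for illustration.
import math

n_beams = 200          # weak beams, each entering from a different direction
dose_per_beam = 0.05   # arbitrary dose units delivered by a single beam
kill_threshold = 5.0   # cumulative dose above which a cell is destroyed

def p_damage(dose, scale=2.0):
    """Probability that a cell is irreparably damaged by a cumulative dose."""
    return 1.0 - math.exp(-dose / scale)

# A healthy cell lying in the path of just one beam:
print(f"single-beam dose {dose_per_beam:.2f} -> "
      f"damage probability {p_damage(dose_per_beam):.1%}")

# A cell in the targeted cluster, where all the beams intersect:
focal_dose = n_beams * dose_per_beam
print(f"focal dose {focal_dose:.1f} -> "
      f"damage probability {p_damage(focal_dose):.1%}, "
      f"exceeds kill threshold: {focal_dose > kill_threshold}")
```

No single beam clears the threshold; only the aggregate does.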


It’s no accident that technologies like the gamma knife are largely computer-driven, because humans are remarkably poor calculators of ecological thresholds. The human brain has evolved over millions of years and we have trained ourselves for hundreds of thousands of years to be very effective social animals making ecological inferences on a scale that makes sense for small group survival on an African savannah, not the smooth functioning of a mass society that spans a continent and has hundreds of millions of members. As a result, we are hard-wired to underestimate the cumulative impact of massive numbers of small signals that form part of a consistent system, and we consistently overestimate the contribution of any single signal when we focus on the aggregate outcome. That latter decision-making mistake, where individual characteristics are improperly inferred from aggregate characteristics, has a name. It’s an ecological fallacy, and it’s an inherent problem for every aspect of human society in the modern age of massive aggregation, from the effective operation of a system of justice to the effective operation of a system of market exchange.

In the case of a justice system, the meaning of Trayvon Martin’s death is different when seen through the lens of individual rights at trial than when seen through the lens of social utility at large. What happened in Sanford was an instantiation of what I believe is a demonstrably unjust and racially biased system, and it deserves political action to recalibrate the societal gamma knife machine that ends up killing black cells preferentially over white cells. But that doesn’t mean that Zimmerman the individual was necessarily guilty of any crime, and to conclude that he is racially biased to a criminal degree because his actions form part of an unjust and racially biased system is an ecological fallacy. Such a conclusion is natural and all too human, but it is also illogical and unjust. It’s also a difficult point to fit into a soundbite for Fox or MSNBC, so I imagine that the demonization of both sides and the further polarization of American society will proceed with all deliberate speed.

In the case of a system of market exchange, I want to make two points about the impact of ecological divides and the hard-wired human tendency to make poor decisions under the influence of an ecological fallacy. The first, which I’ll only note briefly today but will describe in much more detail in subsequent weeks, is that it’s crucial for any investor to understand the basics of computer-driven methodologies of ecological inference. These methodologies, which fall under the rubric of Big Data, are driving revolutionary applications in fields as diverse as medicine, oil and gas exploration, and national security (this is the technology that underpins the recently revealed NSA monitoring program of mobile telephone meta-data). The technology has made some inroads within the financial services industry, particularly in the liquidity operations of market-makers (see “The Market of Babel”), but is surprisingly absent in risk management and security selection applications. It’s coming. And when these technologies do arrive, their impact on investing and money management will be as significant as that of the telegraph or the semiconductor. My hope is that Epsilon Theory will play some role in that arrival, both as a herald and as a provider.

The second point is that there is a huge ecological divide between investors, based on – as with all ecological divides – the perceived level of useful signal aggregation. When market participants describe themselves as bottom-up or fundamental investors, they typically mean that they base their decisions on signals pertaining to individual securities. When market participants describe themselves as top-down or macro investors, they typically mean that they base their decisions on signals pertaining to an aggregated set of securities, perhaps an entire asset class of securities. For both bottom-up investors and top-down investors the English language uses the same word – “portfolio” – to describe the collection of securities that they own. But there is an enormous difference in meaning between a collection of securities that is seen and understood as an aggregate collection of securities versus a collection of securities that is seen and understood in terms of the individual members of that collection. The meaning of portfolio construction and risk management is very different when seen through the lens of a bottom-up stock-picking strategy than when seen through the lens of a top-down macro strategy, and the impact of this difference is underappreciated by investors, managers, allocators, and service providers.

To a top-down investor the portfolio IS the unit of analysis. A portfolio of securities is created for the express purpose of creating some set of characteristics in the aggregate. A top-down investor is trying to make a tasty stew, and the individual components of a portfolio are nothing more than ingredients that are intended to be blended together according to whatever recipe the portfolio manager is following. Securities are chosen solely for their contribution to the overall portfolio, and their usefulness is defined solely by that contribution. Individual securities have no meaning in and of themselves to a top-down investor, as it is the portfolio itself which is vested with meaning and is the object of the investor’s behavior.

To a bottom-up investor it is tempting to think of the portfolio as the unit of analysis, because it’s the performance of the portfolio that generates a manager’s compensation. But it’s not. To a bottom-up investor a portfolio is a collection of individually-analyzed exposures, where all the meaning resides in the individual exposures. It’s a “portfolio” simply because the bottom-up investor owns several individual exposures and that’s the word the English language gives to the owning of several individual exposures, not because there was any attempt to create or achieve some set of aggregate characteristics by owning several individual exposures. To use the imagery of Lao Tzu, a portfolio is a clay vessel to a fundamental investor, a provider of empty space that holds what is meaningful, rather than something that is meaningful in and of itself. The existence of a portfolio is an epiphenomenon to the behavior of a fundamental investor, not the object of that behavior, and to treat it as more than that or differently from that is a mistake.

Okay, Ben … that’s a very poetic metaphor. But what’s the problem here in concrete terms?

Both the bottom-up and top-down perspectives are demonstrably valid and effective within their own spheres. But when those spheres blur within investment decision-making you’ve got a problem. For a top-down portfolio manager this usually takes the form of imbuing an individual security with meaning (“Hmm … I think I will choose this stock to express the characteristic I want to have in my portfolio because I heard that it might be the target of a proxy fight. It’s like a free call option, right?”), and for a bottom-up portfolio manager this usually takes the form of tinkering with individual exposures in order to adjust or mitigate some portfolio-level attribute (“Hmm … I’m 40% net beta long and I’m really worried about this market. I better cut some of my high beta longs, maybe add some S&P puts. Gotta manage risk, right?”). Both of these behaviors fall into the chasm of the ecological divide, and the latter in particular is an expression of an ecological fallacy, no different in its logical inconsistency than believing that Zimmerman the individual should have been found guilty because he is part of a large set of individuals and actions that bear responsibility in the aggregate for a significant social iniquity.

The ecological fallacy expressed by tinkering with the individual exposures of a bottom-up, stock-picking portfolio happens all the time, in large part because these portfolios are typically judged and evaluated with the same tools and the same criteria used for top-down portfolios. A bottom-up portfolio manager is absolutely inundated with signals of all sorts about the aggregate characteristics of his portfolio … scenario analyses, volatilities, betas, correlations, active weights, gross and net exposures, etc. … and everyone knows that it’s critical to manage your exposure to this factor and that factor, that you should seriously consider a “trading overlay” or a “volatility hedge” to your portfolio. Or so we are told. And so we believe, because every institutional investor asks the same questions and collects the same performance and exposure data based on aggregate portfolio characteristics. We believe that everyone knows that everyone knows that it’s critical to manage your exposure to this factor or that factor, and thus it becomes Common Knowledge. And once it becomes Common Knowledge, then even if a fundamental investor privately believes that this is all hokum for the way he manages money, it doesn’t matter. The dynamics of the game are such that the rational choice is to go along with the Common Knowledge, else you are the odd man out. The Common Knowledge game is rampant in the business of money management, in exactly the same way that it is rampant in the intrinsic market activities of managing money.

The best stock-picking portfolio managers I know ignore 99% of the portfolio-level data they are bombarded with, and good for them! A logically consistent bottom-up portfolio manager does not “manage to” some target Volatility or Sharpe Ratio or any other aggregate portfolio characteristic, because it makes no sense given what a portfolio means to a logically consistent fundamental investor. Again to refer to Lao Tzu, portfolio and risk management tools are more useful to the fundamental investor for what they leave out: they should omit the measures and algorithms that do not make sense for the purpose or meaning of “portfolio” in the context of investing in individual securities.

But does that mean that fundamental investors are destined to fly by the seat of their pants through what is a decidedly foggy and stormy environment? Are there no effective instruments or tools that can help allocators and managers understand what makes one stock-picking portfolio different from or better than another? I think that there are – or rather, could be – but these instruments need to be designed on the basis of what a portfolio means to a bottom-up investor, not what a portfolio means to a top-down investor. Unfortunately, every portfolio risk management tool or concept on the market today (to my knowledge) is based on the top-down investor’s perspective of portfolio-as-tasty-stew, as the direct object of analysis for the risk management tool, rather than the bottom-up investor’s perspective of portfolio-as-clay-vessel, as the indirect object of analysis for the risk management tool.

So what is a useful way of evaluating a portfolio-as-clay-vessel? To answer that question we need to ask why a fundamental investor has a portfolio at all. Why not just have three or four very large positions in your highest conviction stock-picking investment ideas and call it a day? One answer, of course, is that this approach doesn’t scale very well. If you’re managing more than a hundred million dollars, much less several billion dollars, finding sufficient liquidity depth in your best ideas is at least as difficult a task as identifying the best ideas in the first place. But let’s leave this aside for now as a practical challenge to a highly concentrated portfolio, not a fundamental flaw.

The fundamental flaw with concentrating investment decisions in a handful of exposures is that any investment is an exercise in decision-making under uncertainty. All fundamental investors “know their companies” as well as they possibly can, but in this business you’re wrong about something every single day. And that’s fine. In fact, it’s perfectly fine to be wrong more often than you’re right, provided that you have a highly asymmetric risk/reward payoff associated with being right or wrong with your fundamental analysis. In the same way that you would think about your bets at a horse track in terms of the expected pay-off odds AND your assessment of the expected race outcome, so are the exposures within a bottom-up portfolio based on a joint view of the likelihood of being right about future events AND the pay-off for being right. Different managers have different business models and views about the types of bets and the time frames of bets that are right for them, but this is the common language for all bottom-up investment strategies.
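To make that horse-track arithmetic concrete, here is a minimal sketch; the probabilities and pay-offs below are invented, and the point is only that the attractiveness of a bet is the joint product of how often you expect to be right and what you are paid when you are.

```python
# Expected profit per unit risked on a single asymmetric bet.
# The probabilities and pay-offs are hypothetical.

def expected_value(p_win, payoff_if_win, loss_if_wrong):
    return p_win * payoff_if_win - (1.0 - p_win) * loss_if_wrong

# A heavy favorite at short odds: right three times out of four, paid little.
favorite = expected_value(p_win=0.75, payoff_if_win=0.20, loss_if_wrong=1.00)

# A long shot at long odds: wrong most of the time, paid a lot when right.
long_shot = expected_value(p_win=0.20, payoff_if_win=6.00, loss_if_wrong=1.00)

print(f"favorite:  {favorite:+.2f} per unit risked")   # -0.10
print(f"long shot: {long_shot:+.2f} per unit risked")  # +0.40
```

Being right less often can still be the better bet if the pay-off asymmetry is large enough.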

Thinking in terms of this joint probability function reveals why a bottom-up investor owns more than three or four exposures. Your best investment idea may not be (and in fact rarely is) the one where you are simply the most confident of the horse winning the race. It’s the one where you are most confident of the horse winning the race relative to the expected pay-off for winning the race and the expected loss for losing the race. Your best investment idea may well be (and in fact often is) based on a low probability event with a very high pay-off if you’re right and a reasonably low cost if you’re wrong, but you would be a fool to have a highly concentrated portfolio based solely on low probability events because the odds are too high that you will run into a streak of bad luck where none of the low probability events occur. Instead, you want your investment ideas to be sized in a way that maximizes the total of the expected returns from all of these individual joint probability calculations, but within a framework that won’t let a run of bad luck at the individual level put you out of business. That’s what a portfolio means to a bottom-up investor.
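Here is an equally stylized sketch of the sizing point, assuming independent and equally attractive low-probability ideas (all figures hypothetical): spreading the same expected return across more names collapses the odds that a run of bad luck takes out the whole portfolio.

```python
# Why a bottom-up investor owns more than a handful of exposures, even when
# every idea is the same attractive low-probability / high-pay-off bet.
# Figures are hypothetical and the ideas are assumed independent.

p_win = 0.20   # each idea pays off one time in five ...
payoff = 6.0   # ... but returns six times the stake when it does

for n_positions in (4, 25):
    stake = 1.0 / n_positions                    # equal-weight the ideas
    expected_return = n_positions * stake * (p_win * payoff - (1.0 - p_win))
    p_all_wrong = (1.0 - p_win) ** n_positions   # the run of bad luck
    print(f"{n_positions:>2} positions: expected return {expected_return:+.0%}, "
          f"chance that nothing pays off {p_all_wrong:.1%}")
```

Both portfolios carry the same +40% expected return; only the concentrated one has a roughly two-in-five chance that every idea fails in the same period.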

The language I just described – assessing risk and reward as a function of the probability of various informational outcomes and the pay-off associated with those outcomes – is called Expected Utility. It is the language of both Game Theory and Information Theory, and it is the language of the Epsilon Theory algorithms. In the same way that we can describe the informational surface of a security (see “Through the Looking Glass” and “The Music of the Spheres”), where price forms an equilibrium “trough” and the height of the “walls” around that trough represents the informational strength of the signal required to move the price outcome to a new equilibrium level, so can we describe the informational value of a specific portfolio exposure, where the vector (weight and direction) of that exposure versus the informational surface of the security represents the risk/reward asymmetry of that particular exposure from an Information Theory perspective. These individual informational values can be arrayed against probability distributions of new information coming into play for each individual security, and Monte Carlo simulations can then generate the optimal exposure weights for each individual security within the context of an overall business tolerance for bad luck. The resulting portfolio should be, by definition, the perfectly sized clay vessel to hold the securities chosen by the manager for their individual characteristics within a specified framework of business risk. The portfolio is the byproduct of the risk/reward attributes of the individual securities, not a directly constructed entity, and its own attributes (while measurable by traditional top-down tools if you care to do so) are relegated to the background where they belong.
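As a generic sketch of that final step (not the Epsilon Theory algorithms themselves), the code below collapses each security’s informational surface to a single hypothetical (probability of pay-off, pay-off, loss) triple and uses Monte Carlo simulation to find the exposure weights that maximize expected return while keeping the chance of a business-ending drawdown below a stated tolerance. Every input is invented.

```python
# Generic sketch: Monte Carlo sizing of individually-analyzed exposures within
# a stated tolerance for bad luck. Each idea is reduced to a hypothetical
# (probability of being right, pay-off if right, loss if wrong) per unit of
# exposure -- a stand-in for the informational surface described in the text.
import itertools
import random

random.seed(7)

ideas = [
    (0.55, 0.30, 0.20),   # high-conviction, modest pay-off
    (0.25, 2.00, 0.30),   # low-probability, high pay-off
    (0.40, 0.80, 0.40),   # somewhere in between
]

MAX_DRAWDOWN = 0.15      # business tolerance: lose no more than 15% of capital ...
RUIN_TOLERANCE = 0.05    # ... in no more than 5% of simulated outcomes
N_SIMS = 2_000

def simulate(weights):
    """Expected return and probability of breaching the drawdown limit."""
    total, breaches = 0.0, 0
    for _ in range(N_SIMS):
        pnl = sum(w * (win if random.random() < p else -lose)
                  for w, (p, win, lose) in zip(weights, ideas))
        total += pnl
        breaches += pnl < -MAX_DRAWDOWN
    return total / N_SIMS, breaches / N_SIMS

# Brute-force search over a coarse weight grid; a real tool would optimize.
grid = [w / 10 for w in range(11)]
best = None
for weights in itertools.product(grid, repeat=len(ideas)):
    if sum(weights) > 1.0:                       # unlevered, for simplicity
        continue
    exp_ret, p_ruin = simulate(weights)
    if p_ruin <= RUIN_TOLERANCE and (best is None or exp_ret > best[1]):
        best = (weights, exp_ret, p_ruin)

weights, exp_ret, p_ruin = best
print(f"weights {weights}: expected return {exp_ret:+.1%}, "
      f"P(drawdown > {MAX_DRAWDOWN:.0%}) = {p_ruin:.1%}")
```

The weights that come out are the portfolio; its aggregate characteristics are a byproduct of the individual inputs and the stated tolerance for bad luck, not a design target.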

I recognize that the preceding description is quite a mouthful, and the language is foreign to most readers, in particular most bottom-up investors. I mean … very few bottom-up investors read up on Simpson’s Paradox or the latest applications of negative binomial distributions in their spare time. A stock-picker reads 10-Q’s and bond covenants in his spare time. A stock-picker is fluent in the written language of financial statements and the body language of management one-on-ones, not the mathematical language of causal inference. But unfortunately there’s no getting around the mathematical language of statistical logic and causal inference whenever you start to aggregate complex things into complex collections of things, particularly when trillions of dollars are sloshing around in these complex aggregations. Without the structure and guard rails of mathematical tools and constructs, human decision-makers tend to fall into ecological chasms whenever they turn their focus from the individual level to the aggregate level and back to the individual level again.

The problem is that bottom-up investors have been ill-served by those who ARE fluent in these statistical languages. The available tools for portfolio construction and risk management aren’t guard rails at all to a bottom-up investor, but actually serve to encourage ecological fallacies and poor portfolio management. That’s because these tools were all designed from a top-down investment perspective, not out of malice or spite, but out of the intellectual hegemony that Modern Portfolio Theory exercises far and wide. It’s time for a re-conceptualization of these tools, one based on the truly fundamental language of Information and a recognition of the validity of different investment perspectives. That’s what I’m trying to achieve with Epsilon Theory.
