Epsilon Theory is Dr. Ben Hunt’s ongoing examination of the narrative machine driving human behavior, political policy and, ultimately, capital markets—an unconventional worldview best understood through the lenses of history, game theory and philosophy.
As part of the ET 2.0 expanded sandbox, I’ve asked Neville Crawley to write a weekly-ish “Down the Rabbit Hole” column with his observations on what he calls Big Compute, I call non-human intelligences, and the rest of the world calls AI. This is the biggest revolution in markets and the world today.
Neville will be publishing under his own byline in the near future — his commentary continues below.
Someone tweeted this cartoon at me last week, presumably in angry response to an Epsilon Theory post, as the Tweet was captioned “My feelings towards ‘A.I.’ (and/or machine learning) and investing”:
Unsurprisingly, we humans are pretty competent creatures within the domains we have contrived (such as finance) and spent decades practicing. So it is, generally, still hard (and expensive) in 2017 to quickly build a machine which is consistently better at even a thin, discrete sliver of a complex, human-contrived domain.
The challenge, as the cartoon humorously alludes to, is that it is currently often difficult (and sometimes impossible) to know in advance just how hard it will be for a machine to best a human at a given problem.
BUT, what we do know is that once an ML/AI-driven machine dominates, it can truly dominate, and it is incredibly rare for humans to gain the upper hand again (although there can be periods of centaur dominance, like the ‘Advanced Chess’ movement).
As a general heuristic, I think we can say that tasks at which machines are now end-to-end better have one or some of the following characteristics:
Are fairly simple and discrete tasks which require repetition without error (AUTOMATION)
and/or are extremely large in data scale (BIG DATA)
and/or have calculation complexity and/or require a great deal of speed (BIG COMPUTE)
and where a ‘human in-the-loop’ degrades the system (AUTONOMY)
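The heuristic above can be sketched as a simple checklist. This is an illustrative toy only; the task attributes, names, and example below are my own invention, not part of the original argument:

```python
# Toy sketch of the heuristic above: given a crude description of a task,
# report which of the four machine-dominance characteristics it exhibits.
# All attribute names and the example task are hypothetical.

def machine_dominance_signals(task: dict) -> list:
    """Return the subset of the four characteristics a task exhibits."""
    signals = []
    if task.get("simple_repetitive"):       # simple, discrete, error-free repetition
        signals.append("AUTOMATION")
    if task.get("data_scale_large"):        # extremely large data scale
        signals.append("BIG DATA")
    if task.get("compute_heavy_or_fast"):   # calculation complexity or raw speed
        signals.append("BIG COMPUTE")
    if task.get("human_degrades_loop"):     # a human in the loop degrades the system
        signals.append("AUTONOMY")
    return signals

# Hypothetical example: high-frequency market-making ticks three of four boxes.
hft = {"simple_repetitive": False, "data_scale_large": True,
       "compute_heavy_or_fast": True, "human_degrades_loop": True}
print(machine_dominance_signals(hft))  # → ['BIG DATA', 'BIG COMPUTE', 'AUTONOMY']
```

The more of these boxes a task ticks, the more likely it already sits on the machine side of the line.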
But equally, there are still many tasks at which machines are currently nowhere close to human parity, mostly tasks involving ‘intuition’, or those requiring many, many models plus the judgment to know when to combine or switch between them.
Will machines eventually dominate all? Probably. When? Not anytime soon.
The key, immediate, practical point is that the current over-polarization of the human-oriented and machine-oriented populations, particularly in the investing world, is both a challenge and an opportunity, as each sect is not fully utilizing the capabilities of the other. There’s a good Bloomberg article from a couple of months back on Point72’s and BlueMountain’s challenges in reconciling this within an existing environment.
The myth of superhuman AI
On the other side of the spectrum from our afore-referenced Tweeter are those who predict superhuman AIs taking over the world.
I find this to be a very bogus argument for anything like the foreseeable future, and the reasons why are very well laid out by Kevin Kelly (of Wired, Whole Earth Review and Hackers’ Conference fame) in this lengthy essay.
The crux of Kelly’s argument:
Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
Humans do not have general purpose minds and neither will AIs.
Emulation of human thinking in other media will be constrained by cost.
Dimensions of intelligence are not infinite.
Intelligences are only one factor in progress.
Instead of a single line, a more accurate model for intelligence is to chart its possibility space. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystem. And in that sense, the different component nodes of thinking are co-dependent and co-created. Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spacial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition.
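Kelly’s claim that “smarter than humans” is meaningless falls out naturally if you model an intelligence as a point in a multi-dimensional possibility space rather than a position on a single line: two points often cannot be ranked at all. A minimal sketch, with dimension names and scores invented purely for illustration:

```python
# Illustrative sketch of Kelly's point: intelligence as a vector of many
# cognitive dimensions, compared by a partial order rather than a single scale.
# Dimension names and all scores below are hypothetical.

def strictly_dominates(a: dict, b: dict) -> bool:
    """True only if a is at least as strong as b on every dimension
    and strictly stronger on at least one -- a partial order."""
    dims = a.keys() | b.keys()
    ge_all = all(a.get(d, 0) >= b.get(d, 0) for d in dims)
    gt_any = any(a.get(d, 0) > b.get(d, 0) for d in dims)
    return ge_all and gt_any

human = {"deduction": 0.6, "emotional": 0.9, "long_term_memory": 0.5}
chess_engine = {"deduction": 1.0, "emotional": 0.0, "long_term_memory": 0.9}

# Neither dominates the other, so "which is smarter?" has no answer.
print(strictly_dominates(chess_engine, human))  # → False
print(strictly_dominates(human, chess_engine))  # → False
```

On a single number line every pair of intelligences is comparable; in a possibility space of many dimensions, most pairs are not, which is exactly why the one-dimensional “superhuman AI” framing misleads.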
(BTW: Kevin Kelly has led an amazing life – read his bio here.)
Can’t we just all be friends?
On somewhat more prosaic uses of AI, the New York Times has a nice human-angle story on the people whose job is to train AI to do their own jobs. My favorite line from the legal AI trainer: “Mr. Rubins doesn’t think A.I. will put lawyers out of business, but it may change how they work and make money. The less time they need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.” Oh, boy!
And finally, because it just really tickles me in a funny-because-it’s-true way: Benedict Evans’ (of a16z) guide to the (Silicon) Valley grammar of IP development and egohood:
I am implementing a well-known paradigm.
You are taking inspiration.
They are rip-off merchants.
So true. So many attorneys’ fees. Better rev up that AI litigator.