Mo’ Compute Mo’ Problems

Category: Note, Narrative, Rabbit Hole

As part of the ET 2.0 expanded sandbox, I’ve asked Neville Crawley to write a weekly-ish “Down the Rabbit Hole” column with his observations on what he calls Big Compute, I call non-human intelligences, and the rest of the world calls AI. This is the biggest revolution in markets and the world today.

Neville will be publishing under his own byline in the near future — his commentary continues below.

Enjoy.

-Ben

Hard problems

Someone tweeted this cartoon at me last week, presumably in angry response to an Epsilon Theory post, as the Tweet was captioned “My feelings towards ‘A.I.’ (and/or machine learning) and investing”:

Source: xkcd

To be clear: YES, I AGREE

Unsurprisingly, we humans are pretty competent creatures within the domains we have contrived (such as finance) and have spent decades practicing. So in 2017 it is, generally, still hard (and expensive) to quickly build a machine that is consistently better than a human at even a thin, discrete sliver of a complex, human-contrived domain.

The challenge, as the cartoon humorously suggests, is that it is often difficult (and sometimes impossible) to know in advance just how hard it will be for a machine to best a human at a given problem.

BUT, what we do know is that once an ML/AI-driven machine dominates, it can truly dominate, and it is incredibly rare for humans to gain the upper hand again (although there can be periods of centaur dominance, like the ‘Advanced Chess’ movement).

As a general heuristic, I think we can say that tasks at which machines are now end-to-end better than humans have one or more of the following characteristics:

  • they are fairly simple, discrete tasks that require repetition without error (AUTOMATION)
  • and/or they operate at extremely large data scale (BIG DATA)
  • and/or they involve computational complexity and/or demand great speed (BIG COMPUTE)
  • and/or a ‘human in the loop’ degrades the system (AUTONOMY)

But equally, there are still many tasks at which machines are nowhere close to reaching human parity, mostly those involving ‘intuition’, or those requiring many, many models plus the judgment to know when to combine or switch between them.

Will machines eventually dominate all? Probably. When? Not anytime soon.

The key, immediate, practical point is that the current over-polarization of the human-oriented and machine-oriented populations, particularly in the investing world, is both a challenge and an opportunity, as neither sect fully utilizes the capabilities of the other. There is a good Bloomberg article from a couple of months back on Point72’s and BlueMountain’s struggles to reconcile the two within an existing environment.

The myth of superhuman AI

At the other end of the spectrum from our aforementioned Tweeter are those who predict superhuman AIs taking over the world.

I find this argument very bogus for anything like the foreseeable future, for reasons laid out very well by Kevin Kelly (of Wired, Whole Earth Review, and Hackers’ Conference fame) in this lengthy essay.

The crux of Kelly’s argument:

  • Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  • Humans do not have general purpose minds and neither will AIs.
  • Emulation of human thinking in other media will be constrained by cost.
  • Dimensions of intelligence are not infinite.
  • Intelligences are only one factor in progress.

Key quote:

Instead of a single line, a more accurate model for intelligence is to chart its possibility space. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystems. And in that sense, the different component nodes of thinking are co-dependent and co-created. Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spacial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition.

(BTW: Kevin Kelly has led an amazing life – read his bio here.)

Can’t we just all be friends?

On somewhat more prosaic uses of AI, the New York Times has a nice human-angle piece on the people whose job is to train AI to do their own jobs. My favorite line, about the legal AI trainer: “Mr. Rubins doesn’t think A.I. will put lawyers out of business, but it may change how they work and make money. The less time they need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.” Oh, boy!

Valley Grammar

And finally, because it just really tickles me in a funny-because-it’s-true way: Benedict Evans’ (of a16z) guide to the (Silicon) Valley grammar of IP development and egohood:

  • I am implementing a well-known paradigm.
  • You are taking inspiration.
  • They are rip-off merchants.

So true. So many attorneys’ fees. Better rev up that AI litigator.

epsilon-theory-rabbit-hole-ben-hunt-may-16-2017.pdf (372KB)