There are two ways that people think about AI risk, and I am not a big fan of either of them.

One is the “paper clip” scenario. Some group of software engineers is working on AI, and all of a sudden, it works. Boom: there is now a superintelligent AI, so much smarter than people that it takes over the world. Now it does whatever it wants, probably some crazy thing like turning the world into paper clips. The solution is to develop better programming methods, so that when you build an all-powerful AGI (artificial general intelligence), it will do good for humanity rather than turning us all into paper clips.

The other is the “management consultant” method of analyzing risk. You basically list off a bunch of practical problems with AI that don’t sound like science fiction and think about them the same way you would think about the risk of currency exchange rate fluctuations. The solution is to run your business in a slightly different way.

I do think that AI has the potential to become smarter than humans, and a world where that happened would be quite unusual, with genuinely weird things happening. I don’t think the “management consultant” method really takes that into account. I don’t believe the scenario where AI basically works, but its impact on the world is no greater than the fax machine’s.

On the other hand, I don’t believe that AI will make a quantum leap to generic superhuman ability. Computers tend to be very good at some things and very bad at others. Computers can destroy me at chess, but no robot can come close to my performance at folding laundry. The sort of mastery of the real world required to turn the planet into paper clips is a lot harder to achieve than folding laundry.

It’s possible that AI will be superhuman in every way, at some point. But if we do go down that path, I think we will first encounter other world-changing events that revolutionize our understanding of what AI is capable of.

The AI Apocalypse

Generic superhuman intelligence is just not necessary for AI to destroy the world. There are several ways a merely narrow superhuman AI could cause an apocalyptic disaster.

How can a superhuman narrow AI cause an apocalyptic disaster? One possibility is via hacking. Imagine an AI that worked like this:

  • Scan the internet for unprotected systems
  • Hack into them
  • Whatever you get access to, hold it hostage for money
  • Spend that money on more computing power
  • Repeat

If the AI gets access to nuclear weapons, it can blow up the whole world.

This scenario does not require artificial general intelligence. The AI only needs to be a superhuman hacker. Once it has access to a system, it can handle the hostage negotiations with the equivalent of an 80 IQ.
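To see why this loop is frightening even without general intelligence, here is a deliberately crude toy model of its economics. Every number and name in it is an invented assumption for illustration; the only point is that when one cycle’s ransom buys more than one cycle’s compute, the loop compounds.

```python
# Toy model of the self-funding loop described above. Every number here is an
# invented assumption for illustration; nothing in this sketch models real
# security or real markets.

compute_units = 1.0         # starting compute budget (assumed)
systems_per_unit = 10       # systems compromised per compute unit per cycle (assumed)
ransom_per_system = 500.0   # average payout per compromised system, in dollars (assumed)
cost_per_unit = 2000.0      # price of one additional compute unit, in dollars (assumed)

for cycle in range(1, 11):
    compromised = compute_units * systems_per_unit
    revenue = compromised * ransom_per_system
    compute_units += revenue / cost_per_unit  # reinvest everything in more compute
    print(f"cycle {cycle:2d}: {compromised:12.0f} systems held, "
          f"{compute_units:14.1f} compute units")
```

With these made-up numbers the compute budget grows by a constant factor of 3.5× per cycle, so the thing that matters is not how smart the AI is, but whether its return on reinvestment stays above one.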

Another possibility is via real-world violence. Imagine an AI that worked like this:

  • Build some killer robots
  • Capture some people
  • Hold them hostage for money
  • Spend that money building more killer robots
  • Repeat

This seems harder to me, because building killer robots seems harder than building an AI hacking program. But I could be wrong.

You might wonder why anyone would build such a disastrous AI. At the beginning, it might just be a way for a criminal to make money. Build one of these AIs, hardwire it to send 10% of its take to your account, and now you have a money-making machine. One day the original creator disappears, and the killer robots just keep on going.

It does seem like a dangerous AI is likely to need access to the financial system. Cryptocurrency seems particularly risky here, because an AI can perform crypto transactions just as easily as a human can. Cryptocurrency dark markets could also enable an AI that doesn’t have to do all the hacking itself; instead, it buys exploits with its earnings.

Don’t Worry About AGI

People worrying about AGI should be good Bayesian reasoners. If hypotheses A and B both imply an apocalypse, but A implies B, then it is more likely that B causes the apocalypse than that A does. Superhuman generic intelligence implies superhuman narrow intelligence, so fill in the blanks: A = AGI, B = narrow AI, and worry about the dangers of superhuman narrow intelligence instead.
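Written out, the underlying inequality is just monotonicity of probability. Treating A and B as events, and assuming (as above) that any AGI-driven apocalypse also runs on some narrow superhuman skill, a minimal sketch is:

```latex
% A: the event that a superhuman AGI causes an apocalypse
% B: the event that some superhuman narrow capability causes an apocalypse
% Any outcome in A is also in B, so A is contained in B.
A \subseteq B \quad\Longrightarrow\quad P(A) \le P(B)
```

Whatever probability you assign to the AGI scenario, you should assign at least that much to the narrow one.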

One problem with dangerous narrow AI is that better programming methods, the solution to the paper clip scenario, don’t really do anything to help. The problem isn’t AI that was intended to be good and then gets out of the control of the humans building it. When there’s money to be made, humans will happily build AI that is intended to be evil. In general, AI research is just making this problem worse.

Instead, we should be worrying about the security of the systems that narrow AI would subvert to cause a disaster. Can we make a crypto wallet that is impossible to hack? Can we make it impossible to hack a hospital IT system? To me, research and development here seems underrated.