The Subtle Threat of Artificial Intelligence
By Taylor Marvin
The Atlantic has an interesting cover story about scientists’ quest to develop computers whose intelligence equals our own. Over at Discover, Kyle Munkittrick isn’t concerned about the prospect of computers surpassing human intelligence:
“…my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, in fact, all arguments about the danger of the Singularity necessarily presume one single fact: That AGI will be able to interact with the world beyond computers. I submit that, in practical terms, they will not.”
This makes a lot of sense. It seems pretty clear that strong AI is coming- given how fast computer technology has advanced in the last two decades, I find it very hard to believe that machine intelligence won’t arise one way or another this century, probably sooner rather than later. And Munkittrick makes a good point that AGI isn’t the physical threat most people think it will be, because it’s hard to believe that anyone would be stupid enough to actually give an intelligent computer access to the physical world. No matter how malevolent Skynet was, it would be harmless, because it would still be trapped in its physical processors. Munkittrick’s other perceptive point is that fears that a renegade AGI would launch a nuclear attack against humanity are unfounded, because a nuclear war would probably destroy, or at least threaten, the power infrastructure an AGI depends on. It’s unlikely that an AGI’s incentives to destroy humanity would outweigh this risk.
That being said, Munkittrick underestimates the threat an AGI would pose. Human society is vulnerable to a lot more than direct physical destruction. This reminds me of a startlingly realistic short story by John Scalzi, about intelligent cups of yogurt that slowly dominate humanity. The yogurt gives human leaders a plan to reduce the deficit; the economy promptly collapses. Humans become more and more dependent on the guidance of the yogurt, and begin to suspect that maybe their yogurt overlords intended to destroy the economy in a bid to increase their control. I think this is a good picture of what the emergence of an artificial intelligence could look like. While it’s hard to imagine anyone giving an AGI control over robots in the physical world, it’s easy to picture policymakers taking economic advice from an intelligent computer in the midst of a crisis. Given that an AGI could easily understand economics much more thoroughly than we do, the machine intelligence would be in a position to subtly collapse a good part of modern civilization.

If an AGI was able to manipulate its human handlers into continually growing its physical infrastructure, it would be able to learn extremely quickly, and plan over the long term in a way humans can’t. The influence this kind of intelligence would have over society is enormous, and it isn’t difficult to imagine humans incrementally giving it more and more control. In a way this is already happening- complex algorithms already execute about 70% of all Wall Street trades, often autonomously. The traditional vision of the singularity is one supercomputer suddenly achieving hyperintelligence. This isn’t that scary- if the machine seemed to be malevolent, it wouldn’t be hard to unplug every computer with the processing power to support an AGI. What’s a lot more threatening is the slow outsourcing of key social functions to lots of separate machine intelligences.
Traders already choose to delegate control over much of the economy to limited algorithms because it’s profitable- it isn’t a huge leap to give an AGI the same control for potentially much greater rewards. We would have a very hard time understanding an AGI’s motivations, and a pervasive system of separate, privately-owned AGI actors could easily be driven by incentives divorced from human welfare.
The other big threat from the emergence of a strong AI is its indirect implications for human society. Suddenly sharing the planet with an entity much more intelligent than ourselves would be among the most stressful events in human history. An AGI would be by definition unknowable, completely alien, and would advance far more quickly than is possible for us. Probably the closest historical analog for human-AGI interaction is the encounter between Native Americans and European colonists- the gaps in technological capability and culture would be about as large. This isn’t encouraging. Humans don’t have a good track record of responding to stressful change, and even if an AGI was benevolent it’s easy to imagine society not reacting well to its emergence. The development of computers more intelligent than us could be a great thing for society, with accelerated advances in science, economics, and medicine bringing huge gains in human welfare. But it’s important to remember that a machine intelligence, like us, would be self-motivated- and its interests would be completely different from ours. If those interests conflict, we could lose.