The Subtle Threat of Artificial Intelligence

By Taylor Marvin

Map of the internet in 2005, by Matt Britt.

The Atlantic has an interesting cover story about scientists’ quest to develop computers whose intelligence equals humans. Over at Discover, Kyle Munkittrick isn’t concerned about the prospect of computers surpassing human intelligence:

“…my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, in fact, all arguments about the danger of the Singularity necessarily presume one single fact: That AGI will be able to interact with the world beyond computers. I submit that, in practical terms, they will not.”

This makes a lot of sense. It seems pretty clear that strong AI is coming: given how fast computer technology has advanced in the last two decades, I find it very hard to believe that machine intelligence won't arise one way or another this century, probably sooner rather than later. And Munkittrick makes a good point that AGI isn't the physical threat most people think it will be, because it's hard to believe that anyone would be foolish enough to actually give an intelligent computer access to the physical world. No matter how malevolent Skynet was, it would be harmless, because it would still be trapped in its physical processors. Munkittrick's other perceptive point is that fears of a renegade AGI launching a nuclear attack against humanity are unfounded, because a nuclear war would probably destroy, or at least threaten, the power infrastructure an AGI depends on. It's unlikely that an AGI's incentives to destroy humanity would outweigh this risk.

That being said, Munkittrick underestimates the threat an AGI would pose. Human society is vulnerable to a lot more than direct physical destruction. This reminds me of a startlingly realistic short story by John Scalzi, about intelligent cups of yogurt that slowly dominate humanity. The yogurt gives human leaders a plan to reduce the deficit; the economy promptly collapses. Humans become more and more dependent on the guidance of the yogurt, and begin to suspect that maybe their yogurt overlords intended to destroy the economy all along, in a bid to increase their control. I think this is a good picture of what the emergence of an artificial intelligence could look like. While it's hard to imagine anyone giving an AGI control over robots in the physical world, it's easy to picture policymakers taking economic advice from an intelligent computer in the midst of a crisis. Given that an AGI could understand economics far more thoroughly than we do, the machine intelligence would be in a position to subtly collapse a good part of modern civilization. If an AGI were able to manipulate its human handlers into continually growing its physical infrastructure, it would be able to learn extremely quickly and plan over the long term in a way humans can't. The influence this kind of intelligence would have over society would be enormous, and it isn't difficult to imagine humans incrementally giving it more and more control.

In a way this is already happening: complex algorithms already execute about 70 percent of all Wall Street trades, often autonomously. The traditional vision of the singularity is one supercomputer suddenly achieving hyperintelligence. That isn't so scary: if the machine seemed to be malevolent, it wouldn't be hard to unplug every computer with the processing power to support an AGI. What's far more threatening is the slow outsourcing of key social functions to many separate machine intelligences. Traders already choose to delegate control over much of the economy to limited algorithms because it's profitable; it isn't a huge leap to give an AGI the same control for potentially much greater rewards. We would have a very hard time understanding an AGI's motivations, and a pervasive system of separate, privately owned AGI actors could easily be driven by incentives divorced from human welfare.

The other big threat from the emergence of a strong AI is its indirect implications for human society. Suddenly sharing the planet with an entity much more intelligent than ourselves would be the most stressful event in human history. An AGI would be by definition unknowable, completely alien, and able to advance far more quickly than is possible for us. Probably the closest historical analog for human-AGI interaction is the encounter between Native Americans and European colonists: the gaps in technological capability and culture would be about as wide. This isn't encouraging. Humans don't have a good track record of responding to stressful change, and even if an AGI were benevolent, it's easy to imagine society reacting badly to its emergence. The development of computers more intelligent than us could be a great thing for society, with accelerated advances in science, economics, and medicine bringing huge gains in human welfare. But it's important to remember that a machine intelligence, like us, would be self-motivated, and its interests would be completely different from ours. If those interests conflict, we could lose.

3 Comments
  1. Just as the question will always define the answer and the questioner himself, it is impossible for us to pose the right question about AI; the gap is as wide as that between the 3rd and 4th dimensions. First of all, there is no morality, good or bad, available to the thought process of a machine, just 0 or 1 by definition. Even the complex algorithms used increasingly by our decision makers are based on an analysis of human behavior, which is ultimately guided by personal morals. We are trapped in our own evolutionary cycle, but yes, we might be in for a big surprise when handing our power over to a completely amoral decision maker.

    April 18, 2011
  2. It always starts with people using technology for war. In war no one wants to be left behind, so they will keep giving more power to the machines so the machines can come up with better machines, in order to keep up or take the lead, until one day they will be in a position to decide to replace us… it's not a matter of if but of when it will happen.

    C-3PO could speak fluently in 7,000,000 languages; add to that knowledge of every science known to man, plus data about millions of people with psychological profiles and everything… it's the same as a god. Imagine a conversation between gods; what would they think about the way we are handling things… face it, it's a matter of time.

    Sorry, my English is not so good.

    February 22, 2012

Trackbacks & Pingbacks

  1. What Would an Expansionist Alien Species Be Like? | Smoke & Stir
