He may be one of the world’s pivotal computing pioneers, mentioned in the same exuberantly geeky breath as Steve Wozniak, Steve Jobs, and Tim Berners-Lee. His technological exploits may have earned him over $80 billion, making him the world’s richest man. Yet even Bill Gates is somewhat concerned about the potentially destructive power of technology.
In a Reddit AMA (Ask Me Anything) on Wednesday afternoon, Microsoft’s former CEO fielded an array of questions ranging from the banal (“Do you have a pet?”) to the cringeworthy (“Star Trek or Star Wars?”) to the painfully esoteric.
Certain themes popped up again and again. Bitcoin and other cryptocurrencies made regular appearances, as did jokes about Bing, Zune, and other Microsoft properties. Gates's charity work also came up frequently, with his latest contribution proving an object of inevitable Reddit interest and/or disgust: a sewage treatment machine that turns wastewater (or "Poop Water," as Gates calls it, sounding like a dad trying desperately to sound cool) into perfectly clean drinking water.
Another recurring theme was that of artificial intelligence, and, by extension, superintelligence. It was in these answers that Gates expressed a somewhat surprising opinion:
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.
The concerns of Elon Musk that Gates refers to are based on the work of Swedish philosopher Nick Bostrom. In a tweet last August, Musk mentioned Bostrom's work, adding that if handled poorly, AI could be "more destructive than nukes."
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
In Superintelligence, Bostrom suggests that the creation of artificial intelligences with abilities comparable to those of humans will quickly give way to “superintelligences” — computers with cognitive and computational abilities that far surpass those of humanity.
The issue with these superintelligences, though, is that once they’re created, they’ll be too hard to control. Though we may imbue superintelligences with goals that are intended for our benefit, Bostrom suggests that the nuances of these goals may be lost once translated into a language understandable to machines.
An example Bostrom gives is that of a hypothetical intelligence charged with something innocuous like “making people smile.” To a human researcher, this would suggest a robot that tells jokes or stories. To a superintelligence, however, a more efficient route to accomplish its goal would be to paralyze the facial muscles into a fixed grin.
As far as Gates is concerned, though, this terrifying dystopian future is a long way off. In another comment, he assures a questioner that a career in programming is still a safe choice for now.