Do We Really Have To Fear Artificial Intelligence?
"I’m sorry, Dave. I’m afraid I can’t do that." - HAL 9000, "2001: A Space Odyssey" (1968)
The line, spoken by the computer HAL 9000 in the classic science fiction movie "2001: A Space Odyssey" when the sentient machine took over the Discovery One spacecraft and refused to obey an astronaut’s command, has become a part of American culture – and it’s usually quoted in jest. For many people who don’t completely understand artificial intelligence, though, it can also represent an unreasonable distrust and fear of machines that may one day become too intelligent for humans to control.
But is it really an unreasonable fear?
You may join the crowd which is at least a little afraid of artificial intelligence (AI) after learning about an open letter written to the scientific and AI communities earlier this year.
This four-paragraph letter wasn’t from a few low-level lab workers or a group of scare-mongering activists. It was signed by more than 150 of the most respected names in the field. Among the scientific luminaries were revered physicist Stephen Hawking, Tesla and SpaceX founder Elon Musk, Google research director Peter Norvig, and MIT professor Max Tegmark, who co-founded the Future of Life Institute.
The message in the open letter was simple but powerful. It first stated that the combination of human and artificial intelligence has enormous potential benefits, including the possible elimination of poverty and disease. But the letter went on to add a startling warning, couched in careful language: "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls...our AI systems must do what we want them to do."
In other words, at least some of the scientists are concerned that we could someday see super-intelligent machines which can’t be controlled by humans. That’s not reading too much between the lines; Hawking had previously told the BBC that he believed advanced uses for artificial intelligence could "spell the end of the human race," and Musk has called AI "humanity’s biggest existential threat."
However, the wording of the letter was carefully crafted to bring as many AI experts on board as possible, including those who believe that talk of uncontrollable machines is simply alarmist. Some signatories, such as Oren Etzioni of the Allen Institute for Artificial Intelligence and Italian AI expert Francesca Rossi, argue that public discussions about AI "unleashing the demon" impugn a respected scientific discipline. They agreed to sign the letter in the hope that it might focus public attention on the potential benefits of intelligent machines, while showing that researchers are seriously examining ethical and safety concerns.
The letter called for more study of the effects that artificial intelligence has on society, and was issued along with a 12-page proposal outlining specific research subjects. But it may have actually split scientists more than united them; while Hawking and Musk haven’t toned down their rhetoric, Etzioni has gone on record saying that the worry about AI running amok "may make a great plot for a Hollywood movie, but…is not realistic."