“L’EXPRESS” ARTICLE BY VICTOR GARCIA

Military “Artificial Intelligence”: why scientists are concerned

Excerpts from an article published in the French magazine L’Express by Victor Garcia, released 07/28/2015 at 6:17 p.m., updated 07/29/2015 at 11:45.

More than 1,000 experts in robotics and artificial intelligence have signed a letter warning against the development of military artificial intelligence and “autonomous weapons” capable of taking human lives.

Prevent the “doomsday”. This is what more than 1,000 experts in robotics and artificial intelligence, as well as intellectuals and philosophers, are determined to do. Their open letter warning against “autonomous weapons” and “the race to develop military artificial intelligence” (the full letter here) quickly went around the world, or at least the Internet, and has sparked much debate.

In response to a call from Elon Musk, the entrepreneur behind SpaceX and Tesla Motors, John Carmack, developer of some of the world’s most famous video games, replied: “It seems inevitable too. Arguing against it would be futile.”

“Even if it is inevitable, we should at least try to delay the advent of military artificial intelligence. Sooner is not better,” retorted Musk. Markus Persson, developer of the game Minecraft, and Marc Andreessen, founder of Netscape, joined the debate.

War against the “killer robots”

What physicist Stephen Hawking, entrepreneur Elon Musk, philosopher Noam Chomsky and the other big names who signed the open letter fear, precisely, is a “race to develop military AI”, and more particularly its culmination: “autonomous weapons”, more commonly known as “killer robots”. Specifically, these would be drones (tanks, planes) or humanoid robots that could decide, autonomously, to take a life.

How is this different from today’s human-steered drones or human-guided missiles? It would not be a human who “presses the button” to take a life, but a machine: a “smart” program based on a sophisticated algorithm, able to determine who should die and who should not. This “threat” is very real “in light of technological advances” and could materialize “within a few years,” say the researchers.

The reflection is not new. Five experts in artificial intelligence, including Stuart Russell, professor of computer science at the University of California, had already published a report in the journal Nature in May. The NGO Human Rights Watch had done the same a month earlier, focusing in particular on the legal problems and the impossibility of determining legal liability for killings. Not to mention Stephen Hawking, who was already voicing concerns in 2014, or Elon Musk, who has spoken on this subject many times.

If “a major military power begins to seriously develop this field, others will inevitably follow.” It is therefore necessary, according to the signatories of the open letter, to ban the military use of AI as quickly as possible, just as the “Star Wars” arms race and the use of chemical and biological weapons were banned in various international treaties.

What is the current status of military and civilian research?

To be clear, for the moment we are still far from Skynet, a true artificial intelligence conscious of itself, and its Terminators. But the advances, which will soon enable the military to present an “acceptable, reliable, safe” technology, are real.

The proof? The recent explosion of “deep learning”, used by Google, Facebook, Microsoft, Amazon and others. This technology is based on “artificial neural networks” capable of learning to recognize voices, language and faces. Another advance: the mechanical progress that now allows robots to move on land, at sea and in the air ever more easily.
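To give a concrete sense of what an “artificial neural network” is at its simplest, here is a minimal, self-contained Python/NumPy sketch added for this blog post (it is an editor’s illustration, not code from the article or from the companies mentioned): a tiny two-layer network that learns the XOR function by gradient descent. All names and parameters in it are arbitrary, and real deep-learning systems are vastly larger and trained on far richer data.

    # Toy illustration: a tiny two-layer neural network learning XOR with NumPy.
    # Far simpler than the deep-learning systems the article refers to.
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR inputs and targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialised weights and biases for a 2-4-1 network.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: gradients of the squared error through the sigmoids.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent updates.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]]

The “learning” here is nothing more than repeatedly nudging the weights to reduce the prediction error; the point of the illustration is only that the network is a program whose behaviour is shaped by data rather than written rule by rule.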

>> Read also: Artificial intelligence: does passing the Turing test really matter?

(Title translated from the French original - Blog Editor’s note)

Mark Gubrud, a researcher at Princeton University and a member of the International Committee for Robot Arms Control, is convinced: “The (American, Editor’s note) army does not want to hear about a red line (prohibiting robots that can decide to kill, Editor’s note), which amounts to saying ‘we will do it,’” he explains on The Verge.

Fear of losing control over the machine

But why would “killer robots” necessarily pose a threat? In their letter, the scientists address some arguments “for and against”. “Autonomous weapons” could replace men on the battlefield and thus reduce the loss of life, at least for the side that owns them. But they could also weaken the reservations of warmongers: with no casualties of their own, there is less criticism from public opinion.

“Inexpensive and requiring no rare materials, unlike nuclear weapons, these weapons would quickly become ubiquitous. It would not be long before they appeared on the black market and in the hands of terrorists, dictators seeking to control their populations, warlords with genocidal tendencies, and so on,” the authors of the letter further imagine.

Read the Google-translated full article here.

And the original article in French here.
