From sci-fi movies to modern-day reality, technologists warn against the possibility of an arms race in artificial intelligence.
Critics didn’t care much for Arnold Schwarzenegger’s AI-infused Terminator: Genisys, but that hasn’t stopped the tech sector from taking a stand against AI’s deadly potential.
Tesla’s Elon Musk, author and physicist Stephen Hawking, Apple co-founder Steve Wozniak, and Demis Hassabis, the CEO of Google’s recently acquired AI startup DeepMind, joined more than 1,000 other AI and robotics researchers in a letter calling for a ban on AI weapons development. The letter was released July 27 at the International Joint Conference on Artificial Intelligence in Buenos Aires.
The group warned that the sci-fi scare now has the technical and political foundations to become reality if left unchecked. Both Hawking and Musk have warned of the foreseeable dangers on numerous occasions. In an interview with the BBC, Hawking was even reported to have said AI could spell the downfall of the human race, reasoning that it would rapidly evolve past our ability to control it.
“Artificial Intelligence technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” the group stated.
The message challenged military superpowers such as the U.S., Russia and China to forge lasting international agreements against the use of weaponized AI on the battlefield. Despite initial benefits, like reduced casualties, lower costs and increased efficiency, the group predicted that AI weapons would inevitably spawn a new global arms race. Terrorists, dictators and criminal organizations would likely obtain them on the black market, and death tolls would be catastrophic.
“Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” the letter stated.
The group went on to say that weaponized AI could be manufactured cheaply and embedded in low-tech machines such as quadcopters, a type of drone popular with hobbyists. Once installed, the miniature drones could scout for predefined targets and eliminate them.
“Unlike nuclear weapons, [AI weapon systems] require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” the group said.
The message also served a dual purpose: it was as much about AI regulation as it was about protecting AI research. The underlying fear was that weaponized exploitation would set off a chain of restrictions on AI research and development in nonweaponized areas of study. Today, AI is most visible in emerging technologies like driverless cars, planes and other vehicles, but its growth is also slowly spreading into health care, finance and communications.