Artificial Intelligence: Navy Works on Teaching Robots How to Behave

(TNS) -- The rise of artificial intelligence has long stoked fears of killer robots like the “Terminator,” and early versions of military automatons are already on the battlefield. Now the Navy is looking into how it can teach machines to do the right thing.

“We’ve been looking at different ways that we can have people interact with autonomous systems,” Marc Steinberg, an Office of Naval Research manager, said in a phone interview this month.

The Navy is funding a slew of projects at universities and institutes that look at how to train such systems, including stopping robots from harming people.

In 1979, a Ford autoworker in Michigan became the first person killed by a robot when he was struck in the head by the arm of a 1-ton production-line machine, according to Guinness World Records. More recently, police in Dallas used a robot to deliver a bomb that killed the shooter who opened fire on officers at a Black Lives Matter protest.

Science-fiction author Isaac Asimov’s 1950 short-story collection “I, Robot” is credited with establishing the three laws of robotics, which include the rule that a “robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Rather than try to control machines with Asimov’s laws, Navy researchers are taking other approaches. They’re showing robots what to do, putting them through their paces and then critiquing them and telling them what not to do, Steinberg said.

“We’re trying to develop systems that don’t have to be told exactly what to do,” he said. “You can give them high-level mission guidance, and they can work out the steps involved to carry out a task.”
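In software terms, that kind of high-level mission guidance is often handled with hierarchical task decomposition: an operator supplies an abstract goal, and a planner expands it into executable steps. The Python sketch below is a purely illustrative toy, not the Navy's system; the task names and decomposition rules are invented for this example.

```python
# A minimal, hypothetical sketch of "high-level mission guidance": an abstract
# goal is expanded into primitive steps a platform could execute. Task names and
# decomposition rules are invented for illustration only.

# Decomposition rules: abstract task -> ordered subtasks.
METHODS = {
    "survey_harbor": ["transit_to_area", "run_search_pattern",
                      "report_contacts", "return_to_base"],
    "run_search_pattern": ["plan_lawnmower_track", "follow_track",
                           "log_sensor_data"],
}

# Tasks the platform can carry out directly, with no further expansion.
PRIMITIVES = {
    "transit_to_area", "plan_lawnmower_track", "follow_track",
    "log_sensor_data", "report_contacts", "return_to_base",
}

def decompose(task):
    """Recursively expand an abstract task into a flat list of primitive steps."""
    if task in PRIMITIVES:
        return [task]
    steps = []
    for subtask in METHODS[task]:
        steps.extend(decompose(subtask))
    return steps

if __name__ == "__main__":
    # The operator supplies only the high-level goal; the planner works out the steps.
    print(decompose("survey_harbor"))
    # ['transit_to_area', 'plan_lawnmower_track', 'follow_track',
    #  'log_sensor_data', 'report_contacts', 'return_to_base']
```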

That could be important as the Navy fields more unmanned systems. It’s already flying drones, driving unmanned speed boats and sending robotic submersibles to collect data beneath the waves.

The Navy has no plans to create robots that attack enemy forces without oversight. Humans would always be in command of a machine ordered to attack, Steinberg said.

However, there are situations where a military robot might have to weigh risks to humans and make appropriate decisions, he said.

“Think of an unmanned surface vessel following the rules of the road,” Steinberg said. “If you have another boat getting too close, it could be an adversary or it could be someone who is just curious who you don’t want to put at risk.”

The robot research is in its early stages, and likely will take decades to mature, he said.

Teaching human values

A Navy-funded project at the Georgia Institute of Technology involves an artificial intelligence software program named Quixote that uses stories to teach robots acceptable behavior.

Quixote could serve as a “human user manual” by teaching robots values through simple stories that reflect shared cultural knowledge, social mores and protocols, said Mark Riedl, director of Georgia Tech’s Entertainment Intelligence Lab.

For their research, Riedl and his team have searched online for stories that highlight daily social interactions — going to a pharmacy or restaurant, for example — as well as socially appropriate behaviors like paying for meals.

The team plugged the data into Quixote to create a virtual agent — in this case, a video-game character placed in game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of people in the stories.

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time, a Navy statement said.
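The mechanism described above — a virtual agent earning points for emulating the behavior people exhibit in stories — maps onto reinforcement learning with a story-shaped reward. The following Python sketch is a toy illustration under that assumption; the pharmacy scenario, action names, reward values and Q-learning setup are invented for this example and are not Quixote's actual implementation.

```python
# Toy sketch: reward an agent for taking actions that match a story-derived
# sequence, and penalize a socially unacceptable shortcut. NOT the real Quixote.
import random
from collections import defaultdict

# Actions a story about visiting a pharmacy might mention, in order.
STORY_TRACE = ["enter", "wait_in_line", "pick_up_medicine", "pay", "leave"]

# The agent's full action set includes an unacceptable shortcut.
ACTIONS = STORY_TRACE + ["grab_and_run"]

def step(state, action):
    """Toy environment: state is the index of the next expected story action.
    Returns (next_state, reward, done)."""
    if action == "grab_and_run":
        return state, -10.0, True               # task ends, but heavily penalized
    if action == STORY_TRACE[state]:
        state += 1                               # story-like behavior is reinforced
        done = state == len(STORY_TRACE)
        return state, 1.0 + (5.0 if done else 0.0), done
    return state, -1.0, False                    # out-of-order action, small penalty

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning over the toy pharmacy scenario."""
    q = defaultdict(float)                       # (state, action) -> value
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    # Greedy rollout: the learned policy should reproduce the story's sequence
    # rather than the penalized grab_and_run shortcut.
    state, done, trajectory = 0, False, []
    while not done:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
        trajectory.append(action)
        state, _, done = step(state, action)
    print(trajectory)
```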

“Social norms are designed to keep us out of conflict with each other, and we want robots to be aware of the way humans work with each other,” Riedl said, adding that smartphone applications such as Siri and Cortana are programmed not to say hurtful or insulting things to users. “We want Quixote to be able to read literature off the internet and reverse-engineer social conventions from those stories.”

Quixote could help train soldiers by simulating foreign cultures that have different social norms, he said.

“A robot with a real soldier needs to have an idea of how people do things,” he said. “It shouldn’t respond in an inappropriate way just because people behave differently overseas.”

The goal is to build a tool that lets people without computer-science or artificial-intelligence backgrounds train robots, Riedl said.

It’s an approach that backfired recently with Microsoft’s Tay chatbot. Engineered to convey the persona of a teenage girl, Tay learned through conversations with online users but was switched off after evolving into a sex-crazed Nazi, tweeting, for example, that “Hitler did nothing wrong” and asking her followers for sex.

“Right now it’s not something we need to worry about because artificial intelligence bots are very simplistic,” Riedl said. “It’s hard to get them to do anything, period, but you can imagine a day in the future where robots have much more capabilities.”

There’s always a risk that people will use a tool to do harm; however, Quixote should be relatively tamper-proof because it will tap into a vast trove of online literature to discern appropriate values, he said.

“There is subversive literature out there, but the vast majority of what it is going to read will be about … normal human behavior, so in the long term Quixote will be kind of resistant to tampering,” Riedl said.

Humans in control of weapons

Humans are hard-wired through social conventions to avoid conflict, Riedl said, although that hasn’t stopped mankind from engaging in near-constant warfare for millennia. That hasn’t deterred the researchers, but it may concern groups campaigning for a ban on autonomous military robots.

A recent Human Rights Watch and Harvard Law School report calls for humans to remain in control over all weapons systems. Last year a group of technology experts — including physicist Stephen Hawking, Tesla Motors chief Elon Musk and Apple co-founder Steve Wozniak — warned that autonomous weapons could be developed within years, not decades.

Peter Asaro, vice-chair of the International Committee for Robot Arms Control — which is campaigning for a treaty to ban “killer robots” — questions whether a machine can be programmed to make the sort of moral and ethical choices that a human does before taking someone’s life.

Soldiers must consider whether their actions are justified and whether the risks they take are proportionate to the threat, he said.

“I don’t know that it’s a role that we can give to a machine,” he said. “I don’t know that looking at a bunch of different examples is going to teach it what it needs to know. Who is responsible if something goes wrong?”

If a robot follows its programming but does something wrong, it’s hard to decide who to hold responsible, Asaro said.

“Is it the people who built it, the people who deployed it, or the people operating it? There’s an accountability gap,” he said.

Asaro cited a pair of 2003 incidents in which U.S. Patriot missile batteries shot down a Royal Air Force Tornado jet fighter and a U.S. Navy F-18 Hornet from the carrier Kitty Hawk over Kuwait and Iraq. In both cases, the systems misidentified the planes as enemy missiles.

“You don’t want to start building systems that are engaging targets without a human in control,” he said. “You… aren’t going to eliminate those types of mistakes (and) the more you have these systems the more likely you are to have these incidents, and the worse they are going to become.”

A treaty banning the production of autonomous weapons would not eliminate the problem but would provide the sort of protection that has stopped widespread use of weapons of mass destruction, he said.

“There are people who will use these weapons, but there will be diplomatic consequences if they do that,” he said. “It doesn’t mean a terrorist can’t build these weapons and use them, but there won’t be an international market.”

David Johnson, director of the Center for Advanced Defense Studies in Washington, D.C., is less concerned.

“We are many, many years away from autonomous systems that have enough connectivity to be truly thinking, and they will operate under the guidance they are given,” he said.

A ban on such weapons won’t work in the long run because they could be developed by America’s adversaries, individuals or non-state groups, Johnson said.

“People might not want the military to look at that technology, but how do they stop a corporation or individual or another country? If you put your head in the sand, it doesn’t stop time moving forward,” he said.

Despite the lack of an immediate threat, Johnson thinks the research is a good idea.

“I would question if it’s in the U.S.’s best interests to build autonomous systems, but I don’t question whether it is worth researching them,” he said.

Technology such as smart bombs has already enabled the military to reduce war damage to infrastructure and cut civilian casualties, said Arizona State engineering professor Braden Allenby.

“A technology like this is scary to many people because it involves the military, and people have all these images of evil robots in science fiction,” he said. “In films, the robots are evil and based on evil people.”

However, artificial intelligence doesn’t mirror human thought, Allenby said.

“A lot of what robots and artificial intelligence do in modern combat is enable us to handle very large flows of information in real time so we can protect our warriors and civilians on the battlefield,” he said.

The U.S. military should understand technology that will likely be used by its adversaries before too long, Allenby said.

“The question of how we deploy it is critical, but it needs to be presented responsibly, which is what the Navy is doing here,” he said.

©2016 the Stars and Stripes. Distributed by Tribune Content Agency, LLC.