Killer humanoids are just one of the areas lawmakers are calling for regulation. A bill to create a group that would look at the issues around this emerging technology is on track to pass.
(TNS) — New York is on track to pass its first law to begin governing artificial intelligence in robots and other autonomous machines through a commission that would examine thorny issues, such as who would be responsible for the actions of such an entity.
“If someone builds a humanoid that goes out and kills folks, who is responsible?” asked Assemb. Clyde Vanel (D-Queens), sponsor of the bill moving toward final legislative approval and chairman of the Assembly’s subcommittee on Internet and New Technology. “Is the builder responsible? Is the programmer responsible? It is not too far from us.”
This fictional "Terminator" scenario crops up often at this new intersection of law and artificial intelligence. But the need for laws that avoid turmoil in the labor force from jobs lost to automation, while also nurturing the benefits of this brave new world, is a serious issue.
The scramble is just beginning to enact laws to catch up with the use of automation already in place in such diverse fields as banking, retail, construction, home kitchens, warehousing, hotels, medicine and government offices, as well as trucking and criminal justice.
“The whole world is behind, not just the state government,” Vanel said. Artificial intelligence "isn't asking permission if it can replace a job or not. It’s happening.”
Opposition to artificial intelligence has been around as long as artificial intelligence, which in various nascent forms began decades ago when it was usually referred to as automation of simple, isolated tasks. Now, artificial intelligence in facial recognition software is used by police in searching for suspects and by some retailers to predict buying impulses. San Francisco's city government is considering a ban or restriction on facial recognition technology because of errors and privacy concerns. Two bills in Albany would restrict its use. One would prohibit the state from retaining facial recognition images. The other would bar landlords from using facial recognition images in dealing with prospective tenants.
Unconvinced that the technology is on fast forward? Ask Siri on a cellphone or Alexa at home, said Sen. Diane Savino.
“We are way behind the eight ball -- most states are,” said Savino (D-Staten Island), co-sponsor of the bill, which passed the Senate two weeks ago. “Artificial intelligence is an essential part of the workforce now. Every day we use artificial intelligence … you call up a company and half the time you are talking to a chatbot, and they sound like real people.”
She said half of the workforce at an Amazon redistribution center on Staten Island consists of robots.
But rather than fear a future in which machines evolve beyond humans, the state commission would find ways to nurture artificial intelligence that could be an economic boon, while guiding the rapidly developing technology to avoid bad results. Those fears include errors in algorithm-assisted sentencing of criminal defendants, and loans denied purely on the numbers rather than on what might be best for a community or for a family worth taking a chance on.
The commission would include experts in the field. It would evaluate the impact of artificial intelligence in eliminating jobs statewide and the need to protect confidential information, as well as considering “potential restrictions … [and] criminal and civil liability regarding violations of law caused by entities with artificial intelligence, robotics and automation,” according to the bill.
The commission also would be likely to look at "deep learning," a process in which computers, with little or no supervision, take in cues such as street signs and speed limit signs, then make decisions and act.
“I don’t want to fight the robots — we may lose that fight,” Vanel said. “This is about how to work alongside them. This is to show how to program them, how to train them, how to build them.”
With few think tanks or governments devoted to the issue, law students over a year ago created The Stanford Artificial Intelligence and Law Society. Founder Zach Harned has organized discussions on topics including autonomous weapons in warfare, law and order, “killer robots,” human rights and artificial intelligence, and “the case for artificial intelligence personhood.”
He said states should be looking into the difficulty of preserving privacy and anonymity in a time of growing use of facial recognition technology; cybersecurity of state data and resources; "deep fake" images and videos, in which artificial intelligence can manipulate images to spread disinformation; the need to instill fairness into decisions made by algorithms; and the safety of fast-advancing medical technologies.
Harned said working groups such as the state’s proposed commission and the Stanford University group are essential now.
“This issue is continually in flux,” he said. “As legal, policy, and regulatory issues involving AI continue to make splashy headlines, the easier it is to recognize the importance of this organization and our mission.”
©2019 Newsday. Distributed by Tribune Content Agency, LLC.