Privacy, Ethics and Regulation in Our New World of Artificial Intelligence

With technology changing so rapidly all around us, how can we think about the ethical implications of what is now becoming possible? With big data analytics, machine learning and artificial intelligence (AI) growing so fast, can we agree on what counts as inappropriate use? Are privacy regulations even workable? Will hackers just go around the rules? Let's explore.

New technology applications are simply amazing. From new uses of data analytics to artificial intelligence and machine learning, our world is rapidly transforming before our eyes. But what are the ethical considerations for safe use?


Here are some recent headline examples:

Forbes: The Amazing Ways Google Uses Artificial Intelligence

Excerpt: Google services such as its image search and translation tools use sophisticated machine learning which allows computers to see, listen and speak in much the same way as humans do. …

Google also uses machine learning in its Nest “smart” thermostat products — by analyzing how the devices are used in households they become better at predicting when and how their owners want their homes to be heated, helping to cut down on wasted energy.
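
To make that thermostat example concrete, here is a minimal sketch, in Python, of the basic idea behind a learned heating schedule: average the setpoints a household has manually chosen for each hour, then use those averages to decide when to heat. This is an illustration of the general pattern, not Nest's actual algorithm, and the logged data below is made up.

```python
# A minimal sketch (not Nest's actual algorithm) of how a "smart"
# thermostat can learn a household schedule: average the manually
# chosen setpoints per hour of day, then use those averages to
# pre-heat before the occupants usually turn the heat up.
from collections import defaultdict
from statistics import mean

# (hour_of_day, setpoint_degrees_C) pairs logged from manual adjustments
observed = [(7, 21.0), (7, 21.5), (8, 21.0), (18, 22.0), (18, 22.5), (23, 17.0)]

by_hour = defaultdict(list)
for hour, setpoint in observed:
    by_hour[hour].append(setpoint)

# Learned schedule: the typical setpoint for each hour we have data for
schedule = {hour: mean(points) for hour, points in by_hour.items()}

def predicted_setpoint(hour, default=18.0):
    """Return the learned setpoint for this hour, or an energy-saving default."""
    return schedule.get(hour, default)

print(predicted_setpoint(7))   # ~21.25: warm up for the morning routine
print(predicted_setpoint(3))   # 18.0: nobody usually adjusts the heat at 3 a.m.
```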

Daily Mail (UK): Devices can scan bodies to detect mood and health

Excerpt: Spy cameras could soon know what we're thinking and feeling simply by scanning our BODIES — and there may be no way to opt out.

Speaking at a TED Conference in Vancouver, BC, Dolby Labs' chief scientist Poppy Crum said:

“We like to believe we have cognitive control over what someone else knows, sees, understands about our own internal states — our emotions, insecurities, bluffs or trials and tribulations. But technologies can already distinguish a real smile from a fake one.

The dynamics of our thermal signature give away changes in how hard our brains are working, how engaged or excited we might be in the conversation we are having, and even whether we're reacting to an image of fire as if it were real. …”

Bloomberg: Artificial Intelligence Presents A Golden Opportunity

Imagine if technology powered by artificial intelligence (AI) could help visually impaired people see. Such technology actually exists in the form of a smartphone app called Seeing AI, which literally serves as a talking camera: it describes a user's surroundings at any given moment and can improve the quality of life for millions of people.
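
Seeing AI's internals are not public, but the general "talking camera" pattern is easy to sketch: run an image through a captioning model, then speak the caption aloud. Here is a minimal, hypothetical Python version; the Hugging Face captioning model and the pyttsx3 text-to-speech library are illustrative stand-ins, not what the app actually uses.

```python
# A minimal sketch of the general "talking camera" pattern the article
# describes: caption an image with a vision model, then speak the caption.
# This is NOT Seeing AI's implementation; the model and libraries are
# illustrative stand-ins (Hugging Face transformers + pyttsx3).
from transformers import pipeline
import pyttsx3

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def describe_aloud(image_path: str) -> str:
    caption = captioner(image_path)[0]["generated_text"]  # e.g. "a dog on a couch"
    engine = pyttsx3.init()
    engine.say(caption)        # read the description to the user
    engine.runAndWait()
    return caption

print(describe_aloud("photo.jpg"))
```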

Facebook Puts Future Hope in AI

Meanwhile, in testimony this week before Congress, Mark Zuckerberg blamed many of Facebook's failings on the lack of artificial intelligence tools. The Verge criticized this testimony, arguing that AI is an excuse for Facebook to keep messing up: "Artificial intelligence is not a solution for shortsightedness and lack of transparency."

“Moderating hate speech? AI will fix it. Terrorist content and recruitment? AI again. Fake accounts? AI. Russian misinformation? AI. Racially discriminatory ads? AI. Security? AI. …

Zuckerberg said again and again in the hearings that in five to 10 years, he was confident that they would have sophisticated AI systems that would be up to the challenge of even dealing with linguistic nuance. Give us five to 10 years, and we’ll have this all figured out. ….”
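
To see what "AI will fix it" actually means in practice, here is a toy Python sketch of the kind of text classifier that underpins automated moderation. The training data is made up, and the example hints at The Verge's point: models like this learn surface word patterns and routinely miss the sarcasm, dog whistles and other linguistic nuance Zuckerberg says will take five to 10 years to crack.

```python
# A toy sketch of a content-moderation text classifier, built with
# scikit-learn on made-up training data. Real systems are far larger,
# but share the same weakness: they learn surface patterns.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I will hurt you", "you people are vermin",      # abusive
    "have a great day", "thanks for your help",      # benign
]
train_labels = [1, 1, 0, 0]  # 1 = remove, 0 = allow

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Surface patterns transfer, but sarcasm, coded language and context
# routinely fool models trained this way.
for text in ["you are vermin", "oh sure, 'great' advice, genius"]:
    print(text, "->", "remove" if model.predict([text])[0] else "allow")
```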

Ethics of AI

In 2016, the World Economic Forum released a report listing the top 9 ethical issues in artificial intelligence. Here is that list (with details in the article).

  1. Unemployment. What happens after the end of jobs?
  2. Inequality. How do we distribute the wealth created by machines?
  3. Humanity. How do machines affect our behavior and interaction?
  4. Artificial stupidity. How can we guard against mistakes?
  5. Racist robots. How do we eliminate AI bias? (See the sketch after this list.)
  6. Security. How do we keep AI safe from adversaries?
  7. Evil genies. How do we protect against unintended consequences?
  8. Singularity. How do we stay in control of a complex intelligent system?
  9. Robot rights. How do we define the humane treatment of AI?
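
As promised in item 5 above, here is a small, hypothetical Python sketch of one common way AI bias gets tested in practice: comparing a model's approval rates across groups. The data and the 80 percent threshold (a "four-fifths rule" style check) are illustrative assumptions, not anything the WEF report prescribes.

```python
# A minimal bias audit: compare a model's approval rates across groups
# and flag large gaps (demographic parity, four-fifths-rule style).
from collections import Counter

# (group, model_decision) pairs from a hypothetical hiring model
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

totals, approvals = Counter(), Counter()
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}

# Flag the model if any group's approval rate falls below 80% of the
# highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact against group {group}")
```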

More recently, Google announced that it is drafting ethical principles for AI, after employees complained about plans to help the U.S. Department of Defense (DoD) in a wide variety of ways, including AI.

“After it emerged last month that Google was working with the Defense Department on a project for analyzing drone footage using “artificial intelligence” techniques, Google’s employees were not happy. Seeing as the technology could plausibly help target people for death — although Google insists the work is for “non-offensive” purposes — more than 3,000 of the employees signed a letter to CEO Sundar Pichai, demanding that the company scrap the deal.

Now, according to a Defense One article, Google is trying to dampen that disquiet with promises of new ethical standards for the company. …”

What is becoming clearer is that more regulation is coming for technology companies regarding social media, AI and other topics. But how will these societal decisions be made? Who gets to decide what is ethical? How will these agreed-upon rules be enforced?

Add Data Breaches into the Mix

What is not as clear is whether new privacy policies, ethical guidelines or even laws governing conduct will actually work. There is a global debate over this topic.

Some people think that companies will simply keep doing the same things, admit mistakes and say "I'm sorry," just as Facebook did.

Another huge challenge is the number of hackers who ignore the rules and steal data or use these technologies for evil, no matter what the law says. The current explosion in data breaches and ransomware should forewarn everyone that blind trust in the future of technology is fool's gold, since the bad actors use the same technology.

The recent flood of data breaches reveals many weaknesses in our ability to police cyberspace effectively, even when companies promise protections. While many experts believe that artificial intelligence is the weapon organizations need to win the cyberwar, others say this is folly and that the bad actors will obtain the same weapons. They warn society not to go down this cyberweapon road at all, and yet we are already moving in this direction.
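
To illustrate what "AI as a weapon for defenders" often means in practice, here is a minimal Python sketch of unsupervised anomaly detection over activity logs, using made-up session data. Real deployments are far richer, and, as the skeptics note, attackers can study and evade the very same models.

```python
# A minimal sketch of AI-assisted cyber defense: unsupervised anomaly
# detection over session logs with scikit-learn's IsolationForest.
# All features and data below are fabricated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Fake per-session features: [requests_per_minute, bytes_uploaded_MB]
normal = np.random.default_rng(0).normal(loc=[20, 1], scale=[5, 0.5], size=(500, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

sessions = np.array([[22, 1.2],      # looks like routine traffic
                     [400, 250.0]])  # burst of exfiltration-sized uploads
print(model.predict(sessions))       # 1 = normal, -1 = flagged anomaly
```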

Two things are clear whenever you try to connect these ethical dots regarding technology. First, the problem is very hard. Second, we face a moving target that is growing exponentially.

We need a major international effort to focus on answers to these ethical questions.

Why?

Many leading minds have sent chilling warnings about AI and where it may lead us. Elon Musk recently called AI "an immortal dictator" from which we would never escape: "We are rapidly headed towards digital super intelligence that far exceeds any human…"

Stephen Hawking spent the last years of his life warning about AI. He believed AI would deepen inequality in society: "So far, the trend seems to be toward … technology driving ever-increasing inequality. ..."

Final Thoughts on Technology and AI Ethics

Why write this blog now? Because all of us need to be thinking about the ethics of what we do each and every day in order to make a difference. We all need to be learning more about the topic.

It was nice to see Stanford University offer a new class exploring ethics in AI.

A possible lesson learned from Facebook's privacy mistakes is that we must reconsider the broader actions and implications of sharing data, protecting data and implementing new cutting-edge AI or machine learning solutions. The challenge is that once the "genie is out of the bottle," we may never be able to go back to the way things were.

I will return to this topic later in 2018 with some specific recommendations for the public and private sectors to consider.   

Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.