Bias Still Haunts Facial Recognition, Microsoft Hopes to Change That

The company is making improvements in the troubled technology by collecting more data and expanding the datasets it uses to train its AI.

(TNS) —  If a picture paints a thousand words, facial recognition paints two: It’s biased.

A few years ago, Google Photos automatically tagged images of black people as “gorillas.” Flickr (owned by Yahoo at the time) did the same, tagging people as “apes” or “animals.”

This year, the New York Times reported on a study by Joy Buolamwini, a researcher at the MIT Media Lab, on artificial intelligence, algorithms and bias. Not surprisingly, she found that facial recognition is most accurate for white men, and least accurate for darker-skinned people, especially women.

Now — as facial recognition is being considered for use or is being used by police, airports, immigration officials and others — Microsoft says it has improved its facial-recognition technology to the point where it has reduced error rates for darker-skinned men and women by up to 20 times. For women alone, the company says it has reduced error rates by nine times.

Microsoft made improvements by collecting more data and expanding and revising the datasets it used to train its AI.

From a company blog post: “The higher error rates on females with darker skin highlights an industrywide challenge: Artificial intelligence technologies are only as good as the data used to train them. If a facial recognition system is to perform well across all people, the training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry and eyewear.”
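The disparity Buolamwini measured comes from breaking a classifier's error rate down by demographic subgroup rather than reporting a single overall accuracy. This is not Microsoft's or MIT's actual code, but a minimal sketch of that kind of audit, with made-up group labels and toy data for illustration:

```python
# Illustrative bias audit: compute a face classifier's error rate
# separately for each demographic subgroup, instead of one overall number.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, actual_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data echoing the study's finding: low error for lighter-skinned men,
# much higher error for darker-skinned women.
audit = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
]
print(error_rates_by_group(audit))
# {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

A system that looks accurate in aggregate can still fail badly on one subgroup; disaggregating the error rate this way is what exposed the gap, and rebalancing the training data is the fix Microsoft describes.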

In other words, the company that brought us Tay, the sex-crazed and Nazi-loving chatbot, wants us to know it is trying. (Microsoft took its AI experiment Tay offline in 2016 after the bot quickly began to spew crazy and racist things on Twitter, reflecting what it had learned online.)

Meanwhile, IBM announced that it will release the world’s largest facial dataset to help in studying bias. It’s actually releasing two datasets this fall: one that has more than 1 million images, and another that has 36,000 facial images equally distributed by ethnicity, gender and age.

IBM also said this year that it improved its Watson Visual Recognition service for facial analysis, decreasing its error rate by nearly tenfold.

“AI holds significant power to improve the way we live and work, but only if AI systems are developed and trained responsibly, and produce outcomes we trust,” IBM said in a blog post. “Making sure that the system is trained on balanced data, and rid of biases is critical to achieving such trust.”

©2018 The Mercury News (San Jose, Calif.) Distributed by Tribune Content Agency, LLC.
