Workplace Facial Screenings Could Have Consequences (Opinion)

The presence of artificial intelligence in workplaces has risen drastically; however, the technology remains highly controversial. Perhaps the workplace is better off without AI.

A video monitor displays attendees as their images are captured with CyberLink's facial recognition during CES 2020, at the Las Vegas Convention Center on January 8, 2020. (David Becker/Getty Images/TNS)
(TNS) — Artificial intelligence has been on the rise in workplaces for at least the past decade. From consumer algorithms to quantum computing, AI’s uses have grown in type and scope.

One of the more recent advances in AI technologies is the ability to read emotions through facial and behavioral analysis. While emotional AI technology has largely been implemented in marketing campaigns and health care, a growing number of high-profile companies are using it in hiring decisions.

Companies should stop this immediately.

There are a number of risks associated with this technology. One of the more troubling is apparent racial bias: these systems assign more negative emotions to Black people than to white people, even when the subjects are smiling.

For example, Microsoft’s Face API software scored Black faces as three times more contemptuous than white faces. This bias is obviously harmful in a number of ways, but it’s especially devastating to non-white professionals, whose ability to secure a job and advance within their field is undermined.

Any workplace that uses a hiring algorithm that disproportionately scores Black and brown people as more negative emotionally will further entrench workplace inequality and discriminatory treatment.

According to a Washington Post report, more than 100 companies are currently using emotional AI, and this technology has already been used to assess millions of job applicants. Among the top-tier companies deploying emotional AI are Hilton, Dunkin’ Donuts, IBM and the Boston Red Sox.

Emotion recognition has been estimated to be at least a $20 billion market.

The technology uses facial recognition to analyze emotional and cognitive traits. Generally, an interviewee answers preselected questions during a recorded video interview and is assessed by the AI algorithm. The assessment produces a grade or score on various characteristics, including verbal skills, facial movements and even emotional traits, all of which aim to predict how likely the candidate is to succeed in a position before the company takes next steps.
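
To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of what such a scoring step might look like. The feature names, weights and score function are illustrative assumptions for explanation only; no vendor has published its actual model.

    # Hypothetical sketch of an emotional-AI screening score.
    # Every feature, weight and threshold here is an assumption;
    # real products keep theirs proprietary.

    from dataclasses import dataclass

    @dataclass
    class InterviewFeatures:
        # Signals a system might extract from a recorded video interview.
        verbal_fluency: float       # 0.0-1.0, from speech-to-text analysis
        smile_frequency: float      # 0.0-1.0, from frame-by-frame face analysis
        eye_contact: float          # 0.0-1.0, estimated gaze toward the camera
        inferred_positivity: float  # 0.0-1.0, the contested "emotion" signal

    # Assumed weights. In a real product these would be learned from past
    # hires, which is exactly where historical bias can creep in.
    WEIGHTS = {
        "verbal_fluency": 0.35,
        "smile_frequency": 0.20,
        "eye_contact": 0.15,
        "inferred_positivity": 0.30,
    }

    def employability_score(f: InterviewFeatures) -> float:
        """Collapse the extracted features into a single 0-100 'grade'."""
        raw = (
            WEIGHTS["verbal_fluency"] * f.verbal_fluency
            + WEIGHTS["smile_frequency"] * f.smile_frequency
            + WEIGHTS["eye_contact"] * f.eye_contact
            + WEIGHTS["inferred_positivity"] * f.inferred_positivity
        )
        return round(100 * raw, 1)

    # A candidate whose accent lowers speech-to-text accuracy, or whose face
    # the emotion model misreads, scores lower through no fault of their own.
    candidate = InterviewFeatures(0.9, 0.4, 0.7, 0.3)
    print(employability_score(candidate))  # -> 59.0 with these assumed weights

Even in this toy version, the outcome hinges entirely on weights and inferred signals the candidate never sees, which is the core of the critique that follows.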

Supporters of the technology argue that it removes human prejudice from the equation. But replacing human bias with an artificial one can’t be the solution.

Moreover, companies tend to use emotional AI to screen candidates against a very limited data set when deciding who gets marked as “employable.” These limited data sets usually favor majority groups while ignoring minority ones. For example, if someone’s first language isn’t English and they speak with an accent, or if an applicant is disabled, they are more likely to be marked as less employable.

The technology can also work to the disadvantage of women.

For starters, much of the AI technology fails to properly identify women, even iconic women such as Oprah Winfrey and Michelle Obama. Many examples have shown that women applicants, particularly in fields already dominated by men, are downgraded and less likely to be recommended than male applicants.

There are a plethora of other anecdotes that highlight the biases of emotional AI, even outside the workplace. These include cameras that identify Asian faces as blinking and software that misgenders those with darker skin.

Of course, companies have been warned of these ongoing biases and have so far ignored the warnings; many still use software like HireVue, which Princeton professor of computer science Arvind Narayanan described as “a bias perpetuation engine.” The AI Now Institute, a research institute based at New York University, has called for a complete ban on emotional AI technology.

Until emotional AI is shown to be free of racial and gender biases, it’s unsafe for use in a world already struggling to overcome inequalities. If companies want to assist in that struggle, they should end the use of emotional AI in the workplace.

Distributed by Tribune Content Agency, LLC.