AI May Not Be the Answer to All that Ails Human Resources

Some companies have taken AI chatbots to another level, even offering automated psychotherapy. Others embed AI in work computers to detect burnout or other worrisome behavior. But the technology poses its own issues.

A digital illustration of a human resources network. (Shutterstock/Connect world)
(TNS) — Harried supervisors will tell you managing a team is difficult even in the best of times, when face-to-face interaction happens daily and business is good.

Providing constructive feedback to dispersed workers during a once-in-a-lifetime pandemic that has disrupted regular business can feel impossible. Some companies have seized on new tools built on artificial intelligence to monitor their employees. These nascent productivity-monitoring tools, though, come with high risks, according to Serife Tekin, a professor who studies ethics and AI at the University of Texas at San Antonio.
 
"There's a big, big hype about artificial intelligence and what it can do," she told me. "The enthusiasm for these tools has increased since we started the Zoom life with the beginning of the pandemic. But they're extremely, extremely primitive." Most of us have seen chat boxes pop up on websites offering help. In most cases, these are computer programs using artificial intelligence to answer basic questions. If your question is beyond the AI's capabilities, the chat is routed to a customer service representative.
 
Some companies have taken AI chatbots to another level, even offering automated psychotherapy. Others offer to embed AI in work computers to detect burnout or other worrisome employee behaviors. An Amazon driver recently quit because he was tired of the company's AI looking over his shoulder.
 
"This whole technology is grounded upon this thing called digital phenotyping," Tekin said. "It's basically inferring people's mental states based on their digital footprint."
 
The AI measures how quickly you speak on the phone or in a Zoom call. It can track how often you visit social media sites or how you respond to text messages. It measures your online behavior to infer your mental health, a correlation researchers discovered more than a decade ago.
 
"Human‐computer interaction measures not what you type, but how you type," Dr. Thomas Insel, a former head of the National Institutes of Health, wrote in a landmark 2008 paper. "Subtle aspects of typing and scrolling, such as the latency between space and character or the interval between scroll and click, are surprisingly good surrogates for cognitive traits and affective states."
 
Insel and other researchers are convinced digital phenotyping will revolutionize mental health care. Hundreds of companies have seized on the research to launch employee-monitoring apps; some concentrate on boosting productivity, while others are nothing more than spyware. AI applications can track an employee's every keystroke or mouse movement. Others use the computer's camera to track your eyes. If the AI determines a worker is unusually frustrated or unproductive, it can report the problem to human resources or a supervisor.
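In its simplest form, that reporting step is just thresholding. A hypothetical sketch, reusing the features above; the threshold values and the labels attached to them are invented for illustration:

```python
# A deliberately naive version of the flagging logic such apps apply:
# compare a worker's features against fixed limits and report anything
# that exceeds them. Thresholds and interpretations are invented.
ALERT_THRESHOLDS = {
    "mean_word_pause": 1.5,   # seconds; long pauses read as "distracted"
    "gap_variability": 0.4,   # erratic rhythm read as "frustrated"
}

def flag_for_review(features):
    """Return the names of any metrics that exceeded their limits."""
    return [name for name, limit in ALERT_THRESHOLDS.items()
            if features.get(name, 0.0) > limit]

alerts = flag_for_review({"mean_word_pause": 2.1, "gap_variability": 0.2})
if alerts:
    print("Report to supervisor:", alerts)  # ['mean_word_pause']
```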
 
The apps can also flag high performers and perhaps give supervisors insight into how to boost the productivity of other employees. But like many new technologies, these apps often overpromise and underperform.

At their most beneficent, the tools are used by human resources departments to detect employee burnout. "It's assuming that people respond to stress or burnout similarly," Tekin said. "Your response to burnout might be turning inward and becoming even less expressive, exhausted and less communicative. Whereas my response to burnout might be trying to control the situation."
 
There are also cultural differences in how people cope with stress, depression and anger. The challenge for AI apps is sorting through the variability of human responses. If developers were candid, they would admit that the complexity of the human mind cannot be captured by a computer program running over a broadband internet connection.
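The statistical half of that objection is easy to demonstrate. In the toy example below, with invented numbers, a single population-wide threshold flags a naturally deliberate typist who is perfectly fine and misses a fast typist whose behavior has genuinely changed:

```python
# Why one-size-fits-all scoring misfires. All numbers are invented.
GLOBAL_LIMIT = 1.0  # seconds of mean word pause, applied to everyone

workers = {
    # name: (usual mean word pause, today's mean word pause), in seconds
    "deliberate_typist": (1.3, 1.3),  # slow as always; nothing changed
    "fast_typist":       (0.4, 0.9),  # under the limit, but twice as slow
}

for name, (baseline, today) in workers.items():
    global_flag = today > GLOBAL_LIMIT       # one threshold for everyone
    relative_flag = today > 1.5 * baseline   # change from own baseline
    print(f"{name}: global={global_flag}, relative={relative_flag}")

# deliberate_typist: global=True, relative=False   (a false alarm)
# fast_typist: global=False, relative=True   (missed by the global rule)
```

Comparing each worker against their own baseline fixes only the numeric half of the problem; it still cannot say whether a change in degree means burnout, and it says nothing about Tekin's deeper point that two people under the same stress can change in opposite directions.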
 
"I think it's extremely problematic to be surveilling and monitoring employees that way, because this will then imply that it will be in the employee's records," Tekin added. At some point, a behavioral issue could be caused by a medical one, which triggers all kinds of federal and state regulations.
Tekin's biggest concern is that the technology is being deployed more quickly than ethicists or policymakers can respond. The Food and Drug Administration has already loosened rules around psychotherapy apps to expand access during the pandemic because so many patients do not want in-person appointments.
 
"AI being used to enhance employee-employer relationships, I think is more acceptable," Tekin added. "An AI chatbot replacing the kind of work that is expected from the employer is problematic."
Today's managers are always looking for ways to work more efficiently. Companies are already using AI to screen job applicants, check work performance and gauge employee job satisfaction.
 
Truly empathetic managers, though, will not rely on digital tools to perform the humanistic portion of their jobs. Heartfelt conversations with coworkers are laborious and time-consuming but still the best way to keep top talent on board.
 
©2021 the San Antonio Express-News. Distributed by Tribune Content Agency, LLC.