Opinion: AI Might Eliminate Humanity in Human Resources

According to recent findings, a growing number of human resources professionals use artificial intelligence to evaluate employees. But the technology can lead to unfair appraisals or outright discrimination.

(TNS) — With 86% of major U.S. corporations predicting that artificial intelligence will become a “mainstream technology” at their company this year, management-by-algorithm is no longer the stuff of science fiction.

AI has already transformed the way workers are recruited, hired, trained, evaluated and even fired. One recent study found that 83% of human resources leaders rely in some form on technology in employment decision-making.

For example, at UPS, AI monitors and reports on driver safety and productivity, tracking drivers’ every movement from the time they buckle their seat belts to the frequency with which they put their trucks in reverse. At IBM, AI identifies employee trends and makes recommendations that help managers make decisions on hiring, salary raises, professional development and employee retention. Even NFL teams are using AI to assess player skills and make injury risk assessments during the recruiting process.

Amazon, a pioneer in the use of AI, has gone all in by integrating the technology throughout the company, especially in human resources. Just a few months ago, contract employees for Amazon claimed that they were being summarily fired by automated emails for failing to meet preprogrammed productivity benchmarks.

In fact, Amazon’s use of an electronic tracking system made headline news for the way it monitored worker productivity and, allegedly, automatically fired employees it deemed were underperforming.

According to a 2018 letter written by attorneys for Amazon, if an employee spent too much time off task, the system “automatically generate(d) … warnings or terminations regarding quality or productivity without input from supervisors.” An Amazon spokesperson subsequently clarified, “It is absolutely not true that employees are terminated through an automatic system. We would never dismiss an employee without first ensuring that they had received our fullest support, including dedicated coaching to help them improve and additional training.”

These reports highlight the need for employers to find the right division of labor between artificial intelligence and human resources personnel — between using AI to improve human decision-making and delegating decision-making entirely to algorithms.

Using AI to make decisions ordinarily made by HR professionals can have significant legal ramifications, so employers should exercise caution when deciding when — and whether — to hand such matters over to algorithms. There may be cases in which compliance with federal anti-discrimination law requires human intervention. This is frequently the case when it comes to workplace accommodations for pregnant, disabled and religious employees.

The Americans with Disabilities Act requires covered employers to provide reasonable accommodations for individuals with disabilities. Similarly, under Title VII of the Civil Rights Act of 1964, employers cannot discriminate on the basis of pregnancy and religious practices of employees. Generally, these accommodations are granted through an interactive process between employer and employee: two humans.

Most of the time, an employee initiates the interactive process by notifying the employer of the need for a reasonable accommodation — a conversation that can be sensitive, personal and even difficult. If an employee's primary interface with the employer is an app or an algorithm, initiating that process may be daunting, and employees may not be willing to disclose some of their most personal and protected issues to a chatbot. For that matter, it may not even be clear to the employee who the appropriate point of contact is.

It may come as a surprise that there are some instances in which an employer may be expected to initiate the interactive process without being asked — for example, if the employer knows that the employee is experiencing workplace problems because of a disability. Under those circumstances, the process often starts when a supervisor senses, with their own eyes or judgment, that an employee needs intervention.

Accordingly, employees and civil rights advocates have voiced concerns about whether the use of AI in employment decision-making can accommodate a process that is so heavily dependent on personal interaction. An algorithm, no matter how sophisticated, may not be capable of the sensitivity and responsiveness needed to serve employees seeking accommodations.

Whether employers rely on algorithms, human HR professionals, or both, they must develop and implement policies for handling nuanced employee situations. If an employer uses AI to review performance and track productivity, it should ensure that the system allows for — and accounts for — reasonable accommodations related to disability, pregnancy and religious observance.

Above all, employers must inform their employees that the requirement to engage in an interactive process for an accommodation under the ADA and Title VII still applies when the employer uses AI to track productivity.

While AI is becoming mainstream technology in the workplace, discrimination-by-algorithm must not.

©2021 Chicago Tribune. Distributed by Tribune Content Agency, LLC.