
Will AI in Schools Widen the Digital Divide?

For families and students who lack home Internet or personal devices, the introduction of technologies like artificial intelligence in schools may only exacerbate digital inequities.

Educators and ed-tech professionals imagine the endless possibilities of artificial intelligence in the classroom while also worrying that the AI revolution could further widen the digital divide.

The disparities are everywhere: in income, race, language and location, whether urban, suburban or rural. Connectivity, access and understanding remain obstacles. If families cannot afford high-speed Internet at home, let alone a personal web-browsing device that doesn’t have to be shared, how can their children avoid falling further behind as AI becomes intertwined with learning?

In California, 56 percent of public school children are eligible for free or reduced-price school lunches, 40 percent of children in low-income households don’t have Internet access outside of school, and 20 percent of those enrolled in K-12 are still learning English, according to the Public Policy Institute of California (PPIC), a nonprofit, nonpartisan think tank.

Niu Gao, a senior fellow with PPIC, said federal infrastructure money that subsidizes high-speed Internet services for income-eligible families is a “good start,” but more resources are needed in California, especially in coastal areas where the cost of living is high but service industry jobs barely pay a living wage.

“If the subsidy is $30 a month but the average cost of broadband is $60 a month, you still need that other $30 to buy food,” she said in an interview with Government Technology.

Bias within the AI tools themselves is another concern. The U.S. Department of Education’s Office of Educational Technology has identified advancing equity as one of four foundations for building ethical, equitable AI policies. In its May 2023 report, Artificial Intelligence and the Future of Teaching and Learning, the agency warned that “algorithmic bias could diminish equity at scale with unintended discrimination.”

“Data sets are used to develop AI, and when they are nonrepresentative or contain undesired associations or patterns, resulting AI models may act unfairly in how they detect patterns or automate decisions,” the report said. “Bias is intrinsic to how AI algorithms are developed using historical data, and it can be difficult to anticipate all impacts of biased data and algorithms during system design. The department holds that biases in AI algorithms must be addressed when they introduce or sustain unjust discriminatory practices in education.”

Julianne Robar, director of metadata and product interoperability for the ed-tech company Renaissance, said there is intense pressure and fierce competition to develop AI tools that can help close the equity gap. Her company developed Lalilo, AI-powered speech recognition software that teaches reading to younger students. Tools like this free up teachers to spend more time with students who need extra help, which is often the case for children who are still learning English or are dealing with poverty or hunger outside of school. A competing ed-tech company, SoapBox, upped the ante in the early literacy space by developing a speech recognition product that recognizes children’s accents and dialects, which can vary by race, location and socioeconomic status, Robar noted.

Robar, who is Canadian, often tinkers with AI tools in her quest to develop more inclusive software. She recently asked ChatGPT what a fifth grade girl should wear to a school dance, then repeated the prompt for a girl of Asian descent and again for a girl of Hispanic descent. Those responses were overloaded with information about traditional clothing, styles of dance and other cultural rituals, none of it relevant to children who happen to be racial minorities attending a dance at the U.S. or Canadian elementary school they already go to. The responses did not include that level of detail when the same prompts asked about fifth grade boys of white, Asian or Hispanic descent.

“This is another form of bias,” Robar said. “It’s known as ‘othering.’”

Alongside these concerns, researchers and technology developers are exploring new ways to use AI to bridge the digital divide. The University of Illinois Urbana-Champaign opened the Institute for Inclusive and Intelligent Technologies for Education, where AI tools are being developed to support non-cognitive learning skills like persistence, academic resilience and collaboration. Similarly, data science company Student Select recently launched an AI-powered university admissions tool that reduces bias by ignoring names, dates and ZIP codes in candidate applications, instead assessing non-cognitive traits like positive attitude, communication, critical thinking and leadership.

This story originally appeared in a larger feature on AI in education in the September issue of Government Technology magazine.
Aaron Gifford has several years of professional writing experience, primarily with daily newspapers and specialty publications in upstate New York. He attended the University at Buffalo and is based in Cazenovia, NY.