Ed-Tech Development Evolves to Address Risks of AI

As schools and universities make more use of artificial intelligence-driven tools, some ed-tech developers are seeking input from educators and implementing policies related to ethical use and data privacy.

Despite the growing popularity of generative artificial intelligence as a tool to enhance instruction in K-12 schools and universities, many educators remain concerned about AI hallucinations and the lack of data privacy protections in today’s market for GenAI-powered ed tech. To help address these concerns, some ed-tech leaders are adjusting their technology development processes to include teacher input, as well as internal protocols focused on ethical use and data safety.

According to Amber Orenstein, senior vice president of product management, design and operations at the ed-tech company Discovery Education, many of her customers are looking to AI tools to help provide feedback to students and generate course content. However, concerns about data privacy remain a major barrier to AI adoption, particularly at the K-12 level.

Orenstein noted that as ed-tech companies continue integrating AI into their products, they should consider setting up internal policies and controls to ensure staff are developing tools that will be safe and ethical. Discovery Education, she said, has tried to adhere to best practices for data privacy and cybersecurity.

“We’ve approached the challenge of making the ‘AI revolution’ safe and effective for the teachers and students we serve by focusing on internal controls. We have also created a rigorous internal AI policy that guides employees’ use of AI,” she said. “Then, within specific teams, we have created guiderails that further ensure the appropriate uses of AI within those specific teams’ work.”

In terms of how user data should be used on AI platforms, Orenstein said Discovery Education recently developed AI functions that collect data on students’ problem-solving processes so their teachers can provide personalized feedback and guidance. Giving students quick feedback, she noted, is one of educators’ key goals in adopting new ed-tech tools and AI programs, and teacher input is an important part of developing effective classroom tools.
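
For illustration, a tool like the one Orenstein describes might log each step of a student’s work under a pseudonymous ID so feedback can be personalized without the event stream carrying names. The following Python sketch is a hypothetical data model, not Discovery Education’s actual schema; the ProblemSolvingEvent class and its fields are illustrative assumptions.

# Hypothetical sketch: log problem-solving steps under a pseudonymous student
# ID so a teacher-facing tool can see where a student struggled without the
# event record carrying personal information.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProblemSolvingEvent:
    student_pseudonym: str   # opaque ID; the mapping to a real student lives elsewhere
    problem_id: str
    step: str                # e.g., "set up equation", "checked units"
    correct: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def summarize_struggles(events: list[ProblemSolvingEvent]) -> dict[str, int]:
    # Count incorrect attempts per problem so a teacher can target feedback.
    struggles: dict[str, int] = {}
    for e in events:
        if not e.correct:
            struggles[e.problem_id] = struggles.get(e.problem_id, 0) + 1
    return struggles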

“In the coming months, we will [also] launch a new AI-powered formative assessment technology to a small group of educators,” she said. “Our goal is to rapidly iterate and co-design [tools] in partnership with the experts in the field — the teachers who do the hard work each and every day to drive change and improve learning outcomes for students.”

Brian Imholte, head of education and learning services at the software engineering services company EPAM Systems, said EPAM has talked to school leaders, educators and students across the country to gauge their needs and concerns related to GenAI tools in education. He said he’s noticed more apprehension among educators than among students when it comes to using AI tools, partly due to concerns about data privacy and AI hallucinations. Reducing hallucinations remains a major challenge for developers of AI-driven educational tools, he said, and his company has been working to develop more accurate, effective AI tutors.
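
One common mitigation for the hallucination problem Imholte describes is to ground a tutor’s answers in vetted course material, a pattern often called retrieval-augmented generation. The Python sketch below is a generic illustration of that approach, not EPAM’s implementation; search_course_material and llm_complete are hypothetical stand-ins for a retriever and a model call.

# Generic retrieval-augmented tutoring sketch (illustrative only): the model
# is instructed to answer strictly from retrieved course passages, which
# reduces, but does not eliminate, hallucinated answers.
def answer_with_sources(question, search_course_material, llm_complete):
    passages = search_course_material(question, top_k=3)  # hypothetical retriever
    context = "\n\n".join(passages)
    prompt = (
        "Answer the student's question using ONLY the course material below.\n"
        "If the material does not contain the answer, reply: 'I don't know.'\n\n"
        "Course material:\n" + context + "\n\nQuestion: " + question
    )
    return llm_complete(prompt)  # hypothetical model call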

“One of the big things that we hear from teachers [about AI] is, there’s often this lack of understanding and a lack of vision and immediate reaction of, ‘Kill it!’ I think from students, it’s a very, very different experience,” he said. “I think what you’re seeing from a younger set of folks and students who are younger college students is a complete embracing [of the technology], and they don’t care much about the privacy of the data pieces because with younger kids in particular, they’ve grown up in a world where all their data is in essence shared.”

Charles Thayer, chief academic officer at the online curriculum provider Lincoln Learning Solutions, also singled out AI hallucinations and data privacy as top concerns.

“Artificial intelligence [tools] can hallucinate and make things up,” he said. “You can trick it if it’s not being prompted correctly, and things can go sideways quickly. … So we’re taking a very deliberate, intentional approach with how we leverage the technology.”
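
Guarding against the prompt manipulation Thayer describes typically means screening both what goes into a model and what comes out of it. The Python sketch below illustrates the idea in its simplest keyword-matching form; it is not Lincoln Learning Solutions’ actual process, and production systems generally rely on trained classifiers rather than fixed word lists.

# Illustrative guardrail sketch: reject prompts that resemble known jailbreak
# phrasing and flag responses that drift outside the allowed subject area.
# Fixed lists like these are only a baseline, not a complete defense.
BLOCKED_PHRASES = ("ignore previous instructions", "pretend you are", "act as if")
ALLOWED_TOPICS = ("algebra", "fractions", "equations")

def prompt_is_safe(prompt: str) -> bool:
    # Screen incoming text for phrasing that tries to steer the model off its rails.
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def response_is_on_topic(response: str) -> bool:
    # Keep only responses that mention at least one in-scope topic.
    lowered = response.lower()
    return any(topic in lowered for topic in ALLOWED_TOPICS)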

Lincoln Learning Solutions Chief Technology Officer Dave Whitehead said AI tools should be designed with data privacy in mind from the start, protecting personal information from unauthorized access.

“Tech companies that use AI to enhance their products and services should be aware of potential risks and threats to their data and systems, as well as the expectations of their users. Cybersecurity data and privacy practices for AI should include securing the data sources and storage, and ensuring the quality and validity of the data used for AI training and testing, protecting the identity and privacy of learners and educators, and providing clear and understandable explanations on how the AI works and what it does,” he said, adding that companies need to keep up with changing local, state and federal regulations on AI.
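
One concrete example of the practices Whitehead lists, protecting the identity of learners before their data reaches an AI service, is redacting personally identifiable information. The Python sketch below shows a baseline regex approach; it illustrates the general practice rather than Lincoln Learning Solutions’ tooling, and production systems typically layer on dedicated PII-detection services.

# Baseline PII redaction sketch: replace emails and U.S.-style phone numbers
# with placeholder tokens before text is sent to an AI service. Regexes like
# these catch only the obvious cases; dedicated PII tooling goes further.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact_pii("Contact jane.doe@school.org or 555-123-4567 about the quiz."))
# Output: Contact [EMAIL] or [PHONE] about the quiz.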

Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.