U.S. Department of Education Calls for ‘AI Bill of Rights’

The department’s Office of Educational Technology, in response to the speed of AI innovation and classroom implementation, identified key questions, concerns and recommendations for establishing school policies.

When it comes to use of AI in the classroom, America is at a conceptual crossroads, torn between the electric bike and robot vacuum.

With the bike, the U.S. Department of Education explains in a recent report, humans are always in command even though our effort is multiplied by a technological enhancement. The robot vacuum, meanwhile, just does its job, freeing us from involvement or supervision.

Using that analogy, the department calls for an “AI Bill of Rights” at a time when classroom technologies like ChatGPT are evolving faster — and pedaling harder — than the rate at which educators and policymakers can write rules for ethical, responsible use.

The ED Office of Educational Technology’s report, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, was released in May. It recommends establishing general guidelines, such as guardrails for AI use, and keeping humans in the loop at every step, with teachers at the center of that circle. Concerns about student privacy and surveillance are noted early and often in the report. It says voice-recognition tools can be discriminatory if they don’t recognize regional dialects, and it warns of AI’s potential to create achievement gaps when it speeds up or slows down curricula.

Moreover, the 71-page document identifies what AI can improve on and where it needs to go.

“From deficit-based to asset-oriented; from individual cognition to social and other aspects of learning; from fixed tasks to active, open and creative tasks; and from correct answers to additional goals,” the report said.

The U.S. Department of Education commissioned the report in 2021 after field research found that education technology developers, in areas including student information systems, school instruction, parent-teacher communication, facility logistics and classroom instruction, were planning to bolster their products with AI functions. That prompted four listening sessions between June and August 2022, attended by more than 700 people with an interest in education technology. Because the sessions occurred before the public largely became aware of generative AI chatbots such as ChatGPT, the department further engaged educational and AI policy experts before completing the report.

The report emphasizes key differences between technology and human teachers, noting that AI cannot meet learners where they are the way teachers can, nor can it exercise common-sense judgment.

“(E)xperts in our listening sessions warned that AI models are narrower than visions for human learning and that designing learning environments with these limits in mind remains very important,” the report said. “The models are also brittle and can’t perform well when contexts change.”

Beyond that, the report has more questions than answers. Chief among them:

  • “How are youth voices involved in choosing and using AI for learning?”
  • “When AI is used, are students’ privacy and data protected? Are students and their guardians informed about what happens with their data?”
  • “Is high-quality research or evaluations about the impacts of using the AI system for student learning available? Do we know not only whether the system works but for whom and under what conditions?”
  • “Is AI improving the quality of an educator’s day-to-day work? Are teachers experiencing less burden and more ability to focus and effectively teach their students?”
  • “Do teachers have oversight of AI systems used with their learners? Are they exercising control in the use of AI-enabled tools and systems appropriately or inappropriately yielding decision-making to these systems and tools?”
  • “To what extent are AI technologies enhancing rather than replacing human control and judgment of student learning?”
  • “How will users understand the legal and ethical implications of sharing data with AI-enabled technologies and how to mitigate privacy risks?”
  • “Are we learning for whom and under what conditions AI systems produce desired benefits and impacts and avoid undesirable discrimination, bias, or negative outcomes?”
Despite so many questions, criticisms and concerns, the report identifies AI as a helpful classroom partner in that teachers can delegate tasks to a virtual assistant in order to spend more interactive time with students. And AI is already being used to coach teachers and help them get better at their jobs.

The report concludes with a call to action comprising seven recommendations, chief among them keeping humans “in the loop” and involved in any process augmented by AI. It also recommends ensuring that priorities, strategies and technology tools place the needs of students first, ahead of excitement over new tech, and warns against romanticizing AI or taking a “let’s see what the tech can do” approach. It calls on the research-and-development sector of ed tech to build best practices for teaching and learning into the design of new tools, to make those tools adaptable to specific contexts, needs and circumstances, and to design them in ways that improve trust and safety. Finally, it calls for prioritizing and strengthening the trust of constituents, for involving educators in decisions, and for policymakers to work with constituents to develop specific guidelines and student privacy laws that school leaders can follow at the local level, rather than asking school districts to come up with their own regulations.

The report adds that ongoing discussions about the role of AI throughout the educational ecosystem are a good first step toward creating policies and standards.

“We see progress that we can build upon occurring,” the report says, “as constituents discuss these three types of questions: What are the most significant opportunities and risks? How can we achieve trustworthy educational AI? How can we understand the models at the heart of applications of AI and ensure they have the qualities that align to educational aspirations?”