Learning Engineering Tools Competition Tackles Learning Loss

Thirty teams received $4 million in prizes for developing tools that could address learning loss from the COVID-19 pandemic by enabling faster feedback and communication for students and teachers.

An open laptop and notebook with education-themed icons floating above them. (Shutterstock)
It’s no secret that the COVID-19 pandemic has led to a ripple effect of learning loss. Many states are making efforts to address the problem, with Louisiana trying to recruit more teachers, Ector County in Texas resorting to virtual tutors, Indiana turning to blended learning and targeted interventions, and Montana addressing technology access, and that’s just the tip of the iceberg. In an international competition over the past several months, the Washington, D.C.-based education organization The Learning Agency and Georgia State University highlighted a slew of new digital tools focused on improving learning, from feedback generators to paper digitizers, and what they all have in common is that they offer students guidance rather than leaving them to their own devices.

According to a news release, the Learning Engineering Tools Competition, in its second year, doled out roughly $4 million in prize money (more than double the $1.5 million awarded across 18 teams in 2021) to 30 teams across four tracks: K-12 accelerated learning, assessment, learning science research and adult learning. The winners, who include entrepreneurs, learning scientists, educators and researchers from around the world, will develop technologies and platforms that could help more than 4 million students by the end of the year, and 40 million kids over the next three years, the release said, citing estimates gathered from each winning team.

Among the winners in the assessment track, the Stanford University-based Short Answer platform lets students in grades 6-12 submit instant responses and discuss assessments through text, video, audio or images, with the goal of increasing student engagement and communication. Stanford graduate student and Short Answer team member Adam Sparks said the faster feedback it enables helps mitigate learning loss and maximize time in the classroom.

“Feedback and corrections happen in the moment rather than days apart, alleviating the lag in feedback experienced by students with attendance issues,” he told Government Technology in an email.

Sparks said the plan is also to give students a feature that allows them to study in small groups or individually, on phones or with printable cards, so they have guidance both in and out of school settings. Some teachers worry, he said, that struggling students will not be able to provide helpful feedback or simply won’t take the process seriously, but he noted the platform is meant for all students.

“We’ve designed Short Answer in a way that scaffolds the feedback process so that anyone, even novices to content, can provide meaningful feedback to peer work. The comparative judgment process at the core of our tool ensures that students are never analyzing peer work in isolation, but instead are comparing peer work and using teacher-provided metrics upon which to guide that comparison,” Sparks said. “While students’ work is anonymous, the debate and discussion inherent in every Short Answer activity type ensure social accountability that encourages students to take the feedback process seriously.”

The Smart Paper team, made up of Playpower Labs in India and Shonan Seminar in Japan, created a tool that analyzes learning data captured on paper. Nirmal Patel, the chief data scientist at Playpower Labs, said that Smart Paper aims to combine the effectiveness and comfort of paper with the power of digital tools.

“Paper learning is effective but difficult to manage in the digital ecosystem. Smart Paper makes paper a first-class digital citizen and allows educators to manage traditional paper learning digitally,” Patel told GovTech in an email. “When more student work on paper is digitized, it opens up a new avenue for educational data mining research. … A large amount of foundational formative learning happens on paper, and AI-based technologies can enable students to get rapid feedback. Reliable AI feedback models can empower self-learners and facilitate flip learning.”

Patel believes that if students are assessed using Smart Paper, which leverages AI to speed up feedback, educators can pinpoint the specific learning needs of those who need the most help.

“As AI technologies improve over time, the quality of automated feedback can also improve and become precise in identifying where learners need help,” he said. “AI can also reduce the time for managing and grading paper assessments, enabling teachers to spend more time teaching and connecting with students.”

M-Powering Teachers, a team of individuals from Harvard University, Stanford University and the University of Maryland at College Park, was among the winners in the learning science research track. Its machine learning tool uses natural language processing to analyze audio recordings from math classrooms and generate feedback for teachers, as well as insights for researchers, according to University of Maryland Assistant Professor of Education Policy Jing Liu.

“The innovative integration of machine learning and educational theory and practices creates multiple promising avenues to change the future of teaching and learning,” Liu told GovTech in an email.

Liu said that researchers can use the tool to replace or supplement labor-intensive human scoring of classroom teaching, enabling rapid-cycle learning science studies and allowing instructional innovations to scale more quickly and efficiently. He added that, from a practical perspective, instead of receiving human-based classroom observations only once or twice per year, teachers can use M-Powering Teachers (MPT) to receive immediate, on-demand, individualized feedback on key aspects of their teaching. Liu said that in a trial, the team’s algorithm, which measures teachers’ uptake of student ideas, proved beneficial.

“In a randomized controlled trial, we found that feedback based on this measure can improve instruction and student engagement,” he said. “The (natural language processing) measures we proposed for our winning tool will greatly enrich the kinds of feedback teachers can get in a way that is private, consistent and individualized and help create more student-centered and equitable classrooms.”
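To give a sense of the kind of computation involved, the minimal sketch below scores a toy classroom transcript by how much each teacher reply lexically builds on the preceding student utterance, using off-the-shelf TF-IDF cosine similarity. Everything here, including the sample transcript and the function name uptake_scores, is an illustrative assumption; the M-Powering Teachers team has not published its model in this story, and its actual measure is more sophisticated than simple word overlap.

```python
# Hypothetical sketch: a crude proxy for "uptake of student ideas."
# NOT the M-Powering Teachers model; it only illustrates the idea of
# scoring how closely a teacher's reply builds on a student's utterance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy transcript: alternating (speaker, utterance) turns from a math class.
transcript = [
    ("student", "I think the slope is two because y goes up two when x goes up one."),
    ("teacher", "Good, you noticed y increases by two for every one-unit change in x."),
    ("student", "Is the answer just four?"),
    ("teacher", "Let's move on to the next problem."),
]

def uptake_scores(turns):
    """Score each student->teacher pair by lexical overlap (0 to 1)."""
    scores = []
    for (spk1, utt1), (spk2, utt2) in zip(turns, turns[1:]):
        if spk1 == "student" and spk2 == "teacher":
            # Fit TF-IDF on just this pair and compare the two vectors.
            tfidf = TfidfVectorizer().fit_transform([utt1, utt2])
            scores.append(cosine_similarity(tfidf[0], tfidf[1])[0, 0])
    return scores

# Higher score = the reply echoes the student's idea; a dismissive reply scores near zero.
print(uptake_scores(transcript))
```

In practice, a measure like this would rely on richer language models and rubrics validated against human raters; the sketch only shows the general shape of turning classroom audio transcripts into per-exchange feedback scores.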

Editor's note: A previous version of this story incorrectly said the Smart Paper team was based in Japan.
Giovanni Albanese Jr. is a staff writer for the Center for Digital Education. He has covered business, politics, breaking news and professional soccer over his more than 15-year reporting career. He has a bachelor’s degree in journalism from Salem State University in Massachusetts.