ASU+GSV 2023: AI Will Fundamentally Change Education

Educators and tech CEOs at the annual ASU+GSV Summit this week stressed the need to adapt curricula and teach students to use artificial intelligence without devaluing important skills.

From left, ASU+GSV Summit moderator Mitchell Stevens on Tuesday discusses ways that AI could change education with Stanford University computer science professor Emma Brunskill, Turnitin CEO Chris Caren, GPTZero developer Edward Tian, Southern New Hampshire University President Paul LeBlanc and Wharton associate professor Ethan Mollick.
Image courtesy of ASU+GSV Summit
Ever since the AI chatbot ChatGPT launched late last year, K-12 and higher education instructors have wrestled with whether to bar students from using the technology over concerns about academic dishonesty, whether to encourage limited use of it for tasks like research, and, more broadly, to what extent AI tools should be accepted in the classroom.

Those questions were the main focus of an ASU+GSV Summit seminar Tuesday led by Stanford University computer science professor Emma Brunskill, Turnitin CEO Chris Caren, Southern New Hampshire University (SNHU) President Paul LeBlanc, Wharton School of the University of Pennsylvania associate professor Ethan Mollick, and Edward Tian, a computer science and journalism student at Princeton who founded and developed GPTZero. Their discussion, titled “The Future of Integrity In The Brave New World of AI/GPT,” centered on the ways advances in the field of AI and tools like ChatGPT could drastically change how learning and career training will take place in an increasingly tech-integrated education sector and job market.

Noting the recent development of tools built to detect AI plagiarism, such as GPTZero and Turnitin's new AI detection features, Tian said educators must also adapt their curricula to changes in technology rather than focus solely on plagiarism moving forward.

“When we first launched Jan. 1, it made a lot of sense to have immediate safeguards in place because the buzz around ChatGPT was everywhere,” he said. “But today, we really need a shift from detection on the individual level to detection on the policy and institutional level. We just launched a new product to do that. … We really need to get to the core of what all of this is about. It’s not about catching the student. It’s not even about detecting an AI. It’s about preserving what’s human.”

Caren said educators must familiarize themselves with AI tools to help design lessons and assignments that encourage students to strengthen their own critical thinking skills, as well as to teach students how to use AI tools appropriately for things like research.

“We’ve been using AI for three years to figure out how to fingerprint writing style and tell when it’s changed,” he said. “With AI-generated content, educators have varying views on the appropriateness of using AI in assignments. Most are welcoming in most cases, but they want to understand how much of the work was from the student.

“We’ve been working on an application — a view in our standard product that about 2 million instructors use — that highlights for the first time the percent of the paper that’s written by AI,” he later added. “Right now, we’re seeing about 10 percent of papers with at least 20 percent AI authorship in the work, and about 5 percent that are basically completely written by AI. … Those numbers, if you look back to four weeks ago, have tripled.”

Brunskill said that while AI technology, and AI tutors in particular, could provide one-on-one support for students at scale in the years to come, students must still be taught that "productive struggle is part of what it means to learn." She added that if advances in AI eventually lead to massive layoffs across industries, the U.S. will need to focus more broadly on retraining workers with the skills necessary to find work in tomorrow's economy.

“We need to make sure students realize that when they use these [AI tools a lot], they’re losing an opportunity to gain the skills they will need for the future,” she said. “That also brings up a really important question for education going forward, which is, ‘What are the skills we’re going to stop teaching, and what are the skills we need to teach now in order to allow people to make use of these technologies in an effective way?’”

With AI tools expected to become ubiquitous across industries and job functions, Brunskill and Mollick said the question is not whether AI will fundamentally change the job market, but when.

“There’s not been a single person here who hasn’t said everything is different now,” Mollick said. “Everything just ended the way we thought it was — the nature of jobs just changed fundamentally, the nature of how we educate, the nature of how teachers and students relate to work — all of that has just changed. Even if there’s no advancement in AI past today, that’s already happened. … Anyone who is telling you they have all the answers — we don’t have them.”
Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.