How I Learned to Stop Worrying and Love AI

Education is poised for a new chapter as generative AI is introduced in classrooms, and while that comes with a healthy amount of concern, it also offers new possibilities that we're only just beginning to uncover.

Nearly a year after the public launch of ChatGPT, the U.S. education system is navigating another technological upheaval. But unlike the last one — three years ago, when COVID-19 necessitated remote learning — this time the innovation preceded the necessity. I think that makes it an exciting time for K-12 and higher education, because the upheaval is one of possibility: Generative artificial intelligence is relatively user-friendly and untethered to any specific application. It’s not defined by any one problem or use case, leaving educators and students to discover its uses for themselves. These are the unregulated Wild West days of AI, when we’re still figuring it out, so it’s a time to contemplate possibilities.

One of the most exciting things to me about large language models is that they’ve nearly broken the language barrier between humans and computers. Users no longer need technical acumen to wield computers for complex tasks, and some computers can now communicate with users better than some users can communicate with one another. Generative AI is almost like an API between people and software, one that will, especially as it improves, make it increasingly easy for the two to interface and collaborate. In the next five to 10 years, I expect generative AIs to improve quickly and become more portable and accessible, at least in the sense of appearing on more of the platforms intertwined with our lives. It may become increasingly easy to forget — and therefore critical to remember and teach people — that AI is not an entity but a tool, and the user is ultimately responsible for what they do with it.

Generative AIs are already capable of doing most of the menial mental labor we ask of them, like writing our homework or emails. Once they can recognize specific voices accurately enough, and tech companies write the necessary APIs, AI tools may become voice assistants embedded in phones and watches that we can verbally instruct to do anything we now do on a computer — file taxes, book a hotel, move money from a bank account, browse the New York Times. If history is any guide, this change will be significant but gradual and largely overlooked, the way we think nothing of Google and FaceTime today.

What will this mean for education? We’ve already begun the yearslong process of finding out. Teachers are using generative AI to create slideshow presentations, amend their lesson plans and brainstorm prompts to fuel classroom discussions. Students are using it to come up with project ideas, hone their writing skills and study for tests. Niche use cases in higher education, such as marketing professors using chatbots to coach aspiring sales professionals, are proliferating by the week.

Of course, these revolutions always have a flip side. I share educators’ common concerns about cheating, and even more so the concerns about what could happen to our information ecosystem as the cost of generating persuasive falsehoods drops to virtually zero. What happens when anyone, anywhere, can produce 1,000 professional-sounding pseudoscientific studies with 100 bogus sources, or convincing phishing emails, or financial scams targeting seniors, every day? What happens when it takes no money or expertise to create deepfakes at such velocity and quality that video and photographic evidence are no longer admissible in court because they’re impossible to verify? Could that necessitate a whole new mode of gatekeeping, and what would it do to already-fraying public trust in institutions? Eventually AI may be able to “generate” designs for previously unachievable weapons, or solutions to complex scientific problems. One has to imagine an AI becoming as good at military strategy as modern computers are at chess. These are serious problems for society to navigate, but so are cybersecurity and accessibility, and I think they’re surmountable.

Part of discovering what generative AI can do will be discovering what it can’t. Even the most advanced technologies tend not to be as limitless in practice as they first appear in theory, as they bump up against the infinite complexity and confounding variables of reality. In the case of generative AI, for instance, no matter how accurate or competent it becomes at synthesizing text or imagery, it has no senses or experience of the world and is therefore incapable of subjectivity, which rules it out as a replacement for creative writers and artists. I say this having not only seen the paintings and poems generated by AI tools, but having been doubly convinced by them that something human is missing. But as a tool, generative AI will revolutionize those industries nonetheless, and students who know how to wield it will have a leg up on those who don’t. So it will be the job of educators to help prepare them for that world.

I don’t envy educators their task of trying to see around this corner. While it’s true that we have historically adapted to new technology and will continue to do so, generative AI carries the potential for exponential change, especially if it becomes so good at coding that it can design its own successor. In that scenario, all bets are off, although I’m skeptical for the aforementioned reasons. In any case, as in the early days of the Internet, the biggest changes to the status quo are still ahead of us. It’s not too early to imagine what’s possible and not too late to avert disaster. The next Steve Jobs, Jeff Bezos and Mark Zuckerberg are sitting in classrooms somewhere right now. I hope they’re learning something about responsibility.

This article originally appeared in the September issue of Government Technology magazine.
Andrew Westrope is managing editor of the Center for Digital Education. Before that, he was a staff writer for Government Technology, and previously was a reporter and editor at community newspapers. He has a bachelor’s degree in physiology from Michigan State University and lives in Northern California.