
New Jersey Colleges 'Learning as They Go Along' with ChatGPT

Fears about misinformation, AI hallucinations, cheating and the long-term undermining of education persist, but college students and professors are finding AI useful for administrative tasks and tutoring.

(TNS) — When Herrin Fontenette, a Rutgers sophomore from Jersey City, casually asked the ChatGPT online tool to name the first Black Rutgers graduate, it answered Paul Robeson.

She knew that was way off: James Dickson Carr earned his degree 27 years before Robeson, the actor and global activist.

Fontenette worries that artificial intelligence tools like ChatGPT will spread misinformation and rob students of a personal connection to their research.

Since the tool launched in late November, New Jersey colleges and universities have debated its applications and accuracy while finding ways to regulate student use, turn it into a teaching tool, and even benefit from it on an institutional level.

“We’re still learning it as we go along,” said Marybeth Boger, vice president for student affairs and dean of students at New Jersey Institute of Technology. “Part of the university is getting it, some of our professors are integrating AI into their coursework, and another contingency is saying, ‘Oh my gosh, it’s another way to cheat.’ What do you do? That’s the challenge, finding the happy medium.”

ChatGPT, created by the AI research company OpenAI, is one of many AI content generators that anyone with a free (for now) account can interview and assign writing tasks to. Businesses use various forms of AI for communications, writing code, and creating graphics. Students use it to brainstorm ideas, find and summarize articles, express thoughts, or pick the best Squishmallow stuffed animal. The AI market, estimated at $100 billion globally now, is expected to reach $2 trillion by 2030.

Artificial intelligence is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings,” according to the Encyclopedia Britannica, which may have been printed on paper and sold door-to-door, but beats Wikipedia for accuracy.

Think of ChatGPT — for “Generative Pre-trained Transformer” — as the most prolific student in class, the one who helps kids cheat but who may or may not have done all the online reading assignments, and certainly not anything more recent than 2021, its cutoff. The one who, for now, has a very flat, predictable sentence structure and, according to Montclair State University guidance, a “tendency to hallucinate.”

A March survey by BestColleges, a college ranking and news site, found that just over half of college students believed using artificial intelligence tools for schoolwork was cheating, while 20 percent disagreed. Almost a third said their schools prohibited using artificial intelligence tools for academic work.

While no U.S. college or university has banned the use of ChatGPT so far, Clifton public schools have, joining big city districts like New York, Los Angeles, Seattle, Milwaukee, Oakland and Anchorage.

Bian Cabello, 22, a senior at New Jersey Institute of Technology, sits on the school’s academic integrity committee, where AI has not yet been an issue. Still, he sees how its use could cause controversy.

“I’m a firm believer that you work for your grades, and you do it honestly, so you actually understand what you’re doing, so it does not become a problem in the future,” he said. “It inhibits the development of a student’s ability to communicate if you’re relying on a machine model to do it.”

Cabello, a chemical engineering major from Paterson, said he had used ChatGPT recently, with the blessing of his professor, for a project on how the 1984 Bhopal chemical explosion in India could have been prevented. The chat didn’t give him new ideas, he said, but quickly gave him deeper, helpful information. He also learned to ask one question at a time to get clearer answers.

At NJIT, which is launching two graduate programs in AI, Boger said ChatGPT reminds her of Chegg, an education technology company that provides answers to engineering problems, among other subjects. But it is harder to tell when a student has used ChatGPT, she said. Only professors who know their students’ work extremely well can determine if there’s been a sudden, inexplicable rise in its quality.

“The professors are feeling outrage, and they want our office to adjudicate it,” she said, “but if you don’t have evidence, what do you do? You can’t say that because they went from a C to an A that they cheated.”

She worries more about what students understand than how they reach that understanding.

“I’m concerned about them not learning, passing everything and having a degree in hand, and they have no skill,” she said. “I’m more concerned about that than trying to catch them for cheating.”

Her advice for a hypothetical world where engineers haven’t really learned engineering thanks to AI?

“Don’t drive over the next bridge.”

It has always been hard to determine if student work was purely original, as it is often shaped by professors’ and peers’ advice, spelling and grammar checkers, and plagiarism detectors. With the arrival of AI, experts say it will be quite difficult to tell where the human brain ends and the machine begins.

As Jordan Suchow, an information systems professor at the School of Business at Stevens Institute of Technology, puts it, “You have an end product that neither you nor an AI model could’ve done alone, and no sentence or phrase in the final version can be solely attributed either to you or to AI, where you can’t unscramble the egg.”

Some locally grown help may be in sight. Earlier this month, Edward Tian, a 22-year-old Princeton student, and his team raised $3.5 million in seed money for GPTZero, an AI detection tool he developed over winter break. It already boasts 1.2 million users and is planning a browser plug-in to detect the origin of facts on websites. He did not return requests for comment.

Emily Isaacs, a writing professor and head of Montclair State University’s Office for Faculty Excellence, said she could tell AI writing by its voice.

“I think there’s a predictability and boringness to it,” she said. “I don’t think it’s as good yet.”

But Isaacs suggested that academia’s recent insistence on elegant writing may one day seem as quaint as the olden-days insistence on legible penmanship for scholarly success.

“Someday, maybe, we’ll say we can’t believe all these smart people never got into graduate school because they wrote in accented English,” she said. “Hopefully, someday, that will seem insane.”

Other professors have recognized that artificial intelligence can be “revolutionary” in helping first-generation or low-income college students sound more academic when they need to.

Viktoria Popova, Director of Institutional Research and Assessment at Centenary University in Hackettstown, has another equity-related concern. While the 3.0 version of ChatGPT is free, the next, more powerful version may not be.

“Are all of our students going to have equal access to the tools we’re using, or are some using less powerful tools?” she asked. “We cannot create additional barriers for our students in our excitement for technology.”

For now, New Jersey colleges and universities are developing policies about student use of artificial intelligence and mulling over how to assess student progress in a chatbot-proof, cheat-proof way.

Montclair State University has a strict suggestion for professors to put on their course materials: “Use of artificial intelligence to produce or help produce content, when an assignment does not explicitly call or allow for it, without proper attribution or authorization, is plagiarism.”

In its guidance, Princeton University noted that AI would make higher education even more essential, creating a need for “the nuanced and sophisticated ways of thinking” taught in universities. It suggested two syllabus statements, one prohibiting engaging in unauthorized collaboration with AI software, and one requiring professor permission before doing so, which concludes, “Using these tools without my permission puts your academic integrity at risk.”

The guidance predicted that Princeton faculty “can manage with grace and aplomb,” a phrase that displays far more fluency and subtlety than most AI could muster.

Rutgers urges its professors to experiment with AI tools before restricting or incorporating their use in class, and it asks instructors to create assignments that require critical thinking, connect concepts to personal experiences a computer wouldn’t know about, discuss readings done in class, and make innovative connections. Professors choose among forbidding its use in any student writing, allowing it for brainstorming, or allowing it at will; in the last case, students must footnote their use of the tool and explain their work.

Boger, at NJIT, advocates for cooperative programs, where students combine study with real-world work. “We need more intentional co-ops, internships, and hands-on type learning, that may inspire them to want to learn the material for real,” she said.

Monica Devanas, director of faculty development and assessment programs at the Rutgers Office of Teaching Evaluation and Assessment Research, has heard of teachers generating essays via ChatGPT and critiquing them with students, so they can see how the writing can be superficial, redundant, and flat.

Her colleague, Chris Drue, associate director for teaching evaluation, encourages instructors to have students give presentations about their work and how it was produced, while some professors give in-class final exams to prevent reliance on bots.

The two said the faculty hasn’t rebelled against the extra work required in adjusting to AI, in part because it has proven helpful in some administrative tasks, in coming up with questions for readings, and in managing large amounts of data.

“It’s an amazing tool to use for themselves,” Drue said, comparing AI to a teaching assistant.

University administrators are also beginning to use AI on the school-wide level. Centenary consulted ChatGPT-4.0 as it goes through re-accreditation to see how well the university is meeting its goals. The tool suggested the employee attendance policy could be more explicit about accommodations for people with disabilities, lower socioeconomic status, or limited access to the Internet for reporting absences. Popova said the tool could shave years off some assessments, with humans making the final decisions.

But bad news is brewing for one class of educators, who may be going the way of the BlackBerry phone: the tutor. A survey released Monday of more than 3,000 high school and college students found that, among those who had used both tutors and ChatGPT, nearly all had replaced some tutoring sessions with the chatbot, 90 percent preferred studying with the tool over a tutor, and 95 percent said their grades went up since studying with ChatGPT.

When asked how an article about ChatGPT use in New Jersey universities should conclude, the tool replied:

“As students and educators alike adapt to the ever-changing landscape of higher education, it’s important to remain open-minded and willing to embrace new technologies and approaches. By doing so, we can create a more dynamic and engaging learning environment that fosters creativity, critical thinking, and lifelong learning.”

Or one that fosters easy, inaccurate answers. Or one that has to work overtime to stay human and creative…

Isaacs from Montclair State said it would be a while before higher education fully absorbs the impact of tools like ChatGPT in the classroom.

“Is this going to be like Wikipedia, just an evolution of something we’re used to,” she said, “or radically disruptive as it turns out social media was?”

©2023 Advance Local Media LLC. Distributed by Tribune Content Agency, LLC.