
CU Boulder Community Raises Concerns Over ChatGPT Edu Deal

A dissent letter with more than 700 signatures questions the University of Colorado system’s partnership with OpenAI, sharing concerns over data privacy, academic integrity, student input and AI governance.

Students, faculty and staff at the University of Colorado Boulder are concerned about data privacy, educational integrity and governance practices following the University of Colorado (CU) system's agreement with OpenAI to bring ChatGPT Edu to all campuses, according to a student-authored “dissent letter” that has garnered hundreds of signatures.

In February, the CU system announced a deal with OpenAI to provide campuswide access to ChatGPT Edu, a version of the chatbot tailored for universities. An announcement from the university said the agreement, valued at roughly $2 million for the first year, is intended to help meet workforce needs and create prepared graduates, and requires users to log in using university-issued email addresses and complete a short training module before using the tool.

Aaron Gluck, a Ph.D. student at CU Boulder who is researching machine learning in educational contexts, helped write the open letter to university system affiliates and members of the public.

“I’ve been working with these types of tools for several years now, certainly at least five, and I am not necessarily against AI,” he said. “I think the issue is that you’re trying to introduce it in an environment where there's already lots of misuse and there isn’t a lot of education about how to use these tools properly, effectively and safely.”

According to the university's announcement, OpenAI is prohibited from using data generated within the CU environment to train its public large language models. CU retains ownership of identifiable student data and may audit individual usage in "isolated and limited cases."

However, the agreement does not name which CU system employees will have access to identifiable student data, or who may conduct such audits. Critics of the agreement say the language is vague enough to raise concerns.

Gluck said he also worries that de-identified data could still be used by OpenAI for research or product improvement.

“We are simply required to trust the words of a company whose current primary focus is obtaining more data,” says the letter, which had more than 700 signatures of support as of March 31.

The university did not immediately respond to a request for comment. However, Ann Stevens, provost and executive vice chancellor for academic affairs, issued a statement March 3 acknowledging the reactions of many students and staff to the OpenAI deal, explaining that CU's agreement follows university protocols for responsible use and substantially reduces data exposure risk compared to the status quo.

The dissent letter, meanwhile, calls on the university to adopt more explicit limits on data use and transparent auditing processes that allow faculty and students to verify how their chatbot interactions are handled.

Another issue raised by the dissent letter is representation. The OpenAI partnership was developed with input from the CU system’s AI working group, which includes the system chief information officer and chief procurement officer as well as representatives from each school. The dissent letter argues that the committee’s composition lacked representation from students, faculty and those with deep expertise in AI — a critique Stevens mentioned in her response.

“I want to acknowledge concerns that faculty, staff and students were not broadly consulted before this system-level contract for ChatGPT was finalized,” Stevens wrote, mentioning the importance of student input in decisions, though she did not suggest a specific plan to better include them in the future.

At the heart of the demands in the dissent letter are concerns about educational impacts and a desire for pedagogical policies, including AI literacy training materials beyond the initial brief training. Signatories argue that introducing ChatGPT Edu at scale without clear pedagogical policies could undermine academic integrity and critical thinking, and that students already use generative AI in ways that circumvent learning rather than support it.

In addition to students relying on AI to complete assignments, Gluck said he has seen instructors hand out assignments generated with AI’s help that are impossible to complete due to contradictions in the instructions.

The letter asks the university to fill gaps in guidance with AI literacy resources and ethical use standards.

“It’s important to understand what’s actually going on,” Gluck said.

Abby Sourwine is a staff writer for the Center for Digital Education. She has a bachelor's degree in journalism from the University of Oregon and worked in local news before joining the e.Republic team. She is currently located in San Diego, California.