But alongside this promise come growing complexity and risk. Institutions must navigate concerns about academic integrity, data privacy and the ethical use of AI. Much of the technology being considered comes from third-party vendors, making robust procurement and vendor risk management essential from the outset.
For chief information officers, the priority is clear: distinguish between practical, impactful applications and those driven by hype. The goal is to adopt AI that enhances teaching, learning and operational efficiency without compromising academic standards.
DEFINING RESPONSIBLE AI USE IN HIGHER EDUCATION
Before assessing what's practical versus aspirational, CIOs must first ground their strategy in a clear understanding of responsible AI frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, while keeping an eye on emerging federal and state regulation such as the Colorado AI Act. This means defining a strategic approach to acquiring and deploying third-party AI tools that upholds institutional values, safeguards sensitive information and maintains academic standards.
Key principles include:
- Data privacy and enterprise access: Institutions should prioritize purchasing AI platforms on enterprise terms to ensure student and faculty data is protected under strict privacy and security protocols. Enterprise terms generally include commitments not to use data to train AI models.
- Security and data privacy standards: Ensure all tools comply with data privacy laws such as the Family Educational Rights and Privacy Act (FERPA) and meet recognized security and privacy standards such as ISO 27001, ISO 27018 and ISO 27701.
- Build on existing vendor risk management: Conduct AI due diligence on vendors and their AI tools. Vendors should clearly explain how they manage ethical AI risks.
ALIGN AI ADOPTION WITH INSTITUTIONAL RISK APPETITE
Each institution has a unique risk profile and appetite for change. CIOs must evaluate AI opportunities within the context of their institution's risk tolerances, technological capabilities and strategic goals.
Key considerations include:
- Risk appetite: Institutional leadership should define the risk appetite. Does the institution want to be at the forefront of generative AI, or should it adopt generative AI more cautiously?
- Privacy, accuracy and security: Innovation must be balanced with caution, particularly when AI tools handle sensitive student data or critical academic functions. Vendors should commit to not using institutional data to train AI models.
- Control of AI tools: Assess whether your institution controls the third-party AI tools it deploys. Can you enable and disable generative AI functionality? Ensure vendors provide sufficient information and support for your institution to make these adoption decisions.
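The control question above can be made concrete with a feature-flag policy on the institution's side. The sketch below is a minimal, hypothetical example (the class and feature names are illustrative, not a vendor API): AI features are off by default and must be explicitly enabled after due diligence, and can be disabled again at any time.

```python
from dataclasses import dataclass, field


@dataclass
class AIFeaturePolicy:
    """Hypothetical institutional policy for third-party AI features.

    Features are disabled by default; each must be explicitly enabled
    after vendor due diligence, and can be revoked at any time."""
    enabled_features: set[str] = field(default_factory=set)

    def enable(self, feature: str) -> None:
        self.enabled_features.add(feature)

    def disable(self, feature: str) -> None:
        self.enabled_features.discard(feature)

    def is_allowed(self, feature: str) -> bool:
        return feature in self.enabled_features


policy = AIFeaturePolicy()
policy.enable("alt_text_generation")
print(policy.is_allowed("alt_text_generation"))  # True
print(policy.is_allowed("auto_grading"))         # False (off by default)
```

In practice this policy check would sit between the learning platform and the vendor's AI endpoints, so that disabling a feature takes effect institution-wide without waiting on the vendor.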
SET REALISTIC PRIORITIES: FOCUS ON HIGH-IMPACT, LOW-RISK USE CASES
Adoption requires trust. Start with low-risk, high-value functions that leverage generative AI's strengths while minimizing downsides such as inaccuracies or hallucinations. This allows institutions to build trust and implement AI intentionally and incrementally.
Examples include:
- Alt text generation: Tools that use AI to draft descriptive alt text for images, enhancing accessibility and saving faculty time.
- Routine task automation: Tools that streamline routine work for instructors and staff, such as drafting quiz questions, creating rubrics or scheduling classes.
- Faculty support tools: Offer AI-assisted tools to reduce faculty workload and improve resource accessibility.
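A common thread in these low-risk use cases is keeping a human in the loop: the AI drafts, a person approves. The sketch below illustrates that pattern for alt text generation, under stated assumptions; the model is passed in as a plain function, so any vendor call (or a stand-in, as here) can be used, and nothing is published until a reviewer approves or edits the draft.

```python
from dataclasses import dataclass


@dataclass
class AltTextDraft:
    """AI-drafted alt text that stays unapproved until a human reviews it."""
    image_id: str
    draft: str
    approved: bool = False


def draft_alt_text(image_id: str, generate) -> AltTextDraft:
    # `generate` stands in for a vendor model call; the result is a
    # draft only, never published directly.
    return AltTextDraft(image_id=image_id, draft=generate(image_id))


def approve(d: AltTextDraft, reviewer_edit: str = "") -> AltTextDraft:
    # The reviewer may accept the draft as-is or replace it.
    if reviewer_edit:
        d.draft = reviewer_edit
    d.approved = True
    return d


# Stand-in for a real vendor call:
fake_model = lambda img: f"An image ({img}) awaiting description."
d = draft_alt_text("fig1.png", fake_model)
assert not d.approved  # drafts start unapproved
d = approve(d, "Bar chart of enrollment by semester.")
```

The design choice worth noting is that the review step is structural, not optional: a draft cannot reach the published state without passing through `approve`.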
MEASURE PROGRESS AND FOSTER COLLABORATION
AI implementation should not be a top-down initiative. Success depends on inclusive collaboration across departments, with clear performance metrics and regular opportunities for feedback.
- Monitor and adapt: Continuously assess AI's impact and make iterative improvements based on measurable outcomes.
- Engage stakeholders: Involve faculty, students, administrators and IT teams to ensure tools meet the needs of all users.
- AI literacy: Provide stakeholders with training on how to use AI tools and the responsible use of AI. Role-based training can strengthen AI literacy for those in the most critical roles.
- Build institutional buy-in: Foster a shared understanding of AI’s role and benefits to encourage broader acceptance and usage.
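The "monitor and adapt" step above needs measurable outcomes to act on. A minimal sketch, assuming two simple signals (per-tool usage counts and 1-to-5 feedback ratings; the class and metric names are illustrative):

```python
from collections import defaultdict
from statistics import mean


class AIAdoptionMetrics:
    """Hypothetical tracker for per-tool usage and stakeholder feedback."""

    def __init__(self):
        self.usage = defaultdict(int)      # tool -> number of sessions
        self.feedback = defaultdict(list)  # tool -> list of 1-5 ratings

    def record_use(self, tool: str) -> None:
        self.usage[tool] += 1

    def record_feedback(self, tool: str, rating: int) -> None:
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.feedback[tool].append(rating)

    def report(self, tool: str) -> dict:
        # Summarize one tool; avg_rating is None until feedback arrives.
        ratings = self.feedback[tool]
        return {
            "sessions": self.usage[tool],
            "avg_rating": mean(ratings) if ratings else None,
        }


m = AIAdoptionMetrics()
m.record_use("alt_text")
m.record_feedback("alt_text", 4)
m.record_feedback("alt_text", 5)
print(m.report("alt_text"))  # {'sessions': 1, 'avg_rating': 4.5}
```

Even a tracker this simple supports the iterative loop: low usage or falling ratings for a tool are a signal to retrain users, renegotiate with the vendor or disable the feature.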
RESPONSIBLE AI AS A COMPLEMENT, NOT A THREAT
As generative AI matures, CIOs must balance vision with pragmatism, navigating technological advances, regulatory developments and institutional constraints. By prioritizing responsible use, focusing on low-risk, high-impact use cases and engaging the broader academic community, CIOs can ensure that AI enhances, rather than replaces, traditional educational methods.
Done properly, AI adoption will not only support the educational mission but also future-proof institutions for the evolving demands of modern learners. To realize this potential, CIOs must champion a shared vision for AI grounded in ethics, practicality and collaboration.
Stephan Geering is the compliance, trustworthy AI, and privacy officer at the education software company Anthology.