By Dr Marzia A. Coltri | Philosopher, AI Ethicist, Counsellor
Some institutions are gaining attention as they integrate AI into higher education: their partnerships, tools, and campus-wide initiatives are presented as proof of commitment to a brighter future. However, despite these positive announcements, several universities—including those dedicated to widening participation—still struggle with a fundamental challenge: creating inclusive, ethical, and human-centred AI ecosystems.
This paradox raises deeper ethical questions.
While emerging frameworks like the UNESCO “AI for Teaching and Research” Guidelines (2024) encourage educational institutions to promote transparency, equity, and digital empowerment, many current approaches fall short. They focus on AI detection rather than direction and on control rather than growth. Students are warned and anxious, not guided. Educators are overwhelmed. And the AI literacy gap is growing.
AI Education: A Crisis of Trust, Not Just Tools
Student stress is intensifying, especially for learners from diverse cultural, linguistic, or non-traditional academic backgrounds. I have observed firsthand how intelligent, curious learners are silenced—not supported—by punitive environments where algorithmic “detectors” are trusted more than human context. This is not merely unethical. It is intellectually stifling.
What we need now is not more surveillance but pedagogical and andragogic change. We need AI ethics embedded not only in policy but also in praxis. And we need leaders—at every level and from all backgrounds—who understand that digital integrity must be anchored in psychological safety, critical thinking, and community.
This is where my work lives.
Building Ethical, Peer-Led Support Cultures
As a philosopher and counsellor deeply engaged in the intersection of AI, education, and well-being, I’ve been developing initiatives that move beyond compliance. These programmes train student leaders to serve as AI Integrity Mentors—bridging the gap between technology and human insight. They create spaces for honest dialogue, collaborative learning, and Socratic questioning.
These aren’t pilot schemes or ‘tech trends’. They are human strategies. And they work.
Grounded in evidence-based well-being models like WOOP (Wish, Outcome, Obstacle, Plan) and linked to international standards on responsible AI use, these initiatives empower students not just to “follow the rules” but to understand why the rules matter and how they can apply technology meaningfully in their academic and professional lives.

The Real Innovation? Inclusion.
Institutions often speak of innovation as though it were purely technological. But true innovation lies in the design of access, in building ethical AI literacy that recognises cultural context, in rethinking assessment strategies, and in listening to the silent—those who fear being accused rather than being understood.
Claude for Education, for instance, signals a new direction: AI tools that guide rather than answer, teach rather than tell. However, access to such tools must be equitable, and the pedagogy around them must be humane. Without ethical scaffolding, even the best AI tools risk becoming another layer of exclusion.
What Comes Next
At this critical juncture, AI education must not only automate but also illuminate. And that means bringing ethics to the forefront, not just in what we teach but also in how we lead.
I hope to continue collaborating with global educators, ethicists, policymakers, and students to build integrity models that scale across diverse educational landscapes. Integrating ethical AI frameworks into international curricula and aligning them with initiatives such as UNESCO’s Digital Learning and AI in Education is essential to co-creating inclusive learning environments.
Academic integrity should focus on nurturing meaningful learning rather than catching misconduct.
Are you interested in ethical leadership in AI, inclusive education, or collaboration across different disciplines?
References
- Anthropic. (2025). Introducing Claude for Education.
- UNESCO. (2024). AI and Education: A Guide for Teaching and Research.
- Gollwitzer, P. M., & Oettingen, G. (2021). WOOP: Mental contrasting with implementation intentions.
- Higher Education Policy Institute (HEPI) & Kortext. (2025). AI Student Survey Report.
- Harte, P., & Khaleel, F. (2024, October 30). AI and Academic Integrity: What Next? HE Professional.
- Pratschke, B. M. (2024). Assessing Learning. In Generative AI and Education: Digital Pedagogies, Teaching Innovation and Learning Design (pp. 91–108). Cham: Springer Nature Switzerland.