AI Ethics and Digital Literacy: Compliance with the EU AI Act

Organisations must ensure AI literacy among staff and stakeholders, in line with Article 4 of the EU AI Act. The Act also prohibits certain practices outright: AI systems that manipulate people through deceptive or subliminal techniques, that exploit people's vulnerabilities, that infer emotions in workplaces or schools, or that build facial recognition databases by scraping online photos or CCTV footage. Companies that breach these prohibitions face fines of up to €35 million or 7% of their worldwide annual turnover, whichever is higher.
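The "whichever is higher" rule means the effective ceiling scales with company size. A minimal Python sketch of that arithmetic (the function name and the 7-figure turnover example are illustrative, not from the Act):

```python
def max_fine_eur(worldwide_annual_turnover_eur: int) -> float:
    """Ceiling for prohibited-practice fines under the EU AI Act:
    the higher of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, worldwide_annual_turnover_eur * 7 / 100)

# For a company with EUR 1 billion turnover, 7% exceeds the flat cap:
max_fine_eur(1_000_000_000)  # → 70000000.0

# For a smaller company, the EUR 35 million floor applies:
max_fine_eur(100_000_000)    # → 35000000
```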

The Ethical Foundations of AI Governance

AI governance must balance utility and duty. Jeremy Bentham’s utilitarianism suggests that AI systems should maximise overall well-being, while Immanuel Kant’s deontological ethics emphasise the need to respect human dignity and autonomy. The challenge is ensuring that AI-driven decisions benefit the collective good without reinforcing hierarchical control that benefits elites at the expense of others. The ethical imperative is clear: benevolence in AI governance fosters inclusivity, whereas malevolence perpetuates bias and power imbalances.

AI Literacy in Education: A Concrete Need

Universities, vocational schools, further education, secondary and primary schools, and online academies must implement AI governance strategies by 2 August 2025. AI literacy programmes should help educators and students understand AI’s ethical implications, risks and opportunities. Failure to comply will lead to significant penalties, making it crucial for institutions to act now.

Lucy, Italy’s First Experimental AI School

Lucy, "La prima scuola Sperimentale di Intelligenza Artificiale in Italia" (Italy's first experimental AI school), is an exemplary application of AI-inclusive education. A collaboration between Ammagamma and IC3 Mattarella of Modena, the programme embraces a critical, multidisciplinary approach to AI. Its motto, "De Arte Intelligendi…Educare a pensare" (The Art of Understanding…Educating to Think), reinforces its mission to integrate AI into a broader educational framework. Beyond coding and digital technologies, Lucy's curriculum combines AI with algebra, statistics, logic, philosophy, and imagination, an approach grounded in ethical, responsible, and innovative AI literacy (Ammagamma, 2020).

Lucy's project is in line with AI literacy research, which highlights the need for critical and ethical engagement with AI systems, not just technical competence. The programme emphasises experiential learning, helping students and educators analyse AI's ethical dimensions through real-world applications. This model demonstrates how AI literacy can be incorporated into mainstream education, ensuring students develop the skills to navigate AI's complexities responsibly.

Next Steps

Institutions and organisations must:

  1. Conduct AI literacy training or continuous professional development (CPD) for staff and stakeholders.
  2. Ensure ethical AI deployment by eliminating prohibited practices.
  3. Develop AI risk assessments to classify AI systems according to EU AI Act guidelines.
  4. Prepare for the August 2025 enforcement deadline, when full penalties and compliance checks take effect.
  5. Embed philosophy and ethics programmes in curricula.

The EU AI Act is a pivotal moment for responsible AI adoption. As enforcement deadlines approach, institutions and businesses must proactively ensure compliance. The future of AI ethics and digital literacy depends on today’s governance decisions.

A Risk-Based Approach to AI Regulation

  1. Minimal Risk (Green): AI in video games, spam filters
  2. Limited Risk (Yellow): Chatbots, recommendation systems
  3. High Risk (Orange): AI in healthcare, autonomous driving and hiring processes
  4. Unacceptable Risk (Red): AI for social scoring, real-time biometric surveillance
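The four tiers above can be sketched as a simple data structure. The example use cases and the `requires_action` helper below are illustrative only; real classification requires assessing a system against the Act's own criteria (such as the high-risk categories in Annex III), not a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from lowest to highest."""
    MINIMAL = "minimal"            # e.g. AI in video games, spam filters
    LIMITED = "limited"            # e.g. chatbots, recommendation systems
    HIGH = "high"                  # e.g. AI in healthcare, hiring, driving
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, biometric surveillance

# Illustrative mapping of example use cases to tiers, following the list above.
EXAMPLE_TIERS = {
    "spam filter": RiskTier.MINIMAL,
    "recommendation system": RiskTier.LIMITED,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening tool": RiskTier.HIGH,
    "medical diagnosis aid": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def requires_action(use_case: str) -> bool:
    """High-risk systems need conformity assessments; unacceptable ones are banned.
    Unknown use cases default to LIMITED here, an assumption for this sketch."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.LIMITED)
    return tier in (RiskTier.HIGH, RiskTier.UNACCEPTABLE)
```

For a risk-assessment exercise (step 3 of the next steps above), a table like `EXAMPLE_TIERS` is a useful starting inventory, with each entry then checked against the Act's actual criteria.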
