![](https://marziacoltri.com/wp-content/uploads/2024/04/Depicting_a_balance_between_AI_technology_and_et-1.jpg)
![](https://marziacoltri.com/wp-content/uploads/2024/04/a-futuristic-scene-of-a-therapist-s-office-where-a-caUI1tmQSWe8I1kHFbSrzg-uMBjpYKYS4qdCBOGVwq2Uw.jpeg)
![](https://marziacoltri.com/wp-content/uploads/2024/04/Blu_and_green_Icon_depicting_the_principles_of_t.jpg)
These futuristic scenes depict an AI robot in a therapeutic environment, helping people overcome mental health challenges.
The second image shows the therapist’s room, where a human and an AI robot are having a one-to-one talk. A person sits in a comfortable chair, while the AI robot, equipped with a calming holographic interface, sits at the table. A warm environment surrounds them, complete with plants and soft lighting. The scene represents the relationship between the two as they navigate the complexity of human emotions.
How can this relationship be real, congruent and accountable?
Matthias Barker, a specialist in complex trauma and child abuse, says: “People come to us for human connection, for wisdom and the presence of a trustworthy companion to guide them through life’s troubles. What does a robot know about the challenges of being human?”
There is an ethical dilemma for AI in counselling and psychotherapy. Transparency, accountability, fairness and inclusion are the core principles therapists must uphold under the BACP and NCPS codes of ethics. Many therapists use and discuss ChatGPT in therapy, but it is also morally important to analyse what is good or bad about using AI in our therapy sessions, grounded in trust and balance.
For example, asking ChatGPT how to respond to my client’s challenges and how to handle them is unethical; there are also various free chatbots offering treatments such as CBT and DBT. But what about AI chatbots that mislead and misinform vulnerable people into extreme beliefs and behaviours? Could this function similarly to Schopenhauer’s Die Welt or the Veil of Maya, creating a greater vacuum and allowing people to escape from reality into an unreal world?
The OECD AI Principles promote innovative and trustworthy AI that respects human rights and democratic values.
These are the four key principles:
- Inclusive Growth, Sustainable Development and Well-Being: AI technologies should benefit everyone, fostering creativity in equitable ways.
- Human-centred Values and Fairness: Across all stages of AI in business, healthcare and education, democratic values and human rights—such as freedom, dignity, autonomy, privacy, non-discrimination, equality, diversity and social justice—are crucial.
- Transparency and Explainability: Responsible disclosure of information about AI is essential.
- Robustness, Security and Safety: Providing secure tools and technologies that support safe therapy.
These principles open new conversations about AI ethics in counselling.
As counsellors, we are interested in human connection and shared values with our clients, who are looking for a safe place, belonging and well-being.
Is ChatGPT replacing the value of being a counsellor?
AI’s presence in therapy has been debated, with some acknowledging its potential but asserting it won’t replace human therapists due to the complexity of emotional and psychological dynamics.
Let’s explore some challenges:
- Lack of Human Connection: AI doesn’t have the humanity and understanding that a real person can give. The therapeutic relationship is built on trust and an emotional bond.
- Bias and Fairness: When AI systems learn from past data, they can reinforce any biases present in that data. If AI isn’t carefully designed, it could unintentionally generate assumptions about minority communities.
- Privacy Concerns: Giving confidential information to AI systems risks breaching privacy. To maintain trust, it’s important to ensure that information is kept safe and confidential.
- Misdiagnosis and Errors: AI systems could misunderstand symptoms or give wrong evaluations. Without human oversight, AI alone might miss opportunities for early intervention.
- Ethical Dilemmas: AI systems might not always make decisions that are ethical or culturally right. It could be challenging to find a balance between freedom, beneficence and non-maleficence.
- Dependency: People might be less likely to seek help from other humans if they rely too much on AI.
While AI has the potential to improve mental health care, attentive use and ongoing review are needed to reduce risks and ensure the ethical use of AI in therapy today.