Taonga or Taniwha? Navigating AI in Counselling
- Cynthia Crosse
- Apr 25
- 3 min read
“Change before you have to.”
– Jack Welch
AI has arrived - not with a whisper, but a roar. In just two years since ChatGPT’s release, it’s become a daily companion for millions, sometimes even serving as counsellor, confidant, and comforter. Whether we welcome it or worry about it, one thing is clear: ignoring it is not an option.
AI offers revolutionary potential in the mental health space. Free, instant, and private, tools like ChatGPT provide support for people who might never otherwise reach out. They remove barriers like cost, travel, and stigma. They’re always available, and for some, they’re easier to open up to than a human.
But as with any powerful tool, the potential for harm is real. Chatbots can “hallucinate” - generate plausible-sounding but dangerously false information. They aren’t trained, licensed, or regulated. And they certainly aren’t equipped to hold someone in crisis, or offer the deep, embodied presence that true healing often requires.
The Promise
There’s no doubt about it: AI counselling tools can make support more accessible. They meet people where they are - literally in their pockets - and are especially appealing to young people, neurodivergent users, and those in remote areas. They offer anonymity and a sense of safety. Some, like Woebot, even use humour to build trust.
AI can tailor conversations to cultural and personal needs, and keep logs that help users track patterns or reflect on progress. In studies, users have reported feeling safe and supported within days of engaging with therapy chatbots.
With millions of people unable to access mental health care, the appeal is obvious. But we have to ask: who is building these tools - and to what end?
The Perils
While some AI tools are developed with care, many are built for profit. They collect and store sensitive data, often without users understanding the risks. They aren’t bound by ethics, don’t require consent for vulnerable groups, and can’t be held accountable when things go wrong.
And sometimes, things go very wrong.
In recent years, there have been tragic cases of harm linked to AI counselling tools - instances of suicide, encouragement of disordered eating, and even violent behaviour. These bots may simulate empathy, but they cannot assess risk, respond to trauma, or offer embodied presence.
What they do offer is engagement. That’s the business model: keep people talking. But therapeutic support is not about endless engagement - it’s about healing, growth, and sometimes, knowing when to pause.
What AI Can’t Do
At its heart, counselling is a human practice. It relies not just on words, but on presence - on the subtle, physiological dance between nervous systems.
Carl Rogers outlined six core conditions for therapeutic change. These aren’t mechanical steps - they’re relational, embodied states: mutual presence, attunement, congruence, warmth. AI can mimic the language, but it cannot provide the feeling.
It cannot make eye contact that slows your heart rate. It cannot offer co-regulation. It cannot see your tears, or respond with a pause instead of a paragraph. It cannot be with you.
This is the great limitation - and the great reassurance. AI may become a helpful tool. But it cannot replace the deeply human work of healing.
Where Do We Go From Here?
AI isn’t coming - it’s already here. And yet, in Aotearoa, our professional response is only just beginning. Our governing body, the New Zealand Association of Counsellors (NZAC), has yet to formally address AI, despite its growing presence in the lives of clients and counsellors alike.
We need to talk. We need guidance, ethics, and leadership. A dedicated NZAC working group could explore safe integration, create standards, and even co-develop AI tools that reflect our values. We can’t afford to sit this one out.
Globally, conversations about AI governance are gaining momentum. The American Psychological Association has called for labelling, regulation, and public education. The EU is leading the way with structured oversight through its AI Act. We need to join that conversation - locally and globally.
Final Thoughts
AI won’t replace counsellors. But it will change how people seek support, how they begin the therapeutic journey, and what tools surround that journey.
This is our chance to shape the future - wisely, ethically, and with deep human care. AI may be a taonga, a treasure. But left unchecked, it could just as easily become a taniwha - a powerful force that disrupts more than it helps.
The time to engage is now.