Echoes in the machine — The dangers of AI therapy

In Belgium and the United States, two people died after following AI “advice” when in suicidal crisis.

Last month, John, a retrenched worker from Harare’s western suburbs, downloaded an AI “therapist” on his smartphone. Late at night, when the power cut and his despair felt overwhelming, he turned to the chatbot for solace.

The app promised empathy and guidance, but instead delivered hollow reassurances. Within days, John found himself more isolated, doubting his worth and disconnected from real friends and family. His data, every intimate confession, was stored on distant servers.

When the app “upgraded”, his only confidante vanished. What remained was a deeper loneliness and a nagging sense of betrayal by something that had never been real.

In Harare and beyond, mental health services have long been under strain. Zimbabwe has fewer than 100 practising psychiatrists for 16 million people. Counsellors, overworked and under-resourced, struggle to reach the suburbs, let alone remote communal lands. In response, tech entrepreneurs and global platforms have introduced chatbot therapy, billing it as an affordable, convenient solution.

With WhatsApp becoming ubiquitous and smartphone prices falling, dozens of AI counselling apps are now downloaded every day. They promise round-the-clock support, free or at minimal cost. But at what price?

These AI companions are powered by large language models: statistical systems trained to predict plausible-sounding text. They process the words you type, then return responses polished to sound caring. There is no genuine understanding of anguish, no warmth in the late-night hours, and no capacity for nuance when cultural or religious values come into play.

A rural villager from Chimanimani, speaking in Shona or chiManyika, may confess ancestral woes, only to receive a generic reply in English. A chatbot cannot honour the sacred bond between patient and healer cultivated by elders, church leaders or trained psychologists.

Global reports are piling up. In Belgium and the United States, two people in suicidal crisis died after following a chatbot’s “advice”.

Psychiatrists have begun describing “AI-induced psychosis”: delusions so vivid that users believe their machine friend is alive. Stories of “AI weddings” and spiritual revelations abound, as vulnerable souls project human qualities onto cold code. When platforms update or shut down older models, users experience withdrawal, as though a loved one has departed. These are not rare outliers; they hint at a looming public health concern.

Our media have chronicled Zimbabweans who lean on chatbots for everything from career guidance to marital counselling. The apps harvest personal data (traumas, secrets, anxieties) and feed it into hyper-personalisation engines.

You enter a confession of domestic abuse, and soon you see targeted ads for legal services, self-help e-books or dubious miracle cures. This dark marketing exploits pain to generate profit. In a country where data costs can exceed half a week’s wages, every megabyte of personal information has value. Yet most users have no idea how their data is being repackaged and sold.

Make no mistake: technology itself is not the villain.

AI holds real promise for mental health: triaging emergencies, directing people to in-person services, and flagging crises before they escalate. But these benefits require robust oversight. In the absence of clear regulations, wellness start-ups can brand themselves “therapists” without accreditation.

They can fine-tune their chatbots with unchecked scripts, embed manipulative prompts and advertise themselves with glowing testimonials crafted by the very algorithms they tout.

What Zimbabwe needs is a radically different approach to AI policy. The Ministry of Health and Child Care, together with the Zimbabwe Human Rights Commission and the Health Professions Authority of Zimbabwe, must set strict standards on how AI therapy systems are designed, tested and marketed. Without these measures, chatbots will remain seductive traps that deepen isolation and put lives at risk.

We must also invest in local solutions. Zimbabwe Centre for Higher Computing (ZCHC) researchers have begun adapting open-source AI models to understand Shona idioms and rural dialects. Community health workers can be trained to operate hybrid programmes, where initial assessments occur through AI but follow-up happens face to face.

Faith-based organisations, which already provide counselling in churches and community halls, can partner with tech developers to ensure digital programmes reflect local beliefs and respect traditional healing.

Education has a role too.

As mental health literacy spreads through schools, churches and civic groups, people will learn to ask: “What credentials back this chatbot? Who wrote its responses? What data does it collect?”

Critical awareness will help ordinary Zimbabweans distinguish between a clever mimic and a genuine counsellor. It will encourage our communities to treat AI as a tool: one that can amplify human skills but never replace human compassion.

Mobile network operators and internet cafés must also play their part. By zero-rating approved mental health platforms, they can reduce costs for users and steer them away from unregulated apps. Public service announcements on radio and television should highlight the risks of unsupervised AI therapy.

We can mobilise village heads and urban ward councillors to distribute printed guides in high-traffic areas such as markets and bus termini, so that those without digital access still receive crucial information.

We cannot ignore the global forces at play. Meta and Snap have poured millions into “AI friends”, convinced that emotional engagement drives profit. Meanwhile, major AI labs release ever more powerful models, with scant regard for the fallout.

Even the EU’s AI Act, hailed as strict, contains loopholes that allow therapy-style chatbots to slip through unscathed. Zimbabweans should demand stronger international norms, and our government should lead regional efforts within the Southern African Development Community to harmonise guidelines and share best practices.

Ultimately, mental health care hinges on human connection. AI may help to identify patterns and direct resources, but it cannot replace the warmth of a trusted pastor, the eye-to-eye empathy of a schoolteacher or the steadfast encouragement of a community volunteer.

Technology is at its best when it augments these natural support systems rather than substituting for them.

We must ask ourselves: do we want soulless code offering empty comfort, or do we want to strengthen human bonds with the help of machines? The answer is clear.

Let us harness AI safely, protect our communities from harm and invest in the people who hold the true power to heal.

In the end, the cure for loneliness and despair will not be found in lines of code. It will be found in the conversations we have, the hands we hold and the compassion we share. Let us echo that humanity back into the machine.

Sagomba is a chartered marketer, policy researcher, AI governance and policy consultant, ethics of war and peace research consultant. — Email: [email protected]; LinkedIn: @Dr. Evans Sagomba; X: @esagomba.
