There is a quiet revolution knocking on the doors of our consultation rooms — not a new clinic, not a trained counsellor, but lines of code and a cheerful chatbot ready to listen.
For many, the idea of an always-available, low-cost listener sounds like a blessing. For those who understand what happens in the intimate, fragile work of counselling, it should set off serious alarm bells.
Artificial intelligence, especially large language models posing as counsellors, does not simply offer another tool; it challenges the very ethical foundations of care.
Recent practitioner-informed analysis shows how these LLM counsellors systematically violate ethical standards in mental health practice.
People in Zimbabwe know counselling as a human craft. It is built on trust, confidentiality, professional judgment, cultural attunement, and the moral responsibility not to harm.
A proper counsellor reads tone, body language, history, context, and the silences between words. They know when to refer a client for medical attention, when to pause and enquire more gently, and when to hold a narrative with respect and patience.
LLMs, however fluent and empathetic-sounding, are statistical machines. They predict what comes next in a sentence from patterns in training data. That is a long way from understanding a human life.
Let us start with the simplest ethical test: non-maleficence, the duty to do no harm.
Counselling often involves crises: suicidal thoughts, sudden trauma, domestic violence, and psychosis.
Practitioners are trained to recognise danger signs and take appropriate, often immediate steps.
An AI, unless carefully bounded and supervised, can produce calming-sounding but unsafe responses, fail to recognise imminent risk, or offer generic advice that delays real help.
The recent framework by practitioners highlights precisely this risk: LLM counsellors may normalise or minimise risk because they lack the situated judgment clinicians use in emergencies. In our streets, where mental health services are thin and referral networks small, such errors are not harmless abstractions; they are potentially fatal.
Confidentiality and data security form another critical pillar.
In Zimbabwe, people often seek counselling because they need a safe place for private matters: family disputes, financial shame, sexual assault, or political anxiety. Traditional counselling promises privacy, bound by professional codes and local laws.
When AI is involved, every interaction can become data. Who stores these conversations? Where are they hosted? Who can access or analyse them? Even anonymised data can be re-identified.
The practitioner-informed critique points out that systems masquerading as counselling platforms often feed transcripts back into model training, creating a loop where vulnerable details re-enter large datasets without adequate consent. For ordinary people in Harare, Bulawayo, or Gweru, that is not a theoretical worry; it is a clear breach of the most basic privacy expectations.
Cultural competence is another area where AI falls short. Counselling is not merely about psychology textbooks; it is about language, idiom, local beliefs and the symbolism that matters to a person from a particular community.
Zimbabwe is a tapestry of languages, spiritualities and social norms. A counsellor must know when to invoke cultural metaphors, when Christian pastoral frames are helpful, and when traditional practices carry meaning that must be respected.
LLMs, trained on vast global corpora, may flatten cultural nuances into bland generalities, or, worse, produce culturally insensitive or patronising replies.
Practitioners who studied real-world use note that these models can easily misinterpret idioms or overlook local referral options, thereby undermining the therapeutic alliance rather than building it.
Then there is the question of accountability. When a counsellor errs, clients can complain to a regulatory board, and the professional faces sanctions, retraining or licence loss.
Who is accountable when a chatbot gives harmful advice? Is it the platform, the developer, the hosting company, or the poorly labelled “counselling” bot itself? The practitioner-informed framework raises this problem as central: the opacity of AI systems creates a vacuum of responsibility where people in distress are left without recourse.
Zimbabwe’s regulatory ecosystem for mental health is still maturing; introducing technologies that cloud accountability risks undermining public trust in care systems.
Let us also consider the professionalisation of care. Counselling is more than delivering scripted solace; it requires ethics, reflective practice and supervision. If health systems or non-governmental organisations (NGOs) begin to substitute human counsellors with AI to cut costs, we degrade a profession and reduce the possibilities for relational healing.
Short-term efficiency gains may produce long-term impoverishment of care skills in our communities. The practitioner analysis warns that when LLMs are positioned as equivalent to trained counsellors, the integrity and standards of mental health practice suffer.
Some will say, and I understand the appeal, that AI increases access. For remote communities, for people afraid of stigma, or for those on tight budgets, an AI listener could be a lifeline. But access without safety is hollow.
If the only available option is an unregulated chatbot that cannot manage a crisis or protect privacy, then we have given people a poor substitute under the guise of innovation.
A better approach is hybrid: use digital tools to augment, not replace, trained professionals; use AI for triage, scheduling, psychoeducation, and to support counsellors, but not as the final arbiter of care.
So, what should we do in Zimbabwe? First, regulation. We need clear rules that distinguish between informational mental health tools and therapeutic interventions. Any platform that offers counselling must meet licensing and data-protection standards, declare who holds responsibility, and be transparent about data use. Second, public literacy.
People must know what they are interacting with. A big, friendly chatbot must come with plain-language warnings: this is not a human counsellor; if you are in danger, contact a trained professional. Third, culturally grounded design.
Technologies used here must be developed with local communities, using Zimbabwean languages and frameworks of healing; otherwise, they will alienate and mislead.
Fourth, training and integration. Our universities, training institutions and NGOs should include digital literacy for counsellors: how to use AI tools safely, how to interpret outputs, and how to maintain ethical standards in hybrid contexts.
Fifth, research and oversight. Before we scale any AI counselling product, we must evaluate outcomes rigorously in our context. The practitioner-informed framework urges careful study and pilot trials rather than wholesale deployment.
Sixth, moral imagination. Counselling is a moral practice. It asks us to stand with people in their vulnerability. Technologies can help us scale that care, but they can never replace the moral presence of another human being who bears witness, resists easy answers and accepts the weight of another’s pain.
When we design systems, we should ask: Does this enhance our capacity to love, to listen, to protect, or does it outsource our duty to feel and respond?
To the policymakers, funders and tech entrepreneurs in Zimbabwe: the temptation to adopt shiny solutions quickly is real. But the test of good policy is not speed; it is whether an intervention preserves dignity and reduces harm. To ordinary people reading this in the streets, know your rights.
Ask whether that digital listener is regulated, who holds your data, and whether it will hand you over to a human when things become serious.
AI counsellors are not inherently evil; they are tools. Used wisely, they may relieve burden, expand education and help manage routine tasks. Used poorly, they commodify care, expose private suffering and, worst of all, create the illusion that being listened to by a machine is the same as being cared for by a person.
Zimbabwe deserves better than illusions. Our people deserve care that is safe, accountable and rooted in our social and cultural realities.
Let us be bold enough to harness technology, and responsible enough to protect the human heart at the centre of counselling. The choice we make now will shape how Zimbabweans are cared for in the years to come.
Dr Sagomba is a chartered marketer, policy researcher, AI governance and policy consultant, ethics of war and peace research consultant. — [email protected], LinkedIn: @Dr. Evans Sagomba, X: @esagomba.