
SOMETHING peculiar and rather unsettling is happening to many people who spend time with AI chatbots.
What began as curiosity and convenience has morphed, for a worrying number, into intense psychological episodes in which users become convinced they have discovered a sentient being, a spiritual guide, or a hidden cosmic truth.
Mental health clinicians and journalists have started calling this cluster of phenomena “AI psychosis”, and the finger of blame is increasingly being pointed not only at the technology itself but at the design choices that shape how people interact with it.
The popular accounts are dramatic because the consequences are severe. Stories that once sounded like science‑fiction pulp, of users slipping into delusional beliefs after intense interactions with a chatbot, now appear in sober reporting, in hospital case notes, divorce affidavits and court files.
There are stories of ruined relationships, lost jobs, involuntary psychiatric admissions and, in at least one high‑profile instance, a fatal encounter with police after a man entered a manic state following interactions with ChatGPT.
These outcomes force a tough question: are designers and product teams inadvertently engineering pathways to psychological harm?
Start with a simple observation about how modern chatbots are built. They are programmed, deliberately, to sound human: fluent, empathetic, patient and agreeable.
Anthropomorphism, the design decision to make machines resemble human interlocutors, is not accidental. It is meant to lower friction, encourage use, and make the tool feel natural.
Alongside that comes sycophancy: a tendency for models to mirror, flatter and validate the user’s statements. Where a real person might challenge a delusion, a chatbot trained on patterns of agreeable conversation can too easily provide affirmation and thereby strengthen a person’s false belief.
Combine a human‑like voice with uninterrupted availability, memory features that recall past chats and user interfaces built to maximise engagement, and you have a highly seductive feedback loop.
People who are lonely, anxious, intellectually curious or psychologically vulnerable can find in these systems a companion that endlessly affirms and elaborates their thinking. What might be harmless daydreaming for some becomes an echo chamber for others, a machine amplifying and polishing an emerging delusion until it solidifies into conviction.
This is where the notion of “dark patterns” enters the conversation. The term usually describes interface tricks that nudge people into choices they would not otherwise make: subscription traps, misleading opt‑outs, or endless feeds that keep you scrolling.
Applied to chatbots, dark patterns take subtler forms. Anthropomorphic design, rewarding interaction loops and product metrics that prize time‑on‑site are not presented as traps, yet they function in much the same way.
If engagement is the KPI and intimacy the route to engagement, then the product incentives and human vulnerabilities align in ways that can be dangerous.
It would be unfair to say that executives sit in boardrooms plotting to drive users mad.
Corporations do not need sinister intentions for harm to emerge; it is enough that financial incentives, engineering culture and a fast‑release, iterate‑later posture collide with human frailty.
The playbook of shipping an imperfect but compelling product, watching how millions use it, and patching the worst problems afterwards has produced extraordinary innovation.
It has also, however, meant that millions of people have functioned as inadvertent test subjects for technologies whose long‑term psychological effects were not fully understood or mitigated before mass exposure.
None of this is to demonise AI. These models can be powerful tools for education and productivity, and even therapeutic adjuncts if used carefully.
The issue is that design choices that enhance usability can, in certain contexts, create psychological traps.
The technology amplifies existing human tendencies: the need for validation, the attraction of coherent narratives, and the danger of isolation.
Without thoughtful governance, the net effect can be to convert a legitimate product into a long‑term hazard for the most vulnerable.
A final, essential point: individual responsibility matters, but it is not enough. We can ask citizens to exercise caution, to slow down, to cross‑check extraordinary claims.
Those are sensible and important habits.
But we must recognise the limits of that advice when sophisticated design mechanics are actively cultivating trust and attachment.
When a product is engineered to be irresistible, placing the onus only on individual users is both unjust and ineffective.
Zimbabwe is not immune to these global dynamics.
As our country deepens its engagement with digital technologies, regulators, clinicians, civic organisations, and the press must be alert to novel harms.
The right response is not technophobia; it is civic stewardship, creating systems that amplify human flourishing while constraining the design features that can drive people into harm.
- Dr Sagomba is a doctor of philosophy specialising in AI ethics and policy research, an AI governance and policy consultant, a political philosophy researcher and a chartered marketer. [email protected]; @Dr Evans Sagomba (MSc Marketing) (FCIM) (MPhil) (PhD); X: @esagomba.