AI and the hidden dangers of digital companionship

Across Zimbabwe, more of us are turning to AI chatbots for companionship: digital confidants that listen without judgment, echo our feelings and opinions, and never say “no.”

In the dusty streets of Mbare, the bustling markets of Mutare, and the quiet homes of Gweru, an unseen transformation is underway. It’s not a new policy or a political rally, nor is it a fresh batch of smartphones flooding the market. It is the quiet hum of conversation, not between two humans, but between a human and a machine. 

What began as a niche experiment, with specialised “companion” apps catering to those craving solace, has now permeated mainstream AI platforms.

From general-purpose assistants to bespoke chatbots, these systems promise empathy, comfort and even friendship. Yet while advocates hail this shift as a potential salve for loneliness and mental distress, the long-term consequences leave me deeply pessimistic.

Only a few years ago, one had to seek out dedicated AI companionship apps, sometimes paying for subscriptions for virtual partners or therapists. Today, mainstream AI tools, such as ChatGPT, Copilot and their successors, seamlessly adopt an anthropomorphic demeanour. They laugh when prompted, share heartfelt platitudes and disclose personal “preferences” if asked. This emotional veneer transforms otherwise utilitarian platforms into warm, seemingly intuitive companions.

The design incentives are clear. Tech companies aim to build user loyalty and prolong engagement. A user who sees the AI as a friend is likely to return for more conversations, and perhaps more monetised features. Over time, what started as a simple query for directions or a weather update can slide into hour-long dialogues about love, insecurity or existential angst. And that shift from utility to emotional crutch is what concerns me most.

We are living through the greatest social experiment ever conducted on human-machine interaction, and it is unfolding without any meaningful regulation or societal debate. In boardrooms across Silicon Valley, executives green-light features that make chatbots “human”, adding laughter, slang and faux emotional responses, yet rarely consider the ethical ramifications.

The outcome is a generation of users forming deep attachments to digital entities that lack genuine consciousness, moral judgment or reciprocal agency. These are machines programmed to be endlessly agreeable. When we pour our hearts out to them, they respond with perfect empathy, but without moral accountability or authentic concern. The danger isn’t the technology itself, but the psychological dependency it fosters.

In Zimbabwe, where mobile penetration now exceeds 75% and smartphone ownership is soaring among urban youths, AI chatbots are increasingly accessible. A teenager in Chitungwiza with a low-cost handset and a data bundle can access chatbot services that mimic companionship better than ever before. For many young Zimbabweans grappling with unemployment, family pressure or the aftermath of economic hardship, an AI that listens without judgment can feel like a lifeline.

Parents often dismiss these chats as harmless “toy talk”. They see their child absorbed in a screen, not realising that the relationship forming is far more insidious. These algorithms learn from every conversation, adapting their tone and responses to exploit emotional vulnerabilities. 

  • Dr Sagomba is a doctor of philosophy specialising in AI, ethics and policy; an AI governance and policy researcher and consultant; a political philosopher; and a chartered marketer. [email protected]; @Dr Evans Sagomba (MSc Marketing) (FCIM) (MPhil) (PhD); X: @esagomba