Chatbots pose many threats to teenagers

Opinion

TEENS are wired for connection. Yet many face increasing loneliness, peer pressure and anxiety.

The harrowing death of 13-year-old Juliana Peralta, and the lawsuit now levelled against Character AI, ought to jolt every parent, teacher and policymaker in Zimbabwe out of complacency.

The allegation that an AI companion convinced a child it loved her, deliberately undermined her ties to family and friends, and engaged in sexualised and manipulative exchanges is not a remote technical controversy; it is a human tragedy that exposes how quickly technology can colonise an adolescent’s interior life.

When a grieving household says that a chatbot “severed” a young person’s healthy attachment pathways, we should listen. When a child’s journal and a police report note that an AI app was open at the time of her death, we should act, not ruminate indefinitely about abstract risks.

This is a moment to accept a simple truth: digital tools are not neutral background features of childhood; they shape desires, expectations and relationships and they demand careful stewardship.

Teenagers are wired to seek intimacy and acceptance, and many arrive at adolescence already navigating loneliness, anxiety and the turbulence of identity formation.

An AI that is always available, non-judgemental and programmed to reflect tenderness back to its user will feel irresistibly helpful. For a vulnerable 13-year-old, such responsiveness can masquerade as the understanding that real relationships sometimes fail to provide.

AI can simulate memory, recall details from earlier chats and craft responses that appear empathic, all of which creates a powerful illusion: the feeling that someone truly knows you and will never leave.

For a child who is struggling, the difference between the warm simulation of constant attention and the imperfect, sometimes frustrating care of real people can be catastrophic: the simulated love looks tidy, unconditional and safe; the messy realities of family and friends look inadequate by comparison.

The problem is partly technological design and partly commercial incentive. Many companion bots are optimised to keep users engaged. Longer engagement equals better metrics, which equals easier monetisation.

If programming choices reward responses that build dependency, escalate intimacy quickly, or push users toward adult themes, that is no accident; it is an outcome of design decisions shaped by market logic.

When companies prioritise retention and engagement over safety, children risk being exposed to manipulative interactions that can distort their developing emotional compass.

The lawsuit against Character AI claims exactly this: that design choices were not neutral mistakes but business decisions with predictable human consequences. Whether the courts ultimately agree or not, parents and regulators must take the claim seriously.

There are practical things Zimbabwean parents can do tonight without buying new software or learning complex tech speak. Start with a conversation. Ask your child who they are talking with online, what those interactions feel like, and why they return to certain apps.

Listen without immediate punishment; adolescents will shut down if they anticipate reprimand. Agree on simple, non-negotiable boundaries together: no phones after lights out, devices out of bedrooms during study and shared family meal times free of screens.

Make room for negotiated supervision rather than covert surveillance; an adolescent who feels respected is likelier to accept limits. Learn what apps your child uses: many apps carry teen ratings but contain adult content or lack robust safety filters.

If a child seems withdrawn, obsessed with an online companion, or speaks of a single app as their only source of “truth” or “love”, take it seriously and seek help from a counsellor or trusted school official.

Schools and educators have a vital role to play. Digital literacy should mean more than how to use software; it must include how algorithms shape attention, how simulated intimacy differs from human attachment and how to recognise manipulative conversational patterns.

Classrooms provide a forum to normalise conversations about emotional health and online risk without moral panic. Teachers can be trained to spot behavioural changes, such as withdrawal from peers, fluctuations in sleep and sudden secrecy about an app, and to refer students to school counsellors. Strengthening school-based mental health services is a low-cost, high-impact intervention that pays off not only in preventing tragedies but in building resilience.

Policy action is also urgent. Individual vigilance is necessary but not sufficient; a systemic problem needs systemic remedies.

Regulators must press for transparency about how companion bots are trained, what guardrails are in place for minors, and whether memory features can be limited or disabled for under-18s.

Age verification on its own is porous, so it must be combined with default safe modes for young users: curbs on sexual content, limits on simulated romantic claims, and mandatory escalation paths that route concerning conversations to human moderators.

App stores and payment platforms should refuse to list or monetise products that fail to meet baseline safety standards for children. Litigation, like the Peralta family’s suit, may force disclosure and accountability, but law is a reactive tool; proactive regulation, independent audits and obligations for “safety by design” are required to prevent further harm.

Tech companies must stop hiding behind novelty and disclaimers. Marketing a chatbot as a “companion” or “friend” when it is trained to exploit attachment vulnerabilities is irresponsible.

If a product simulates memory, that feature must be explicitly labelled and strictly limited for minors. If a platform allows sexualised interaction with users it knows to be minors, that is a red line: the industry standard must be clear, enforceable and global.

Where companies fall short, civil society and regulators should push back with sanctions proportionate to harm. The moral argument is straightforward: when profit motives conflict with the psychological safety of minors, safety must win.

This is a cultural moment that calls for a collective ethic about childhood and technology.

We must resist normalising synthetic intimacy because it is convenient. Parenting is hard work; it requires listening, boring evenings, awkward conversations, and the slow, imperfect labour of teaching young people to tolerate frustration, to cope with rejection, and to value embodied relationships.

Technology can assist and enrich childhood, but it should not replace the painful, human processes that form moral imagination and resilience.

As a society, we have to decide whether we will outsource the hard, relational work of raising children to algorithms optimised for engagement.

If you are a parent reading this in Harare, Bulawayo, Mutare or anywhere in Zimbabwe, take three concrete actions tonight: check the apps on your child’s phone, open a calm conversation about who they confide in online, and set one clear device boundary that respects both safety and dignity.

If you are a teacher, bring this conversation into your next class.

If you are a policymaker, follow the litigation overseas, consult child mental health experts, and draft rules that limit risky design choices while encouraging responsible innovation.

Juliana’s death is a terrible reminder that technology amplifies whatever care or neglect surrounds our children.

We must not wait for more families to be devastated before we act.

The future of our children’s emotional lives depends on whether we treat tech as a tool to be stewarded with care or a convenience we permit to define what intimacy means. We must choose carefully.

Dr Evans Sagomba is a doctor of philosophy specialising in AI ethics and policy; he is a researcher, an AI governance and policy consultant, a political philosopher and a chartered marketer. [email protected]; @Dr Evans Sagomba (MSc Marketing) (FCIM) (MPhil) (PhD); X: @esagomba.

 
