AI governance: A look at high-risk systems

Artificial intelligence (AI) has become the buzzword of our age. It is invoked in boardrooms, whispered in classrooms, and debated in parliaments.

Yet, for all the excitement, there is a sobering reality: AI is not a neutral tool. It can empower, but it can also imperil. The question that Zimbabwe, like every other nation, must confront is deceptively simple: how do we assess whether an AI system poses a risk to individuals, groups, or society as a whole?

This is not an abstract philosophical exercise. It is a matter of governance, of safeguarding livelihoods, and of protecting the vulnerable. And it is a matter that cannot be deferred.

Let us begin with a basic observation. AI is not one thing. It is many things. A chatbot that helps schoolchildren with homework, a self-driving car navigating Harare’s congested streets, a voice assistant in a household, a humanoid robot in a care home, a surveillance camera in a shopping mall, a toy that responds to a child’s voice, and a recommender system on social media: all of these fall under the broad umbrella of “AI”.

Yet, clearly, they do not present the same risk profile. A malfunctioning chatbot might frustrate a student, but a malfunctioning autonomous vehicle could cause fatalities.

A biased social media algorithm might distort public discourse, while a flawed medical diagnostic system could misdiagnose a patient with life-threatening consequences.

This uneven terrain is precisely why regulation is so difficult. To lump all AI systems under one legal framework, simply because they share the label “AI,” is to risk absurdity. It is akin to regulating bicycles and aeroplanes under the same transport law.

The term “high-risk AI” is now fashionable in policy circles, but it is often poorly defined. The European Union’s AI Act, hailed as a global benchmark, attempts to classify certain systems as high-risk.

These include AI used in critical infrastructure, education, employment, law enforcement, and healthcare. Yet even this supposedly strict framework has blind spots.

Consider social media recommender systems. They are not classified as high-risk under the EU’s AI Act, yet we know they have already contributed to political polarisation, mental health crises, and even violence.

Consider generative AI models that can produce convincing fake videos or audio recordings. These are not neatly captured by existing definitions, yet they pose obvious risks to democracy and social trust.

High-risk AI, then, should not be defined narrowly by sector alone. It should be defined by potential impact. Systems that can cause physical harm, distort democratic processes, manipulate vulnerable populations, or undermine social cohesion should all fall under stricter scrutiny.
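To make the impact-based test concrete, here is a minimal sketch in Python of how such a screening rule could be written down. The class, its field names, and the sector list are illustrative assumptions for the sake of argument, not provisions of any existing statute.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative profile of an AI system under review (hypothetical fields)."""
    name: str
    sector: str                            # e.g. "healthcare", "social media"
    can_cause_physical_harm: bool
    can_distort_democratic_processes: bool
    targets_vulnerable_populations: bool
    can_undermine_social_cohesion: bool

# Sectors the EU AI Act treats as high-risk (simplified here for illustration).
HIGH_RISK_SECTORS = {"critical infrastructure", "education", "employment",
                     "law enforcement", "healthcare"}

def is_high_risk(system: AISystem) -> bool:
    """Flag a system as high-risk by sector OR by potential impact,
    rather than by sector alone."""
    sector_flag = system.sector in HIGH_RISK_SECTORS
    impact_flag = any([
        system.can_cause_physical_harm,
        system.can_distort_democratic_processes,
        system.targets_vulnerable_populations,
        system.can_undermine_social_cohesion,
    ])
    return sector_flag or impact_flag

# A recommender system escapes a sector-only test but not an impact-based one.
recommender = AISystem("social media recommender", "social media",
                       False, True, True, True)
print(is_high_risk(recommender))  # True
```

The point of the sketch is the disjunction: a sector list alone would wave the recommender system through, while the impact clauses catch it.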

One might ask: why should Zimbabwe, grappling with economic challenges, climate shocks, and infrastructural deficits, devote energy to regulating AI?

The answer is straightforward. Because AI is already here, and it is already shaping our society. Zimbabwean banks are experimenting with AI-driven credit scoring. Local authorities are considering AI for traffic management.

Farmers are being introduced to AI-powered weather prediction tools. Social media platforms, powered by opaque algorithms, are influencing our political discourse and our children’s mental health.

To pretend that AI is a distant concern is to ignore reality. And to ignore the risks is to invite harm that may be irreversible.

There is a temptation to adopt a “wait and see” approach. After all, why regulate something that is still evolving? But history teaches us that waiting too long can be catastrophic. Consider the regulation of tobacco.

For decades, governments hesitated, even as evidence of harm mounted. By the time strict laws were enacted, millions had suffered preventable illnesses. Consider climate change. For years, policymakers delayed action, and now the costs of adaptation are far higher. AI presents a similar dilemma.

If we wait until harm is undeniable, it may be too late. Once misinformation spreads, once trust in institutions collapses, once vulnerable communities are exploited, no law can fully repair the damage.

Zimbabwe’s context adds unique dimensions to the AI risk debate.

Our economy is fragile, our institutions are under strain, and our citizens are often exposed to vulnerabilities that wealthier nations can buffer.

Imagine an AI-driven credit scoring system that unfairly penalises informal traders because it cannot interpret non-traditional income streams. Imagine an AI-powered surveillance system deployed without proper safeguards, eroding civil liberties. Imagine generative AI tools flooding our information space with fabricated political content during an election season.

These are not hypothetical risks. They are foreseeable. And they demand foresight. So, what should Zimbabwe do? First, we must resist the temptation to copy-paste foreign laws.

The EU AI Act, for all its ambition, is not a perfect fit for our realities. Our regulatory framework must be contextualised, sensitive to local vulnerabilities, and responsive to our developmental priorities.

Second, we must adopt a tiered approach. Not all AI systems require the same level of oversight. Low-risk systems, such as chatbots for customer service, should be lightly regulated.

Medium-risk systems, such as AI used in agriculture, should be monitored but encouraged. High-risk systems, such as AI in healthcare, policing, or financial services, should face strict legal scrutiny, mandatory audits, and transparency requirements.
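As a rough sketch of how such a tiered scheme could be recorded, the Python below pairs each tier with the oversight obligations proposed above. The tier names and obligations follow the text, while the function and its lookup table are hypothetical.

```python
from enum import Enum

class Tier(Enum):
    LOW = "low"        # e.g. customer-service chatbots
    MEDIUM = "medium"  # e.g. AI-powered agricultural tools
    HIGH = "high"      # e.g. healthcare, policing, financial services

# Oversight obligations per tier, following the tiered approach in the text.
OBLIGATIONS = {
    Tier.LOW:    ["light-touch registration"],
    Tier.MEDIUM: ["monitoring", "periodic reporting"],
    Tier.HIGH:   ["strict legal scrutiny", "mandatory audits",
                  "transparency requirements"],
}

def obligations_for(use_case: str) -> list[str]:
    """Map an illustrative use case to its oversight obligations."""
    tier_by_use_case = {               # hypothetical lookup table
        "customer service chatbot": Tier.LOW,
        "crop yield prediction": Tier.MEDIUM,
        "credit scoring": Tier.HIGH,
        "clinical diagnosis": Tier.HIGH,
    }
    tier = tier_by_use_case.get(use_case, Tier.HIGH)  # default to caution
    return OBLIGATIONS[tier]

print(obligations_for("credit scoring"))
# ['strict legal scrutiny', 'mandatory audits', 'transparency requirements']
```

Defaulting unlisted use cases to the high tier reflects a precautionary stance; a regulator might equally route them to a manual review queue instead.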

Third, we must embed ethics into our framework. AI is not just a technical tool; it is a social actor. It shapes behaviour, influences decisions, and redistributes power.

Our oversight must therefore include ethical considerations: fairness, accountability, transparency, and respect for human dignity.

Fourth, regulation cannot be imposed from above alone. It must be justified to the public.

Citizens must understand why certain AI systems are deemed high-risk, why certain safeguards are necessary, and why certain freedoms must be protected.

This requires dialogue. Universities, civil society organisations, and the media must play a role in educating citizens about AI risks. Policymakers must engage communities, not merely technocrats.

And the private sector must be part of the conversation, not merely a target of regulation.

Zimbabwe does not operate in isolation. AI systems are often imported, developed by multinational corporations, and deployed across borders. This means our regulation must align, at least partially, with global norms.

But alignment does not mean subservience. We must assert our sovereignty, ensuring that imported AI systems respect Zimbabwean laws, values, and priorities.

Just as we demand that foreign investors respect our labour laws, we must demand that foreign AI systems respect our ethical standards.

The debate on high-risk AI is not a luxury. It is a necessity. It is about protecting lives, safeguarding democracy, and ensuring that technology serves humanity rather than undermines it.

Zimbabwe has an opportunity to lead, not lag. By crafting a thoughtful, contextualised framework for AI oversight, we can protect our citizens, attract responsible investment, and position ourselves as a nation that embraces innovation without sacrificing ethics.

AI is not destiny. It is a design. And designs can be guided, shaped, and regulated. The challenge before Zimbabwe is to recognise that some AI systems are not merely tools but potential threats.

To classify them as high-risk is not to stifle innovation; it is to safeguard society. We must therefore ask, with urgency and clarity: which AI systems deserve our trust, and which demand our scrutiny?

The answer will define not only our technological future but also our social fabric. If we get it right, AI can be a partner in progress. If we get it wrong, it can be a catalyst for harm.

  • Dr Sagomba is a doctor of philosophy specialising in AI, ethics and policy; an AI governance and policy consultant; an ethics of war and peace research consultant; a political philosophy researcher; and a chartered marketer. [email protected] / LinkedIn: Dr. Evans Sagomba (MSc Marketing) (FCIM) (MPhil) (PhD) / X: @esagomba
