Society is witnessing a growing enthusiasm for Artificial Intelligence (AI) and its accompanying tools. Much praise has been given to the benefits of AI, particularly its ability to assist in making timely decisions and processing vast amounts of data quickly.
On the African continent, there is growing interest in the potential of AI technology to transform value chains within the public and private sectors. Ultimately, this could serve as a catalyst for economic and social change on the continent.
For Zimbabwe, there is a buzz of activity around AI. For instance, universities and colleges are starting to see the need to re-align their curricula to the evolving technological landscape.
Ultimately, the aim is a workforce that is also adept at using technologies such as AI to drive the Education 5.0 agenda en route to Vision 2030.
Another development to note is the emergence of hubs such as the Zimbabwe AI Lab and Tech Hub Harare, which seek to nurture tech-entrepreneurs, among whom AI tools are expected to feature prominently.
Also well noted is the ICT minister's clarion call to establish a multi-stakeholder AI committee to advise stakeholders within the Zimbabwean knowledge and ICT ecosystem.
Yet despite these advances, we need to be cautious. One such caution concerns a phenomenon known as AI hallucinations. In simple terms, AI hallucinations refer to the false confidence an AI system exhibits when generating information that appears credible but is, in fact, inconsistent, inaccurate, or entirely fabricated.
Others have described this as nonsensical output that demands careful verification before it is presented or disseminated.
A study published in the journal Humanities and Social Sciences examined 243 instances of distorted information generated through AI and systematically classified the common errors.
These included factual errors, inconsistencies, flawed logic and reasoning, inaccuracies and outright fabrication. The findings underscore the need to apply an extra layer of scrutiny to any information gathered through AI tools.
At play here is a dual veneer of false confidence from both the AI system and the end user.
First, the AI system relies heavily on queries and prompts given by the user to generate its output. During this process, much can go awry.
Some results may be anchored in facts and events that truly happened, while others may be tenuous, speculative, or outright incorrect. The danger lies in the fact that all of it is presented with the same confidence, leaving it to the user to discern what is accurate.
The second layer of false confidence lies with the human user. Here, factors such as impression management come into play where individuals use information, verified or not, to advance an agenda or impress others. This often occurs when users fail to question or validate AI-generated content before using it to make decisions or support arguments.
Given this backdrop, it is not surprising to see the challenges that have already emerged in sectors of the Zimbabwean economy due to uncritical AI use. Consider the legal profession, for example.
Recently, an apology was tendered to the Supreme Court of Zimbabwe after a legal brief was submitted containing fabricated case law and misinterpretations generated by AI. This is an observation also affecting the South African legal fraternity, where judges have reprimanded lawyers for citing non-existent case law that turned out to be hallucinated output from AI tools.
In one notable case, an acting judge explicitly attributed the inclusion of fake legal citations in an argument to the use of AI-generated content.
So, what should we do?
First, we must embrace the genuine benefits of AI tools in making our work and lives more manageable.
With their burgeoning popularity, it is clear we are only beginning to unlock their full potential. These tools will continue to evolve and become embedded in our daily lives.
Second, the duty of care and responsibility in using AI lies squarely with us, the end users. Using unverified information is wrong and can place us in precarious, even legally liable, situations.
I asked ChatGPT, a popular generative AI tool, for advice on dealing with the challenge of AI hallucinations.
The response was: “when using AI, remember to verify its information with trusted external sources, always check the evidence or references it provides, and treat it as an assistant, not as the final authority.”
Third, and most importantly, we must not abdicate our agency and judgment in the face of technological progress. The confidence we place in AI outputs must be balanced with skepticism and a willingness to question.
We cannot allow ourselves to become slaves to the machine, letting our critical thinking atrophy in the glow of AI-generated text.
Remaining vigilant and exercising our human judgment is an essential skill in the present and future.
We are truly living at the height of a technological moral panic, a time when our ability to exercise our executive functioning skills is being eroded, precisely when we need them the most. It is a period in which voices of falsehood are legion, spreading at the mere click of a button, often without verification or reflection.
Yet, this is also the very moment when we must be most vigilant and rise to the task of cultivating further the skills and habits that affirm our commitment to truth, discernment, and verification.
This is what makes us human and underpins our exercise of dominion.
AI can be a powerful ally but only if we, as humans, remain in charge of discerning what is true, reliable, and worthy of trust.
- Chinyamurindi is a professor in the Department of Applied Management, Administration and Ethical Leadership at the University of Fort Hare in South Africa. He writes in his own personal capacity. These weekly New Perspectives articles, published in the Zimbabwe Independent, are coordinated by Lovemore Kadenge, an independent consultant, managing consultant of Zawale Consultants (Pvt) Ltd, past president of the Zimbabwe Economics Society and past president of the Chartered Governance & Accountancy in Zimbabwe (CGI Zimbabwe). — [email protected] or mobile: +263 772 382 852.




