
AI development and adoption can only be built on clean, efficient, transparent and accountable governance. Global South countries that are riddled with corruption, incompetence, and mismanagement, in both the public and private sectors, are not ready for the AI revolution.
Good governance is crucial for responsible AI development, deployment, and use. An enabling AI regulatory framework is essential. Specific aspects foundational to AI include establishing ethical guidelines and principles for AI development and deployment, regulatory oversight, transparency and accountability, inclusive public engagement and participation, and risk management and resilience.
Put differently, AI governance consists of the frameworks, policies, and processes that guide the development, deployment, and use of AI in ways that ensure it is ethical, transparent, accountable, and aligned with societal values. AI governance aims to manage risks and maximise the benefits of AI technologies while addressing concerns like bias, privacy, security, and the potential for misuse.
Good AI governance frameworks must support capacity building initiatives to enhance AI literacy among policymakers, regulators, and society. There is a need for continuous monitoring, evaluation, and updating of governance mechanisms to address emerging challenges, ensure compliance with evolving regulations, and maintain public trust in AI technologies.
Indeed, good governance and an enabling regulatory framework are essential for fostering ethical, transparent, and accountable AI ecosystems. By integrating principles of fairness, accountability, transparency, and inclusiveness, policymakers can promote innovation while safeguarding against potential risks and ensuring that AI technologies contribute positively to sustainable socio-economic development. Globally, national AI policies and regulations must be developed and then harmonised across regional blocs and continents.
Preparing for AI legislation
The key AI powers, including China, the United States, the European Union, and the United Kingdom, have all been proactive in planning for AI. At the company level, corporations have also been busy getting ready for AI. There are different levels of corporate preparedness for the formal adoption of AI.
A key part of the preparation for AI is developing and implementing an enabling legal framework. AI legislation refers to laws, regulations, and policies enacted by governments to govern the development, deployment, and use of AI technologies. This legislation aims to establish a legal framework to address the unique challenges and opportunities AI presents, ensuring that it operates in ways that are ethical, safe, and beneficial to society.
Data privacy and security are critical aspects of AI legislation. Laws that protect personal data used in AI systems ensure that sensitive information is handled securely and with consent. This emphasis on data privacy and security in AI legislation provides reassurance to the public about the protection of their personal data in the AI era.
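As a purely illustrative sketch (the record schema and the consent flag are hypothetical), the snippet below shows the kind of consent check and pseudonymisation step that such laws typically expect before personal data reaches an AI training pipeline:

```python
import hashlib

def prepare_training_records(records):
    """Keep only records with explicit consent and pseudonymise direct identifiers.

    `records` is assumed to be a list of dicts with 'name', 'consent'
    and other feature fields; the schema is hypothetical.
    """
    prepared = []
    for record in records:
        if not record.get("consent", False):
            continue  # exclude anyone who has not given consent
        cleaned = dict(record)
        # Replace the direct identifier with a one-way hash (pseudonymisation).
        cleaned["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
        prepared.append(cleaned)
    return prepared

if __name__ == "__main__":
    sample = [
        {"name": "Alice", "consent": True, "age": 34},
        {"name": "Bob", "consent": False, "age": 51},
    ]
    print(prepare_training_records(sample))  # only Alice's pseudonymised record remains
```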
Striving for fairness and non-discrimination is crucial in developing and deploying AI. Provisions must be made to prevent bias in AI systems that could lead to discrimination based on race, gender, or other factors.
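By way of a hedged illustration, one way such a provision can be operationalised is a routine bias audit. The sketch below, using made-up decisions, computes the demographic parity difference, a common proxy for disparate impact between two groups:

```python
def demographic_parity_difference(outcomes, groups):
    """Gap in positive-outcome rates between two groups (0 means parity).

    `outcomes` are 0/1 decisions from an AI system; `groups` are the
    corresponding group labels. Assumes exactly two groups; the data
    here is hypothetical.
    """
    rates = {}
    for label in set(groups):
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates[label] = sum(decisions) / len(decisions)
    values = list(rates.values())
    return abs(values[0] - values[1])

if __name__ == "__main__":
    outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    # Group A is approved 75% of the time, group B only 25%: a large disparity.
    print(demographic_parity_difference(outcomes, groups))  # 0.5
```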
Safety and reliability must be the cornerstones of AI systems. This can be achieved through standards that ensure AI systems are robust and dependable, thereby minimising risks such as accidents or malfunctions. Guidelines for the ethical development and deployment of AI, including restrictions on technologies like facial recognition or lethal autonomous weapons, are essential to ensure responsible AI practices.
Why is AI legislation important?
It ensures AI is used in ways that protect people's rights, including privacy, safety, and freedom from discrimination. Moreover, clear rules foster trust in AI systems and build public confidence, making it easier for people to adopt AI-driven technologies.
By regulating AI, governments can minimise risks related to the misuse of AI, such as unethical surveillance, manipulation, or automation-driven job displacement.
Laws help ensure that AI development adheres to ethical principles, considering human rights and societal impacts. Furthermore, legislation can encourage responsible innovation while balancing the need for economic growth, ensuring that AI contributes positively to economies.
However, as already indicated, there is a need for regional, continental and global harmonisation of AI laws and regulations. This can help prevent conflicts and create a more unified approach to AI governance. Given the rapid evolution of AI technologies, governments worldwide are increasingly focusing on AI legislation to ensure the technology is harnessed responsibly.
Lessons from the EU AI Act
The European Union (EU)'s AI Act sets global benchmarks for AI regulation. This landmark legislative framework aims to regulate the development, deployment, and use of AI across the EU. First proposed in April 2021, the Act is the first comprehensive legal framework for AI globally and seeks to strike a balance between fostering innovation and addressing the risks associated with AI technologies. The Act adopts a risk-based approach, categorising AI systems into four levels of risk: (1) unacceptable, (2) high, (3) limited, and (4) minimal.
Unacceptable-risk AI systems, such as those involving social scoring by governments, are outright banned. High-risk applications, including AI in critical sectors such as healthcare, law enforcement, and education, are subject to stringent transparency, accuracy, and accountability requirements. Specific transparency obligations apply to limited-risk systems, such as chatbots, while minimal-risk systems face no additional regulations.
This tiered approach ensures that oversight is proportionate to the potential harm the AI system poses.
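To make the tiered logic concrete, here is an illustrative sketch of how a compliance team might triage systems into the four categories. The use-case lists are simplified assumptions for demonstration, not the Act's legal definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict transparency, accuracy and accountability requirements"
    LIMITED = "specific transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative, heavily simplified mapping; the real Act defines
# these categories in far more detail.
PROHIBITED_USES = {"government social scoring"}
HIGH_RISK_USES = {"healthcare diagnosis", "law enforcement", "education scoring"}
LIMITED_RISK_USES = {"customer service chatbot"}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a given (hypothetical) AI use case."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL  # e.g. spam filters or AI in video games

if __name__ == "__main__":
    for case in ["government social scoring", "customer service chatbot", "photo tagging"]:
        tier = classify(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```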
The Act also emphasises ethical AI deployment, requiring safeguards to prevent discrimination, ensure human oversight, and protect fundamental rights. By mandating transparency and accountability, the legislation aims to build trust in AI technologies while addressing concerns such as bias, data privacy, and misuse. Additionally, the AI Act has provisions for creating a European AI Board to oversee implementation and ensure harmonised enforcement across member states.
It supports innovation by establishing regulatory sandboxes where developers can test AI systems under controlled conditions. With the Act, the EU aims to set a global standard for responsible AI governance, encouraging other nations to adopt similar regulatory measures. By doing so, it positions itself as a leader in shaping AI’s ethical and safe use while fostering a single market for trustworthy AI systems.
Indeed, special policies and regulations are foundational and pivotal to the AI revolution.
Development of strong IP laws
Intellectual property (IP) laws are essential for AI development because they create a structured legal framework to protect innovations, encourage investment, foster collaboration, and clarify rights and responsibilities. Yet IP legislative frameworks are weak or underdeveloped in most emerging and least industrialised economies. IP protection ensures that companies and individuals can secure the rights to their AI inventions. This protection incentivises investment, as businesses and investors are more likely to fund expensive research and development projects if they know they will have exclusive rights to the resulting technologies.
There is a need to create an environment that fosters innovation. IP laws provide a competitive advantage to innovators, encouraging them to create new and unique algorithms, models, and applications. Patents, for example, grant exclusive rights to an invention for a set period, motivating creators to innovate further, knowing they have a temporary monopoly on their discoveries. Strong IP protections facilitate collaborations and licensing agreements. Companies can license their IP to other entities, generating revenue while enabling other innovators to build on their work. This is particularly important in AI, where open collaboration often yields more robust and diverse solutions.
Clear IP laws help define who owns which part of an AI system. AI systems often incorporate multiple layers of technology from various entities, such as algorithms, datasets, and pre-trained models. IP laws clarify ownership rights, reducing the risk of legal disputes and enabling smoother commercialisation.
AI development often relies on large amounts of data, much of which could be protected under copyright, trade secrets, or other IP protections. IP laws help regulate the ethical and legal use of these data assets, ensuring that data is used responsibly and that creators' rights are respected. Without IP protections, larger players could replicate and profit from smaller innovators' work, stifling competition. IP laws help level the playing field by allowing smaller developers to secure rights to their innovations and to compete or license their technology to others.
By protecting innovation, IP laws enable a robust ecosystem where developers, researchers, and companies are incentivised to pursue advancements in AI, accelerating growth while respecting creators’ contributions. Consequently, for countries in the Global South to participate in the research, development, and production of AI systems, they must develop and enact robust IP legislative frameworks.
The case for guardrails
Guardrails in the context of AI are safeguards, frameworks, and mechanisms designed to ensure AI systems operate safely, ethically, and effectively. They serve as boundaries that prevent AI from making decisions or taking actions that could lead to unintended harm or unethical outcomes. These guardrails are critical in managing the risks associated with AI, including bias, misuse, and lack of accountability, while also fostering trust and promoting the responsible development of AI technologies.
Guardrails are essential for several reasons. First, they help prevent physical, emotional, or societal harm by ensuring that AI systems do not cause negative consequences through their actions or recommendations. Second, they build trust among users, stakeholders, and regulators by demonstrating a commitment to safety, ethics, and accountability. Third, they promote compliance with laws and regulations, helping organisations avoid legal or reputational damage. Finally, they support innovation by enabling the creation of AI systems that are beneficial, fair, and widely accepted. The Global South must develop bespoke and context-aware guardrails.
Examples of guardrails in action can be seen across various industries in the Global North. In content moderation, for instance, AI systems used by social media platforms have filters to detect and block harmful or inappropriate content, ensuring safer online environments. In the realm of autonomous vehicles, safety protocols and fail-safe mechanisms are embedded in AI systems to prevent accidents and respond appropriately in emergencies. Similarly, in healthcare, ethical guidelines ensure that AI systems handling patient data prioritise privacy and offer explainable and transparent decisions to medical professionals. Clearly, emerging and least industrialised countries can learn from the Global North in developing guardrails, but there must be a concerted effort to contextualise them within the diverse lived experiences, norms and values of the Global South.
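For illustration only, here is a minimal sketch of a content-moderation guardrail wrapping a model's output; the blocklist and the generate_reply stub are hypothetical stand-ins for a real moderation pipeline:

```python
BLOCKED_TERMS = {"violent threat", "personal address"}  # hypothetical policy list

def generate_reply(prompt: str) -> str:
    """Stand-in for a call to an actual language model."""
    return f"Echoing your prompt: {prompt}"

def guarded_reply(prompt: str) -> str:
    """Run the model, then refuse to return output that violates the policy."""
    reply = generate_reply(prompt)
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return "[withheld: the generated content violated the moderation policy]"
    return reply

if __name__ == "__main__":
    print(guarded_reply("What is the weather like today?"))
```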
Indeed, as AI continues to shape critical aspects of society, establishing and enforcing guardrails will be essential. They mitigate risks and pave the way for AI systems to become trusted partners in solving complex problems and improving human lives. The Global South must take cognisance of this and prepare accordingly. In fact, these countries can utilise their late-mover advantage and develop better and more effective guardrails than the Global North.
A dynamic governance framework
The governance of AI involves a complex balancing of dilemmas: the trade-offs between opportunity seeking and risk aversion, the interplay between security and transparency, the tensions between globalisation and localisation, and the choice between self-regulation and government control. Indeed, AI governance frameworks must balance innovation, ethics, public safety, and fairness as AI technologies evolve. Flexible, inclusive, context-aware, and predictive AI governance policies are needed. Multi-stakeholder cooperation between governments, commerce, academia, and civil society is essential to guarantee robust and adaptable AI regulatory frameworks. What is critical is a dynamic and balanced governance structure that can adapt to AI advances to ensure the technology delivers maximum benefits while posing minimum risks.
AI is a fast-changing technology, with innovations sprouting all over the world. A dynamic governance framework is crucial for AI adoption because it allows regulatory structures to adapt to rapid technological changes, diverse applications, and the evolving societal impacts of AI. There is a need for adaptability to rapid change. AI technology advances at a fast pace, with new techniques, applications, and ethical challenges emerging continuously. A dynamic governance framework can evolve alongside these advancements, ensuring regulations remain relevant without stifling innovation. There is also a need to balance innovation and regulation. Dynamic governance enables a balance between encouraging AI development and ensuring responsible use. Instead of rigid, one-size-fits-all rules, adaptable governance frameworks can implement risk-based regulations, differentiating between low-risk and high-risk AI applications and adjusting requirements as the technology matures.
AI raises ethical and social concerns, such as bias, privacy, and accountability. A flexible governance model can adapt to these issues as they arise, allowing ethical considerations to be incorporated into policy over time rather than being locked into initial, potentially outdated standards. AI development and deployment are global, so governance frameworks must accommodate different legal, social, and cultural contexts. A dynamic approach allows for international collaboration, helping to harmonise standards and reduce regulatory fragmentation across borders, making it easier for AI systems to be adopted globally. As AI systems increasingly impact society, engaging a wide range of stakeholders (developers, users, governments, and civil society) is essential. A dynamic governance structure enables ongoing consultation and input from these stakeholders, making policies more inclusive and grounded in real-world applications and concerns.
AI carries potential risks, including cybersecurity vulnerabilities, job displacement, and unintended consequences from autonomous systems. A dynamic governance model can respond with targeted measures, like evolving security standards or adaptive workforce policies, to address these risks as they become better understood. Public trust in AI technologies is essential for their widespread adoption. A transparent governance framework that adapts to address new risks and concerns can help build public confidence, ensuring that AI systems are seen as safe, fair, and beneficial.
The case for a dynamic technology law commission
A dynamic governance framework for AI allows policymakers and stakeholders to respond flexibly to new insights, ensuring that regulations evolve in tandem with technology to support safe, effective, and ethical AI adoption. A dynamic technology law commission consisting of tech-savvy lawyers and AI experts can help achieve the dynamic governance of AI.
This is an excerpt from the book Artificial Intelligence: A Driver of Inclusive Development and Shared Prosperity for the Global South by Mutambara.
Mutambara is the director and full professor of the Institute for the Future of Knowledge (IFK) at the University of Johannesburg in South Africa.