AWS: The dark side of AI (II)

The unrestrained race for AI supremacy among Chinese, Russian and United States researchers does not augur well for cooperation.

THERE are also potential dangers and risks associated with the technology — the dark side of Artificial Intelligence.

Battlefield singularity is a tipping point that forces rational humans to surrender control to machines for tactical decisions and operational-level war strategies.

When that condition is achieved, an army that keeps humans in its decision loops will cede competitive advantage to an enemy that does not.

Hence, with the attainment of battlefield singularity, using autonomous weapons systems becomes an existential matter. It is no longer a “nice to have” or some intellectual curiosity.

AWS have to be deployed for survival! With AWS, machines would select individual targets, plan battlefield strategy and execute entire military campaigns.

Furthermore, autonomous reactions at AI-determined speeds and efficiency could drive faster execution of battle operations, accelerating the pace of military campaigns toward defeat or victory. Humans’ role would be reduced to switching on the AI systems and passively monitoring the battlefield, leaving them with a diminished capacity to control wars.

Even the decision to end a conflict might inevitably be ceded to machines. What a brave new world! What are the implications of autonomous battles and wars?

There is a concern that autonomous weapons could increase civilian casualties in conflict situations. Admittedly, these weapons could conceivably reduce civilian casualties by precisely targeting combatants.

However, this is not always the case. In the hands of bad actors or rogue armies that are not concerned about non-combatant casualties — or whose objective is to punish civilians — autonomous weapons could be used to commit widespread atrocities, including genocide.

Swarms of communicating and cooperating autonomous weapons could be deployed to target and eliminate both combatants and civilians.

Autonomous nuclear weapons

The most dangerous type of autonomous weapons system (AWS) is the autonomous nuclear weapons system (ANWS). These are obtained by integrating AI and autonomy into nuclear weapons, leading to partial or total machine autonomy in the deployment of nuclear warheads.

In the extreme case, the decision to fire or not fire a nuclear weapon is left to the AI system without a human in the decision loop. Now, this is uncharted territory, fraught with unimaginable dangers, including the destruction of all civilisation.

However, it is an inevitable scenario in future military conflicts.

Why?

Well, to avoid this devastatingly risky possibility, binding global collaboration is necessary among all nuclear powers, particularly Russia, China, and the United States.

Given their unbridled competition and rivalry regarding weapon development and technology innovations, particularly AI, there is absolutely no chance of such a binding agreement.

This is compounded by the bitter geopolitical contestations among these superpowers, as exemplified by the cases of Ukraine, Taiwan, and Gaza.

Furthermore, there is deep-seated distrust and non-cooperation among the nuclear powers even on basic technologies, as illustrated by the unintelligent, primitive and incompetent bipartisan decision (352 to 65) of the US House of Representatives on March 13, 2024, to ban TikTok in the United States unless its Chinese parent company divests it.

Also instructive is the 2019 Huawei ban, under which US companies may not supply the firm without government approval and its equipment is effectively barred from US networks.

There is also restricted use of Google, Facebook, Instagram, and X in China and Russia. Clearly, the major nuclear powers are bitter rivals in everything technological!

Given this state of play, why would the Chinese and Russians agree with the United States on how and when to deploy AI in their weapons systems, be they nuclear or non-nuclear?

As it turns out, the evidence of this lack of appetite for cooperation is emerging.

In 2022, the United States declared that it would always retain a “human in the loop” for all decisions to use nuclear weapons. In the same year, the United Kingdom adopted a similar posture.

Guess what?

Russia and China have made no such commitment. Given the prevailing state of play described above — conflict, competition, geopolitical contestation, rivalry and outright disdain — why should the Russians and Chinese play ball?

In fact, the Russians and Chinese have started to develop nuclear-armed autonomous airborne and underwater drones.

Of course, the danger is that such autonomous nuclear-armed drones operating at sea or in the air can malfunction or be involved in accidents, leading to the loss of control of nuclear warheads, with unimaginably devastating consequences.

Future of AWS

Autonomous weapons systems will be a crucial part of warfare in the not-so-distant future. More significantly, autonomous nuclear weapons are on the horizon.

As explained earlier, attempts to ban them entirely, although well-meaning, are likely to fail. Indeed, without effective regulations, rules and restrictions, autonomous weapons will reduce human control over warfare, presenting increased danger to civilians and combatants alike.

Unchecked AWS will threaten and undermine peace and stability. Global cooperation is urgently needed to govern their development, limit their proliferation, and guard against their misuse. However, the utility and appeal of the technology must not be underestimated. Autonomous weapons have not yet been fully developed; hence, their potential harm and military value remain open questions.

Therefore, political and military leaders are circumspect and non-committal about forgoing potentially efficacious weapons on the basis of speculative and unsubstantiated fears.

The tactical and strategic military value of AWS is simply too immense to go unexplored. Beyond autonomous weapons, advanced AI systems have demonstrated efficacy in the development of cyber, chemical, and biological weapons.

Understanding autonomous weapons is critical for addressing their potential dangers while laying the foundation for collaboration on their regulation.

Moreover, this is preparatory work for future, even more consequential AI dangers occasioned by cyber, chemical and biological weapons.

Concluding remarks

Autonomous weapons systems are likely to become more sophisticated and capable due to advancements in AI, robotics, and sensor technologies.

This could lead to systems with greater autonomy, decision-making capabilities, and adaptability on the battlefield.

Society will continue to grapple with the profound legal and ethical challenges surrounding the use of AWS — accountability, discrimination, proportionality, and adherence to international humanitarian law.

Efforts to establish regulations, treaties, or guidelines to govern the development and use of such systems must be redoubled.

The proliferation of autonomous weapons could significantly affect international relations and security dynamics.

As more countries develop and deploy these technologies, the dangers of an arms race, conflict escalation, and global security destabilisation will grow.

There is also scope for the development of human-machine collaborative systems — human augmentation in military operations. Humans and autonomous weapons can work together synergistically on the battlefield.

This approach could leverage the strengths of both humans (e.g., judgment, creativity, empathy) and machines (e.g., speed, precision, efficiency) while mitigating some ethical concerns.

Public perception and acceptance of autonomous weapons will be key determinants of their future. Debates, protests, and advocacy efforts regarding these technologies’ ethical implications and risks will continue.

These could influence policy decisions and research priorities. Indeed, the future of autonomous weapons systems will hinge on an intricate interplay of advances in AI systems, ethical considerations, international norms, and policy decisions.

Policymakers, researchers, and society must carefully and continuously assess the potential impacts and implications of AWS.

Welcome to the brave new world of AI. Indeed, there are great opportunities and potential risks, in equal measure.

Of course, the bulk of our efforts must be to develop and deploy AI systems to solve social, economic, and environmental challenges worldwide.

AI must not leave anyone behind. However, it will be remiss of us, an unconscionable dereliction of duty, if we do not seek to understand, anticipate and mitigate the dark side of Artificial Intelligence.

 

• Mutambara is the director and full professor of the Institute for the Future of Knowledge at the University of Johannesburg in South Africa. He is also an independent technology and strategy consultant and former deputy prime minister of Zimbabwe.