Both advocates and, more discreetly, diplomats lament the slow progress and the political entrenchment of the main forum for deliberations on autonomous weapons. For each of the past ten years, the Convention on Certain Conventional Weapons (CCW) has brought together national representatives, experts, and civil society advocates for a few weeks in Geneva. In tranquil surroundings that feel cut off from the world of battlefields and technological development, the CCW has functioned as a crucial forum for nations to gain a deeper understanding of pertinent issues and engage in dialogue with leading technical experts.
However, aside from some watered-down principles based on voluntary compliance, no concrete instrument to regulate the development and use of autonomous weapons has emerged from these meetings. To be fair, the CCW, which operates on consensus, has been a victim of the broader fraying of the global order and geopolitical tensions among major states. Yet the CCW may have completed its tenure as an incubator for these discussions. A new forum is now required to expedite the regulatory process.
Would another forum yield better results? We can’t know for certain, but exploring alternative forums seems a necessary next step.
A discussion on responsible military artificial intelligence (AI) has been running parallel to the CCW process for several years. It highlights the reality that autonomous weapon systems are rapidly becoming an aspect of modern AI-enabled warfare.
New AI-enabled technologies are reshaping the ways in which militaries perform logistics, surveillance, and target selection. In a sign of this new era, United States Air Force Secretary Frank Kendall was recently a passenger in an F-16 fighter jet piloted by artificial intelligence.
Responsible military AI is the focus of the REAIM Summits. The first Summit on Responsible Artificial Intelligence in the Military Domain, REAIM 2023, was held last February in The Hague. The second will be held in the Republic of Korea September 9–10, 2024.
But current discussions on responsible military AI are not addressing the need for clear regulation of autonomous weapons. At the close of the first REAIM Summit, the United States released its proposed Political Declaration on Responsible Military AI and Autonomy. So far, 53 states, including Canada, have signed on to this voluntary declaration.
Paul Scharre, author and Executive Vice President at the Center for a New American Security, notes in Foreign Affairs that this declaration does provide general guidelines on the use of autonomous weapons. However, he notes that it “lacks meaningful restrictions on the most dangerous forms of autonomous weapons, such as antipersonnel autonomous weapons and autonomy in nuclear operations.” While the declaration can lead to greater sharing of standards and best practices—largely among American allies—a much stronger, more binding, and specific instrument is needed to bring less likeminded states into line.
Over the past year, countries including Costa Rica and Sierra Leone have hosted dialogues on autonomous weapons. Most recently, Austria hosted an international conference on April 29–30 in Vienna that was attended by representatives of 140 states and included more than 1,000 participants from government, academia, and civil society. But the conference and regional dialogues have not yet emerged as a clear alternative forum to the CCW. Last November, the First Committee of the United Nations (UN) General Assembly, which deals with disarmament and threats to global peace, adopted a first-ever resolution on autonomous weapons. But the resolution simply called for further dialogue, without offering a clear answer on forum and scope.
While civil society laments offstage, states must determine any new forum for discussion – and such a change requires a display of political will. The UN General Assembly is a more inclusive choice and, as it works on a majority vote rather than consensus, it could be more effective. However, the CCW’s lack of progress suits various actors, including Russia but also the United States and its allies, which do not want any legally binding instruments constraining their use of emerging technologies.
Another forum might not be more effective if the positions of major countries remain fixed and if middle powers do not provide leadership to bring outlying states to the table.
States must also decide on a relevant focus. A decade ago, the International Committee of the Red Cross defined autonomous weapons as those that “once activated, can select and engage targets without further human intervention.” But on today’s battlefields, AI decision-support systems and target-generation systems are diminishing human control over targeting. Humans have mere seconds to approve actions suggested by AI.
One thing is clear: States must address the increasing autonomy of weapons platforms and the use of AI-enabled targeting that precludes significant human control. States must commit to the development of a global, legally binding instrument that ensures human control over the targeting of humans and civilian infrastructure. The responsible military AI dialogues can complement this work and also bolster international humanitarian law to address issues of emerging technologies.
In addition to a new global agreement, states must develop domestic regulations, standards, and training for their own militaries. Though progress has been sluggish, the chance to regulate AI-enabled weapons remains within reach. This is an opportunity that must not be allowed to slip away. •
Branka Marijan studies autonomous weapons at Project Ploughshares.
Peace Magazine. Some rights reserved.