Peace Magazine: AI and War


• published May 18, 2024 • last edit May 30, 2024

Most AI (artificial intelligence) applications are positive and have led to an improvement in the quality of human life. In connection with military applications, however, considerable dangers can arise, including loss of control.

That possibility has come much closer and faster in the 21st century, as human intelligence is increasingly supplanted by AI, which could also influence the course of life-threatening conflicts.

Such events were not even contemplated when Immanuel Kant, born 300 years ago, formulated fundamental ideas on an international peace policy in his 1795 essay Perpetual Peace. Those ideas stand in contrast to today's reality of increasingly militarized conflicts. Kant called for the warlike state of humanity to be overcome in favor of conscious peacemaking:

He believed in a "covenant of peace" that would differ from peace treaties (which end only one war) by seeking to end all wars forever.

But what does this mean amid current global developments – especially the influence of AI on the Internet?

Algorithms, the Internet and artificial intelligence were unknown and unimaginable for Kant. Does the ideal peace policy therefore still lie in peace treaties and peace alliances? And do we now need a new way of influencing the development of AI in individual countries?


Many regions and states are currently in major conflicts that are being fought by military means or are on the brink of war. Military interventions are increasingly replacing diplomatic and civil conflict resolution.

The threats are expanding: the war in the Middle East between Israel and Hamas, the latter supported by Iran; the war between Russia and Ukraine, backed by the Western camp; the threatening standoff between the People's Republic of China and Taiwan, which is supported by the U.S.; and, with diplomacy rejected amid nuclear threats, the conflict between North and South Korea. Further escalation of any of these could produce a third world war involving nuclear weapons. Nuclear powers are involved in all of these crisis situations, and in the case of North Korea and Russia, they are even openly threatening to use nuclear weapons.

But with the advent of artificial intelligence, the human safeguards against responding to falsely detected acts of aggression are increasingly disappearing.

At the same time, the United Nations is increasingly weakened as a diplomatic hub for peacemaking. The structure of the UN itself, with the veto rights of the five permanent members of the Security Council and the relative powerlessness of the UN General Assembly, means that the UN is institutionally helpless in current conflicts.

Now, however, a problem is looming for which neither the UN, the European Union, nor national politics is prepared: the development and use of AI in modernized weapons systems.


An escalating trend towards confrontation rather than cooperation between the major industrialized nations and military powers would fuel an unchecked arms race. Now, this also applies to software-based weapons such as cyber and autonomous weapons.

In a confrontation between the major nations, no one will take the risk of lagging behind their competitors in the technologically important areas of cyberspace and AI. Software development in these areas can take place completely uncontrolled and in secret. None of the nations can know what capabilities an opponent already has or will have in the near future.

That is why each side must go to great lengths to keep up. An arms race in the field of AI could lead much faster than expected to extremely dangerous military products that are almost impossible to control.

It is increasingly likely that the arms race has already passed the stage at which internationally coordinated control of AI in weapons systems was still possible.

Arms control and verification are possible for conventional weapons systems. Aircraft, ships, tanks and nuclear weapons can be counted. With cyber weapons and AI-based weapons, however, it is all about software.

Software systems have special characteristics that make arms control and the verification of agreements hardly feasible. No state will allow employees of an opposing state to gain access to its own software in order to verify arms control agreements: the risk would clearly be too high that the opponent could obtain a copy of this software. Furthermore, merely checking the possible functionalities would be very time-consuming and could take years, during which the software would be developed further anyway.

It is completely incalculable what the future holds for us in terms of software-based weapons.

Disarmament agreements relating to software will hardly be possible. With software, any number of copies can be produced in a short space of time and can be used as often as required. Once developed, software for autonomous weapons will always be available.


Major advances in the field of artificial intelligence have also led to corresponding advances in military technology. In particular, autonomously acting robots or drones can also be used for military purposes. Enemy targets can be identified and attacked on the basis of automatic image recognition with good object classification.

There is a wide range of applications for autonomy in weapon systems. Many types of weapons can be equipped with more and more autonomy. This applies to robots, vehicles, flying objects and even ships or submarines.

In such systems, humans can be replaced by AI components. As with autonomous driving, this can be done gradually. Modern cars already contain many autonomous functions, up to fully autonomous driving, although for legal reasons a human must still be able to intervene at any time: accidents involving autonomous vehicles are not yet tolerated in road traffic. In war situations, however, collateral damage is more likely to be accepted, so even less sophisticated autonomous functions could be deployed. This increases the risk of such systems being used in times of war.

Modern warfare is not just about weapons, but also about the increasing use of AI in reconnaissance, determining the situation of the enemy and one’s own armed forces, and planning actions. AI-based software is also being used for these purposes now in the war in Ukraine.

Decisions regarding the selection of targets are also increasingly being made by machines. In the Gaza war, Israel is using AI systems to determine targets. These systems provide significantly more targets with precise location information on members of Hamas than would be possible with intelligence information.

In the weapons systems of the future, the attack chain, the so-called "kill chain," will be compressed in time, possibly to a few seconds. This chain runs from observation and identification of targets, through planning and decision-making, to the execution of an action. AI can be used in all of these phases, shortening the entire decision-making process to such an extent that humans can barely intervene.
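The compression of this chain can be illustrated with a small sketch. Everything here is hypothetical: the function names, the timings, and the notion of a fixed "time budget" are illustrative assumptions, not features of any real system. The point is only that once the chain leaves less time than a human needs to react, the human role becomes nominal:

```python
# Hypothetical sketch of the kill-chain phases named above (observe,
# identify, plan, decide, execute). All names and numbers are invented
# for illustration; nothing here describes a real weapons system.

from dataclasses import dataclass

@dataclass
class ChainResult:
    action: str
    decided_by: str  # "human" or "machine"

def run_chain(observation, classify, plan_action,
              human_response_time_s, time_budget_s):
    target = classify(observation)      # identify (e.g. image recognition)
    action = plan_action(target)        # plan an action against the target
    # decide: if the chain's time budget is shorter than the time a human
    # needs to respond, the decision effectively falls to the machine.
    if human_response_time_s <= time_budget_s:
        return ChainResult(action, "human")
    return ChainResult(action, "machine")

# With the chain compressed to ~3 seconds, even a fast human decision
# (~10 seconds) misses the window:
result = run_chain("sensor frame",
                   lambda obs: "target-A",
                   lambda tgt: f"engage {tgt}",
                   human_response_time_s=10.0,
                   time_budget_s=3.0)
print(result.decided_by)  # → machine
```

The sketch shows the structural problem, not a solution: shortening `time_budget_s` does not remove the decision step, it only shifts it away from the human.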

Even though politicians repeatedly emphasize that ultimately the decision on the lethal use of a weapon must remain with a human (“man in the loop”), some experts are questioning this principle: In situations in which a machine recognizes that a soldier’s life is threatened, it must also be able to decide independently whether to use a weapon if it would take too long to involve a human being.

Lahl and Varwick (2021, p. 134) argue similarly in their book Understanding security: “Formal competence is one thing, the actual chance of intervention is another. The more complex a collective network of semi-autonomous weapon systems is, the more impossible it becomes for the controlling human to see through the ‘black box’ and recognize errors or manipulation — in other words, to understand, evaluate and, if necessary, correct the results delivered by algorithms. In highly intensive situations under extreme time pressure, their role is…reduced to pseudo-control.”

In most cases, it will not be possible to meet the requirement that people in the decision-making chain be able to evaluate situations with sufficient certainty ("meaningful human control").

When using neural networks (which are loosely modelled on the human brain), the machine's decisions are not comprehensible anyway. Even with other AI methods, it is generally not possible to give easily comprehensible reasons for a machine's decisions. Instead, decisions are based on hundreds of features that are uncertain and vague and are combined in complex calculations. Simple control will not be possible here.


An unchecked arms race of nuclear powers on a confrontation course also increases the risk of nuclear war to a considerable extent.

In recent years, a new arms race has already begun in various military dimensions. Most of these developments are still in their early stages, and their consequences can hardly be predicted. This applies to new nuclear weapon delivery systems such as hypersonic missiles, the planned weaponization of space, laser weapons, the expansion of cyber warfare capabilities, and the increasing use of artificial intelligence systems, up to and including autonomous weapon systems.

All of these aspects also interact with early warning systems for detecting nuclear missile attacks and will significantly increase the complexity of these systems.

The further development of weapon systems with greater accuracy, improved maneuverability and ever shorter flight times (hypersonic missiles) will increasingly require AI techniques to automatically make decisions for certain subtasks.

In connection with early warning systems, there are already calls for the development of autonomous AI systems that evaluate an alarm message fully automatically and trigger a counter-attack if necessary, as there is no time left for human decisions. However, the data available for a decision is usually vague, uncertain and incomplete.

This is why even AI systems cannot make reliable decisions in such situations. And in the short time available, it will hardly be possible to check the machine’s decisions. Humans can only believe what the machine delivers. Due to the uncertain and incomplete data basis, neither humans nor machines will be able to reliably evaluate alarm messages.

According to a November 2019 report by the U.S. National Security Commission on Artificial Intelligence, there is a risk that AI-enabled systems could track and attack previously invulnerable military positions, undermining global strategic stability and nuclear deterrence.

States could thus be tempted to behave more aggressively, which could increase the incentives for a first strike. The report also proposes agreements between the U.S., Russia, China and other nations to achieve a ban on the launch of nuclear weapons authorized or triggered by AI systems.

The SIPRI report on the impact of AI on strategic stability and nuclear risks also warns against the increasing use of autonomous or AI-based decision support systems, which only appear to provide a clear picture in a short space of time. In order to maintain a degree of stability, an exchange between military forces on the respective AI capabilities is necessary in order to uphold the principle of nuclear deterrence.


Cyber attacks are another incalculable risk: components or data of an early warning system could be manipulated, and this could be possible in a variety of ways.

There have been some surprises in civilian AI applications in recent years, with unexpected capabilities achieved, most recently with generative AI systems such as ChatGPT. In 2023, the world’s leading AI scientists and heads of major AI companies issued urgent warnings about the potential risks of this development. Superintelligence that far exceeds the level of human intelligence is also considered possible in the coming years.

With the help of deepfake techniques and generative AI systems, masses of texts, images and videos can be generated that convey supposed facts. Such disinformation can be used to manipulate people and destabilize societies. If more and more media content is generated automatically without the possibility of verifying its truthfulness, political action in democratic states will become increasingly difficult. In an era of growing tension and tribalism, the result could be chaos: social upheaval, riots, and possibly civil wars.

Increasing political friction, combined with ever more dangerous weapons systems, such as hypersonic missiles and the trend towards AI-based weapons described above, forms a mixture that is becoming unmanageable for our political systems and could easily lead to a global catastrophe through misunderstanding, for example in the form of an accidental nuclear war.


A policy based solely on mutual confrontation between the West and Russia or China will result in dangerous weapons systems being further developed on all sides with the highest priority, including the incorporation of AI technologies. The current wars provide an ideal testing ground for perfecting these military capabilities.

To avoid a global catastrophe that could lead to the annihilation of humanity, this process must be reversed. And it is not difficult to see what should be done.

It is imperative that the current wars be ended as quickly as possible, before they harden into an irreversible cycle of violence. Instead of supplying weapons to war zones, extensive diplomacy is urgently needed.

Instead of mutual confrontation, trust, cooperation and good communication channels must be rebuilt and improved. The economic and geostrategic interests of the various sides must be taken into account in negotiation processes.

Instead of stationing new hypersonic missiles in East and West, effective arms control agreements, including nuclear disarmament, are required. Global agreements to ban autonomous weapons systems and regulate AI are also urgently needed.

Meanwhile, dependence on Internet services for critical infrastructure should not increase any further. Instead, important infrastructure systems, such as health care and power supply, must be able to function flawlessly even without the Internet. It must also be ensured that dangerous weapons systems, such as nuclear missiles, cannot be controlled via the Internet.

These are all realizable protective measures, and should be put into effect as soon as possible.

The principles of peacemaking embraced centuries ago by Kant are equally pertinent today.

Some have suggested that the UN could be the basis for his "covenant of peace," which absolutely condemns war as a legal process but makes a state of peace an "immediate duty," established by a treaty between nations.

But that would depend on reforming the UN through processes affecting both its structure and the legal status of the people it represents. It would also require financial independence, which is very difficult when the world body relies on funding and contributions from its 193 member countries.

Strengthening the UN's capacity to make the decisions vital for a perilous future would also require a more democratic structure. Power is now concentrated among the world's wealthiest and nuclear-armed nations, which are in a position to thwart such efforts, or any effort to achieve legitimate and institutionally balanced control over the decision-making bodies.

Although strong resistance to any UN-coordinated control of AI development is to be expected from powerful actors defending their own interests, a solution must be found, and quickly. Previous UN attempts (the UN Advisory Council on Artificial Intelligence) have so far produced only inadequate, non-binding recommendations.

At a time of crucial transition, the world is facing a choice between diplomacy and disaster, as never before. Too much is at stake here in achieving responsible AI development that affects the fate of the global community as a whole.

Published in Peace Magazine Vol.40, No.2 Apr-Jun 2024
