AI-Powered Warfare: The New Battlefield for Legal and Geopolitical Conflicts

By Navya Chauhan



INTRODUCTION

Imagine a world where machines, not humans, make life-or-death decisions. Autonomous weapons, once confined to the realm of science fiction, have already become a tangible reality. As artificial intelligence propels humanity towards a future where machines possess the autonomy to kill, profound questions arise about the implications for warfare, international law, and the very nature of national security.


As the name suggests, these self-directed machines are designed to make decisions without human intervention, challenging the very foundations of international security and ethical warfare. As nations race to harness the power of autonomous systems, it becomes imperative to examine the implications of their deployment.


WHAT ARE AUTONOMOUS WEAPONS?

The International Committee of the Red Cross (ICRC) defines autonomous weapons as weapons that select and apply force to targets without human intervention. Once activated, these systems rely on sensors and algorithms to identify and engage targets based on predetermined parameters. This means that the individual who activates the weapon has no precise control over who or what is ultimately harmed, or when and where the attack occurs. The onset of the 21st century has witnessed remarkable advancements in artificial intelligence and robotics, giving rise to autonomous weapons systems (AWS). These technologies, ranging from unmanned aerial vehicles (UAVs) to automated ground systems, are set to transform the battlefield. Their rise signals a paradigm shift in warfare, in which machines, rather than humans, hold the reins of lethal force.


STRATEGIC IMPLICATIONS IN AUTONOMOUS WARFARE

At the heart of the debate over deploying autonomous weapons lies the issue of accountability. When machines wield the power of life and death, determining who is responsible for their actions becomes a complex and contentious question. The absence of human judgment in these critical moments raises concerns about harm to civilians and violations of international humanitarian law. Furthermore, the race for supremacy in autonomous weapons is reshaping the global power landscape. These systems give technologically advanced nations a decisive edge, raising the possibility of an arms race in which countries scramble to develop and deploy increasingly sophisticated AI-driven systems. Consequently, the strategic concepts of deterrence, defence, and offense will inevitably evolve, redefining military capabilities and national security around the globe.

The proliferation of autonomous weapons also signals a new era of asymmetric warfare, in which their widespread availability may empower non-state actors and smaller nations to challenge more powerful adversaries. An asymmetric conflict is characterized by an imbalance in the military capacity of the warring parties, such as differences in weapons technology, equipment, intelligence, and troop numbers. Such conflicts often involve strategies and tactics of unconventional warfare, with the 'weaker' combatant attempting to exploit the stronger party's characteristic weaknesses and offset deficiencies in quantity or quality. The relative affordability and accessibility of autonomous systems may democratize warfare, enabling a wide range of actors to wield significant military power. This shift could destabilize established power structures and complicate efforts to maintain global security and stability.


AI AND INTERNATIONAL LAW: A COMPLEX INTERSECTION

The rapid advancement of autonomous weapons systems has precipitated a crisis in international law: the existing legal frameworks, primarily International Humanitarian Law (IHL), were designed for a battlefield dominated by humans, not machines. The emergence of AWS, with their capacity for independent action, challenges the core principles of IHL. IHL imposes strict limitations on the conduct of warfare. Its key principles include distinction, which requires clearly differentiating between combatants and civilians; proportionality, which mandates that the anticipated military advantage outweigh the potential civilian harm; and humanity, which prohibits unnecessary suffering. Additionally, the principle of military necessity limits the use of force to what is required to achieve a legitimate military objective, and the principle of honor requires the warring parties to conduct operations with a certain level of respect and fairness. Finally, the principle of precaution requires that constant care be taken to spare the civilian population, civilians, and civilian objects: all feasible precautions must be taken to avoid, and in any event to minimize, incidental loss of civilian life, injury to civilians, and damage to civilian objects, taking into account all circumstances prevailing at the time, including humanitarian and military considerations.


The deployment of autonomous weapons systems risks violating these principles because such systems cannot reliably make the complex ethical and legal judgments IHL requires. For instance, the principle of distinction may be compromised if AWS cannot accurately distinguish combatants from civilians, while the principle of proportionality may be jeopardized if AWS cannot adequately weigh military advantage against potential civilian harm. The principle of humanity could be undermined by the risk of unnecessary suffering inflicted by AWS operating without direct human oversight.


The integration of AI into weapons systems could thus increase violations of these principles, as algorithms may struggle to distinguish between combatants and civilians in complex environments. Moreover, calculating the expected civilian harm of an attack requires nuanced judgment and an understanding of the cost of human loss in a conflict. Increasing reliance on autonomous systems could also dehumanize warfare, raising the likelihood of unnecessary suffering.



The Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) has released eleven guiding principles for state parties on the development, deployment, and use of these systems, reaffirming that IHL applies fully to LAWS. However, several countries believe that only a legally binding instrument can effectively regulate LAWS. This view is supported by the Belén Communiqué, the CARICOM declaration, and the UN Secretary-General's New Agenda for Peace, all of which emphasize the urgent need to negotiate such an instrument.


In December 2023, the UN General Assembly adopted a resolution requesting the UN Secretary-General to seek the views of member and observer states on LAWS and to submit a substantive report. The resolution also placed LAWS on the provisional agenda of the Assembly's seventy-ninth session in 2024, initiating a parallel process on the issue.


PUTTING AI TO THE TEST

The influence of AI on the battlefield is no longer a theoretical blueprint but a present reality: the ongoing conflict in Ukraine has emerged as a testing ground for the integration of AI and autonomous systems into warfare. While concrete evidence of fully autonomous weapons remains scarce, the use of drones equipped with AI for target identification and tracking has increased significantly. These weaponized drones, often referred to as "kamikaze drones," can operate independently, searching for targets and engaging them. Furthermore, AI-powered surveillance and targeting systems such as 'Lavender' and 'Gospel' are designed to automate target identification and tracking, raising concerns about potential human rights abuses. Countries including Israel, Russia, China, and the United States are actively investing in these technologies.


More broadly, the rapid development of AI-powered weapons systems poses significant challenges to international law and security. The risk of misuse, accidents, and escalation of conflict is high. Additionally, the opaque nature of AI algorithms makes it difficult to assess their compliance with international humanitarian law and to hold states accountable for their actions.


CONCLUSION

The dawn of a new era in warfare gives rise to a broad spectrum of threats and challenges at the intersection of AI, law, and geopolitics. Autonomous weapons, with their transformative potential, challenge the very fabric of international security and humanitarian law. To navigate these complexities, thoughtful regulation, international cooperation, and a commitment to preserving human rights are imperative.


The international community must prioritize the creation of legally binding frameworks to regulate the development and deployment of autonomous weapons. This includes adopting new protocols under the Convention on Certain Conventional Weapons (CCW) and establishing clear accountability mechanisms to address violations. Moreover, establishing independent monitoring bodies to oversee these systems' compliance with international humanitarian law is crucial to ensuring accountability. Multilateral forums such as the United Nations should serve as platforms for dialogue and cooperation on autonomous weapons, where countries can share knowledge and effective practices and develop consensus on the responsible use of AI in warfare.


It is incumbent upon the international community to ensure that the march of technological progress does not eclipse our moral and legal responsibilities. Through collective effort and a dedication to ethical principles, nations worldwide can forge a path towards a future where technology enhances, rather than undermines, the security and dignity of all nations.
