PacNet #39 – Lavender's impact: time for an arms control agreement for AI?

Written By

  • Manoj Harjani Research Fellow and Coordinator of the Military Transformations Programme at the S. Rajaratnam School of International Studies in Singapore

AI’s increasing presence on the battlefield is a major concern for strategic stability. In the ongoing conflict in Gaza, the alleged use of AI for targeting should raise alarm bells and motivate greater efforts towards regulation and arms control. It is only a matter of time before similar concerns become visible in the Indo-Pacific, where many states are aggressively raising their military spending despite economic difficulties. The Gaza example shows that Indo-Pacific states cannot be bystanders in an environment where there are currently no constraints on developing and using military AI.

Targeting humans with AI

In a recent report, +972 Magazine, an online publication run by Israeli and Palestinian journalists, drew on anonymous insider interviews to claim that the Israel Defence Forces (IDF) has been using an AI-based system called "Lavender" to identify human targets for its operations in Gaza. Worryingly, the same report claimed that "human personnel often served only as a 'rubber stamp' for the machine's decisions."

Responding to these claims, the IDF issued a statement to clarify that it "does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist." However, a report by The Guardian has cast doubt on this rebuttal, citing video footage from a 2023 conference in which an IDF presenter described a target identification tool that bears similarity to Lavender. The reality is that claims by either side cannot be independently verified, a significant concern given that militaries continue to explore how to integrate AI to augment existing capabilities and develop new ones.

Unfortunately, current efforts to regulate military AI and limit its proliferation appear unlikely to catch up, at least in the short term. Even though +972's exposé has garnered global attention, it will not have a tangible impact on encouraging arms control for AI. The levers to achieve progress on that front remain in the hands of major powers, which lack incentives to impose limits on the proliferation of military AI.

In the context of the Indo-Pacific, this will be further complicated by the difficulty of untangling the governance of military AI from other issues, such as conflicting claims in the South China Sea and tensions over Taiwan and North Korea. There is a low probability of Indo-Pacific powers making substantive progress on these issues in the short term, and opportunities for dialogue will wax and wane according to the complex political and security calculations that states must continually reassess.

AI on the battlefield and human control

The fact that militaries are pursuing the adoption of AI despite well-established concerns regarding its potential for errors and biased output should come as no surprise. With no international law or arms control regime regulating or prohibiting military AI, states are effectively unrestrained when deploying these technologies, even if they have committed to their responsible use.

Aside from distinguishing between kinetic and non-kinetic applications, another important distinction when assessing responsible military use of AI is whether an application merely automates a task according to well-defined rules or allows decisions to be made autonomously. Where AI-based systems can make autonomous decisions, it is crucial to identify the extent of autonomy by gauging the degree of human involvement in the decision-making process.

The extent of human control over the decision-making process of autonomous AI-based systems is critical for accountability in responsible military AI. However, as the Lavender example demonstrates, without a legally binding arms control regime, developing verification and enforcement mechanisms is largely meaningless.

Furthermore, dialogue between states on responsible military AI has recently increased, for example through platforms such as the Responsible AI in the Military Domain (REAIM) Summit and the US-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. However, these remain voluntary frameworks geared towards building norms. Indo-Pacific participation in these platforms is also quite uneven: India, for example, has not signed the REAIM Call to Action or the US Political Declaration, and participation by ASEAN member states has been limited, with the exception of Singapore.

Arms control for AI

History has already demonstrated with nuclear weapons that arms control is not a straightforward enterprise. When it comes to developing an arms control regime for AI, there are many barriers beyond major powers simply wanting to avoid constraints regarding the military use of AI. These include a range of procedural challenges that would make negotiations a time-consuming process and consensus very difficult to achieve. Regrettably, trust between major powers, and the US and China in particular, is also in short supply at present.

Many of these barriers can be observed in the ongoing initiative to limit the proliferation of lethal autonomous weapon systems (LAWS) at the United Nations. Since 2014, discussions on LAWS involving more than 120 states have been taking place within the framework of the Convention on Certain Conventional Weapons. In 2017, an open-ended group of governmental experts (GGE) was established, and it has met regularly since then.

Although the GGE agreed on a set of 11 guiding principles for regulating LAWS in 2019, it has struggled to overcome divergence among major powers over the need for a new legally binding instrument. At its most recent meeting in March 2024, the GGE saw disagreement over how to interpret its recently revised mandate to conclude a legally binding instrument by 2026.

Lavender’s impact

Perhaps the most meaningful outcome from +972 Magazine’s report on Lavender has been to shine a spotlight on the risks from military use of AI, and the potential challenges an arms control regime for AI will have to reckon with. Of particular concern are the implications arising from how AI-based systems can rapidly increase a military’s capability to identify and kill targets beyond what human personnel tasked with oversight can realistically assess, especially given the chaotic urgency of war.

Furthermore, as AI blends into the background of military hardware and software, any arms control regime focused on LAWS would cover only some, not all, military uses of AI. Lavender, for instance, would be classified as an AI-based decision-support system rather than a lethal autonomous weapon system. This poses an additional obstacle to developing an arms control regime that covers a wider range of AI applications, particularly given that existing efforts focused narrowly on LAWS have struggled to reach a meaningful conclusion even after a decade of discussion.

While optimists can point to the historic resolution on AI adopted without a vote by the United Nations General Assembly in March 2024, there is a significant risk that progress on regulating and governing civilian AI could leave parallel efforts for military AI behind. Even the European Union's landmark AI Act, passed earlier this year, contains a national security exemption, highlighting the difficulty that AI's inherently dual-use nature poses for governance.

A question mark also remains over the involvement of the private sector in a future arms control regime for AI. Unlike nuclear weapons, which were developed primarily through state-led initiatives, AI’s technological advancement and applications have instead been driven by the private sector.

Even as states have been keen to demonstrate their sovereignty over tech companies through regulation in recent years, it is unclear how they would impose limits on civilian technology and applications being employed for military use. If anything, the wars in Gaza and in Ukraine have demonstrated that private tech companies have become—willingly or otherwise—key actors in contemporary warfare.

In addition to existing efforts by the US, Indo-Pacific powers keen to rein in the proliferation of military AI should focus on building a broad base of state and governance capacity across the region. This is a particular opportunity for the EU: although it lies outside the geographical boundaries of the Indo-Pacific, it has consciously identified the region as an area of focus. Capacity building would be low-hanging fruit for the EU, particularly among Indo-Pacific states in South and Southeast Asia that are still at a nascent stage of thinking about military AI.

Manoj Harjani ([email protected]) is Research Fellow and Coordinator of the Military Transformations Programme at the S. Rajaratnam School of International Studies in Singapore.

PacNet commentaries and responses represent the views of the respective authors. Alternative viewpoints are always welcomed and encouraged.
