A Proposed Stance for India on LAWS

An Indian approach to lethal autonomous weapons systems in today’s world

Authors

1. Executive Summary

As autonomous technologies advance rapidly, autonomous weapons systems are becoming increasingly central to military operations. Lethal autonomous weapons systems (LAWS) in particular pose a host of challenges and raise both security and humanitarian concerns. Given the nature of the technology, an outright ban on LAWS is unlikely to be feasible; we therefore recommend guidelines for a normative framework for their ethical and responsible use.

The following points are the key takeaways:

1. LAWS should be redefined with the ability to perform a range of functions autonomously as the central characteristic, as opposed to autonomous targeting.

2. Human control and accountability should be ensured through both temporal and spatial constraints.

3. LAWS should be regulated based on particular system capabilities: their range and endurance, as well as the payload of the weapons system.

4. LAWS should not be a part of any actor's nuclear force.

5. Domain-specific operational controls should be established for LAWS, with context-specific guidelines for land-based, maritime and aerial operations.

2. Introduction

As artificial intelligence (AI) diffuses across sectors and autonomous technologies advance, lethal autonomous weapons systems (LAWS) are likely to become more central to military arsenals. Before that happens, it is imperative to put guidelines in place to minimise the risks and errors that may come with the deployment of LAWS. Given the nature of the technology itself, LAWS are unlikely to be subject to an outright ban, since both AI and automation are central to a range of other applications. Moreover, many of the countries central to today's world order oppose a complete ban.

3. Context

One major roadblock to the governance of LAWS is the absence of a universally agreed-upon definition of what exactly falls under the term. What is common to most widely accepted definitions today is the highest level of autonomy: weapons systems where the engagement with a target is performed completely autonomously are classified as LAWS. A major reason for countries to define LAWS this way is that it allows anything below that threshold to be deemed acceptable. For instance, partially autonomous weapons systems in which a human operator performs the final kill-switch authorisation do not fall under the ambit of LAWS as they are largely defined today. This allows countries to operate such systems without being burdened by the host of regulations and moral arguments often tied to LAWS.

A misconception exists that there is no need to prioritise the governance of LAWS, since they are still theoretical and have not yet been deployed. That is not the case. While the precise degree of autonomy involved is unclear, both autonomous drones and target identification systems have been deployed in recent years. Additionally, many states are producing and exporting unmanned combat aerial vehicles with autonomous capabilities.

Today, broadly speaking, two blocs exist in the conversation about the governance of LAWS. A large group of countries strongly advocates for a legally binding instrument (LBI) for the governance of LAWS, believing that current legislation falls short. A small handful of countries strongly opposes any legally binding instrument. India has been on the fence thus far, maintaining that an LBI would currently be premature.

Governing LAWS presents several challenges. First, AI, the underlying technology, has already proliferated widely across civilian use cases. Because it is dual-use, civilian and military applications are difficult to separate, which makes it hard to monitor the development of the technology and thus to ensure compliance. While literature on AI verification practices exists, these practices remain untested and difficult to implement. The lack of a globally agreed-upon definition only exacerbates this problem, as the success of arms control mechanisms relies heavily on their specificity. Second, LAWS have clear military utility: AI is currently viewed as a means of securing a 'game-changing' military advantage, which makes countries reluctant to restrict or ban such systems. Countries already working to develop AI technologies are also likely to see the development of LAWS as a natural progression in their technological ascent.

4. Explanation of Terms

The proposals of this brief refer to the following terms, as explained below:

  • Autonomous Weapons Systems (AWS) - Systems which have the capacity to operate entirely without human intervention or oversight, and to select and engage targets on their own.
  • Semi-Autonomous Weapons Systems - Systems which can select and engage targets, and deploy force, only with human oversight.
  • Lethal Autonomous Weapons Systems - For the purpose of this paper, LAWS refers to any weapons system that has the capacity to deploy lethal force and select targets without human oversight or intervention.
  • Unpredictable Weapons Systems - Weapons systems with deep-learning capabilities that continue to learn, including in critical functions, after deployment.

5. Proposed Articles for a Framework on Lethal Autonomous Weapons Systems

For every country, there is an imperative to balance the potential benefits of developing LAWS itself against the costs of its adversaries doing the same. There are both domestic and international concerns. Domestically, any weapons system developed should be in accordance with a country's laws and military ethos: systems should not be discriminatory in any manner, and there should be a clear chain of accountability. The same accountability should be established on the international front.

Governments and militaries are uncomfortable with the repercussions they may face in the event of a technological error, particularly with regard to accountability for casualties. There are also ethical concerns about whether AI systems will be more indiscriminate in their target selection, and a host of questions about transparency and accountability.

If powerful systems with high degrees of autonomy are not constrained, an 'autonomy cascade' can take place. This term, coined for the purpose of this document, refers to a scenario in which ever more tasks are made autonomous, until humans find themselves out of the loop in situations where oversight is required.

Countries should carefully craft their own approach to allow for their own technological developments while ensuring that human control is central to the conversation about LAWS. The following proposals address definitional conundrums and application-based concerns.

Article 1: Redefining Lethal Autonomous Weapons Systems (LAWS) to Emphasise Human Control

  • India should propose a new definition that changes the central characteristic of LAWS. Currently, the primary focus is on autonomous lethal action. Instead, the central characteristic should be the ability of the system to perform functions autonomously, with human control treated as critical to responsible deployment and made mandatory for use.
  • This will allow better risk mitigation, guardrails and safety practices to be put in place even for systems that are partially autonomous or have human oversight.
  • The inclusion of human oversight should not be a barrier to a weapons system with autonomous capabilities being considered LAWS. Instead, human oversight should be a key requirement for the deployment of such systems in environments that have a high risk of error.

Article 2: Ensuring Human Control, Temporal and Spatial Constraints, and Accountability

  • Lethal autonomous weapons systems must be constrained in their ability to operate both across large spaces and over long periods of time, to prevent risks of escalation, liabilities such as civilian casualties, and a loss of operational control.
  • These risks can be understood through a risk hierarchy that considers both the degree of autonomy of a weapons system and its destructive potential, producing the taxonomy below (an illustrative sketch in code follows this list).
  • Unacceptable risk: Fully Autonomous Nuclear Weapons Systems; Lethal (fully) Autonomous Weapons Systems; Unpredictable Weapons Systems.
  • High risk: Swarm Technologies; Semi-Autonomous Nuclear Weapons; Fully Autonomous Non-Lethal Systems.
  • Medium risk: Semi-Autonomous Weapons Systems; Shooters and Sensors of Autonomous Systems.
  • Human authorisers should retain control over the entire process for high-risk systems, from target identification to a potential strike, and should be able to stop any action they see fit.
  • Other functions aside from targeting should also have human oversight to ensure that there is a clear chain of accountability.
  • In all cases, the chain of accountability should be clearly defined.
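To make the hierarchy concrete, a minimal sketch in Python follows. It is purely illustrative and not part of the proposed framework: the RiskTier enum, the RISK_HIERARCHY mapping, the category strings and the required_control function are all hypothetical encodings of the taxonomy above and of the human-control requirements listed in this article.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"

# Hypothetical encoding of the risk hierarchy above: each system
# category is mapped to the tier assigned to it in the taxonomy.
RISK_HIERARCHY = {
    "fully autonomous nuclear weapons system": RiskTier.UNACCEPTABLE,
    "lethal (fully) autonomous weapons system": RiskTier.UNACCEPTABLE,
    "unpredictable weapons system": RiskTier.UNACCEPTABLE,
    "swarm technology": RiskTier.HIGH,
    "semi-autonomous nuclear weapon": RiskTier.HIGH,
    "fully autonomous non-lethal system": RiskTier.HIGH,
    "semi-autonomous weapons system": RiskTier.MEDIUM,
    "autonomous system shooter or sensor": RiskTier.MEDIUM,
}

def required_control(category: str) -> str:
    """Map a system category to the human-control requirement
    implied by Article 2 (illustrative only)."""
    tier = RISK_HIERARCHY[category]
    if tier is RiskTier.UNACCEPTABLE:
        return "no deployment"
    if tier is RiskTier.HIGH:
        return ("human control over the entire process, from target "
                "identification to strike, with the ability to abort")
    return "human oversight with a clearly defined chain of accountability"

print(required_control("swarm technology"))
```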

Article 3: Regulating LAWS Based on System Capabilities

  • Guidelines should be established to prevent systems with advanced loitering capabilities from having high levels of autonomy. Loitering refers to the ability of an unmanned or fully autonomous system to remain near a target area for extended periods, waiting for a target to be identified or to attack a specific target. This allows regulation of contexts in which the time differential between the deployment of a system and its engagement with a target can lead to risks of error or civilian casualties. Context-appropriate regulations need to be developed, as explained in Article 5.
  • Systems that combine advanced explosive payload capabilities with high degrees of autonomy should also be constrained.
  • Systems that can loiter for prolonged periods should have no autonomy in engaging targets, given the high risks of misidentification and error over long durations.
  • Systems with little to no ability to loiter can have more autonomy in functions like navigation and target identification, but even in these cases clearly designated human operators should remain in control of the ultimate lethal engagement with a target (see the sketch following this list).
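The following is a minimal sketch, under stated assumptions, of how this article's loitering and payload constraints might be operationalised. The SystemProfile fields, the permitted_autonomy function and the numeric thresholds PROLONGED_LOITER_HOURS and HEAVY_PAYLOAD_KG are hypothetical placeholders, since the article deliberately leaves exact limits to the context-specific regulation described in Article 5.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; the actual limits
# would be set by context-appropriate regulation (see Article 5).
PROLONGED_LOITER_HOURS = 6.0
HEAVY_PAYLOAD_KG = 50.0

@dataclass
class SystemProfile:
    loiter_hours: float          # endurance near the target area
    explosive_payload_kg: float  # explosive payload capacity

def permitted_autonomy(profile: SystemProfile) -> dict:
    """Sketch of Article 3: permitted autonomy shrinks as loiter
    endurance and explosive payload grow (illustrative only)."""
    prolonged = profile.loiter_hours >= PROLONGED_LOITER_HOURS
    heavy = profile.explosive_payload_kg >= HEAVY_PAYLOAD_KG
    return {
        # Low-loiter systems may have more autonomy in navigation; this
        # sketch withholds it only from prolonged-loiter, heavy-payload systems.
        "autonomous_navigation": not (prolonged and heavy),
        # Either prolonged loitering or a heavy payload constrains
        # autonomy in target identification.
        "autonomous_target_identification": not (prolonged or heavy),
        # The ultimate lethal engagement always rests with a clearly
        # designated human operator, whatever the profile.
        "autonomous_engagement": False,
    }

print(permitted_autonomy(SystemProfile(loiter_hours=12.0, explosive_payload_kg=10.0)))
```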

Article 4: Prohibition of LAWS in Conjunction with Nuclear Capabilities

  • LAWS should not be used in tandem with any sort of nuclear strike capability.
  • The loss of operational control over such systems would incur huge costs, with major risks of escalation and civilian casualties.

Article 5: Establishing Domain-Specific Operational Controls for LAWS Deployment

  • Domain-specific guidelines should be proposed for using LAWS in land-based, aerial, and maritime operations. These guidelines should be drafted with context-specific considerations in mind.
  • For LAWS used in land-based operations, direct coordination between the system and its operators should be required for any target confirmation. Coordination should also be mandatory to identify and confirm surrender.
  • In the maritime context, it is recommended that there be clear oversight for all lethal actions, especially for long-endurance systems such as submarines.
  • For aerial combat, commanders may in some cases choose not to engage enemy targets out of concern for escalatory risks. Engagement decisions in these situations should therefore require mandatory confirmation from a clearly designated operator.

Article 6: Path to a Legally Binding Instrument

  • Once these principles have been widely adopted and norms centred on responsible use have developed around them, India should advocate for a legally binding instrument.

6. Conclusion

As LAWS transition from theoretical concepts to central components of modern military arsenals, a complete ban seems improbable given the dual-use nature of AI and the military advantage it provides. LAWS should be seen not as a novel form of technology, but as a natural continuation of older forms of autonomy, such as surface-to-air missiles and anti-personnel sentry guns. Those older weapons were constrained within their own operational contexts, as LAWS will be in the future.

For India, navigating this shifting landscape requires moving beyond its current ambiguity towards a proactive strategy that promotes normative development. An Indian approach is possible that champions the responsible development and deployment of LAWS while safeguarding India's own security interests and broader ethical considerations. Human control should be at the heart of India's stance: a mandatory component of the deployment of LAWS, rather than a factor that excludes a system from regulation. A definitional shift, coupled with a normative framework of application-based guidelines, will help establish a robust path towards accountability and risk mitigation.

By spearheading these principles, India can safeguard the geopolitical interests stemming from its regional security environment while solidifying its role as a responsible middle power in shaping the governance of AI in warfare. Fostering the normative development of principles that are responsible yet receptive to technological change will pave the way for the eventual development of a legally binding instrument.