Thu. Feb 16th, 2023

The Future of War: Best Practices for Military AI in the 21st Century

CBP’s Autonomous Surveillance Towers conduct autonomous surveillance operations: each tower scans the environment with radar to detect movement, orients a camera toward the location of the detected movement, and analyzes the imagery using algorithms to autonomously identify items of interest, such as people or vehicles. (Image credit: U.S. Customs and Border Protection)
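
The caption describes a classic sensor-cueing pipeline: a wide-area sensor (the radar) flags motion, a narrow-field sensor (the camera) slews to the cue, and a classifier triages the imagery for human review. As a rough illustration only, such a loop might be structured like the Python sketch below; the radar, camera, and detector interfaces are hypothetical and do not reflect CBP's actual system.

```python
from dataclasses import dataclass

# Hypothetical sketch of a radar-cued surveillance loop like the one the
# caption describes. Every interface here (radar, camera, detector,
# alert_queue) is illustrative; none of it reflects CBP's implementation.

@dataclass
class RadarContact:
    range_m: float      # distance from the tower, in meters
    bearing_deg: float  # compass bearing of the detected movement

def slew_camera(camera, contact: RadarContact):
    """Point the camera at the radar contact and capture a frame."""
    camera.point(azimuth=contact.bearing_deg)
    camera.zoom_to_range(contact.range_m)
    return camera.capture()

def surveillance_loop(radar, camera, detector, alert_queue):
    """Radar cue -> camera slew -> image classification -> operator alert."""
    for contact in radar.stream_contacts():           # blocking stream of detections
        frame = slew_camera(camera, contact)
        detections = detector.classify(frame)         # labels such as "person", "vehicle"
        items = [d for d in detections if d.label in {"person", "vehicle"}]
        if items:
            alert_queue.put((contact, frame, items))  # hand off to a human operator
```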

The U.S. Department of State recently unveiled its framework for a “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy” at the Summit on Responsible AI in the Military Domain (REAIM 2023). The aim of the Declaration is to build international consensus around how militaries can responsibly incorporate AI and autonomy into their operations, and to guide states’ development, deployment, and use of this technology for defense purposes. The Declaration consists of a series of non-legally binding guidelines describing best practices for the responsible use of AI in a defense context:

The following statements reflect best practices that the endorsing States believe should be implemented in the development, deployment, and use of military AI capabilities, including those enabling autonomous systems:

  1. States should take effective steps, such as legal reviews, to ensure that their military AI capabilities will only be used consistent with their respective obligations under international law, in particular international humanitarian law.
  2. States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.
  3. States should ensure that senior officials oversee the development and deployment of all military AI capabilities with high-consequence applications, including, but not limited to, weapon systems.
  4. States should adopt, publish, and implement principles for the responsible design, development, deployment, and use of AI capabilities by their military organizations.
  5. States should ensure that relevant personnel exercise appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.
  6. States should ensure that deliberate steps are taken to minimize unintended bias in military AI capabilities.
  7. States should ensure that military AI capabilities are developed with auditable methodologies, data sources, design procedures, and documentation.
  8. States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those capabilities and can make context-informed judgments on their use.
  9. States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.
  10. States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles.  Self-learning or continuously updating military AI capabilities should also be subject to a monitoring process to ensure that critical safety features have not been degraded.
  11. States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.  States should also implement other appropriate safeguards to mitigate risks of serious failures.  These safeguards may be drawn from those designed for all military systems as well as those for AI capabilities not intended for military use.
  12. States should pursue continued discussions on how military AI capabilities are developed, deployed, and used in a responsible manner, to promote the effective implementation of these practices, and the establishment of other practices which the endorsing States find appropriate. These discussions should include consideration of how to implement these practices in the context of their exports of military AI capabilities.
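
Principles 10 and 11, in particular, imply concrete engineering work: continuous monitoring of safety-critical behavior and a reliable means of disengaging a deployed system that misbehaves. As a minimal sketch, assuming a system that exposes safety telemetry and a disengage hook (both hypothetical interfaces), such a watchdog might look like this:

```python
import time

# Hypothetical watchdog illustrating Principles 10 and 11: monitor a deployed,
# self-updating AI capability for degraded safety behavior and disengage it
# when a critical check fails. The `system` interface is assumed, not real.

SAFETY_LIMITS = {
    # metric name -> maximum tolerated value; the numbers are illustrative
    "false_positive_rate": 0.05,    # classifier must stay below this rate
    "out_of_envelope_rate": 0.10,   # share of inputs outside the tested envelope
}

def run_watchdog(system, poll_interval_s: float = 1.0):
    """Poll safety metrics; disengage and alert a human on any violation."""
    while system.is_engaged():
        metrics = system.read_safety_metrics()        # assumed telemetry call
        violations = {
            name: metrics.get(name, float("inf"))     # a missing metric counts
            for name, limit in SAFETY_LIMITS.items()  # as a violation
            if metrics.get(name, float("inf")) > limit
        }
        if violations:
            system.disengage()                  # Principle 11: deactivate a system
            system.notify_operator(violations)  # that demonstrates unintended behavior
            break
        time.sleep(poll_interval_s)
```

In practice the metrics, thresholds, and disengage semantics would be system-specific, and would be set during the testing and assurance process the Declaration calls for.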

The best practices outlined in the Declaration include ensuring that military AI systems are auditable, have explicit and well-defined uses, are subject to rigorous testing and evaluation across their life cycles, and that high-consequence applications receive senior-level oversight. Military AI systems should also be designed and engineered to detect and avoid unintended consequences, and to be capable of deactivation if they demonstrate unintended behavior. States are encouraged to take effective steps, such as legal reviews, to ensure that their military AI capabilities are used only in ways consistent with their obligations under international law, in particular international humanitarian law.

While the Declaration represents a positive step toward ensuring the responsible use of military AI, it is not legally binding, so states are under no obligation to adhere to its principles. This raises questions about the enforceability of the guidelines and about how states could be held accountable for violating them.

Another criticism is that some of the principles are general to the point of vagueness. The requirement that military AI systems be auditable and have explicit, well-defined uses, for example, could be interpreted very differently by different states, as the sketch below illustrates. Implementing the principles may also prove difficult in practice, especially for states with limited resources or technical expertise.
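
To see how much interpretive room there is, one state might read "auditable" as requiring only design documentation, while another might demand a tamper-evident record of every model decision. The following is a minimal, hypothetical Python sketch of the stricter reading, using a hash chain so records cannot be silently altered after the fact:

```python
import hashlib
import json
import time

# Hypothetical sketch of one strict reading of "auditable": an append-only,
# hash-chained log of model decisions. All field names are illustrative.

class DecisionLog:
    def __init__(self):
        self._records = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def append(self, model_version: str, inputs: dict, output: str):
        """Record one decision, chained to the previous record's digest."""
        record = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append((record, digest))
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks a hash link."""
        prev = "0" * 64
        for record, digest in self._records:
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev or recomputed != digest:
                return False
            prev = digest
        return True
```

Even this narrow example shows how far apart two nominally compliant implementations could be, which is precisely the critics' point.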

The use of AI in a military context also raises significant ethical concerns, including the risk of unintended consequences and of AI systems causing harm. The Declaration acknowledges these concerns, but some critics argue that it does not go far enough in addressing them. Finally, while the Declaration highlights the importance of continued discussion and engagement on the responsible use of military AI, some critics argue that more concrete action is needed, such as binding international agreements or an international body dedicated to overseeing military AI.
