The Pentagon is developing interactive guides to ensure responsible AI development, based on ethical and security principles.
The US Department of Defense (Pentagon) is developing interactive tools to guide the safe and ethical development of artificial intelligence (AI). These guides, including the Responsible AI Toolkit, aim to ensure regulatory compliance and promote collaboration with allies, notably through NATO. The latest update incorporates recommendations on generative AI as well as guidance from the National Security Memorandum signed by President Biden.
Responsible AI Toolkit: a tool for security and ethics
The Responsible AI Toolkit was designed to guide Pentagon program managers and defense officials in developing AI systems that meet ethical and safety requirements. Initially released in November 2023, this interactive tool offers a digital checklist for verifying compliance with current laws and regulations. The toolkit is part of a wider strategy to build trust among allies and promote shared values in the face of more controversial approaches from adversaries such as China.
The tool is publicly accessible, demonstrating the transparency of the American approach. The initiative marks an attempt to establish global standards for AI development, backed by a body of laws and practices aligned with ethical and security principles.
Cooperation with NATO and international allies
An adapted version of the toolkit has been co-developed with NATO, reflecting the organization's principles on the responsible use of AI and the data lifecycle. Although this version is not yet publicly available, it reinforces mutual assurance processes between allies. The aim is to facilitate the interoperability of systems in combined operations, notably in projects such as CJADC2 (Combined Joint All-Domain Command and Control).
Aligning certification processes between partner nations could reduce approval times for AI models and optimize the use of technological resources. This international cooperation aims to strengthen the collective ability to respond to potential threats and improve global security.
Integration of generative AI guidelines
Developing toolkit versions suited to evaluating generative AI systems and large language models (LLMs) is a priority for the Pentagon. These technologies, while promising, pose accuracy risks, including hallucinations, in which models produce erroneous answers. The division headed by Matthew K. Johnson drafted the Department of Defense's policy on generative AI, which aims to frame its use in military and strategic contexts.
Feedback from the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP) is being incorporated into version 2 of the toolkit, which will be available soon. This version will present guidelines for the safe and efficient use of these technologies while avoiding bias and security issues.
Impact on security and competitiveness
The creation of these tools is helping to boost US competitiveness in the AI sector. In 2023, the global AI market was estimated to be worth 327 billion euros, with triple-digit growth forecast over the next decade. The ability to develop systems that comply with ethical and security standards could be a major strategic asset.
What's more, the adoption of such technologies by allies ensures consistent procedures and strengthens strategic alliances. It also provides a counterweight to the less regulated practices of countries like China, which is investing heavily in AI for both civilian and military applications. The difference lies in the ethical approach and transparency of American procedures.
War Wings Daily is an independent magazine.