At their October 2021 meeting, Allied Defence Ministers formally adopted an Artificial Intelligence Strategy for NATO. Current and former NATO staff with direct involvement in the development and implementation of the Strategy outline its main features and objectives.

Introduction

One does not have to look far to see how Artificial Intelligence (AI) – the ability of machines to perform tasks that typically require human intelligence – is transforming the international security environment in which NATO operates. Due to its cross-cutting nature, AI will pose a broad set of international security challenges, affecting both traditional military capabilities and the realm of hybrid threats, and will likewise provide new opportunities to respond to them. AI will have an impact on all of NATO’s core tasks of collective defence, crisis management, and cooperative security.

AI will have an impact on all of NATO’s core tasks, as defined in the Alliance’s 2010 Strategic Concept, namely collective defence, crisis management, and cooperative security.
© ITCILO

With new opportunities, risks, and threats to prosperity and security at stake, the promise and peril associated with this foundational technology are too vast for any single actor to manage alone. As a result, cooperation is essential both to mitigate international security risks and to capitalise on the technology’s potential to transform enterprise functions, mission support, and operations.

The continued ability of the Alliance to deter and defend against any potential adversary and to respond effectively to emerging crises will hinge on its ability to maintain its technological edge. Militarily, future-proofing the comparative advantage of Allied forces will depend on a common policy basis and digital backbone to ensure interoperability and compliance with international law. With the fusion of human, information, and physical elements increasingly determining decisive advantage in the battlespace, interoperability becomes all the more essential. Further, as competitors and potential adversaries invest in AI for military purposes, the need for Allies to develop common responses to safeguard their collective security will only become more urgent.

With the formal adoption of the NATO AI Strategy, Allies have committed to the necessary cooperation and collaboration to meet these very challenges in both defence and security, naming NATO as the primary transatlantic forum. The aim of NATO’s AI Strategy is to accelerate AI adoption by enhancing key AI enablers and adapting policy, including by adopting Principles of Responsible Use for AI and by safeguarding against threats from malicious use of AI by state and non-state actors.

By acting collectively through NATO, Allied governments also ensure a continued focus on interoperability and the development of common standards. Overall, with innovation ecosystems involving different actors and faster technology lifecycles than traditional capability development processes typically accommodate, the NATO AI Strategy is also a recognition that exploiting AI will require new efforts to foster and leverage the Alliance’s innovation potential, including through new partnerships and mechanisms. Taken together, these efforts will in turn strengthen the Alliance’s ability to pursue cooperative security efforts and to engage with international partners and other international organisations on matters of international security.

Principles of Responsible Use

Adopting AI in the defence and security context also calls for effective and responsible governance, in line with the common values and international commitments of Allied nations. To that end, Allied governments have committed to Principles of Responsible Use as a key component of NATO’s AI Strategy.

Allies and NATO commit to ensuring that the AI applications they develop and consider for deployment will be in accordance with the following six principles:

  • Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.

  • Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.

  • Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level.

  • Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures.

  • Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour.

  • Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.

With these mutually reinforcing principles agreed, the task now turns to translating them into principled action. As such, NATO’s role in operationalising these principles will involve efforts that similarly tackle different aspects of the technology’s lifecycle. Building the principles of responsible use into the front end of AI development is important because the later they are considered, the harder it may be to ensure they are upheld. Ensuring a full life-cycle approach also depends on multi-stakeholder engagement, because responsibility is diffused amongst the policymakers, designers, developers, and testers, as well as the operational end users, who engage in AI development and use. For NATO, this is relevant because various entities play an active role in AI integration, and because the Alliance can encourage coherence with national AI developments.

Allies and NATO commit to ensuring that the AI applications they develop and consider for deployment will be in accordance with the six principles explained above. Pictured: Allied Defence Ministers meeting at NATO Headquarters on 21-22 October 2021. © NATO

For NATO, the common commitment to these principles has practical advantages as well, providing a coherent common basis for both NATO and Allies to design and develop AI applications while also supporting interoperability goals. As such, NATO can foster the necessary interlinkages between safety, security, responsible use, and interoperability. This can be seen across the principles. For instance, it is important to ensure that AI systems are adequately robust and reliable for their intended use, not only so that they can be expected to function in accordance with legal obligations, but also to mitigate the risks of the system’s defects or limitations being exploited by nefarious actors.

Putting Principles into Practice

These enduring principles are also foundational to the discussion and adoption of more detailed best practices and standards. Allies and NATO can leverage NATO’s consultative mechanisms and NATO’s specialised staff and facilities to work actively towards that goal. NATO’s own standardisation and certification efforts can also be bolstered by coherence with relevant international standard-setting bodies, including for civilian AI standards.

In addition to best practices and standards, these principles can also be operationalised via other mechanisms, including review methodologies, risk and impact assessments, and security certification requirements such as threat analysis frameworks and audits. Further, NATO’s cooperative activities provide the basis to test, evaluate, validate, and verify (TEVV) AI-enabled capabilities in a variety of contexts. More specifically, NATO’s experience not only in operations, but also in trials, exercises, and experimentation, provides several avenues through which Allies and NATO can test principles against intended use cases. This is further reinforced by NATO’s scientific and technical communities, which have worked on issues such as trust, human-machine and machine-machine interactions, and human-systems integration, among many others.

In addition to these existing activities, the implementation of the AI Strategy will also benefit from connections with NATO’s forthcoming Defence Innovation Accelerator for the North Atlantic (DIANA). Allied Test Centres affiliated with DIANA could be used to fulfil the aims set out in the definitions of the principles. In the future, use of these Test Centres can help ensure that AI adoption and integration are tested for robustness and resilience. For example, to ensure that AI is Traceable, Reliable, and Bias-mitigating, Test Centres could evaluate how AI systems perform in different simulated environments and on different testing data, or provide independent validation and verification to assess compliance with standards that focus on responsible engineering practices.

Through the adoption of principles of responsible use, NATO and Allies are sending a deliberately public message to their domestic populations, to Allied forces, and to other states, reiterating the Alliance’s enduring values and commitments under international law. More than just an obligation, this democratic commitment is also a pre-condition for common policy bases among Allies – and for partnership with non-traditional innovators across the Alliance.

Accelerating Principled and Interoperable Adoption

By underscoring the ethical aspects of adoption, the principles give NATO the chance to signal – and follow through on – responsibility at the core of its outreach efforts. This includes engagement with start-ups, innovative small and medium enterprises, and academic researchers that either have not considered working on defence and security solutions, or simply find the adoption pathways too slow or restrictive for their business models. In contrast to the development of traditional military platforms, AI integration entails fast refresh cycles and requires constant upgrading. This calls for a change of mind-set towards iterative, adaptive capability development, in contrast to sequential development cycles that take years to deliver small numbers of highly sophisticated platforms. With hostile state and non-state actors increasing their investments in Emerging and Disruptive Technologies, including AI, this more flexible approach to adoption is all the more urgent. In this context, with its focus on TEVV and collaborative activities, the AI Strategy sets the framework for technological enablers to out-adapt competitors and adversaries. With a greater focus on agility and adaptation, NATO can make defence and security a more attractive sector for civilian innovators to partner with, while also allowing them to maintain other commercial opportunities. In doing so, efforts to bolster the transatlantic innovation ecosystem can also serve as a bulwark against undesirable foreign investment and technology transfers.

NATO’s experience not only in operations, but also in trials, exercises, and experimentation, provides several avenues through which Allies and NATO can test principles against intended use cases. This is further reinforced by NATO’s scientific and technical communities, which have worked on issues such as trust, human-machine and machine-machine interactions, and human-systems integration, among many others. Pictured: U.S. ground troops patrol while robots carry their equipment and drones serve as spotters. Illustration by U.S. Army

This work requires coordination across the NATO Enterprise. Indeed, several of its stakeholders are already involved in the development of AI-related use cases, concepts, and programmes. With the AI Strategy, these activities can gain coherence to ensure that the proper connections exist among all innovation stakeholders, including operational end users.

Moving Ahead

To be sure, the implementation of accelerated, principled, and interoperable AI adoption depends not just on technology, but equally on the talented and empowered people who drive the technological state-of-the-art and integration forward. NATO has also dedicated attention to other AI inputs, notably through the development of a NATO Data Exploitation Framework Policy. With actions to treat data as a strategic asset, develop analytical tools, and store and manage data in the appropriate infrastructure, the Data Exploitation Framework Policy sets the conditions for the AI Strategy’s success.

In addition to the interrelationships between data and AI, ensuring coherence between NATO’s efforts on AI and other Emerging and Disruptive Technologies such as autonomy, biotechnology, and quantum computing will be vital. As Allies and NATO seek to fulfil the aim of this AI Strategy, the linkages between responsible use, accelerated adoption, interoperability, and safeguarding against threats are critical. Indeed, these linkages will also apply to NATO’s follow-on work on other Emerging and Disruptive Technologies, including the development of principles of responsible use. More broadly, this entails further coherence between the work strands on these technologies, understanding that NATO’s future technological edge – and threats the Alliance will face – may depend on their convergence.

As such, not only does the NATO AI Strategy apply to this foundational technology, but it also sets the stage for NATO’s and Allies’ ambitions with regard to other Emerging and Disruptive Technologies. For each of them, the future strategic advantage that comes with NATO innovation efforts will derive from the connections between ethical leadership, iterative adoption, and integration that prizes flexibility, interoperability, and trust.

This is the eighth article of a mini-series on innovation, which focuses on technologies Allies are looking to adopt and the opportunities they will bring to the defence and security of the NATO Alliance.