Deep Dive Recap: Emerging and Disruptive Technologies, and the Gender Perspective
On 7 June 2022, the NATO International Military Staff (IMS) hosted its fourth Deep Dive Session, focusing on the nexus between the Gender Perspective and Emerging and Disruptive Technologies (EDTs), specifically artificial intelligence (AI). The session discussed the inherent uncertainties of technology development trends and the paramount importance of policies that help pre-empt negative impacts while promoting beneficial applications. NATO’s Principles of Responsible Use for AI exemplify that approach, stressing the need to develop trust and interoperability, as well as to include a gender perspective in the design of future AI-enabled systems. The presentations were given by Dr Ulf Ehlert, Head of Strategy and Policy in the Office of the Chief Scientist, and Ms Zoe Stanley-Lockman from the Innovation Unit of the NATO Emerging Security Challenges Division.
Values-Based Emerging and Disruptive Technologies
Dr Ehlert led the discussion by diving into the topics of innovation, technology, and values. He emphasised that technology areas “co-evolve with society in a process of mutual adaptation, as we all play various roles in shaping this development; still, its outcome cannot and is not pre-determined”. On EDTs, he noted the difference between “emerging” and “disruptive” technologies. An emerging technology is expected to mature in the next 20 years, but its ultimate effects on defence, security and/or enterprise functions remain uncertain; it is therefore a question of maturity. By contrast, a disruptive technology is expected to have a revolutionary effect on defence, security and/or enterprise functions in the next 20 years, which is more a question of impact. In dealing with such technologies and the inherent uncertainties regarding their future trajectories, NATO should adopt evolutionary policy-making that builds on current knowledge but remains flexible enough for today’s decisions to be adjusted or amended in the future. For more information, please refer to Dr Ehlert’s article on why our values should drive our technology choices.
Gender Perspective and AI
Ms Zoe Stanley-Lockman’s presentation focussed more specifically on AI, giving an overview of how design decisions can embed unintentional bias into AI and machine learning systems. She noted that the intersection between the gender perspective and AI is well documented in the AI ethics field, which provides valuable insights for ensuring the responsible development and use of AI in defence and security. Examples of gender bias in the civilian realm – such as mis-specified goals or reward signals for AI systems, or inaccurate representations of women in computer vision datasets and translation software – bring to light how biased AI systems can lead to harmful decisions and unintended outcomes. If flawed human assumptions about the role of women in defence and security are designed into technologies such as AI, then those technologies could similarly be less effective or cause inadvertent harm when used in real-world contexts. Be it at the enterprise, mission support or operational level, these biases can have adverse impacts for NATO if the associated risks are not identified, evaluated, monitored and mitigated over the lifecycle of the technology.
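To make the dataset concern concrete, the sketch below shows one way a representation gap might be surfaced before a model is trained. It is a minimal illustration only: the labels, the threshold and the function name are hypothetical assumptions and are not drawn from any NATO system or dataset.

```python
# Illustrative sketch (hypothetical data and threshold): flagging gender
# imbalance in a labelled training dataset before it propagates into a model.
from collections import Counter

def representation_report(labels, threshold=0.4):
    """Report the share of each gender label and flag under-represented groups.

    `labels` is a list of gender annotations attached to training samples;
    `threshold` is an assumed minimum acceptable share for any group.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = {"share": round(share, 3), "flagged": share < threshold}
    return report

# Hypothetical annotation counts standing in for a real computer vision dataset.
sample_labels = ["male"] * 820 + ["female"] * 180
print(representation_report(sample_labels))
# {'male': {'share': 0.82, 'flagged': False}, 'female': {'share': 0.18, 'flagged': True}}
```

A check of this kind would be one small input to a broader review; a flagged group signals that the dataset, and the assumptions behind it, warrant closer scrutiny before deployment.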
So what for NATO?
In the NATO AI Strategy, endorsed by Defence Ministers in October 2021, Allies agreed to operationalise six Principles of Responsible Use for AI in defence and security. These principles are: lawfulness; responsibility and accountability; explainability and traceability; reliability; governability; and bias mitigation. To put these principles into practice, NATO is working to encourage responsible-by-design innovation, including through the conduct of risk and/or impact assessments. As EDTs, including AI, will help scale and accelerate military decision-making, NATO must ensure that they do not scale biased outcomes. Because bias mitigation has clear links to the other principles, including how explainable and reliable systems are in real-world contexts, operationalising the Principles through practical, user-friendly tools can help account for issues such as the ways in which gender bias reduces the intended impact of AI systems. In doing so, responsible AI practices can help shape technological trajectories in accordance with democratic principles and international legal commitments.
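As an illustration of what a practical, user-friendly bias check could look like within a risk or impact assessment, the sketch below computes a simple demographic-parity gap over model decisions. The data, the metric choice and the function name are assumptions made for this example, not an agreed NATO tool or method.

```python
# Minimal sketch (hypothetical names and data): a demographic-parity check of the
# kind a risk or impact assessment might repeat over a system's lifecycle.

def demographic_parity_gap(predictions, groups, positive=1):
    """Return the largest difference in positive-outcome rates between groups.

    `predictions` are model decisions; `groups` gives the protected attribute
    (e.g. gender) for each case. A large gap signals a potential bias issue.
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + (pred == positive), total + 1)
    shares = {g: hits / total for g, (hits, total) in rates.items()}
    return max(shares.values()) - min(shares.values()), shares

# Hypothetical screening decisions split by gender.
preds  = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
gap, shares = demographic_parity_gap(preds, groups)
print(shares)          # {'m': 0.8, 'f': 0.2}
print(round(gap, 3))   # 0.6 -- a gap this large would warrant review and mitigation
```

In practice such a metric would be monitored alongside others over the lifecycle of the system, with thresholds and follow-up actions defined by the responsible use framework rather than by the code itself.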