It is fair to say that our relationship with technology is complicated. Just look at headline topics like renewable energy or Artificial Intelligence (AI), or consider pharmaceuticals, automotive, consumer electronics, social media and biotechnology. On the topic of any of these technologies, you’ll almost certainly hear a cacophony of voices that range from promising a new era of happiness to predicting the doom of humanity. How can we make sense of these confusing perspectives, and how can we maximise the benefits of emerging and potentially disruptive technologies while effectively minimising their risks?
How technology evolves
As individuals, we have many different interests in technology. Some are interested in technology itself, but most are interested in the impact technology could have. These interests are often competing, and sometimes downright conflicting. Here is a simplified overview:
the customer is looking for an affordable solution to a given problem, whereas the designer seeks to improve a given product;
the innovator strives to demonstrate that her idea will work, while the investor is keen on the return on his investment;
the corporate manager is committed to increasing his company’s revenue and market share, whereas the regulator focuses on questions of safety and environmental impact;
the citizen wants to maximise her freedoms and have her rights protected, while the politician tries to balance all the aforementioned interests in devising policies for the benefit of all.
This massive entanglement of interests includes technological ideas, economic and business interests, societal needs, and political considerations. Most of us, most of the time, pursue several of those interests in parallel: certainly as customers and citizens, but also as individuals and members of communities that affect our thinking and the choices we make.
None of these considerations are predetermined, nor are the resulting choices and decisions. Therefore, the collective outcome cannot be predicted. We cannot pre-state the developmental path of any technology. However, it would be hare-brained to conclude that technology follows a path of its own or that we have no influence on technology at all. Quite to the contrary, we all influence technology development; it is just that this influence is rarely direct or immediately visible. The complexity researcher W. Brian Arthur summarised our multi-faceted relationship with technology as follows: “Technology areas co-evolve together with society in a process of mutual adaptation.”
In other words, our choices today affect the trajectory of a given technology’s further development. That development will in turn present new opportunities and challenges that we will respond to, and this response will influence the further evolution of that technology in an open-ended process.
Take the steam engine for example. This machine marked the beginning of the Industrial Revolution when it was first introduced to pump water from coalmines. That successful application triggered further imagination, as developers and users alike came to look for other problems a steam engine could solve. Those considerations led to the mechanisation of agriculture and manufacturing, so that the steam engine would ultimately replace horses and oxen as humankind’s primary power sources. The story did not end there. Railroads, factories, work contracts, and labour unions all emerged in response to that new technology. None of these long-term impacts were foreseeable, intended, or planned. Rather, they were the result of the mutual influence of technology and society.
The history of the steam engine showcases how technology itself is neither good nor bad. But it is not neutral either. Technology is what we make it. Our choices matter. The question is “How can we make technology what we want it to be?”
Any attempt at shaping the trajectory of a given technology faces a genuine dilemma between today’s knowledge about the future and the available means to affect or change that future. David Collingridge was the first to frame this major challenge of policy-making on emerging technologies: “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”
We are caught between a rock and a hard place. For a nascent technology, we cannot know all its future applications, nor can we anticipate all its future impacts. Still, at this time, we can exert some control over its development path. In the future, when that technology is mature, we see its full impact. We can thus define what we would like to change. Alas, because the technology is already in the market, broadly distributed and widely used at that time, our means of control are very limited.
We are struggling with a fundamental characteristic of technology development: the principal uncertainty inherent to an open-ended process without a knowable end-state. We cannot know in advance the future target for today’s policy intervention. But what can we actually do? Would it not be a fair choice to accept the limits of our knowledge, to simply “let things run their due course”?
Think about social media. These services promised connectivity across the planet, facilitating new forms of meaningful information-sharing and enabling global communities of unprecedented scale and scope. Their free-of-charge operation is naturally attractive to users, but “behind the scenes” they rely on an advertising-backed business model. For that to work, users should ideally stay connected 24/7 in order to feed the ever more sophisticated micro-targeting algorithms. Such addictive behaviour, and the increasing manipulation it facilitates, is not in the users’ interest. Nor are echo chambers, hate speech, and the tampering with democratic elections in the interest of our societies.
While the promise of social media is compelling, we made two cardinal mistakes. First, we accepted proprietary platforms operated by business enterprises. Second, we forgot that the purpose of business is profit, not philanthropy.
The case of social media demonstrates that the users’ immediate choices can counteract their longer-term interests. Furthermore, a market left to its own devices can spin out of control. Both findings apply in particular to promising nascent technologies in their early development. These fledglings still need to find the products they could successfully deliver and the markets they could serve. And all the various interest-holders still need to learn how they might be affected by those technologies.
As such a technology evolves, we can be sure that innovators, investors and users will be the first on the scene, pursuing their specific interests. Designers and corporate managers will soon join, once the technology demonstrates its effectiveness and first product ideas prove viable. Only after the maturing technology’s impact has become tangible will regulators, citizens, and politicians enter the discussion. I argue that, in the case of emerging and potentially disruptive technologies, these last interventions come too late.
What we want technology to be
Throughout history, we have harnessed technology to gain or maintain military advantage. Without much differentiation, we did what we could do: whatever was technologically feasible appeared to be the right thing to do. Is that “can-do” attitude sufficient to guide us into the future? My answer is no, and I will argue in favour of a values-driven approach towards technology for defence and security purposes.
Humanity lost its innocence
Historically, humanity did not have the means to threaten its own existence, neither intentionally nor unintentionally.
Early in the 20th century, we learned about the power of the atom. For the first time, we created a tool that could potentially destroy our very existence. Once that genie was out of the bottle, by mid-century, we worked hard to regain control by weaving nuclear arms control into the nascent international order.
Whether we like it or not: humanity lost the innocence of ignorance. We have access to potentially destructive means and we know it. Hence, we can neither deny nor reject the responsibility we have for our technologies’ impact, both intended and unintended.
These technologies are different
Today, we face multiple emerging technologies that promise to disrupt our established ways, including AI, bio- and quantum technologies. They mature in parallel, at 21st century speed, in a hyper-connected world.
Take one specific area: the combination of AI, Big Data (as input to AI), and autonomy (as one of the main applications of AI). This technology area promises to disrupt the information sphere and “change everything”, from maintaining situational awareness to supporting decision-making, from predictive maintenance to cyber defence.
Yet amidst the euphoria about opportunities, we must conduct a sober reality check and ask ourselves critical questions about how we want to develop, feed, and use such systems: would we consider the Chinese Social Credit System a model for collecting data? Should we accept black-box algorithms for data processing when they present results but cannot explain their plausibility? Should we apply AI in critical decision-making, where we seek to maintain human oversight?
Most of the key technologies operate in the information domain. Given their superior connectivity and speed, their development is particularly challenging to follow, let alone anticipate. Yet, developers focus on civilian applications with global consumer markets in mind, and the Big Tech companies pushing these developments have become the most influential non-state actors on the planet.
All of these factors increase the complexity of the problem space, while at the same time accelerating the speed of technological evolution. In short: our challenges keep growing, while our response time shrinks.
The West is not alone
Our Western values, including the rule of law, democracy, individual liberty, and human rights, provide a solid frame for tackling those challenges. However, we must recognise that their universality is contested, sometimes subtly, sometimes overtly. As the political economist Jeffrey Sachs observed, “Geopolitical power and technological prowess are no longer the privileged preserve of the North Atlantic.”
It would be short-sighted to assume that Western countries could globally enforce emerging technologies’ compliance with Western values. Instead, differences in values may well result in divergent technological competences that can, in turn, affect the global distribution of power.
Setting norms – a role for NATO?
Emerging and Disruptive Technologies (EDTs) came into NATO’s political focus in 2019, when NATO leaders adopted an implementation roadmap for seven such technologies. Regardless of their tremendous promise, we must realise that these technologies are not yet mature, not yet “fully out there”. Therefore, considerable uncertainty remains as to the extent to which these fledgling technologies and their foreseeable applications are appropriately contained within established legal, ethical, and moral norms. These questions are not limited to military applications, nor do they stop at national borders: rather, they cut across many government departments and business sectors, and they affect humanity in its entirety.
In this complex, fast-moving, high-stakes setting, we must view technology and values as intertwined. While our values should guide our use of technology, we must recognise that our technology choices will, whether intended or not, reflect the values we adhere to.
As inaction is not an option, we must take active measures to establish norms, deeply rooted in our values, for the future use of technologies that are currently emerging and have recognised disruptive potential (such as AI, biotechnology, and quantum technology). How could we realistically master this novel challenge? The following three proposals could pave the way.
We must effectively cope with the uncertainties of technology evolution. Hence, I suggest evolutionary policy-making, building on current knowledge, but flexible enough so that today’s decisions can be adjusted or corrected in the future.
We must strive to limit potential harm without unduly constraining the benefits a technology can bring. Therefore, our policies should set limits for the application of technologies (such as genetically optimised super-soldiers) rather than banning entire technology areas (in this case, biotechnology).
We need to understand when policy changes are necessary and what those changes should be. Reflecting the diversity of interests, we need to institutionalise a broad stakeholder engagement that reaches out to all parties affected by a technology and to all those influencing its evolution.
Within this broadly applicable framing, NATO’s role is specific. As the international organisation committed to defence and security in the North Atlantic area, it convenes considerable political, military, economic, and technological power. Building in particular on its political and intellectual capital, the Alliance can credibly spearhead norm setting for technology applications in defence to comply with Western values.
With its recently published AI Strategy, NATO fulfils its traditional role in an innovative way. This Strategy embraces principles of responsible use, which express the value-driven norms that NATO and its member nations will adhere to in the application of AI. By making these principles public, NATO sets an example for other nations to consider and potentially adopt. This is an effective approach towards proposing and gradually implementing an international norm, not unlike the European Union’s General Data Protection Regulation.
At the same time, NATO responds to the globally distributed innovation landscape. The NATO2030 initiative highlights the need to forge new coalitions with like-minded partners beyond the North Atlantic region. This broad outreach should not only extend to governmental organisations; it should also expand the types of partners to collaborate with (even within Allied nations) to include non-governmental organisations, the private sector, academia, and civil society.
Establishing norms to frame technology development within the limits of our established value system is a defining challenge of the 21st century. Our values alone should be the driving force for the policies we devise and the capabilities we field. As technologies keep emerging, so should our policies for setting appropriate norms bounded by the values we hold dear.
This is the ninth article of a mini-series on innovation, which focuses on technologies Allies are looking to adopt and the opportunities they will bring to the defence and security of the NATO Alliance.