The Alliance needs a broadly effective strategy to counter the evolving threat of disinformation. Artificial intelligence (AI) tools can help to identify and to slow the spread of false and harmful content while upholding the values of pluralistic and open societies.

Disinformation: not exactly new

False information and misleading narratives have been tools of conflict and statecraft since the fabled city of Troy fell to the ancient Greeks, and probably before. In the distant past, there were fabricated wooden horses, false witnesses, and faked plans. Today, we have fake news, false social media profiles, and fabricated narratives created to mislead—sometimes as part of coordinated cognitive warfare campaigns.

Social media and the internet have enabled a disinformation revolution. We live in a world of low-cost digital instruments and media with radically expanded reach, scale, and impact. And the concern is that these easily accessible instruments are available not merely to state actors, but to non-state actors, private individuals, and everyone in between.

Social media and the internet have enabled a disinformation revolution that impacts state actors, non-state actors, private individuals, and everyone in between. © Centre for Research and Evidence on Security Threats

False messages and inflammatory narratives have made headlines in extreme cases, notably in the Western Balkans and in some Allied countries. Their more insidious danger is the damage they can do to citizens’ faith in the institutions of democratic governance and in shared sources of public information and discussion. And recent years have seen growing political polarisation, historically low levels of trust in governing institutions, and instances of unrest and violence, aided in part by false information.

NATO member countries maintain open civil communications systems, some with very high rates of social media and social messaging use. The pluralistic character of their societies, while an advantage and a source of strength, can at the same time provide opportunities for divisive or incendiary narratives to take hold. For many of these countries, resilient and protective regulatory structures are still in their infancy. The combination of these conditions makes the threat of disinformation of particular concern for the Alliance.

The focus on false facts

Global social media companies have taken up the challenge of mitigating false information on their platforms. Most employ in-house human fact checkers to monitor the dissemination of false information. Some rely on third-party human fact checking or moderating tools. Several popular platforms, including Facebook, YouTube, and Twitter, provide their users with the option to report other users suspected of spreading false information, either knowingly or unknowingly. And in an effort to retroactively correct the damage caused by false information, the vast majority of social media platforms have resorted to mass removal of content identified as harmful or misleading through these methods.

At best, this has proved to be too little, too late. At worst, it has led to accusations of censorship, and to the removal of information or opinions that later turned out to be credible or worthy of public discussion.

The problem of volume

There are nearly three billion monthly active users on Facebook alone, each one capable of posting something inflammatory online. Twitter has over 350 million active users, including prominent individuals, popular opinion leaders, and clever and resourceful influencers.

The current approach to countering disinformation relies mostly upon manual fact-checking, content removal, and damage control. While human interventions may be useful in cases requiring nuanced understanding or cultural sensitivity, they are poorly matched to the large volumes of information created every day. Adding more personnel is not a realistic way to proactively identify false or damaging content before it has the chance to spread widely. Human fact checking is, in itself, subject to error, misinterpretation, and bias.

Billions of monthly active users of social media are capable of posting something inflammatory online. This is an overwhelming amount of information for manual fact-checkers. © The Globe and Mail

What goes viral?

“Falsehood flies and truth comes limping after it,” wrote the satirist Jonathan Swift in the early 18th century. A recent MIT study found that on Twitter, false news items are much more likely to go viral, and that regular users, not automated ‘bots,’ are responsible for re-sharing them. People also ‘retweet’ these false news items with sentiments of surprise and disgust. By contrast, true stories produce sentiments of sadness, anticipation, and trust (and are much less frequently shared).

This raises a potential opportunity: Should we focus not on facts, but on emotions instead? And could computers, not humans, be trained to do so?

Don’t check facts, check emotions

Artificial intelligence-based sentiment analysis represents an entirely different approach to mitigating disinformation by training computers to identify messages and posts containing elements of surprise, disgust, and other emotional predictors. These are more likely to be associated with false information and to inflame the passions of social media users.

Natural language processing algorithms make it possible to identify linguistic indicators of the emotions in question. They avoid human fact checking altogether, reducing bias and cost, and increasing processing speeds. A student team at Johns Hopkins University has created a promising working prototype, and their teammates at the Georgia Institute of Technology and Imperial College London have developed feasibility assessments and potential regulatory approaches.
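
To make the idea concrete, here is a minimal sketch of emotion-based screening in Python. It is illustrative only and is not the Johns Hopkins prototype: the word lists, function names, and threshold are hypothetical stand-ins for what a trained natural language processing model would learn from data.

```python
# Illustrative sketch of emotion-based screening (not the prototype described above).
# The word lists and threshold are hypothetical placeholders; a production system
# would use a trained natural language processing model rather than a fixed lexicon.

SURPRISE_MARKERS = {"unbelievable", "shocking", "can't believe", "stunning", "!!"}
DISGUST_MARKERS = {"disgusting", "sickening", "vile", "outrageous", "shameful"}

def emotion_score(post: str) -> float:
    """Return a crude 0-1 score of how strongly a post signals surprise or disgust."""
    text = post.lower()
    hits = sum(1 for marker in SURPRISE_MARKERS | DISGUST_MARKERS if marker in text)
    return min(hits / 3.0, 1.0)  # saturate after a few markers

def should_flag(post: str, threshold: float = 0.66) -> bool:
    """Flag posts whose emotional charge exceeds the (hypothetical) threshold."""
    return emotion_score(post) >= threshold

if __name__ == "__main__":
    example = "UNBELIEVABLE!! This is disgusting and shameful. Share before it's deleted!"
    print(emotion_score(example), should_flag(example))  # 1.0 True
```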

Don’t stop disinformation, slow it down

But once a highly viral (and likely false) message or post is identified, what to do? An analogy from the financial markets suggests a solution: an automated ‘circuit-breaker’ that temporarily suspends or slows the dissemination of emotionally charged content.

Stock markets avoid panic selling by temporarily suspending trading when prices fall by certain percentage thresholds. On the New York Stock Exchange, a market-wide decline of more than 7% triggers a 15-minute trading halt in the first instance. The idea is to slow things down and let cooler heads prevail. Subsequent, deeper declines can trigger additional trading suspensions.

The cooling effect of slowing things down can be significant. On a social messaging site, a message whose audience doubles every 15 minutes can hypothetically grow from a single view to roughly one million views in five hours and 16 million in six. If slowed to doubling every 30 minutes, it would reach only about one thousand views in five hours and four thousand in six. Small differences in virality yield huge differences in exposure.
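
The figures above follow directly from the arithmetic of repeated doubling, assuming the message starts from a single view. A short illustrative calculation:

```python
# Exposure after `hours` hours for a message starting from one view,
# when its audience doubles every `doubling_minutes` minutes.

def views_after(hours: float, doubling_minutes: float) -> int:
    doublings = (hours * 60) / doubling_minutes
    return round(2 ** doublings)

for minutes in (15, 30):
    print(f"doubling every {minutes} min:",
          f"{views_after(5, minutes):,} views at 5h,",
          f"{views_after(6, minutes):,} views at 6h")
# doubling every 15 min: 1,048,576 views at 5h, 16,777,216 views at 6h
# doubling every 30 min: 1,024 views at 5h, 4,096 views at 6h
```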

Such a mechanism would operate not by preventing sharing, but by slowing engagement; for example, by imposing cool-down periods between comments, or by prompting users to consider possible consequences before resending a message. It builds on the central thesis of Nobel laureate Daniel Kahneman’s book Thinking, Fast and Slow: slow thinking is rational and avoids the emotionality of fast reactions to surprising and shocking news or events.

This could reduce concerns of censorship or arbitrary limits upon the free flow of ideas. Messages and posts are not taken down or eliminated. They remain available for review and discussion, only at a slower pace. This mitigates the problem of who adjudicates ‘permissible speech’ and protects valuable freedoms of expression and public discourse. Such an approach could be implemented, via incentives or regulation, at various layers of the communication infrastructure: the source companies themselves, mediating gateways (or ‘middle-ware’ platforms), the message transport level (the communications ‘pipes’), or even at the device level (smartphone or tablet).
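
A minimal sketch of how such a ‘circuit-breaker’ might decide when to slow re-sharing is shown below. It is purely illustrative: the share-rate measure, emotion score, thresholds, and cool-down lengths are hypothetical and would need to be tuned by a platform against its own engagement data.

```python
# Illustrative 'circuit-breaker' sketch for slowing (not blocking) viral content.
# Thresholds, cool-down lengths, and the share-rate measure are hypothetical;
# a platform would tune these against its own engagement data.

from dataclasses import dataclass

@dataclass
class Post:
    shares_last_hour: int
    emotion_score: float  # e.g. from the screening sketch above, in the range 0-1

def cooldown_seconds(post: Post) -> int:
    """Return how long to delay each further re-share of this post."""
    if post.shares_last_hour < 1_000 or post.emotion_score < 0.5:
        return 0            # ordinary content flows freely
    if post.shares_last_hour < 10_000:
        return 60           # first 'trip': one-minute pause between re-shares
    return 15 * 60          # heavily viral and emotionally charged: 15-minute pause

print(cooldown_seconds(Post(shares_last_hour=500, emotion_score=0.9)))     # 0
print(cooldown_seconds(Post(shares_last_hour=5_000, emotion_score=0.8)))   # 60
print(cooldown_seconds(Post(shares_last_hour=50_000, emotion_score=0.8)))  # 900
```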

Considerations for the Alliance

Disinformation is one of several digital threats facing the Alliance. Recent information campaigns and cyberattacks have revealed that even technologically advanced member states must do more to prepare for current and emerging digital challenges. More progress is needed in establishing successful resilience mechanisms and regulatory frameworks.

Cyberattacks threaten even the most technologically advanced NATO members. More progress is needed in establishing successful resilience mechanisms and regulatory frameworks.

Yet these threats appear to be increasing daily, leaving the Alliance with little time. Taking advantage of existing technologies (such as those mentioned above) and applying them in innovative ways should save both time and resources. Minimally invasive mitigation concepts, such as slowing—but not permanently deleting—potentially harmful social media messages and posts, may be the most promising first step to address the threat of disinformation. The Alliance can then spend more time on additional technology development and more comprehensive regulatory approaches in the future.

Historically, the resilience and strength of open and pluralistic societies have been rooted in their ability to adapt innovatively to emerging challenges and circumstances. A foundational mechanism for this is the free flow of ideas and information, as well as the open and public discussion and examination of options, policies, and plans. Any solution to disinformation must protect this mechanism if we wish to keep this advantage. Moreover, the adoption of solutions by member states will rely upon the acceptance of their societies at large and is unlikely to succeed if internal constituencies consider themselves marginalised or excluded from the public dialogue.

NATO could endeavour to promote the adoption of such technology- and principles-based approaches while leaving member states to decide their own national digital security strategies. This would provide member state governments with the flexibility to implement mechanisms as they see fit, congruent with their local levels of social media adoption, popular expectations of free expression, and the realities of their civil communications infrastructures.

This is the seventh article of a mini-series on innovation, which focuses on technologies Allies are looking to adopt and the opportunities they will bring to the defence and security of the NATO Alliance.