Election Disinformation: Beyond Fake News to Strategic Narrative Manipulation
In an article for Brookings, Valerie Wirtschafter examines concerns about disinformation’s influence on elections. Recently, 1,400 experts, policymakers, and business leaders surveyed by the World Economic Forum identified disinformation as the greatest short-term global risk. With over 60 countries facing elections, the issue cannot be ignored.
But is the topic overhyped? It could be.
As Wirtschafter notes, “fears of generative AI-driven disinformation taking over have not yet materialized,” with a relatively quiet disinformation landscape in several recent elections: the European Parliament, the UK, and France.
However, the Russia risk is real. Given Russia’s advanced capabilities, doctrine of influence operations, and strategic necessity, “the US election later this year represents the biggest prize for Russia, as it seeks to shift the West’s policies away from support for Ukraine.”
But putting the US election aside for a moment, let’s focus on the relative quietness of other elections.
If we examine some of the examples the article refers to, a picture emerges: disinformation as blatant lies, or fragments of verified information presented from a highly misleading angle. These claims, such as the statement that Zelensky bought a hotel-casino in Cyprus, are then debunked by fact-checkers.
This model for handling such information is well-known, but I’m not convinced it’s the most effective approach. It risks creating a false sense of security: we know there’s disinformation, fact-checkers debunk it, and voilà, a calm election season. And the cycle repeats.
This perspective usually narrows the situation down to two main players: the malicious agents spreading disinformation and the unfortunate souls who fall for it. While some undoubtedly belong to these categories, I wonder if this really captures most people’s experiences.
As technology theorist L. M. Sacasas observed, the core issue with disinformation isn’t just the spread of falsehoods or misleading facts. The real challenge in our hyperinformation age is that any piece of data, even when accurate, can be absorbed into wildly conflicting narratives within an experiential framework we might call the Database. Unlike a coherent story that provides a comprehensive account, the Database is more primordial: it presents information in a fragmented manner, resisting immediate interpretation and immersing users in a turbulent array of data that can be endlessly reshaped and recombined.
Add to this the fuzziness of the information space, which aligns neither with nation-state borders nor with those of supranational entities like the EU.
From this perspective, the scenario in which experts report that the European Parliament election shows “no major disinformation incidents appear to be ongoing” becomes clearer. If we conceptualize disinformation as harmful falsehoods propagated by malevolent actors, this assessment holds true. However, if we adopt the logic of the Database, we must recognize that the information space is not fundamentally an event space. Rather, it resembles a pastiche or montage space. Events do not simply occur there; they emerge from data combinations.
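The combinatorial point can be made concrete with a toy sketch. All of the fragments below are invented for illustration; the point is that a pool of individually accurate items can support opposing stories, depending purely on selection and framing:

```python
# Hypothetical fragments: each one, taken alone, is accurate
# and independently verifiable.
fragments = [
    "Party leadership is aging",
    "Fundraising hit a record high",
    "A major newspaper questioned the candidate's stamina",
    "Turnout among young voters rose",
]

def narrative(selection, frame):
    """Combine selected fragments under a framing label."""
    return f"[{frame}] " + "; ".join(selection)

# The same pool of true fragments yields opposite narratives,
# depending on which subset is picked and how it is framed.
decline = narrative([fragments[0], fragments[2]], "party in decline")
revival = narrative([fragments[1], fragments[3]], "party resurgent")

print(decline)
print(revival)
```

No fragment here is false, yet the two outputs tell contradictory stories. That is the montage logic of the Database: the "event" is manufactured at the level of combination, not at the level of individual facts.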
We’re all familiar with this pastiche nature of the Database. In recent years, several intuitively understandable concepts have emerged to highlight the long-term effects of this informational environment: “feed,” “echo chamber,” “rabbit hole,” and so on.
I can illustrate this combinatory logic with how Russian media covered the Democrats’ campaign in July. At Mantis Analytics, we analyzed it carefully using AI technologies. The Russian coverage did not rely heavily on false and harmful content, apart from a few cases, such as the rumor that Biden was dead. Instead, they took a different approach: selectively republishing Western media reports (from reputable sources like The New York Times) and adding their own commentary.
This created a distinct portrayal of a deteriorating political party struggling under the leadership of an aging figure. Such a portrayal could, in principle, be accurate. In practice, however, it was the dominant narrative in major socio-political Telegram channels in Russia, with few alternative voices. The energy and traction of the Harris campaign now suggest that this portrayal was, at best, incomplete.
But should we even use the term “disinformation” in such cases? Formally, we are dealing with strategic combinations and recombinations of narratives. So the terminology is up for debate.
While short-term risks are likely well managed by traditional fact-checking measures (so much so that, at least in Europe, a quiet season can close with no major disinformation incidents in evidence), the long-term challenges created by hyperinformation and the Database persist. These include political polarization, institutional distrust, and social unrest.
In the Database, every piece of information is a potential element of a disinformation campaign. To grasp the risks emerging from the combination of various, even seemingly innocuous, narratives, timely and effective situational awareness is essential.
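What situational awareness means, at its simplest, can also be sketched. This is a toy illustration with invented, hand-labeled data (not the Mantis Analytics pipeline): if each monitored post is tagged with the narrative frame it reinforces, the risk shows up in the aggregate distribution, even when every individual post is accurate.

```python
from collections import Counter

# Hypothetical sample: (channel, narrative frame) pairs, where the
# frame labels are assigned by a human or model annotator.
posts = [
    ("channel_a", "party in decline"),
    ("channel_a", "party in decline"),
    ("channel_b", "party in decline"),
    ("channel_b", "election fraud"),
    ("channel_c", "party in decline"),
]

# Track which frames dominate: each post may be factually sound,
# yet the aggregate can still be heavily skewed toward one story.
frame_counts = Counter(frame for _, frame in posts)
dominant, count = frame_counts.most_common(1)[0]
share = count / len(posts)

print(f"Dominant frame: {dominant} ({share:.0%} of posts)")
```

The design choice is deliberate: monitoring operates on narrative frames rather than truth values, which is exactly the level at which the Database’s recombinations do their work.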