Focus on Elon Musk’s algorithms, not his words, says France’s disinformation agency
Anne-Sophie Dhiver is the deputy head of the French government agency Viginum, the first and only investigative body created by a Western democracy entirely focused on tackling the risks posed by online information operations.
Public awareness of those risks is now almost a decade old, with the threat having first come under the spotlight following the 2016 presidential election in the United States. That focus itself has been criticized, with experts warning it can amplify rather than suppress the impact of hostile activities. Dhiver, a former employee of Google, a veteran of Google France’s Elections Task Force and a graduate of the French War College, said Viginum was very aware of the risk of unintentional amplification.
She spoke to Recorded Future News in Paris late last month — shortly after the inauguration of Donald Trump and ahead of the German federal elections — about why and how her agency was addressing the threat. The winner of those German elections, Friedrich Merz, said during the campaign that President Trump’s close associate Elon Musk would face consequences for intervening in support of the far-right AfD. Dhiver explained how Viginum does not pay attention to Musk’s words, but is assisting French regulators investigating the social media platform X over potentially manipulated algorithms.
This interview has been edited for length and clarity.
Recorded Future News: When and why did France create an agency to tackle foreign disinformation?
Anne-Sophie Dhiver: Viginum was created in 2021 because, like many other countries, France has been the target of foreign digital interference over the past years. It is the cornerstone of the French ecosystem for dealing with this threat, on the defensive side.
As you know, generally, progress is achieved through crisis. France faced several crises. Probably the first one was in 2017 and it was, I believe, a kind of wake-up call, because it was the first time that a French presidential election was the target of a hybrid attack. It combined both a cyberattack and an information component. So emails were stolen, and then an information operation was carried out using the content of these emails, but with some slight modifications to try to attack one of the campaign teams.
In response, in 2018, France decided to reinforce our legal framework to deal with the information manipulation threat, especially during election periods. So we had two laws that were published at the end of December 2018. After that, other crises occurred with the COVID-19 pandemic and the yellow vest protests.
The decision was taken in 2021, in the aftermath of the terrorist attack against a French teacher, Samuel Paty. The attack itself was not an information operation, for sure, but it triggered one, carried out by foreign actors, with narratives accusing France of Islamophobia. It was decided to create a dedicated entity with an analytical capacity, an entity which would be able to detect and characterize the online information threat. And it was done in July 2021 with the creation of Viginum.
RFN: How would you characterize the threat in terms of the activity surrounding the murder of Samuel Paty or the yellow vest protests?
ASD: What we observe, especially in a context of the brutalization of international relations, is that some foreign actors, whether state or non-state, are deploying techniques, including in the information environment, to try to manipulate public opinion, destabilize society and polarize opinions.
In terms of the objectives pursued, what we mainly observe are attempts to polarize opinion on sensitive themes like immigration, the yellow vest protests, agricultural protests or ongoing conflicts: amplifying existing facts or content and instrumentalizing them to divide opinion.
Another objective is to undermine the credibility of media and fact-checkers, and trust in traditional media. This is what we observed, for instance, with the Matryoshka campaign that we disclosed in June 2024.
And in the context of an election, it could be to undermine the trust in democratic processes or republican institutions.
It can be even broader; it could also be to harm economic interests, for instance. This is why our mission is not to deal with information manipulation as a whole. We are focused on a specific segment which is really anchored in defence and national security, so what we identify as foreign digital interference is based on four criteria.
First of all, it's the narrative: whether it uses misleading or false content. It’s not the main criterion for us, because we do not deal that much with narratives as a state agency; we believe it's not our role to say what's right or wrong. We are not the Ministry of Truth. The second criterion is much more important for us, which is the use of inauthentic or coordinated behaviors. These artificial means could be the use of fake accounts, bots or trolls, for instance, or the non-transparent use of influencers, these types of things.
So what we are trying to do is really to characterize the usage of these artificial means in the distribution of these narratives.
The first two criteria are really about information manipulation, and then two other criteria ground our mission in defence and national security. The third one is the involvement of a foreign actor, whether a state or non-state actor. We do not deal with domestic actors. We would always try to identify whether the operation has been initiated or launched by a foreign perpetrator.
And lastly, we only deal with operations which try to target or undermine the fundamental interests of the nation. So we do have a specific mandate for elections, but it's a broad definition in French law. It could also be to protect scientific or economic interests, for instance.
RFN: Given that there must be a foreign component to your work, we’d be remiss not to ask which territories are most regularly the source of these operations.
ASD: You can refer to the reports that we published in 2024, because we believe it's important to inform the public and to shed light on the modus operandi that are used. Last year, we published five reports, including two that were linked to pro-Russian actors: Portal Kombat, and Matryoshka in June.
This illustrates one trend — that the current conflicts are a very attractive sounding board for foreign actors, even for non-belligerent players. It creates a space where they can amplify some narratives.
Another trend of 2024, of course, is that big events with large media coverage, like the Olympic Games held in Paris, are also extremely attractive for information threat actors. We published a report in September detailing the 43 operations that we detected during the Games, coming from different types of actors and leveraging several types of modus operandi: false flag operations, use of bots, amplification of hashtags, etc.
Finally, over the last year, the informational threat has particularly targeted French interests in our overseas departments and collectivities. We have thus revealed, through a technical sheet and our public report “UN-notorious BIG,” how pro-Azerbaijani actors are trying to exploit the situation in New Caledonia and are repeatedly targeting many French overseas departments and territories, as well as Corsica.
RFN: How do you handle information operations that have a hybrid component, meaning an offline element as well as an online one? One thinks of the coffin stunt during the Paris Olympics. Would that fall within Viginum’s remit?
ASD: It would fall into Viginum’s remit, of course, but we do not work alone.
Typically, because this type of operation involved both the physical world and the digital world, we would work on the digital components and perform an investigation there. But in this case, and the same with the Stars of David, for instance, we would also collaborate with the Ministry of Home Affairs or the Ministry of Foreign Affairs. Because something that I didn't mention is that these operations also aim at provoking effects in real life, whether a violent effect or trying to orient people's votes, for instance. This is the ultimate aim, for sure.
That was the case during the Olympic Games as well: all the state departments were involved to make sure that the event would run smoothly.
RFN: One of the most challenging aspects of tackling information operations, as I see it, is the difficulty in measuring their effect. How does Viginum approach that problem? Do you think it's a problem that can be solved?
ASD: It’s a very good question, which hasn't been solved so far, despite much research already in that field. This is a very complex question, because these campaigns are planned for the long term. So the long-term effect of these strategies is hard to assess and measure.
At our level, what we try to assess first is the risk of short-term impact, because it can guide the decision to inform the public when we believe there is a risk. It's always a balance whether to communicate publicly on these operations, because most of the time they might stay restricted to small or captive audiences and not manage to reach a high level of visibility. Sometimes you would give them more visibility by disclosing the operation than by not talking about it. So that is always a criterion that we take into account. But long-term impact is quite difficult to measure, and I believe, for now, no real methodology has been developed.
RFN: How do you handle that question of whether you are providing visibility or amplifying the operation?
ASD: First of all, it's not Viginum’s decision to go public or not. It's always a political decision that is taken collectively. We are more of an investigation department, performing OSINT, and based on our insights the decision can be taken at a political level to communicate, for instance when we believe that there is a risk of impact in the physical world. That is why we decided in May to publish a report on New Caledonia: in 2024 French interests were targeted in our overseas territories, and an Azerbaijani network amplified a narrative with images accusing French police officers of killing people from the independence movements. The decision was taken to publish it because we wanted to inform the public of the operation. So it's always a political decision, taking various things into account.
It was the same for Portal Kombat in February. We decided to disclose it, and the decision was taken by the French Ministry of Home Affairs, in cooperation with other countries. So France, together with Poland and Germany, denounced this network of slightly fewer than 200 news websites that were pre-positioned to target the European elections.
RFN: I have noticed that Viginum’s reports can often describe operations that have previously been outed by the private sector. Is it often the case that you have completed a report a long time before the decision is made to publish it?
ASD: It depends. Some operations are quite persistent and we document them over the long term. Doppelgänger, for instance: we published a report in June 2023, and the modus operandi has lasted for at least two years and is still persistent today. So it really depends.
RFN: There seems to be a wide range of approaches to these challenges across industry. Who do you think is working well in this space?
ASD: I won’t name and shame, but of course we collaborate with external players: not only platforms, but also open source think tanks, for instance, who publish reports or monitor specific ecosystems. And in our reports it's quite frequent that we quote, for instance, the Microsoft Threat Analysis Center, including during the Olympic Games, with the Storm-1516 modus operandi that they detected in one of the false flag operations.
So yes, it is critical for us to collaborate with platforms, among other partners, but we also must rely on our own internal analytical capability, because it's really the core of our mission. So in the department, two thirds of our teams are dedicated to operations. So we mostly rely on our teams. But we do have regular relationships with the platforms as they are a key component of the ecosystem.
RFN: I have to ask, looming over a lot of this conversation — particularly right now — is the platform X and its owner as well, Elon Musk, who has shared and spread objectively false content related to European politics. Would that meet your threshold for foreign interference?
ASD: So maybe, as a foreword on that, I would say that the strategic pressure on our information space that I just described is increasingly intertwined with a growing systemic pressure coming from the evolution of online platforms. The interstate rivalry and competition which is expressed in the information field is overlapping with the competition between large tech companies for dominance of the digital information sphere, with AI at the heart of this fight. So we are more and more considering this as a systemic risk, because the online platforms are hosting the debate, but also, some of their features might be leveraged and contribute to the amplification of some of the information operations. So it's really intertwined.
Coming to X, honestly, there are two ways to look at it. As a citizen, Elon Musk can of course comment and communicate on the platform; it is his right. The question is more about the potential manipulation of the algorithm to amplify or reduce the visibility of some content, which would be contrary to the European Union’s Digital Services Act (DSA) if the platform is not mitigating the systemic risks that could arise from the use of its services.
RFN: Would you have the capacity to analyze whether, for instance, those algorithms are disproportionately contributing to false information on X, or would you say that is solely a matter for the DSA regulators?
ASD: Yes, because we have a partnership, an agreement, with the French regulatory body for audiovisual and digital communications (ARCOM), which is the national coordinator for the enforcement of the DSA. We signed it in July 2024, and as such we might provide technical insights to ARCOM on the implementation of the DSA in France, within our scope.
For example, a few months ago, we published a report highlighting a lack of moderation of political ads on Meta. Political ads have to be labeled and follow a specific moderation process. But we uncovered, based on our analysis, that 80% of those political ads on the platform were not labeled as such, thus bypassing the moderation process. And we know that political ads, or advertising and sponsored content in general, are one of the levers that could be activated in foreign information operations. So this is just an example.
RFN: You mentioned Germany and Poland with regard to being targeted by information operations, both European Union states covered by the DSA, but are there any other European member states that have similar agencies to Viginum? You seem quite unique to me.
ASD: I believe France probably spearheaded that field. But we do have a kind of twin agency, not exactly the same but similar in some aspects, in Sweden, with the Psychological Defence Agency, which was created in 2022. Because this threat targets many democracies, and especially the European democracies, a lot of countries are trying to reinforce their capacity, especially ahead of elections. So yes, we are almost unique, but not for long.
RFN: What would you regard as your greatest success so far?
ASD: Probably what's quite unique in our operations, I think, is the smooth integration of advanced data science. AI might constitute a systemic risk in some respects, especially in information operations. However, it's also a huge opportunity for us to boost and develop our capacity to detect, characterize, automate some tasks, etc.
At Viginum we are quite proud of our expertise and know-how in that field. And for the AI Summit in Paris that took place recently, for the very first time, not only did we publish a public report on the challenges and opportunities of AI for the fight against information manipulation, but we also published open-source tools that we developed.
The first one is called Three Delta (D3lta), and it's based on LLMs to detect the coordinated duplication of textual content at massive scale. We have made this code public so that other people, civil society, journalists, researchers, can use this method and improve it, because it's also something that we want to be collective.
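To make the idea concrete, here is a minimal, hypothetical sketch of what detecting coordinated duplication can look like, using simple TF-IDF vectors and cosine similarity rather than D3lta's actual pipeline (D3lta is open source, so its real method should be read from Viginum's published code; the function names and threshold below are purely illustrative):

```python
# Illustrative only: a toy near-duplicate detector, NOT D3lta's code.
# The intuition: texts pushed by a coordinated network are often copies
# of one another with slight edits, so pairwise similarity exposes them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_duplicate_pairs(texts, threshold=0.8):
    """Return (i, j, score) for every pair of texts above the threshold."""
    vectors = TfidfVectorizer().fit_transform(texts)  # sparse doc-term matrix
    sims = cosine_similarity(vectors)                 # pairwise cosine scores
    return [
        (i, j, float(sims[i, j]))
        for i in range(len(texts))
        for j in range(i + 1, len(texts))
        if sims[i, j] >= threshold
    ]

posts = [
    "Police did this in New Caledonia!!",   # near-identical pair, as a
    "Police did this in New Caledonia.",    # coordinated network might post
    "An unrelated post about the weather.",
]
print(find_duplicate_pairs(posts))  # flags the first two posts as duplicates
```

At the scale Dhiver describes, exhaustive pairwise comparison would be replaced by approximate nearest-neighbor search over semantic embeddings, which is one place LLM-based representations come into play.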
And the second one is something that we co-developed with another French administration: a meta-detector of synthetic content. It's a shared library, to make it easier for people involved in the fight against information manipulation to determine whether content is synthetic or not, because it's quite complex.
There are now many, many different detectors. Some are specialized on one platform; some are specialized on certain content. So even our analysts have to test 10 different detectors to be able to get a proper answer. The purpose of this meta-detector is, in just one click, to test 10 detectors automatically at the same time and have the best answer in the end. Again, this is a project that we want to be collective, so we also want to expand the collaboration with researchers, and potentially tech companies as well, because we believe it's important that we join forces to develop more and more innovative use cases in this field.
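As a rough illustration of the fan-out-and-aggregate pattern she describes (this is not Viginum's actual shared library; every name and the voting rule below are hypothetical), a meta-detector can be as simple as querying each underlying detector and combining their verdicts:

```python
# Hypothetical sketch of a "meta-detector": fan one piece of content out
# to several synthetic-content detectors and aggregate their verdicts.
# This is NOT Viginum's library; names and logic are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    detector: str      # which detector produced this result
    synthetic: bool    # True if the content is judged AI-generated
    confidence: float  # the detector's self-reported confidence, 0..1

def run_meta_detector(content: bytes,
                      detectors: List[Callable[[bytes], Verdict]]) -> Verdict:
    """Query every detector once and return a confidence-weighted verdict."""
    verdicts = [detect(content) for detect in detectors]
    score_for = sum(v.confidence for v in verdicts if v.synthetic)
    score_against = sum(v.confidence for v in verdicts if not v.synthetic)
    total = (score_for + score_against) or 1.0  # avoid dividing by zero
    return Verdict(
        detector="meta",
        synthetic=score_for > score_against,
        confidence=max(score_for, score_against) / total,
    )

# Two stand-in detectors; real ones would wrap platform- or media-specific models.
def detector_a(content: bytes) -> Verdict:
    return Verdict("detector-a", synthetic=True, confidence=0.9)

def detector_b(content: bytes) -> Verdict:
    return Verdict("detector-b", synthetic=False, confidence=0.4)

print(run_meta_detector(b"suspect media bytes", [detector_a, detector_b]))
```

A real implementation would also need to calibrate each detector's confidence scale before combining them, since detectors rarely report comparable scores; that calibration problem is part of why she calls the task "quite complex."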
RFN: And what impact do you expect that to have, and will they be available in time for the German elections?
ASD: They are already online. They have been online since December, and they are open for anyone to test; different foreign partners and entities are testing them right now. The objective is, of course, to equip more people and entities with these tools, but also to trigger and foster the acceleration of collaboration in this field. We also want to invite tech companies, researchers and other foreign players to develop more and more tools in the same spirit, tools that can be shared and improved collectively, so we can be more innovative against the tactics that can be used.
Alexander Martin
is the UK Editor for Recorded Future News. He was previously a technology reporter for Sky News and is also a fellow at the European Cyber Conflict Research Initiative.