Meta to assign special teams in Europe to fight election disinformation, AI abuse
Meta, the owner of Facebook and Instagram, is ramping up its efforts to protect users from disinformation ahead of the European Parliament elections in June.
The company announced on Sunday that it plans to set up a team of intelligence experts, data scientists, engineers and researchers in Europe who will be tasked with identifying and mitigating election-related threats on its platforms in real time.
In addition, Meta will expand its fact-checking network with three new partners in Bulgaria, France, and Slovakia, bringing the total number of independent fact-checkers, who review and rate content on its platforms, to almost 30 across the EU.
This year marks the largest election cycle in history, with approximately half of the global population across 64 countries and the EU expected to head to the polls. This will be the 10th parliamentary election in the EU and the first one after Brexit. Elections will take place in all 27 EU countries as their citizens vote for members to represent them in the European Parliament.
Analysts predict a significant shift to the right in many countries during this year’s election, with populist radical right parties gaining votes and seats across the EU.
To prevent the spread of election-related disinformation and propaganda on its platforms, Meta has invested over $20 billion in safety and security since 2016, according to the tech giant. Its global team dedicated to this effort consists of around 40,000 people, including 15,000 content reviewers.
“It’s not just the moment around the election that matters — it’s all the work you have to do in the months leading up to them,” said David Agranovich, the director of global threat disruption at Meta, who spoke to Recorded Future News during the Munich Cyber Security Conference (MCSC) earlier in February.
“Information operations don’t start right when the elections come up — they begin months or years in advance as the actors are trying to build the audience or create the groundwork for future operations,” he added.
By continuously monitoring the behavior of threat actors over time, such as those from Russia and China, researchers can anticipate their actions during this year's elections. That intelligence informs which measures the company chooses to implement.
For example, when Meta began labeling Russian state-controlled media or blocking them entirely in the EU and globally in response to Russia’s invasion of Ukraine, posting volumes on their pages decreased by 55%, and engagement levels dropped by 94% compared to pre-war levels. Additionally, more than half of all Russian state media had stopped posting altogether, Meta said in a blog post.
Safeguarding the EU’s election
There are three major threats that users may face on social media platforms during the election season, according to Meta: misinformation, coordinated influence operations involving bots and trolls, and the misuse of artificial intelligence tools.
To combat disinformation, Meta said it will make it easier for independent fact-checking organizations across the EU to discover and attach warning labels to content related to the elections. The company will also prohibit ads targeting the EU that discourage people from voting, question the legitimacy of the election or contain premature claims of election victory.
To protect its users from coordinated influence operations, Meta has already adjusted its approach to respond specifically to campaigns targeting the EU Parliament elections.
The company will also add a feature that lets users disclose when they share AI-generated video or audio, and it may penalize users who fail to do so.
Advertisers running ads on Meta's platforms related to social issues, elections or politics must also disclose their use of photorealistic images or digitally altered video, including content created with AI. Between July and December, Meta reported removing 430,000 ads across the EU for failing to carry a disclaimer.
Meta's rival, the Chinese video-sharing app TikTok, has also implemented measures to protect its users from harmful content during elections. Speaking at MCSC, TikTok's vice president for government relations and public policy in Europe, Theo Bertram, said that the company will launch in-app election centers for each of the 27 EU countries to curb the spread of online misinformation.
Bertram said that the average TikTok user is 30 years old — old enough to vote — so the company will try to guide users toward trusted sources, he added.
Daryna Antoniuk
is a reporter for Recorded Future News based in Ukraine. She writes about cybersecurity startups, cyberattacks in Eastern Europe and the state of the cyberwar between Ukraine and Russia. She previously was a tech reporter for Forbes Ukraine. Her work has also been published at Sifted, The Kyiv Independent and The Kyiv Post.