
Twitter introduces new policy for addressing misinformation during crises

Twitter announced new content-moderation policies Thursday to crack down on misinformation related to wars, natural disasters, and other crises. 

The platform, which announced the changes amid a contentious buyout deal with Elon Musk, has often been relied on by the media for details about breaking news events—including during the war in Ukraine and ongoing Covid-19 pandemic. 

Twitter’s new “crisis misinformation policy” is rolling out globally and seeks to slow the spread of misleading information that could lead to “severe harms” during humanitarian emergencies, according to the company. The policy will start with enforcement related to the armed conflict in Ukraine, with plans to expand to other crises, such as public health emergencies and natural disasters.

Under the policy, once Twitter has evidence a claim is misleading, it will stop amplifying it across the platform. It will also “prioritize adding warning notices” to viral Tweets or those from high-profile accounts and disable Likes, Retweets, and Shares for the content.

“We’ve found that not amplifying or recommending certain content, adding context through labels, and in severe cases, disabling engagement with the Tweets, are effective ways to mitigate harm, while still preserving speech and records of critical global events,” Yoel Roth, head of Twitter Safety and Integrity, wrote in a blog post about the new policy.

Some of the types of content that may get this warning include, per Twitter:

  • False coverage or event reporting, or information that mischaracterizes conditions on the ground as a conflict evolves;
  • False allegations regarding use of force, incursions on territorial sovereignty, or around the use of weapons;
  • Demonstrably false or misleading allegations of war crimes or mass atrocities against specific populations;
  • False information regarding international community response, sanctions, defensive actions, or humanitarian operations.

Twitter has long wrestled with moderating content amid conflict due to the liveblogging nature of its platform. For example, last November it paused local Trends and halted advertisements in Ethiopia due to the conflict there. In February, it also removed accounts targeting Ukrainian users with disinformation about the Russian invasion, NBC News reported.

The new crisis misinformation policy expands on many of the company’s past content moderation efforts, but it’s unclear how well it will align with the still murky vision of digital speech proposed by Musk if his purchase of Twitter goes forward.


Andrea Peterson (they/them) is a longtime cybersecurity journalist who cut their teeth covering technology policy at ThinkProgress (RIP) and The Washington Post before doing deep-dive public records investigations at the Project on Government Oversight and American Oversight.