China to ban deepfakes that aren’t properly labeled
As countries around the world grapple with new artificial intelligence technologies that can doctor video, images, text and audio, China is unsurprisingly taking a hard-line approach. Beijing rolled out a new regulation last week, set to take effect on January 10, that prohibits the publication of so-called deepfakes without proper disclosure that they were created by AI.
Users who violate these rules could have their accounts suspended or shut down on platforms that detect such “illegal” activity.
The law applies only to AI service providers that operate in China.
Tougher AI policies will help Chinese authorities fight disinformation and “criminal activity,” such as online scams or defamation, according to the Cyberspace Administration of China (CAC), the country’s top internet watchdog.
Although AI improves the user experience, "it is also abused by people who want to produce, copy and publish illegal and harmful information, discredit other users and spoof their identities," CAC said. Artificially generated content can “harm people's legitimate rights and interests, and endanger the country's national security,” according to the regulator.
China's AI regulation is consistent with the country's broader internet policy, which is known for restrictions, bans and censorship. The new AI rules, for example, will allow regulators to censor artificially generated content so that it fits the “correct political agenda.”
One of China’s text-to-image AI systems, developed by Baidu, already filters out politically sensitive content, including explicit mentions of political leaders and potentially controversial places such as Tiananmen Square.
Such restrictions may discourage private companies from developing the technology. China’s regulatory crackdown on the tech industry has already led to a decline in the number of Chinese mobile apps. Nearly 930,000 apps ceased operation in China last year.
The regulation of generative AI — artworks, music, poetry or program code created by computer algorithms — is a global issue.
The rise of text-to-image technologies, such as DALL-E and Stable Diffusion, or AI-powered chatbots such as ChatGPT, is prompting governments around the world to adopt standards that will prevent AI from being misused.
U.S. policy on AI is far less restrictive than China's. In October, the White House proposed a set of non-binding guidelines that private companies could consider when creating their own rules for AI.
Some outcomes of generative AI are “deeply harmful,” according to the White House, which cited ethical problems, the spread of fakes, copyright disputes and the abuse of users' personal data. And yet, these harms are not inevitable: “[AI] tools hold the potential to redefine every part of our society and make life better for everyone,” it added.
Daryna Antoniuk is a freelance reporter for Recorded Future News based in Ukraine. She writes about cybersecurity startups, cyberattacks in Eastern Europe and the state of the cyberwar between Ukraine and Russia. She previously was a tech reporter for Forbes Ukraine. Her work has also been published at Sifted, The Kyiv Independent and The Kyiv Post.