Image: Greg Bulla / Unsplash

UK regulator stops LinkedIn from training AI models with British users’ content

LinkedIn recently began harnessing its users’ content and data to train artificial intelligence models, opting all platform participants into the program without formal notice — except for users in the United Kingdom and Europe.

On Friday, an official with the U.K.’s data privacy watchdog, the Information Commissioner’s Office (ICO), released a statement indicating the regulator had engaged with LinkedIn to stop the feature from being rolled out in Britain.

“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” Stephen Almond, executive director of regulatory risk at the ICO, said in the statement.

LinkedIn’s leadership told the data privacy watchdog that it has “suspended such model training pending further engagement with the ICO,” the statement said.

The professional networking platform rolled out the new AI model training program before updating its terms of service and privacy policy, according to 404 Media, which first reported the news on Wednesday.

According to a new privacy policy posted after the 404 Media story ran, LinkedIn now uses private data and user content pulled from the site to “develop and train artificial intelligence (AI) models… and gain insights with the help of AI, automated systems, and inferences, so that our services can be more relevant and useful to you and others.”

The new AI feature is also not being used in Europe, where strict laws protect consumers’ data privacy.

Users can opt out of the model training feature by locating the data privacy button in account settings and then clicking on “Data for Generative AI Improvement,” where they will find a toggle allowing them to turn off the automatic opt-in. 

The company posted a frequently asked questions page detailing its use of personal data and content for AI models about a week ago, according to 404 Media. 

That page warns users that their “personal data may be used (or processed) for certain generative AI features on LinkedIn.”

“Like other features on LinkedIn, when you engage with generative AI powered features we process your interactions with the feature, which may include personal data (e.g., your inputs and resulting outputs, your usage information, your language preference, and any feedback you provide),” it added.

After 404 Media’s story revealing the changes was published on Wednesday, LinkedIn’s general counsel posted a notice on the site saying the platform had updated its user agreement and privacy policy.

“In our privacy policy, we have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation (‘generative AI’) and through security and safety measures,” the post said.

Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.