LinkedIn Suspends AI Data Processing in the U.K. Following Privacy Concerns Raised by ICO
The U.K. Information Commissioner’s Office (ICO) has confirmed that LinkedIn, the professional networking platform, has paused processing user data within the U.K. for training its artificial intelligence (AI) systems.
“We are pleased that LinkedIn has taken into account the concerns we raised about its approach to training generative AI models using information related to its U.K. users,” commented Stephen Almond, the ICO’s executive director of regulatory risk.
Almond further stated that the ICO is set to monitor companies providing generative AI services, including Microsoft and LinkedIn, to ensure they implement suitable protections for U.K. user data.
The move follows LinkedIn’s recent acknowledgment that it had been using user data to train its AI models without obtaining explicit consent, before its updated privacy policy took effect on September 18, 2024, according to a report by 404 Media.
LinkedIn clarified, “Currently, we are not allowing training for generative AI using member data from the European Economic Area, Switzerland, and the United Kingdom, and this setting will remain unavailable in these regions until further notice.”
The company also stated in an FAQ that it aims to limit personal data in the datasets used for AI model training, employing privacy-enhancing technologies to remove or obscure such data.
Users outside Europe can opt out by navigating to the “Data privacy” settings and disabling the “Data for Generative AI Improvement” option. LinkedIn noted that opting out prevents future use of personal data for training but does not affect data that has already been used.
This action by LinkedIn follows Meta’s recent admission of using non-private user data for AI training purposes dating back to 2007, and Zoom’s decision to discontinue plans to use customer content for AI training following public concern over data usage policies.
These developments highlight the growing scrutiny of AI practices, especially regarding how personal data is used to develop advanced language models.
Meanwhile, the U.S. Federal Trade Commission (FTC) released a report indicating that large social media and video streaming platforms have engaged in extensive surveillance of users, often without robust privacy controls, particularly for younger audiences. According to the FTC, companies have accumulated vast amounts of data from both users and non-users, often combining it with third-party data sources to create detailed consumer profiles, which they later sold to other entities. The agency also noted that many platforms failed to adequately handle users’ data deletion requests.