LinkedIn has been using user-generated content, including posts and articles, to train its generative AI models without obtaining consent from users.
The platform updated its privacy policy and FAQ section to reflect this practice, indicating that data collection for AI training had already begun before the announcement.
Users can opt out of having their data used for AI training by toggling a setting called “Data for Generative AI Improvement” under “Data Privacy” in their settings.
The setting is enabled by default: LinkedIn quietly opted users in to having their personal data, including posts, used to train generative AI models, meaning that content was harvested for AI training before users had any chance to object.
LinkedIn’s generative AI models power features like writing assistants, and the collected data will be used to train them. The company claims to employ privacy-enhancing technologies to anonymise or redact personal data from its AI training sets. However, considering what Larry Ellison, Oracle’s co-founder, chairman, and chief technology officer, said last week about data captured by surveillance cameras, we shouldn’t take LinkedIn’s word for it.
During an investor Q&A session at the ‘Oracle Financial Analyst Meeting 2024’ event, Ellison said: “The police … body cameras … our [Oracle’s] body cameras are simply two lenses attached to a vest [and] attached to the smartphone that you’re wearing … the camera is always on, you don’t turn it on and off, you can’t turn it off to go to the bathroom – ‘Oracle, I need two minutes to take a bathroom break,’ then we’ll turn it off. The truth is, we don’t really turn it off. What we do is, we record it so no one can see it [so] that no one can get into that recording without a court order.” (See timestamp 1:08:27 HERE.)