AI Training Utilising UK User Data Suspended By LinkedIn

LinkedIn’s decision to pause using UK user data to train its Artificial Intelligence (AI) models is the latest example of a tech giant having to adjust to privacy regulations. As reported by the BBC, the suspension came after the UK’s Information Commissioner’s Office (ICO) raised concerns about how LinkedIn was handling user data. LinkedIn, owned by Microsoft, had opted users around the world into having their data used for AI training by default, and the ICO wasn’t convinced that this approach aligned with data privacy norms, particularly for UK users.

The issue centres on LinkedIn’s use of data to train its generative AI tools, which are designed to improve features like job recommendation algorithms and automated message drafting. Generative AI models, such as ChatGPT and image generators, rely on massive datasets to mimic human-like responses and behaviour. The more real-world data they consume, the better they become at understanding and generating content that feels natural to users. LinkedIn sees a lot of potential in using user-generated content, including profiles, posts, and interactions, as training material to make these tools more helpful.

However, the ICO’s intervention suggests that LinkedIn may have been a little too eager in its approach. While LinkedIn has since introduced a way for UK users to opt out of having their data used for these purposes, the ICO’s involvement emphasises the need for stricter scrutiny and user consent in data use, especially in sensitive areas like personal career information.

The ICO welcomed LinkedIn’s pause, with its executive director, Stephen Almond, highlighting the importance of maintaining public trust when deploying generative AI. The UK is not alone in pushing back: LinkedIn’s suspension applies not just in the UK but also across the EU, the European Economic Area (EEA), and Switzerland, where privacy laws like the General Data Protection Regulation (GDPR) impose similar constraints.

This story isn’t just about LinkedIn; it’s part of a broader trend where other tech companies, like Meta, have had to adjust their plans for AI training in response to regulatory pushback. Earlier this year, Meta had to put the brakes on using UK user data for AI training following similar criticism. The ICO’s stance makes it clear that tech companies must ensure users’ rights are respected before forging ahead with new AI capabilities.

In response, LinkedIn emphasised that it values user control over data, aiming to collaborate closely with the ICO to align on these privacy matters. This pause could signal a more cautious approach moving forward, as LinkedIn balances the benefits of AI with the need for transparent data practices. In the meantime, LinkedIn and other tech giants will likely continue to walk a fine line between innovation and privacy compliance, especially in regions like the UK that hold privacy to high standards.

For users, this might serve as a reminder to stay informed about how their data is used and what options they have to opt out when they’re not comfortable. It’s a complex space, but one that’s increasingly important as AI becomes a bigger part of our digital lives.

Stu Walsh

I have recently left my position as the Chief Information Security Officer (CISO) for Blue Stream Academy Ltd., a leading provider of online training and HR solutions to healthcare organisations in the UK. I oversaw the organisation’s information security strategies, ensuring the protection of sensitive data and compliance with healthcare industry-specific regulations and standards. During my time as CISO, I established and maintained the Information Security Management System (ISMS) required for the organisation’s ongoing General Data Protection Regulation (GDPR) compliance and its ISO 27001 and PCI DSS certifications.
