LinkedIn is expanding the scope of its generative AI initiative to member profiles throughout the European Union, EEA, United Kingdom and Switzerland. Profile information and on-platform activity will be used by default to help improve its AI systems, according to the company, though individual users can opt out. Minors’ profiles are filtered out.
What Is Changing as LinkedIn Expands EU AI Training
Until now, LinkedIn’s AI training has been limited to certain locales. The expansion means that professional profiles in Europe can be fed into the same improvement loop that powers features like content suggestions, job matching, recruiter tools and writing assistants. LinkedIn frames this as using member-generated content to make its on-platform AI features more relevant and dependable.

In practical terms, the setting that lets LinkedIn use your data for “generative AI improvement” will be switched on by default once the change reaches your account. If you don’t want to participate, you can turn it off in your privacy settings. Data from users under 18 will not be used for these purposes, LinkedIn says.
What Profile Information Can Be Used for AI Training
Based on LinkedIn’s privacy and AI documentation, the kinds of data that could feed model refinement are largely what you would see on, or derive from, your profile and activity on the service:
- names
- headlines
- job titles and experience
- education
- skills
- endorsements
- recommendations you have given and received
- posts you’ve shared, including reshares of content from others in your network or outside sources
- comments
- reactions to publicly visible content from other LinkedIn members
Private messages and certain sensitive categories of data, LinkedIn’s materials state, are not used to train the generative AI, and information may be aggregated or de-identified to reduce privacy risk.
The company’s logic is straightforward: the more the training data resembles real professional writing and career histories, the better the platform’s AI can summarize people’s profiles, suggest edits, surface relevant jobs or assist recruiters. That mirrors broader industry practice among social platforms, where user-generated content shapes the AI systems that automate curation and search.
How This Fits With EU Privacy Law and GDPR Duties
For European users, the legal background is just as important as the feature set. Platforms must state a lawful basis for processing personal data under the General Data Protection Regulation. Many services rely on “legitimate interests” to develop better products, but that is not a get-out clause: it comes with obligations, including a balancing test, transparency and the right for individuals to object. The European Data Protection Board has consistently emphasised that people’s reasonable expectations and control mechanisms, such as an easy opt-out, are key to striking that balance.
Data relating to EU users is handled by LinkedIn Ireland as controller, so the likely lead regulator is the Irish Data Protection Commission. The UK’s Information Commissioner’s Office has also issued guidance that training AI on user data requires solid grounds, and an opt-out where processing relies on legitimate interests. Consumer groups like BEUC have called for stricter scrutiny of training generative AI on personal data, demanding stronger protections and more clarity about what counts as lawful use.
Meanwhile, the EU’s incoming AI rules are expected to raise expectations further around transparency and data governance. But while those rules concentrate on model risk and oversight, it is still the GDPR that governs how user data may be reused to improve such models.

How to Opt Out Quickly on Web and Mobile Settings
To manage this setting:
- On the web: log in, open your profile menu, go to Settings & Privacy, select Data privacy, then open Data for Generative AI Improvement and switch it off.
- On the mobile app: tap Settings from your profile, then Data privacy, and switch off the same option.
Once the setting is off, LinkedIn says no new data you create will be used for AI training.
If you want your historical information removed, you can exercise your GDPR rights by objecting to the processing or requesting deletion.
LinkedIn provides forms in its Privacy Center for these requests. You have the right to object to a company using your personal data under legitimate interests, or for direct marketing where applicable; if you object, the service should stop the processing unless it can demonstrate compelling legitimate grounds that override your rights and interests.
Why Professionals Need to Know About LinkedIn’s AI Use
For most people, the choice involves a trade-off between utility and control. Feeding real-world profiles and posts into the AI that runs LinkedIn could improve job recommendations, dial down recruiter noise and make writing tools more context-aware. LinkedIn has more than a billion members worldwide, with Europe a large share of them, so its ranking and matching algorithms can have an outsized economic impact on hiring pipelines.
But professionals also have a point: profile information can be sensitive in context, and generative models can mirror or amplify bias in their training data. The risk is not just to privacy but also to accuracy and fairness. Regulators and researchers, including the UK ICO and the Ada Lovelace Institute, have highlighted the importance of meaningful choice, impact assessments and mechanisms for reversal or data deletion. Similar conversations are playing out elsewhere: big social networks are tweaking default settings and opt-out pathways for AI training.
The bottom line: if you want LinkedIn’s AI to be informed by your professional footprint, leave the setting active; if you’re uncomfortable having that data used to improve the models, the tools to opt out are there, and in Europe your rights travel with your data. Either way, revisit your privacy settings from time to time; as AI capabilities change, so does the law that governs them.
