LinkedIn is broadening how it uses member data to train its generative AI, switching the setting on by default in more regions and requiring users to opt out if they don’t want to be included. The company says the change will improve AI-written posts and summaries, refine job matches and surface more relevant opportunities, but it also raises familiar questions about consent and control of professional data.
What changed in LinkedIn’s AI training policy rollout
LinkedIn will now use member data for AI training in the EU, EEA, Switzerland, Canada and Hong Kong, in addition to the countries where the policy already applies. The setting is on by default. LinkedIn says it is training “content-generating AI models” to improve the member experience, and that it will not train on private messages.

That expansion follows a stop-start history in Europe. LinkedIn paused similar training plans in the UK last year while the UK’s Information Commissioner’s Office examined them. The new rollout suggests LinkedIn believes it has adjusted its approach to comply with regional privacy rules while scaling its AI features.
Data categories LinkedIn will use for AI training
AI models can draw on profile data, including your name, photo, headline, current and past positions, education, location, and the skills, certifications and publications you have added to your profile or received endorsements for. Content you share publicly on LinkedIn, such as posts, articles, comments, polls and contributions to collaborative articles, is also covered.
The company may also use what you type into its generative AI features, along with job-related materials you store on LinkedIn, such as resumes and answers to screening questions. Stories, group content and group messages can also feed training, as can feedback signals (likes or dislikes) on AI suggestions and user reports on AI performance.
What data is excluded from LinkedIn’s AI training
LinkedIn says it does not train on private messages. It also excludes data from members under 18, and it does not train on login credentials, payment methods, credit card numbers, or member-provided salary or job application data tied to a specific individual.
Why LinkedIn is expanding data use for AI training
LinkedIn has been adding AI across the site, from writing assistance that helps compose posts and InMails to features that summarize long threads and fine-tune job recommendations. Training models on real job-market activity helps them understand workplace language, job titles and skill signals. And with a community that has surpassed 1 billion members, the company has an unusually large professional dataset, one where even small accuracy improvements can benefit recruiters, job seekers and salespeople.
The latest move also fits Microsoft’s broader AI strategy, which spans Copilot across its productivity apps. At the same time, Microsoft and other tech companies have faced lawsuits and regulatory scrutiny over how training data is sourced, including accusations that personal communications were used without adequately informed consent. Those conflicts have focused attention on defaults, transparency and the effectiveness of opt-outs.
Privacy and legal context for LinkedIn’s AI data use
In Europe, companies generally rely on “legitimate interests” or consent under the GDPR when processing personal data for AI training. Default-on settings can be controversial if users are not clearly informed or if the opt-out is hard to find. Privacy advocates have urged platforms to limit data collection, apply strong anonymization and preserve user controls, particularly when the data can reveal employment history, education and professional networks.

LinkedIn adds that it is not training on the most sensitive categories of member data and offers a control to opt out of future use. As with most AI systems, opting out stops new learning from your data but does not necessarily remove information that has already been processed.
How to opt out of LinkedIn’s AI training data use
Opting out takes about a minute. On desktop or mobile:
- Open LinkedIn and log into your account. Tap your profile photo, then select Settings & Privacy.
- Go to Data Privacy.
- Find How LinkedIn uses your data and click Data for Generative AI Improvement.
- Toggle the setting to Off.
Revisit the control occasionally, since policy changes and new features can introduce new default settings. If you have several accounts or public profiles, repeat these steps for each one.
What to watch next as LinkedIn expands AI training
Check for product updates and privacy notices — especially if LinkedIn adds new AI features that depend on different data sources.
Review your public profile fields and previous posts to make sure you are comfortable with what’s visible. Professionals in regulated industries may wish to apply extra restrictions on visibility for sensitive details of their work, endorsements or group discussions.
The bottom line: LinkedIn’s AI ambitions are ramping up, and its default is now to use more of your data. If that is not what you want, the opt-out switch is right there. Flip it, and keep adjusting your settings as the platform evolves.
