A new report is raising alarms over how LinkedIn’s ID verification program handles the personal information of users seeking a blue check. The Microsoft-owned professional network relies on third‑party provider Persona to verify government IDs, and fresh scrutiny suggests that data collected during that process may be shared more broadly than users expect.
The claims, surfaced by an independent researcher writing under the name “rogi” on The Local Stack and highlighted by business media, allege that Persona’s checks extend far beyond a quick photo match. The result, privacy advocates warn, is a sensitive dossier built from a single verification flow—one that may travel across a network of partners and subprocessors.
- What the Report Says About LinkedIn ID Verification Scope
- Who Else Sees the Data Collected During Verification
- Persona Pushes Back on Claims of Broad Data Sharing
- Verification’s Expanding Footprint Across Major Platforms
- Why the Stakes Are High for Biometric and ID Data
- What Users Can Do Now to Protect Their LinkedIn Privacy

What the Report Says About LinkedIn ID Verification Scope
According to the analysis, Persona collected a wide array of identifiers during LinkedIn's ID check: full name, passport photo, a live selfie, facial biometric data, details read from an NFC passport chip, nationality, sex, date of birth, and age. On top of the document data, the report lists contact and technical metadata such as email, phone number, physical address, IP address, geolocation, device type, MAC address, browser, OS version, and language settings.
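To make the scope of that list concrete, the reported identifiers can be grouped into a single record. The field names below are a paraphrase of the report's enumeration, not Persona's actual schema:

```python
from dataclasses import dataclass, fields
from typing import Optional

# Hypothetical record grouping the identifiers the report describes;
# field names and types are illustrative, not Persona's real data model.
@dataclass
class VerificationRecord:
    # Document and biometric data
    full_name: str
    passport_photo: bytes
    live_selfie: bytes
    facial_biometric_template: bytes
    nfc_chip_data: bytes
    nationality: str
    sex: str
    date_of_birth: str
    # Contact and technical metadata
    email: str
    phone: str
    physical_address: str
    ip_address: str
    geolocation: Optional[str]
    device_type: str
    mac_address: str
    browser: str
    os_version: str
    language: str
```

Seen this way, the privacy concern is less about any single field than about all eighteen of them living in one linked record.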
The researcher also described “hesitation detection,” which times how long a user lingers at each step, and copy‑paste detection—behavioral signals commonly used to flag fraud. Individually, these signals may seem routine in fintech and online safety. Combined with a passport scan and biometric match, they amount to an unusually rich identity profile.
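As a rough illustration (this is not Persona's implementation), both behavioral signals reduce to simple arithmetic over event telemetry: hesitation is dwell time per step above a threshold, and paste detection compares the number of keystroke events against the final field length. The function names and the 30-second threshold are invented for the sketch:

```python
def flag_hesitation(step_durations, slow_threshold_s=30.0):
    """Return the flow steps where the user lingered past the threshold.

    step_durations maps step names to seconds spent on that step.
    """
    return [step for step, secs in step_durations.items()
            if secs > slow_threshold_s]

def looks_pasted(field_value, key_event_count):
    """Heuristic paste detection for a form field.

    A typed value produces roughly one key event per character, while a
    pasted value arrives with few or no key events.
    """
    return len(field_value) > 0 and key_event_count < len(field_value) // 2
```

A user who stalls on the passport-scan step, or whose name field fills in with two keystrokes, would trip these checks, which is precisely why such signals are common in anti-fraud tooling.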
Who Else Sees the Data Collected During Verification
The crux of the concern is not just what’s collected, but where it may go. Persona’s documentation references a “global network of data partners” and lists subprocessors that include major cloud providers such as Amazon Web Services and Google Cloud Platform, along with AI companies like OpenAI and Anthropic. The terms also describe scenarios in which data can be disclosed to law enforcement upon request.
In practice, subprocessor rosters often function as a menu of potential providers rather than a guarantee that any single customer’s data touches each firm. Still, privacy researchers argue that the complexity and opacity of these chains make it hard for users to give informed consent, particularly when biometric templates and government IDs are involved.
Persona Pushes Back on Claims of Broad Data Sharing
Persona co‑founder and CEO Rick Song responded publicly to the circulating claims, stating that no personal data is used for AI or model training and that information is used exclusively to confirm identity. He added that biometric data is deleted immediately after processing and other personal data is deleted within 30 days.
Song also said that the subprocessor list reflects the full set of vendors used across all Persona customers and is not a definitive list for any one client workflow. In other words, the presence of AI companies or other vendors on that page does not mean LinkedIn verifications specifically rely on them. Persona said it plans to clarify the list to make customer‑specific usage more transparent.

Verification’s Expanding Footprint Across Major Platforms
LinkedIn, which reports serving more than a billion members worldwide, has leaned into verification to curb impersonation and credential fraud targeting recruiters and job seekers. Persona, meanwhile, has become a common backbone for age and ID checks on large platforms, including Roblox and Discord.
That ubiquity is part of why the story has traction. A security researcher recently claimed Persona performs 269 distinct checks for Discord verifications, suggesting the company's toolset is broad and adaptable. Add to that the involvement of prominent investors linked to the surveillance industry, and critics argue the incentives are stacked toward expansive data collection.
Why the Stakes Are High for Biometric and ID Data
Professional profiles are high‑value targets for fraudsters, and LinkedIn accounts commonly anchor job applications, recruiting pipelines, and corporate messaging. When a verification vendor aggregates biometrics, government IDs, device fingerprints, and behavioral metrics, the fallout from a breach or misuse could be severe and difficult to remediate—unlike a password, you can’t reset your face or passport history.
Regulatory frameworks are catching up. In the European Union, GDPR treats biometrics as special‑category data that demands strict necessity and proportionality, documented impact assessments, and clear deletion timelines. In the United States, state laws such as the California Consumer Privacy Act and Illinois’ Biometric Information Privacy Act add disclosure and consent requirements with meaningful penalties for violations. The more vendors involved, the more complex compliance and accountability become.
What Users Can Do Now to Protect Their LinkedIn Privacy
For professionals weighing verification, the prudent move is to read the privacy terms for both LinkedIn and Persona before uploading an ID. If you proceed, consider using a dedicated device and network, ensure your LinkedIn account has strong multi‑factor authentication, and monitor your profile for unusual activity or login alerts.
Users can also exercise data rights where available: request copies of verification data, ask for deletion confirmations, and verify how long information is retained. Ultimately, the onus is on platforms to balance fraud prevention with data minimization and to bring sunlight to their vendor stacks. Until then, the blue check may carry a bigger privacy trade‑off than many expect.
