LinkedIn’s feed is changing, and creators are watching. Power users across industries have reported steep declines in views and engagement as the platform leans more heavily on large language models to filter what appears in people’s feeds. The company says it doesn’t factor gender, race or age into how posts are ranked, though a wave of creator experiments has revived an old question with new AI urgency: When algorithms change, who gets heard?
Why Users Say Their LinkedIn Reach Suddenly Dropped
The old playbook simply does not work, veteran posters contend. Tactics such as posting early and often or asking comment-bait questions seem less dependable, while long-form wisdom posts or educational threads occasionally take off. One sales influencer told me the system now tests whether a post communicates understanding and value, not how many fast reactions it garners.

LinkedIn cites scale as one reason. The company says its user base is bigger, posting is up 15% year over year and comments are up 24%. More supply means more competition for attention, and reach can be redistributed even when no individual user is doing anything “wrong.”
What LinkedIn Says Its AI Does to Rank Your Feed
LinkedIn’s engineering leadership says the platform now uses LLMs to better curate career-related content for members. Executives, including the company’s responsible AI lead, have stressed that demographic characteristics such as gender and race are not used as signals in deciding what people see in their feeds.
Instead, ranking is driven largely by how members behave: what they click on, save, linger on and share. That feedback loop shifts daily. The company also says it runs continual tests to optimize for relevance, which means the models and their weights can change frequently.
Inside the #WearThePants experiment and what it showed
The spark was #WearThePants, a grassroots experiment in which women invited male colleagues to publish near-identical messages from their own accounts. In many instances, men with much smaller followings saw significantly more reach. One leading participant said a post of hers was seen by hundreds, while the same text on a male peer’s account reached over 10,000 people, more than his total follower count.
More than three dozen women ran the same exercise and saw similar patterns. Some participants switched their profile name and photo to a male persona for a week. Others found that impressions doubled and engagement rose by roughly 27 percent when they shifted to a more clipped, direct style of writing, then tanked when they switched back.
LinkedIn insists this doesn’t mean gender is a ranking factor. The company maintains that visibility can rise temporarily through viral participation, reactivation after a posting lull or changes in network composition. In other words, the “experiment” may have been conflating multiple variables — style, topic, novelty, audience — under the heading of gender.
Signals the LinkedIn feed really rewards
There are clues to what works. Posts that share professional insights, career lessons, industry analysis or practical learning about work and the economy tend to perform well on LinkedIn. That fits an LLM-oriented ranking model that weighs grammatical correctness and topical relevance more than raw engagement bait.
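LinkedIn has not published its scoring formula, so any concrete model is speculative. But the weighting described above can be illustrated with a toy sketch in which topical relevance dominates raw reaction counts; every function name, weight and number here is hypothetical, not LinkedIn's actual system:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors (a crude
    stand-in for the semantic similarity an LLM embedding would give)."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_score(post_text: str, member_interests: str,
               fast_reactions: int,
               w_sem: float = 0.8, w_eng: float = 0.2) -> float:
    """Hypothetical blend: topical relevance dominates, raw engagement
    contributes only a capped minor term, so bait saturates quickly."""
    sem = cosine(Counter(post_text.lower().split()),
                 Counter(member_interests.lower().split()))
    eng = min(fast_reactions / 100.0, 1.0)  # cap: more bait stops helping
    return w_sem * sem + w_eng * eng

# An on-topic insight post beats an off-topic bait post even with
# far fewer fast reactions:
insight = rank_score("pricing strategy lessons from b2b sales",
                     "b2b sales pricing", fast_reactions=5)
bait = rank_score("agree? comment yes below",
                  "b2b sales pricing", fast_reactions=80)
```

Under this toy weighting, piling up reactions cannot compensate for low topical relevance, which matches creators' reports that comment-bait has become less dependable.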

Format and intent also matter. Posts that articulate a takeaway, use concrete examples and speak to a defined audience often outperform inspiration for its own sake or vague prompts. Profiles dense with role history, topical coherence and plausible interactions might get a boost if the model overweights expertise signals.
What Experts Are Seeing and What Is Still Unclear
Academic researchers warn, though, that demographic targeting is just one route to bias. Since LLMs are trained on human-generated input, subtle patterns can get picked up — language style itself, framing of topics, network effects — that serve as proxies for identity. That can produce results that are discriminatory in effect yet without any direct demographic input.
“Profile context, as well as behavioral history (as you indicate), are used in ranking systems,” said Sarah Dean, a computer science professor at Cornell. That means who you are connected to, what you interact with and how your audience behaves can potentially have an effect on not just what content is presented in front of you but also who sees your own activity. If men traditionally receive more responses on some topics, a relevance model could increase that legacy signal.
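Dean's point about legacy signals can be made concrete with a small simulation. The model below is entirely hypothetical: it never sees group identity, only past engagement, yet an initial engagement gap widens on its own once exposure is allocated by past performance:

```python
def feedback_loop(hist_engagement: dict[str, float],
                  rounds: int = 3) -> dict[str, float]:
    """Toy model of relevance-driven amplification. Each round, exposure
    is allocated proportionally to accumulated engagement, and new
    engagement scales superlinearly with exposure received, so a group
    that starts ahead pulls further ahead without identity ever being
    a ranking input."""
    eng = dict(hist_engagement)
    for _ in range(rounds):
        total = sum(eng.values())
        exposure = {g: v / total for g, v in eng.items()}
        # squaring is a modeling choice: visible posts attract
        # disproportionately more responses
        eng = {g: eng[g] + 100 * exposure[g] ** 2 for g in eng}
    return eng

# Group A starts with a 60/40 historical engagement advantage:
result = feedback_loop({"A": 60.0, "B": 40.0})
share_a = result["A"] / (result["A"] + result["B"])
```

Both groups keep gaining engagement in absolute terms, which is part of why this kind of drift is hard to notice from inside the system: nobody's numbers go to zero, the gap just compounds.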
LinkedIn has published research on fairness and says it continually assesses and calibrates its systems to minimize unintended bias. Still, the details of how feeds are ranked remain obscure. Full transparency invites gaming; opacity breeds mistrust. That tension is not unique to LinkedIn, but the stakes are higher on a platform where visibility can translate into jobs, revenue and business opportunities, or the loss of them.
What to do while the dust settles on LinkedIn feeds
Creators can hedge against volatility. Anchor posts in concrete professional insight, cite data and use plain language. Tighten headlines, lead with value and add a clear takeaway. The longer you stick with a subject, the more topical authority a model can infer; engaging with like-minded peers in your niche helps too.
Audit your audience mix and feedback loops. If most of your network responds to one kind of content in one way, diversify connections and topics within the general subject matter you bring expertise to. Test formats — short analysis, visual explainers or mini case studies — and track outcomes over a few weeks, not just one post.
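One lightweight way to do that tracking, sketched here with hypothetical field names, is to log each post's format and impressions and compare per-format averages rather than reacting to any single post:

```python
from statistics import mean

def format_report(posts: list[dict]) -> dict[str, float]:
    """Average impressions per post format, so week-over-week decisions
    rest on many posts rather than one outlier."""
    by_fmt: dict[str, list[int]] = {}
    for p in posts:
        by_fmt.setdefault(p["format"], []).append(p["impressions"])
    return {fmt: mean(vals) for fmt, vals in by_fmt.items()}

# Example log after a couple of weeks of testing:
report = format_report([
    {"format": "short analysis", "impressions": 900},
    {"format": "short analysis", "impressions": 1100},
    {"format": "mini case study", "impressions": 400},
])
```

Extending the log with dates would let you compare the same format across weeks, which is the comparison that actually matters when the ranking model itself keeps shifting.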
The upshot: LinkedIn’s algorithm is increasingly focused on what language means and how relevant it seems, not just engagement numbers. That can feel like a black box, particularly when real-world biases creep in through indirect signals. For now, with the system as a given, the clearest path is to learn how it works today, figure out who you want to reach, and say something specific, useful and credible, consistently enough for a model that adapts slowly.