Your team isn’t crazy: The tidal wave of AI-powered “good enough” content is eroding trust and increasing rework. Now, new research from BetterUp Labs and the Stanford Social Media Lab has a name for the phenomenon — “workslop,” or slick-looking content that doesn’t actually get work done — and it’s getting on employers’ and colleagues’ nerves.
- What workslop looks like in the modern office today
- The perception penalty is real and undermines trust
- The productivity paradox beneath the hype
- Why workslop is so easily spread across organizations
- Using AI without annoying your team or slowing progress
- A manager’s playbook for eliminating workslop
- The bottom line on using AI without creating workslop

What workslop looks like in the modern office today

In a survey of 1,150 employees, 40% reported receiving workslop in the past month. Most of it moves peer to peer, though plenty flows upward as well, leaving managers to fix or redo it. Recipients then spend an average of 1 hour and 56 minutes cleaning it up, according to the research, which was summarized in Harvard Business Review.
Workslop manifests as slide decks packed with mushy bullets, code that builds but breaks on real-world edge cases, and summaries that remove all nuance. It’s work that seems like progress but passes the actual thinking and validation to the person downstream.
The pattern isn’t unique to any one sector, but professional services and technology report outsized pain. When accuracy is what clients are paying for, AI filler gets expensive, fast.
The perception penalty is real and undermines trust
Half of the workers in the BetterUp–Stanford study said they were less inclined to see colleagues who send workslop as creative, reliable and skilled. That reputational drag is hard to shake, even if the sender feels they are "moving fast" or "delegating to the machine."
There’s also a hidden tax: the reviewer becomes the de facto subject-matter expert, fact-checking citations, vetting the logic and rebuilding context the model glossed over. Eventually, teams quietly start routing around repeat offenders.
The productivity paradox beneath the hype
AI demonstrably makes some tasks faster, especially for novices: drafting and customer support have shown meaningful gains, according to the Stanford Institute for Human-Centered AI’s AI Index report. But quality tends to slip without human supervision, and complex tasks still require expert judgment.
At the enterprise level, the returns are mixed. Recent research from MIT Sloan Management Review and industry partners finds that only a small fraction of organizations report clear returns on investment from generative AI. The gap between pilot demos and consistent production value remains wide.
Workslop proliferates in that gap: the appearance of progress flourishes without verifiable proof. The result is a backlog of corrections that wipes out any time saved at the front end.
Why workslop is so easily spread across organizations
Incentives reward visible output far more than verified quality. AI tools make it nearly effortless to whip up a draft, and most teams haven’t settled the basics: when AI output is ready to share, what to disclose about how it was produced and what bar it must clear. In the absence of policy, employees turn to "shadow AI," with no process or documentation.
Throw in overloaded inboxes and status updates that reward "done" over "done well," and it’s hardly surprising that people reach for the fastest path to a deliverable, even when it shifts the load to whoever is next in line.
Using AI without annoying your team or slowing progress
- Own the output. Even if a model drafted every word, you are responsible for the facts, context and style. Never forward raw generations.
- Disclose and document. Attach a brief note: what the model produced, what you prompted, which sources the content draws on and what you verified. For code, include tests and describe what you ran.
- Raise the bar, not your voice. Use AI for scaffolding (outlines, alternative phrasings, test cases), then layer on your own analysis, data and decisions. If the recipient can’t use it without rework, it isn’t ready.
- Adopt a “recipient ROI” rule. Your application of AI should cut total system time, not just your personal time. If your draft saves you 20 minutes but costs your teammate an hour, it’s workslop.
- Pressure-test the output. Ask the model to debate itself, expose assumptions, and reveal failure modes. Double-check all claims against a primary source or authoritative data set before hitting the send button.
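The "recipient ROI" rule above reduces to simple arithmetic. A minimal sketch, assuming a purely illustrative helper (the function name and the idea of measuring in minutes are my own, not from the study):

```python
def recipient_roi(minutes_saved_by_sender: float,
                  minutes_of_rework_for_recipient: float) -> float:
    """Net time an AI-assisted draft saves the whole system, in minutes.

    A negative value means the draft is workslop: it shifted more work
    downstream than it saved upstream.
    """
    return minutes_saved_by_sender - minutes_of_rework_for_recipient

# The example from the text: the sender saves 20 minutes,
# but a teammate spends an hour cleaning up.
net = recipient_roi(20, 60)
print(net)  # -40: a net loss of 40 minutes, so it's workslop
```

The point of writing it down is that the sender’s time savings alone never appear on the left side of the comparison; only the system total counts.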
A manager’s playbook for eliminating workslop
- Set team norms. Define acceptable AI use, citation standards and what "done" means. Require a brief verification note for AI-assisted work.
- Train for judgment. Run hands-on clinics on prompt design, critical reading and revision. Focus on where AI excels (structure, exploration) and where humans must step in (choices, trade-offs, ethics).
- Integrate checks into the workflow. Build in automated linting, unit tests, fact-checking scripts and style guides. Make the cost of sloppy inputs visible, and as a leader, track how much time actually goes into reviews.
- Align incentives. Reward fewer, better artifacts, and measure impact, not output volume. Recognize team members who minimize rework downstream.
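The norms and workflow checks above can be automated in part. A minimal sketch of a pre-send gate that blocks an AI-assisted draft lacking the verification note the team norms call for; the required section names are hypothetical, not a standard:

```python
# Hypothetical checklist: sections a verification note must contain
# before an AI-assisted draft is shared. Names are illustrative only.
REQUIRED_SECTIONS = ("Prompt:", "Sources:", "Checked:")

def verification_note_missing(draft: str) -> list:
    """Return the required sections the draft's note is missing."""
    return [s for s in REQUIRED_SECTIONS if s not in draft]

draft = """Q3 summary (AI-assisted)
Prompt: summarize the Q3 sales report
Sources: q3_sales.xlsx
Checked: totals against the finance dashboard
...body...
"""

missing = verification_note_missing(draft)
print("ready to share" if not missing else f"blocked, missing: {missing}")
```

A check like this could run as a pre-commit hook or a CI step; the value isn’t the string matching, it’s making the disclosure habit non-optional.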
The bottom line on using AI without creating workslop
Generative AI can speed up meaningful work, but only when paired with rigor and accountability. Treat models like interns: useful, fast and never the last word. Your colleagues aren’t anti-AI; they’re anti-slop. Send them fewer drafts and more decisions.