Quiet quitting had its moment. Now a new villain is corrupting team culture: “workslop,” AI-generated output that looks polished but lacks the substance to move work forward. New research from BetterUp Labs and Stanford’s Social Media Lab, reported in Harvard Business Review, warns that this creeping sludge of low-quality deliverables is eroding trust, wasting time and dragging down productivity.
What “workslop” really is and why it spreads
Workslop, the researchers said, is AI-generated content that looks decent at a glance but requires someone else to interpret it, correct it or redo it. In their ongoing survey of 1,150 knowledge workers, 40 percent reported receiving workslop in the past month. It mostly travels peer-to-peer, though managers are also catching it from direct reports.
The burden isn’t theoretical. Recipients said each instance added roughly one hour and 56 minutes of cleanup. Half of respondents said they now regard colleagues who turn in workslop as less creative, reliable and capable, a reputational hit that outlasts any single task.
The issue cuts across industries, with professional services and technology teams feeling it most. That lines up with the ascendance of AI assistants for drafting documents, writing code, building slides and summarizing research — tasks where speed is seductive and superficiality can be difficult to detect until late in the process.
Why AI’s productivity promise boomerangs
AI can accelerate good work. It can also accelerate bad work. When workers outsource cognitive labor to a model without adding their own expertise, they tend to produce something that feels plausible but says little. That gap between appearance and substance pushes effort downstream, turning one person’s shortcut into another person’s overtime.
There’s also a widening gap between AI hype and the value actually delivered. Despite heavy investment, a recent MIT report finds that only around five percent of companies are seeing a “significant” return on their AI spending. Teams leaning on AI to cover head count gaps or outpace competitors often find frustration and rework instead, particularly when governance and training lag behind adoption.
Here’s a typical pattern: an assistant assembles a sales deck that looks sharp at first glance, but its claims are unsourced and its messaging misses what the client actually does. Or an AI-written code patch compiles but introduces edge-case failures that QA has to go hunting for. In each case, the sender “saves time,” but the team foots the bill.
Morale takes a hit as trust erodes under workslop
Workslop is not only a quality problem; it is also a trust problem. Teams run on a psychological contract: I bring my best thinking and you bring yours. When collaborators receive glossy but shallow submissions, they feel condescended to and overworked. The cleanup work earns no credit, yet it consumes attention and deadlines, a recipe for resentment.
Leaders should name the social signal: work that appears automated, with no discernible human judgment, reads as disengagement, even if the sender considers it an efficient response. The BetterUp findings underscore that peers and managers quickly form judgments about “worksloppy” contributors, and that those perceptions shape opportunities, feedback, reviews and retention over the long term.
From slop to substance: sensible solutions
- Set a “human-in-the-loop” standard. Require that any AI-assisted deliverable show its creator’s value add: the decisions made, the assumptions stress-tested and what was validated. A brief provenance note covering what was generated, what was revised and which sources were fact-checked brings invisible labor into view and cuts down rework.
- Define what “good” looks like. Post checklists for common outputs (memos, decks, code reviews, analyses) that spell out minimum standards for evidence, citations, test coverage and stakeholder alignment. If AI drafts can’t meet that bar, they don’t ship.
- Measure the downstream cost. Track rework rates, escalation counts and mean time to correction for AI-touched work (a rough tracking sketch follows this list). If cleanup time climbs, pause the automation in that workflow and retrain. Treat models like junior teammates: at a minimum, they need supervision, feedback and scope boundaries.
- Upgrade skills, not just tools. Invest in training for prompt design, critical reading and fact-checking. Share team prompting guidelines that draw on company style guides, code standards and verified data. Pair this with red-teaming of key outputs so problems surface before clients or executives ever see them.
- Build governance that enables. Use frameworks like the NIST AI Risk Management Framework to set policies on acceptable use, data handling and review gates. Clear lanes reduce fear, inconsistency and the shadow AI workarounds that generate slop.
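To make the measurement bullet concrete, here is a minimal sketch, in Python, of how a team might log AI-touched deliverables and compute cleanup metrics. The Deliverable fields and the downstream_cost helper are hypothetical illustrations, not part of the BetterUp-Stanford research or any specific tool; adapt the names to whatever your review or ticketing system actually records.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Deliverable:
    """One AI-assisted deliverable and what happened to it downstream.

    All field names are illustrative; map them to whatever your
    ticketing or review system actually captures.
    """
    author: str
    ai_assisted: bool
    cleanup_minutes: float = 0.0   # time reviewers spent fixing or redoing it
    escalated: bool = False        # needed a manager or senior reviewer
    reworked: bool = False         # sent back for another pass
    provenance_note: str = ""      # what was generated, revised, fact-checked

def downstream_cost(items: list[Deliverable]) -> dict[str, float]:
    """Summarize the cleanup burden carried by AI-touched work."""
    ai_items = [d for d in items if d.ai_assisted]
    if not ai_items:
        return {}
    return {
        "mean_cleanup_minutes": mean(d.cleanup_minutes for d in ai_items),
        "rework_rate": sum(d.reworked for d in ai_items) / len(ai_items),
        "escalation_rate": sum(d.escalated for d in ai_items) / len(ai_items),
    }

# Example: two AI-assisted drafts, one clean, one that cost ~2 hours of cleanup.
log = [
    Deliverable("alex", ai_assisted=True, cleanup_minutes=116, reworked=True,
                provenance_note="draft generated; claims not yet sourced"),
    Deliverable("sam", ai_assisted=True, cleanup_minutes=5,
                provenance_note="outline generated; figures and sources verified"),
]
print(downstream_cost(log))
# {'mean_cleanup_minutes': 60.5, 'rework_rate': 0.5, 'escalation_rate': 0.0}
```

Even a crude log like this makes the cleanup tax visible enough to decide whether an automation should stay in a workflow or go back for retraining.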
AI that earns trust by improving quality and clarity
The idea isn’t to outlaw generative tools; it’s to hold their output to a standard. When teams treat AI as a collaborator to be directed, not a vending machine for instant deliverables, quality rises and so does morale. The BetterUp-Stanford research is a cautionary tale, but a hopeful one: productivity gains show up when people stay accountable for the thinking and machines handle the grunt work, out in the open rather than behind a shiny curtain.