Your inbox knows it when it sees it. The interminable status update that seems auto-posted by a bot, the slide deck that looks slick but crumbles to dust under questioning, the code patch that compiles fine but quietly breaks edge cases later. Workers even have a name for it now: workslop, the tide of AI-generated output that resembles work to anyone who doesn't look closely, but actually makes more work for someone else down the line.
- How AI-generated workslop is rising across workplaces
- The productivity paradox of AI: speed gains, quality costs
- Where teams are hurting most from AI-driven workslop
- Protecting your reputation when AI aids your output
- Practical fixes teams can adopt now to prevent workslop
- The bottom line on using AI without losing trust

How AI-generated workslop is rising across workplaces
Workslop is content produced by AI that passes at a quick glance but doesn't actually advance the ball.
In a joint survey of 1,150 employees by researchers at BetterUp Labs and the Stanford Social Media Lab, 40 percent reported receiving workslop in the past month. Most of it travels peer-to-peer, but direct reports are passing it up to managers as well.
Workslop does more than fritter time away; it redistributes the burden of cognition. The sender outsources drafting to a machine, and the receiver must decode, verify, and redo it. In the study in progress, recipients reported spending nearly two additional hours, 1 hour and 56 minutes to be exact, cleaning up after just one instance. And there is a reputational cost: some 48 percent of those surveyed said they consider chronic workslop creators less creative, less reliable, and even less capable.
None of this is an indictment of AI itself. Services like ChatGPT and Gemini can speed processes along, from drafting to debugging. But when they are employed not as an accelerator but as autopilot, the result is output that reads fine in a vacuum and falls down in context.
The productivity paradox of AI: speed gains, quality costs
AI's allure is speed and leverage. In some cases, such as coding, controlled experiments show gains in throughput and time to first draft. But the business story is more complex. MIT-affiliated research concluded that only a small fraction of companies, on the order of single digits, have seen straightforward, quantifiable returns on AI investment so far, suggesting a gap between pilot excitement and production reality.
Why the mismatch? Quality control erodes the time advantage. Hallucinations require fact-checking. Summarization sands off nuance. Context that lives in people's heads, such as customer history, policies, and edge-case heuristics, rarely makes it into a model's context window. By the time teammates fact-check the output, align it with strategy, and correct subtle semantic mistakes downstream, any time "gained" up front has been clawed back twice over.
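The clawback above is easy to see with back-of-envelope arithmetic. This sketch uses two figures from the survey (the roughly 116-minute cleanup cost and the 40 percent incidence, here loosely reused as a per-deliverable rate) plus an assumed drafting saving that is purely illustrative:

```python
# Back-of-envelope: net team time per AI-assisted deliverable.
sender_savings_min = 45    # ASSUMPTION for illustration, not from the study
cleanup_cost_min = 116     # 1 hour 56 minutes per workslop instance, per the survey
workslop_rate = 0.40       # illustrative reuse of the survey's 40% incidence figure

# Expected downstream cleanup cost, spread across all deliverables
expected_cost = workslop_rate * cleanup_cost_min
net = sender_savings_min - expected_cost
print(f"expected cleanup: {expected_cost:.1f} min, net saving: {net:.1f} min")
```

Under these assumptions the expected cleanup alone eats the entire drafting saving; the point is not the exact numbers but that the sender's visible speedup and the receiver's hidden cost must be netted together.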
Where teams are hurting most from AI-driven workslop
Professional services and tech teams are among those hit hardest, according to the BetterUp–Stanford research. Consider the patterns managers report:
- Code "help" that passes unit tests but breaks in production, forcing senior engineers to triage.
- Slide decks with polished, model-generated charts that fall apart at the first follow-up question about methodology.
- Auto-summarized meeting notes that omit dissent and action owners, creating misalignment that takes multiple follow-ups to fix.
- Email drafts that strike an encouraging tone but misstate a policy, triggering compliance reviews and eroding customer trust.
Every case looks like a step forward until someone has to rescue it. The hidden cost is not just time but morale. The sender loses credibility, and coworkers quietly reroute work to the people who can be relied on to deliver.
Protecting your reputation when AI aids your output
Workslop is a social signal. It says: "I outsourced the hard part and handed you the mess." Over time, that sticks. Research on team dynamics shows that perceptions of reliability harden quickly and are difficult to reverse. Come performance-review time, managers remember who made work easier and who did not.
Ethics frameworks reinforce the point. The National Institute of Standards and Technology's AI Risk Management Framework prioritizes human accountability and oversight. The ACM Code of Ethics values competence, honesty, and respect for colleagues' work. In practice, this means you own the output you submit, no matter which tool helped produce it.
Practical fixes teams can adopt now to prevent workslop
- Frame it yourself first. Write the thesis, outline, or test plan before you prompt. Let AI speed up the work, not decide what you think.
- Demand verifiable sources. If a model makes a claim, trace it to the report, data, or policy doc. No source, no ship.
- Set a review budget. Self-check every AI-assisted deliverable to a standard that keeps downstream review to minutes, not hours.
- Label assistance, not ownership. Disclose when AI was used and say, in plain language, who verified what. Keep a human accountable end to end.
- Measure the externality. Track response times, revision counts, and escaped-defect rates. If these rise with AI use, you don't have a productivity win; you have a workslop problem.
- Train judgment, not just prompting. Prompt engineering helps, but domain expertise and critical reading matter more. Pair juniors who use AI with seniors who can audit logic, not just grammar.
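The "measure the externality" step above can be sketched as a small script. The record fields and the sample numbers here are hypothetical, purely to show the shape of the comparison, not figures from the study:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Deliverable:
    ai_assisted: bool
    review_minutes: float   # time a reviewer spent before sign-off
    revisions: int          # rounds of rework requested
    escaped_defects: int    # problems found after delivery

def externality_report(items: list[Deliverable]) -> dict:
    """Compare downstream cost of AI-assisted vs. unassisted deliverables."""
    report = {}
    groups = (("ai", [d for d in items if d.ai_assisted]),
              ("human", [d for d in items if not d.ai_assisted]))
    for label, group in groups:
        if not group:
            continue
        report[label] = {
            "avg_review_minutes": mean(d.review_minutes for d in group),
            "avg_revisions": mean(d.revisions for d in group),
            "escaped_defect_rate": sum(d.escaped_defects for d in group) / len(group),
        }
    return report

# Hypothetical sample data, purely for illustration
history = [
    Deliverable(True, 110, 3, 1),
    Deliverable(True, 95, 2, 0),
    Deliverable(False, 30, 1, 0),
    Deliverable(False, 40, 1, 1),
]
print(externality_report(history))
```

If the "ai" column consistently shows more review minutes, more revisions, or more escaped defects than the "human" column, the tool is shifting cost downstream rather than saving it.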
The bottom line on using AI without losing trust
Your colleagues aren't anti-AI; they're anti-cleaning-up-after-you. Treat AI as a power tool that demands skill and responsibility. If the people downstream of your output have to interpret, fix, or redo it, you are not delegating to a tool; you are delegating to your team without permission. The fix is simple, if not easy: think first, check always, and ship only what you're willing to stand behind. That is how you harness the power of AI without burning down your reputation.