
‘Workslop’: AI can do everything but paperwork

By Bill Thompson
Last updated: September 24, 2025 3:07 am
Technology · 7 Min Read

Quiet quitting had its moment. Now a new villain is corrupting team culture: “workslop,” AI-generated output that looks polished but lacks the substance to move work forward. New research from BetterUp Labs and Stanford’s Social Media Lab, reported in Harvard Business Review, warns that this creeping sludge of low-effort deliverables is eroding trust, wasting time and dragging down productivity in the process.

What “workslop” really is and why it spreads

Workslop, the researchers say, is AI-generated content that looks decent at a glance but requires someone else to analyze it, correct it or redo it. In their ongoing survey of 1,150 knowledge workers, 40 percent said they had received workslop in the past month. It mostly travels peer-to-peer, though managers are also catching it from subordinates.

[Image: AI automation struggles with paperwork and administrative forms]

The burden isn’t theoretical. Recipients said each instance added roughly one hour and 56 minutes of cleanup. Half of respondents said they regard colleagues who turn in workslop as less creative, reliable and capable, a reputation hit that outlasts any single task.

The issue cuts across industries, with professional services and technology teams feeling it most. That lines up with the ascendance of AI assistants for drafting documents, writing code, building slides and summarizing research — tasks where speed is seductive and superficiality can be difficult to detect until late in the process.

Why the productivity promise of AI boomerangs

AI can accelerate quality work. It can also accelerate bad work. When workers outsource cognitive labor to a model without adding their own expertise, they tend to produce something that feels plausible but is empty. That gap between appearance and substance pushes effort downstream, turning one person’s shortcut into another person’s overtime.

There’s also a widening gap between AI hype and delivered value. Despite heavy investment, a recent MIT report finds that only around five percent of companies are seeing a “significant” return on their AI spending. Teams relying on AI to make up for headcount or outpace competitors often find frustration and rework instead, particularly when governance and training lag behind adoption.

[Image: Robot overwhelmed by paperwork, highlighting AI’s struggle with administrative tasks]

Here’s a typical pattern: an assistant puts together a sales deck that looks sharp on the first page, but its claims are unsourced and its messaging has nothing to do with what the client does. Or an AI-written code patch compiles but introduces edge-case failures that QA has to go hunting for. In each case the sender “saves time” and the team foots the bill.
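
To make the code-patch case concrete, here is a minimal, hypothetical sketch in Python; the function, inputs and failure modes are invented for illustration, not drawn from the research. The draft runs on the happy path, so it ships, and the edge cases surface later as someone else’s debugging.

    # Hypothetical AI-drafted helper: it compiles and passes a smoke test, so it looks done.
    def parse_price(text: str) -> float:
        """Convert a price string like '$19.99' to a float."""
        return float(text.strip().lstrip("$"))  # quietly ignores other real-world formats

    # The happy path works, so the sender ships it...
    assert parse_price("$19.99") == 19.99

    # ...and the edge cases land downstream, where QA has to hunt for them.
    for sample in ["$1,234.56", "-$5.00", ""]:
        try:
            print(repr(sample), "->", parse_price(sample))
        except ValueError as err:
            print(repr(sample), "-> ValueError:", err)  # all three samples raise here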

Morale takes a hit as trust erodes under workslop

Workslop is not only a quality problem; it is a trust problem. Work runs on a psychological contract: my best thinking in exchange for yours. When collaborators receive glossy but shallow submissions, they feel condescended to and overworked. The cleanup earns no credit, yet it chews up attention and deadlines, a recipe for resentment.

Leaders should also read the social signal: work that appears automated without discerning human judgment comes across as disengagement, even when the sender intended it as efficiency. BetterUp’s findings underscore that peers and managers quickly form judgments about “worksloppy” contributors, and that those perceptions have a lasting impact on opportunities, feedback, reviews and retention.

From slop to substance: sensible solutions

  • Set a “human-in-the-loop” standard. Require that any AI-assisted deliverable carries its creator’s value add: the decisions made, the assumptions stress-tested, what was validated. A brief provenance note recording what was generated, what was revised and which sources were fact-checked makes invisible labor visible and cuts down rework (a minimal sketch of such a note follows this list).
  • Define what “good” looks like. Post checklists for common outputs (memos, decks, code reviews, analyses) that set minimums for evidence, citations, test coverage and stakeholder alignment. If an AI draft can’t meet that bar, it doesn’t ship.
  • Measure the downstream cost. Track rework rates, escalation rates and mean time to correction for AI-touched work (see the metrics sketch after this list). If cleanup time rises, pause the automation in that workflow and retrain. Treat models like junior teammates: at a minimum, they need supervision, feedback and scope boundaries.
  • Upgrade skills, not just tools. Invest in training for prompt design, critical reading and fact-checking. Share team prompt libraries built on company style guides, code standards and verified data. Pair this with red-teaming of key outputs so problems surface before clients or executives ever see them.
  • Build governance that enables. Adopt frameworks like the NIST AI Risk Management Framework to set policies on permissible use, data handling and review gates. Clear lanes reduce fear, inconsistency and the shadow AI that generates slop.
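
As a starting point for the provenance note described above, here is a minimal sketch; the ProvenanceNote class and its field names are invented for illustration, not a standard schema or anything prescribed by the researchers.

    from dataclasses import dataclass, field

    @dataclass
    class ProvenanceNote:
        """Illustrative provenance record attached to an AI-assisted deliverable."""
        generated_by: str                       # tool or model that produced the draft
        decisions_made: list[str] = field(default_factory=list)
        assumptions_tested: list[str] = field(default_factory=list)
        sources_fact_checked: list[str] = field(default_factory=list)

    # Filed alongside the deliverable so reviewers can see the human value add.
    note = ProvenanceNote(
        generated_by="drafting assistant",
        decisions_made=["cut the unverified market-size claim"],
        assumptions_tested=["pricing tiers match the current rate card"],
        sources_fact_checked=["Q3 revenue figures", "client's published roadmap"],
    )

Even as a plain checklist rather than code, the same few fields do the work: they show a reviewer exactly where human judgment entered the loop.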
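
And for the downstream-cost bullet, a minimal sketch of the arithmetic, assuming a team logs when AI-touched work is delivered and when it is corrected; the log and figures are invented for illustration.

    from datetime import datetime
    from statistics import mean

    # Hypothetical log of AI-touched deliverables: (delivered_at, corrected_at or None).
    cleanup_log = [
        (datetime(2025, 9, 1, 9, 0), datetime(2025, 9, 1, 11, 10)),   # needed rework
        (datetime(2025, 9, 2, 14, 0), None),                          # shipped clean
        (datetime(2025, 9, 3, 10, 0), datetime(2025, 9, 3, 13, 45)),  # needed rework
    ]

    reworked = [(sent, fixed) for sent, fixed in cleanup_log if fixed is not None]
    rework_rate = len(reworked) / len(cleanup_log)
    mttc_hours = mean((fixed - sent).total_seconds() / 3600 for sent, fixed in reworked)

    print(f"rework rate: {rework_rate:.0%}")               # 67%
    print(f"mean time to correction: {mttc_hours:.1f} h")  # ~3.0 h

If those numbers trend upward, that is the signal to pause the automation in that workflow and retrain.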

AI that earns trust by improving quality and clarity

The idea isn’t to outlaw generative tools; it’s to hold them accountable. When teams treat AI as a collaborator to be directed rather than a vending machine for instant deliverables, quality goes up and so does morale. The BetterUp-Stanford research is a cautionary tale, but a hopeful one: productivity gains show up when people stay accountable for the thinking and machines do the grunt work in the open, not behind a shiny curtain.

Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.