Here is an excerpt from an article written for Harvard Business Review. To read the complete article, browse others, sign up for email alerts, or obtain subscription information, please click here.
Illustration Credit: HBR Staff/AI
* * *
A confusing contradiction is unfolding in companies embracing generative AI tools: while workers are largely following mandates to embrace the technology, few are seeing it create real value. Consider, for instance, that the number of companies with fully AI-led processes nearly doubled last year, while AI use has likewise doubled at work since 2023. Yet a recent report from the MIT Media Lab found that 95% of organizations see no measurable return on their investment in these technologies. So much activity, so much enthusiasm, so little return. Why?
In collaboration with Stanford Social Media Lab, our research team at BetterUp Labs has identified one possible reason: Employees are using AI tools to create low-effort, passable-looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as “AI slop.” In the context of work, we refer to this phenomenon as “workslop.” We define workslop as AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.
Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.
If you have ever experienced this, you might recall the feeling of confusion after opening such a document, followed by frustration—Wait, what is this exactly?—before you begin to wonder if the sender simply used AI to generate large blocks of text instead of thinking it through. If this sounds familiar, you have been workslopped.
According to our recent, ongoing survey (which you can take), this is a significant problem. Of 1,150 U.S.-based full-time employees across industries, 40% report having received workslop in the last month. Employees who have encountered workslop estimate that an average of 15.4% of the content they receive at work qualifies. The phenomenon occurs mostly between peers (40%), but workslop is also sent to managers by direct reports (18%). Sixteen percent of the time workslop flows down the ladder, from managers to their teams, or even from higher up than that. Workslop occurs across industries, but we found that professional services and technology are disproportionately impacted.
Here’s what leaders need to know about workslop—and how they can stop it from gumming up the works at their company.
The Workslop Tax
Cognitive offloading to machines is not a novel concept, nor are anxieties about technology hijacking cognitive capacity. In 2008, for instance, the technology journalist Nicholas Carr published a provocative essay in The Atlantic that asked “Is Google Making Us Stupid?” The prevailing mental model for cognitive offloading—going all the way back to Socrates’ concerns about the alphabet—is that we jettison hard mental work to technologies like Google because it’s easier to, for example, search for something online than to remember it.
Unlike this mental outsourcing to a machine, however, workslop uniquely uses machines to offload cognitive work to another human being. When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context. A cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues.
Consider a few examples.
When asked about their experience with workslop, one individual contributor in finance described the impact of receiving work that was AI-generated: “It created a situation where I had to decide whether I would rewrite it myself, make him rewrite it, or just call it good enough. It is furthering the agenda of creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces.”
In another case, a frontline manager in the tech sector described their reaction: “It was just a little confusing to understand what was actually going on in the email and what he actually meant to say. It probably took an hour or two of time just to congregate [sic] everybody and repeat the information in a clear and concise way.”
A director in retail said: “I had to waste more time following up on the information and checking it with my own research. I then had to waste even more time setting up meetings with other supervisors to address the issue. Then I continued to waste my own time having to redo the work myself.”
Each incidence of workslop carries real costs for companies. Employees reported spending an average of one hour and 56 minutes dealing with each instance of workslop. Based on participants’ estimates of time spent, as well as on their self-reported salary, we find that these workslop incidents carry an invisible tax of $186 per month. For an organization of 10,000 workers, given the estimated prevalence of workslop (41%), this yields over $9 million per year in lost productivity.
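The extrapolation behind that figure can be sketched as simple arithmetic (the inputs are the survey numbers above; treating $186/month as the cost per affected employee is our reading of the estimate, not the authors' stated method):

```python
# Back-of-envelope workslop tax, assuming the $186/month figure
# applies to each affected employee (a labeled assumption).
employees = 10_000
prevalence = 0.41          # estimated share of employees receiving workslop
monthly_cost = 186         # dollars per affected employee per month

annual_cost = employees * prevalence * monthly_cost * 12
print(f"${annual_cost:,.0f} per year")  # → $9,151,200 per year
```

Under these assumptions the 10,000-person organization loses roughly $9.15 million annually, matching the “over $9 million per year” claim.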
Respondents also reported social and emotional costs of workslop, including the problem of navigating how to diplomatically respond to receiving it, particularly in hierarchical relationships. When we asked participants in our study how it feels to receive workslop, 53% report being annoyed, 38% confused, and 22% offended.
The most alarming cost may be interpersonal. Low-effort, unhelpful AI-generated work is having a significant impact on collaboration at work. Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output. Forty-two percent saw them as less trustworthy, and 37% saw that colleague as less intelligent. This may well echo recent research on the competence penalty for AI use at work, where engineers who allegedly used AI to write a code snippet were perceived as less competent than those who didn’t (and female engineers were disproportionately penalized).
What’s more, 34% of people who receive workslop are notifying teammates or managers of these incidents, potentially eroding trust between sender and receiver. Nearly a third of people (32%) who have received workslop report being less likely to want to work with the sender again in the future.
Over time, this interpersonal workslop tax threatens to erode critical elements of collaboration that are essential for successful workplace AI adoption efforts and change management.
* * *
Here is a direct link to the complete article.