AI slop is costing teams time, productivity, and trust. Learn how to spot AI slop, prevent it, and maximize the benefits of AI in your workforce.

The promise of AI is speed and efficiency — but what happens when outputs create more problems than they solve? In this episode of Work Week, Dr. Gabby Burlacu, Senior Manager at The Upwork Research Institute, unpacks the concept of “AI slop” — low-effort, AI-generated content that looks fine on the surface but erodes trust, credibility, and productivity.
Drawing on research from the Stanford Social Media Lab, BetterUp Labs, MIT, Apiiro, and The Upwork Research Institute, we explore the hidden costs of injudicious AI use, why many organizations are failing to see ROI from their investments, and how leaders can avoid the productivity paradox. You’ll also learn practical steps to prevent AI slop in your organization — from setting clearer review standards to investing in AI literacy — and see why freelancers are playing an increasingly vital role in keeping AI-enabled work both accurate and trustworthy.
Hello and welcome to Work Week, the podcast where we tackle one big question about the rapidly evolving workplace, explore what the research says about the issue, and explain what it all means for you.
I’m Dr. Gabby Burlacu, Senior Manager at The Upwork Research Institute — and what you’re hearing are my words, brought to life by a digital proxy of my voice, created by our team using AI.
Today, we’re discussing a challenge that many teams are facing as organizations increasingly integrate AI. The big question for this week is: What is AI slop and how can you prevent it at your organization?
“AI slop” describes output generated by artificial intelligence that may look fine on the surface but is fundamentally flawed. It may lack depth or context, or be inaccurate or irrelevant, and it often creates more work than it saves because it needs cleanup, decoding, editing, or redrafting. Common contributors to AI slop include users who don’t know the limits of the technology, lack expertise in the material they’re asking the AI to create, or accept an output without enough due diligence.
The impact of AI slop can be subtle but significant — eroding trust, muddying accountability, and increasing the cognitive load on already stretched teams. While AI is promoted as a way to make work faster and smarter, many teams now spend more time fixing or discarding poor outputs than they would have spent doing the work themselves.
Recent research from the Stanford Social Media Lab and BetterUp Labs highlighted this issue. The researchers coined the term “workslop” to describe this phenomenon, which can lead to significant hidden costs and undermine team culture.
For the research, the teams surveyed about one thousand full-time U.S. office workers. The survey found that forty percent of respondents reported receiving workslop in the previous month, indicating that workslop isn’t an isolated incident but a regular workplace disruption. When asked how much of the content they receive at work qualifies as workslop, respondents estimated it was just over fifteen percent.
This is a staggering figure. Think about what this means across an entire organization. Nearly one in six messages, reports, or project deliverables is unfinished and possibly indecipherable or in need of reworking.
And here’s where it gets even more concerning: the emotional toll is real. Over half — fifty-three percent — of employees who receive workslop say they feel annoyed when they encounter it. Thirty-eight percent feel confused. And twenty-two percent say they are offended by workslop.
The reputational damage to the sender is equally troubling. Approximately half of respondents said they now view colleagues who send workslop as less capable, less reliable, and less creative. Forty-two percent saw those coworkers as less trustworthy. And thirty-seven percent said they now see colleagues who have sent workslop as less intelligent.
This data suggests that what began as a well-intentioned, but perhaps hasty, use of AI can erode credibility, damage relationships, and create more work.
And while the interpersonal fallout is noteworthy, the organizational impact is equally significant.
A separate report from the MIT Media Lab found that ninety-five percent of organizations are seeing no measurable return on their AI investments. This statistic should be a wake-up call. With so much time, money, and energy being poured into these technologies, why are the gains so elusive?
One reason is that leaders are often handed unrealistic expectations. AI is sold as a tool that can double or triple output or allow teams to operate with fewer resources. But this glosses over the very real need for training, structure, and critical oversight.
Another factor is the way these tools are being used. Many employees rely on AI as a shortcut and use the technology to produce large volumes of content. But they skip the important step of looking at the output critically to ensure it is useful, accurate, and aligned with context.
Skipping this review too often results in content that looks acceptable but feels off. It loses its audience. It requires interpretation. It raises questions. It prompts follow-up. And fixing all of this takes time — time that leaders often fail to account for when measuring the productivity benefits of AI.
The Upwork Research Institute has been studying this closely, and we’ve uncovered what we call the productivity paradox. In our research titled From Tools to Teammates: Navigating the New Human-AI Relationship, seventy-seven percent of executives say they’ve seen gains from AI adoption, and employees report being forty percent more productive when using these tools. On the surface, these numbers sound encouraging.
But dig deeper, and a different story emerges. Among those same employees who report being highly productive with AI, eighty-eight percent also report feeling burned out. The increased speed and volume of output may be impressive, but the cost to individual wellbeing — and team cohesion — is significant.
In an earlier study, titled From Burnout to Balance: AI-Enhanced Work Models, we found that nearly half of employees who use AI said they have no idea how to actually meet the productivity goals their employers now expect. Sixty-five percent of full-time employees said they are actively struggling with productivity expectations.
This tells us something important — the problem isn’t just with the tools. It’s with how they’re being introduced, managed, and measured. Without the right structure, AI becomes just another source of stress.
And in some cases, it becomes a risk. Research from Apiiro on AI-assisted coding found that while AI tools can accelerate development and reduce minor bugs, they also significantly increase the likelihood of major security vulnerabilities. Developers using these tools were found to have four times the delivery speed — but they also shipped code with ten times more high-severity risks.
This is one of the many hidden costs of relying on AI to produce more — without ensuring it’s producing better.
As organizations realize the cost of AI slop, many are turning to freelancers to provide oversight and quality assurance for their AI-related projects. Companies are looking for experts in the areas where they’re applying AI. They need people who know what a good output looks like, and they’re finding that freelancers provide on-demand access to niche skills.
For example, data published in the September twenty twenty-five Upwork Monthly Hiring Report found that demand for translation and localization services jumped twenty-nine percent that month, as companies recognized the need for human oversight to catch AI slip-ups in nuance and context.
The report also found that the demand for quality assurance testing increased by nine percent as businesses engage freelancers to validate AI outputs before they are released or published.
And one of the most striking findings is the significant increase in demand for project managers, especially among small and medium-sized businesses. Hiring in this category skyrocketed one hundred two percent in September. With annual planning season underway, SMBs are bringing in freelance project managers to create structure, oversight, and alignment as they balance new AI capabilities with core business operations.
In addition to engaging freelancers, what can leaders and teams do to prevent AI slop and realize the real benefits of these tools?
First, start treating AI as a tool, not a replacement. Think of AI as a junior team member — capable, fast, and helpful, but lacking the experience and judgment to operate without oversight. This means workers need to learn how to prompt and iterate well — while also applying their own judgment, creativity, and expertise.
Second, set clear review standards. Output that is generated quickly may not be ready to share, and may need further iteration or editing. Make expert reviews an explicit part of every workflow, and encourage teams to ask themselves whether the content or outputs are adding value — or simply adding volume.
Third, shift your metrics. Instead of asking how much output has increased since AI was introduced — such as a total number of blog posts — ask how much value the output is delivering. Have key metrics, such as site visits or conversions, increased? Measure net productivity — the total time saved and value added after accounting for the time spent revising, reviewing, or fixing AI-generated content. (A quick sketch of this calculation appears after these steps.)
Fourth, invest in AI literacy. Employees need more than tool training — they need to understand how to prompt AI well, how to spot weaknesses in its output, and how to use it to augment their own strengths rather than offload their responsibilities. Invest in AI literacy and training resources such as online courses, knowledge sharing sessions, and hackathons.
And finally, create a culture of feedback. Normalize the idea that AI-generated work should be reviewed, discussed, and improved. If something doesn’t land, say so. Ask what the prompt was. Offer a better version. A strong culture of feedback can both improve outputs and build trust.
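To make the net productivity idea from the third step concrete, here is a minimal back-of-the-envelope sketch in Python. The function name and every number in it are hypothetical placeholders, not figures from the research discussed in this episode; substitute your own team’s estimates.

```python
# Back-of-the-envelope sketch of the "net productivity" idea above.
# The function name and all numbers are hypothetical placeholders.

def net_productivity_hours(time_saved_by_ai: float,
                           review_hours: float,
                           rework_hours: float) -> float:
    """Net hours gained after subtracting review and rework time."""
    return time_saved_by_ai - (review_hours + rework_hours)

# Example: AI drafting saves an estimated 10 hours, but reviewing the
# output takes 3 hours and fixing it takes another 4.
gain = net_productivity_hours(time_saved_by_ai=10.0,
                              review_hours=3.0,
                              rework_hours=4.0)
print(f"Net productivity: {gain:+.1f} hours")  # Net productivity: +3.0 hours
```

If the result is negative, the tool is costing the team more time than it saves, which is a signal to tighten prompts, review standards, or use cases.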
As always, we like to end each episode of Work Week with an action step you can implement immediately, and a reflection question to think about.
Here’s your action step: Pick one piece of AI-generated content your team has produced recently — an internal email, a presentation slide, a client-facing deliverable — and review it together. Ask what worked. Ask what didn’t. Talk about how the prompt was structured, and where more oversight was needed. Measure how long the process took — including any revisions needed by team members. Then, use the conversation to refine your processes moving forward.
And here’s your reflection question: Is AI helping your team produce better work — or simply faster work?
If AI is only speeding processes up, it may be time to rethink how you're using it. Speed without direction and expert oversight isn’t likely to drive progress.
That’s a wrap for this episode of Work Week. We looked at the concept of AI slop, explored why so many organizations are struggling to see a return on their AI investments, and identified real steps you can take to improve both the quality and trustworthiness of AI-enabled work.
Thank you for listening. If you found this episode helpful, consider sharing it with a colleague or leaving a review. And don’t forget to subscribe for more data-driven insights on the future of work.
Managing Director, Upwork Research Institute
Dr. Kelly Monahan is the Founder and Managing Director of the Upwork Research Institute, where she leads research on emerging technologies, remote workforce strategies, and fostering inclusive cultures for non-traditional talent like freelancers. With over a decade of experience in future of work research, her work focuses on delivering actionable insights to help organizations adapt to the evolving world of work.
Previously, as Director at Meta, Kelly led data analytics initiatives that enhanced distributed team performance and supported the growth of remote workers. Prior to that, she spearheaded future of work research at Accenture and Deloitte. Her commitment to a people-first approach to work continues to guide her thought leadership and keynote speaking engagements, where she highlights innovative talent strategies and human-centric organizational leadership.
Kelly is the author of two books: the USA Today bestseller Essential and How Behavioral Economics Influences Management Decision-Making: A New Paradigm. She holds a B.S. from Rochester Institute of Technology, an M.S. from Roberts Wesleyan College, and a Ph.D. in organizational leadership from Regent University.

Senior Research Manager, Upwork Research Institute
Dr. Burlacu is Senior Research Manager at the Upwork Research Institute, where she studies how organizations are adjusting their cultures and talent practices to access skilled talent in a rapidly evolving world of work. Her research has been featured in a variety of peer-reviewed studies, articles, book chapters, and media outlets, and has informed strategy and technology development across a range of Fortune 500 companies. Gabby received her Ph.D. in industrial-organizational psychology from Portland State University.