Shadow AI is rising in the workplace. Discover what’s driving this trend and how leaders can reduce risk while enabling safe, responsible AI use.

Shadow AI is on the rise as employees adopt unapproved AI tools to work faster and smarter. In this episode of Work Week, Dr. Kelly Monahan, Managing Director at The Upwork Research Institute, explores what’s fueling this behavior; the financial, security, and ethical risks that shadow AI poses; and why outright bans are ineffective.
Learn how forward-thinking companies can reduce exposure, without stifling innovation, through clear governance, trusted tools, and strong AI literacy. Plus, find out what freelancers are doing differently — and how enterprises can learn from them.
Hello and welcome to Work Week, the podcast where we tackle one big question about the rapidly evolving workplace, explore what the research says about the issue, and explain what it all means for you.
I’m Dr. Kelly Monahan, Managing Director at The Upwork Research Institute. What you’re hearing is a digital proxy of my voice, created by our team with the help of AI.
Today, we’re discussing a topic that’s quietly spreading across organizations of all sizes — something that has IT teams nervous, executives concerned, and employees... well, simply trying to get their work done faster.
This week’s big question is: What’s driving the rise of shadow AI in enterprises?
Shadow AI is what happens when employees use AI tools — like chatbots, code generators, or writing assistants — without the approval or awareness of the company’s IT department. For example, pasting a sensitive email into ChatGPT and asking the bot to adjust the email’s tone. Or uploading proprietary code to check for bugs or security vulnerabilities. This use of shadow AI is becoming increasingly common.
Research from MIT shows that workers at ninety percent of companies use chatbots — and most workers hide AI usage from the IT department. And it’s more than the occasional copy-and-paste task. These tools are increasingly embedded in how people work — from summarizing long documents to analyzing data.
Accounting and business advisory firm EisnerAmper recently surveyed one thousand full-time office workers in the United States about their AI usage. The data shows that over twenty-eight percent of respondents would keep using AI tools even if they were explicitly told not to.
So what’s driving this surge in the use of shadow AI?
First, workers are looking to speed up their workflow. As we covered in episode 30 of Work Week about AI anxiety, workers today are under tremendous pressure to move fast, deliver results, and juggle multiple demands. When AI tools promise to increase productivity and save hours on routine tasks, workers naturally gravitate toward them.
Second, many companies offer limited effective alternatives. If internal systems are outdated or if the organization hasn’t rolled out any official AI tools, employees will turn to whatever is available. They’ll open a browser, log into one of the many free AI tools available, and get to work — because the business needs results, and waiting for IT approval isn’t always an option.
Third, training is lagging far behind the pace of AI innovation. The Upwork Research Institute’s Future Workforce Index found that twenty-nine percent of leaders don’t feel comfortable with workers pursuing self-development on key topics, including AI, outside of the organization’s four walls. And yet, formal learning and development programs rarely keep up with the speed of tool development and shifting in-demand skills. So proactive workers do what they’ve always done — they figure it out themselves.
Fourth, workers are curious. AI feels like new territory, and many employees genuinely want to explore what the technology can do. They want to know if AI can help them think more creatively, make faster decisions, or improve the quality of their work. This spirit of experimentation should be encouraged, not penalized — but right now, it’s happening in the shadows.
So while leadership is stalled at the pilot phase, workers are moving forward. And this raises the question, “If workers are using AI to get their jobs done faster, what’s the harm?”
So let’s talk about what’s at stake.
The first and most obvious risk is security. When employees input sensitive company data into external AI tools, there’s often no way to control or retrieve that information. Some AI tools retain data for model training unless users explicitly opt out, which means that data could later become publicly available — something most employees don’t even realize. This creates an enormous exposure risk.
And there’s solid data to highlight the scale of this problem. According to IBM's Cost of a Data Breach Report 2025, ninety-seven percent of organizations that experienced AI-related security incidents lacked proper access controls. And nearly two-thirds — sixty-three percent — had no governance policies in place to manage AI use or detect unauthorized tools. That’s a massive gap in oversight.
Moreover, IBM found that twenty percent of organizations reported security breaches linked to the use of shadow AI, and estimates that each data breach now costs an average of nearly four and a half million dollars. On its own, this is serious financial harm.
But there’s more. A second risk of shadow AI involves compliance. With regulations like GDPR in Europe and HIPAA and the Gramm-Leach-Bliley Act in the U.S., companies must know where their data is going and how it’s being used. When AI use is unmonitored, compliance becomes nearly impossible. And the consequences include fines, lawsuits, and reputational damage.
Finally, AI use without organizational guardrails creates a quality-control risk. AI tools can hallucinate, perpetuate bias, or produce offensive or substandard outputs. If those flawed outputs go unchecked and end up in marketing materials, client emails, or public reports, companies may face backlash from regulators, customers, and employees.
But let’s be clear. The answer isn’t to enforce strict rules or attempt blanket bans. If twenty-eight percent of employees say they’ll use AI even if tools are forbidden, more policing won’t solve the problem.
Instead, the answer is to build a better path forward.
To reduce the risks of shadow AI without stifling innovation, organizations need to rethink how they enable AI use internally. And this starts with trust, transparency, and tools that actually work.
First, establish clear AI governance frameworks. This doesn’t necessarily mean issuing a forty-page document no one has the time to read. It means creating simple, usable guidelines. Spell out which tools are approved, which types of data are off-limits, and who’s responsible for monitoring usage. Make the rules clear, consistent, and — most importantly — easy to find and follow.
Second, provide approved, enterprise-grade AI tools that actually help people do their jobs. Employees aren’t using shadow tools because they want to break the rules. They’re using them because they don’t see better options inside the company. So meet them where they are, whether that means offering secure internal copilots, licensing large language models, or embedding AI in the tools they already use, like customer relationship management systems or project management software. If you give workers effective tools, they’ll stop looking elsewhere.
Third, offer AI literacy training across the organization. Everyone across the company needs to know how to use AI responsibly. Think beyond generic courses. Create lunch-and-learns, build internal champions, or run small experiments in which teams can try new tools in a safe environment. You don’t need everyone to become a prompt engineer. But you do need people to understand when and how AI should be used — and when it shouldn’t.
And finally, monitor AI use transparently. Yes, companies need to track which tools are being used and how. But organizations also need to do that in a way that respects employee privacy and builds trust. Be honest about what’s being monitored, and use the data to spot trends, offer support, and iterate on your strategy. Surveillance alone won’t solve this. Dialogue will.
There’s another angle I want to highlight here. While enterprises are still trying to catch up, freelancers are often ahead of the game. As we discussed in episode seven of Work Week, our Future Workforce Index found that sixty-two percent of skilled freelancers regularly use and embed AI tools into their workflows. This is compared to only forty-nine percent of full-time employees. Additionally, fifty-four percent of skilled freelancers say their skill level in using AI is advanced, compared to only thirty-eight percent of their full-time counterparts.
Instead of fearing the experimentation happening at the edges, what if your organization invited it in? Consider asking your freelance contributors which tools they use and how they vet new AI tools. Invite them to demo new workflows. Build cross-functional AI innovation labs where internal and external talent collaborate to pilot ideas.
Because innovation doesn’t only come from the top down. It comes from the edges. From the freelancers, the side hustlers, the junior analysts testing a new plugin on their lunch break. This is the energy and resourcefulness companies need to harness — not shut down.
As we always do, let's end this episode with an action you can implement immediately and a reflection question to consider throughout the week.
If you’re a business leader, your action step is this: Run an anonymous AI usage survey inside your organization. Ask your employees which tools they’re using, why they’re using them, and what they wish they had access to. Keep the survey judgment-free. This isn’t about catching policy violators or individuals who don’t yet use AI tools — it’s about understanding your current reality so you can make better decisions. And the data may reveal early AI adopters inside your company you didn’t even know you had.
If you’re an individual worker, take a few minutes to audit your own AI usage. Are you sure you’re not pasting confidential data into unsecured tools? Are you clear on your company’s policies? If not, pause before you hit send. Be sure to vet any information before you upload it, and look for AI platforms that allow for private mode or have strong data privacy settings. Set aside time this week to read the terms of use for any tool you rely on. Taking these precautions isn’t about fear — it’s about professional responsibility.
Now for your reflection question. Ask yourself: Is your organization treating AI as a risk to be managed — or a capability to be cultivated? Because the companies that thrive in an AI-driven world of work won’t be the ones who crack down the hardest. They’ll be the ones who lead with clarity, curiosity, and trust.
That’s it for this episode of Work Week. I’m Kelly Monahan, and if this topic sparked new thinking or if you’re now wondering how much shadow AI may be impacting your business, share this episode with a colleague. Leave a review, subscribe, and join us again next week.
Managing Director, Upwork Research Institute
Dr. Kelly Monahan is the Founder and Managing Director of the Upwork Research Institute, where she leads research on emerging technologies, remote workforce strategies, and inclusive cultures for non-traditional talent like freelancers. With over a decade of experience in future of work research, she focuses on delivering actionable insights that help organizations adapt to the evolving world of work.
Previously, as Director at Meta, Kelly led data analytics initiatives that enhanced distributed team performance and supported the growth of remote workers. Prior to that, she spearheaded future of work research at Accenture and Deloitte. Her commitment to a people-first approach to work continues to guide her thought leadership and keynote speaking engagements, where she highlights innovative talent strategies and human-centric organizational leadership.
Kelly is the author of two books: the USA Today bestseller Essential and How Behavioral Economics Influences Management Decision-Making: A New Paradigm. She holds a B.S. from Rochester Institute of Technology, an M.S. from Roberts Wesleyan College, and a Ph.D. in organizational leadership from Regent University.

Senior Research Manager, Upwork Research Institute
Dr. Burlacu is Senior Research Manager of the Upwork Research Institute, where she studies how organizations are adjusting their cultures and talent practices to access skilled talent in a rapidly evolving world of work. Her research has been featured in a variety of peer-reviewed studies, articles, book chapters, and media outlets, and has informed strategy and technology development across a range of Fortune 500 companies. Gabby received her Ph.D. in industrial-organizational psychology from Portland State University.