Creating effective partnerships between people and AI: A conversation with Dr. Jenna Butler

by Jenna Butler, Allie Blaising, and Gabby Burlacu
The intersection of humans and AI has captured public attention and dominates recent headlines, sending mixed messages to today's workers and business leaders. On the one hand, there is promise: new ways of doing things will help us be more productive and innovative than ever before. On the other, there is uncertainty: as AI tools become increasingly adept at completing processes end-to-end, the question emerges of what role humans will play in the future of work. Further, the pitfalls of AI—inaccurate output, chronic agreeability, and potentially negative impacts on learning and development, particularly for those early in their careers—do not yet have satisfactory, scalable solutions. Upwork labor market trends shed some light on how work is shifting in the age of AI, but many questions remain.
In her role as Principal Applied Research Scientist at Microsoft, Dr. Jenna Butler explores the questions that sit at this intersection, helping develop Microsoft's understanding of how AI is already changing work, while also shaping a point of view about how it should be used in the future. Dr. Butler leads the New Future of Work initiative—an interdisciplinary research effort focused on designing a future of work that is meaningful, productive, and equitable—within the Experiences & Devices group at Microsoft. She earned her Ph.D. in Computer Science from Western University in Canada in 2015 and has long been drawn to the intersection of disciplines—from biology and computer science to social science and technology. Her current research centers on developer productivity, with a human-centered lens on software engineering. She's particularly interested in AI, team and individual wellbeing, decision-making in organizations, cross-disciplinary engineering collaboration, and diversity in tech.
Our conversation is part of Upwork's Reimagining Work—a lecture series that provides a forum for expert practitioners and academics to exchange views on the present and future of work. In our discussion, Dr. Butler shares what she and her team have uncovered about the impact of AI on work and how people interact with the technology. What becomes clear is that this is a fast-moving field, with rapid innovation redefining the human-AI boundary by the day. But Dr. Butler ultimately sees a lot of positive potential in embedding AI into how we work—and she leaves us with actionable insights for how to upskill and stay future-ready as the world around us evolves. The views expressed in this discussion are her own.
- Gabby: We see a lot of headlines today focused on substitution or replacement, suggesting AI will take the jobs of people. What are those headlines getting wrong?
Dr. Butler: Unfortunately, I think those headlines are more right than wrong—not because they reflect the true potential of AI, but because they reflect how companies are choosing to deploy and use it right now. The industry today is thinking far too narrowly. Most companies are focused on using AI to automate existing tasks and roles, aiming for short-term efficiency gains rather than long-term transformation. This replacement mindset may reduce costs in the near term, but it’s neither the most effective nor the most responsible use of AI.
Historically, the most powerful impact of new technology has come not from substitution, but from the creation of entirely new industries, professions, and capabilities. Electricity didn’t just replace gas lamplighters—it gave rise to electrical engineering, home appliances, repair services, and modern manufacturing. These were jobs and domains that didn’t previously exist.
AI should be treated the same way. We need to shift from a mindset of replacement to one of amplification. Rather than asking, “What tasks can AI do instead of humans?” we should be asking, “What new things can humans do with AI that were never possible before?” This is the foundation of a human-centered approach—one that treats AI as a collaborator and enabler, not a substitute, and keeps humans in the equation.
Right now, many of the headlines are accurate reflections of what companies are actually building: tools that write boilerplate code, automate repetitive tasks, and replace junior-level roles. If we stay on this trajectory, yes—people will lose jobs and we’ll be stuck within the boundaries of what we already know how to do. We’ll be faster, maybe more efficient, but not fundamentally more creative or capable.
While removing drudgery sounds appealing, doing so without reimagining economic structures—like introducing Universal Basic Income—creates instability. Instead of building AI to do what we already do, we should be building AI to help humans do what we never could before. That’s where the true opportunity lies, and it’s where we’re failing to aim right now.
- Allie: Prompt engineering is a skill that emerged almost overnight, as higher-quality input is supposed to elicit higher-quality output from generative AI tools. Is this going to remain a specialized skill, or one we all need to learn? What is the future of prompt engineering?
Dr. Butler: In a perfect world, we wouldn't all need to learn specialized prompting skills. Instead, it’s our job as technology designers to lower the cognitive load for users. That means building systems that help people interact with AI naturally, even if they don’t have technical expertise.
To that end, we’re exploring new forms of interaction with LLMs. For example, my colleagues at Microsoft Research are investigating microprompting — breaking down prompts into smaller, more manageable steps. It’s often hard to know exactly what to ask for upfront, so the idea is to let users build up a prompt piece by piece, adapting as they go.
Other colleagues are exploring how these pieces might be represented as buttons or graphical elements, so the system can help steer the interaction in the right direction. We’re also looking at adaptive interfaces that change based on what a user is trying to do, giving them cues to make better prompts without needing to know how to write one perfectly.
And it doesn’t stop at language. Other things can act as prompts too. If you’re writing code and hit an error, that error message could serve as a prompt — the system can infer what you were trying to do based on the context. In the future, LLMs might take cues from your activity, your history, or even your open tabs to offer help that feels seamless. That’s the direction we’re heading in — making prompting feel less like programming and more like magic.
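To make those two ideas concrete, here is a minimal sketch of what they might look like in code: a builder that assembles a prompt piece by piece (microprompting), and a helper that turns an error message plus its surrounding code into an implicit prompt. The `PromptBuilder` class and `prompt_from_error` function are hypothetical illustrations, not Microsoft Research tooling.

```python
# Illustrative sketch only: these names are hypothetical, not a real API.

class PromptBuilder:
    """Accumulates small prompt fragments into one combined request."""

    def __init__(self) -> None:
        self.pieces: list[str] = []

    def add(self, fragment: str) -> "PromptBuilder":
        # Each fragment is one small, manageable step of the overall ask.
        self.pieces.append(fragment.strip())
        return self  # chaining mirrors building the prompt piece by piece

    def build(self) -> str:
        return "\n".join(self.pieces)


def prompt_from_error(source_code: str, error_message: str) -> str:
    """Treat an error message as an implicit prompt: combine the code the
    user was writing with the error text so a model can infer intent."""
    return (
        "The following code raised an error.\n"
        f"Code:\n{source_code}\n"
        f"Error:\n{error_message}\n"
        "Explain the likely cause and suggest a fix."
    )


if __name__ == "__main__":
    # Microprompting: refine the request step by step instead of all at once.
    prompt = (
        PromptBuilder()
        .add("Summarize this meeting transcript.")
        .add("Keep it under five bullet points.")
        .add("Highlight any action items assigned to me.")
        .build()
    )
    print(prompt)

    # Error-as-prompt: the surrounding context itself becomes the request.
    print(prompt_from_error("total = sum(values)",
                            "NameError: name 'values' is not defined"))
```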
- Allie: You’ve spoken about “challenge, not obey” as a key AI design principle. What’s most important to keep in mind when designing tools that curb overreliance without losing user trust or momentum?
Dr. Butler: The most important thing is to actually test whether your tools are addressing overreliance, deskilling, or other unintended consequences — not just assume they are. It’s easy to introduce design mitigations with good intentions, but without real testing, those efforts can backfire. Software should be evaluated not just for performance, but also for safety and long-term human impact.
At Microsoft, we’ve developed an Overreliance Framework to help teams assess whether LLM-based tools are unintentionally encouraging users to disengage or trust the system too much. It’s not just about accuracy — it’s about the human-AI interaction.
I’m also a big fan of the principle “challenge, not obey,” a phrase originally coined by my colleague Advait Sarkar. The idea is that instead of building AI systems that passively follow user instructions, we design tools that engage the user — by asking questions, offering alternative perspectives, or nudging them to think more critically. That’s how we can help users upskill, not deskill.
Of course, there’s a balance. We do want AI to boost productivity — but productivity isn’t just about doing more tasks faster. It can also mean doing higher-quality work, making better decisions, and learning along the way. That’s the kind of augmentation we should be aiming for.
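As a small illustration of the principle, a team might express "challenge, not obey" as simply as a system instruction that asks the model to probe before complying. The instruction wording and the `build_messages` helper below are illustrative assumptions, not the Overreliance Framework or any shipped Microsoft design.

```python
# Illustrative sketch of "challenge, not obey" as a system instruction.
# The instruction text and this helper are hypothetical examples.

CHALLENGE_NOT_OBEY = (
    "Before carrying out the user's instruction, briefly note one assumption "
    "it rests on or one alternative worth considering, then proceed. "
    "If the request is ambiguous, ask a clarifying question instead."
)

def build_messages(user_request: str) -> list[dict[str, str]]:
    """Prepend the challenge-oriented system instruction to a user request,
    using the role/content message format common to chat-model APIs."""
    return [
        {"role": "system", "content": CHALLENGE_NOT_OBEY},
        {"role": "user", "content": user_request},
    ]

if __name__ == "__main__":
    for message in build_messages("Rewrite this report to sound more confident."):
        print(f"{message['role']}: {message['content']}")
```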
- Gabby: The intuitive, human-like output of generative AI has contributed to its widespread adoption—and it's now being applied to use cases we couldn't have imagined, like therapy, conflict resolution, and feedback. What do you see as the key risks and opportunities of this relationship, and how should that shape how we design and interact with these systems?
Dr. Butler: The opportunities here are enormous. I’m a big advocate for mental health support, but I also recognize that I come from a place of privilege — I can access private therapy or get a tutor for my kids if they need extra support. But not everyone has access to those kinds of resources. Generative AI has the potential to democratize access to support that was once limited to a few — whether it’s a virtual tutor, a writing coach, or even a companion to talk through challenges.
But with that opportunity comes real risk. An LLM isn’t a cheaper version of a human. It doesn’t have emotions, intelligence, or lived experience. Yet, these models often produce responses that feel deeply human — warm, empathetic, even funny — which leads people to relate to them as if they were sentient or emotionally aware.
We’ve seen that users respond positively to anthropomorphic AI — they trust it more, engage with it more, and report higher satisfaction when it seems empathetic. But the flip side is that this perceived humanness can lead to overreliance, misplaced trust, or even emotional dependency. In rare cases, that can result in serious harm.
That’s why education is so critical. We need to help people understand what AI is — and what it isn’t — so they can interact with it safely and effectively. If we get that right, we can retain the benefits of skill democratization while protecting users from the risks that come from mistaking simulated empathy for the real thing.
- Gabby and Allie: What skills and capabilities would you advise workers to develop today, in order to be ready for an increasingly AI-enabled future?
Dr. Butler: As a mother of three kids who will be growing up in an AI-powered world, I think about this a lot. There are three key skills I believe will be essential for anyone preparing for the future of work: adaptability, continuous learning, and humanity.
First, adaptability. Technology has been evolving at an accelerating pace for decades, and AI has only increased the speed of change. New tools, roles, and even entire domains are emerging all the time. The ability to adapt — to pivot as things shift — will be crucial. Fortunately, adaptability isn’t just a trait you have or don’t have; it’s something you can practice and strengthen over time.
That leads to the second skill: continuous learning. To stay adaptable, you have to stay curious. You need to be open to new ideas, explore unfamiliar tools, and remain a lifelong learner. One of my academic heroes, Jaime Teevan, reminds us that now is the time to think like a scientist — observe what others are doing, experiment with new methods, and share your learnings. The more you engage in that process, the more prepared you’ll be to navigate an evolving landscape.
And finally, humanity. As AI becomes capable of doing more tasks — especially those that are repetitive or technical — the skills that make us uniquely human will become even more important. Mentorship, empathy, collaboration, conflict resolution, motivation — these are the things machines can’t authentically replicate, and they’re what people will value most in colleagues and leaders.
So yes, we need to build AI skills and stay current with technology. But just as importantly, we need to take a human-centered approach — in how we design technology, and in how we treat each other. Empathy and kindness aren’t just soft skills; they’re survival skills for the future of work.
About Dr. Jenna Butler
Dr. Butler is a Principal Applied Research Scientist in the Experiences & Devices (E+D) group at Microsoft, where she helps lead the New Future of Work initiative—an interdisciplinary research effort focused on designing a future of work that is meaningful, productive, and equitable. She works with the Human Understanding and Empathy (HUE) lab, exploring tech worker wellbeing, stress, and work-life balance. Jenna earned her Ph.D. in Computer Science from Western University in Canada in 2015.
The views expressed in this article are her own.
About Allie Blaising
Allie Blaising is a Senior User Experience Researcher at Upwork, where she leads customer research that shapes design and business decisions across multiple verticals, with a recent focus on Generative AI product initiatives.