Using AI to Enhance Collective Intelligence: A Conversation with Anita Williams Woolley, Ella Glikson, and Pranav Gupta

By Anita Williams Woolley, Ella Glikson, Pranav Gupta, Allie Blaising, Ted Liu
In the workplace, we often approach human-AI interaction with an individualistic mindset, focusing primarily on how each worker can optimize the use of AI tools based on their unique skills, background, and experience. While this perspective is important, it often overlooks the broader social and team dynamics that play a crucial role in shaping both the development and effective use of these tools. Ignoring these dynamics would be a missed opportunity, as understanding and leveraging how teams interact with AI can lead to more collaborative, innovative solutions that benefit not just individuals, but entire organizations.
Upwork’s User Research team and the Upwork Research Institute recently spoke with Dr. Anita Williams Woolley, Dr. Ella Glikson, and Dr. Pranav Gupta. Dr. Woolley is Professor of Organizational Behavior at Carnegie Mellon University. Dr. Glikson is Senior Lecturer in the Graduate School of Business Administration at Bar-Ilan University. Dr. Gupta is Assistant Professor of Business Administration at the University of Illinois Urbana-Champaign. Together they are award-winning scholars who have led foundational research on collective intelligence, virtual and remote teams, and AI-enhanced communication and satisfaction.
Our conversation was part of Upwork’s Reimagining Work—a lecture series designed to provide a forum for expert practitioners and academics to foster the exchange of views on the present and future of work. In this conversation, we discuss the role of AI in enhancing collective intelligence—the amplified ability to solve problems and make decisions by harnessing diverse skills and perspectives. The discussion focuses on how AI can enhance team collaboration, foster trust, and ultimately improve job satisfaction.
[1] Allie Blaising, Lead User Researcher:
What is collective intelligence, and what factors support its success within teams?
Anita, Ella, and Pranav: Collective intelligence (CI) is defined as a group's ability to solve a wide range of problems together. CI is not simply the sum of individual abilities but arises from the interactions and collaborative processes within a team.
Several factors contribute to the success of collective intelligence within teams:
- Diversity: Diversity, both demographic and cognitive, is crucial for CI. It provides a wider range of perspectives, knowledge, and skills, contributing to a robust collective memory the team can draw from to process information effectively and solve problems. Gender diversity, especially with a higher representation of women, and cultural diversity, particularly in terms of individualism and collectivism, also increase CI by promoting more effective communication.
- Engagement: High and equal engagement in communication is essential for CI. Groups exhibiting more frequent and balanced communication tend to have higher CI. This suggests that active participation and information sharing among all team members are crucial for leveraging collective knowledge and perspectives.
- Social Perceptiveness: Teams whose members are highly socially perceptive develop a higher quality of collective attention, which contributes to CI. This highlights the importance of team members being attuned to each other's social cues, perspectives, and contributions to facilitate smoother collaboration and understanding.
Our recent research on digital nudges has shown promise in enhancing CI within teams. Nudges are subtle changes to the environment or interface that guide individuals towards more beneficial choices without restricting their options. A recent paper published by Gupta, Kim, Glikson, and Woolley in MIS Quarterly involving 168 online groups revealed that a Skill Facilitator Bot nudge was effective at increasing CI. This nudge operated via a chat-based facilitator that prompted discussions about skills and expertise, which led to an increase in skill use within the teams. This highlights how technology can be used to facilitate knowledge sharing, ensure that diverse skills are leveraged, and improve team coordination.
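To make the mechanism concrete, here is a minimal sketch of how a chat-based skill facilitator could work. Everything in it, including the class name, the keyword check, and the prompt wording, is our own illustrative assumption, not the instrument used in the MIS Quarterly study:

```python
# Hypothetical sketch of a chat-based "skill facilitator" nudge.
# The bot, its keyword heuristic, and its prompts are illustrative;
# they are not the instrument used in the published study.

class SkillFacilitatorBot:
    def __init__(self, members):
        self.members = set(members)
        self.disclosed = set()  # members who have described their skills

    def record_message(self, member, text):
        """Naive check: treat any mention of skills or experience
        as a skill disclosure."""
        if any(kw in text.lower() for kw in ("skill", "experience", "good at")):
            self.disclosed.add(member)

    def nudge(self):
        """Prompt members who have not yet shared their expertise."""
        quiet = self.members - self.disclosed
        if quiet:
            names = ", ".join(sorted(quiet))
            return f"{names}: what skills or experience could you bring to this task?"
        return "Before dividing the work, which subtasks best match each person's skills?"

bot = SkillFacilitatorBot(["Ana", "Ben", "Chen"])
bot.record_message("Ana", "I'm good at data cleaning.")
print(bot.nudge())  # prompts Ben and Chen to share their skills
```

The design point the sketch illustrates is that the nudge intervenes in the conversation itself, prompting members to surface expertise, rather than silently restructuring the task.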
However, a few nudges were not successful:
- Effort Feedback Display: This nudge provided real-time information on each team member’s relative effort, intended to prevent free-riding. However, in this online context with temporary teams, participants who were called out tended to withdraw further, while those carrying the load grew frustrated and also quit, leading some teams to implode.
- To-Do List Widget: While intended to prompt consideration of task coordination strategy, prominently displaying a to-do list did not yield the desired results. Instead, some teams ignored it while others became preoccupied, spending too much time focused on that element of their work while neglecting others. This suggests that simply providing task structure without actively promoting strategic discussion or coordination might not effectively enhance CI.
These findings offer valuable insights for designing effective nudges to boost CI:
- Facilitate Meaningful Interaction: While structuring information can be helpful, nudges should aim to promote meaningful interaction and discussion among team members. For instance, prompting reflection on skills or strategies can be more impactful than simply providing tools or lists.
- Consider Team Dynamics: The effectiveness of nudges can vary depending on team composition, task complexity, and existing communication norms. Thus, designing nudges that align with the specific characteristics and dynamics of the team is essential.
- Promote Transparency and Awareness, but Target Team Performance: Nudges that increase transparency and awareness, such as the effort feedback display, can enhance accountability and encourage active participation, but the feedback should target the whole team’s contribution rather than call out specific individuals, as in the sketch below.
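Here is a minimal sketch of what team-level effort feedback might look like, assuming message counts as a crude proxy for effort. The metric, the goal, and the wording are illustrative assumptions, not the design tested in the study:

```python
# Hypothetical sketch of an effort-feedback nudge reframed at the
# team level: report aggregate progress instead of calling out
# individual laggards. Metric and threshold are illustrative.

def team_effort_feedback(messages_per_member, goal_per_member=10):
    """Return a single team-level message instead of per-person scores."""
    total = sum(messages_per_member.values())
    goal = goal_per_member * len(messages_per_member)
    share_of_goal = total / goal if goal else 0.0
    if share_of_goal >= 1.0:
        return "Great pace: the team has hit its participation goal."
    return (f"The team is at {share_of_goal:.0%} of its participation goal; "
            "a bit more discussion from everyone would help.")

print(team_effort_feedback({"Ana": 12, "Ben": 3, "Chen": 5}))
# -> "The team is at 67% of its participation goal; ..."
```

Note that the uneven split between Ana, Ben, and Chen never appears in the output; only the aggregate is fed back, which is the design choice the findings above point toward.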
[2] Ted Liu, Economist at the Upwork Research Institute:
When we think about building collective intelligence with AI, what are some of the opportunities and challenges related to human cognitive biases?
Anita, Ella, and Pranav: Here are some opportunities and challenges related to human cognitive biases when building collective intelligence with AI:
Opportunities
- Mitigating Biases: AI can be designed to identify and mitigate human cognitive biases, such as confirmation bias (the tendency to favor information that confirms preexisting beliefs) or anchoring bias (the tendency to rely too heavily on the first piece of information encountered). For instance, AI-powered tools can be used to provide a more balanced set of perspectives or to challenge assumptions within a team. AI can also help with identifying and addressing biases related to diversity and inclusion by analyzing team communication and interactions.
- Enhancing Collective Reasoning: AI can support the development of Transactive Reasoning Systems (TRS) by facilitating more structured and objective discussions around goals and priorities. For instance, AI tools can help teams identify and prioritize key information, evaluate different options, and make more rational decisions; a simple decision matrix of the kind sketched after this list is one way to structure such an evaluation. By structuring and guiding discussions, AI can reduce the influence of biases on collective decision-making.
- Improving Trust Calibration: Trust in AI is essential for successful collaboration, but humans often exhibit biases in their trust assessments. AI can be designed to provide transparent explanations of its reasoning and to build trust gradually over time. By understanding and addressing human biases related to trust, AI can foster more effective human-AI collaboration.
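As one concrete example of structuring a discussion against anchoring, a facilitator tool could force every option onto the same weighted criteria, so the first idea mentioned carries no special status. This is a minimal sketch; the criteria, weights, and scores are illustrative assumptions:

```python
# Hypothetical sketch of a decision-matrix helper an AI facilitator
# might use to structure option evaluation and blunt anchoring.
# Criteria, weights, and scores are illustrative assumptions.

def rank_options(scores, weights):
    """scores: {option: {criterion: 1-5 rating}}; weights: {criterion: float}.
    Returns options sorted by weighted total, best first."""
    totals = {
        option: sum(weights[c] * rating for c, rating in criteria.items())
        for option, criteria in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

weights = {"cost": 0.4, "speed": 0.3, "quality": 0.3}
scores = {
    "Option A (mentioned first)": {"cost": 3, "speed": 4, "quality": 3},
    "Option B": {"cost": 5, "speed": 3, "quality": 4},
}
for option, total in rank_options(scores, weights):
    print(f"{option}: {total:.2f}")
# Option B: 4.10  /  Option A (mentioned first): 3.30
```

Because each option is scored on the same rubric, the option mentioned first only wins if it actually scores best, which is exactly the kind of check that reduces anchoring in collective decision-making.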
Challenges
- Amplifying Existing Biases: If not carefully designed, AI can amplify existing human cognitive biases. For example, AI algorithms trained on biased data can perpetuate and even exacerbate those biases in their outputs. This can lead to unfair or discriminatory outcomes, particularly in areas like hiring or performance evaluation. It is crucial to ensure that AI systems are developed and trained using diverse and representative data to minimize bias.
- Over-Reliance on AI: Humans can become over-reliant on AI systems, leading to a decline in their own cognitive skills. For instance, if teams rely too heavily on AI for memory tasks, they may experience a reduction in their ability to remember information independently. This can lead to a decrease in individual and collective cognitive capacity.
- Difficulty in Understanding AI: The complexity of AI systems can make it difficult for humans to understand how they work and make decisions. This lack of transparency can lead to mistrust and reluctance to accept AI recommendations. Even when efforts are made to make AI explainable, the complexity of algorithms can still make it challenging for humans to fully comprehend the reasoning behind AI outputs.
- Impact on Human Motivation: If AI is perceived as replacing human jobs or devaluing human skills, it can negatively impact human motivation and engagement. This can lead to resistance to AI adoption and hinder the development of effective human-AI collaboration. It is important to design AI systems that complement and augment human abilities rather than replace them entirely.
It is also important to note that the success of building collective intelligence with AI depends on a multitude of factors beyond cognitive biases, including the design of AI systems, the nature of the tasks, and the overall team dynamics. Addressing these challenges and harnessing the opportunities presented by AI will require ongoing research and collaboration between computer scientists, social scientists, and organizational leaders.
[3] Allie:
In your research, you describe the current and ideal roles of AI in supporting collective intelligence. What are some key challenges in advancing AI from its current assistive role to more diagnostic and coaching roles?
Anita, Ella, and Pranav: A number of important developments are needed to support AI’s ability to enhance collective intelligence in these more advanced roles.
- Developing a Machine Theory of Collective Intelligence: A fundamental challenge lies in developing AI systems that can understand and interpret the nuances of human collaboration. This requires going beyond individual cognition and creating a "Machine Theory of Collective Intelligence", enabling AI to recognize and respond to the dynamics of group interactions, shared understanding, and collective decision-making. This involves identifying observable indicators of collective processes, such as effort, task strategy, and skill use, that can be used to diagnose the state of a team and predict its trajectory.
- Building Trust and Acceptance: As AI takes on more agency in guiding team processes, building trust and acceptance among human team members becomes crucial. Teams need to trust the AI's capabilities, understand its reasoning, and feel confident in its ability to enhance their work. This requires addressing both cognitive trust, based on the perceived reliability and competence of the AI, and emotional trust, rooted in perceptions of the AI's benevolence and alignment with team goals. Transparency and explainability are essential for building cognitive trust, while careful consideration of AI embodiment and interaction design can influence emotional trust.
- Balancing AI Support with Human Autonomy: A key challenge is finding the right balance between AI support and human autonomy. While AI can offer valuable assistance in coordinating tasks, managing information, and facilitating communication, excessive reliance on AI can undermine human motivation, engagement, and the development of essential cognitive skills. Striking a balance between AI assistance and opportunities for human interaction, skill development, and decision-making is crucial for fostering both individual and collective intelligence.
- Addressing Ethical Considerations: As AI systems become more integrated into team processes, ethical considerations become paramount. This includes mitigating biases in AI algorithms, ensuring transparency and accountability in AI decision-making, and addressing concerns about job displacement and the impact on human skills. Designing AI systems that are fair, unbiased, and aligned with human values is crucial for fostering trust and ensuring that AI is used to enhance, rather than undermine, collective intelligence.
Overall, advancing AI to more diagnostic and coaching roles presents both exciting opportunities and significant challenges. Successfully navigating these challenges will require interdisciplinary collaboration between computer scientists, social scientists, and organizational leaders to develop AI systems that are technically sophisticated, socially intelligent, and ethically grounded.
[4] Ted:
What do you think is important to consider when designing AI to enhance collective intelligence so it improves, rather than degrades, job satisfaction and personal identity?
Anita, Ella, and Pranav: When designing AI to enhance collective intelligence in a way that improves job satisfaction and personal identity, it is important to consider the following points:
- Prioritize AI as a Tool to Augment Human Capabilities, Not Replace Them: AI should be designed to complement and enhance human skills and expertise, not to replace human workers entirely. AI can be particularly valuable in handling routine tasks, managing information overload, and identifying patterns that humans might miss, freeing up human team members to focus on the more complex, creative, and interpersonal aspects of their work. This approach can lead to a more fulfilling and engaging work experience, as employees can leverage their unique skills and contribute meaningfully to the team's success.
- Foster Transparency and Explainability in AI Systems: To build trust and acceptance among human team members, AI systems should be transparent and explainable. Employees need to understand how the AI arrives at its conclusions and recommendations, ensuring that AI decisions are perceived as fair, unbiased, and aligned with the team's goals and values. Providing clear explanations of AI processes and outputs can help alleviate fears of being controlled or manipulated by an opaque system, promoting a sense of ownership and agency among team members.
- Support the Development of Collective Cognitive Systems: AI can be designed to enhance collective intelligence by supporting the development of transactive memory systems (TMS), transactive attention systems (TAS), and transactive reasoning systems (TRS). For example, AI tools can facilitate communication and knowledge sharing, helping teams develop a shared understanding of each other's expertise (TMS). AI can also help teams prioritize tasks and allocate attention effectively (TAS), as well as facilitate discussions and decision-making processes to align on goals and strategies (TRS).
- Promote a Sense of Team Identity and Cohesion: While AI can enhance individual capabilities, it is important to ensure that AI integration does not undermine the social and emotional aspects of teamwork. AI should be designed to promote a sense of team identity and cohesion, facilitating communication and collaboration among human team members. For instance, AI tools can be used to celebrate team successes, recognize individual contributions, and foster a sense of shared purpose.
- Consider the Impact of AI on Personal Identity and Growth: As AI becomes more integrated into the workplace, it is crucial to consider the impact on personal identity and growth. AI should be designed to empower employees, allowing them to learn new skills, expand their expertise, and take on more challenging roles. Organizations should provide training and development opportunities to help employees adapt to changing job requirements and leverage AI as a tool for personal and professional growth.
By carefully considering these factors, AI can be designed and implemented in a way that enhances collective intelligence while also improving job satisfaction and preserving personal identity. The goal is to create a synergistic relationship between humans and AI, where technology augments human capabilities, fosters collaboration, and empowers individuals to thrive in the evolving workplace.
[5] Allie:
In your research, you describe opportunities to use AI to enhance psychological safety in teams. Can you share more about this and potential examples or use cases?
Anita, Ella, and Pranav: Here are some ways AI might contribute to increasing psychological safety:
- Facilitating Open Communication: AI could analyze team communication, identify instances when individuals are reluctant to contribute, and encourage participation from those individuals to ensure that everyone is heard. By promoting inclusivity and open communication, AI could create a psychologically safer environment where team members feel comfortable expressing themselves. If communicating a particular message is too threatening, an AI agent could communicate on a member’s behalf, or it could make a leader aware that employees are hesitant to speak and help facilitate a conversation that enables members to speak openly. (A minimal sketch of this kind of participation analysis follows this list.)
- Managing Conflict Constructively: An AI system could recognize early signs of conflict in team interactions and suggest strategies for resolving disagreements. An AI agent could also facilitate perspective taking to help different parties more deeply understand others’ points of view. By helping teams navigate conflict effectively, AI could prevent escalation and foster a more trusting environment.
- Enhancing Learning Orientation: Individuals with a learning orientation focus on increasing competence and developing new skills through feedback, both of which are important for psychological safety. AI-based coaches could provide feedback privately to individuals practicing new skills, encouraging a learning orientation and framing mistakes as a necessary part of development.
- AI as a Non-Judgmental Source of Information: AI could provide non-judgmental information and opportunities to test new ideas without fear of negative consequences, increasing psychological safety. For example, individuals might be more willing to ask an AI for help or feedback compared to a human supervisor. When AI is perceived as supportive and not evaluative, individuals are more willing to engage in risk-taking essential for learning and innovation.
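As a rough illustration of the first use case above, an AI facilitator could scan a chat log, measure how evenly participation is spread, and privately flag members who may be reluctant to contribute. The sketch below is hypothetical; the message-count metric and the threshold are illustrative assumptions, and a production system would need far more care around privacy and tone:

```python
# Hypothetical sketch of a participation-balance check for a chat log.
# Flags members whose share of messages falls well below an equal
# share. Metric and threshold are illustrative assumptions.

from collections import Counter

def participation_report(chat_log, members, quiet_ratio=0.5):
    """chat_log: list of (member, message) tuples. Flags members who
    sent fewer than quiet_ratio of an equal share of messages."""
    counts = Counter(member for member, _ in chat_log)
    equal_share = len(chat_log) / len(members)
    quiet = [m for m in members if counts[m] < quiet_ratio * equal_share]
    if quiet:
        return f"Consider inviting {', '.join(quiet)} into the discussion."
    return "Participation looks balanced."

log = [("Ana", "Here's my draft."), ("Ana", "Thoughts?"),
       ("Ben", "Looks good."), ("Ana", "Great, moving on.")]
print(participation_report(log, ["Ana", "Ben", "Chen"]))
# -> "Consider inviting Chen into the discussion."
```

A key design choice, consistent with the effort-feedback findings discussed earlier, is that the flag would go privately to a facilitator or leader rather than publicly calling out the quiet member.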
[6] Ted:
What are key AI and collective intelligence questions that your teams are most excited to explore in the future?
Anita, Ella, and Pranav: We remain intrigued by the role AI can play in facilitating team collaboration by helping teams develop the various forms of collective cognition that are instrumental to collective intelligence. For instance, AI can be designed to enhance collective intelligence by supporting the development of collective memory, enabling a team to expand its total memory capacity by making optimal use of each other’s expertise and developing further expertise in key areas. AI can also help optimize collective attention, aiding teams in prioritizing tasks and allocating attention effectively. In addition, AI can enhance collective reasoning by facilitating discussions and decision-making processes to align on goals and strategies. All of these are dynamic processes in a team; they are not solved once and for all, and an ongoing monitoring process is needed to address changes in the team and environment and adjust accordingly. Given the amount of information and the speed of change most teams face, there are many opportunities for AI to monitor and help manage these processes.
About Anita Williams Woolley
Dr. Anita Williams Woolley is a Professor of Organizational Behavior at Carnegie Mellon University’s Tepper School of Business. Dr. Woolley received her doctorate in organizational behavior from Harvard University, and her research includes seminal work on collective intelligence in teams, first published in Science in 2010. Her current work focuses on collective intelligence in human-computer collaboration, with projects funded by DARPA and the NSF focused on how AI enhances synchronous and asynchronous collaboration in distributed teams. Professor Woolley has been a Senior Editor at Organization Science and a founding Associate Editor of Collective Intelligence.
About Ella Glikson
Dr. Ella Glikson is a Senior Lecturer at the Graduate School of Business Administration at Bar-Ilan University. Dr. Glikson holds a PhD in Industrial Engineering and Management from the Technion – Israel Institute of Technology. Her research focuses on geographically dispersed virtual teams, with a special interest in the impact of AI-based technology on team communication, collective intelligence, and cultural differences. Dr. Glikson's recent work explores how AI can reshape teamwork, influencing both cognitive and psychological processes within virtual environments.
About Pranav Gupta
Dr. Pranav Gupta is an Assistant Professor of Business Administration at the Gies College of Business at the University of Illinois Urbana-Champaign. Dr. Gupta received his doctorate in Organizational Behavior and Theory from Carnegie Mellon’s Tepper School of Business. His research includes significant work on collective intelligence and human-machine collaboration, focusing on the emergence of intelligent behavior in digitally-augmented teams and self-organized collectives. His most recent projects, funded by DARPA, NSF, and ARO, aim to facilitate smarter real-time coordination through Transactive Memory, Transactive Attention, and Transactive Reasoning.