
Most People Don't Expect AI to Benefit Them. What Can We Do About That?
CO.LAB, KUNGFU.AI's futures-oriented co-design series, is a venue where we explore the evolving relationship between humans and AI. This session built on previous discussions (on topics such as attention, authenticity, and productivity) and tackled the question:
How can we bridge the expectations gap between AI experts and the general public about who will benefit from AI—and what drives these divergent views?
76% of AI experts believe AI will benefit them personally. Only 24% of the public agrees (Pew Research Center, 2025). When three-quarters of experts see personal benefit in AI while three-quarters of the public does not, we need to understand why—and what we can do about it.
Our discussion revealed that expected benefits and trust are deeply intertwined. Trust shapes whether we expect positive outcomes from AI, while our expectations of benefit influence our willingness to trust these systems. Rather than treating these as separate phenomena, our session explored how they reinforce each other.
The Benefits Gap, Quantified
As mentioned above, while 76% of AI experts believe AI will benefit them personally, only 24% of the general public shares this optimism (Pew Research Center, 2025). This gap extends beyond individual benefit. 73% of experts think AI will positively impact how people (other than themselves) do their jobs, compared to just 23% of average adults (Pew Research Center, 2025). Gender differences compound the divide, with men nearly twice as likely as women to expect AI to have a net-positive impact on society over the next 20 years (Pew Research Center, 2025).
What explains these dramatically different expectations about AI's benefits? Our discussion surfaced several interconnected factors shaping how people perceive AI's potential impact on their lives.
"I think a big piece of it is just knowledge of what AI even is," one participant observed, noting that most public-facing AI takes the form of chatbots and generative tools—"none of it is the AI that I trust." This disconnect between the AI systems many experts work with (in particular, specialized, narrow applications like AlphaFold) and what the public encounters (often broad, anthropomorphized chatbots) creates fundamentally different reference points for evaluating potential benefits.
But knowledge gaps don't fully explain the divide. As another participant astutely noted, "AI experts kind of have to think that it's good... otherwise why am I working in this space?" This self-selection bias means those closest to the technology are inherently predisposed to see its benefits, while also having greater visibility into positive applications that may be invisible to the public.
Four Factors Shaping Trust
A. Contextual Dependency of Trust
One participant posed the following thought experiment: "If someone told you they had 'unlearned everything' and developed an entirely new approach to something, would you trust them more or less?"
In a game like Go, such radical reimagining seems revolutionary—even beautiful. Rick Rubin, the influential music producer, cried when he first heard about AlphaGo's victory, not from sadness but from recognizing beauty in how the AI won. In an interview with Krista Tippett, he explained: "I was crying because it was about creativity. And that the computer made a creative choice that man wouldn't have made. And the reason the computer made the creative choice was not because the computer was smarter. It was actually because the computer knew less." In other words, AI's blank-slate approach revealed possibilities previously hidden by human assumptions.
But apply the same "unlearning" to heart surgery, and most people would decline the operation. For self-driving cars, people tend to land somewhere in the middle—appreciating the innovation but requiring proof that fundamental safety principles remain intact despite the novel approach.
This contextual nature of trust reveals a critical flaw in how we discuss AI. Asking whether people "trust AI" makes as much sense as asking if they "trust tools"—the answer depends entirely on which tool, used for what purpose, in what context.
B. Obscured Subject-Object Relationships
We habitually make AI the subject of sentences—"AI is changing the world"—when in fact it's something that people and organizations use. This grammatical choice positions us as passive observers of an autonomous force rather than active agents of change.
"We are changing the world with AI" reasserts human responsibility and control. It's not just semantic precision; these language patterns influence whether people feel like architects of the future or victims of technological inevitability. When we obscure human agency in our language, we deepen the sense that AI is something happening to us rather than something we're doing.
The illusion of understanding compounds this power dynamic. People who think they comprehend AI often trust it more—even when that belief is false. One participant noted how they've talked to people who "really trust ChatGPT and have absolutely the wrong idea about what is happening when they ask it something." They trust it because they think they understand it, projecting their own ideas of what's happening in the absence of actual legibility.
C. AI as an Economic Arrangement
"AI is not just a technology," one participant observed. "It's an economic arrangement" involving massive data capture, often without meaningful consent.
Building modern AI requires massive computational resources and vast datasets—often scraped from the internet without creators' explicit consent. Because those requirements are so steep, only well-resourced organizations can develop cutting-edge models, creating new power structures. Trust in AI therefore involves questions far beyond technical capability: Who owns the training data? Who controls access to powerful models? Who captures the value when AI transforms industries? Building trust will necessarily involve answering these questions in a way that feels fair to people.
D. Model Sensitivity to Small Interventions
"Business logic tacked on top" of AI systems creates a different but equally corrosive trust problem. As one participant explained: when you take an AI system optimized to find the "global minimum in a landscape of error" and adjust it for business purposes, you risk compromising its core function. The risk is two-fold:
(1) Technically, small adjustments made to optimize for profit can degrade the AI's core performance—these systems are often sensitive to perturbations in ways traditional software isn't. A minor tweak to boost engagement might cascade through the system, producing unexpected and undesirable outputs (a toy sketch of this effect follows the list).
(2) Ethically, such modifications introduce biases that users often detect. They may not understand the technical details, but they sense when a system that once felt "neutral" starts pushing certain outcomes, such as promoting sponsored content or shadowbanning accounts. For affected user groups, trust tends to erode gradually at first, then all at once, as more people are put off by what feels to them like manipulation or tampering.
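To make the first point concrete, here is a minimal numeric sketch (the loss shape and the size of the added term are invented for illustration, not drawn from any production system): an objective tuned purely to minimize error, compared with the same objective after a small "business logic" term is bolted on. Even a modest added term pulls the optimum away from the error minimum and degrades the core metric.

```python
import numpy as np

W_STAR = 1.0                       # parameter value that minimizes the core error

def core_error(w):
    # The "landscape of error" with a single clear global minimum at W_STAR.
    return (w - W_STAR) ** 2

def business_objective(w, bias=0.6):
    # Same error term plus a small, hypothetical term that rewards pushing w
    # higher (e.g., toward more engagement or promotion).
    return core_error(w) - bias * w

ws = np.linspace(-1.0, 3.0, 2001)
w_pure = ws[np.argmin(core_error(ws))]
w_biased = ws[np.argmin(business_objective(ws))]

print(f"pure optimum:   w = {w_pure:.2f}, core error = {core_error(w_pure):.3f}")
print(f"biased optimum: w = {w_biased:.2f}, core error = {core_error(w_biased):.3f}")
# The small added term moves the optimum and raises the core error: the system
# still runs, but it no longer does the one thing it was best at.
```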
Human Responses to AI Progress
Grieving the Loss of Previously Uniquely Human Work
When AI, particularly the large language models that currently dominate public perception, starts performing tasks we once considered uniquely human, such as writing, creating, and problem-solving, those previously distinctive abilities begin to feel commoditized. That process diminishes what once made us feel special.
One participant described watching AI do their work as going through "a grieving process," starting with amazement ("Wow, this is amazing") before shifting to existential concern ("Oh man, I thought that was something only I could do").
Dismissing this emotional response doesn't just alienate people; it erodes trust in AI's genuine benefits and prevents honest conversations about how to integrate these tools thoughtfully.
Amplification of Existing Trends
In addition to creating new trends, AI often amplifies existing ones. As one participant observed: "AI is exacerbating the patterns of how industries are already trending due to human influence." For example, the widely held public perception that AI will have a negative impact on news (only 10% of US adults think AI will positively impact news; Pew Research Center, 2025) is likely driven in part by preexisting distrust in media as an institution. In other words, AI is a multiplier for whatever trust or skepticism already exists in a given domain, because that trust or skepticism shapes how we expect the domain to adopt and deploy AI.
This suggests that building trust in AI requires also building trust in the underlying human systems that AI is augmenting.
The Missing Discursive Middle
As our session concluded, one participant noted: "It's really hard to think about what [positive AI] solutions are... it feels a lot more difficult to talk about than what a negative one looks like."
This difficulty reveals a deeper problem: our AI discourse has become polarized between extremes. On one side are the "Boomers," techno-optimists who promise AI will solve humanity's greatest challenges, from curing cancer to reversing climate change. On the other are the "Doomers," who warn of a job apocalypse, surveillance states, and existential risk. Both narratives tend to produce headlines and, at least to some extent, shape policy discussions.
What's missing is the pragmatic middle—the space for those who see AI as neither salvation nor catastrophe but as a powerful tool requiring thoughtful deployment. This polarization isn't just an academic problem. When every AI conversation starts from extreme positions, everyday people become skeptical of all narratives.
Design Principles for Building Trust and Expanding Benefits Perception
1. Embrace Specificity
Rather than pursuing trust in or expected benefits from "AI" as a monolithic category, focus on specific applications that provide tangible value in specific contexts. Benefits from AI-assisted medical diagnosis involve different considerations than benefits from AI-generated art or AI-powered productivity tools. Each application carries a unique value proposition and requires different kinds of buy-in from different users.
2. Restore Agency Through Language
How we talk about AI shapes how we relate to it. When we emphasize human decisions—"we chose to implement this system"—rather than treating AI as an autonomous force, we help restore a sense of agency. Small shifts in language can reframe AI from something that happens to us to something we actively deploy and govern.
3. Align Incentives Before Algorithms
As Stafford Beer observed, "the purpose of a system is what it does." If you build an AI system to maximize engagement, it will maximize engagement—even at the cost of user well-being. If you optimize for ad revenue, the system will serve advertisers first and users second. The incentives you embed are the system's true purpose, regardless of stated intentions or values; saying you endorse content moderation or responsible use means little on its own, because in the long run, values that are merely espoused and not incentivized lose out to the behaviors the system actually rewards. Choose your incentives wisely.
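As a stylized sketch of this point, assuming invented items, scores, and weights (none of this reflects any real system): a simple ranking function surfaces whatever its weights reward, and an espoused value changes nothing unless it actually carries weight in the objective.

```python
# Each item has an engagement score and a well-being score (all invented).
items = [
    ("outrage_clip",     0.95, 0.10),
    ("helpful_tutorial", 0.55, 0.90),
    ("sponsored_post",   0.80, 0.40),
]

def rank(items, w_engagement=1.0, w_wellbeing=0.0):
    # The ranking reflects whatever the weights reward, nothing more.
    return sorted(items, key=lambda it: w_engagement * it[1] + w_wellbeing * it[2], reverse=True)

print([name for name, *_ in rank(items)])                   # engagement-only objective
print([name for name, *_ in rank(items, w_wellbeing=1.5)])  # well-being carries real weight
# Only the second ranking reflects the espoused value; stating a value without
# weighting it changes nothing about what the system actually does.
```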
4. Design for Grief and Transition
For some, AI automation of meaningful work triggers genuine loss—a reality that deserves acknowledgment rather than dismissal. Thoughtful deployment strategies can honor this emotional dimension while creating pathways for people to grow alongside AI capabilities, finding new forms of meaningful contribution.
5. Make Power Visible
Both trust and expected benefits deepen when people understand not just how AI works technically, but who controls it and how value flows through the system. Transparency about governance structures, data provenance, and value distribution helps people assess whether they're likely to benefit from particular AI deployments.
Conclusion
One final idea that came out of our discussion was the provocation: What if we abandoned the term "artificial intelligence" altogether? One participant proposed thinking instead of "intelligence amplification"—placing AI alongside tools like calculators, telescopes, and other human capability enhancers that clearly benefit their users.
This shift is especially important for language models, which create a powerful illusion of personhood simply by producing fluent text. When we interact with these ‘word machines’ through natural conversation, it's easy to anthropomorphize them—to imagine consciousness behind the words. But viewing them as sophisticated word manipulators, like advanced versions of spell-check or translation tools, can help ground our understanding. We can appreciate their utility without projecting human qualities onto statistical pattern matching.
However, this reframing must acknowledge that, unlike traditional tools or even conventional software, AI systems have qualities that are different in kind: they are probabilistic in nature, their internal mechanics are difficult to interpret, and they can exhibit unforeseen emergent capabilities. And AI capabilities are advancing quickly. We need to be specific about what today's systems can actually do for people, while acknowledging that they may enable more powerful systems in the future.
Ultimately, the question isn't whether AI will benefit people in the abstract, but how to build and deploy systems that provide tangible, equitable benefits that people can actually perceive and access. This means creating AI that enhances rather than replaces human capabilities, that makes power structures visible rather than obscured, and that acknowledges the full spectrum of human needs beyond efficiency and optimization.
But how do we know if we're building systems that will engender trust and benefit most people? One approach draws on philosopher John Rawls's "veil of ignorance": evaluate AI deployments as if we didn't know our own position in society, assessing potential scenarios without knowing whether we would be AI developers, displaced workers, or members of communities shaped by algorithmic decisions. Such exercises could help us envision AI futures where benefits are more broadly distributed—and therefore more likely to be recognized by all.
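As a toy illustration of that exercise, with hypothetical scenarios, stakeholder roles, and benefit scores: a Rawlsian comparison can be approximated by scoring each deployment option by the outcome for its worst-off role (a maximin criterion), rather than by the average outcome or by our own vantage point.

```python
# Hypothetical 0-1 "benefit" scores per stakeholder role for two invented scenarios.
scenarios = {
    "automate_and_displace":   {"developer": 0.9, "displaced_worker": 0.1, "community": 0.4},
    "augment_with_retraining": {"developer": 0.7, "displaced_worker": 0.6, "community": 0.6},
}

def veil_of_ignorance_score(benefits):
    # Behind the veil we don't know which role we'd occupy,
    # so judge each scenario by its worst-off role (maximin).
    return min(benefits.values())

best = max(scenarios, key=lambda name: veil_of_ignorance_score(scenarios[name]))
print(best)  # -> augment_with_retraining
```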
If you're grappling with how to ensure your AI initiatives deliver tangible benefits to all stakeholders, we'd love to talk. Whether you're deploying AI for the first time or rethinking existing systems, we can help design approaches that create and communicate real value, building appropriate levels of trust through transparency and genuine benefit delivery. You can reach me at ben.szuhaj@kungfu.ai.
Please note that the dialogue represented from the CO.LAB discussion has been edited for clarity and brevity. Additionally, I would like to explicitly acknowledge my use of AI in the creation of this work—from helping to transcribe the session to helping write this piece.