How AI fatigue turned trust into something users are willing to pay for.
Until quite recently, we could ship new features with rough edges and still count on users’ curiosity.
The mentality was: if AI can do it, we ship it.
Novelty covered for a lot there.
But NN/g now describes this year as an AI-fatigue moment and says trust is becoming a major design problem because users burned by AI features are more hesitant to try the next one.
If that’s the change, then the scarce thing isn’t intelligence. It’s the willingness to trust it.
📌 What’s inside
Why didn’t transparency solve the trust problem?
What changed in users?
What are the products that succeed doing?
What does that leave us with?
Trust used to feel abundant because we were spending the user’s curiosity, not our own credibility.
But once every product promises some version of intelligence, the claim of trustworthiness turns into a commodity. Everybody makes it. Fewer products follow through.
When every product claims to be trustworthy, users become skeptical.
And that’s exactly when verified trust starts to feel premium: something users will spend attention on, return to, and, in the right category, actually pay for.
Why didn’t transparency solve the trust problem?
Our first response made sense. If AI is opaque, then disclosure sounds like the ethical correction. Fix? Label the output. Mention the model. Slap on a confidence badge.
And fair enough: we were working from a reasonable assumption that if users know AI is involved, trust goes up, simply because we’ve been honest about it.
But that’s not how it works.
The Penn State University study led by Yuan Sun offers an interesting insight here. It turns out that users don’t always want more explanation.
When the system meets their expectations, they are fine without it.
But they do demand it when the system either underdelivers (why did it fail?) or overdelivers (driven by curiosity).
So maybe transparency was never a single pattern. Maybe it only works when it gives people the right amount of interpretive help for the moment they’re in.
Then a study published in a Korean journal makes the tension even harder to wave away.
In an experiment using fictional laundry detergent ads, disclosing AI use lowered perceived authenticity. The negative effect was even stronger among people with higher AI literacy.
That means that in some contexts, disclosing AI use makes users trust the brand or product less.
That’s awkward, but useful.
It raises a question: when we add disclosure, are we helping people judge, or are we just signalling compliance to ourselves?
Maybe disclosure is valuable only when it serves the user’s current needs and goals (not when it’s irrelevant or unwanted).
And perhaps sometimes, disclosure is simply not enough, as users need to verify for themselves whether they can truly trust the system or not.
What changed in users?
Users didn’t become skeptical for no reason. They adapted.
Pew found that AI experts overwhelmingly think Americans interact with AI almost constantly or several times a day, while Americans themselves report AI showing up across everyday contexts and still say they want more control over how it’s used.
At the same time, NN/g’s latest site-chatbot research found that users often don’t notice those chatbots, rarely use them, and struggle to see what they offer beyond search, filters, or general-purpose tools like ChatGPT.
AI is more present, but each individual AI feature seems to start with less goodwill than the last.
That changes the job of UX in a way I don’t think we’ve fully absorbed yet.
Onboarding used to be about teaching capability. More and more, it feels like the first job is answering two questions: Can I put my trust in this system? And what happens if something goes wrong?
What are the products that succeed doing?
The products that make you pause and think, all right, something different is happening here, seem to follow a different strategy. A few questions keep resurfacing:
Are they showing information, or the history of information?
A label tells me AI was involved. But provenance tells me what the answer is standing on. Those are not the same thing, even though we’ve often treated them as interchangeable.
Maybe because labels are faster to ship, while provenance is harder to uncover and implement.
But the Tow Center’s analysis of eight AI search tools ran sixteen hundred test queries and found incorrect answers on more than 60 percent of them, along with fabricated links and a tendency to sound certain even when the systems were wrong.
In that environment, disclosure without any lineage starts to feel strangely incomplete.
Maybe what users want isn’t confession. Maybe they want receipts: sources, freshness, reviewer, rationale, the trail behind the outputs. Maybe that’s how trust is built in the era of AI.
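To make “receipts” a little more concrete, here is a minimal sketch of what that provenance could look like as data attached to an AI answer. The names (AnswerProvenance, SourceRecord, renderReceipts) and the exact fields are my own assumptions for illustration, not a pattern drawn from any of the products or studies above.

```ts
// Hypothetical sketch: metadata a product could attach to each AI answer
// so users can inspect what it stands on, rather than just seeing an "AI" label.

interface SourceRecord {
  title: string;        // what the claim is drawn from
  url: string;          // where a skeptical user can verify it
  retrievedAt: string;  // freshness: ISO date the source was fetched
}

interface AnswerProvenance {
  sources: SourceRecord[];   // the trail behind the output
  modelVersion: string;      // which system produced it
  reviewedBy?: string;       // human reviewer, if any
  rationale?: string;        // short note on why these sources were used
}

interface AssistantAnswer {
  text: string;
  provenance: AnswerProvenance; // receipts, not just a disclosure badge
}

// Example: rendering the receipts alongside the answer text.
function renderReceipts(answer: AssistantAnswer): string {
  const lines = answer.provenance.sources.map(
    (s) => `• ${s.title} (${s.url}, retrieved ${s.retrievedAt})`
  );
  return [answer.text, "", "Based on:", ...lines].join("\n");
}
```

The point of the sketch is the shape, not the fields: the answer carries its own history, so the user can judge it without taking the label on faith.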
Are they giving control, or simply asking for a leap of faith?
We’ve leaned hard on simple accept-or-reject patterns, and I get why. Simplicity has been one of our better instincts. But should a low-stakes suggestion and a high-stakes recommendation ask for the same kind of trust? No.
Maybe users aren’t asking us to remove AI. Maybe they’re asking us to stop pretending every act of automation deserves the same amount of faith.
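One way to picture that is confirmation friction that scales with the stakes of the action instead of a single accept-or-reject pattern. This is only a sketch under my own assumptions; the stake levels, modes, and thresholds are invented for illustration.

```ts
// Hypothetical sketch: matching the amount of faith an action asks for
// to how consequential it is, instead of one accept-or-reject pattern.

type Stakes = "low" | "medium" | "high";

type ConfirmationMode =
  | { kind: "auto-apply"; undoable: true }             // e.g. a spelling fix
  | { kind: "inline-confirm" }                         // one-tap accept or reject
  | { kind: "review-required"; showProvenance: true }; // user must see the receipts

function confirmationFor(stakes: Stakes, modelConfidence: number): ConfirmationMode {
  // Invented thresholds, purely illustrative.
  if (stakes === "low" && modelConfidence > 0.9) {
    return { kind: "auto-apply", undoable: true };
  }
  if (stakes === "high") {
    return { kind: "review-required", showProvenance: true };
  }
  return { kind: "inline-confirm" };
}
```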
Is the real trust moment the failure moment?
NN/g says support when the system fails is one of the fundamentals for building confidence in AI experiences. If that’s right, then the real trust test may be the admission of limits, the recovery path, the preserved context, the moment the system stops performing like it knows everything and starts helping honestly.
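To make “support when the system fails” slightly more tangible, here is a sketch of what a failure response could carry so the user isn’t left at a dead end. The shape and field names are assumptions of mine, not a documented NN/g pattern.

```ts
// Hypothetical sketch: what an AI feature might return when it cannot answer
// confidently, so failure becomes a trust moment rather than a dead end.

interface FailureResponse {
  admission: string;      // plain statement of what the system could not do
  partialWork?: string;   // anything useful it did produce, clearly marked
  preservedContext: {     // nothing the user entered is lost
    userInput: string;
    conversationId: string;
  };
  recoveryPaths: Array<
    | { kind: "retry-with-clarification"; question: string }
    | { kind: "hand-off-to-human"; eta: string }
    | { kind: "fall-back-to-search"; query: string }
  >;
}

// Example of a response that admits limits and offers a way forward.
const example: FailureResponse = {
  admission: "I couldn't find a reliable source for current pricing.",
  preservedContext: { userInput: "compare plan prices", conversationId: "abc-123" },
  recoveryPaths: [
    { kind: "retry-with-clarification", question: "Which region are you in?" },
    { kind: "hand-off-to-human", eta: "under 5 minutes" },
  ],
};
```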
Could restraint be the signal users believe?
We were under real pressure to add AI everywhere. Production got cheaper. Iteration sped up. Competitive anxiety did the rest. Of course we all filled surfaces with co-pilots and chat entry points.
But NN/g’s 2024 and 2026 research keeps circling the same tension: AI features need a well-defined role, and they need to solve a real problem.
So I keep coming back to a question that would have sounded oddly conservative not long ago: what if the most trustworthy AI choice in some flows is not adding another AI touchpoint at all?
Where is the human when judgment matters?
This may be the hardest one.
Because we haven’t actually removed humans from many systems. We’ve often just hidden them until escalation.
NN/g’s 2026 report argues that human direction, curation, and verification remain essential even as AI capability improves. That makes me wonder whether visible human accountability is becoming part of the interface, even when the interface looks automated.
Not because users need a person hovering over every decision, but because consequential systems still seem to need a recognisable owner.
What does that leave us with?
I don’t think this adds up to a clean playbook. If anything, it makes the problem stranger in a useful way.
We used to ask whether users trusted AI. Now I find myself asking: what kind of evidence does a skeptical user need before they’re willing to trust this system, for this task, right now? That’s a different design brief. And maybe a more honest one.
UI is becoming less of a differentiator, while deeper understanding, contextual judgment, and strategic problem-solving matter more.
That feels connected to all of this.
If trust is scarce, then maybe the work isn’t making AI sound confident. Maybe the crux lies in helping it admit what it knows and what it doesn’t, so a skeptical user can decide, quickly, whether to trust it or not.
I don’t think we have that solved.
But perhaps that’s the question worth carrying forward.
📚 Sources
- State of UX 2026: Design Deeper to Differentiate (Nielsen Norman Group)
- Does transparency matter when an AI system meets performance expectations? An experiment with an online dating site (ScienceDirect)
- To explain or not? Need for AI transparency depends on user expectation (The Pennsylvania State University)
- “Made with AI”: The Paradox of Transparency: The Impact of Labeling on Authenticity, Brand Trust, and Brand Attitude (KCI)
- Artificial intelligence in daily life: Views and experiences (Pew Research Center)
- AI in Americans’ lives: Awareness, experiences and attitudes (Pew Research Center)
- AI Search Has a Citation Problem (Columbia Journalism Review)