by Soundarya Jayaraman / October 22, 2025
Do you trust AI? Not just to autocomplete your sentence, but to make decisions that affect your work, your health, or your future?
These are questions asked not just by ethicists and engineers, but by everyday users, business leaders, and professionals like you and me around the world.
In 2025, AI tools aren’t experimental anymore. ChatGPT writes our messages, Lovable and Replit build our apps and websites, Midjourney designs our visuals, and GitHub Copilot fills in our code. Behind the scenes, AI screens resumes, triages support tickets, generates insights, and even assists in clinical decisions.
But while adoption is soaring, the big question persists: Is AI trustworthy? Or more precisely, is AI safe? Is AI reliable? Can we trust how it’s used, who’s using it, and what decisions it’s making?
In 2025, trust in AI is fractured, rising in emerging economies and declining in wealthier nations.
In this article, we break down what global surveys, G2 data, and reviews reveal about AI trust in 2025, across industries, regions, demographics, and real-world applications. If you're building with AI or buying tools that use it, understanding where trust is strong and where it’s slipping is essential.
The world isn’t just talking about AI anymore. It’s using it.
According to KPMG, 66% of people now say they use AI regularly, and 83% believe it will deliver wide-ranging benefits to society. From recommendation engines to voice assistants to AI-powered productivity tools, artificial intelligence has moved from the margins to the mainstream.
This rise in AI adoption isn’t limited to consumers. McKinsey’s data shows that the share of companies using AI in at least one function has more than doubled in recent years, climbing from 33% in 2017 to 50% in 2022, and now hovering around 78% in 2024.
G2 data echoes that momentum. According to G2’s study on the state of generative AI in the workplace, 75% of professionals now use generative AI tools like ChatGPT and Copilot to complete daily tasks. In a separate AI adoption survey, G2 found that:
In short, AI adoption is high and rising. But trust in AI? That’s another story.
According to a 2024 Springer study, a search for “trust in AI” on Google Scholar returned:
As of 2025, a Google search for the same phrase yields over 3.1 million results, reflecting the growing urgency, visibility, and complexity of the conversation around AI trust.
This rise in attention doesn't necessarily reflect real-world confidence. Trust in AI remains limited and uneven. Here’s the latest data on what the public says about AI and trust.
In advanced economies, willingness drops further, to just 39%. This is part of a larger downward trend in trust. Between 2022 and 2024, KPMG found:
In short, even as AI systems grow more capable and widespread, fewer people feel confident relying on them, and more people feel anxious about what they might do.
These trends reflect deeper discomforts. While a majority of people believe AI systems are effective, far fewer believe they’re responsible.
This 13-point gap highlights a core tension: people may trust AI to work, but not to do the right thing. They worry about opaque decision-making, unethical use cases, or a lack of oversight. And this divide isn’t limited to one part of the world. It shows up consistently across countries, even in regions where confidence in AI’s performance is high.
Trust in AI isn’t uniform. It varies dramatically depending on where you are in the world. Global averages show a cautious attitude, but some regions place significant faith in AI systems while others remain deeply skeptical, with sharp differences between emerging economies and high-income countries.
Across countries like Nigeria, India, Egypt, China, the UAE, and Saudi Arabia, over 60% of respondents say they are willing to trust AI systems, and nearly half report high acceptance. These are also the countries where AI adoption is accelerating the fastest, and where digital literacy around AI appears to be higher.
| Country | % willing to trust AI |
|---------|-----------------------|
| Nigeria | 79% |
| India | 76% |
| Egypt | 71% |
| China | 68% |
| UAE | 65% |
In contrast, most advanced economies report significantly lower trust levels:
| Country | % willing to trust AI |
|---------|-----------------------|
| Finland | 25% |
| Japan | 28% |
| Czech Republic | 31% |
| Germany | 32% |
| Netherlands | 33% |
| France | 33% |
Despite strong digital infrastructure and widespread access, advanced economies appear to have more questions than answers when it comes to AI governance and ethics. This hesitancy may stem from several factors: greater media scrutiny, regulatory debates, or more exposure to high-profile AI controversies, from data privacy lapses to deepfakes and algorithmic bias.
Source: KPMG
The trust gap between advanced and emerging economies isn’t just visible in willingness to trust and accept AI. It’s also reflected in how people feel about it. Data shows that people in emerging economies are far more likely to associate AI with positive emotions:
In contrast, emotional responses in advanced economies are more ambivalent and conflicted:
This emotional split reflects deeper divides in exposure, expectations, and lived experiences with AI technologies. In emerging markets, AI may be seen as a leap forward, improving access to education, healthcare, and productivity. In more developed markets, however, the conversation is more cautious, shaped by ethical concerns, automation fears, and a long memory of tech backlashes.
Edelman’s 2025 Trust Barometer presents a complementary angle on how comfortable people are with businesses using AI.
44% globally say they’re comfortable with the business use of AI. But the regional breakdown reveals a similar pattern, mirroring the trust divide between emerging and advanced economies seen in KPMG’s data.
People in emerging economies like India, Nigeria, and China are not only more willing to trust AI but are also more comfortable with businesses using it.
| Country | % of people comfortable with businesses using AI |
|---------|---------------------------------------------------|
| India | 68% |
| Indonesia | 66% |
| Nigeria | 65% |
| China | 63% |
| Saudi Arabia | 60% |
In contrast, people from Australia, Ireland, the Netherlands, and even the US have a trust deficit. Less than 1 in 3 say they are comfortable with businesses using AI.
| Country | % of people comfortable with businesses using AI |
|---------|---------------------------------------------------|
| Australia | 27% |
| Ireland | 27% |
| Netherlands | 27% |
| UK | 27% |
| Canada | 29% |
While regional divides are stark, they’re only part of the story. Trust in AI also breaks down along demographic lines — from age and gender to education and digital exposure. Who you are, how much you know about AI, and how often you interact with it can shape not just whether you use it, but whether you trust it.
Let’s take a closer look at the demographics of optimism versus doubt.
Trust and comfort with AI aren’t just shaped by what AI can do, but by who you are and how much you’ve used it. The data shows a clear pattern: the more people engage with AI through training, regular use, or digital fluency, the more likely they are to trust and adopt it.
Conversely, those who feel underinformed or left out are far more likely to view AI with caution. These divides cut deep, separating generations, income groups, and education levels. What’s emerging isn’t just a digital divide, but an AI trust gap.
Trust in AI systems declines steadily with age. Here’s how it breaks down:
The trust gap by age doesn’t exist in isolation. It tracks closely with how frequently people use AI, how well they understand it, and whether they’ve received any formal training, all of which decline with age. The generational divide is clear when we look at the following data:
| Metric | 18–34 years | 35–54 years | 55+ years |
|--------|-------------|-------------|-----------|
| Trust in AI systems | 51% | 48% | 38% |
| Acceptance of AI | 42% | 35% | 24% |
| AI use | 84% | 69% | 44% |
| AI training | 56% | 41% | 20% |
| AI knowledge | 71% | 54% | 33% |
| AI efficacy (confidence using AI) | 72% | 63% | 44% |
AI trust isn’t just a generational story. It’s also shaped by privilege, access, and digital fluency. Across the board, people with higher incomes and more formal education report significantly more trust in AI systems. They’re also more likely to use AI tools frequently, feel confident navigating them, and believe these systems are safe and beneficial.
52% of men say they trust AI, but only 46% of women do.
Trust gaps show up in comfort with business use, too. The age, income, and gender-based divides in AI trust also shape how people feel about its use in business. Survey data shows:
In short, the same groups who are more familiar with AI — younger, higher-income, and digitally fluent individuals — are also the ones most comfortable with companies adopting it. Meanwhile, skepticism is stronger among those who feel left behind or underserved by AI’s rise.
Beyond who’s using AI, how it’s being used plays a huge role in public trust. People make clear distinctions between applications they find useful and safe, and those that feel intrusive, biased, or risky.
Surveys show clear variation: some sectors have earned cautious confidence, while others face widespread skepticism. Below, we break down how trust in AI shifts across key industries and applications.
Among all use cases, healthcare stands out as the most trusted application of AI. According to KPMG, 52% of people globally say they’re willing to rely on AI in healthcare settings. In fact, it’s the most trusted AI use case in 42 of the 47 countries surveyed.
That optimism is shared across stakeholders, albeit unequally. Philips’ 2025 study reveals that:
This signals broad confidence in AI’s potential to enhance diagnostics, treatment planning, and clinical workflows. But trust in AI doesn’t always mean comfort with its application, especially among patients.
While healthcare professionals express high confidence in using AI across a range of tasks, patients’ comfort drops sharply as AI moves from administrative roles to higher-risk clinical decisions. The gap is especially pronounced in tasks like:
Patients appear hesitant to hand over trust in sensitive, high-stakes contexts like note-taking or diagnosis, even as they acknowledge AI’s broader potential in healthcare.
Beneath this lies far less confidence in how responsibly AI will be deployed. A JAMA Network study underscores this tension:
In other words, the problem isn’t always the technology; it’s the system implementing it. Even in the most trusted AI sector, questions about governance, safeguards, and accountability continue to shape public sentiment.
In no other domain has AI seen such rapid, grassroots adoption as in education. Students around the world have embraced generative AI, often more quickly than their institutions can respond.
83% of students report regularly using AI in their studies, with 1 in 2 using it daily or weekly, according to KPMG’s study. Notably, this outpaces AI usage at work, where only 58% of employees use AI tools regularly.
But high usage doesn’t always equate to high trust. Just 53% of students say they trust AI in their academic work. And while 72% feel confident using AI and claim at least moderate knowledge, a more complex picture emerges on closer inspection:
Educators are seeing the impact, and their top concerns reflect that. According to Microsoft’s recent research:
Students share similar anxieties:
Together, these data points underscore a critical tension:
The gap is less about belief in AI’s potential and more about responsibility and preparedness: confidence that AI is being used ethically and effectively in the classroom.
AI-powered chatbots have become a near-daily presence, from troubleshooting an app issue to tracking an online order. But while consumers regularly interact with AI in customer service, that doesn’t mean they trust it.
Here’s what recent data reveals:
These concerns aren’t just about quality; they’re about access.
These findings paint a layered picture: people may tolerate AI in service roles, but they want it to be more human-like, especially when empathy, nuance, or complexity is required. There's openness to hybrid models where AI supports, but doesn't replace, human agents.
Self-driving technology has been one of AI’s most visible — and controversial — frontiers. Brands like Tesla, Waymo, Cruise, and Baidu’s Apollo have spent years testing autonomous vehicles, from consumer-ready driver-assist features to fully driverless robotaxis operating in cities like San Francisco, Phoenix, and Beijing.
Globally, interest in autonomous features is growing. S&P Global’s 2025 research finds that around two-thirds of drivers are open to using AI-powered driving assistance on highways, especially for predictable conditions like long-distance cruising. Over half believe AVs will eventually drive more efficiently than human drivers (54%), and nearly half believe they will be safer (47%).
But in the United States, the road to trust is bumpier. According to AAA’s 2025 survey:
The gap between technological readiness and public acceptance underscores a core reality: while AI may be capable of taking the wheel, many drivers — especially in the U.S. — aren’t ready to hand it over. Trust will depend not just on technical milestones, but also on proving safety, reliability, and transparency in real-world conditions.
Law enforcement agencies are embracing AI for its investigative power — using it to uncover evidence faster, detect crime patterns, identify suspects from surveillance footage, and even flag potential threats before they escalate. These tools can also ease administrative burdens, from managing case files to streamlining dispatch.
But with this expanded reach comes serious ethical and privacy concerns. AI in policing often intersects with sensitive personal data, facial recognition, and predictive policing — areas where public trust is fragile and missteps can erode confidence quickly.
Here’s some data on how law enforcement officials and the general public see AI being used for public safety.
A U.S. public safety survey reveals strong internal support:
Among investigative officers, AI is viewed as a powerful enabler, according to Cellebrite research:
But globally, public sentiment towards AI use in policing is mixed. UNICRI’s global survey, spanning six continents and 670 respondents, reveals a nuanced public stance.
Trust hinges on transparency, human oversight, and robust governance, with respondents signaling that AI must be used as a tool, not a replacement, for human judgment.
Media is emerging as one of the most scrutinized fronts for AI trust, not because AI is absent from it, but because of its overwhelming presence in shaping public opinion. From deepfake videos that blur the line between satire and deception to AI-written articles that can spread faster than they can be fact-checked, the information ecosystem is now flooded with content that’s harder than ever to verify.
In this environment, the risks of AI-generated misinformation aren’t just a fringe concern — they’ve become central to the global debate on trust, democracy, and the future of public discourse.
According to recent Ipsos survey data:
The public sees AI’s role in spreading disinformation as urgent enough to require formal guardrails:
This sentiment reflects a unique trust paradox: people see the dangers clearly, they expect institutions to act decisively, but they don’t necessarily trust their own ability to keep up with AI’s speed and sophistication in content creation.
AI is now a staple in recruitment. Half of companies use it in hiring, with 88% deploying AI for initial candidate screening, and 1 in 4 firms that use AI for interviews relying on it for the entire process.
According to HireVue’s 2025 report:
The payoff is tangible. Talent acquisition teams credit AI for clear efficiency and fairness benefits:
However, candidates remain uneasy, especially when AI directly influences hiring outcomes:
This widening trust gap means companies will need to blend AI’s efficiency with clear communication, visible fairness measures, and human touchpoints to win over job seekers.
Across industries, the same pattern keeps surfacing: people’s trust in AI often hinges less on the technology itself and more on who’s building, deploying, and governing it. Whether it’s healthcare, education, or customer service, public sentiment is shaped by perceptions of transparency, accountability, and alignment with human values.
Which raises the next question: How much do people actually trust the companies driving the AI revolution?
As trust in AI’s capabilities — and its role across industries — remains uneven, confidence in the companies building these tools is slipping. People may use AI daily, but that doesn’t mean they trust the intentions, ethics, or governance of the organizations developing it. This gap has become a defining fault line between broad enthusiasm for AI’s potential and a more guarded view of those shaping its future.
Edelman data shows that while overall trust in technology companies has held relatively steady, dipping only slightly from 78% in 2019 to 76% in 2025, trust in AI companies has fallen sharply. In 2019, 63% of people globally said they trusted companies developing AI; by 2025, that figure had dropped to just 56%, though that marks a slight recovery from the previous year.
| Year | Trust in AI companies |
|------|-----------------------|
| 2019 | 63% |
| 2021 | 56% |
| 2022 | 57% |
| 2023 | 53% |
| 2024 | 53% |
| 2025 | 56% |
As skepticism toward AI companies grows, so does the question of who the public actually wants at the helm of AI development: which institutions, whether academic, governmental, corporate, or otherwise, are seen as most capable of building AI in the public’s best interest?
Opinions diverge sharply, not only by institution, but also by whether a country is an advanced or emerging economy.
Globally, universities and research institutions enjoy the highest trust:
Healthcare institutions follow closely, with 41% high confidence in advanced economies and 47% in emerging economies.
By contrast, big technology companies face a pronounced trust divide:
Commercial organizations and governments rank lower still, with fewer than 40% of respondents in most regions expressing high confidence. Governments score just 26% in advanced economies and 39% in emerging ones, signaling widespread skepticism about state-led AI governance.
The takeaway? Trust is concentrated in institutions perceived as more mission-driven (universities, healthcare) rather than profit-driven or politically influenced.
Once the question of who should build AI is settled, the harder challenge is making those systems trustworthy over time. So, what makes people trust AI more?
More than four out of five people (83%) globally say they would be more willing to trust an AI system if organizational assurance measures were in place. The most valued include:
The takeaway: people want AI to follow the same trust playbook as high-stakes industries like aviation or finance — where safety, transparency, and accountability aren’t optional, they’re the baseline.
On G2, AI is no longer a side feature — it’s becoming an operational backbone across industries. From healthcare and education to finance, manufacturing, retail, and government technology, AI-enabled solutions now appear in thousands of product categories. That includes everything from CRM systems and HR platforms to cybersecurity suites, data analytics tools, and marketing automation software.
But whether you’re a hospital deploying diagnostic AI, a bank automating fraud detection, or a public agency introducing AI-driven citizen services, the trust challenge looks remarkably similar. Reviews and buyer insights on G2 show that trust isn’t built by AI capability alone — it’s built by how organizations design, communicate, and govern AI use.
For businesses and institutions, three patterns stand out:
For leaders rolling out AI, whether in software, public services, or physical products, the takeaway is clear: trust is now a competitive advantage and a public license to operate. The most successful adopters combine AI innovation with visible safeguards, user agency, and verifiable outcomes.
If the last decade was about proving AI’s potential, the next will be about proving its integrity. That battle won’t be fought in glossy launch events — it will be decided in the micro-moments: a fraud alert that’s both accurate and respectful of privacy, a chatbot that knows when to hand off to a human, an algorithm that explains itself without being asked.
These moments add up to something bigger: an enduring license to operate in an AI-powered economy. Regardless of sector, the leaders of the next decade will be those who anticipate doubt, give users genuine agency, and make AI’s inner workings visible and verifiable.
In the end, the winners will not just be the fastest model builders; they will be the ones people choose to trust again and again.
Explore how the most innovative AI tools are reviewed and rated by real users in G2’s Generative AI category.
Soundarya Jayaraman is a Content Marketing Specialist at G2, focusing on cybersecurity. Formerly a reporter, Soundarya now covers the evolving cybersecurity landscape, how it affects businesses and individuals, and how technology can help. You can find her extensive writings on cloud security and zero-day attacks. When not writing, you can find her painting or reading.