Trust in AI: Insights from Global Surveys and G2 Data

October 22, 2025


Do you trust AI? Not just to autocomplete your sentence, but to make decisions that affect your work, your health, or your future?

These are questions asked not just by ethicists and engineers, but by everyday users, business leaders, and professionals like you and me around the world.

In 2025, AI tools aren’t experimental anymore. ChatGPT writes our messages, Lovable and Replit build our apps and websites, Midjourney designs our visuals, and GitHub Copilot fills in our code. Behind the scenes, AI screens resumes, triages support tickets, generates insights, and even assists in clinical decisions.

But while adoption is soaring, the big question persists: Is AI trustworthy? Or more precisely, is AI safe? Is AI reliable? Can we trust how it’s used, who’s using it, and what decisions it’s making?

In 2025, trust in AI is fractured, rising in emerging economies and declining in wealthier nations.

In this article, we break down what global surveys, G2 data, and reviews reveal about AI trust in 2025, across industries, regions, demographics, and real-world applications. If you're building with AI or buying tools that use it, understanding where trust is strong and where it’s slipping is essential.

TL;DR: Do people trust AI yet?

  • Short answer: No.
  • Only 46% of people globally say they trust AI systems, while 54% are wary.
  • Confidence varies widely by region, use case, and familiarity.
  • In high-income countries, only 39% trust AI.
  • Trust is highest in emerging economies such as Nigeria (79%) and India (76%).
  • Healthcare is the most trusted application, with 52% of people globally willing to rely on AI in a medical context.

Trust in AI in 2025: Global snapshot shows divided confidence

The world isn’t just talking about AI anymore. It’s using it.

According to KPMG, 66% of people now say they use AI regularly, and 83% believe it will deliver wide-ranging benefits to society. From recommendation engines to voice assistants to AI-powered productivity tools, artificial intelligence has moved from the margins to the mainstream.

This rise in AI adoption isn’t limited to consumers. McKinsey’s data shows that the share of companies using AI in at least one function has more than doubled in recent years, climbing from 33% in 2017 to 50% in 2022, and now hovering around 78% in 2024.

G2 Data echoes that momentum. According to G2’s study on the state of generative AI in the workplace, 75% of professionals now use generative AI tools like ChatGPT and Copilot to complete daily tasks. In a separate AI adoption survey, G2 found that:

  • Nearly 75% of businesses report using multiple AI features in their daily workflows.
  • 79% of companies say they prioritize AI capabilities when selecting software.

In short, AI adoption is high and rising. But trust in AI? That’s another story. 

How global trust in AI is evolving (and why it’s uneven)

According to a 2024 Springer study, a search for “trust in AI” on Google Scholar returned:

  • 157 results before 2017
  • 1,140 papers from 2018 to 2020
  • 7,300+ papers from 2021 to 2023

As of 2025, a Google search for the same phrase yields over 3.1 million results, reflecting the growing urgency, visibility, and complexity of the conversation around AI trust.

This rise in attention doesn't necessarily reflect real-world confidence. Trust in AI remains limited and uneven. Here’s the latest data on what the public says about AI and trust. 

  • 46% of people globally are willing to trust AI systems in 2025.
  • 35% are unwilling to trust AI.
  • 19% are ambivalent — neither trusting nor rejecting AI outright.

How willing are you to trust AI

In advanced economies, willingness drops further, to just 39%. This is part of a larger downward trend in trust. Between 2022 and 2024, KPMG found:

  • The perceived trustworthiness of AI dropped from 63% to 56%.
  • The percentage willing to rely on AI systems fell from 52% to 43%.
  • Meanwhile, the share of people worried about AI jumped from 49% to 62%.

In short, even as AI systems grow more capable and widespread, fewer people feel confident relying on them, and more people feel anxious about what they might do.

These trends reflect deeper discomforts. While a majority of people believe AI systems are effective, far fewer believe they’re responsible. 

  • 65% of people believe AI systems are technically capable, meaning they trust AI to deliver accurate results, helpful outputs, and reliable performance.
  • But only 52% believe AI systems are safe, ethical, or socially responsible, that is, designed to avoid harm, protect privacy, or uphold fairness.

This 13-point gap highlights a core tension: people may trust AI to work, but not to do the right thing. They worry about opaque decision-making, unethical use cases, or a lack of oversight. And this divide isn’t limited to one part of the world. It shows up consistently across countries, even in regions where confidence in AI’s performance is high. 

Where is AI trusted the most (and the least)? A regional breakdown

Trust in AI isn’t uniform. It varies dramatically depending on where you are in the world. While global averages show a cautious attitude, some regions place significant faith in AI systems, while others remain deeply skeptical, with sharp differences between emerging economies and high-income countries.

Top 5 countries most willing to trust AI systems: Emerging economies lead the way

Across countries like Nigeria, India, Egypt, China, the UAE, and Saudi Arabia, over 60% of respondents say they are willing to trust AI systems, and nearly half report high acceptance. These are also the countries where AI adoption is accelerating the fastest, and where digital literacy around AI appears to be higher. 

Country % willing to trust AI
Nigeria 79%
India 76%
Egypt 71%
China 68%
UAE 65%

Countries least willing to trust AI systems: Advanced economies are wary of AI

In contrast, most advanced economies report significantly lower trust levels:

  • Fewer than half of respondents in 25 of the 29 advanced economies surveyed by KPMG say they trust AI systems.
  • In countries like Finland and Japan, fewer than a third of people say they trust AI systems.
  • Acceptance rates are also much lower. In New Zealand and Australia, for example, only 15–17% report high acceptance of AI systems.

Country % willing to trust AI
Finland 25%
Japan 28%
Czech Republic 31%
Germany 32%
Netherlands 33%
France 33%

Despite strong digital infrastructure and widespread access, advanced economies appear to have more questions than answers when it comes to AI governance and ethics. This hesitancy may stem from several factors: greater media scrutiny, regulatory debates, or more exposure to high-profile AI controversies, from data privacy lapses to deepfakes and algorithmic bias. 

Countries' willingness to trust AI

Source: KPMG

How emotions shape trust in AI across the world

The trust gap between advanced and emerging economies isn’t just visible in willingness to trust and accept AI. It’s also reflected in how people feel about it. Data shows that people in emerging economies are far more likely to associate AI with positive emotions:

  • 74% of people in emerging economies are optimistic about AI, and 82% report feeling excited about it.
  • 56% say they feel worried, a notably lower share than in advanced economies.

In contrast, emotional responses in advanced economies are more ambivalent and conflicted:

  • Optimism and worry are nearly tied: 64% feel worried, while 61% feel optimistic.
  • Just over half (51%) say they feel excited about AI.

This emotional split reflects deeper divides in exposure, expectations, and lived experiences with AI technologies. In emerging markets, AI may be seen as a leap forward, improving access to education, healthcare, and productivity. In more developed markets, however, the conversation is more cautious, shaped by ethical concerns, automation fears, and a long memory of tech backlashes.

How comfortable are people with businesses using AI?

Edelman’s 2025 Trust Barometer presents a complementary angle on how comfortable people are with businesses using AI.

44% globally say they’re comfortable with the business use of AI. But the regional breakdown reveals a gap that mirrors the divide between emerging and advanced economies seen in KPMG’s data.

Countries most comfortable with businesses using AI 

People in emerging economies such as India, Nigeria, and China are not only more willing to trust AI but are also more comfortable with businesses using it.

Country % of people comfortable with businesses using AI 
India 68%
Indonesia 66%
Nigeria 65%
China 63%
Saudi Arabia 60%

Countries least comfortable with the business use of AI

In contrast, people in Australia, Ireland, the Netherlands, and even the US show a trust deficit: fewer than 1 in 3 say they are comfortable with businesses using AI.

Country % of people comfortable with businesses using AI 
Australia 27%
Ireland 27%
Netherlands 27%
UK 27%
Canada 29%

While regional divides are stark, they’re only part of the story. Trust in AI also breaks down along demographic lines — from age and gender to education and digital exposure. Who you are, how much you know about AI, and how often you interact with it can shape not just whether you use it, but whether you trust it.

Let’s take a closer look at the demographics of optimism versus doubt.

Who trusts AI? Demographics of optimism vs. doubt

Trust and comfort with AI aren’t just shaped by what AI can do, but by who you are and how much you’ve used it. The data shows a clear pattern: the more people engage with AI through training, regular use, or digital fluency, the more likely they are to trust and adopt it.

Conversely, those who feel underinformed or left out are far more likely to view AI with caution. These divides cut deep, separating generations, income groups, and education levels. What’s emerging isn’t just a digital divide, but an AI trust gap.

Age matters: Younger adults are more likely to trust AI

Trust in AI systems declines steadily with age. Here’s how it breaks down:

  • 51% of adults aged 18–34 say they trust AI
  • 48% of those aged 35–54 say the same
  • Among adults 55 and older, trust drops to just 38%

The trust gap by age doesn’t exist in isolation. It tracks closely with how frequently people use AI, how well they understand it, and whether they’ve received any formal training, all of which decline with age. The generational divide is clear when we look at the following data:

Metric 18–34 years 35–54 years 55+ years
Trust in AI systems 51% 48% 38%
Acceptance of AI 42% 35% 24%
AI use 84% 69% 44%
AI training 56% 41% 20%
AI knowledge 71% 54% 33%
AI efficacy (confidence using AI) 72% 63% 44%

Income and education: Trust grows with access and understanding

AI trust isn’t just a generational story. It’s also shaped by privilege, access, and digital fluency. Across the board, people with higher incomes and more formal education report significantly more trust in AI systems. They’re also more likely to use AI tools frequently, feel confident navigating them, and believe these systems are safe and beneficial.

  • 69% of high-income earners trust AI, compared to just 32% of low-income respondents.
  • Those with AI training or education are nearly twice as likely to trust and accept AI technologies as those without it.
  • University-educated individuals also show elevated trust levels (52%) versus those with no university education (39%).

The AI gender gap: Men trust it more

52% of men say they trust AI, but only 46% of women do.

Trust gaps show up in comfort with business use, too. The age, income, and gender-based divides in AI trust also shape how people feel about its use in business. Survey data shows:

  • 50% of those aged 18–34 are comfortable with businesses using AI
  • That drops to 35% among those 55 and older
  • 51% of high-income earners express comfort with businesses using AI
  • Just 38% of low-income earners show the same comfort

In short, the same groups who are more familiar with AI — younger, higher-income, and digitally fluent individuals — are also the ones most comfortable with companies adopting it. Meanwhile, skepticism is stronger among those who feel left behind or underserved by AI’s rise.

Beyond who’s using AI, how it’s being used plays a huge role in public trust. People make clear distinctions between applications they find useful and safe, and those that feel intrusive, biased, or risky.

Trust in AI by industry: Where it passes and where it fails

Surveys show clear variation: some sectors have earned cautious confidence, while others face widespread skepticism. Below, we break down how trust in AI shifts across key industries and applications.

AI in healthcare: High hopes, lingering doubts

Among all use cases, healthcare stands out as the most trusted application of AI. According to KPMG, 52% of people globally say they’re willing to rely on AI in healthcare settings. In fact, it’s the most trusted AI use case in 42 of the 47 countries surveyed.

That optimism is shared across stakeholders, albeit unequally. Philips’ 2025 study reveals that:

  • 79% of healthcare professionals are optimistic that AI can improve patient outcomes
  • 59% of patients feel the same

This signals broad confidence in AI’s potential to enhance diagnostics, treatment planning, and clinical workflows. But trust in AI doesn’t always mean comfort with its application, especially among patients.

While healthcare professionals express high confidence in using AI across a range of tasks, patients’ comfort drops sharply as AI moves from administrative roles to higher-risk clinical decisions. The gap is especially pronounced in tasks like:

  • Documenting medical notes: 87% of clinicians are confident, vs. 64% of patients who are comfortable
  • Scheduling appointments and check-ins: 84–88% of clinicians are confident, vs. 76% of patients
  • Triaging urgent cases: 81% of clinicians are confident, vs. 63% of patients, an 18-point gap
  • Creating treatment plans: 83% of clinicians believe AI can help create a tailored treatment plan, vs. 66% of patients, a 17-point gap

Patients appear hesitant to hand over trust in sensitive, high-stakes contexts like note-taking or diagnosis, even as they acknowledge AI’s broader potential in healthcare. 

Beneath this lies far less confidence in how responsibly AI will be deployed. A JAMA Network study underscores this tension:

  • Around 66% of respondents said they had low trust that their healthcare system would use AI responsibly.
  • Around 58% expressed low trust that the system would ensure AI tools wouldn’t cause harm.

In other words, the problem isn’t always the technology; it’s the system implementing it. Even in the most trusted AI sector, questions about governance, safeguards, and accountability continue to shape public sentiment.

AI in education: Widespread use, rising concerns 

In no other domain has AI seen such rapid, grassroots adoption as in education. Students around the world have embraced generative AI, often more quickly than their institutions can respond.

83% of students report regularly using AI in their studies, with 1 in 2 using it daily or weekly, according to KPMG’s study. Notably, this outpaces AI usage at work, where only 58% of employees use AI tools regularly.

But high usage doesn’t always equate to high trust. Just 53% of students say they trust AI in their academic work. And while 72% feel confident using AI and claim at least moderate knowledge, a more complex picture emerges on closer inspection:

  • Only 52% of student users say they critically engage with AI by fact-checking output or understanding its limitations.
  • A staggering 81% admit they’ve put less effort into assignments because they knew AI could “help.”
  • Over three-quarters say they’ve leaned on AI to complete tasks they didn’t know how to do themselves.
  • 59% have used AI in ways that violated school policies.
  • 56% say they’ve seen or heard of others misusing it.

Educators are seeing the impact, and their top concerns reflect that. According to Microsoft’s recent research:

  • 36% of K-12 teachers in the U.S. cite an increase in plagiarism and cheating as their number one AI concern.
  • 23% of educators worry about the privacy and security of student and staff data shared with AI tools.
  • 22% fear students becoming overdependent on AI tools.
  • 21% point to misinformation and students relying on inaccurate AI-generated content as another top concern.

Students share similar anxieties:

  • 35% fear being accused of plagiarism or cheating
  • 33% are worried about becoming too dependent on AI
  • 29% flag misinformation and accuracy issues

Together, these data points underscore a critical tension:

  • Students are enthusiastic users of AI, but many are unprepared or unsupported in using it responsibly. 
  • Educators, meanwhile, are navigating an evolving landscape with limited resources and guidance. 

The gap here is one of responsibility and preparedness. It is less about belief in AI’s potential and more about confidence in whether it’s being used ethically and effectively in the classroom.

AI in customer service: Divided expectations 

AI-powered chatbots have become a near-daily presence, from troubleshooting an app issue to tracking an online order. But while consumers regularly interact with AI in customer service, that doesn’t mean they trust it.

Here’s what recent data reveals:

  • According to a PwC study, 71% of consumers prefer human agents over chatbots for customer service interactions.
  • 64% of U.S. consumers and 59% globally feel companies have lost touch with the human element of customer experience.

These concerns aren’t just about quality; they’re about access. 

  • A Genesys survey found that 72% of consumers worry AI will make it harder to reach a human, with concern highest among Boomers (88%) and dropping significantly among younger generations.
  • Another US-based study found that only 45% of shoppers trust AI-powered recommendations or chatbots to provide accurate product suggestions.  
  • Just 38% of those who’ve used chatbots were satisfied with the support, with a mere 14% saying they were very satisfied.
  • Concerns about data use also loom large, as 43% believe brands aren’t transparent about how customer data is handled.
  • And even when AI is in the mix, most people want it to feel more human: 68% of consumers are comfortable engaging with AI agents that exhibit human-like traits, according to a Zendesk study.

These findings paint a layered picture: people may tolerate AI in service roles, but they want it to be more human-like, especially when empathy, nuance, or complexity is required. There's openness to hybrid models where AI supports, but doesn't replace, human agents.

Autonomous driving and AI in transportation: Still a long road to trust

Self-driving technology has been one of AI’s most visible — and controversial — frontiers. Brands like Tesla, Waymo, Cruise, and Baidu’s Apollo have spent years testing autonomous vehicles, from consumer-ready driver-assist features to fully driverless robotaxis operating in cities like San Francisco, Phoenix, and Beijing.

Globally, interest in autonomous features is growing. S&P Global’s 2025 research finds that around two-thirds of drivers are open to using AI-powered driving assistance on highways, especially for predictable conditions like long-distance cruising. More than half believe AVs will eventually drive more efficiently than human drivers (54%), and nearly half think they will be safer (47%).

But in the United States, the road to trust is bumpier. According to AAA’s 2025 survey:

  • Only 13% of U.S. drivers say they would trust riding in a fully self-driving vehicle — up slightly from 9% last year, but still strikingly low.
  • 6 in 10 drivers remain afraid to ride in one.
  • Interest in fully autonomous driving has actually fallen — from 18% in 2022 to 13% today — as many drivers prioritize enhancing vehicle safety systems over removing the human driver altogether.
  • Although awareness of robotaxis is high (74% know about them), 53% say they would not choose to ride in one.

The gap between technological readiness and public acceptance underscores a core reality: while AI may be capable of taking the wheel, many drivers — especially in the U.S. — aren’t ready to hand it over. Trust will depend not just on technical milestones, but also on proving safety, reliability, and transparency in real-world conditions.

AI in law enforcement and public safety: Powerful but polarizing

Law enforcement agencies are embracing AI for its investigative power — using it to uncover evidence faster, detect crime patterns, identify suspects from surveillance footage, and even flag potential threats before they escalate. These tools can also ease administrative burdens, from managing case files to streamlining dispatch.

But with this expanded reach comes serious ethical and privacy concerns. AI in policing often intersects with sensitive personal data, facial recognition, and predictive policing — areas where public trust is fragile and missteps can erode confidence quickly.

How law enforcement professionals view AI

Here’s some data on how law enforcement officials and the general public see AI being used for public safety.

A U.S. public safety survey reveals strong internal support:

  • 88% of law enforcement officers trust their agencies to use AI responsibly.
  • 90% of first responders support the use of AI by their agencies, marking a 55% increase over the previous year.
  • 65% believe AI improves productivity and efficiency, while 89% say it helps reduce crime.
  • 87% say AI is transforming public safety for the better through better data processing, analytics, and streamlined reporting.

Among investigative officers, AI is viewed as a powerful enabler, according to Cellebrite research:

  • 61% consider AI a valuable tool in forensics and investigations.
  • 79% say it makes investigative work easier and more effective.
  • 64% believe AI can help reduce crime.
  • Yet, 60% warn that regulations and procedures may limit AI implementation, and 51% express concern that legal constraints could stifle adoption.

What does the public say about AI in law enforcement?

But globally, public sentiment towards AI use in policing is mixed. UNICRI’s global survey, spanning six continents and 670 respondents, reveals a nuanced public stance. 

  • 53% believe AI can help police protect them and their communities; 17% disagree 
  • Among those who were suspicious about the use of AI systems in policing (17%), nearly half were women (48.7%).
  • 53% believe safeguards are needed to prevent discrimination.
  • More than half think their country’s current laws and regulations are insufficient to ensure AI is used by law enforcement in ways that respect rights.

Trust hinges on transparency, human oversight, and robust governance, with respondents signaling that AI must be used as a tool, not a replacement, for human judgment.

AI in media: Disinformation deepens the trust crisis

Media is emerging as one of the most scrutinized fronts for AI trust, not because of its absence, but because of its overwhelming presence in shaping public opinion.  From deepfake videos that blur the line between satire and deception to AI-written articles that can spread faster than they can be fact-checked, the information ecosystem is now flooded with content that’s harder than ever to verify. 

In this environment, the risks of AI-generated misinformation aren’t just a fringe concern — they’ve become central to the global debate on trust, democracy, and the future of public discourse.

According to recent Ipsos survey data:

  • 70% say they find it hard to trust online information because they can’t tell if it’s real or AI-generated.
  • 64% are concerned that elections are being manipulated by AI-generated content or bots.
  • Only 47% feel confident in their own ability to identify AI-generated misinformation, highlighting the gap between awareness and capability.
  • In one Google-specific study, only 8.5% of people say they always trust the AI Overviews generated by Google for searches, while 61% sometimes trust them and 21% never trust them at all.

The public sees AI’s role in spreading disinformation as urgent enough to require formal guardrails:

  • 88% believe there should be laws to prevent the spread of AI-generated misinformation.
  • 86% want news and social media companies to strengthen fact-checking processes and ensure AI-generated content is clearly detectable.

This sentiment reflects a unique trust paradox: people see the dangers clearly, they expect institutions to act decisively, but they don’t necessarily trust their own ability to keep up with AI’s speed and sophistication in content creation.

AI in hiring and HR: Efficiency meets trust challenges

AI is now a staple in recruitment. Half of companies use it in hiring, with 88% deploying AI for initial candidate screening, and 1 in 4 firms that use AI for interviews relying on it for the entire process.

HR adoption and trust in AI hit new highs

According to HireVue’s 2025 report:

  • AI adoption among HR professionals jumped from 58% in 2024 to 72% in 2025, signaling full-scale implementation beyond experimentation.
  • HR leaders' confidence in AI systems rose from 37% in 2024 to 51% in 2025.
  • Over half (53%) now view AI-powered recommendations as supportive tools, not replacements, in hiring decisions.

The payoff is tangible. Talent acquisition teams credit AI for clear efficiency and fairness benefits:

  • Talent acquisition teams report improved productivity (63%), automation of manual tasks (55%), and overall efficiency gains (52%).
  • 57% of workers believe AI in hiring can reduce racial and ethnic bias—a 6-point increase from 2024.

Job seekers remain cautious

However, candidates remain uneasy, especially when AI directly influences hiring outcomes:

  • A ServiceNow survey found that over 65% of job seekers are uncomfortable with employers using AI in recruiting or hiring.
  • Yet, the same respondents were much more comfortable when AI was used for supportive tasks, not decision-making.
  • Nearly 90% believe companies must be transparent about their use of AI in hiring.
  • Top concerns include a less personalized experience (61%) and privacy risks (54%).

This widening trust gap means companies will need to blend AI’s efficiency with clear communication, visible fairness measures, and human touchpoints to win over job seekers.

Across industries, the same pattern keeps surfacing: people’s trust in AI often hinges less on the technology itself and more on who’s building, deploying, and governing it. Whether it’s healthcare, education, or customer service, public sentiment is shaped by perceptions of transparency, accountability, and alignment with human values. 

Which raises the next question: How much do people actually trust the companies driving the AI revolution?

Trust in AI companies: Falling faster than tech overall

As trust in AI’s capabilities — and its role across industries — remains uneven, confidence in the companies building these tools is slipping. People may use AI daily, but that doesn’t mean they trust the intentions, ethics, or governance of the organizations developing it. This gap has become a defining fault line between broad enthusiasm for AI’s potential and a more guarded view of those shaping its future.

Edelman data shows that while overall trust in technology companies has held relatively steady, dipping only slightly from 78% in 2019 to 76% in 2025, trust in AI companies has fallen sharply. In 2019, 63% of people globally said they trusted companies developing AI; by 2025, that figure had dropped to just 56%, a modest rebound from 53% in 2024 but well below where it started.

Year Trust in AI companies
2019 63%
2021 56%
2022 57%
2023 53%
2024 53%
2025 56%

Who should build AI? The institutions people trust most (and least)

As skepticism toward AI companies grows, so does the question of who the public actually wants at the helm of AI development: which institutions, whether academic, governmental, corporate, or otherwise, are seen as most capable of building AI in the public’s best interest?

Opinions diverge sharply, not only by institution, but also by whether a country is an advanced or emerging economy.

Globally, universities and research institutions enjoy the highest trust:

  • In advanced economies, 50% express high confidence in them.
  • In emerging economies, that figure rises to 58%.

Healthcare institutions follow closely, with 41% high confidence in advanced economies and 47% in emerging economies.

By contrast, big technology companies face a pronounced trust divide:

  • Only 30% in advanced economies have high confidence in them, compared to 55% in emerging markets.

Commercial organizations and governments rank lower still, with fewer than 40% of respondents in most regions expressing high confidence. Governments score just 26% in advanced economies and 39% in emerging ones, signaling a widespread skepticism about state-led AI governance.

The takeaway? Trust is concentrated in institutions perceived as more mission-driven (universities, healthcare) rather than profit-driven or politically influenced.

Can AI earn trust? What people say it takes

Once the question of who should build AI is settled, the harder challenge is making those systems trustworthy over time. So, what makes people trust AI more? 

More than four in five people (83%) globally say they would be more willing to trust an AI system if organizational assurance measures were in place. The most valued include:

  • Opt-out rights: 86% want the right to opt out of having their data used.
  • Reliability checks: 84% want AI’s accuracy and reliability monitored.
  • Responsible use training: 84% want employees using AI to be trained in safe and ethical practices.
  • Human control: 84% want the ability for humans to intervene, override, or challenge AI decisions.
  • Strong governance: 84% want laws, regulations, or policies to govern responsible AI use.
  • International standards: 83% want AI to adhere to globally recognized standards.
  • Clear accountability: 82% want it to be clear who is responsible when something goes wrong.
  • Independent verification: 74% value assurance from an independent third party.

The takeaway: people want AI to follow the same trust playbook as high-stakes industries like aviation or finance — where safety, transparency, and accountability aren’t optional, they’re the baseline.

G2 take: How organizations can earn (and keep) AI trust

On G2, AI is no longer a side feature — it’s becoming an operational backbone across industries. From healthcare and education to finance, manufacturing, retail, and government technology, AI-enabled solutions now appear in thousands of product categories. That includes everything from CRM systems and HR platforms to cybersecurity suites, data analytics tools, and marketing automation software.

But whether you’re a hospital deploying diagnostic AI, a bank automating fraud detection, or a public agency introducing AI-driven citizen services, the trust challenge looks remarkably similar. Reviews and buyer insights on G2 show that trust isn’t built by AI capability alone — it’s built by how organizations design, communicate, and govern AI use. 

For businesses and institutions, three patterns stand out:

  • Explainability over mystique: Users across sectors are more confident in AI systems when they understand how outputs are generated and what data is involved.
  • Human-in-the-loop: Across industries, people prefer AI that assists rather than replaces human judgment, particularly in high-impact contexts like healthcare, hiring, and legal processes.
  • Accountability structures: Vendors and organizations that clearly state who is responsible when AI makes a mistake, and how issues will be resolved, score higher on trust and adoption.

For leaders rolling out AI, whether in software, public services, or physical products, the takeaway is clear: trust is now a competitive advantage and a public license to operate. The most successful adopters combine AI innovation with visible safeguards, user agency, and verifiable outcomes.

So, do we trust AI? It depends on where, who, and how

If the last decade was about proving AI’s potential, the next will be about proving its integrity.  That battle won’t be fought in glossy launch events — it will be decided in the micro-moments: a fraud alert that’s both accurate and respectful of privacy, a chatbot that knows when to hand off to a human, an algorithm that explains itself without being asked.

These moments add up to something bigger: an enduring license to operate in an AI-powered economy. Regardless of sector, the leaders of the next decade will be those who anticipate doubt, give users genuine agency, and make AI’s inner workings visible and verifiable.

In the end, the winners will not just be the fastest model builders; they will be the ones people choose to trust again and again.

Explore how the most innovative AI tools are reviewed and rated by real users on G2’s Generative AI category.

