by Kamaljeet Kalsi / May 8, 2025
In part 1 of this series, we examined how fragmented AI regulations and the absence of universal governance frameworks are creating a trust gap — and a dilemma — for enterprises. Four burning questions emerged, leaving us on a cliffhanger.
Q: What were the major concerns raised at the Paris AI Summit regarding AI governance?
A: The summit highlighted the lack of global consensus on AI governance, posing significant challenges for enterprises trying to balance innovation and compliance in a fragmented regulatory landscape.
Q: Why does the absence of universal AI policies increase reputational risks for businesses?
A: Without universal policies, organizations must depend more heavily on strong cybersecurity and GRC practices to protect their reputations and manage risks associated with the handling of sensitive data and IP.
Q: What have we learned about the performance of GRC, AI governance, and security compliance tools?
A: These tools generally earn high user satisfaction, though users face challenges with setup complexity and varying timelines for achieving ROI. But there is more to explore, including the answer to the burning question: “Is governance becoming the silent killer of AI innovation?”
If Part 1 showed us the problem, Part 2 is all about the playbook.
GRC leaders can expect a data-backed benchmark for smarter investment decisions: our analysis reveals which tools deliver real value and how satisfaction scores differ across regions, company sizes, and leadership roles.
You’ll also get an inside look at how leading vendors like Drata, FloQast, AuditBoard, and more are embedding responsible AI into product development, shaping internal policies, and future-proofing their strategies.
As companies brave the complexities of AI governance, understanding the perspectives of key leaders like CTOs, CISOs, and AI governance executives becomes essential.
Why? Because these stakeholders are pivotal in shaping an organization’s risk posture. Let's explore what these leaders think of current tools and zoom in on their GRC priorities.
CTOs, CISOs, and AI governance executives each bring distinct perspectives. Their satisfaction scores remain high overall, but priorities and pain points differ based on their responsibilities and involvement.
CTOs rated security compliance tools 4.72/5 in terms of user satisfaction.
They value time-saving automation, progress tracking with end-to-end visibility, and responsive support, but are frustrated by tool fragmentation and limited non-cyber risk features.
Security compliance tools helped CTOs solve problems regarding ISO 27001/DORA/GDPR compliance, vendor risk, and audit tracking.
In addition to security compliance tools, we also found data on how CTOs feel about GRC tools.
CTOs rated GRC tools 4.07/5 in terms of user satisfaction.
CTOs value the link between GRC and audit integrations, automation in merchant onboarding, and an intuitive user experience. Frustrations arise around complex deployment and time-consuming configuration. GRC tools helped CTOs address risks related to rapid merchant growth, compliance, and audit readiness.
CISOs rated security compliance tools 4.72/5 in terms of user satisfaction.
CISOs appreciate audit readiness, framework mapping integrations and automation but dislike outdated training features and complex policy navigation. Security compliance software helped CISOs solve problems related to framework management, task prioritization, and continuous risk coverage.
Interestingly, CISOs aren’t directly involved with GRC tools; they delegate down the chain. Their teams, such as security engineers, risk managers, and GRC specialists, are often the ones evaluating and interacting with these tools daily, and are more likely to submit feedback.
G2 data revealed that while CISOs and CTOs aren’t heavily involved with AI governance tooling (considering it is a new “child” category), AI governance executives like network and security engineers and heads of compliance seem to be active reviewers.
AI governance executives rated security compliance tools 4.5/5 in terms of user satisfaction.
They praised AI governance tools for automated threat detection, AI-powered data handling, and improved customer response, while pain points included implementation hurdles, system performance lag, and maintenance burden. Risk remediation, data strategy, and enhancing security teams’ performance are the key problems these tools solve for such users.
Building on insights from satisfaction data, let's delve into how companies are creatively bridging the compliance and AI governance gap.
In part 1, we mentioned that companies are DIY-ing their way through compliance in a world without universal AI regulations. Here’s a look at how GRC software leaders are advancing innovation while maintaining their risk posture.
Self-regulation can be a double-edged sword. While its flexibility allows businesses to move quickly and innovate without waiting for policy mandates, it can lead to a lack of accountability and increased risk exposure.
Privacy-first platform Private AI’s Patricia Thaine remarks, "Companies now rely on internally defined best practices, leading to AI deployment inefficiencies and inconsistencies."
In the absence of clear industry guidelines, companies are compelled to craft their own AI governance frameworks, guided by a responsible AI mindset.
Alon Yamin, Co-founder and Chief Executive Officer of Copyleaks, highlights that without standardized guidelines, businesses may delay advancements. But those implementing responsible AI can set best practices, shape policies, and build trust in AI technologies.
"Companies that embed responsible AI principles into their core business strategy will be better positioned to navigate future regulations and maintain a competitive edge," comments Matt Blumberg, Chief Executive Officer at Acrolinx.
Businesses are using ISO/IEC 42001:2023, the artificial intelligence management system (AIMS) standard, along with ISO/IEC 23894 guidance, as guardrails to tackle the AI governance gap.
"Trusted organizations are already providing guidance to place guardrails around the acceptable use of AI. ISO/IEC 42001:2023 is a key example," adds Tara Darbyshire, Co-founder and EVP at SmartSuite.
Some view the regulatory gap as a chance to gain a competitive edge by understanding competitors' reluctance and making informed AI investments.
Mike Whitmire noted that FloQast’s focus on where AI regulation is headed, toward greater transparency and accountability, led the company to pursue ISO 42001 certification for responsible AI development.
The EU's AI Continent Action Plan, a 200 billion-euro initiative, aims to place Europe at the forefront of AI by boosting infrastructure and ethical standards. This move signals how governance frameworks can drive innovation, making it imperative for GRC and AI leaders to watch how the EU balances regulation and progress, offering a fresh template for global strategies.
Bridging global discrepancies in AI governance is no small feat. Organizations face a tangled web of regulations that often conflict across regions, making compliance a moving target.
So, how are VPs of security, CISOs, and founders bridging the AI governance gap and fostering innovation while ensuring compliance? They gave us a look under the hood.
Drata embraces the core tenets of security, fairness, safety, reliability, and privacy to guide both the company’s organizational values and its AI development practices. The team focuses on empowering users ethically and adopting responsible, technology-agnostic principles.
“Amid the rapid adoption of AI across all industries, we take both a calculated and intentional approach to innovating on AI, centered on protecting sensitive user data, helping ensure our tools provide clear explanations around AI reasoning and guidance, and subjecting all AI models to rigorous testing,” says Matt Hillary, Vice President of Security & CISO at Drata.
Private AI believes privacy-first design is a fast track to mitigate risk and accelerate innovation.
“We ensure compliance without slowing innovation by de-identifying data before AI processing and re-identifying it within a secure environment. This lets developers focus on building while meeting regulatory expectations and internal safety requirements,” explains Patricia Thaine, Chief Executive Officer and Co-founder of Private AI.
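The de-identify, process, re-identify pattern Thaine describes can be illustrated with a minimal sketch. To be clear, this is not Private AI's actual API; the regex rule and placeholder scheme below are illustrative assumptions showing only the general shape of the workflow.

```python
import re

# Illustrative email pattern; a real system would cover many entity types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def deidentify(text):
    """Replace emails with numbered placeholders; return safe text + mapping."""
    mapping = {}
    def _sub(match):
        key = f"[EMAIL_{len(mapping) + 1}]"
        mapping[key] = match.group(0)
        return key
    return EMAIL.sub(_sub, text), mapping

def reidentify(text, mapping):
    """Restore original values, done only inside the secure environment."""
    for key, value in mapping.items():
        text = text.replace(key, value)
    return text

raw = "Contact alice@example.com about the audit."
safe, mapping = deidentify(raw)   # only the safe text leaves the boundary
assert "alice@example.com" not in safe
ai_output = safe.upper()          # stand-in for external AI processing
restored = reidentify(ai_output, mapping)
```

The key design point is that the mapping never accompanies the text to the external model, so developers can build against the AI service while the sensitive values stay inside the secure environment.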
AuditBoard takes a thoughtful approach to crafting acceptable use policies that greenlight innovation without compromising compliance.
Richard Marcus, CISO at AuditBoard, comments, “A well-crafted AI key control policy will ensure AI adoption is compliant with regulations and policies and that only properly authorized data is ever exposed to the AI solutions. It should also ensure only authorized personnel have access to datasets, models, and the AI tools themselves.”
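The kind of key control policy Marcus describes can be thought of as a deny-by-default authorization gate. The sketch below is a hypothetical simplification, not AuditBoard's actual policy engine; the roles and data classifications are invented for illustration.

```python
# Hypothetical AI key control policy: only explicitly authorized
# (role, data classification) pairs may expose data to AI tools.
POLICY = {
    ("analyst", "public"): True,
    ("analyst", "restricted"): False,
    ("ml_engineer", "public"): True,
    ("ml_engineer", "restricted"): True,  # authorized personnel only
}

def may_use_ai(role, data_class):
    """Deny by default; unknown roles or classifications never pass."""
    return POLICY.get((role, data_class), False)

assert may_use_ai("analyst", "public") is True
assert may_use_ai("analyst", "restricted") is False
assert may_use_ai("intern", "public") is False  # unknown role -> deny
```

The deny-by-default lookup mirrors the principle in the quote: exposure to AI solutions happens only when both the person and the dataset are explicitly authorized, and anything unlisted is blocked.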
AuditBoard emphasizes principles that reduce the risk of data leakage and help detect unusual activity through strong access controls and monitoring.
Tara Darbyshire, SmartSuite's Co-founder and EVP, shared her outline of effective AI governance: one that enables innovation while aligning with international standards.
FloQast achieves the compliance-innovation balance by embedding governance into the AI development lifecycle from the start.
“Rather than waiting for AI regulations to take shape, we align our AI governance with globally recognized best practices, ensuring our solutions meet the highest standards for transparency, ethics, and security.” — Mike Whitmire, CEO and Co-Founder of FloQast.
For FloQast, effective AI governance isn’t siloed; it’s cross-collaborative by design. “Compliance isn’t just a legal or IT concern. It’s a priority that requires alignment across R&D, finance, legal, and executive leadership.”
FloQast has also outlined strategies for operationalizing governance across the organization.
Mike also emphasizes the importance of injecting compliance into company culture.
By combining structure with adaptability, FloQast is building a GRC strategy that protects its customers and brand while empowering innovation.
Future-focused strategies are crucial to organizational success to withstand global changes. While there’s no crystal ball to show us the future of AI and GRC, examining expert insights and predictions can help us better prepare.
We asked security leaders, analysts, and founders how they see AI governance evolving in the next five years and what ripple effects it might have on innovation, regulation, and trust.
Lauren Worth questioned the practical impact of new regulations and pointed out that if existing penalties for data breaches are any indication, AI-related enforcement may also fall short of prompting meaningful change.
Drata’s Matt Hillary predicts that a universal AI policy is unlikely, given regional regulatory differences, but foresees the rise of reasonable regulations that will provide innovation with risk mitigation guardrails.
He also emphasizes how trust will be a core tenet in modern GRC efforts. As new risks emerge and frameworks evolve at local, national, and global levels, organizations will face greater complexity in continuously demonstrating trustworthiness to users and regulators.
AuditBoard’s Richard Marcus underscores the importance of well-defined policies that greenlight safe innovation. Frameworks like the EU AI Act, the NIST AI Risk Management Framework, and ISO 42001 will inform compliant product development.
Private AI’s Patricia Thaine predicts that balancing risk and innovation will become a reality. As regulations and customer expectations mature, companies using GRC tools will benefit from simplified compliance and improved data access, accelerating responsible innovation.
Cutting through the ambiguity of a fragmented governance landscape, we analyzed regional sentiment data to identify where innovation ecosystems are forming, and why certain regions might become early movers in responsible AI deployment.
For this, we focused on the security compliance software category as it offers a valuable lens into where governance innovation may accelerate. High satisfaction scores and adoption patterns in key regions signal broader readiness for scalable, cross-functional GRC and AI governance practices.
With a satisfaction score of 4.78, APAC tops the charts. High adoption of cloud compliance automation and reduced manual workflows make the region a standout. This reflects strong vendor support and well-tailored compliance solutions.
Latin American users report strong satisfaction (4.68), driven by localized compliance support and platforms compatible with agile processes.
North America’s satisfaction score reveals strong confidence in mature software offerings that meet the demands of stringent regulations, especially in industries like finance, healthcare, and government. These tools are clearly built for scale, but lagging support responsiveness hints at post-sale pain points. In high-stakes AI governance environments, slow issue resolution and delayed escalations could become a liability unless vendors double down on customer success.
With an improved satisfaction score of 4.65, EMEA shows growing confidence in reliable compliance software, particularly among large enterprises investing in scalable governance tools. However, smaller organizations still face usability barriers, often lacking the internal security teams needed to maximize platform value. To unlock broader adoption of AI governance, vendors must address this accessibility gap across mid-market and leaner teams.
As global demand for governance technology grows, regions like APAC and Latin America could become early hubs for GRC and AI governance innovation. These regions highlight where momentum, satisfaction, and agile feedback loops could foster next-gen compliance and AI governance maturity.
As new regulations emerge and customer expectations shift, governance will not be optional but foundational to trustworthy, scalable AI innovation.
And as governance tooling evolves, cross-functional utility and integrated frameworks will be key to converting friction into forward motion.
Leaders who embrace compliance as a strategic function and not just a checkbox will be well-positioned to adapt, attract trust, and drive responsible growth.
Because in the race for AI advantage, as it turns out, governance isn’t the silent killer — it’s the unlikely enabler.
Edited by Supanna Das
Kamaljeet Kalsi is Sr. Editorial Content Specialist at G2. She brings 9 years of content creation, publishing, and marketing expertise to G2’s TechSignals and Industry Insights columns. She loves a good conversation around digital marketing, leadership, strategy, analytics, humanity, and animals. As an avid tea drinker, she believes ‘Chai-tea-latte’ is not an actual beverage and advocates for the same. When she is not busy creating content, you will find her contemplating life and listening to John Mayer.