by Sidharth Yadav / May 23, 2025
At the Paris AI Action Summit in February, cracks in AI governance surfaced for the first time at a global forum.
The US and the UK refused to sign the declaration on “inclusive AI”, citing “excessive regulation” and the declaration’s failure to address “harder questions around national security”.
This was the first time heads of state had met to seek consensus on AI governance. The lack of agreement means common ground remains elusive as geopolitics shapes the conversation.
The world is divided over AI governance. Most nations have no dedicated AI laws; in the US, for instance, no federal legislation or regulation governs the development of AI. Even where national rules exist, individual states write their own, and industries and sectors are drafting their own versions on top.
The pace of AI development today outpaces the talk of governance. So how are the companies building and using AI products navigating it? They are writing their own norms to guide AI use while protecting customer data, mitigating bias, and fostering innovation. What does this look like in practice? I spoke with leaders at Salesforce, Zendesk, Acrolinx, Sprinto, and the G2 Market Research team to find out.
These companies vary in size and offer solutions spanning sales and CRM software, support suites, content analytics, and compliance automation. I asked them how they keep their policies responsive to evolving regulations.
Below is the best of what the leaders of the four companies shared with me. These responses represent their varying approaches, values, and governance priorities.
Leandro Perez, Salesforce’s Chief Marketing Officer for Australia and New Zealand, says, “While AI regulations evolve, the fundamentals remain the same. As with any other new technology, companies need to understand their intended use case, potential risks, and the wider context when deploying AI agents.” He stresses that companies must mitigate harm and comply with sector-specific regulations.
He also adds that companies must implement strong guardrails, including sourcing technology from trusted providers that meet safety and certification standards.
“Broader consumer protection principles are core to ensuring AI is fair and unbiased”
Leandro Perez
CMO, Australia and New Zealand, Salesforce
“Over the last 18 years, Zendesk has cultivated customer trust using a principles-based approach,” says Shana Simmons, Chief Legal Officer at Zendesk.
She points out that technology built on tenets like customer control, transparency, and privacy can keep up with regulation.
Another key to AI governance is focusing on the use case. “In a vacuum, AI risk might feel overwhelming, but governance tailored to a specific business will be efficient and high-impact,” she reasons.
As an example, she says Zendesk thinks deeply about finding “the world’s most elegant way” to inform a user that they are interacting with a customer support bot rather than a human. “We have built ethical design standards targeted to that very topic.”
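Zendesk hasn’t published the internals of those design standards, but the disclosure pattern Simmons describes is easy to picture. Here is a minimal, hypothetical Python sketch (the class and the wording are my inventions, not Zendesk’s code) of a support bot that never replies before identifying itself as automated:

```python
DISCLOSURE = ("You're chatting with an AI assistant. "
              "Type 'agent' at any time to reach a human.")

class SupportBotSession:
    """Hypothetical support-bot session that discloses it is automated
    before its first reply. Illustrative only, not Zendesk's implementation."""

    def __init__(self):
        self.transcript = []
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        self.transcript.append(("user", user_message))
        # The design standard enforced in code: never answer before disclosing.
        if not self.disclosed:
            self.transcript.append(("bot", DISCLOSURE))
            self.disclosed = True
        answer = self._generate_answer(user_message)  # a model call would go here
        self.transcript.append(("bot", answer))
        return answer

    def _generate_answer(self, user_message: str) -> str:
        return f"(placeholder answer to {user_message!r})"

session = SupportBotSession()
session.reply("Where is my order?")
for speaker, text in session.transcript:
    print(f"{speaker}: {text}")
```

Baking the disclosure into the session object, rather than leaving it to each conversation designer, is what turns a principle into a design standard.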
According to a statement shared by Sprinto, it has set up a cross-functional governance committee comprising legal, security, and product teams to oversee AI policy updates. It has also defined ownership of AI risk management across departments.
The company also uses security control frameworks to assess and address AI risks across multiple regulatory regimes, helping Sprinto align its AI governance with industry standards.
To close governance gaps, Sprinto uses its own compliance automation platform to enforce controls and track adherence to policies in real time.
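Sprinto didn’t detail how its platform implements this, but continuous control monitoring generally reduces to evaluating each control against live evidence and flagging drift. A minimal sketch, with the control names and checks invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Control:
    """A compliance control paired with an automated check.
    The controls below are invented examples, not Sprinto's."""
    control_id: str
    description: str
    check: Callable[[dict], bool]  # returns True if the control passes

def run_controls(controls: list[Control], evidence: dict) -> list[str]:
    """Evaluate every control against current evidence; return failures."""
    return [f"{c.control_id}: {c.description}"
            for c in controls if not c.check(evidence)]

controls = [
    Control("AI-01", "AI vendors reviewed within the last 12 months",
            lambda e: e["months_since_vendor_review"] <= 12),
    Control("AI-02", "Every production model has a named risk owner",
            lambda e: e["models_without_owner"] == 0),
]

# In a real platform, evidence would flow in from integrations, not a dict.
evidence = {"months_since_vendor_review": 14, "models_without_owner": 0}

for failure in run_controls(controls, evidence):
    print("FAILING:", failure)
```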
Matt Blumberg, Chief Executive Officer at Acrolinx, claims that staying ahead of evolving regulations starts with continuous learning.
“We prioritize ongoing training across our teams to stay sharp on emerging risks, shifting regulations, and the fast-paced changes in the AI landscape,” he adds.
He cites Acrolinx data showing that misinformation is the primary AI-related risk enterprises worry about. “But compliance is more often overlooked. There’s no doubt that overlooking compliance leads to serious consequences, from legal and financial penalties to reputational damage. Staying proactive is key,” he stresses.
In companies’ responses, I saw a clear pattern of self-regulation. They are creating de facto standards before regulators do. Here’s how:
Companies show remarkable alignment around principles-based frameworks, cross-functional governance bodies, and continuous education. This suggests a deliberate, though uncoordinated, effort to set industry norms before formal regulations take shape. It also positions these companies as influential voices in any eventual consensus on norms.
At the same time, by showing they can effectively self-regulate, the companies are making an implicit case against strong external regulation. Their message to regulators: “We’ve got this under control.”
None of the executives admit to this, but I notice a pivot. Companies are quietly moving away from a compliance-first approach. They’re realizing regulations can’t keep pace with AI innovation. And the investment in flexible, principles-based frameworks suggests companies anticipate a prolonged period of regulatory uncertainty.
The companies’ emphasis on principles and fundamentals points to a shift. They are building governance around enduring values such as customer control, transparency, and privacy. This approach recognizes that while regulations evolve, it’s wise to hinge governance on stable ethical principles.
Companies are using risk assessments to decide where governance attention goes. Zendesk, for instance, mentions tailoring governance to specific business contexts. The implication: resources are finite, and not every AI application deserves the same scrutiny.
In practice, this means focusing protection on high-risk, customer-facing AI while taking a lighter touch with internal, low-risk applications.
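To make that tiering concrete, here is a hypothetical triage rule. The categories, inputs, and review requirements are my own assumptions, not any company’s published policy:

```python
def governance_tier(use_case: dict) -> str:
    """Assign a governance tier to an AI use case.
    Thresholds and tiers are illustrative assumptions."""
    if use_case["customer_facing"] and use_case["handles_personal_data"]:
        return "high: legal and security review, plus ongoing monitoring"
    if use_case["customer_facing"]:
        return "medium: design review and disclosure standards"
    return "low: self-service checklist for internal tools"

print(governance_tier({"customer_facing": True, "handles_personal_data": True}))
print(governance_tier({"customer_facing": False, "handles_personal_data": False}))
```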
I notice an absence in the talk around cross-functional governance: how companies are tackling the expertise gap around AI ethics. It’s aspirational to talk about bringing different teams together, yet they may lack knowledge about other functions’ AI applications or a general understanding of AI ethics. For instance, legal professionals may lack deep AI technical knowledge, while engineers may lack regulatory expertise.
Companies are positioning themselves as bulwarks of AI governance to inspire confidence in customers, investors, and employees.
When Acrolinx cites data showing misinformation risks or when Zendesk says its legal team uses Zendesk’s AI products daily, they attempt to demonstrate their AI capabilities — not just on the technical front but also on the governance front. They want to be seen as trusted experts and advisors. This helps them gain a competitive edge and create barriers for smaller companies that may lack resources for structured governance programs.
Brandon Summers-Miller, Senior Research Analyst at G2, says he’s seen an uptick in new AI-integrated GRC products added to G2’s marketplace. Major vendors in the security compliance space have also been quick to adopt generative AI capabilities.
“Security compliance products are increasingly integrating with AI capabilities to aid InfoSec teams with gathering, classifying, and organizing documentation to improve compliance.”
Brandon Summers-Miller
Senior Research Analyst at G2
“Such processes are traditionally cumbersome and time-consuming; AI’s ability to make sense of the documentation and its classification is reducing headaches for security professionals,” he says.
Users like AI platforms’ automation capabilities and chatbot features for getting answers on audit-mandated processes. However, the platforms have yet to mature and need more innovation. Users flag the intrusive nature of AI features in product UX, their inability to handle larger, more involved tasks, and their lack of contextual understanding.
But governance isn’t just about policies and frameworks. Digging deeper into these conversations, I noticed something fascinating beyond the checklists: companies are also using governance to empower their people.
As a strategic tool, governance builds confidence among employees, redistributes power, and develops skills. Here are a few patterns that emerged from the leaders’ responses:
Companies are using AI governance not just to manage risk but to empower employees. I noticed this in Acrolinx’s case, where governance frameworks are framed as creating a safe environment for people to confidently embrace AI, which also eases employee anxiety about the technology.
Today, companies are beginning to realize that without guardrails, employees may resist using AI out of fear of job displacement or making ethical mistakes. Governance frameworks give them confidence.
I notice a revolutionary streak in Salesforce’s claim about enabling “users to author, manage, and enforce access and purpose policies with a few clicks.” Traditionally, governance has been centralized in legal departments; now companies are giving technology users the agency to define the rules relevant to their roles.
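Salesforce didn’t share the mechanics behind that claim, but purpose-based access control is a well-established pattern: a request declares why data is needed, and a policy authored by the data owner decides. A minimal sketch, with field names and purposes invented for illustration:

```python
# Hypothetical purpose-based access policy authored by a data owner.
# Field names and purposes are invented; this is not Salesforce's data model.
POLICY = {
    "email": {"support", "billing"},
    "purchase_history": {"support", "analytics"},
}

def can_access(field: str, declared_purpose: str) -> bool:
    """Allow access only if the declared purpose is permitted for the field."""
    return declared_purpose in POLICY.get(field, set())

assert can_access("email", "support")
assert not can_access("email", "marketing")  # purpose not granted
assert not can_access("ssn", "support")      # unlisted fields default to deny
```

The point of the pattern is the default: anything not explicitly granted is denied, and the grant lives with the person closest to the data rather than a central team.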
From Salesforce’s Trailhead modules to Sprinto’s training around ethical AI use, companies are building employee capabilities. They view AI governance expertise not just as a compliance necessity but as a way to build intellectual capital among employees to gain a competitive edge.
In my conversations with company leaders, I wanted to understand the components of their AI strategies and how they help employees. Here are the top responses from my interaction with them:
At Salesforce, the Office of Ethical and Humane Use governs AI strategy. It provides guidelines, training, and oversight to align AI applications with company values.
In addition, the company has created ethical frameworks to govern AI use.
To build employee capability, Leandro says the company empowers them through education and certifications, including dedicated Trailhead modules on AI ethics. Plus, cross-functional oversight committees foster collaborative innovation within ethical boundaries.
Shana tells me that the best AI governance is education. “In our experience — and based on our review of global regulation — if thoughtful people are building, implementing, and overseeing AI, the technology can be used for great benefit with very limited risk,” she explains.
The company’s governance structure includes executive oversight, security and legal reviews, and technical controls. “But at its heart, this is about knowledge,” she says. “For example, my own team in legal uses Zendesk’s AI products every day. Learning the technology equips us exceptionally well to anticipate and mitigate AI risks for our customers.”
Apart from implementing risk-based AI controls and accountability, Sprinto engages special interest groups, industry fora, and regulatory bodies. “Our workflows incorporate these insights to maintain compliance and alignment with industry standards,” says the statement.
The company also enforces risk management frameworks aligned with industry standards, including ISO 27005 and the NIST AI RMF, to identify, assess, and tackle AI risks in advance.
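Both ISO 27005 and the NIST AI RMF describe a cycle of identifying, analyzing, and prioritizing risks before treating them. A toy risk register can show that shape; the entries and the five-point scoring below are invented, not drawn from Sprinto or from either standard:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a toy AI risk register. Entries and scoring
    are illustrative only."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    treatment: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("Training data exposes customer PII", 2, 5, "mitigate: redact inputs"),
    AIRisk("Model output is inaccurate", 4, 3, "mitigate: human review"),
    AIRisk("Vendor deprecates the underlying model", 3, 2, "accept: monitor roadmap"),
]

# Treat the highest-scoring risks first, as both frameworks suggest.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name} -> {risk.treatment}")
```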
To empower employees, the company also holds training on ethical AI use and on its governance policies and procedures.
Matt says the company’s governance framework is built on clear guidelines that reflect not just regulatory and ethical standards but also the company’s values.
“We prioritize transparency and accountability to maintain trust with our people, while strict data policies safeguard the quality, security, and fairness of the data feeding our AI systems,” he adds.
He explains that as the company aims to create a safe and structured environment for AI use, it removes the risk and uncertainty that comes with new technologies. “This gives our people the confidence to embrace AI in their workflows, knowing it’s being used in a responsible, secure way that supports their success.”
In the next three years, I expect to see a consolidation of these diverse governance practices. These self-regulation patterns aren’t just stopgap measures; they will influence formal regulations. Companies with proactive governance today won’t just be compliant; they’ll help write the rules of the game.
That said, I anticipate that current AI governance efforts by larger companies will open a governance chasm between them and smaller ones. Larger companies are building principles-based structures on top of compliance, while smaller companies must first work through a checklist: ensuring adherence, meeting international quality standards, and putting access controls in place.
I also expect AI governance capability to become a common component of leadership development. Companies will favor managers who show a working understanding of AI ethics, just as they value an understanding of data privacy and financial controls. In the coming years, AI governance certifications will become a baseline requirement, similar to how SOC 2 evolved into a standard for data security.
Time is running out for companies that are still only thinking about a governance framework. They can start with these steps:
2. Make governance tangible for your teams and devolve it.
3. Automate where you can. Manual processes won’t be enough as AI applications multiply across teams and functions. Look for tools that help you comply with existing policies and author your own, all while freeing up your people’s time. A sketch of what that automation can look like follows this list.
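As a starting point for that automation, even a scheduled script that diffs observed AI usage against declared policy beats a quarterly spreadsheet review. A deliberately simple sketch, with the tool names and data sources assumed for illustration:

```python
# Hypothetical nightly job: compare AI tools seen in use against an approved list.
# In practice, the observed inventory would come from SSO logs or expense data.
APPROVED_AI_TOOLS = {"vendor-a-assistant", "vendor-b-copilot"}

def audit_ai_usage(observed_tools: set[str]) -> set[str]:
    """Return AI tools in use that no policy currently covers."""
    return observed_tools - APPROVED_AI_TOOLS

unapproved = audit_ai_usage({"vendor-a-assistant", "shadow-chatbot"})
if unapproved:
    # A production job would open a ticket rather than print.
    print("Unapproved AI tools detected:", ", ".join(sorted(unapproved)))
```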
The right moment to start is not when regulations solidify — it’s right now, when you can set your own rules and have the power to shape what those regulations will become.
Sidharth Yadav is a senior editorial content specialist at G2, where he covers marketing technology and interviews industry leaders. Drawing from his experience as a journalist reporting on conflicts and the environment, he attempts to simplify complex topics and tell compelling stories. Outside work, he enjoys reading literature, particularly Russian fiction, and is passionate about fitness and long-distance running. He also likes to doodle and write about employee experience.