Part 3: How to Choose an AI Governance Model That Works for Your Organization

Emma Pirchalski, AI Strategist

If your organization is investing in AI, you’re likely balancing multiple priorities.

  • You want to move quickly, but responsibly.
  • You want teams to experiment, but with the right oversight and guidance.
  • You want value from AI, but without introducing unnecessary risk.

These tradeoffs are often where governance enters the conversation. And while governance is sometimes seen as a constraint, as we explored in Part 1, it’s actually a strategic imperative for doing AI well. In Part 2, we explored how to design governance that works in practice, building it into real decisions, workflows, and team structures.

But a challenge remains: most organizations aren’t approaching AI as a tool for today—they’re laying the groundwork for a future where AI is central to how they operate. And that shift raises an important question for governance: how do you design AI governance for an organization whose use of AI will look very different in the future than it does today? Although achieving this is quite complex, the answer is simple: your governance model needs to connect to two things—where your organization wants to go with AI, and where it is today.

Start with what you want to achieve with AI

AI governance isn’t one-size-fits-all—because AI adoption isn’t, either. Organizations are approaching AI with very different goals, timelines, and risk tolerances. Some are focused on broad adoption of tools like ChatGPT or Copilot, others on proving value in a few targeted domains, and still others on fundamentally transforming their businesses with AI.

We’ve worked with companies across this entire spectrum, and while every journey is unique, we’ve seen consistent patterns emerge in how AI adoption takes shape. These patterns are categorized as ‘AI adoption archetypes’ in the table below. Each archetype reflects a different way organizations are using AI, the goals they’re pursuing, and the governance required to support them.

Unlike most approaches to governance—which often begin by asking “What are the risks we’re trying to avoid?”—these archetypes offer a different entry point: they help clarify what governance needs to enable. In other words, when organizations align governance with what they’re trying to achieve with AI, they’re better positioned to build governance that actually helps them get there.


Aligning Governance to AI Adoption Archetypes

Self-Service Accelerator

Primary AI Goal: Starting with off-the-shelf GenAI tools to unlock productivity and encourage safe use

Governance Priorities:
  • AI principles
  • Usage guidelines
  • Approved tools
  • AI 101 training
  • Knowledge sharing channels

What This Enables:
  • AI literacy
  • Innovation with guardrails

Domain Experimenter

Primary AI Goal: Exploring high-potential use cases in targeted domains to test value and build momentum

Governance Priorities:
  • Lightweight risk assessment
  • Prioritization framework
  • Evaluation criteria

What This Enables:
  • Demonstrated ROI
  • Playbooks for future use cases

Portfolio Builder

Primary AI Goal: Expanding from successful pilots to coordinated, repeatable delivery

Governance Priorities:
  • Development standards
  • Standardized documentation
  • Review checkpoints

What This Enables:
  • Production-grade AI
  • Reusable workflows
  • Embedded oversight

Federated Innovator

Primary AI Goal: Empowering teams to develop AI solutions independently—with shared guardrails

Governance Priorities:
  • AI inventory
  • Audit controls
  • RACI models
  • Peer review processes

What This Enables:
  • Local innovation
  • Transparent reporting
  • Role clarity

Enterprise Transformer

Primary AI Goal: Embedding AI into core systems, decisions, and operations

Governance Priorities:
  • Executive sponsorship
  • Role-specific training
  • Lifecycle and portfolio governance

What This Enables:
  • AI embedded at scale
  • Cross-functional coordination
  • Long-term adaptability

These archetypes don’t exist in isolation. In fact, more often than not, organizations exhibit dynamics of multiple archetypes at once—one team might be experimenting with targeted use cases while another is beginning to scale what’s already working. This variation makes it even more critical to offer clarity on what governance is needed, and when. Development standards won’t matter to teams that are just trying to get the most out of a productivity tool, and an AI policy alone won’t help a developer trying to assess deployment risk or implement red-teaming procedures. Different teams have varying levels of expertise and comfort with AI, and governance has to support all of it.

Additionally, the ‘What This Enables’ category reflects outcomes that are most likely to be positive when aligned with the organization’s specific goals; what signals progress in one context may indicate risk or misalignment in another. The takeaway here is that governance does not need to be comprehensive from day one; it just needs to be intentionally aligned with where the organization is headed.

Assess where you are today

That alignment requires clarity on where the organization is today. A challenge we often hear from leaders is that they’re unsure whether they’re “ready” for AI governance, or what the right approach should look like. Even with clear goals in mind, it’s not always obvious where to begin. The landscape is evolving quickly, and many organizations feel caught between a sense of urgency to move forward and uncertainty around how to manage the risks.

But getting started doesn’t require building something entirely new. The most effective governance models are often grounded in what already exists. When we help organizations assess their starting point, we typically focus on three areas:

1. AI Capability & Readiness

This reflects how well-equipped the organization is, across people, processes, and infrastructure, to adopt and manage AI effectively.

  • Do teams have an understanding of how AI operates, and how to effectively and responsibly use it within their roles?
  • Do legal, risk, and product teams have shared frameworks—or are they working from different assumptions?
  • Are patterns of responsible use already emerging—or are teams unsure what’s expected?

What this means for governance:
When readiness is low, governance needs to provide more scaffolding—clearer escalation paths, advisory support, training, and lightweight resources that help teams navigate decisions confidently. Policies and expectations should be easy to find, easy to understand, and easy to apply to real-world scenarios. When readiness is higher, governance can focus more on enabling innovation—supporting teams in identifying risk, navigating tradeoffs, and scaling practices that already work.

2. Risk Profile & Tolerance

This reflects both the level of exposure associated with how AI is being used, and the organization’s posture toward that risk.

  • Are AI systems influencing regulated domains, critical decisions, or customer-facing experiences?
  • Is there internal alignment on what constitutes a high-risk use case?
  • Has the organization defined what level of review is required—and when?

What this means for governance:
Governance should match the level of risk a use case presents—not every project needs the same level of oversight. That means defining what “high risk” looks like in the context of the business, and what safeguards are appropriate in those cases. For example, a generative AI chatbot answering internal FAQs may require minimal review, while an AI model influencing financial decisions or customer outcomes will likely need formal approval processes, audit trails, and post-deployment monitoring. Getting alignment on what level of governance is required (and when) is critical to avoid both under- and over-governing.
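To make the tiering idea concrete, the logic above could be sketched as a simple rule of thumb. This is an illustrative sketch only: the use-case attributes, thresholds, and tier names below are assumptions for the sake of example, not a prescribed framework.

```python
# Illustrative sketch: mapping a use case's risk attributes to a review
# tier. Attributes and tier definitions are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    regulated_domain: bool      # e.g., finance, healthcare, hiring
    customer_facing: bool       # directly shapes customer experiences
    influences_decisions: bool  # affects money, access, or outcomes

def required_review(uc: UseCase) -> str:
    """Return the (hypothetical) review tier for a use case."""
    if uc.regulated_domain or uc.influences_decisions:
        return "formal approval + audit trail + post-deployment monitoring"
    if uc.customer_facing:
        return "standard review + documented evaluation"
    return "lightweight self-assessment"

# The two examples from the text land in different tiers:
faq_bot = UseCase("internal FAQ chatbot", False, False, False)
credit_model = UseCase("credit decision model", True, True, True)
print(required_review(faq_bot))       # lightweight self-assessment
print(required_review(credit_model))  # formal approval + audit trail + ...
```

The value of writing the rule down, even this crudely, is that it forces alignment on which attributes define “high risk”—which is exactly the agreement needed to avoid both under- and over-governing.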

3. Operating Model

This is about how the organization functions: where decisions are made, how authority is distributed, and how coordination happens across teams.

  • Is AI adoption being driven centrally, or are individual business units leading the way?
  • Are there shared tools and platforms, or is infrastructure managed locally?
  • How much collaboration currently exists across legal, technical, and business teams?

What this means for governance:
In more centralized organizations, governance can be more consistent—but it also runs the risk of becoming too rigid or disconnected from day-to-day work. It’s important to clarify what’s driven centrally (e.g., policies, review structures, shared tooling) and where teams have flexibility to adapt based on their context. In more federated or decentralized organizations, governance needs to be embedded within teams—often through designated leaders, shared processes, and regular coordination. These organizations benefit from frameworks that clarify roles and responsibilities while still allowing innovation to happen locally. Regardless of the structure, strong communication channels and clear alignment across teams are critical to making governance work in practice.

The playbook for AI governance

As this series, Rethinking Governance for the AI-Driven Organization, comes to a close, we wanted to leave readers with a practical takeaway. Across years of work in this space, one thing has become clear: governance is one of the most critical components of doing AI well. And given the scale of change ahead, it’s in all of our collective interest to get it right. 

As you lead your organization forward, we hope the following three steps—and the blogs that detail them—serve as a lasting reference for building governance that is actionable, adaptable, and fit for the future.

Step 1: Make the case that AI governance is a strategic capability, not a compliance function, and build shared buy-in across leadership. Read part 1.

Step 2: Design governance for how AI actually shows up in your organization, with practices that embed in real decisions, workflows, and responsibilities. Read part 2.

Step 3: Choose a governance model that is grounded in where your organization is today, and built to help you move toward your AI goals.

And if this topic is front of mind for you or your team, join us for our upcoming webinar, Governing the Future: Why AI Governance Is a Strategic Imperative, where we’ll go deeper on the frameworks, practices, and real-world lessons that are shaping how organizations are getting this right. Register here.
