AI as the Engine of the Innovation Economy: Part 4 – AI Governance Models

AI governance isn't a brake on progress; it’s a strategic enabler of ethical, scaled innovation, requiring a shift to use-case focus and universal literacy.

Conclusion

For C-level executives, strategic investment in artificial intelligence (AI) governance is no longer a matter of compliance; it is a direct investment in sustainable innovation. An effective AI governance framework should not act as a brake on progress; instead, it should provide the flexibility to de-risk the adoption of AI and ensure it creates long-term value. By moving from a mindset of restrictive control to one of enablement, organisational culture empowers teams to innovate confidently, ethically, and at pace. This approach will be the primary differentiator between organisations that merely experiment with AI and those that harness it as a core driver of competitive advantage. Consider the practices and models outlined below to establish an AI governance model that protects the business while fuelling AI innovation.

Observations

The AI Governance Imperative: Balancing Speed with Safety

The pace of AI development presents enormous opportunities and significant risks. Early, ungoverned adoption of AI tools has already exposed organisations to tangible threats. IBRS raised concerns as early as April 2019 that AI exhibits a number of cognitive biases1 and hallucinations that impact decision-making. Such factual errors and embedded biases have led many leaders to observe a general decline in the quality of business outputs. These are not minor issues; they represent material reputational, legal, and operational risks that cannot be ignored.

The core challenge is to mitigate these downsides without stifling the creativity and productivity gains that AI promises to deliver. A lack of clear governance is a primary barrier to scaling an organisation’s AI capabilities effectively. Currently, 37 per cent of leaders surveyed by DataCamp highlight the absence of formal training as a critical challenge their teams face in leveraging AI responsibly2 – a gap commonly identified as a shortfall in data literacy. Inaction is not an option. Falling behind competitors who successfully integrate and govern their AI initiatives poses a significant threat to a company’s market position. Therefore, AI governance must be treated not as an administrative burden or a cost centre, but as a strategic enabler of innovation and a crucial component of long-term competitive strategy.

From Inhibitor to Innovator: Best Practices for AI Governance

Effective AI governance is not about restriction; it is about empowerment. It creates the necessary guardrails that give teams the confidence to experiment and innovate safely. The objective is to incentivise innovation while minimising the potential for harm. This requires a fundamental shift in an organisation’s approach to AI governance: attempting to regulate each underlying technology is impractical.

Focus On the Use Case, Not the Model: Resist attempts to regulate the underlying AI models, as these are constantly evolving. Instead, governance should focus on the interface between AI and individuals, where harm can occur. A lightweight review process can assess the risk of a proposed AI use case – for example, a customer-facing chatbot requires more scrutiny than an internal coding assistant.
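A lightweight review process of this kind can be sketched as a simple triage rule. The sketch below is illustrative only: the criteria (customer-facing, personal data, automated decisions) and tier names are assumptions for demonstration, not a prescribed standard, and a real framework would define these with legal and risk teams.

```python
# Hypothetical sketch of a lightweight AI use-case risk triage.
# Criteria and tier names are illustrative assumptions, not a standard.

from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    customer_facing: bool            # interacts directly with external individuals
    handles_personal_data: bool      # processes personal or sensitive data
    makes_automated_decisions: bool  # acts on people without human review


def risk_tier(uc: AIUseCase) -> str:
    """Assess the interface between the AI and individuals, not the model itself."""
    score = sum([uc.customer_facing,
                 uc.handles_personal_data,
                 uc.makes_automated_decisions])
    if score >= 2:
        return "high"    # full governance review before deployment
    if score == 1:
        return "medium"  # lightweight review by the business unit
    return "low"         # self-assessment only


chatbot = AIUseCase("customer chatbot", True, True, False)
assistant = AIUseCase("internal coding assistant", False, False, False)
print(risk_tier(chatbot))    # high
print(risk_tier(assistant))  # low
```

The point of the sketch is that scrutiny scales with where harm can occur: the customer-facing chatbot lands in the high tier while the internal coding assistant self-assesses, regardless of which underlying model either one uses.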

Encourage Openness to Avoid Data Balkanisation3: The true power of AI is unlocked through access to large, high-quality datasets. If individual departments hoard data to create a competitive advantage internally, it stifles broader innovation and reduces the ability to develop powerful, enterprise-wide AI capabilities. Governance policies must therefore promote data sharing and openness as a core principle to maximise the collective return on AI investment.

Prioritise Universal AI Literacy: An overwhelming 79 per cent of business leaders surveyed by DataCamp identified foundational training on the responsible and ethical use of AI as mandatory for every employee, regardless of their role4. Training programmes must extend beyond the technical capabilities of AI to cover its inherent risks, including the potential for bias, the generation of misinformation, and privacy concerns. Cultivating a deeply embedded culture of AI literacy enables responsible and sustainable innovation. It empowers every employee to be a steward of ethical principles as they explore the potential of AI in their daily work.

Choice of AI Governance Model

Centralised Model

A single, central authority, such as an AI Centre of Excellence (CoE), holds all responsibility for AI governance. This group creates the rules, vets the projects, and provides the expertise for the entire organisation. Business units must go through this central team for approval and guidance.

Pros:

  • High consistency and control.
  • Efficient use of scarce talent.
  • Clear accountability.
  • Strong risk mitigation.

Cons:

  • Innovation bottleneck.
  • Lack of business context.
  • Perception as innovation police.
  • Scalability challenges.

Best Suited For:

Organisations in the early stages of their AI journey with a limited number of projects. Companies in highly regulated industries like financial services (banking, insurance) and healthcare, where consistency in compliance and risk management is paramount, and the cost of failure is extremely high. These sectors prioritise control and standardisation over speed of experimentation.

Decentralised (or Distributed) Model

In a decentralised structure, there is no central AI governance body. Instead, each business unit or functional team is responsible for governing its own AI initiatives. They develop their own standards, manage their own risks, and make their own decisions.

Pros:

  • Maximum agility and speed.
  • High business relevance.
  • Strong ownership and accountability at the team level.
  • Fosters local expertise.

Cons:

  • Inconsistent standards and reinventing the wheel.
  • Potential for governance gaps and silos.
  • Difficulty in managing enterprise-wide risk.
  • Inefficient use of resources.

Best Suited For:

Highly decentralised organisations with a strong, pre-existing culture of local autonomy and robust governance within their business units. Large tech companies or conglomerates where different divisions operate in vastly different markets and have unique needs that cannot be effectively served by a one-size-fits-all approach.

Hub-and-Spoke (or Federated) Model

A central hub (similar to an AI CoE) sets the overarching principles, policies, and best practices. The spokes are individuals or small teams embedded within the business units who are responsible for implementing and customising the central guidance for their specific needs.

Pros:

  • Balances consistency with agility.
  • Scales effectively.
  • Promotes collaboration and knowledge sharing.
  • Deep business integration.

Cons:

  • Requires more resources and coordination.
  • Potential for conflict with business units.
  • Complexity in implementation.

Best Suited For:

Most large, complex organisations that are looking to scale their AI initiatives responsibly. This model is becoming the de facto standard for mature companies across various industries, including retail, manufacturing, and technology, as it offers the best combination of control and flexibility.

Industry Preferences: A Matter of Maturity and Risk

While there are no hard and fast rules, different industries do gravitate towards certain models based on their regulatory landscape and risk tolerance:

Financial Services, Healthcare, and Government: These industries typically start with a centralised model. The high stakes of regulatory compliance and the severe consequences of error (financial loss, patient harm) necessitate a cautious, highly controlled approach. As their AI practice matures, transitioning to a hub-and-spoke model can increase innovation within established safety guardrails.

Technology and E-commerce: These fast-moving industries often begin with a more decentralised approach to foster rapid innovation. However, as they scale and face greater public and regulatory scrutiny, they almost invariably move towards a hub-and-spoke model to bring consistency and manage enterprise-wide risks more effectively.

Manufacturing and Industrials: This sector is increasingly adopting a hub-and-spoke model as it seeks to leverage AI for everything from predictive maintenance to supply chain optimisation. The need for consistent standards across a global network of factories and operations makes a purely decentralised approach risky, while a purely centralised model would be too slow to adapt to local conditions.

Next Steps

  • Examine your approach to AI governance and determine if it is limiting innovation.
  • To support AI innovation:
    • Establish an AI Governance Council: A cross-functional team with executive sponsorship, a clear charter, and a focus on enabling innovation.
    • Develop a Principles-Based AI Policy: Task the new council with drafting a concise, principles-based AI policy for board review within six months.
    • Fund a Company-Wide Responsible AI Upskilling Programme: Allocate budget to develop and deploy a mandatory AI literacy programme for all employees.

Footnotes

  1. ‘Recognising Cognitive Biases for Better Decisions’, IBRS, 2019.
  2. ‘The DataCamp Data & AI Literacy Report 2025’, DataCamp, 2025.
  3. ‘Preventing the Balkanization of the Internet’, Council on Foreign Relations, 2018.
  4. ‘The DataCamp Data & AI Literacy Report 2025’, DataCamp, 2025.
