Six Questions for Boards and Six Questions for Executives to Ask Regarding AI

Boards and executives must ask targeted questions to bridge their AI knowledge gap, ensuring responsible adoption and effective governance.

Conclusion

Australian boards’ and executives’ knowledge of artificial intelligence (AI) is still in a developmental phase, characterised by a growing awareness of its importance and risks, but also a significant gap in practical expertise and governance. While some directors and executives are actively upskilling, a lack of understanding of AI’s technical, ethical, and governance complexities remains a major challenge. Current research suggests that there is generally limited board oversight of AI use, with AI application often subject to inadequate controls and risk oversight. In many cases, directors and senior executives are unaware of where AI is being used in their organisation. For organisations to deploy AI effectively while mitigating risks, both the board of directors and the executive team must ask critical questions to ensure alignment with business goals, ethical standards, and regulatory requirements.

Below we outline six key questions a board should ask about AI and six questions executives should ask to facilitate and navigate responsible AI adoption within their organisation.

Observations

1. Background

Directors and senior leaders in many organisations now recognise that AI is no longer a niche technology but a strategic imperative that affects everything from operational efficiency to competitive advantage. However, while organisations are still in the early stages of developing and deploying AI-based solutions, recent findings [1, 2] suggest that:

  • While AI use is growing in Australian companies, there is still a significant lack of trust and understanding of AI.
  • Over 70 per cent of employees are experimenting with tools such as ChatGPT at work, often without oversight.
  • Over 50 per cent of organisations identify governance and compliance as the primary barrier to AI adoption, rather than the technology itself.
  • Over 60 per cent of employees admit to not evaluating AI output, and over 60 per cent avoid disclosing their AI use (so-called shadow AI).

Navigating AI governance in any organisation is complex, requiring expertise in business, ethical issues, compliance, and the evolving regulatory landscape. Understandably, boards and executives are becoming increasingly concerned about their responsibilities in effectively overseeing AI developments in their organisation.

Asking the right questions is one way board members and executives can start to fulfil their AI oversight responsibilities. While the board will need to question high-level governance, strategic alignment, and enterprise risk, executives will need to focus on operations and execution. Together, these two sets of questions create a pragmatic framework for responsible AI adoption while more structured frameworks and experience are established.

2. Six Questions a Board Should Ask About AI

The board of directors plays a pivotal role in overseeing AI strategy, ensuring it aligns with the organisation’s mission, values, and risk appetite. These questions focus on governance, strategy, and accountability.

  1. How does AI align with our strategic objectives? The board must ensure AI initiatives support the organisation’s long-term goals, whether improving operational efficiency, enhancing customer experiences, or driving innovation. Ask for a clear articulation of how AI projects contribute to competitive advantage and measurable outcomes, avoiding investments in AI for its own sake.
  2. What are the ethical implications of our AI initiatives? AI can amplify biases, infringe on privacy, or erode trust if not managed carefully. The board should investigate whether ethical frameworks guide AI development and deployment, including considerations of fairness, transparency, and accountability. Are there policies to address unintended consequences, such as discriminatory outcomes? Importantly, the board should also inquire about the impact of AI on the organisation's own workforce.
  3. What risks are associated with our AI systems, and how are they being mitigated? AI introduces risks such as data breaches, regulatory non-compliance, or reputational damage. The board should demand a comprehensive risk assessment, including cyber security measures, data governance protocols, and contingency plans for AI failures or misuse. Is there a risk-based monitoring and reporting system for mission-critical and high-risk AI systems?
  4. Are we compliant with current and emerging AI regulations? Global and local regulations, such as the EU's AI Act and sector-specific laws, are evolving rapidly. The board must confirm that the organisation understands and adheres to applicable laws and standards, and that processes are in place to monitor emerging requirements.
  5. Do we have the right talent and governance structures for AI? Successful AI adoption requires appropriate skills and robust oversight. The board should ask whether the organisation has the necessary expertise, including data scientists, AI ethicists, and compliance specialists, and whether governance frameworks, such as AI steering committees, are in place to monitor progress and risks.
  6. How are we measuring the success and ROI of AI initiatives? AI projects can be costly and complex, with benefits that are often difficult to quantify. The board should ensure that clear metrics and key performance indicators (KPIs) are established to evaluate the impact of AI on revenue, efficiency, and customer satisfaction. Are investments delivering tangible value, and how is success being tracked over time?

3. Six Questions Executives Should Ask About AI

Executives are responsible for operationalising AI strategies, ensuring they are practical, scalable, and aligned with business priorities. These questions focus on execution, integration, and impact.

  1. What specific business problems are we solving with AI? Executives must ensure AI initiatives address well-defined challenges, such as optimising supply chains or automating repetitive tasks. Are projects prioritised based on business impact, and do they avoid chasing trends without clear use cases?
  2. How are we integrating AI with existing processes, systems, and people? AI may complement or disrupt current operations and the people who perform them. Executives should ask how AI tools integrate with existing systems, workflows, and data structures. Are there plans to manage change and train employees to work alongside AI technologies?
  3. What data do we need, and is it accessible and of high quality? The effectiveness of AI depends on the quality and availability of data. Executives should confirm that the organisation has access to relevant, clean, and unbiased data that it is legally permitted to use. Are there gaps in data infrastructure, and how are privacy and security concerns being addressed?
  4. How are we ensuring AI systems are transparent and explainable? Stakeholders, customers, and regulators increasingly demand transparency in AI decision-making. Executives should inquire whether AI models are interpretable and whether processes exist to explain outcomes to non-technical audiences, particularly in high-stakes areas such as finance or healthcare. Are there processes in place to assess the risks from third-party vendors?
  5. What is our plan for scaling AI across the organisation? Pilot projects are a start, but scaling AI requires a strategy. Executives should assess whether the necessary infrastructure, budgets, and training programs are in place to expand AI use. Are there clear roadmaps for moving from experimentation to enterprise-wide deployment?
  6. How are we engaging employees and stakeholders in the AI journey? AI adoption can spark resistance if employees fear job displacement or lack understanding. Executives should ask how the organisation is communicating AI's benefits, addressing concerns, and upskilling staff to work with AI. Are customers and partners also informed about the use of AI-based solutions?

Next Steps

CIOs have the opportunity to play a critical role in facilitating the responsible adoption of AI within their organisations. CIOs should:

  • Establish a dialogue with the board and executives to gauge their level of understanding and maturity in AI governance. Consider undertaking an AI maturity assessment.
  • Educate boards and executives on the latest developments, opportunities, and risks related to AI.
  • Provide the board and executives with relevant perspectives and questions to facilitate the adoption of AI and effective risk management.

Footnotes

  1. ‘Trust, attitudes and use of artificial intelligence: A global study 2025’, KPMG and University of Melbourne study, 2025.
  2. ‘Directors’ Guide to AI Governance’, Australian Institute of Company Directors, 2025.
