How is AI Governance Different?
AI governance differs from traditional IT governance primarily in the range of technologies that may be utilised, the ethical considerations involved, and the associated legal and reputational risks. While traditional IT governance focuses largely on security and compliance, AI governance must also address bias, fairness, and transparency to ensure ethical AI usage. Governance provides guardrails that keep AI operating within legal and ethical boundaries and aligned with organisational values.
Given that some organisations may have multiple governance structures (IT, data, digital), IBRS recommends that rather than implementing a standalone AI governance structure, AI governance should be considered under the umbrella of digital governance¹.
Responsible AI Usage Policy – Template
A safe (or responsible) AI use policy is the cornerstone of AI governance. It outlines the domains, practices, and values that must be considered in the development, acquisition, and deployment of AI-based solutions.
The following template is consistent with the Australian Government’s AI Ethics Principles and the European Union’s Artificial Intelligence Act. Organisations should customise it to their own particular context.
- AI Policy – Purpose and Scope
- Purpose: this policy establishes guidelines for the responsible acquisition, development, and use of AI systems within [Organisation Name].
- Scope: this policy applies to all employees, contractors, and partners who develop, procure, implement, or use AI systems on behalf of the Organisation.
- Definitions
- Artificial Intelligence (AI): systems designed to perform tasks that typically require human intelligence, including learning, reasoning, problem-solving, perception, and language understanding.
- Machine Learning (ML) and Graph-Based Models: subsets of AI that enable systems to learn from data and improve performance without explicit programming.
- AI System: any software, platform, or tool that incorporates AI or ML capabilities.
- Governance Structure
- AI Oversight/Governance Committee: The governance committee is responsible for overseeing AI initiatives, ensuring compliance, and addressing risks. This cross-functional committee includes representatives from leadership, legal, IT, data privacy, ethics, and relevant business units.
- Risk Assessment and Management
- Regular Risk Assessments for All AI Systems: The organisation will maintain a record of AI applications. Each AI implementation must undergo a formal risk assessment before development or procurement and again before deployment. These assessments should evaluate risks across multiple dimensions, including privacy, security, bias, reliability, and regulatory compliance. Given that AI technologies are now well-embedded in many Software-as-a-Service (SaaS) and Cloud services, organisations will also need to consider the interconnected risks.
- Periodic Review and Updates to Risk Assessments: Risk assessments must be reviewed and updated according to a defined schedule based on risk level, with annual reviews for low-risk systems, semi-annual reviews for medium-risk systems, and quarterly reviews for high-risk systems.
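The tiered review schedule above can be expressed as a simple lookup, which makes it easy to automate due-date tracking in an AI system register. This is a minimal sketch; the function and constant names are illustrative assumptions, not part of the template.

```python
from datetime import date, timedelta

# Review intervals from the policy: annual for low-risk systems,
# semi-annual for medium-risk, quarterly for high-risk.
# (Day counts are approximations of those intervals.)
REVIEW_INTERVAL_DAYS = {
    "low": 365,
    "medium": 182,
    "high": 91,
}

def next_review_date(last_review: date, risk_level: str) -> date:
    """Return the date the next risk assessment review is due."""
    try:
        interval = REVIEW_INTERVAL_DAYS[risk_level]
    except KeyError:
        raise ValueError(f"Unknown risk level: {risk_level!r}")
    return last_review + timedelta(days=interval)
```

A register job could run this daily and flag any system whose `next_review_date` has passed, feeding overdue reviews to the governance committee.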
- Ethical Principles
- Fairness: AI systems must be designed to avoid bias and discrimination. This requires diverse and representative training data. Systems that make decisions affecting individuals must undergo fairness impact assessments.
- Transparency: Decision-making processes must be explainable and documented. For customer-facing or decision-making AI systems, the Organisation must be able to explain in understandable terms how the system works and what factors influence its outputs.
- Accountability: Each AI system must have designated owners accountable for its operation, maintenance, and impacts. Audit trails must track system actions and human oversight decisions. Mechanisms must exist for addressing complaints or concerns about AI-driven decisions.
- Privacy: AI systems must respect individual privacy rights. This includes conducting privacy impact assessments and ensuring secure data handling throughout the AI lifecycle.
- Copyright and Intellectual Property: AI systems must respect copyright and intellectual property.
- Human Oversight: AI systems should complement human decision-making, not replace it entirely. Appropriate levels of human oversight must be defined for each AI system based on its potential impact.
- Data Governance
- Guidelines for Data Collection, Storage, and Processing: Data for AI systems must be collected for legitimate purposes using lawful and ethical methods.
- Data Quality Standards: Training and operational data must be relevant, reasonably accurate, and current. Data sources must be documented and assessed for reliability.
- Data Retention and Deletion Policies: Clear retention periods must be established for different data types used in AI systems.
- Security Measures for AI-related Data: AI training and operational data must be secured according to their sensitivity level.
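The retention requirement above implies a per-data-type schedule that can be checked programmatically. The sketch below assumes hypothetical data types and retention periods; actual values must come from the organisation's own retention schedule.

```python
from datetime import date, timedelta

# Hypothetical retention periods per data type, in days.
# Real values belong in the organisation's retention schedule.
RETENTION_DAYS = {
    "training_data": 730,
    "inference_logs": 90,
    "feedback_records": 365,
}

def is_past_retention(data_type: str, collected_on: date, today: date) -> bool:
    """True if a record has exceeded its retention period and is due for deletion."""
    limit = RETENTION_DAYS.get(data_type)
    if limit is None:
        raise ValueError(f"No retention period defined for {data_type!r}")
    return today > collected_on + timedelta(days=limit)
```

Running such a check as a scheduled job gives an auditable trail showing that deletion policies are enforced rather than merely documented.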
- Procurement and Development
- Criteria for Evaluating External AI Vendors and Solutions: Vendor assessment must include evaluation of data practices, ethical principles, transparency, performance metrics, security measures, and compliance capabilities. Vendors must be able to demonstrate that their solutions can meet the organisation’s AI responsible use policy requirements.
- Standards for Internal AI Development Projects: Internal development must follow established methodologies that incorporate ethical considerations from the design phase. This includes requirements gathering processes, quality assurance procedures, security standards, and documentation requirements.
- Documentation Requirements for AI Systems: All AI systems must maintain comprehensive documentation, including system architecture, data flows, model information, performance metrics, and risk assessments.
- Deployment and Monitoring
- Procedures for Deploying AI Systems: Deployment must follow a controlled process, including final approvals, user training, communication plans, rollback capabilities, and post-deployment verification. Phased deployments should be considered for high-risk systems.
- Ongoing Monitoring Requirements: All AI systems must be monitored for performance, accuracy, fairness, and unexpected behaviours.
- Contestability and Feedback Mechanisms for Impacted Stakeholders: Channels must be established for customers, employees, and other stakeholders to provide feedback on AI systems, report concerns, or request human review of automated decisions.
- Incident Reporting: Employees must report any suspected violations of this policy or any potential ethical, legal, or regulatory concerns related to AI use to the AI Officer or through the Organisation’s established reporting channels.
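The ongoing-monitoring requirement above can be sketched as a threshold check that flags a system for human review when performance or fairness metrics degrade. All metric names and threshold values here are illustrative assumptions; real thresholds should come from each system's risk assessment.

```python
def needs_review(accuracy: float, fairness_gap: float,
                 min_accuracy: float = 0.90,
                 max_fairness_gap: float = 0.05) -> list:
    """Return the list of monitoring concerns that warrant human review.

    accuracy      -- current measured accuracy of the AI system
    fairness_gap  -- e.g. difference in outcome rates between groups
    Thresholds are placeholders, set per system during risk assessment.
    """
    concerns = []
    if accuracy < min_accuracy:
        concerns.append(
            f"accuracy {accuracy:.2f} below threshold {min_accuracy:.2f}")
    if fairness_gap > max_fairness_gap:
        concerns.append(
            f"fairness gap {fairness_gap:.2f} exceeds {max_fairness_gap:.2f}")
    return concerns
```

An empty result means the system stays in normal operation; any concerns returned would trigger the incident-reporting and human-review channels described above.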
- Training and Awareness
- Required AI Literacy Training for Relevant Personnel: All employees involved in AI development, procurement, implementation, or use must complete basic AI literacy training, covering AI capabilities and risks.
- Role-Specific Training Requirements: Additional specialised training is required based on job function.
- Compliance
- Regulatory Requirements (GDPR, CCPA, Industry-Specific Regulations, etc.): AI systems must comply with all applicable laws and regulations.
- Internal Audit Procedures: AI systems and the use of AI by SaaS providers will be incorporated into the organisation’s risk management and audit program, verifying policy compliance, risk management effectiveness, and the implementation of controls.
- External Communication
- Communication and Transparency to Customers/Clients: Use of AI-based solutions will be disclosed to customers and clients as and when they interact with the solution. Customers and clients will have a mechanism for contesting the outcomes and results of AI-based solutions.
Next Steps
The rapid adoption and evolution of AI-based applications means that CIOs must stay on top of this technology and its implications for their organisation. Many executives and boards have scant knowledge of AI, its risks, and nuances. We suggest that CIOs and executives:
- Engage with their line of business managers and their broader organisation and develop a safe AI usage policy (also known as a responsible use of AI policy) to manage risks and guide adoption. Use the IBRS framework and template as a starting point. A more comprehensive IBRS guide is also available².
- Begin adopting some of the above practices, depending on your organisation’s level of AI adoption maturity.
Footnotes
- ‘Digital Governance Framework Presentation Kit’, IBRS, 2021.
- ‘Safe AI Usage Policy Template’, IBRS, 2025.


