VENDORiQ: Proposals for AI Regulation

Several proposals have been put forward to protect the public against the risks associated with AI technology. IBRS looks into the need for AI regulations that promote the development and deployment of AI in line with ethical standards, fairness, accountability, and transparency.

The Latest

27 June 2023: Recent announcements related to AI regulation include:

  • Microsoft President Brad Smith emphasised the necessity of taking action to distinguish between genuine and AI-generated content, particularly to combat foreign cyber influence operations. Speaking in Washington, Smith underscored the importance of implementing protective measures against the deceptive manipulation of authentic content using AI. He also suggested that critical AI applications with security obligations should require licences (a minimal content-provenance sketch follows this list).
  • At an event in Japan, G7 leaders collectively urged the development and implementation of technical standards to ensure the trustworthiness of AI. They recognised that the governance of AI has not kept pace with its rapid growth and acknowledged that approaches to achieving the common vision of trustworthy AI may differ. However, they emphasised that rules for digital technologies, including AI, should align with shared democratic values, with a focus on accuracy, reliability, safety, and non-discrimination.
  • Two committees of the European Parliament have approved a preliminary mandate for proposed regulations on AI that classify AI systems according to the level of risk they pose. If enacted, the rules would ban AI practices deemed to carry unacceptable risk, prohibit the use of real-time remote biometric identification systems in public spaces, and impose stricter obligations on AI developers. In addition, developers of foundation models would be obligated to register in an EU database and adhere to specific design and information requirements.
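
As a concrete illustration of the content-provenance idea Smith raised, the following is a minimal sketch of how a publisher could attach a verifiable digital signature to content so that recipients can detect tampering. It is loosely inspired by provenance schemes such as C2PA; the function names and workflow are illustrative assumptions, not any vendor's actual API.

```python
# Minimal content-provenance sketch: sign content at publication and
# verify it on receipt. Uses Ed25519 signatures from the widely used
# 'cryptography' package. Names and workflow are illustrative assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Publisher side: bind the publisher's key to the exact content bytes."""
    return private_key.sign(content)


def is_authentic(public_key: Ed25519PublicKey, content: bytes, signature: bytes) -> bool:
    """Consumer side: True only if the content is unmodified since signing."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()  # publisher's signing key
    footage = b"Original newsroom footage, 27 June 2023."
    sig = sign_content(key, footage)

    print(is_authentic(key.public_key(), footage, sig))         # True
    print(is_authentic(key.public_key(), footage + b"!", sig))  # False: tampered
```

A signature of this kind proves only that the content is unchanged since the key holder signed it; whether the content was human- or AI-generated still rests on the signer's claims, which is one reason content provenance is typically discussed alongside disclosure and licensing obligations.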

Why It’s Important

While Australia was among the first countries to address AI governance, with a voluntary ethics framework in 2018 and the Artificial Intelligence Ethics Framework in 2019, it has yet to introduce specific legislation that regulates AI and promotes accountability and security. AI systems make autonomous decisions that can have misleading or harmful consequences, so proper regulation is needed to ensure that developers, organisations, and users of AI technologies are held accountable for the outcomes of their systems. Clear guidelines and frameworks can help determine who is responsible for AI actions and provide avenues for legal recourse in cases of harm or misuse.

IBRS recommends that legislators follow a risk-based approach similar to the European Union's proposal, in which AI systems are evaluated and regulated according to their potential risks and impact. High-risk AI applications, such as those used in critical infrastructure, autonomous vehicles, or healthcare, require stricter regulation to ensure safety, privacy, and accountability. A sketch of such a tiered assessment follows.
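
To make the risk-based approach concrete, the following is a minimal sketch of a tiered assessment in Python. The tiers loosely mirror the categories in the EU proposal (unacceptable, high, limited, minimal), but the domain lists and tier assignments are illustrative assumptions, not the text of any regulation.

```python
# Illustrative risk-tier triage for AI systems, loosely modelled on the
# tiered structure of the EU proposal. Domain lists and tier assignments
# are assumptions for demonstration, not the regulation's actual text.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: conformity assessment, logging, oversight"
    LIMITED = "transparency obligations, e.g. disclose AI interaction"
    MINIMAL = "no additional obligations"


# Assumed example domains per tier, for illustration only.
BANNED_PRACTICES = {"social_scoring", "realtime_public_biometrics"}
HIGH_RISK_DOMAINS = {"critical_infrastructure", "autonomous_vehicles", "healthcare"}
LIMITED_RISK_DOMAINS = {"chatbot", "content_generation"}


@dataclass
class AISystem:
    name: str
    domain: str


def assess(system: AISystem) -> RiskTier:
    """Map a system's domain to a regulatory tier (highest applicable wins)."""
    if system.domain in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if system.domain in LIMITED_RISK_DOMAINS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    for s in [AISystem("triage-assistant", "healthcare"),
              AISystem("support-bot", "chatbot"),
              AISystem("spam-filter", "email")]:
        print(f"{s.name}: {assess(s).name} -> {assess(s).value}")
```

In practice, classification would turn on a system's intended purpose and context of use rather than its domain alone; the point of the sketch is simply that regulatory obligations scale with assessed risk.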

Who’s Impacted

  • CEOs
  • AI developers
  • IT teams

What’s Next?

  • Prioritise cyber security and privacy protection. Robust security measures, data anonymisation techniques, and compliance with relevant regulations (e.g., the GDPR) should be implemented to mitigate risks and ensure the responsible use of AI technologies (a minimal anonymisation sketch follows this list).
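
As one example of the anonymisation techniques mentioned above, the following is a minimal sketch that pseudonymises direct identifiers with a keyed hash before records enter an AI pipeline. The field names, record layout, and choice of keyed BLAKE2b are illustrative assumptions; a real deployment would also need key management and a re-identification risk assessment.

```python
# Minimal pseudonymisation sketch: replace direct identifiers with a keyed
# hash before records are used for AI training or analytics. Field names
# and record layout are illustrative assumptions.
import hashlib
import secrets

# In practice this key would live in a key-management service, not in code.
PSEUDONYM_KEY = secrets.token_bytes(32)

DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # assumed PII fields


def pseudonymise(value: str, key: bytes = PSEUDONYM_KEY) -> str:
    """Keyed BLAKE2b: stable within a key, unlinkable without it."""
    return hashlib.blake2b(value.encode("utf-8"), key=key, digest_size=16).hexdigest()


def anonymise_record(record: dict) -> dict:
    """Return a copy with direct identifiers replaced by pseudonyms."""
    return {
        field: pseudonymise(value) if field in DIRECT_IDENTIFIERS else value
        for field, value in record.items()
    }


if __name__ == "__main__":
    record = {"name": "Jane Citizen", "email": "jane@example.com", "postcode": "2000"}
    print(anonymise_record(record))
    # Same input maps to the same pseudonym, so joins across datasets still work.
```

Keyed hashing of this kind is pseudonymisation rather than full anonymisation under the GDPR: while the key exists, the data remains personal data, which is why protecting the key is as important as transforming the records.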
