Case Studies
The following case studies describe Australian, New Zealand, and Indian organisations that faced challenges with, or failed to fully adhere to, AI ethics principles, particularly around gen AI. They serve as cautionary tales for other organisations and highlight the impact of inadequate AI ethics and governance.
Automated Debt Recovery
While not explicitly an AI project, the Australian government’s automated debt recovery system, “Robodebt,” caused significant harm and controversy. The system, which used automation to identify and raise debts against welfare recipients, wrongly accused many people of owing money to the government. This failure led to a Royal Commission and has made government agencies hesitant to announce AI projects publicly due to reputational risk1.
Facial Recognition
The Australian Federal Police has trialled facial recognition software from two vendors without adequate consideration of safety practices or the potential for misuse2.
The adoption of facial recognition AI by various government agencies and law enforcement in India has raised concerns about privacy, surveillance, and potential bias. The lack of a comprehensive data protection law and the absence of clear guidelines for the ethical use of AI have fuelled debate about the technology’s impact on civil liberties and individual rights3.
Biometric Privacy
In 2021, the Australian Information Commissioner found that Clearview AI breached the privacy of Australians by scraping their biometric information from the web without consent and disclosing it through its facial recognition tool. The Administrative Appeals Tribunal upheld this decision in 20234.
Lack of Transparency
Specific applications of algorithms by New Zealand government agencies such as the Accident Compensation Corporation (ACC) and Immigration New Zealand have attracted criticism from media and academic commentators over accuracy, human control, transparency, bias, and privacy5.
Reinforcing Racial Bias
In 2019, concerns were raised about the use of an algorithmic risk assessment tool in the New Zealand criminal justice system. The tool was criticised for potentially perpetuating bias against certain ethnic groups, particularly Māori and Pacific Islanders, and for a lack of transparency in its decision-making process6.
Misguided Use
Gen AI has been used, inadvertently, to fabricate legal cases that were then cited in US courtrooms, and Australian legal professionals must safeguard against the same risk7.
The New South Wales Bar Association has issued guidelines on the correct use of generative AI tools8.
Recruitment Bias
Amazon developed an AI-powered automated recruitment system that showed bias against female candidates, and the project was cancelled as a result. While not specific to Australia, this example is cited in the context of the Australian government’s hesitancy to adopt AI for citizen-facing services and is discussed in the DCA guidelines9.
AI Ethics Case Studies and Industry Reports
The following are key case studies and industry reports from Australian organisations implementing AI ethics frameworks:
CSIRO AI Ethics Principles
The CSIRO worked with some of Australia’s biggest businesses to pilot its AI Ethics Principles. Telstra was one of the participating companies, testing the principles on two of its AI solutions, and prepared a case study summarising its experience in applying them10.
Operationalising Australia’s 8 AI Ethics Principles
Investment firm Alphinity developed a Responsible AI Framework for investors by operationalising Australia’s 8 AI Ethics Principles. The framework provides practical tools for the investment community to assess the ESG implications of AI development and deployment as standard practice11.
Need for Explainability
Australian SMEs and startups need to uplift their understanding of ethical AI issues, especially the explainability principle12.
Bar Association Guidelines for Generative AI Use
In Australia, the NSW Bar Association has a generative AI guide for barristers. The Law Society of NSW and the Law Institute of Victoria have released articles on responsible use in line with solicitors’ conduct rules13.
Evaluation of Risk Assessment Algorithms
Risk assessment algorithms are already in use in the New Zealand criminal justice system to assist with sentencing and parole decisions14.
Next Steps
IT teams should draw on these case studies and industry reports when presenting recommendations for an AI ethics framework within their organisation.
Footnotes
1. ‘Time for an Ethical Framework for AI’, IBRS, 2023.
2. ‘AFP called out for trialling controversial facial recognition software’, Cyber Daily, 2023.
3. ‘Facial Recognition AI in State Surveillance and Monitoring’, SFLC, 2024.
4. ‘Clearview AI breached Australians’ privacy’, OAIC, Australian Government, 2021.
5. ‘Government use of Artificial Intelligence in New Zealand’, University of Otago, 2019.
6. ‘Risk assessment algorithms in the New Zealand criminal justice system’, New Zealand Law Journal, Issue 328, 2022.
7. ‘AI fake legal cases in real courtrooms’, UNSW, 2024.
8. ‘Issues Arising from the Use of AI in Legal Practice’, NSW Bar Association, 2023.
9. ‘DCA Releases Guidelines to Reduce Bias in AI Recruitment’, Diversity Council Australia, 2023.
10. ‘Case studies from our AI Ethics Principles pilot’, Australian Government, 2021.
11. ‘A Responsible AI Framework for Investors’, Alphinity, 2024.
12. ‘SMEs and Explainable AI: Australian Case Studies’, Journal for the Australian and New Zealand Societies for Computers and the Law, 2022.
13. ‘Issues Arising from the Use of AI Language Models in Legal Practice’, NSW Bar Association, 2023.
14. New Zealand Law Journal, 2020, https://search.informit.org/doi/10.3316/agispt.20201102038974.