Speakers
Synopsis
Australia has, as yet, no comprehensive regulatory regime for artificial intelligence. There are a number of voluntary guidelines and sets of principles, but no clear statement of which activities AI may and may not perform, and no legal guardrails around the implementation of AI in an enterprise or agency.
This means that when AI projects are under consideration, the main regulatory consideration is often the Commonwealth Privacy Act 1988 and the Australian Privacy Principles (APPs), and/or their State and Territory equivalents (where they exist).
The APPs and their equivalents are not entirely fit for purpose in this context, and in any case regulate only part of the AI field.
What do the APPs cover?
Firstly, the AI must be handling data that falls within the definition of 'personal information'. This is not necessarily an easy question to answer in some reasonably common AI use cases.
Secondly, the APPs principally regulate those parts of an AI project involving the use and disclosure of personal information (APP 6), the security of personal information (APP 11), and the quality of personal information (APP 10).
Future amendments to the Privacy Act (introduced into parliament in August 2024) will go slightly further, but only in requiring greater public transparency around the use of AI to inform certain 'substantially automated' decisions which have 'a legal or similarly significant effect on an individual's rights'. This is partly in response to the so-called 'Robodebt' episode.
This presentation will note issues and challenges that arise under the current and proposed future privacy law for both government agencies and private sector organisations when attempting to assess compliance with the APPs in an AI context. Examples arising from current AI implementations in both organisations and agencies will be discussed.
The presentation will then consider the approach taken in the European Union's Artificial Intelligence Act (EU AI Act), which was set to come into force three weeks after this proposal was submitted, with most regulatory aspects taking effect two years later. The EU AI Act is the first of its kind in the world and implements a tiered approach to regulating the use of AI which goes far beyond its handling of personal information. The EU AI Act risk tiers - minimal, limited, high and unacceptable risk - each attract their own graduated level of regulatory response, with AI use prohibited altogether in identified 'unacceptable risk' domains.
The Australian government has indicated that it intends a lighter-touch regulatory framework for AI, with mandatory guardrails for high-risk settings but a voluntary best-practice toolkit for other implementations, possible voluntary watermarking of AI output, and the strengthening of existing laws to minimise potential risks and harms from the use of AI in relevant domains. However, with an election likely in May 2025, time is running out.
In conclusion, the presentation will consider lessons from the EU AI Act for possible future Australian AI regulation, and how that might differ from leaving all the heavy lifting to the Privacy Act.