Australian AI Regulation: Where the Policy Conversation Stands
As 2025 winds down, Australia’s approach to artificial intelligence regulation remains in a state of deliberate flux. Unlike the European Union’s comprehensive AI Act or China’s algorithmic regulation framework, Australia has opted for what government officials describe as a “risk-based, sector-specific” approach that prioritizes existing regulatory frameworks over new legislation.
The Department of Industry released its voluntary AI Ethics Framework back in 2019, and six years later, it remains exactly that: voluntary. No mandatory compliance requirements, no penalties for violations, and no enforcement mechanism beyond reputational pressure. Industry groups have largely welcomed this light-touch approach, while consumer advocates and academic researchers argue it leaves significant gaps in accountability.
The Safe and Responsible AI Consultation
The government’s Safe and Responsible AI consultation process, which closed in September, attracted over 400 submissions from industry, academia, and civil society. The key tension evident across submissions: businesses want regulatory clarity and consistency, but most also want to avoid prescriptive rules that might constrain innovation or create compliance burdens.
Several themes emerged consistently. Privacy remains the dominant concern, particularly around training data and model outputs. The existing Privacy Act, currently under review itself, wasn’t designed for large language models or synthetic data generation. How do you handle a data subject access request when someone’s information has been incorporated into model weights?
Transparency requirements proved contentious. While there’s broad agreement that high-risk AI systems (those making decisions about credit, employment, or government services) should be explainable, defining “explainable” in practice is challenging. A credit scoring model built with gradient boosting might be technically interpretable, but is an ensemble of trees weighing 47 features actually meaningful to an applicant denied a loan?
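To make that gap concrete, here is a minimal sketch (synthetic data, anonymised placeholder feature names, scikit-learn as an assumed toolkit) of what the “technically interpretable” output of such a model actually looks like:

```python
# Sketch only: hypothetical 47-feature credit model on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for an application dataset with 47 anonymised features.
X, y = make_classification(n_samples=5000, n_features=47, n_informative=12,
                           random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# The model is "interpretable" in the sense that its inputs can be ranked...
ranked = sorted(enumerate(model.feature_importances_),
                key=lambda kv: kv[1], reverse=True)
for idx, weight in ranked[:5]:
    print(f"feature_{idx:02d}: {weight:.3f}")
# ...but a ranked list of 47 weighted factors is not an explanation a declined
# applicant can act on, which is the definitional problem regulators face.
```

The point is not that such output is useless, but that “explainable” can mean anything from this kind of feature ranking to a plain-language reason for refusal, and the consultation submissions disagreed on which end of that spectrum the law should require.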
Sector-Specific Versus Horizontal Regulation
Australia’s regulatory architecture has historically been sector-based: ASIC for financial services, ACMA for telecommunications, the TGA and state regulators for health, and so on. The government’s current thinking appears to favor extending these existing regulators’ mandates to cover AI systems within their domains, rather than creating a new horizontal AI regulator.
This approach has merit. Sectoral regulators understand their industries’ specific risk profiles and operating realities. APRA, for instance, already requires financial institutions to manage model risk, and extending those principles to AI systems is a natural evolution. The TGA’s Software as a Medical Device framework provides a template for regulating diagnostic AI systems.
However, many AI systems don’t fit neatly into single sectors. A facial recognition system might be used by law enforcement, private security, retail stores, and entertainment venues. Which regulator owns that? The sector-specific approach risks creating regulatory gaps and inconsistencies.
What’s Actually Happening on the Ground
While policy debates continue, Australian organizations are implementing AI systems under the current rules, such as they are. The big banks all have AI governance frameworks, partly because APRA expects it and partly because they remember the Royal Commission. Most follow something resembling the voluntary ethics principles: transparency, fairness, accountability, and so on.
Procurement is emerging as a de facto regulatory lever. Government agencies increasingly include AI ethics and explainability requirements in RFP processes. If you want to sell an AI system to a government department, you need to demonstrate it meets certain standards, even if those standards aren’t legally mandated. Large enterprises are adopting similar approaches with their vendors.
The legal risk calculation is also shifting. While there’s no AI-specific liability regime, existing tort law, contract law, and anti-discrimination law all apply. Organizations implementing AI systems are getting legal advice on potential exposure, and that’s influencing deployment decisions. No one wants to be the test case that establishes AI liability precedent.
International Alignment Questions
Australia’s approach is being shaped by international developments. The EU’s AI Act is taking effect in phases, with most obligations applying from 2026, and any Australian company operating in Europe or providing services to European customers needs to comply. That’s creating a gravitational pull toward EU standards regardless of Australian policy.
Similarly, the Australian government has been involved in multilateral discussions through the OECD, the Global Partnership on AI, and bilateral agreements with key partners. There’s an expressed desire to avoid fragmenting global AI development with incompatible national regimes, but also awareness that Australia’s interests don’t always align with larger economies.
The Copyright Act review has AI implications too, particularly around training data and synthetic content. If Australian copyright law takes a different position on whether training on copyrighted works is permitted than US fair use doctrine or the EU’s text and data mining exceptions, that affects where AI research happens and where models are deployed.
What Comes Next
Expect draft legislation in 2026, but probably not comprehensive AI regulation. More likely: targeted amendments to existing laws. Privacy Act changes to address AI-generated content and automated decision-making. Consumer law updates around AI-powered advertising and recommendation systems. Perhaps specific requirements for high-risk AI in government services.
The voluntary AI Ethics Framework will probably remain voluntary, but industry codes of practice developed under it might become enforceable through existing regulators. That’s the Australian pattern: soft law first, co-regulation if that fails, prescriptive rules as a last resort.
The debate over mandatory vs. voluntary approaches will continue, as will tensions between innovation promotion and consumer protection. That’s appropriate. AI regulation involves genuine tradeoffs, and getting it right matters more than moving quickly. What’s less clear is whether the current paced approach will prove sufficient if serious AI-related harms materialize before stronger protections are in place.