
Australia releases proposed mandatory guardrails for AI regulation

In a significant step towards regulation of artificial intelligence (AI), the Federal Government has released proposed mandatory guardrails for high-risk AI. In parallel, it has introduced a Voluntary AI Safety Standard, which closely aligns with the mandatory guardrails and outlines best practice guidelines for the use of AI in Australia.

On 5 September 2024, the Australian Government released two closely aligned publications on AI regulation:

  • a proposals paper introducing mandatory guardrails for AI in high-risk settings (Proposals Paper); and

  • a Voluntary AI Safety Standard (Voluntary Standard).

These documents build on the Government’s previous Discussion Paper titled ‘Safe and Responsible AI in Australia’ (published June 2023) and its interim update (published 17 January 2024), and represent a significant step towards AI regulation in Australia.

The mandatory guardrails are proposed to apply to ‘high-risk’ AI and are not yet in place. The Voluntary Standard, by contrast, applies to all AI (not just high-risk AI) and is in place now but, as the name suggests, compliance with it is voluntary. Both are highly useful and informative for Australian organisations that are developing policies, procedures and contracts in relation to the use or development of AI.

While there is not yet any binding law in Australia specifically regulating the use or deployment of AI, the proposed mandatory guardrails may be given legal effect in some form in the future.

What are the proposed mandatory guardrails for high-risk AI?

The Proposals Paper sets out ten proposed mandatory guardrails that would require organisations developing or deploying high-risk AI systems to:

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

  2. Establish and implement a risk management process to identify and mitigate risks.

  3. Protect AI systems, and implement data governance measures to manage data quality and provenance.

  4. Test AI models and systems to evaluate model performance and monitor the system once deployed.

  5. Enable human control or intervention in an AI system to achieve meaningful human oversight.

  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.

  7. Establish processes for people impacted by AI systems to challenge use or outcomes.

  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

  9. Keep and maintain records to allow third parties to assess compliance with guardrails.

  10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails.

What types of AI use would the proposed mandatory guardrails apply to?

The mandatory guardrails would apply to ‘high-risk’ AI systems. A proposed definition of high-risk AI has been put to public consultation, with the Proposals Paper setting out two categories:

  • Category 1: High-risk AI based on intended and foreseeable uses. Within this category, the paper proposes two options: a principles-based approach and a list-based approach. The proposed principles would guide an organisation in deciding for itself whether a particular use is ‘high-risk’, whereas a list-based approach, similar to that taken in the EU and Canada, would identify specific areas and use cases deemed to be high-risk AI (e.g. recruitment and credit applications). The principles proposed for determining whether AI is high-risk would include having regard to the risk of:

    • adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations;

    • adverse impacts to an individual’s physical or mental health or safety;

    • adverse legal effects, defamation or similarly significant effects on an individual;

    • adverse impacts to groups of individuals or collective rights of cultural groups;

    • adverse impacts to the broader Australian economy, society, environment and rule of law; and

    • the severity and extent of those adverse impacts outlined above.

  • Category 2: General purpose AI. General purpose AI is defined as ‘an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems’. The Proposals Paper seeks feedback on whether the use of all general purpose AI should automatically be classed as high-risk AI, given the potential for these models to be used in unforeseen contexts for which they were not originally designed.

Who would need to comply with the proposed mandatory guardrails?

The Government proposes that the guardrails would apply across the AI supply chain, to both developers and deployers of AI, and throughout the AI lifecycle. There is a strong emphasis on testing, transparency and accountability, with the onus on developers and deployers of AI in high-risk settings to ensure their AI products comply with the mandatory guardrails.

How would the proposed mandatory guardrails be implemented?

The regulatory model ultimately adopted to establish the mandatory guardrails is likely to affect both the speed at which reforms are introduced and the consistency with which the laws are enforced. The Proposals Paper puts forward three potential regulatory models for consultation:

  1. A domain-specific approach which adopts the guardrails within existing regulatory frameworks as needed. This approach would seek to implement the guardrails on a sector-by-sector basis (e.g. by establishing requirements under Australia’s privacy, consumer, copyright, online safety and other laws). This approach would likely involve changes being rolled out earlier and incrementally. Regulators may also take different approaches to enforcing the law if a sector-by-sector approach is adopted.

  2. A whole-of-economy approach which introduces a new cross-economy AI-specific Act (for example, an Australian AI Act). This approach would establish new legislation which sets out the guardrails, thresholds and definitions and would likely involve establishing a new AI regulator to monitor and enforce compliance.

  3. A framework approach which introduces new framework legislation to adapt existing regulatory frameworks across the economy. This approach sits between the above two models and relies on a separate piece of legislation to establish the guardrails, thresholds and definitions but would also involve amending existing legislative regimes to facilitate compliance. For example, existing regulators such as the Office of the Australian Information Commissioner (OAIC) would be responsible for enforcing compliance for breaches of the AI provisions under the Privacy Act 1988.

What is the Voluntary AI Safety Standard?

The Voluntary Standard consists of ten voluntary guardrails, the first nine of which replicate the first nine proposed mandatory guardrails. The tenth, instead of requiring certification of compliance, requires that organisations ‘engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness’.

Unlike the proposed mandatory guardrails, the voluntary standards have been developed to provide guidance on the safe and responsible use and innovation of AI across all types and uses of AI, not just high-risk AI. The Voluntary Standard sets out detailed guidance on what is required of organisations when implementing and deploying AI systems, particularly when compared with the Proposals Paper, which presents only a high-level summary of what might be required under each mandatory guardrail.

What’s next?

The content, scope, application and regulatory model for the mandatory guardrails remain under consideration by the Government. The proposed mandatory guardrails (or aspects of them), and any legal requirement for organisations to comply with them, are not expected to be legislated until at least 2025.

The Voluntary Standard, on the other hand, is already in effect, with adherence voluntary. However, the Government has stated its intention that immediate implementation of the Voluntary Standard will help businesses begin to develop the practices that will be required in a future regulatory environment. The Voluntary Standard is also likely to be used to interpret the precise requirements of the forthcoming mandatory guardrails.

Key takeaways

Although the proposed mandatory guardrails have not yet been legislated, the Australian Government has signalled that it is taking the risks of AI seriously. Organisations should review their use of AI against the proposed requirements and the definitions of high-risk AI, and consider adopting the Voluntary Standard to build up internal governance processes in preparation for a future mandatory regulatory environment and to align with best practice.

In line with the first voluntary standard (and potentially the first mandatory guardrail), many organisations are now establishing internal governance frameworks and AI-specific policies (e.g. responsible AI policies and user policies for generative AI systems). This forward-looking, best-practice approach to AI governance will help align businesses at an organisational level in using AI productively and responsibly.


Authors

Kit Lee

Senior Associate

Amy Yu

Lawyer


Tags

Technology, Media and Telecommunications

This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.