08 October 2024
In a significant step towards regulation of artificial intelligence (AI), the Federal Government has released proposed mandatory guardrails for high-risk AI. In parallel, it has introduced a Voluntary AI Safety Standard, which closely aligns with the mandatory guardrails and outlines best practice guidelines for the use of AI in Australia.
On 5 September 2024, the Australian Government released two closely-aligned publications on AI regulation: a Proposal Paper proposing mandatory guardrails for AI in high-risk settings, and the Voluntary AI Safety Standard.
These documents build on the Government’s previous Discussion Paper titled ‘Safe and Responsible AI in Australia’ (published June 2023) and interim update (published 17 January 2024), and represent a significant step towards AI regulation in Australia.
The mandatory guardrails are proposed to apply to ‘high-risk’ AI and are not yet in place. The Voluntary Standard, by contrast, applies to all AI (not just high-risk) and is in place now but, as the name suggests, compliance with it is voluntary. Both are highly useful and informative for Australian organisations that are developing policies, procedures and contracts in relation to the use or development of AI.
While there is no binding law as yet specifically relating to the use or deployment of AI in Australia, it may be that the proposed mandatory guardrails could be given legal effect in some form in the future.
The Proposal Paper sets out ten proposed mandatory guardrails that would require organisations developing or deploying high-risk AI systems to:
The mandatory guardrails would apply to ‘high-risk AI’ systems. While a proposed definition of high-risk AI has been put to public consultation, the Proposal Paper sets out two categories:
The Government proposes that the guardrails would apply across the AI supply chain, to both developers and deployers of AI, and throughout the AI lifecycle. There is a strong emphasis on testing, transparency and accountability, with the onus on developers and deployers of AI in high-risk settings to ensure their AI products comply with the mandatory guardrails.
The regulatory model ultimately adopted to establish the mandatory guardrails is likely to affect both the speed at which reforms are introduced and the consistency with which they are enforced. The Proposal Paper puts forward three potential regulatory models for consultation:
The Voluntary Standard consists of ten voluntary guardrails, the first nine of which replicate the first nine proposed mandatory guardrails. The tenth, instead of requiring certification of compliance, requires that 'organisations engage stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness'.
Unlike the proposed mandatory guardrails, the voluntary guardrails have been developed to provide guidance on the safe and responsible use of, and innovation in, AI across all types and uses of AI, not just high-risk AI. The Voluntary Standard sets out detailed guidance on what is required of organisations when implementing and deploying AI systems, whereas the Proposal Paper presents only a high-level summary of what might be required under each mandatory guardrail.
The content, scope, application and regulatory model establishing the mandatory guardrails remain under consideration by the Government. The proposed mandatory guardrails (or aspects of them), and any legal requirement for organisations to comply with them, are not expected to be legislated until at least 2025.
The Voluntary Standard, on the other hand, is already in effect, though adherence is voluntary. The Government has stated that immediate implementation of the Voluntary Standard is intended to help businesses start developing the practices that will be required in a future regulatory environment, and the Standard is likely to be used to interpret the precise requirements of the forthcoming mandatory guardrails.
Although the proposed mandatory guardrails have not yet been legislated, the Australian Government has signalled that it is taking the risks of AI seriously. Organisations should review their use of AI against the proposed requirements and the definitions of high-risk AI, and consider adopting the Voluntary Standard to build up internal governance processes in preparation for a future mandatory regulatory environment and to align with best practice.
In line with the first voluntary guardrail (and potentially the first mandatory guardrail), many organisations are now establishing internal governance frameworks and AI-specific policies (e.g. responsible AI policies and user policies in the context of generative AI systems). This forward-looking best practice approach to AI governance will help align businesses at an organisational level when it comes to using AI productively and responsibly.
This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.