09 June 2023
Australian companies may soon be required to comply with a regulatory framework for the use of Artificial Intelligence (AI), which is to be developed following a public consultation run by the Department of Industry, Science and Resources.
The Safe and Responsible AI in Australia discussion paper (Discussion Paper), announced by Minister for Industry and Science Ed Husic last week, poses a range of consultation questions concerning the direction and scope of Australia’s approach to regulating the rapidly developing technology. The paper has a particular focus on the adoption of a ‘risk-based framework’ favoured in other advanced economies, including the European Union (EU).
Organisations now have an opportunity to make submissions on the direction of Australia’s regulatory approach, with public consultation closing on 26 July 2023. As this is an early-stage consultation, we anticipate further rounds of public consultation will follow, although no timeline has been provided for the finalisation of legislation.
As regulators in jurisdictions around the world race to formulate a response to increasingly powerful AI platforms like OpenAI’s ChatGPT, a number of different approaches have taken shape.
One model focuses on technological neutrality, as proposed in a white paper currently open for consultation in the United Kingdom, titled AI regulation: a pro-innovation approach. Under this proposal, no AI-specific laws would be developed. Instead, regulators are advised to consider five principles when applying existing regulatory frameworks to the use of AI. The white paper also proposes regulatory sandboxes with relaxed rules, with a view to avoiding stifling development in the AI field.
Contrasting with this is the more prescriptive approach taken by the People’s Republic of China, which is developing regulations for specific use cases of AI. It has enacted rules governing how companies develop ‘deep synthesis technology’, used to generate deepfakes, and is currently consulting the public on draft rules to manage how companies develop generative AI products.
The risk-based model, which is the focus of the Discussion Paper, can be seen as a ‘Goldilocks’ model, striking a balance between the two other approaches, and is the approach to AI regulation endorsed by G7 digital and technology ministers at their meeting in April this year. The European Parliament is also set to vote this month on a risk-based regulatory model that would impose separate regulatory requirements on AI systems posing minimal, limited, high and unacceptable risks. Under the EU’s AI Act, AI posing a minimal risk would be permitted with no mandatory obligations, while AI posing an unacceptable risk would be banned. The Act is likely to receive sufficient support and be adopted by the end of 2023.
While no commitments have been made regarding Australia’s approach to regulating AI, we consider it likely that a risk-based model will be adopted, given the international climate and the number of consultation questions devoted to the model in the Discussion Paper. However, as the regulation of AI remains uncharted territory worldwide (even in the EU, which typically leads the world in technology regulation), issues may arise with the risk-based approach as the technology evolves.
One issue already beginning to emerge is whether certain aspects of AI require specific rules that cannot be dealt with under a universal framework. This was seen in last-minute amendments to the EU’s AI Act, which inserted an obligation on generative AI platforms (platforms that generate text, images, video and other media) to disclose where models have been trained on copyright works. This would place a significant burden on large language models like ChatGPT, which are trained on publicly available texts in which copyright may subsist. China is also addressing generative AI specifically, with a draft law now open for public comment that would impose requirements on the content generative AI models are trained on. Australia’s Discussion Paper is alive to the issue of technology-specific regulation, posing the consultation question of how a risk-based framework would apply to foundation models for generative AI (large language models and multimodal foundation models). This is an important issue for many organisations, given that many AI platforms share the same underlying foundation model; in addressing it, lawmakers should be mindful of the risk of duplication and potential inconsistency in any compliance measures.
The Discussion Paper provides a ‘possible draft’ risk management framework, modelled on the EU’s, under which obligations would attach to AI applications categorised as low, medium or high risk.
Though the specific obligations to be imposed under this framework are subject to consultation, the Discussion Paper does provide the following draft elements, with varying standards of compliance for each risk level. We have set these out below alongside our comments on issues that organisations should consider.
Proposed element for risk-based regulation | Comment |
Impact assessments to consider the impact of AI, ranging from basic self-assessments to independent expert assessments | This aligns with the Australian Human Rights Commission’s proposal for compulsory human rights impact assessments for AI implemented by government bodies. The scope of impact assessments, and whether they will be required to address human rights impacts, is not yet clear |
Notification requirements to inform end users where AI is used in ways that may materially affect them | Jurisdictions have set different notification standards for automated decision making, including whether a decision must be solely or substantially automated before notification and other obligations arise. How ‘use of AI’ and ‘materially affect’ are defined will determine the impact of this requirement |
‘Human in the loop’ requirements for human sign-off on certain high-risk AI decisions, depending on a range of factors including the complexity and risk of the decision | The paper acknowledges that human oversight may be overly burdensome where AI is deployed at scale, which may describe a large portion of AI implementations. It is not yet clear how the requirements might be drafted to address this issue |
Explanations of AI decisions provided to end users, experts and regulators | ‘Explainability’ continues to be an important legal consideration in AI, both for compliance with proposed regulatory frameworks around the world and for determining liability when errors occur. While AI companies are working to address this, it remains to be seen whether true explainability is technically achievable, particularly for complex machine learning systems such as deep neural networks |
Training, including potential requirements to nominate employees responsible for the oversight of AI, as well as requirements for monitoring and documenting the implementation and outcomes of AI, ranging from internal monitoring to external audits | Entities will need to modify internal governance structures and policies to meet these requirements |
The paper also poses more open-ended consultation questions about the general direction of Australia’s AI regulation, including whether sector-specific regulation should be considered, how regulation should apply to foundation models such as those underlying ChatGPT, and whether some AI implementations should be banned outright. This last question goes to a key difference between the Discussion Paper’s draft framework and the model proposed in the EU, which imposes a complete ban on certain AI implementations posing an ‘unacceptable risk’, such as government-sponsored social scoring.
Another open question is the extent to which AI-related proposals in other discussion and policy papers, namely the recent Privacy Act Review Report and the Australian Human Rights Commission’s ‘Human Rights and Technology’ Final Report, will be aligned with Australia’s eventual AI law. For example, the Privacy Act Review Report proposes that privacy impact assessments be undertaken for activities with high privacy risks, including automated decision making.
AI is developing rapidly, as are attempts by regulators around the world to grapple with and address its risks.
The Safe and Responsible AI in Australia discussion paper represents Australia’s first step towards defining its own approach to regulating AI. Regardless of whether Australia adopts a risk-based framework, or chooses another approach, the technical and regulatory burden on entities implementing AI will likely be significant.
While we await the new AI regulatory framework, organisations should actively manage the risks that AI presents.
This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.