09 July 2021
The Australian Human Rights Commission’s Human Rights and Technology Final Report (Report) was recently tabled in the Australian Parliament. It is the culmination of a three-year national project by the Commission, and comes at a time of unprecedented technological growth and investment in AI. Corrs hosted a webinar to explore the Report’s key findings and recommendations.
The event featured a presentation by Human Rights Commissioner Edward Santow who, while recognising the human rights risks that can arise in artificial intelligence (AI)-informed decision-making, explained that the Commission ultimately recommended reforms to ensure that the law does not treat decision-making informed by AI differently from human decision-making.
The timing of the Report is particularly important as, on 18 June 2021, the Australian Government announced an A$124 million AI Action Plan to accelerate the development and adoption of new AI technologies by supporting the private sector, growing and attracting AI talent and directing AI toward solving national challenges. In this push to develop and adopt new AI technologies, it will be critically important for businesses designing and deploying AI to understand and consider the human rights implications.
We outline six of the Commission’s key recommendations in respect of the use of AI, and offer guidance for companies seeking to develop or use AI technologies in Australia.
The Report recommends regulation of ‘AI-informed decision-making’, defined as “a decision, or decision-making process, that is materially assisted by the use of an AI technology or technique, and where the decision has a legal or similarly significant effect for an individual.”
This definition can be broken down into three elements:
- a decision, or a decision-making process;
- that is materially assisted by the use of an AI technology or technique; and
- that has a legal or similarly significant effect for an individual.
The phrase ‘decision or decision-making process’ recognises that the use of AI in decision-making may affect a person’s human rights through:
- the final decision itself; or
- the process by which that decision is reached.
The Report recommends regulation to ensure that human rights are appropriately safeguarded against the use of AI in both final decisions and decision-making processes.
The Report recommends a materiality threshold to ensure that regulation does not capture uses of AI that play a trivial role in decision-making, which could chill innovation of important technology without a meaningful corresponding increase in human rights protection. For example, a human decision-maker who merely records their decision in a word-processing application with AI-powered features would not be engaging in material AI-informed decision-making.
Finally, the AI-informed decision must have a ‘legal or similarly significant effect’ for an individual. This expression is taken from the European Union’s General Data Protection Regulation (GDPR), where a ‘similarly significant effect’ has been described as an effect with an equivalent impact to a legal effect on an individual’s circumstances, behaviour or choices.
The GDPR gives the automatic refusal of an online credit application and e-recruiting practices without human intervention as examples of decisions with similarly significant effects; decisions of this kind are also likely to engage human rights more broadly.
The Report emphasises that its focus is to recommend technology-neutral regulation – that is, regulation that ensures organisations are suitably accountable for their AI-informed decision-making without imposing more onerous obligations than those that apply to conventional human decision-making.
The Report stresses that technology-neutral regulation is important to avoid a regulatory chill on beneficial AI innovation in Australia.
The Report’s most significant AI-related recommendations for companies include the following.
The Report recommends clarifying the law with a rebuttable presumption that a decision-maker’s legal liability for its decision is not affected by the fact that the decision was AI-informed.
The main field of law where this clarification is likely to be relevant is anti-discrimination. AI systems make decisions based on analysis of large databases of past human-made decisions. If that data indicates a trend of bias (for example, due to historically prevalent prejudices), that bias may be replicated in the decisions made by the AI system. If a company makes an AI-informed decision which is discriminatory due to underlying bias in the data set, it may be liable for breach of anti-discrimination law.
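To illustrate the mechanism, the following is a minimal Python sketch of how a model naively trained on historically biased decisions can replicate that bias. It is our illustration, not drawn from the Report, and all data, group labels and thresholds are hypothetical.

```python
# Illustrative only: how historical bias in training data can be
# replicated by a naively trained decision model. All data is hypothetical.

from collections import defaultdict

# Hypothetical past human-made loan decisions: (group, qualified, approved).
# Equally qualified group "B" applicants were historically approved less
# often than group "A" applicants -- a biased pattern in the data.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

# "Train": estimate approval rates per group from the historical decisions.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, qualified, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str) -> bool:
    """Naive model: approve if the group's historical approval rate > 50%."""
    approvals, total = counts[group]
    return approvals / total > 0.5

# Two equally qualified applicants receive different outcomes purely
# because of group membership -- the historical bias is replicated.
print("Group A applicant approved:", predict("A"))  # True
print("Group B applicant approved:", predict("B"))  # False
```

Real AI systems are of course far more sophisticated than this sketch, but the underlying dynamic is the same: a model optimised to reproduce past decisions will reproduce the patterns, including discriminatory ones, in those decisions.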
However, the clarification importantly does not alter the ordinary liability position for corporate decision-making. For example, the manufacturer of defective AI technology may still be liable under existing law, such as in negligence or under the Australian Consumer Law’s regime for defective goods.
The recommended liability scheme follows the fault-based liability norm in Australian law. This may be contrasted with proposed AI regulation in the European Union, which would impose strict joint and several liability upon companies that use ‘high-risk’ AI systems, as well as upon manufacturers, distributors and importers of defective products (including AI systems) – in effect, strict liability for defective AI systems across the entire supply chain.
The Report also aims to ensure that companies cannot rely upon their use of AI-informed decision-making systems to avoid existing legal obligations of transparency. It makes three important recommendations relating to transparency:
- individuals should be notified when AI is materially used in a decision that has a legal or similarly significant effect on them;
- individuals should be able to obtain reasons or an explanation for an AI-informed decision, just as they could for a comparable human decision; and
- courts should be able to order a company to explain how an AI-informed decision was reached, with an adverse inference able to be drawn against a company that cannot comply.
Particular examples of a company being unable to comply with such an order (and hence being subject to an adverse inference) include where:
- the AI system operates as an opaque ‘black box’, such that not even its developer can explain how a particular decision was reached; or
- the company procured the system from a third party and cannot access the underlying algorithm or training data (for example, because they are protected as trade secrets).
Biometric technology is technology that uses an individual’s physical or biological characteristics to identify or characterise that person. The Report highlights specific human rights risks of biometric technology, particularly facial recognition technology, which:
- can be deployed covertly and at scale, engaging the right to privacy; and
- has been shown to produce higher error rates for certain demographic groups, creating a risk of discriminatory outcomes.
Where facial recognition technology is used in what the Report calls ‘high-stakes decision-making’, such as policing, errors in identification can lead to significant risks of injustice and other human rights infringements.
As a result, and in a significant exception to the Commission’s technology-neutral approach, the Report recommends specific legislative regulation of the use of biometric technologies, including facial recognition, to provide stronger human rights protections. Until this is in place, the Report recommends a temporary moratorium on the use of any biometric technology, including facial recognition, in high-risk areas.
The Report also recommends that the Australian Government develop a tool to assist companies to undertake human rights impact assessments (HRIAs), in order to:
- identify the human rights that a proposed AI system may engage; and
- assess and mitigate those risks before the system is deployed.
While HRIAs would be optional for the private sector, they would help companies minimise legal and human rights risks arising from their use of AI. Companies that use HRIAs to build human rights considerations into their use of AI are also better placed to develop relationships of trust with consumers and other affected individuals. These trust relationships are important in minimising community resistance to the use of AI in their business.
In light of the Commission’s recommendations, companies should conduct an internal audit of any AI systems that are already in use or are proposed for use. An audit is advisable because AI systems deployed by a company may not be labelled as AI at all, but simply as tools named for their ultimate function.
Once the company’s AI systems are identified, the audit should also determine which of those systems are involved in AI-informed decision-making. It is particularly important to flag any biometric technology used in high-risk areas, as use of this technology may be affected by a moratorium if the Commission’s recommendations are adopted.
For each AI-informed decision-making system, companies can then implement legal and human rights safeguards, such as conducting an HRIA and introducing human oversight of the system’s operation to minimise the risk of unexpected bias in its decisions.
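By way of illustration, the audit’s output can be captured in a simple register along the lines of the Python sketch below. This is our hypothetical example, not a Commission template; all field names and example systems are invented.

```python
# Illustrative AI audit register (hypothetical fields and entries),
# reflecting the steps described above: identify AI systems, flag those
# involved in AI-informed decision-making, and flag biometric technology.

from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                    # how the tool is labelled internally
    business_function: str       # what the tool actually does
    ai_informed_decisions: bool  # materially assists decisions with legal
                                 # or similarly significant effects
    uses_biometrics: bool        # e.g. facial recognition
    hria_completed: bool = False # human rights impact assessment done?

    def needs_attention(self) -> bool:
        """Flag systems warranting safeguards such as an HRIA or human
        oversight (and possibly a moratorium if biometric)."""
        return (self.ai_informed_decisions or self.uses_biometrics) \
            and not self.hria_completed

# Hypothetical entries: note the tools are named by function, not as "AI".
register = [
    AISystemRecord("ResumeRanker", "shortlists job applicants", True, False),
    AISystemRecord("SiteEntry", "verifies staff identity at gates", True, True),
    AISystemRecord("DocDrafter", "suggests wording in documents", False, False),
]

for record in register:
    if record.needs_attention():
        print(f"Review required: {record.name} ({record.business_function})")
```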
***
As companies look to a future in which AI will play an increasingly important role, it is important to take proactive steps to mitigate the risks, including human rights risks, created by the burgeoning use of AI in business.
Authors
Head of Technology, Media and Telecommunications
Head of Responsible Business and ESG
This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.