06 October 2021
Directors and managers should understand the technology they are deploying in the business so that they can assess and mitigate any risks arising from its use. These risks can be varied and in some cases extremely complex, requiring consideration by subject matter experts of the technology and its impacts from the design stage through to end use.
Companies may incur liability for unlawful decisions made using AI-informed technology. AI systems make decisions based on analysis of large databases, which may include data relating to historical human-made decisions. If that data indicates a trend of bias (for example, due to historically prevalent prejudices), that bias may be replicated in the decisions made by the AI system.
Similarly, AI systems use algorithms that may reflect the prejudices of the engineers who developed them. If a company makes an AI-informed decision that is discriminatory due to underlying bias in the data set or algorithms – such as a hiring decision which factors in protected attributes such as race or gender – it may be liable for breach of anti-discrimination law.
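To illustrate the mechanism, the sketch below trains a simple model on synthetic, hypothetical hiring data in which one group was historically held to a higher bar. Everything in it is an assumption for illustration – the synthetic features, the thresholds and the choice of scikit-learn's LogisticRegression – but it shows how a model can reproduce historical bias even when the protected attribute itself is excluded, because a correlated feature acts as a proxy.

```python
# Minimal, hypothetical illustration of bias replication: all data is
# synthetic, and the features and thresholds are assumptions.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_applicant():
    group = random.choice(["A", "B"])
    # Postcode correlates strongly with group, so it can act as a proxy
    # for the protected attribute even when that attribute is excluded
    # from the model's inputs.
    postcode = (1 if group == "A" else 0) if random.random() < 0.9 else random.randint(0, 1)
    score = random.random()
    # Biased history: group B candidates were held to a higher bar.
    hired = score >= (0.5 if group == "A" else 0.7)
    return group, postcode, score, hired

data = [make_applicant() for _ in range(5000)]
X = [[postcode, score] for _, postcode, score, _ in data]  # group itself excluded
y = [hired for _, _, _, hired in data]

model = LogisticRegression().fit(X, y)

# The trained model reproduces the historical disparity via the proxy.
for g in ["A", "B"]:
    rows = [[postcode, score] for grp, postcode, score, _ in data if grp == g]
    rate = sum(model.predict(rows)) / len(rows)
    print(f"Predicted hire rate for group {g}: {rate:.2f}")
```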
Liability risks are likely to increase as regulation of AI use expands. For example, the Australian Human Rights Commission has recommended a moratorium on the use of biometric technology due to the high risk of human rights impacts. Companies should ensure that their deployment of AI does not conflict with expanding regulation.
There are a number of measures and processes that companies and general counsel can put in place to verify appropriate AI-informed decision-making.
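One such measure, for instance, could be a routine fairness audit of AI-assisted decision logs. The sketch below is purely illustrative: the decision log format, the group labels and the 0.8 "four-fifths" threshold (a rule of thumb drawn from practice in some overseas jurisdictions rather than from Australian law) are all assumptions.

```python
# Hypothetical fairness audit of AI-informed decisions; the decision
# log format and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """Selection rate per group from (group, selected) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest. Values well
    below 1.0 (e.g. under the 0.8 'four-fifths' rule of thumb used in
    some jurisdictions) warrant closer investigation."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical log of AI-informed hiring decisions: (group, selected).
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(selection_rates(log))         # {'A': ~0.67, 'B': ~0.33}
print(disparate_impact_ratio(log))  # 0.5 -> flags possible bias
```

An audit of this kind is a starting point only; the appropriate groupings, metrics and thresholds are matters for legal and subject matter expert judgment.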
As the use of technology expands, it is expected that directors will increasingly seek to use machine learning and AI to assist them in their own decision-making. At a minimum, directors will likely rely on AI-informed decisions taken elsewhere within the organisation. Where the AI is wrong, or has been built on flawed data sets, poor decisions – or even decisions that breach the law – may result.
The question for directors is whether they may be exposed to a breach of their statutory duty to exercise reasonable care and diligence. For example, directors are obligated to inform themselves about the subject matter of business decisions to the extent that they reasonably believe to be appropriate. It may be difficult for directors to comply with this obligation if they rely upon the conclusions drawn by an AI system when they do not fully understand the operation of that system.
Directors can take steps to mitigate their risks of breaching statutory duties and incurring personal liability for AI-informed decision-making.
Even if companies do not incur liability for technology-assisted decisions, they may still suffer reputational damage and an associated loss of public trust if those decisions impact upon human rights. Indeed, even where a company’s AI systems make no harmful decisions, a lack of transparency in AI-informed decision-making may contribute to public distrust of the company.
The risk of reputational damage associated with AI is particularly high in a social context of low public trust in AI – a recent report by the University of Queensland and KPMG indicated that only one in three Australians currently trust AI technology.
There are several voluntary tools that companies may use to reduce their reputational and liability risk and ensure that their AI systems are safe, secure and reliable. For example, the Australian Government has introduced voluntary AI Ethics Principles, which encourage companies deploying AI to ensure that their systems benefit individuals, society and the environment; respect human rights, diversity and individual autonomy; are inclusive, accessible and do not involve or result in unfair discrimination; uphold privacy rights and data protection; operate reliably and safely in accordance with their intended purpose; are transparent and explainable, with responsible disclosure to people significantly affected by them; can be challenged through timely contestability processes; and are subject to identifiable human accountability throughout their lifecycle.
Further, the Australian Human Rights Commission has recommended that private sector organisations adopt human rights impact assessments to determine how their use of AI systems engages human rights, and what compliance measures can be taken to ensure that human rights are not violated.
As we look ahead to a future in which emerging technologies will play an increasingly important role, it is vitally important that companies and boards take proactive steps to mitigate the associated legal, reputational and human rights risks.
This article is part of our publication Continuity Beyond Crises: Staying ahead of risk in an evolving legal landscape.
Authors
Head of Technology, Media and Telecommunications
Head of Responsible Business and ESG
This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.