
Technology and human rights: emerging risks for companies and boards

As Australia treads a rapid path towards becoming a leading digital economy, corporates are increasingly adopting emerging technologies, including artificial intelligence (AI), to assist with various business operations and functions.

But while novel technologies offer exciting commercial opportunities, they can also create new legal, reputational and human rights risks that companies and boards should be taking proactive steps to mitigate.

Directors and managers should understand the technology they are deploying in the business, so that they can assess and mitigate any risks arising from its use. These risks can be varied and in some cases extremely complex, requiring consideration by subject matter experts of the technology and its impacts, from the design stage through to end use.

Liability risks for AI-informed decision-making

Companies may incur liability for unlawful decisions made using AI-informed technology. AI systems make decisions based on analysis of large datasets, which may include data relating to historical human-made decisions. If that data reflects a pattern of bias (for example, due to historically prevalent prejudices), that bias may be replicated in the decisions made by the AI system.

Similarly, AI systems use algorithms that may reflect the prejudices of the engineers who developed them. If a company makes an AI-informed decision which is discriminatory due to underlying bias in the dataset or algorithms – such as a hiring decision which factors in protected attributes such as race or gender – it may be liable for breach of anti-discrimination law.
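
To illustrate how such bias can be surfaced before it causes harm, the sketch below checks hypothetical hiring outcomes against the widely used 'four-fifths' heuristic for disparate impact. The data, group labels and 0.8 threshold are illustrative assumptions only; passing a statistical check of this kind is not, of itself, evidence of legal compliance.

```python
# Minimal sketch: detecting disparate impact in hypothetical hiring outcomes.
# The data, group labels and 0.8 threshold (the common "four-fifths" heuristic)
# are illustrative only; passing this check is not evidence of legal compliance.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates in a group who received a positive decision."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model outputs: 1 = hired, 0 = rejected, split by a protected attribute.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths heuristic
    print("Warning: outcomes may indicate bias; escalate for human review.")
```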

Liability risks are likely to increase as regulation of AI use expands. For example, the Australian Human Rights Commission has recommended a moratorium on the use of biometric technology due to the high risk of human rights impacts. Companies should ensure that their deployment of AI does not conflict with expanding regulation.

How can liability risks be mitigated?

There are a number of measures and processes that companies and general counsel can put in place to ensure appropriate AI-informed decision-making, including:

  • Obtaining contractual protections from the provider of the AI system. These may include warranties that the AI system is fit for purpose and has been trained on appropriate data, or indemnities against liability resulting from discriminatory decisions made by the AI system.

  • Taking operational steps to minimise the risk of harm resulting from the company’s use of AI. These may include ensuring that the AI system is rigorously tested in a safe environment prior to commercial use, that the data used to train the AI system is fit for purpose and free from biases, that the operation of the AI and the decisions it makes are subject to appropriate human oversight, and that appropriate procedures are put in place to handle complaints and redress any unintended harm.

  • Ensuring that an audit is conducted to determine what AI systems are already in use at the company or are proposed for future use. This will help general counsel understand the relevant risks that might arise from the company’s use of AI systems, and what mitigation measures would be appropriate to address those risks (a simple register of the kind such an audit might produce is sketched below).
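
As a concrete and purely illustrative example of what such an audit might produce, the sketch below models a simple register of AI systems with the fields a general counsel might want to interrogate. The field names, risk ratings and entries are assumptions for illustration, not a prescribed standard.

```python
# Minimal sketch of an AI-system register an audit might produce.
# Field names, risk ratings and entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                 # internal identifier for the system
    vendor: str               # supplier, or "in-house"
    business_use: str         # function the system supports
    decision_impact: str      # what the system's outputs influence
    human_oversight: bool     # is a person reviewing decisions?
    risk_rating: str          # e.g. "low", "medium", "high"
    status: str               # "in use" or "proposed"

register = [
    AISystemRecord("resume-screener", "ExampleVendor", "recruitment",
                   "shortlisting of candidates", True, "high", "in use"),
    AISystemRecord("invoice-classifier", "in-house", "finance",
                   "routing of supplier invoices", False, "low", "proposed"),
]

# Surface the systems that most need general counsel's attention.
for record in register:
    if record.risk_rating == "high" or not record.human_oversight:
        print(f"Review required: {record.name} ({record.status}) - "
              f"risk {record.risk_rating}, human oversight: {record.human_oversight}")
```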

Directors’ duties and personal liability

As the use of technology expands, it is expected that directors will increasingly seek to use machine learning and AI to assist them in their own decision-making. At a minimum, directors will likely rely on AI-informed decisions taken elsewhere within the organisation. Where the AI is wrong, or has been built on flawed datasets, incorrect decisions, or even decisions that breach the law, may result.

The question for directors is whether they may be exposed to a breach of their statutory duty to exercise reasonable care and diligence. For example, directors are obligated to inform themselves about the subject matter of business decisions to the extent that they reasonably believe to be appropriate. It may be difficult for directors to comply with this obligation if they rely upon the conclusions drawn by an AI system when they do not fully understand the operation of that system.

How can directors’ risks be mitigated? 

Steps that directors can take to mitigate their risks of breach of statutory duties and personal liability for AI-informed decision-making include:

  • Ensuring that an audit is conducted to determine what AI systems are already in use at the company or are proposed for future use. An AI audit helps directors understand which of the information they receive and the decisions they make have been influenced or informed by AI, and empowers them to further interrogate aspects of the operation of the AI where necessary.

  • Requiring management to implement human rights safeguards. These may include conducting human rights impact assessments for each system and ensuring human oversight over the operation of the system to minimise the risks of unexpected bias in decisions (a minimal human-in-the-loop gate is sketched after this list).

  • Increasing the technology capabilities of the board through targeted training. This will enable the board to provide appropriate oversight of the company’s use of AI. A recent study by the Australian Institute of Company Directors and the University of Sydney showed that only 3% of surveyed company directors brought technological expertise to the board.
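
The human oversight safeguard mentioned above can be made concrete with a simple gating rule: AI recommendations are applied automatically only where confidence is high and no individual is directly affected, and everything else is queued for human review. The sketch below is a minimal illustration; the threshold, routing rule and function names are assumptions, and a real oversight policy would be set by the board and risk function.

```python
# Minimal sketch of a human-in-the-loop gate for AI-informed decisions.
# The confidence threshold and routing rule are illustrative assumptions;
# a real oversight policy would be set by the board and risk function.

CONFIDENCE_THRESHOLD = 0.90  # below this, a person must decide

def route_decision(ai_recommendation: str, confidence: float,
                   affects_individual: bool) -> str:
    """Return how a recommendation is handled: automated path or human review."""
    # Decisions about individuals always get human review, regardless of
    # model confidence, to reduce the risk of unexpected bias.
    if affects_individual or confidence < CONFIDENCE_THRESHOLD:
        return f"HUMAN REVIEW: recommendation '{ai_recommendation}' queued"
    return f"AUTOMATED: '{ai_recommendation}' applied and logged for audit"

print(route_decision("approve credit limit increase", 0.97, affects_individual=True))
print(route_decision("reorder stock item #1042", 0.95, affects_individual=False))
```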

Reputational and human rights risks of AI use

Even where companies do not incur liability for technology-assisted decisions, they may still suffer reputational damage and an associated loss of public trust if those decisions impact upon human rights. And even where a company’s AI systems do not make harmful decisions, a lack of transparency in AI-informed decision-making may contribute to public distrust of the company.

The risk of reputational damage associated with AI is particularly high in a social context of low public trust in AI – a recent report by the University of Queensland and KPMG indicated that only one in three Australians currently trust AI technology. 

How can human rights and reputational risks be mitigated? 

There are several voluntary tools that companies may use to reduce their reputational and liability risk and ensure that their AI systems are safe, secure and reliable. For example, the Australian Government has introduced voluntary AI Ethics Principles, which encourage companies deploying AI to ensure that: 

  • they respect human rights;

  • they protect diversity and the autonomy of individuals;

  • the outcomes of their decisions are fair and remain inclusive and accessible;

  • there is a measure of transparency and explainability on any decisions made using AI;

  • consumers are able to contest those decisions; and, ultimately

  • those responsible for the deployment of the technology are accountable for the decisions that result.

Further, the Australian Human Rights Commission has recommended that the private sector adopt human rights impact assessments to determine how a company’s use of AI systems engages human rights, and what compliance measures can be taken to ensure that human rights are not violated.

***

As we look ahead to a future in which emerging technologies will play an increasingly important role, it is vitally important that companies and boards take proactive steps to mitigate the associated legal, reputational and human rights risks. 

This article is part of our publication Continuity Beyond Crises: Staying ahead of risk in an evolving legal landscape.


Authors

James North

Head of Technology, Media and Telecommunications

Dr Phoebe Wynn-Pope

Head of Responsible Business and ESG


Tags

Board Advisory
Technology, Media and Telecommunications
Responsible Business and ESG

This publication is introductory in nature. Its content is current at the date of publication. It does not constitute legal advice and should not be relied upon as such. You should always obtain legal advice based on your specific circumstances before taking any action relating to matters covered by this publication. Some information may have been obtained from external sources, and we cannot guarantee the accuracy or currency of any such information.