17 September 2019
Artificial intelligence (AI) and automation more broadly continue to be identified as the next frontier in productivity enhancement and growth. Last year, McKinsey estimated that AI could deliver an additional $13 trillion of global economic output by 2030, boosting global GDP by approximately 1.2% a year.[1]
Consistent with the trend, it is likely that Australian boards will increasingly look to AI and machine learning to improve the quality of their decision making. But can an algorithm run a company instead of a director?
The term ‘AI’ is often used synonymously with machine learning, but this is not strictly correct.
True AI exhibits features of human-like intelligence and the ability to exercise human-like judgement in decision making. Machine learning tools, by contrast, conduct statistical analysis of data sets to identify patterns, but do not exercise 'judgement' in reaching conclusions. Despite these differences, both AI and machine learning tools rely on large, high-quality data sets to improve, and both will inevitably make mistakes along the way.
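To make the distinction concrete, the following is a minimal sketch, in Python using the scikit-learn library, of machine learning in this narrow sense. The data, feature names and labels are entirely invented for illustration: the model identifies a statistical pattern in historical records and extrapolates it, with no capacity to judge whether the pattern is sensible.

```python
# A minimal sketch of 'machine learning' in the narrow sense described above.
# All data and features are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical loan records: [years_in_business, prior_defaults]
X = [[1, 2], [2, 3], [8, 0], [10, 1], [3, 2], [12, 0]]
y = [0, 0, 1, 1, 0, 1]  # 1 = loan repaid, 0 = loan defaulted

model = LogisticRegression().fit(X, y)

# The model simply extrapolates the pattern it found; it cannot ask whether
# the training data was representative or the outcomes fairly labelled.
print(model.predict([[9, 0]]))   # likely predicts 1 (repaid)
print(model.predict([[2, 3]]))   # likely predicts 0 (default)
```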
Predictions of robots in the boardroom are not far-fetched. In late 2016, OMX-listed Tieto Corporation announced that it had appointed an AI platform known as Alicia T to be a member of its executive leadership team. Alicia T is equipped with a conversational interface that allows its human counterparts to ask it questions. The platform even has a vote on some management decisions.
In a similar vein, Hong Kong venture capital firm Deep Knowledge Ventures appointed an algorithm known as Vital to help the fund make its investment decisions. These appointments reflect a growing acceptance that machine learning may be capable of making better business decisions than human beings.
For the time being, the answer to that question is no: a robot cannot be a director under Australian law, since a director must, by definition, be a 'person'. We do expect, however, that directors will increasingly seek to use machine learning and AI to assist their own decision making, and to rely on decisions taken elsewhere within the organisation that are the product of AI. In this context, it is critical that directors are aware of the legal risks associated with using AI and how to manage them properly.
AI is not foolproof, and directors must expect that some decisions made by AI will be wrong. This may be for numerous reasons: the data set on which the tool was trained may be incomplete, inaccurate or biased; the algorithm itself may be flawed; or the tool may be applied to circumstances for which it was not designed.
Where the AI is wrong, this can result in wrong decisions or even decisions that breach the law. For example, in the human resources context, the use of AI tools in conjunction with data about previous successful employees to predict which candidates are most likely to be successful in the future may simply reinforce existing biases or discrimination in hiring practices. The issue for directors is whether they might be exposed to a breach of their duty to exercise reasonable care and diligence as a result of the failure of the AI.
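By way of a hypothetical illustration of the hiring example, the short sketch below (invented data, features and encodings throughout) shows how a model trained on historical outcomes that were themselves biased simply learns to reproduce that bias.

```python
# If historical 'successful hire' labels were themselves biased (here,
# perfectly correlated with gender), a model trained on them learns and
# reproduces that bias. All data and columns are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [test_score, gender], with gender encoded as 0/1.
# The historical labels favoured gender == 1 regardless of test score.
X_history = [[90, 0], [85, 0], [70, 1], [65, 1], [88, 0], [60, 1]]
y_history = [0, 0, 1, 1, 0, 1]  # 1 = labelled a 'successful hire'

model = DecisionTreeClassifier().fit(X_history, y_history)

# Two new candidates with identical test scores but different genders:
print(model.predict([[80, 0]]))  # -> [0] rejected
print(model.predict([[80, 1]]))  # -> [1] recommended
```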
There are three important safe harbours available under the Corporations Act to directors who are accused of breaching their duty to exercise reasonable care and diligence. These are:
1. the business judgement rule;
2. the right to delegate; and
3. the right of reliance.
Australian courts have not yet had an opportunity to consider how those safe harbours might respond to a case where an impugned decision was made by or with the assistance of AI. However, a first principles assessment suggests that the safe harbours might not be available if directors were to simply adopt decisions made by AI without exercising independent judgement.
1. The business judgement rule
Under section 180(2) of the Corporations Act, a director who makes a business judgement is taken to have discharged his or her duty of care and diligence if they:
1. make the judgement in good faith for a proper purpose;
2. do not have a material personal interest in the subject matter of the judgement;
3. inform themselves about the subject matter of the judgement to the extent they reasonably believe to be appropriate; and
4. rationally believe that the judgement is in the best interests of the corporation.
There would seem to be two potential obstacles to a director who relies on AI taking advantage of the business judgement rule (assuming that items 1, 2 and 4 are made out).
The first is whether the director has made a ‘business judgement’ at all. Under section 180(3), a business judgement means any decision to “take or not take action in respect of a matter relevant to the business operations of the corporation”. In ASIC v Rich, Austin J noted that the decision must be ‘consciously made’ and that the director must have ‘turned his or her mind to the matter’. Austin J’s language seems to attach to the impugned decision itself rather than to the preceding decision to make that decision using AI. It would appear, therefore, that a director who wholly hands over decision making to AI does not make a business judgement to which the defence can attach.
This point is further underscored by the requirement in section 180(2)(c) that the director must have informed themselves about the subject matter of the judgement ‘to the extent they reasonably believe to be appropriate’. Again, this requirement appears to attach to the impugned decision and is not satisfied by a director who determines that a class of decision making can be best left to AI. If any of those decisions turn out to be incorrect, the director can hardly say that they have informed themselves about the subject matter of that decision in the manner required by section 180(2)(c).
2. The right to delegate
Section 198D(1)(d) of the Corporations Act entitles directors to delegate any of their powers to another ‘person’. Again, the reference to a ‘person’ here precludes delegation to a machine. The directors may, however, choose to delegate to a person (such as an employee) who they know will rely on the use of AI for the purposes of discharging that power.
Under section 190(2)(b), a director is not responsible for the actions of the delegate where the director believed on reasonable grounds, in good faith and after making proper inquiry if the circumstances indicated the need for inquiry, that the delegate was reliable and competent in relation to the power delegated. The question, therefore, is what inquiry directors should undertake before they delegate any of their decision-making power to a person who will use AI in exercising that power.
We would suggest that the proposed use of AI constitutes circumstances that ‘indicate the need for inquiry’ as to the reliability and competence of the decision maker within the meaning of section 190(2)(b).
Given that the decision will be made or informed by AI, this likely translates into an obligation on directors to satisfy themselves as to the reliability and competence of the AI itself. A director who fails to interrogate the algorithm or data set, or to question the appropriateness of the particular platform for the duties being delegated, risks a court finding that the director has not satisfied himself or herself as to the reliability and competence of the delegate. In that case, the director will be liable for any failure of the delegate as if it were the director's own breach of duty (see section 180(1)).
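What that inquiry might look like in practice is a matter for each board, but one simple, hypothetical form of it is sketched below: holding back part of the data and measuring how often the tool's decisions are actually correct, rather than accepting its reliability on trust. The data set is synthetic and the model is a stand-in for the delegated tool.

```python
# A sketch of one basic 'reliability' check: test the tool against held-out
# records it has never seen. Synthetic data; the model stands in for the tool.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the company's historical decision data.
X, y = make_classification(n_samples=200, random_state=0)

# Hold back a quarter of the records for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Stand-in for the AI tool whose use is being delegated.
tool = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# How often do the tool's decisions match known outcomes on unseen data?
# A board might ask for this figure, and how it was produced, before relying
# on the tool.
print(f"held-out accuracy: {accuracy_score(y_test, tool.predict(X_test)):.2f}")
```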
3. The right of reliance
In certain circumstances, a director is entitled to rely on information or advice taken from an employee, professional adviser, expert or another director.
Section 189 of the Corporations Act provides that a director's reliance on such information or advice will be deemed reasonable for the purposes of discharging the director's duty of care and diligence if the reliance was in good faith and made after an independent assessment of the information or advice, having regard to the director's knowledge of the corporation and the complexity of its structure and operations.
There is no apparent reason why a director would not be entitled to rely on information or advice that has been generated by the relevant adviser with the benefit of AI. What is not clear, however, is whether section 189 allows a director to rely directly on the output of AI itself. This turns on whether the court would be willing to regard the AI tool as a ‘professional adviser or expert’ within the meaning of section 189.
While it is unlikely that Parliament intended those words to include a machine, the wording does not necessarily preclude such a finding. Demonstrating that the AI is expert in relation to a particular subject, however, would require strong evidence as to the workings of the automated decision making and its application to the subject matter of the decision. The safer and more likely course is therefore for directors to rely on the advice of an employee or expert that has used AI in forming the advice.
Where a director relies on the advice of an employee that is generated with the help of AI, the director must believe on reasonable grounds that the employee is 'reliable and competent in relation to the matters concerned'. In the case of a professional adviser, the director must believe on reasonable grounds that the 'matter is within the person's professional or expert competence'. This creates a potential disconnect where machine learning is used to reach a decision, as the person who is expert in the application of AI may not be expert in the subject matter to which the AI is being deployed. The language of the Corporations Act seems to require expertise in relation to the subject matter, rather than expertise in the way that decisions are made. Following this logic through, the reliance defence is only available where the adviser has taken the output of the AI and applied their subject matter expertise to it before providing advice to the board.
The final requirement – that the director must have made an ‘independent assessment’ of the information or advice on which he or she relies – is perhaps the most significant.
This goes beyond the equivalent requirement in the business judgement rule (which requires that the director be informed about the subject matter of the judgement 'to the extent they reasonably believe to be appropriate') or the delegation right (which requires the director to make proper inquiry that the delegate is reliable and competent), in that it requires the director to actively interrogate the advice itself. The degree of interrogation required will vary depending on the gravity of the decision and its potential consequences for the company. On any assessment, however, a director must not simply follow a decision formed by AI; they must form their own view on the issue if the reliance defence is to be made out.
It is clear from the above analysis that directors who wish to make use of AI should do so as an aid to their own decision making, rather than as a substitute for making an independent assessment.
On every level, the law continues to expect directors to exercise an inquiring mind as to the matters before them and to interrogate the advice and information on which they rely. The risk of automation bias, where humans are inclined to assume that a decision made by a machine must be correct, is significant.
In the case of AI tools, directors will need to invest in their understanding of the technology and how it is being deployed. This can be challenging in the context of complex proprietary systems, but at the very least, directors should be requiring rigorous testing of the outputs of AI tools for inbuilt biases and other problems.
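As one hypothetical example of such testing, the sketch below compares a tool's selection rates across two groups, a common proxy test for disparate impact. The group labels and audit data are invented, and the 'four-fifths' figure mentioned in the comments is a commonly cited rule of thumb from US employment practice, not a statement of any Australian legal standard.

```python
# A minimal sketch of output testing: compare the tool's selection rates
# across groups. Group labels and audit data are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, selected_bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, is_selected in decisions:
        totals[group] += 1
        selected[group] += int(is_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical audit of the tool's decisions on a test data set.
audit = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(audit)
ratio = min(rates.values()) / max(rates.values())
print(rates, ratio)
# A ratio well below 1.0 (for example, under the commonly cited 0.8
# 'four-fifths' rule of thumb) would warrant further interrogation.
```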
At the end of this article, we have set out a series of questions that directors may choose to ask about the use of AI tools in their company, in order to guard against the risks identified above. In most cases, however, we would recommend that directors seek advice from their General Counsel or another legal adviser about the use of AI tools in decision making before those tools are deployed.
Proponents of AI may complain that imposing requirements on directors to ‘second guess’ AI defeats the purpose of the technology, and risks impeding innovation and good decision making in Australian boardrooms. We consider, however, that the current state of the law is well placed to both support the further implementation of AI tools and preserve good governance in decision making.
It is important that responsibility for corporate decisions continues to rest with a human being who is ultimately answerable to shareholders. The tension between automated decision making and human accountability will support the development of good AI and the sensible application of new tools to boardroom decisions.
Further regulation of AI is also on the horizon, with a number of jurisdictions considering the ethical and liability implications of the use of these technologies.
[1] McKinsey & Company, Notes from the AI frontier: Modelling the impact of AI on the world economy (April 2018)