08 May 2019
As similar legal and policy developments start to emerge in Australia – the recent release of Data61’s Ethics Framework being one example – we consider whether the approaches being taken to regulate AI in key overseas jurisdictions like Europe and the US are influencing AI policy-making in Australia.
The Coordinated Plan aims to foster the development and use of AI and robotics in Europe, and has a number of objectives, including the development of ethics guidelines and ensuring the EU remains competitive in the AI sector. The plan also proposes joint action by EU Member States in four key areas: increasing investment, making more data available, fostering talent, and ensuring trust.
What are the EU Ethics Guidelines?
Of the four key areas, the Ethics Guidelines are of most interest from a regulatory perspective. The Ethics Guidelines proposed in the Coordinated Plan are designed to ‘maximise the benefits of AI while minimising its risks’.
Following the publication of draft Ethics Guidelines in December 2018 (which received more than 500 comments), revised Ethics Guidelines were released by the EU’s High Level Expert Group on Artificial Intelligence on 8 April 2019.
These are focused on creating a concept of ‘Trustworthy AI’, which comprises three core components that should be met throughout an AI system’s life cycle: the AI should be lawful (complying with all applicable laws and regulations), ethical (adhering to ethical principles and values) and robust (from both a technical and a social perspective).[1]
These three core components underpin seven requirements for ‘Trustworthy AI’ – human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability – many of which closely align with existing privacy laws, and particularly the EU General Data Protection Regulation (GDPR).
The Ethics Guidelines are addressed to all ‘stakeholders’ (any person or organisation that develops, deploys, uses or is affected by AI), and are intended to go ‘beyond a list of ethical principles, by providing guidance on how such principles can be operationalised in socio-technical systems’. The Ethics Guidelines also include practical checklists that stakeholders can use when implementing AI into their organisations.
In addition to the immediate potential uses for stakeholders, the Ethics Guidelines are designed to foster discussion on an ethical framework for AI at a global level, and are likely to be an influential reference document for policy and lawmakers around the world, including those in Australia.
For lawmakers and lawyers, the Ethics Guidelines also provide an insight into how laws may need to adapt to deal with the increasing use and prevalence of AI. While it is not proposed that the Ethics Guidelines be legally binding, the EU Commission has revealed that stakeholders will be able to voluntarily endorse and sign up to a ‘pilot phase’, commencing in June 2019, to test whether the guidelines can be effectively applied to AI systems.
In addition to considering compliance with EU standards and laws, the ability to voluntarily endorse (and apply) the Ethics Guidelines may become an important step for AI businesses in Australia that are considering entry into the EU market.
In early April 2019, the ‘Algorithmic Accountability Act’ was introduced as a bill to the US Congress.
If passed, the bill would require certain organisations to conduct ‘automated decision system impact assessments’ and ‘data protection impact assessments’ for algorithmic decision-making systems (including AI systems). In short, affected organisations would be required to proactively evaluate their algorithms to prevent inaccurate, unfair, biased or discriminatory decisions.
The bill would place regulatory power in the hands of the US Federal Trade Commission, the same agency with responsibility for consumer protection and antitrust regulation. It would apply to organisations with annual revenue above US$50 million, and also to data brokers and businesses that hold data for over one million consumers.
While the introduction of the bill has been praised as an important step towards AI regulation, it is unclear whether or when it will become law, largely due to the current political environment in the US. It is, however, likely to remain an important topic leading into the 2020 US elections, with multiple large US tech companies increasingly under the spotlight for their use of automated decision making systems.
As part of the 2018 Federal Budget, the Federal Government pledged to invest almost $30 million towards improving Australia’s capability in AI and machine learning.
Of this investment, the Government has allocated approximately $3 million to Data61 (a division of the CSIRO) to develop an AI ‘technology roadmap’ and an AI ‘ethics framework’. It is intended that these documents will help to pave the way forward for AI innovation and policy making in Australia. It is understood that the remainder of the $30 million Federal Government investment is to be distributed among several organisations including Standards Australia and Co‑operative Research Centres.
On 5 April 2019, Data61 released its discussion paper titled Artificial Intelligence: Australia’s Ethics Framework. The paper is intended to encourage a conversation about how Australia develops and uses AI. It makes direct reference to the developments in Europe (and elsewhere), demonstrating that the Australian draft framework has, unsurprisingly, been influenced by the approach being taken to regulate AI in key overseas jurisdictions like Europe.
The Data61 paper bases the proposed ethics framework on eight ‘Core Principles for AI’ – generates net benefits; do no harm; regulatory and legal compliance; privacy protection; fairness; transparency and explainability; contestability; and accountability – which are designed to guide organisations in the use or development of AI systems.[2]
The principles have clear similarities to the seven requirements for ‘Trustworthy AI’ included in the EU Ethics Guidelines.
The Commonwealth Department of Industry, Innovation and Science has invited written submissions on the proposed Australian ethics framework from industry and other interested parties.
Submissions in response to the Data61 discussion paper are due by 31 May 2019.
The EU Coordinated Plan and the introduction of the ‘Algorithmic Accountability Act’ as a bill in the US underline the importance that governments are placing on AI and its expected impact on society and the global economy.
It is likely that, in due course, an independent certification process will be developed for AI systems similar to the ‘Conformité Européenne’ or ‘CE’ marking of electronic devices. CE marking has become a respected and internationally recognised certification that indicates that a product conforms with particular health, safety, and environmental standards.
A certification process for AI systems would clearly need to be more sophisticated and address a wide range of matters, including those in the Ethics Guidelines. It would also provide suppliers of AI systems with a service mark that could be used to provide consumers with confidence that the AI system has been independently verified to meet certain standards.
It is certainly an exciting time for regulatory and policy developments relating to AI. We will continue to monitor and report on AI regulatory developments overseas and in Australia.
[1] Ethics Guidelines for Trustworthy AI – High-Level Expert Group on Artificial Intelligence (8 April 2019).
[2] Artificial Intelligence: Australia’s Ethics Framework – Data61 discussion paper (5 April 2019).