September 2020


Revving up AI and directors’ duties – an insurance view


by Natania Locke and Helen Bird*


Much hype is attached to the potential of artificial intelligence (AI) to change society.

That extends to the future role of AI in corporations, with suggestions ranging from AI augmenting the capabilities of directors and other governance professionals to the more extreme prediction that AI will completely replace directors and managers.

Will we see ‘self-driving’ corporations in the way we anticipate self-driving cars? What impact will that have on the ideas of corporate governance that we have grown used to, such as that boards of directors are accountable to shareholders and owe specific legal duties that can be enforced against them by ASIC, the corporate regulator?

In a newly published article, Armour and Eidenmüller consider the implications of today’s AI for corporations and corporate governance,1 as well as what it would mean for corporations if AI were to achieve singularity. ‘Singularity’ refers to the state in which AI achieves general intelligence equal to or greater than human intelligence. We share some of their thoughts with you and, in particular, what they may mean for the insurance industry.

Today’s AI capabilities and governance

Armour and Eidenmüller conclude that the primary effect of today’s AI capability is to augment human decision-making. That has two impacts. First, it reduces internal agency and coordination costs, as more activities can be performed by AI models, removing humans from the equation. ‘Agency costs’ refers to the costs a business incurs when managers act in their own interests rather than optimally in the interests of investors. Armour and Eidenmüller argue that automated systems do not act self-interestedly, so agency costs should fall. Second, however, that discretion is replaced by discretion over the establishment and testing of the new systems, a complex task that demands that directors understand the systems and the technology underlying them.

While we agree with the basic tenet of their reasoning,2 it is important to keep certain realities in mind. Integrating AI systems into legacy IT systems with a view to replacing typical administrative management functions may be easier said than done. There will be some risk of system failure, accompanied by the associated disruption to operations, compliance and possibly consumer satisfaction. It may simply end up in the ‘too-hard’ basket.

Even if the implementation costs are considered warranted, developing such systems will be time-consuming and difficult. In particular, care must be taken to ensure that biases and other errors, which are an almost inevitable part of any human effort, are not reflected in the AI system. The data on which the AI systems rely must be of sufficient quality to deliver the outcomes expected. Finding such data is not always easy, or even possible.

Of particular concern are unconscious biases that may be present in the training data, the effects of which may only become apparent over time.

Examples abound, including the Amazon recruiting tool that had to be abandoned after it favoured men over women.3 Not a good look for the tech industry, where ‘bro culture’ is already notorious. In short, introducing AI may simply swap one agency problem (management) for another (technology and IT experts).

The fidelity of the systems will also need to be monitored and tested regularly and cannot simply be assumed. There will inevitably be humans behind the operation of the systems as they are maintained and improved. The tasks of developing, maintaining and improving will often be outsourced to third-party consultants, as few organisations will have the necessary in-house expertise.

Consultancies and other third-party vendors have incentives to compete with each other for business. They will have open access to the data used and produced by the systems. Moreover, they will have such data across numerous organisations. This may enable them to develop improved AI systems, but will simultaneously create concerns about appropriate use of the data. In short, the AI systems may not be as disinterested as suggested because they may partly serve the aims of their developers.

The endeavour to replace current systems with AI systems will come with new risks. Directors and officers will carry the final accountability for governing these risks. They will have to develop their own AI awareness and interpretation skills. AI is likely to improve the quality of information accessible to directors, meaning there will be an expectation that the soundness of their judgement will improve. These factors will accentuate, rather than diminish, the duty of care and diligence expected of directors and officers.

Tomorrow’s AI capabilities and governance

There is no scientific agreement on the timeline for achieving singularity, or even on whether it can be achieved at all. However, Armour and Eidenmüller argue that, even with today’s more limited AI capability, organisations could ring-fence some business units into subsidiaries that are run entirely by AI. That is what they refer to as ‘self-driving corporations’. Of course, legislative amendment may be needed to facilitate such structures, as corporations must currently have human directors (Corporations Act 2001, s 201A), but that is easily achievable.

As there will be no directors or officers to hold liable when third parties are prejudiced by the actions of a self-driving corporation, alternative methods of regulation will need to be devised. Armour and Eidenmüller make some recommendations. The first is that registration of self-driving corporations should be conditional on clearly stated corporate objectives and functions. Only objectives and functions previously designated as desirable would be allowed; otherwise, registration could be refused. However, even algorithms that work towards allowed objectives and functions may fail and could cause considerable financial or personal loss.

They therefore favour a strict liability regime whereby the ultimate liability will still vest in the corporation, rather than in any third-party vendors under product liability regimes. That conforms with the approach in the European Union, where strict liability is favoured for hazardous assets or activities.

However, this will bring third parties no closer to compensation, as the self-driving corporation will primarily be a judgement-proofing exercise, typically devoid of any substantial disposable assets with which to satisfy a judgement.

To address that issue, they recommend legally mandated liability insurance as a condition of registration, with prescribed minimum coverage. Insurers would then have to price the risk of liability into premiums, which they would be better equipped to do than governments. Corporations that build in better safeguards against failure would benefit from lower premiums, which Armour and Eidenmüller consider an effective incentive for self-regulation.

While the suggestion of mandatory liability insurance is elegant,4 we are not convinced the insurance industry would want to take up the risk of failure of fully automated corporations. A strict liability regime would mean no proof of negligence is needed to establish liability: conceivably, plaintiffs would need only to prove that the system did not meet its stated functions and that they suffered prejudice as a legal consequence. There will be limited, if any, historical or comparable data with which to price the risk presented by AI systems correctly. High premiums may stifle uptake of the self-driving option, which means the data could remain patchy.

Just because something can be done does not mean it should be done. Some have warned that self-driving corporations will probably be the go-to corporate form for criminal, terrorist and other anti-social organisations, because it will be difficult for outsiders to know that they are dealing with human-less entities. Those entities will not have a ‘real seat’ of operations, as no humans will be involved in the day-to-day management of their activities. It is likely they will move to wherever they find the least prescriptive regulatory environment.

Currently, discussion of AI and corporate governance is very much in its infancy. Data quality, data-sharing impediments, legacy IT systems, and the regulatory framework of the Corporations Act stand clearly in the way of self-driving corporations. Then there is the ‘trust’ issue. Before AI becomes a standard tool of corporate life, corporates, shareholders and regulators must come to accept, understand and feel comfortable relying on AI-driven outcomes. In a world of uncertain and complex risk, best illustrated by the Covid-19 crisis, corporates have a habit of returning to ‘core’ business and core habits. Humans remaining in charge of business enterprises is a critical part of that.

* Dr Natania Locke is Deputy Chair of the Swinburne Law School and a Senior Lecturer; Helen Bird is a Senior Lecturer and Discipline Leader, Governance Studies, Swinburne Law School and Australian Graduate School of Entrepreneurship.


1 John Armour and Horst Eidenmüller, ‘Self-driving Corporations’ (2020) 10 Harvard Business Law Review 87.

2 See also Natania Locke and Helen Bird, ‘Perspectives on the Current and Imagined Role of Artificial Intelligence and Technology in Corporate Governance Practice and Regulation’ (2020) 35 Australian Journal of Corporate Law 4.

4 Insurance is not an absolute guarantee of indemnity. It can be cancelled after a loss for a misrepresentation or non-disclosure that occurred during the contractual negotiation stage. Consequently, regulation might stipulate minimum coverage, but that does not guarantee an insured loss will be paid.

 
 
 

Resolve is the official publication of the Australian Insurance Law Association and
the New Zealand Insurance Law Association.