Conference Issue 2018


Artificial intelligence needs agile, flexible laws


New technologies require legal responses to ensure the right balance is struck between encouraging innovation and protecting consumers' rights and interests, NSW Court of Appeal President Justice Margaret Beazley told the AILA conference in Perth.

"Existing laws need to be reviewed to determine whether they are agile and flexible enough to withstand the current rate of technological change and, if not, lawmakers must be willing to adopt new legal and regulatory frameworks," she said.

Japanese insurer Fukoku Mutual Life last year introduced the IBM-developed cognitive computing system Watson to calculate payouts to policyholders, making 34 employees redundant. It anticipated that the use of artificial intelligence (AI) would drastically reduce the time needed to calculate payouts and increase productivity by an estimated 30%.

Last November, Suncorp announced it had integrated Watson technology into its online claims process for motor vehicle claims. The system enabled customers to lodge claims online, receive an outcome, pay any excess, and choose their repairer all within five minutes.

Watson was trained using nearly 15,000 claim scenarios and used IBM's Natural Language Classifier to interpret and analyse customer descriptions of accidents, which allowed it to understand slang such as 'crunched' and 'slammed'. Justice Beazley said Suncorp reported that, since its launch, Watson had been able to complete 90% of claims.
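The speech did not detail Suncorp's implementation, but the underlying technique – training a text classifier on labelled claim descriptions so that informal wording maps to the right claim type – can be sketched in a few lines. The following is a hypothetical illustration using the open-source scikit-learn library rather than IBM's Natural Language Classifier; the training phrases and claim labels are invented.

    # Hypothetical sketch of a claim-description classifier
    # (not IBM's actual service). Requires: pip install scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented examples standing in for the ~15,000 real claim scenarios.
    descriptions = [
        "someone crunched my rear bumper at the lights",
        "a truck slammed into my door in the car park",
        "hail dented the bonnet and roof overnight",
        "my windscreen cracked on the highway",
    ]
    claim_types = ["collision", "collision", "storm", "glass"]

    # TF-IDF features plus logistic regression: slang such as 'crunched'
    # and 'slammed' becomes weighted toward the 'collision' class because
    # it co-occurs with that label in the training data.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(descriptions, claim_types)

    print(model.predict(["a ute slammed into me at a roundabout"]))
    # expected: ['collision']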

US start-up Lemonade used 'chatbots' (artificial conversational entities) and cognitive automation to process insurance claims within seconds.

Justice Beazley said the rise of AI would have significant implications for the insurance industry: using chatbots to guide consumers through applications; implementing robo-advisers to provide financial product advice; designing policies for autonomous vehicles; using the Internet of Things – that is, connecting all devices to the internet; and applying AI tools to determine claims.

She said global business value derived from AI had been predicted to total $1.2 trillion this year, up 70% from last year.

In a common law system, the doctrines of stare decisis and precedent meant a court's statement of a common law principle extended beyond its particular application to the case at hand. "It sets the law for the future and governs the way in which cases that raise the same issue are later decided," Justice Beazley said.

"The common law is also adept at the interpretation and application of statute. It is the common law principles of statutory interpretation that govern the task. Again, the principles of stare decisis and precedent operate to determine the future application of a statute to a particular case. Determinations in one Australian jurisdiction, whether [about] common law principles or statute, apply in all other jurisdictions unless considered by another jurisdiction to be plainly wrong. This caveat can create uncertainty, at least until the High Court makes a binding determination."

Those principles were developed in a non-digital world and an essentially non-digital economy, which by definition was slower than its digital counterpart. "What will be the legal and economic consequences in a digital world? What legal developments will be required to create legal certainty in a digital economy?" Justice Beazley asked.

The challenge lay not only in the field of 'straight' insurance law. "There are related issues concerning data collection, data protection, privacy and intellectual property rights. These issues are not only legal. They pose important governance questions for corporations and ethical issues as to how [insurers] conduct business."

She said reputation issues raised at the financial services royal commission would affect insurers. Already there had been media reports of insurers "determining the availability of life policies based on medical records of past illnesses, including mental illness – even if mild and reactive, such as after a death – rather than on the prognosis for the illness in the particular individual".

"AI is changing the way in which the insurance industry delivers services to consumers and it is critical the law adapts and responds accordingly," Justice Beazley said.

In 2017, the European Commission conducted a public consultation on the impact of new technology on the European financial services sector. In response, Insurance Europe said legislation and regulatory rules must be "digital-friendly, technologically neutral and sufficiently future proof".

In March, the European Commission released a fintech action plan, which said "technology neutrality is one of the guiding principles of the EC's policies".

Justice Beazley said a technologically neutral law did not discriminate between different forms of technology or advantage a particular form of technology; it applied irrespective of what technology was used.

The Australian Treasury had indicated support for technological neutrality, including in fintech, saying: "We [are] committed to amending priority areas of existing financial regulation to ensure they are technology neutral. We will also embed the principle of technology neutrality into our approach for making future legislation and regulations."

Justice Beazley said adopting a technology-neutral approach to regulation would enable businesses to adopt approaches that best suited their business models and consumer preferences. It would ensure regulators could readily respond and adapt their oversight to take account of innovation and the development of new technologies.

Justice Beazley said flexibility and longevity were desirable outcomes. However, Brad Greenberg, writing in the Minnesota Law Review, said: "Neutrality, it turns out, is both suboptimal and often self-defeating. It is also not neutral." Considering US copyright law, he said the law must be more technologically specific to achieve the perceived benefits of technological neutrality, including "improving legal tailoring, reducing legal uncertainty, limiting efforts to exploit statutory ambiguity" and increasing statutory longevity.

Greenberg said laws, inevitably drafted with existing technologies in mind, harboured the potential to discriminate, even if inadvertently, against new technologies.

Justice Beazley said there may be challenges in proving appropriate advice had been given when a robo-adviser applied a pre-programmed set of results and outcomes.

Another challenge for the law was in the development, if not the evolution, of insurance services themselves. In April, McKinsey published an article painting a picture of the insurance landscape in 2030. "It predicted a shift from a 'purchase and renewal' model to dynamic, usage-based products that operate on a continuous cycle; the constant communication of real-time consumer data to insurers from connected devices – using telematics and the Internet of Things – will enable insurers to constantly change their product offerings, tailored to an individual consumer's behavioural patterns and risk profile," she said.
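McKinsey's prediction was not accompanied by an implementation, but the core idea – a premium recomputed each billing cycle from streamed driving data – can be illustrated with a toy calculation. In the sketch below, every rate, loading and penalty is an invented figure, not any insurer's actual pricing model.

    # Toy illustration of a usage-based, continuously repriced motor policy.
    # All rates and weights below are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class TelematicsSummary:
        km_driven: float    # distance this billing cycle
        night_km: float     # kilometres driven between 10pm and 5am
        hard_brakes: int    # harsh braking events detected

    BASE_RATE_PER_KM = 0.04   # dollars per kilometre (hypothetical)
    NIGHT_LOADING = 0.5       # 50% surcharge on night driving
    BRAKE_PENALTY = 1.20      # dollars per harsh-brake event

    def cycle_premium(t: TelematicsSummary) -> float:
        """Premium for one billing cycle, recomputed as new data arrives."""
        premium = t.km_driven * BASE_RATE_PER_KM
        premium += t.night_km * BASE_RATE_PER_KM * NIGHT_LOADING
        premium += t.hard_brakes * BRAKE_PENALTY
        return round(premium, 2)

    print(cycle_premium(TelematicsSummary(km_driven=820, night_km=60, hard_brakes=3)))
    # 820*0.04 + 60*0.04*0.5 + 3*1.20 = 32.8 + 1.2 + 3.6 = 37.6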

If AI technologies were found to be in breach of the law – eg, a robo-adviser failed to comply with its best interests duty – a question arose as to who was liable for the breach.

In March, a US District Court dismissed a putative class action brought by a participant in a defined contribution plan on behalf of more than five million investors. The plaintiff alleged a robo-adviser had been configured to intentionally steer retirement investors into high-cost investments that paid unwarranted fees and kickbacks to its developer and insurers.

The case raised important questions about liability and responsibility where automated decision-making processes breached financial services laws. "Should the programmer or designer of the robo-adviser be liable? Should the provider of advice, ie the company offering the robo-advice service, be liable? Should liability be apportioned between these parties?" Justice Beazley said.

"Taking it one step further, will the company that offers the robo-advice service always be properly characterised as the provider of advice? To the extent that the algorithm may itself be dynamic and change as the machine learns, should responsibility be attributed to the algorithm?

"Does there come a point at which an AI tool may be viewed as making decisions independent of its programmer and the company offering its services, such as where the breach arises not from a fault in design? At a practical level, algorithms cannot provide compensation to consumers. It may be able to compute compensation, but would a consumer trust an algorithm to do so if it was an algorithm that caused the problem?"

Justice Beazley suggested a liability gap could arise. "One solution would be to require the programmer or the company offering the tool to be covered by mandatory professional indemnity insurance for failures of the algorithm."

Controversially, the European Parliament's Committee on Legal Affairs had urged granting AI a form of electronic personhood, proposing creating "a specific legal status for robots".

Justice Beazley said it was critical to ensure AI technologies did not develop unintended biases either through machine learning or because of the input data set used to train the tool.

She said Amazon had sought to automate its recruitment process using an AI tool. However, the tool developed an inbuilt bias against female candidates because the resumes it was trained on came predominantly from men. Consequently, the tool downgraded resumes containing the word "women's" and filtered out candidates who had attended women-only colleges.
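That failure mode is straightforward to reproduce in miniature. In the hypothetical sketch below, a classifier is trained on invented resumes with skewed historical hiring outcomes; without gender ever being mentioned explicitly in the code, the model learns a negative weight for the token 'women', mirroring the behaviour reported at Amazon.

    # Toy reproduction of how skewed training data produces a biased model.
    # The resumes and hiring labels below are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "captain of chess club, software engineering intern",        # hired
        "built compilers, men's rowing team",                        # hired
        "software intern, men's soccer captain",                     # hired
        "women's chess club captain, software engineering intern",   # rejected
        "women's coding society founder, built compilers",           # rejected
    ]
    hired = [1, 1, 1, 0, 0]   # skewed historical outcomes

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    clf = LogisticRegression().fit(X, hired)

    # Inspect the learned weight for the token "women" (the vectoriser
    # strips the apostrophe-s): it comes out negative, meaning the model
    # has absorbed the historical bias as if it were a genuine signal.
    idx = vec.vocabulary_["women"]
    print(clf.coef_[0][idx])   # negative weight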

Justice Beazley said there was much to be said for implementing a right of appeal from decisions made by AI systems to human decision-makers. "However, the circumstances in which such a right may be exercised will need to be defined carefully, lest the benefits of time and cost efficiency gained by use of AI are lost in the appeal process."

 
 
 

Resolve is the official publication of the Australian Insurance Law Association and
the New Zealand Insurance Law Association.