Blog: Opinion | Product liability unsuitable for Artificial Intelligence – Livemint
On 26 August 1928, May Donoghue and a friend met at the Wellmeadow Café. It was her friend’s treat, and so she ordered an ice cream float. When she refilled her glass with ginger beer, Mrs. Donoghue noticed the decomposed remains of a snail floating in the bottle. She fell seriously ill and decided to sue. Since her friend had paid for the meal, May Donoghue had no claim for damages against the café owner. Her only recourse was to sue the manufacturer of the ginger beer — a gentleman called David Stevenson. However, under the law at the time, manufacturers owed no duty of care to their ultimate consumers unless they were bound by contract to show such care.
In his defence, Stevenson cited two recent cases that differed from this one only in that the bottles in question contained the rotting parts of mice. In both those cases, the court had dismissed the claim on the ground that the complainant had no contract with the manufacturer. Following a similar line of argument, the court had no hesitation in dismissing Mrs. Donoghue’s claim.
On appeal, Lord Atkin, in a path-breaking judgment that would have far-reaching consequences, held that a “manufacturer of products, which he sells in such a form as to show that he intends them to reach the ultimate consumer in the form in which they left him with no reasonable possibility of intermediate examination, and with the knowledge that the absence of reasonable care in the preparation or putting up of the products will result in an injury to the consumer’s life or property, owes a duty to the consumer to take that reasonable care.”
This decision is the foundation on which product liability law around the world has been built. Law students everywhere still study Donoghue v. Stevenson, and courts continue to apply its principles to attribute liability in an ever-expanding variety of situations. Apart from food and beverages, product liability applies to the many devices we use to reduce human effort — from cruise control and automatic parking technologies in cars to industrial automation in factories, and even household appliances like microwaves and air conditioners in our homes.
But as technology grows more intuitive, questions are being raised as to whether this approach to liability still makes sense. Thanks to advances in artificial intelligence, we increasingly trust our machines to take the sorts of complex decisions that, until recently, needed human discretion. I have written in the past about how artificial intelligence has inveigled itself into the legal industry: law firms have begun to use AI tools to proofread, prepare and review agreements; judges rely on them to arrive at sentencing decisions; and in some instances, AI programs are being used to contest parking fines and other minor offences.
Similarly, doctors use AI in their work more and more. Pattern recognition technology can now identify tumours in PET scans with greater accuracy than humans can. IBM’s Watson can make therapeutic suggestions based on reported symptoms and its analysis of hundreds of thousands of past cases, and insurance companies increasingly use multiple data sources — from wearable devices to medications purchased — to build more accurate actuarial models.
We instinctively treat these new technologies like products and apply the product liability principles set out in Donoghue v. Stevenson to the errors they throw up. But if algorithms are actually a substitute for human decision-making, then they will, just like humans, inevitably make some mistakes. Judgment, by its very nature, is subjective, and so contract review software will inevitably fail to spot a change of control clause just when it is critical to a transaction. Even the best legal research software will, at some time or other, suggest an argument that fails to find favour with the judge. As good as it is today, pattern recognition technology will, on occasion, fail to identify a cancerous tumour, and it is unlikely that actuarial algorithms will price every premium accurately.
If we apply product liability law in all these circumstances, developers will be liable for mistakes made by AI even though humans who make those very same mistakes would be given considerably more leeway. If this high standard of accountability is applied to companies trying to develop AI products, they will, crippled by the constant threat of litigation, be too scared to innovate. What is needed is a liability framework that, on the one hand, holds manufacturers liable for egregious negligence but, on the other, gives them the flexibility to make honest mistakes without consequence.
Various models have been suggested to address this problem. One attempts to apply standards of reasonableness to AI, in much the same way as reasonableness is used to give humans the benefit of the doubt. Others propose insurance schemes that allow autonomous systems to get things wrong without dire consequences. Still others treat these systems as “agents” of their manufacturers or users in order to fix liability.
However, none of this makes any sense unless the laws we pass to encourage innovation incorporate some of this thinking into their provisions. Which is why, when I read that the RBI’s Draft Enabling Framework for Regulatory Sandboxes stated specifically that it offered no legal waivers, I was deeply disappointed.
Rahul Matthan is a partner at Trilegal and author of ‘Privacy 3.0: Unlocking Our Data-Driven Future’