Who is responsible when AI harms someone?
A California jury may soon have to decide. In December 2019, a driver using a Tesla equipped with an artificial intelligence driving system struck and killed two people in Gardena. The Tesla driver faces several years in prison. In light of this and other incidents, both the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board are investigating Tesla crashes, and NHTSA has recently broadened its probe to explore how drivers interact with Tesla systems. On the state front, California is considering curtailing the use of Tesla autonomous driving features.
Our current liability system—our system to determine responsibility and payment for injuries—is completely unprepared for AI. Liability rules were designed for a time when humans caused the majority of mistakes or injuries. Thus, most liability frameworks place responsibility on the end user: the doctor, driver or other human who caused the injury. But with AI, errors may occur without any human input at all. The liability system needs to adjust accordingly. Bad liability policy will harm patients, consumers and AI developers.
The time to think about liability is now—right as AI becomes ubiquitous but remains underregulated. Already, AI-based systems have contributed to injury. In 2018, a pedestrian was killed by a self-driving Uber vehicle; although driver error was at issue, the AI failed to detect the pedestrian. Recently, an AI-based mental health chatbot encouraged a simulated suicidal patient to take her own life. AI algorithms have discriminated against the resumes of female applicants. And, in one particularly dramatic case, an AI algorithm misidentified a suspect in an aggravated assault, leading to a mistaken arrest. Yet despite these missteps, AI promises to revolutionize all of these areas.
Getting the liability landscape right is essential to unlocking AI’s potential. Uncertain rules and potentially costly litigation will discourage investment in, and development and adoption of, AI systems. The wider adoption of AI in health care, autonomous vehicles and other industries depends on the framework that determines who, if anyone, ends up liable for an injury caused by artificial intelligence systems.
AI challenges traditional liability. For example, how do we assign liability when a “black box” algorithm—one in which the identity and weighting of variables change dynamically, so that no one knows what goes into the prediction—recommends a treatment that ultimately causes harm, or drives a car recklessly before its human driver can react? Is that really the doctor’s or driver’s fault? Is the company that created the AI at fault? And what accountability should everyone else—health systems, insurers, manufacturers, regulators—face if they encouraged adoption? These questions remain unanswered, and answering them is critical to establishing the responsible use of AI in consumer products.
Like all disruptive technologies, AI is powerful. AI algorithms, if properly created and tested, can aid in diagnosis, market research, predictive analytics and any application that requires analyzing large data sets. A recent McKinsey global survey found that over half of companies worldwide already report using AI in their routine operations.
Yet liability too often focuses on the easiest target: the end user of the algorithm. Liability inquiries often start—and end—with the driver of the car that crashed or the physician who made a faulty treatment decision.
Granted, if the end user misuses an AI system or ignores its warnings, he or she should be liable. But AI errors are often not the fault of the end user. Who can fault an emergency room physician for an AI algorithm that misses papilledema, a swelling of the optic disc at the back of the eye? An AI’s failure to detect the condition could delay care and potentially cause a patient to go blind. Yet papilledema is challenging to diagnose without an ophthalmologist’s examination, because additional clinical data, including brain imaging and visual acuity testing, are often necessary as part of the workup. Despite AI’s revolutionary potential across industries, end users will avoid using AI if they bear sole liability for potentially fatal errors.
Shifting the blame solely to AI designers or adopters doesn’t solve the issue either. Of course, the designers created the algorithm in question. But is every Tesla accident Tesla’s fault, something to be solved simply by more testing before product launch? Indeed, some AI algorithms constantly self-learn, dynamically adjusting their outputs as new inputs arrive, so no one can be sure exactly how an algorithm arrived at a particular conclusion.
The key is to ensure that all stakeholders—users, developers and everyone else along the chain from product development to use—bear enough liability to keep AI safe and effective, but not so much that they give up on AI.
To protect people from faulty AI while still promoting innovation, we propose three ways to revamp traditional liability frameworks.
First, insurers must protect policyholders from the excessive costs of being sued over an AI injury by testing and validating new AI algorithms prior to use, just as car insurers have been comparing and testing automobiles for years. An independent safety-testing regime can give AI stakeholders a predictable liability framework that adjusts to new technologies and methods.
Second, some AI errors should be litigated in special courts with expertise in adjudicating AI cases. These specialized tribunals could develop deep knowledge of particular technologies or issues, such as the interaction of two AI systems (say, two autonomous vehicles that crash into each other). Such specialized courts are not new: in the U.S., for example, specialist courts have protected childhood vaccine manufacturers for decades by adjudicating vaccine injuries and developing expertise in the field.
Third, regulatory standards from federal authorities like the U.S. Food and Drug Administration (FDA) or NHTSA could offset excess liability for developers and some end users. For example, federal regulations and legislation have replaced certain forms of liability for medical devices or pesticides. Regulators should deem some AIs too risky to introduce into the market without standards for testing, retesting or validation. Federal regulators ought to focus proactively on standard processes for AI development. This would allow regulatory agencies to remain nimble and prevent AI-related injuries rather than reacting to them too late. State and local consumer protection and health agencies, by contrast, cannot erect a national regulatory system, but they could help clarify industry standards and norms in particular areas.
Hampering AI with an outdated liability system would be tragic: Self-driving cars will bring mobility to many people who lack transportation access. In health care, AI will help physicians choose more effective treatments, improve patient outcomes and even cut costs in an industry notorious for overspending. Industries ranging from finance to cybersecurity are on the cusp of AI revolutions that could benefit billions worldwide. But these benefits should not be undercut by poorly developed algorithms. Thus, 21st-century AI demands a 21st-century liability system.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.