When AI in healthcare goes wrong, who is responsible?

Artificial intelligence can be used to diagnose cancer, predict suicide, and assist in surgery. In each of these cases, studies suggest AI can outperform human doctors at specific tasks. But when something does go wrong, who is responsible?

There’s no easy answer, says Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University. Errors are possible at any point in the process of implementing AI in healthcare, from design to data to deployment. “This is a big mess,” says Lin. “It’s not clear who would be responsible, because the details of why an error or accident happens matter. That event could happen anywhere along the value chain.”

How AI is used in healthcare

Design includes the creation of both hardware and software, plus testing of the product. Data encompasses the mass of problems that can arise when machine learning is trained on biased datasets, while deployment covers how the product is used in practice. AI applications in healthcare often involve robots working alongside humans, which further blurs the lines of responsibility.

Responsibility can be divided according to where and how the AI system failed, says Wendell Wallach, a lecturer at Yale University’s Interdisciplinary Center for Bioethics and the author of several books on robot ethics. “If the system fails to perform as designed or does something idiosyncratic, that probably goes back to the corporation that marketed the device,” he says. “If it hasn’t failed, if it’s being misused in the hospital context, liability would fall on whoever authorized that usage.”

Intuitive Surgical, the company behind the da Vinci surgical system, has settled thousands of lawsuits over the past decade. Da Vinci robots always work in conjunction with a human surgeon, but the company has faced allegations of clear error, including machines burning patients and broken pieces of machinery falling into patients.

Some cases, though, are less clear-cut. If a diagnostic AI trained on data that over-represents white patients misdiagnoses a Black patient, it’s unclear whether the culprit is the machine-learning company, those who collected the biased data, or the doctor who chose to follow the recommendation. “If an AI program is a black box, it will make predictions and decisions as humans do, but without being able to communicate its reasons for doing so,” writes attorney Yavar Bathaee in a paper outlining why the legal principles that apply to humans don’t necessarily work for AI. “This also means that little can be inferred about the intent or conduct of the humans that created or deployed the AI, since even they may not be able to foresee what solutions the AI will reach or what decisions it will make.”

Inside the AI black box

The difficulty in pinning the blame on machines lies in the impenetrability of the AI decision-making process, according to a paper on tort liability and AI published in the AMA Journal of Ethics last year. “For example, if the designers of AI cannot foresee how it will act after it is released in the world, how can they be held tortiously liable?” write the authors. “And if the legal system absolves designers from liability because AI actions are unforeseeable, then injured patients may be left with fewer opportunities for redress.”

AI, like all technology, often works very differently in the lab than in a real-world setting. Earlier this year, researchers from Google Health found that a deep-learning system capable of identifying signs of diabetic retinopathy with 90% accuracy in the lab caused considerable delays and frustration when deployed in real-life clinics.

Despite the complexities, clear responsibility is essential for artificial intelligence in healthcare, both because individual patients deserve accountability, and because a lack of responsibility allows mistakes to flourish. “If it’s unclear who’s responsible, that creates a gap; it could be that no one is responsible,” says Lin. “If that’s the case, there’s no incentive to fix the problem.” One potential response, suggested by Georgetown legal scholar David Vladeck, is to hold everyone involved in the use and implementation of the AI system accountable.

AI and healthcare often work well together, with artificial intelligence augmenting the decisions made by human professionals. Even as the technology develops, these systems aren’t expected to replace nurses or supplant human doctors entirely. But as AI improves, it gets harder for humans to go against a machine’s decisions. If a robot is right 99% of the time, then a doctor could face serious liability for making a different choice. “It’s a lot easier for doctors to go along with what that robot says,” says Lin.

Ultimately, this means humans are ceding some authority to robots. There are many instances where AI outperforms humans, and in those cases it makes sense for doctors to defer to machine learning. But patients’ wariness of AI in healthcare is still justified when there’s no clear accountability for mistakes. “Medicine is still evolving. It’s part art and part science,” says Lin. “You need both technology and humans to respond effectively.”
