Who’s at Fault When AI Fails in Health Care?
Suppose a young man comes into the hospital. His blood is drawn and the lab results are analyzed by a predictive algorithm that suggests he’s healthy; he can go home. Six weeks later he dies of cardiac arrest. The algorithm, it turns out, didn’t consider the man’s family history, which was riddled with early cardiac deaths.
Who’s to blame? The answer is unclear right now, which means hospitals need to think through the risks that AI poses to them and to their patients.
Michelle Mello, a professor with joint appointments at Stanford Law School and the Stanford School of Medicine, discussed this issue in a recent talk titled “Understanding Liability Risk from Health Care AI Tools.” In the talk, she and her collaborator, fourth-year JD/PhD student Neel Guha, explored how hospitals should approach risk given the rise of AI tools in medicine. (The talk was based on an article in The New England Journal of Medicine co-authored by Mello and Guha, a summary of which is available as a policy brief.)
Read the article: Understanding Liability Risk from Using Health Care Artificial Intelligence Tools
“We desperately need this technology in many areas of health care,” Mello says, noting its potentially revolutionary power in patient diagnosis and treatment. “But people are rightly concerned about the safety risks.”
The Murkiness of the Present
The absence of a clear regulatory structure creates two core challenges for the health care sector. First, there is no well-articulated testing process for these new technologies. Drugs, for instance, must go through FDA approval; AI tools are simply tested by the companies and developers that create them.
“Everyone is racing to be first in this area,” Mello says. “If we’re moving quickly from innovation to dissemination, then this poses risk.”
Second, in the absence of meaningful formal regulation, it largely falls to the courts to define inappropriate use of these new technologies. The uncertainty surrounding liability for patient harms could be problematic for hospitals: six out of 10 Americans, the potential jurors who will decide many lawsuits, are uncomfortable with AI in health care; harms often attract media coverage, which raises reputational concerns; and the judges who oversee these cases rarely have a clear understanding of how AI tools work.
Recommendations for Managing Risk
As they consider whether and how to deploy AI tools, hospitals should balance the specific risks of a given tool against its potential benefits, while also developing frameworks for managing risk more broadly.
When it comes to specific technologies, “hospitals need to start by asking how likely is the output to be wrong and how wrong might it be—that is, the likelihood and size of the error,” Mello says. Of particular concern are products with high potential to cause harm along either of these dimensions, especially when the stakes are life and death or the patient population is especially fragile.
Also relevant are the ease with which a model can be explained in court and the degree to which humans are involved in decision-making. Counterintuitively, a poorly performing model that is opaque in its operations may ultimately be less likely to generate a lawsuit than a better-performing model that is easy to understand, because opacity makes it hard for attorneys to prove that the algorithm caused the bad outcome. Likewise, AI tools that keep people somewhere in the loop may be more likely to result in liability for hospitals and health care practitioners, because the error may be connected to that human-computer interaction.
When considering liability more broadly, Mello had four recommendations for hospitals. First, they should focus their most intensive monitoring on the highest-risk technologies, stepping down the intensity of oversight as a technology’s risk decreases. Second, and relatedly, hospitals need to be fastidious about documenting the precise details of the tools they deploy, such as the model version and the software packages in use.
Third, hospitals should “take advantage of the fact that things are good in the AI market for health care right now,” Mello says. “There are lots of vendors that want to sell, often in exchange for patient data, and this puts hospitals in a great position to bargain over the terms.” What this looks like varies by technology, but one important practice is using licensing contracts to ensure that AI developers shoulder their fair share of liability; another is contracting around any disclaimers issued by the developer that have the effect of shifting liability to users.
Finally, hospitals should give thought to whether use of particular AI tools should be disclosed to patients. Doctors and patients may have very different perceptions about what level of disclosure is appropriate. Patients who feel they weren’t adequately informed can layer claims for breach of informed consent on top of medical malpractice claims.
In talking through these concerns, Mello made clear that the stakes of this conversation extend well beyond legal dockets and hospital boardrooms. The implications touch the broader marketplace for new health care technologies, and ultimately the world we all occupy as patients.
“This matters to developers, as uncertainty about downside risk affects the cost of capital, which affects the kinds of innovations that reach the public and the prices attached to them—and therefore who adopts and benefits from them,” Mello says. “This is far more than a lawyer’s concern.”
Watch the Seminar: