The New Malpractice: Who Is Responsible When Artificial Intelligence Makes a Mistake in Healthcare?

By Erin Keen, Staff Writer

Photo courtesy of Pixabay

Artificial Intelligence (AI) offers expansive opportunities for the medical field to improve patient care while reducing costs. As AI use becomes more frequent, attorneys are increasingly focused on the potential liability risks, posing the question: who is responsible when AI use contributes to patient injury?[1]

A survey published by the American Medical Association found that in 2024, around 3 in 5 physicians used AI in their practice, a 28 percentage point increase from the prior year.[2] Most practices cite the assistive potential of AI to reduce administrative burdens. However, flawed integration of AI systems into Electronic Health Records (EHR) can lead to inaccurate patient recommendations, introducing a host of liability concerns.[3] AI systems have also proved beneficial in the diagnostic and therapeutic phases of treatment, leading to more precise diagnoses and less invasive surgeries.[4] Yet operators are often unable to verify or understand how these systems arrive at their results, raising further liability concerns.[5]

Product liability law addressing AI remains scarce, largely because of the difficulty of assigning blame to a single party or proving liability in the first place. Negligence claims may arise from a “failure in programming, in supervision, or from actions of physicians of the algorithm itself.”[6] Since the burden of proof falls on the plaintiff, the difficulty of bringing a valid claim is compounded by the lack of a clear party at fault.

Though there are many judicial opinions on personal injury claims, there is no landmark case squarely addressing medical malpractice and AI.[7] Courts typically look to precedent to decide how to allocate liability when a product injures a patient.[8]

Current caselaw on AI software product liability can be inconsistent due to a variety of factors, including differing rules across jurisdictions and different jury pools. In one case, Mracek v. Bryn Mawr Hosp., the plaintiff’s claim was dismissed because the plaintiff could not show how the AI robot’s failure caused his specific injuries.[9] In Singh v. Edwards Lifesciences Corp., by contrast, the court upheld a jury award against a developer whose software caused injury to the patient.[10]

When AI contributes to patient injury, liability will likely fall on multiple parties. David A. Simon, an associate professor of law and expert on health care law and liability, notes that plaintiffs typically sue manufacturers under product liability theories.[11] Simon believes similar strategies will be used for claims involving AI devices, especially where plaintiffs allege the device had a design defect or lacked sufficient warnings.[12]

A majority of states require plaintiffs suing product manufacturers to prove the injury was foreseeable, which is difficult given that no party can see into the black box of an AI system.[13] Moreover, because software is not a tangible object, courts have been, and will likely remain, reluctant to apply product liability doctrines to AI-related claims.[14]

Though some believe liability may fall on multiple parties, others believe practitioners, as the sole human actors, will likely be held responsible when harm results from AI-assisted care.[15] Doctors must abide by their duty to “do no harm” and guard against overreliance on AI if they are to avoid malpractice claims. While AI has tremendous benefits for the healthcare industry, providers must account for the faults that come with relying on AI and AI-related products.

There is no denying that AI will become more prevalent, and strong legislation is needed to mitigate the liability risks associated with AI in the health sector. As more cases develop precedent, both physicians and attorneys will be better able to navigate where liability falls when AI fails.


[1] https://hai.stanford.edu/policy/policy-brief-understanding-liability-risk-healthcare-ai

[2] https://www.ama-assn.org/press-center/ama-press-releases/ama-physician-enthusiasm-grows-health-care-ai

[3] Id.

[4] https://www.frontiersin.org/journals/medicine/articles/10.3389/fmed.2023.1305756/full

[5] Id.

[6] Id.

[7] https://hai.stanford.edu/policy/policy-brief-understanding-liability-risk-healthcare-ai

[8] Id.

[9] https://www.milbank.org/quarterly/articles/artificial-intelligence-and-liability-in-medicine-balancing-safety-and-innovation/

[10] Id.

[11] https://www.medicaleconomics.com/view/the-new-malpractice-frontier-who-s-liable-when-ai-gets-it-wrong-

[12] Id.

[13] https://hai.stanford.edu/assets/files/2024-02/Liability-Risk-Healthcare-AI.pdf

[14] Id.

[15] https://carey.jhu.edu/articles/fault-lines-health-care-ai-part-two-whos-responsible-when-ai-gets-it-wrong