How AI in Healthcare Affects Medical Malpractice Liability

Artificial intelligence (AI) has become prevalent in most aspects of our lives, from how we search for information online, to how we do our jobs, to how we receive medical care. AI use is projected to continue to grow by leaps and bounds, and it’s already transforming the medical space, with both positive and concerning effects.

The use of AI in healthcare is uncharted territory for everyone, and we can’t yet know, or even imagine, all the possibilities it will reveal. It is important, at least, to try to think about the ways that AI might shape the future of medicine, including how AI affects medical malpractice. When AI is involved in medicine and something goes wrong, who is liable? What are the risks of AI in a medical setting, and are the benefits worth it?

How is AI Used in Medicine?

AI has been present in healthcare for decades, but its use has exploded in recent years. If you have ever used a “symptom checker” on a medical website, you have used AI in your own healthcare.

Some of the more common applications for AI in medical settings include:

  • Medical imaging analysis (such as detecting tumors or hairline fractures in radiology)
  • Analyzing patient data like medical history, lab results, and demographics to identify whether a patient is at higher risk of certain conditions
  • Allowing doctors to make personalized treatment recommendations for patients
  • Interpreting pathology slides
  • Matching patients with appropriate clinical trials
  • Accelerating the discovery and development of new drugs
  • Development and use of robotic surgery systems
  • Monitoring patient health conditions remotely through wearable devices
  • Helping hospitals, clinics, and medical offices optimize scheduling and workflow

As the technology continues to advance, it’s a virtual certainty that providers will use AI not just to improve existing processes and treatments, but to develop entirely new ones.

What are the Benefits of AI in Healthcare?

People still have the edge when it comes to good bedside manner, but there are some things AI does much better and more efficiently than humans. One of those things is recognizing patterns, which enables AI tools to spot irregularities that human eyes, however well trained, might miss. On the diagnostic side of care, that might mean spotting cancer sooner, when it may be much more treatable and curable. Scientists have also developed AI technology that can determine, in just seconds during surgery, whether any removable portion of a cancerous brain tumor remains.

Another of AI’s unique strengths is its ability to digest and analyze enormous amounts of data instantly. This, together with its ability to recognize patterns, means that AI can take a patient’s entire medical history and other relevant data, interpret it, and arrive at a customized treatment plan that is most likely to be effective for that particular patient.

The advantages of AI in healthcare are not limited to patient diagnosis and treatment. By optimizing workflows for providers, AI helps doctors, nurses, and other team members spend less time on administrative tasks and more time providing patient care. AI has the potential to make medicine more effective and the patient experience more positive. But while AI’s capabilities are impressive, they are not infallible.

What are the Risks of AI in Healthcare?

AI can reduce the risk of medical malpractice when it is used as a tool to support doctors’ judgment rather than as a replacement for it. But AI is like other forms of technology in its potential for failure or misuse. A faulty algorithm can lead to a misdiagnosis. A doctor may overrely on AI recommendations without fully understanding the basis for those recommendations. Biases in the datasets on which an AI system is trained can lead to unequal treatment or misdiagnosis of patients who are underrepresented in the data. All of which leads to the question: Where do we point the finger when AI-supported treatment leads to a bad outcome?

How Might AI Affect Medical Malpractice?

Let’s consider a hypothetical situation to explore how different parties might be held liable and how AI affects medical malpractice. A hospital in a large city uses an AI tool that has been cleared by the Food and Drug Administration (FDA) to help detect early signs of sepsis, which can be lethal. To pick up on these signs, the tool analyzes a patient’s vital signs, lab results, and information from the patient’s electronic health record (EHR).

One busy evening, a patient arrived in the emergency department with vague symptoms. The sepsis-detection tool was applied and indicated that the patient was at low risk of sepsis. However, the laboratory data that the tool used to arrive at that conclusion was incomplete; furthermore, the emergency department staff was poorly trained in the use of the tool.

The treating physician noted the low-risk score assigned by the AI tool, and, relying heavily on it, decided not to order the further testing, IV fluids, or broad-spectrum antibiotic treatment that would be appropriate for a septic patient. Overnight, the patient’s condition deteriorated rapidly. By the time the patient was correctly diagnosed with sepsis, he had suffered serious and avoidable harm, including an acute kidney injury that led to an extended ICU stay. He sued for failure to diagnose and treat his condition in a timely manner.

Who Might Be Held Liable for Medical Malpractice Involving AI?

The treating physician would most likely have liability in this case. Even though she was relying on the AI tool’s output, she is not absolved of the duty of care (the duty to act as a reasonable physician would under similar circumstances). If the doctor substituted the AI output for her own judgment, she may have been negligent.

The hospital in which the emergency department was located may also have liability. Liability could arise on multiple grounds, such as:

  • Failing to adequately train staff on the use of the tool
  • Failing to have protocols in place to address incomplete or poor-quality data inputs
  • Using the tool in a way that would encourage staff to overrely on its recommendations

The hospital’s liability, if it exists, stems largely from its control over implementation of the tool, the policies and training that affect the tool’s use, and the quality of data that shapes the tool’s recommendations.

What about the vendor of the AI tool? This is a trickier question. If there is liability, it may fall not under the heading of medical malpractice but rather product liability: for example, defects in the algorithm, or failure to update the system as medical knowledge evolves.

Medical malpractice is based on the standard of care: what a reasonable doctor, similarly situated, would do. The real question about medical malpractice and AI is not whether a doctor who uses AI can be held liable for medical malpractice. It’s whether, as AI becomes more prevalent, it will be a violation of the standard of care not to use it. Stay tuned.

Work with an Experienced Medical Malpractice Attorney

If you believe your provider may have committed medical malpractice, or you would like to learn more about how AI affects medical malpractice, contact the Fraser Law Firm to schedule a consultation.