
A New Ethical Wrinkle for Medical Algorithms

Unintentional bias and data privacy often steer the conversation. Profiteering, intentional bias, and the possibility of machine dependence don’t.

Much of the debate around machine learning’s role in medicine has centered on capabilities, like whether an algorithm can actually provide clinical recommendations that meet physician standards. The discussion tends to focus on best intentions and failed delivery.

Three Stanford University doctors mostly skirt that debate in a recent New England Journal of Medicine commentary, instead heading into even stickier territory.

Danton S. Char, MD, Nigam H. Shah, PhD, and David Magnus, PhD, do touch on some well-worn concerns—unintentional biases seeping into systems based on where they were designed, or patients losing confidentiality in the undying quest for more data—but they raise a less-discussed worry that might be even more troubling.

“In the US healthcare system, there is perpetual tension between the goals of improving health and generating profit,” the authors write, citing cases where large tech companies have designed algorithms that benefit them but may be unethical, like Uber’s attempts to screen for passengers who work in law enforcement. “Private-sector designers who create machine-learning systems for clinical use could be subject to similar temptations.”

The academics may well have turned in the piece, published yesterday, before Uber’s recent announcement that it is entering the world of medicine. While the ridesharing giant isn’t looking at any interventional technologies, there’s little doubt that more and more tech darlings are entering the space—from Amazon to Apple to Lyft.

Several individuals who spoke to Healthcare Analytics News™ during the recent HIMSS meeting in Las Vegas, Nevada, said they were excited by the new entrants. But the insiders were under no illusion that the companies were making such moves out of altruism rather than the enormous economic opportunity that healthcare represents.

One fear the Stanford trio notes is the possibility that systems could be designed to steer clinicians toward more profitable interventions without their knowledge. Given the number of stakeholders with a financial interest, including hospitals, tech companies, and even pharmaceutical makers, they consider this risk worth emphasizing. Physicians, they write, should be educated on how clinical support algorithms are constructed to avoid dependence on ethically questionable black boxes.

There’s an imperative, too, that understanding of the technology be widespread. Given the increasingly value-minded, team-based nature of American healthcare in the 21st century, it’s rare that a single physician oversees a patient from diagnosis to final outcome. Such algorithms could gain immense power as the lone constant in a patient’s care.

“At its core, clinical medicine has been a compact—the promise of a fiduciary relationship between a patient and a physician,” they write. “As the central relationship in clinical medicine becomes that between a patient and a healthcare system, the meaning of fiduciary obligation has become strained and notions of personal responsibility have been lost.”

As is standard for such commentaries, the authors don’t offer solutions to these quandaries. “Machine-learning systems could be built to reflect the ethical standards that have guided other actors in health care—and could be held to those standards,” they conclude. But they raise important points that further complicate the insertion of complex technologies into an increasingly interconnected healthcare landscape.

Related Coverage:

Ethical Concerns for Cutting-Edge Neurotechnologies

The Dystopian Concerns of AI for Healthcare

Applying a Human Touch to EMRs and AI
