PATH Releases Guidelines for Beneficial, Ethical and Safe Use of AI in Healthcare

Samara Rosenfeld
JULY 12, 2019
AI

Healthcare providers can follow a new set of artificial intelligence (AI) guidelines when developing and implementing the technology. The principles detail developers' responsibility for the implications of faulty software and make clear that patient safety is always the priority when using such technologies. 
 
This week, the Partnership for AI, Telemedicine and Robotics in Healthcare (PATH), an alliance of stakeholders working to improve care and build efficiencies using advanced technologies, released the guidelines. PATH designed the principles to foster the safe, ethical and valuable use of AI in medicine, largely holding tech developers responsible for the implications of use and misuse of AI. 

Automation, robotics and AI in healthcare have a lot of press around them, even though the industry is still in the early stages of development and deployment, Jonathan Linkous, co-founder and CEO of PATH, said in a statement to Inside Digital Health™.

But patients fear the technology due to warnings about “killer robots” and stories of “what-ifs,” he added.

“So, developing a set of principles around the development and implementation of AI in medicine was developed to set out certain guidelines that, if providers and developers agree to follow, would go a long way in alleviating those concerns,” Linkous said.
 
The principles are designed to assure patients and the public that the use of AI in healthcare will provide safe, equitable and high-quality services.
 
The guidelines include 12 principles, developed by members of PATH and other healthcare leaders:
  1. Do no harm. Regardless of the intervention or procedure, the patient’s safety and well-being are the priority.
  2. Human values. Advanced technologies should be designed and operated to be compatible with the ideals of human dignity, rights, freedoms and cultural diversity.
  3. Safety. AI technologies should be safe and secure throughout their operational lifetimes.
  4. Design transparency. The design and algorithms used should be open to inspection by regulators.
  5. Failure transparency. If a system causes harm, it should be possible to find out why.
  6. Responsibility. Designers of advanced health technology are stakeholders in the moral implications of its use, misuse and actions, with a responsibility to shape those implications.
  7. Value alignment. Autonomous systems should be designed so that their goals and behaviors align with human values.
  8. Personal privacy. Designers should build safeguards into the design and deployment of healthcare AI applications to protect patients’ personal data.
  9. Liberty and privacy. The application of AI to personal data should not curtail people’s real or perceived liberty.
  10. Shared benefits. AI systems should benefit and empower as many people as possible.
  11. Human control. Humans should choose whether and how AI systems make decisions to accomplish human-chosen objectives.
  12. Evolutionary. Advanced technology should be designed to allow devices to change in conformance with new discoveries.
