
FDA Guidance on Machine Learning, AI in Medical Devices a Notable Step Toward Modernization


But the regulator and healthcare innovators must take AI & ML further.


The U.S. Food and Drug Administration (FDA) is seeking comments on proposed new rules governing medical devices that are enabled by machine learning (ML) and artificial intelligence (AI). The proposed regulatory framework for AI and ML technologies would allow modifications to be made following real-world deployment, learning and adaptation. It's a worthy first step toward modernization, but the regulator must embrace further innovation.

The move shows the FDA recognizes that the future of medical devices and software involves ML models and AI applications. It also suggests the agency is becoming more innovative and understands that past regulations simply aren't appropriate in the modern context. Medical device innovation was once slow and incremental, so it wasn't all that difficult to regulate. That's no longer the case, and the FDA has recognized the need to modernize regulation as technology advances.

The FDA’s step toward modernization also shows we’re all working toward the same goal: improved patient outcomes through technology. The FDA proposal clearly recognizes the benefits of incorporating both data and real-world evidence into algorithms — faster and in a more iterative manner — without first requiring a prior premarket submission. The proposal is also in keeping with both industry and academic best practices as they relate to ML implementation.

But there are a number of areas that deserve particular attention if ML solutions are to be implemented successfully in complex and potentially high-risk medical scenarios. I outline them below.

Framing the Machine Learning Problem

Any successful machine learning project is predicated on a well-formulated problem and a clearly articulated ML approach. It's critical that the problem is framed appropriately and identifies the question the algorithm is intended to answer. Proper framing is key to defining an appropriate solution method. Data availability, the complexity of that data and the expected clinical output all play roles in this process.

Let's use the application of ML to a chest CT scan to determine the presence of a tumor as a hypothetical scenario. It sounds fairly straightforward: Does this person have a lung lesion? But we could instead frame the question as, "How many lesions are present?" or, "Which areas of the lung appear abnormal?" Each framing produces a different clinical output.
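To make that concrete, here is a minimal sketch in Python of how each framing implies a different model contract. The function names, the toy decision rules and the dummy data are all hypothetical; the point is that each question demands different labels, different outputs and different validation metrics.

```python
import numpy as np

# Three framings of the same CT question, each with a different output
# contract. The scan is a dummy 3-D array; real inputs would be DICOM
# volumes after preprocessing, and the rules below are placeholders.

def has_lesion(scan: np.ndarray) -> bool:
    """Framing 1: 'Does this person have a lung lesion?' -> binary output.
    Needs scan-level labels; validated with sensitivity/specificity."""
    return bool(scan.max() > 0.5)  # placeholder decision rule

def count_lesions(scan: np.ndarray) -> int:
    """Framing 2: 'How many lesions are present?' -> count output.
    Needs per-lesion annotations; validated with counting error."""
    return int((scan > 0.5).sum())  # placeholder: count supra-threshold voxels

def abnormal_regions(scan: np.ndarray) -> np.ndarray:
    """Framing 3: 'Which areas appear abnormal?' -> voxel-level mask.
    Needs segmentation masks; validated with overlap metrics such as Dice."""
    return (scan > 0.5).astype(np.uint8)  # placeholder segmentation

if __name__ == "__main__":
    scan = np.random.rand(4, 4, 4)  # stand-in for a preprocessed CT volume
    print(has_lesion(scan), count_lesions(scan), abnormal_regions(scan).sum())
```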

In the context of remote patient monitoring, predicting a patient's health decline also has multiple potential framings. Are we predicting a general decline in a patient's vital signs? Or are we looking for an exacerbation of a specific disease, such as COPD? How the question is framed drives critical decisions about the data set required, as well as the sensitivity and specificity needed in continued monitoring.
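Here is a similarly hedged sketch of those two monitoring framings. The vital-sign thresholds and field names are hypothetical; what matters is that the general framing and the disease-specific framing imply different label definitions, data sets and alerting logic.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    resp_rate: float   # breaths per minute
    spo2: float        # oxygen saturation, percent
    heart_rate: float  # beats per minute

def general_decline(current: Vitals, baseline: Vitals) -> bool:
    """Framing 1: flag any broad deterioration relative to the patient's
    own baseline. Thresholds here are illustrative, not clinical."""
    return (current.resp_rate > baseline.resp_rate * 1.2
            or current.spo2 < baseline.spo2 - 4
            or current.heart_rate > baseline.heart_rate * 1.3)

def copd_exacerbation_risk(current: Vitals, baseline: Vitals) -> bool:
    """Framing 2: look for a disease-specific pattern (COPD exacerbation),
    which demands labeled exacerbation events and tighter specificity."""
    return (current.resp_rate > baseline.resp_rate * 1.15
            and current.spo2 < baseline.spo2 - 3)

baseline = Vitals(resp_rate=16, spo2=95, heart_rate=72)
current = Vitals(resp_rate=20, spo2=91, heart_rate=80)
print(general_decline(current, baseline), copd_exacerbation_risk(current, baseline))
```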

Proper framing is critical, and the FDA should incorporate requirements for clearly framing the problem into its pre-specification. Defining this rationale at the outset will go a long way toward characterizing and justifying risks associated with pre-approved model and feature changes.

Building Mechanisms for Explainable AI

Meaningful research is underway in explainable AI, which helps a user understand why an algorithm has or has not made a decision. Explainability is critical to building trust in healthcare ML and driving adoption. It's not just about determining whether an algorithm works but also about understanding why it works, or why it doesn't.

In a healthcare setting, this means algorithms, along with the products built on them, deliver supplementary information that contextualizes the model's diagnostic or predictive outputs, improving both the reasoning and the perception of the person reviewing them. Compare this technology to prescription drugs: the molecular pathway and action of any drug are well understood and available to the prescribing physician. The same should be true of machine learning in healthcare.

Let’s go back to the case of remote patient monitoring. What if rather than simply identifying that a patient’s health will deteriorate, we were able to state what has changed about a patient’s condition and then provide that context to the clinician based on what the model has seen before? Explainable ML models are a significant step toward model transparency and, therefore, should be incorporated into future FDA guidelines.
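To illustrate, here is a minimal sketch in Python of one way to surface that context: a linear risk model whose per-feature contributions can be read off directly and ranked for the reviewing clinician. The feature names and weights are hypothetical, and real explainability tooling (gradient-based attribution, SHAP and the like) is far richer; this only shows the shape of the output.

```python
# Minimal explainability sketch: with a linear model, each feature's
# contribution to the risk score can be surfaced directly, giving the
# clinician context alongside the prediction. Weights are hypothetical.

WEIGHTS = {"resp_rate_change": 0.8, "spo2_change": -0.6, "heart_rate_change": 0.3}

def predict_with_explanation(features):
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    # Rank features by absolute contribution so the clinician sees
    # what changed and how strongly it drove the prediction.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = predict_with_explanation(
    {"resp_rate_change": 4.0, "spo2_change": -3.0, "heart_rate_change": 8.0}
)
print(f"deterioration score: {score:.1f}")
for name, contrib in ranked:
    print(f"  {name}: {contrib:+.1f}")
```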

Workflow and Real-World Impact

Any machine learning model relies on external data. Changing the products, human processes or other models that provide input, explicitly or implicitly, to a model can create entanglement dependencies that are difficult to understand and regulate. In many ML models, data comes from other devices, clinical processes or variable and inherently biased human workflows. Some models can be extremely sensitive to even the subtlest of these changes. We can take steps to mitigate the associated risks, but we can't eliminate them entirely because clinical workflows can't be fully controlled.
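As one illustration of a mitigation, here is a minimal sketch of monitoring a single input feature for distribution shift against its training baseline. The baseline statistics, the threshold and the firmware-update scenario are all hypothetical; production systems would track many features with more robust tests.

```python
import statistics

# Training-time baseline for one input feature (e.g., respiratory rate
# as reported by an upstream device). Values are hypothetical.
TRAIN_MEAN, TRAIN_STDEV = 16.0, 2.5

def drift_alarm(recent_values: list, z_threshold: float = 3.0) -> bool:
    """Flag when the live input distribution has shifted away from what
    the model saw in training, e.g. after an upstream device or workflow
    change, so the entanglement is caught before outputs are trusted."""
    live_mean = statistics.fmean(recent_values)
    z = abs(live_mean - TRAIN_MEAN) / (TRAIN_STDEV / len(recent_values) ** 0.5)
    return z > z_threshold

# A hypothetical upstream firmware update starts reporting higher rates:
print(drift_alarm([19.5, 20.1, 18.8, 21.0, 19.9]))  # True -> investigate
```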

The FDA's proposals are a remarkable first step, and I appreciate the agency sharing them with the wider ML industry at a time when they can still be shaped. We're all working toward the same goal: optimizing AI and ML to improve healthcare delivery and patient care.

Christopher McCann is the CEO and co-founder of Current Health, which is building a platform to continuously and passively monitor the entire human body, using the data to deliver healthcare preventively, before a patient becomes sick. Christopher conceived Current as a medical student, eventually dropping out to focus on the company full time. He holds a master's degree in engineering and computer science from the University of Dundee, Scotland.


