AI Outperforms Humans in Differentiating Certain Breast Cancer Diagnoses

Samara Rosenfeld
AUGUST 12, 2019

Pathologists often disagree on the interpretations of breast biopsies. But they could soon have a more accurate way to detect and diagnose breast cancer with the help of artificial intelligence (AI).
 
A new AI model outperformed human doctors when differentiating ductal carcinoma in situ from atypia, which is considered one of the greatest challenges in breast cancer diagnosis, University of California, Los Angeles (UCLA) researchers noted in a study published in JAMA Network Open.
 
Prior research at UCLA revealed that diagnostic errors happened in one out of every six women who had ductal carcinoma in situ, a non-invasive cancer. That earlier study also found that half of the biopsy cases of breast atypia were incorrectly diagnosed as ductal carcinoma in situ.
 
“It is critical to get a correct diagnosis from the beginning so that we can guide patients to the most effective treatments,” said senior author Joann Elmore, M.D., MPH, professor of medicine at the David Geffen School of Medicine at UCLA.
 
In the new work, Elmore’s team fed a computer 240 breast biopsy images and trained it to recognize patterns associated with several types of breast lesions, ranging from benign and atypia to ductal carcinoma in situ and invasive breast cancer.
 
Researchers used digital whole-slide images of breast biopsies from Breast Cancer Surveillance Consortium-associated tumor registries in New Hampshire and Vermont.
 
The research team developed a set of 14 diagnoses, including usual ductal hyperplasia, fibroadenoma and sclerosing adenosis, and four diagnostic categories — benign, atypia, ductal carcinoma in situ and invasive.
 
Three pathologists used a web-based virtual slide viewer to interpret the 240 digital whole-slide images. Each pathologist marked regions of interest on each slide that had features supporting their diagnosis, according to the paper.
 
The final set of 428 regions of interest included 102 benign, 128 atypia, 162 ductal carcinoma in situ and 36 invasive regions of interest.
 
To compare the AI model to the human eye, 87 pathologists from eight states interpreted one of four subsets of 60 biopsy specimens each.
 
There were eight clinical labels used to annotate the breast biopsy images:
  1. Background
  2. Normal stroma
  3. Malignant epithelium
  4. Blood
  5. Benign epithelium
  6. Secretion
  7. Desmoplastic stroma
  8. Necrosis
The AI model distinguished ductal carcinoma in situ from atypia more often than the doctors did, with a sensitivity between 0.88 and 0.89. The pathologists’ average score was 0.70, according to the results.
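Sensitivity here means the fraction of true positive cases a reader correctly flags. A minimal sketch of how such a score is computed, using made-up counts purely for illustration (not figures from the study):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall): share of actual positive cases correctly identified."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts, for illustration only: out of 100 true ductal
# carcinoma in situ cases, a reader flags 88 and misses 12.
model_score = sensitivity(88, 12)          # 0.88
pathologist_score = sensitivity(70, 30)    # 0.70
print(model_score, pathologist_score)
```

A higher sensitivity means fewer missed cancers, which is why the gap between 0.88 and 0.70 is clinically meaningful.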
 
The program performed almost as well as human doctors in differentiating cancer from non-cancer cases.
 
“These results are very encouraging,” Elmore said. “There is low accuracy among practicing pathologists in the U.S. when it comes to the diagnosis of atypia and ductal carcinoma in situ, and the computer-based automated approach shows great promise.”
 
The research team is currently working to train the AI model to diagnose melanoma.


