Model Predicts Cognitive Decline in Patients with Alzheimer's

Samara Rosenfeld
AUGUST 05, 2019

A new model can help predict whether patients at risk for Alzheimer’s disease will experience cognitive decline, according to a new paper from researchers at the Massachusetts Institute of Technology.

The model, which predicts a patient’s cognition test score up to two years in the future, could be used to improve the selection of candidate drugs and participant cohorts for clinical trials.

“Being able to accurately predict future cognitive changes can reduce the number of visits the participant has to make, which can be expensive and time-consuming,” said Oggi Rudovic, Ph.D., a Media Lab researcher. “Apart from helping develop a useful drug, the goal is to help reduce the costs of clinical trials to make them more affordable and done on larger scales.”

Researchers used an Alzheimer’s disease clinical trial dataset from the Alzheimer’s Disease Neuroimaging Initiative. The dataset contains data from nearly 1,700 participants with and without the disease, recorded during semiannual doctor’s visits over 10 years, including the patients’ Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores. The test measures language, memory and orientation on a scale of increasing severity up to 85 points.

Also included in the data were MRI scans, demographic and genetic information and cerebrospinal fluid measurements.

The research team trained and tested their model on 100 participants who made more than 10 visits and had less than 85% missing data. Of that sub-cohort, 48 were diagnosed with Alzheimer’s disease. The participants had different combinations of features missing.

Researchers then used the data to train a population model built on a nonparametric probabilistic framework called Gaussian processes. This framework measures similarities between data points to predict a value for an unseen data point.
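To illustrate the idea, here is a minimal sketch of Gaussian-process regression: a kernel scores the similarity between visit times, and the posterior mean at an unseen time is a similarity-weighted combination of past scores. The function names, kernel choice and all numbers below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel: similarity between time points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_new, noise=0.1):
    """Posterior mean of a GP at new inputs, given noisy observations."""
    mu = y_train.mean()  # center scores so the prior mean is plausible
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_new, x_train)
    return mu + k_star @ np.linalg.solve(K, y_train - mu)

# Hypothetical visit times (years) and cognition scores for one patient.
t = np.array([0.0, 0.5, 1.0, 1.5])
score = np.array([12.0, 13.0, 15.0, 16.0])
print(gp_predict(t, score, np.array([2.0])))  # predicted score at year 2
```

As each new visit arrives, the observed time and score are simply appended to the training arrays and the prediction is recomputed, which is how a model of this kind can progressively fill in gaps in a patient's record.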

When the research team evaluated the population model, they found its predictions left room for improvement, so they personalized it for each new patient. The model progressively filled in data gaps with each new patient visit and updated the predicted score accordingly.

After four visits, the models significantly reduced the error rate in predictions and outperformed various traditional machine-learning approaches used for clinical data.

The researchers then developed a metalearning scheme that learns to automatically choose which type of model, population or personalized, works best for a given participant at a given time, depending on the data being analyzed.

This approach simulates how different models perform on a given task and learns the best fit.
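The selection step can be sketched very simply: track each candidate model's past errors for a patient and, at each visit, pick whichever has erred less so far. This toy selector and its numbers are assumptions for illustration; the paper's metalearner is trained, not rule-based.

```python
def select_model(pop_errors, pers_errors):
    """Pick 'population' or 'personalized' by mean past error for one patient."""
    if not pers_errors:                 # no patient-specific history yet
        return "population"
    pop_mean = sum(pop_errors) / len(pop_errors)
    pers_mean = sum(pers_errors) / len(pers_errors)
    return "population" if pop_mean <= pers_mean else "personalized"

# Early visits: sparse data, the personalized model errs more.
print(select_model([2.1, 1.9], [3.5, 3.0]))            # → population
# Later visits: the personalized model has caught up.
print(select_model([2.1, 1.9, 2.0], [3.5, 1.0, 0.8]))  # → personalized
```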

Population models made more accurate predictions when a patient’s early visits yielded noisy, sparse data. As more data accumulated over subsequent visits, the personalized models performed better.

The method reduced the error rate for predictions by a further 50%.

“We couldn’t find a single model or fixed combination of models that could give us the best prediction,” Rudovic said. “So, we wanted to learn how to learn with this metalearning scheme. It’s like a model on top of a model that acts as a selector, trained using metaknowledge to decide which model is better to employ.”

Clinicians may use the model to select at-risk patients for clinical trials who are likely to show a rapid cognitive decline, even before other symptoms emerge. Early treatment could help clinicians better track which medicines are and are not working.


