C-Suite Q&A: Health Fidelity's Chief Development Officer on NLP and More

Ryan Black
AUGUST 04, 2017
This week in the C-Suite Q&A, Healthcare Analytics News spoke with Anand Shroff, the Chief Development Officer and Co-Founder of Health Fidelity. Health Fidelity, which he helped to found in 2011, is "focused on helping payer and provider organizations address risk and quality in value-based payment models through prospective and retrospective solutions."

The conversation covered a range of topics, from Shroff's background in the industry to the challenges of natural language processing and the current uncertainty facing the healthcare market.

Let’s get a quick introduction from you, regarding yourself and the company.

I’m one of the founders of Health Fidelity, which we founded in early 2012. My background is primarily in healthcare information technology. I spent a couple of years at Optum as head of their HIE and EHR products before the startup bug bit me again, and I left to found Health Fidelity, so it’s been about a 5-year journey. The company has had several different iterations in terms of its core focus areas, but really we’re a data-driven healthcare IT company that’s looking to leverage all of the data that is now being electronically captured, including administrative, demographic, and clinical data, both structured and unstructured, in the electronic health record to solve very different healthcare problems as they relate to payer-provider collaboration.

Specifically, our focus area right now is how payers and providers come together to manage risk in populations where risk is a major factor in both the clinical classification of the member population and the financial outcomes. That basically means situations where payers and providers are entering into risk-based contracts with each other, and what we found was that the process of identifying, analyzing, and then finally managing that risk was a very manual process. It was not done in a very data-driven manner, and it was certainly not done optimally. And when I say not optimally, I mean it had poor outcomes and very high cost, so we’re in the market to really change that game by reducing the total cost of managing that risk while at the same time improving both clinical and financial outcomes.

Can you speak to the nature of your clients and the information you provide them?

The problem that we’re trying to solve is that of collaboration between payers and providers, so it follows that our clients are both payers and providers, of a variety of types. The customers that we help are either caring for or bearing risk for populations in Medicare Advantage, managed Medicaid, the ACA, and all of the different flavors of Medicare ACO, and that’s roughly 100 million lives in the country. The organizations that we help include health plans that can be national, regional, or provider-sponsored, like UPMC in Pittsburgh.

On the provider side, there are a substantial number that have not decided to create their own health plans, but they have taken on risk-sharing contracts and risk-bearing contracts. Those are typically large health systems: Trinity Health in Michigan, Mount Sinai and Montefiore in New York…there’s a whole bunch of examples where health systems have taken on risk contracts and they need to understand how to best manage that risk.

All of these form our customer base. We sell to all of them.

How does that relationship you have with UPMC work?

UPMC is a strategic partner for Health Fidelity. Not only are they one of our marquee customers, but they are also a development partner with whom we develop and test joint solutions before we take them to market. They use Health Fidelity to manage their risk on both their Medicare Advantage and ACA populations, and we are starting to work together on their Medicaid population.

One of the core tenets of your work is natural language processing. How specifically do you apply it?

That’s a great question. When we formed the company we realized that the majority of the data in the electronic health record, about 80%, was actually unstructured. Outside of the standard structured data that comes out of the EHR like medication lists, problem lists, lab results, vitals…the majority of the actual treatment information about a patient is in the physician’s notes, and nobody does anything with it. A central principle in founding the company was making use of that data, and that really brought us to the notion of natural language processing.

We realized pretty early on that NLP was hard. That’s why you don’t have tons of companies in healthcare that have robust NLP technology…fewer than five, I believe, have strong NLP technologies. When we started off, we had a choice: we could spend all of our time building out our NLP, or we could collaborate with a leading institution that had already shown the understanding and R&D capabilities to build world-class NLP in the healthcare space. We collaborated with Columbia University in New York, whose Department of Biomedical Informatics had done some path-breaking research in healthcare natural language processing. We worked with them to develop our NLP, which is called Reveal, and which we believe at this point to be market-leading.

We really pride ourselves on the ability to take electronic health records and do a comprehensive analysis of a patient, not just of the structured data but of the physician notes to develop a true picture of the patient’s health, and most importantly risk factors that our payer and provider partners can leverage to understand that risk and account for it and manage it.

That’s how we use our NLP today, which is to develop a risk profile of the patient to improve their clinical and financial outcomes.

It’s a challenging area of technology. Is it getting better at addressing that outlying unstructured data?

Let me take a step back and describe the overall process. We leverage our data acquisition platform to extract data out of the EHR, standardize it using our ingestion processes, and feed it into the natural language processing engine, which is then able to extract the important information out of it: everything from demographics, to the diagnoses included in the physician note, to medications, procedures, the assessment of the patient, and the eventual treatment plan. It’s a pretty detailed extraction of meaningful information from the unstructured data, and it presents our analytic engine with a structured representation of the information that was in the physician notes. So now we have a highly structured record that we can run the analytics on, and through the process we always encounter opportunities to make the engines better.
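The pipeline Shroff describes (ingest a raw note, standardize it, extract structured facts, hand them to analytics) can be sketched in miniature. To be clear, this is a toy illustration, not Health Fidelity's Reveal engine: the function names, the dictionary-lookup "extraction," and the sample terminology (two real ICD-10 codes) are all assumptions chosen to show the shape of such a pipeline, while production systems use full clinical NLP over licensed terminologies.

```python
import re
from dataclasses import dataclass, field

# Tiny dictionaries standing in for real clinical terminologies.
# E11.9 (type 2 diabetes) and I10 (essential hypertension) are real
# ICD-10 codes; everything else here is illustrative.
DIAGNOSIS_TERMS = {"diabetes": "E11.9", "hypertension": "I10"}
MEDICATION_TERMS = {"metformin", "lisinopril"}

@dataclass
class StructuredRecord:
    """Structured representation handed to the analytics step."""
    diagnoses: dict = field(default_factory=dict)   # term -> ICD-10 code
    medications: set = field(default_factory=set)

def ingest(raw_note: str) -> str:
    """Standardize the text before extraction: lowercase, collapse whitespace."""
    return re.sub(r"\s+", " ", raw_note.lower()).strip()

def extract(note: str) -> StructuredRecord:
    """Naive NLP stand-in: dictionary lookup over the standardized note."""
    record = StructuredRecord()
    for term, code in DIAGNOSIS_TERMS.items():
        if term in note:
            record.diagnoses[term] = code
    for term in MEDICATION_TERMS:
        if term in note:
            record.medications.add(term)
    return record

note = "Patient with type 2 DIABETES and hypertension,\n  continues metformin."
record = extract(ingest(note))
print(sorted(record.diagnoses.values()))  # ['E11.9', 'I10']
print(sorted(record.medications))         # ['metformin']
```

The design point the sketch preserves is the hand-off: unstructured text goes in, and a fully structured record comes out, so downstream risk analytics never have to touch free text.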

There is an element of machine learning in our infrastructure that really looks at the performance of the NLP and performance of the analytics, improving the algorithms so that over time the engines continue getting better. Pretty much on an ongoing basis, we’re dealing with a smarter system every time.
