C-Suite Q&A: Clarify CEO on Apps, Data Silos, and the Future of Value-Based Analytics

Ryan Black
JULY 14, 2017
In our quest to bring together powerful, informative perspectives in the health tech sphere, Healthcare Analytics News regularly speaks to leaders, influencers, and innovators in our recurring C-Suite Q&A series. This week’s focus is on Jean Drouin, MD, MBA, the CEO and co-founder of Clarify Health Solutions, a San Francisco-based startup focused on value-based care analytics. Clarify uses analytics to inform the best course of care, and offers physicians an integrated app that they can prescribe to patients to help them stay on top of their treatment.
 
Dr. Drouin, in the second installment of our interview, spoke of his company’s patient engagement efforts, the difference between “analytics” and “AI”, and where he sees the push to value-based treatment models heading in the near future. Part one can be read here.
 
I’d be remiss if we didn’t talk about the app. We’ve seen a lot of analytics companies working to create value-based models, and not all of them have that direct patient engagement. What brought that about, and is it a differentiating factor?
 
It really came from starting by asking, “What’s the major unmet need of the paying customer here?”, whether that’s the provider or the payer. Sometimes people ask why either, and it’s really whoever owns the risk for the total cost of care. They’re not looking to purchase a siloed analytics product and a siloed patient engagement product and a siloed care navigation package. In an ideal world, they want fully interoperable, best-of-breed products that seamlessly talk to each other.
 
We fully buy that the insights and the analytics should fully inform the patient engagement and guidance, and should be directly linked to whether the care navigator needs to get involved or not. Interestingly, if you look again at the FedEx analogy, it’s the same three components: they have analytics, they have the ability to track a package, and they have the ability to track the work of the drivers and whatever other labor hasn’t been automated. It’s through that lens that we said our network will have all three components.
 
Let’s move into the analytics themselves. The AI platform is all internally built?
 
Correct.
 
Can you speak, at any level of depth you want, about how it works and the level of specificity it delivers in its recommendations?
 
Part of the value of having paired up with an engineering team that came out of the financial services sector is that they’re unencumbered by the legacy of what had previously been built in healthcare. The paradigm in healthcare up until now has been that you would need to build a pristine database with every data element going in clean before you then used it to do any analytics. What they’ve learned to do in banking and in consumer and in other industries is to put all data into a data lake, whether it was clean or dirty, and to build algorithms on top of that to call specific data elements and clean them only to the extent that they need to be for that particular use. It makes it far, far easier to bring together data sets that in the past had been quite siloed.
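The “schema-on-read” approach Dr. Drouin describes — landing data in the lake as-is and cleaning only what a given use requires — can be sketched roughly as follows. This is an illustrative pattern only, not Clarify’s actual pipeline; the record fields and function names are hypothetical.

```python
# Illustrative "clean on read" pattern: raw records land in the lake as-is,
# and each use case normalizes only the fields it actually needs.
RAW_LAKE = [
    {"source": "claims", "age": "72", "zip": "94105 "},
    {"source": "ehr",    "age": None, "zip": "94105"},
    {"source": "claims", "age": "68", "zip": ""},
]

def clean_age(record):
    """Normalize age for this specific analysis; leave other fields dirty."""
    try:
        return int(record["age"])
    except (TypeError, ValueError):
        return None  # unusable for this use case, but kept in the lake

def ages_for_analysis(lake):
    # Pull and clean only 'age' -- no pristine, fully cleaned database required.
    return [a for a in (clean_age(r) for r in lake) if a is not None]

print(ages_for_analysis(RAW_LAKE))  # [72, 68]
```

The contrast with the traditional healthcare paradigm is that the dirty `zip` values never block the age analysis; a later use case would clean them on its own terms.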
 
We very deliberately bring together claims data, clinical data from electronic health records, demographic data, social data from the likes of Facebook, and what healthcare economists might call “determinants of health” data, so things like whether a patient lives alone, or whether they have any kids, that kind of thing. When you’re able to bring in all those different data sets, you end up with an ability to double or triple the predictive capability of the models and algorithms that you build. That gives us the ability to say, “Hey, orthopedic surgeon, the 72-year-old patient in front of you who also has hypertension, lives in a home with stairs, and lives alone…typically your colleagues would put that patient on this pathway rather than that one based on their characteristics and complexity.”
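The enrichment step described above — merging formerly siloed sources on a patient key so the model can see social context alongside clinical facts — might look something like this minimal sketch. The sources, keys, and fields here are invented for illustration.

```python
# Hypothetical per-source stores keyed by patient ID.
claims = {"p1": {"dx": ["hypertension"], "age": 72}}
demographics = {"p1": {"lives_alone": True, "home_has_stairs": True}}

def patient_features(pid):
    # Merge per-source records into one feature dict; the richer feature set
    # is what lets a model distinguish, say, a 72-year-old living alone in a
    # home with stairs from one with support on a single floor.
    merged = {}
    for source in (claims, demographics):
        merged.update(source.get(pid, {}))
    return merged

print(patient_features("p1"))
```

The point is less the merge itself than what it enables: predictive features that no single silo contains.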
 
A lot of companies might call that AI, but we’ve been careful about using the term because it’s so overhyped, so we don’t refer to it that much. This was a terrific point in your Stanford article; I absolutely agree that you have to be very careful not to over-focus on the idea that [we can predict] a patient is going to have an event on this date. We are a long, long way away from that. But saying that this patient, because of these characteristics, deserves to be in the hospital for another day, or should go to a skilled nursing facility rather than be discharged straight home, I do think the analytics can offer some guidance to physicians on which way to lean in making their decision.
 
I frame it that way because I don’t think the machine at this point can replace clinical judgment. Certainly, it can inform clinical judgment, but if it were my own care I would still want a trained physician to ultimately use the data available to him or her to make a decision based on his or her experience.
 
Reaching back to the quality of data, and also touching on the idea of whether or not to call something “AI,” I was thinking back to the recent example where DeepMind kind of got the NHS in trouble in the UK, and after all that trouble of ending up with perhaps illegally obtained data, they turned around and said, “Well, the data wasn’t all that useful anyway.” Do you run into trouble, with all of these forms and sources, when trying to sync up this disparate data into something meaningful?
 
I love the question. I think it’s about how one approaches and decides to use siloed data. If the approach is to throw a whole lot of data into a black box, or into a lake, and use statistics to try to come out with insights (and I’m not painting DeepMind that way), I’m not convinced that we’ll get much out of that.
 
I’ll contrast that with an approach that starts with what we know to date. We know that historically, care has been delivered in a certain way. We also know that there’s literature that says that some things should be done, but we don’t necessarily adhere to those things. So why not go look for the data that gives us transparency on whether or not we’re actually applying the things that should be applied? That’s much more akin to how a manufacturer, or how FedEx would look at data to understand its historical bottlenecks and inefficiencies and put in procedures or technology to improve that. That’s how we look at it. We try to present to clinicians data that allows them to go through the behavioral change journey and say “hey, ok, maybe I can afford to send some of my patients home rather than to a skilled nursing facility.”
 
For that, you actually don’t need AI; you just need an understanding of the workflow you’re trying to change and the way in which to present a morsel of relevant data to a clinician in a way that they’ll be willing to change how they do things. I think people often miss that the goal is to change some element of behavior, whether it’s of a clinician or a patient or a family member helping that patient. How do you present them with a piece of information that allows them to make a better decision?
 
It’s always good when an answer obliterates the next question I was going to ask. So what’s the resistance level like to those changes? There has to be some.
 
This is where the new value-based payment models are so important, because they allow a conversation that goes like this: “Fellow clinicians, we didn’t create the payment model, and we understand that you would’ve liked to remain in the old fee-for-service system, but now that you’re under these new sets of rules, would you like information about what you need to do to win, or to beat, the new system?” That usually allows them to get over the hump of resistance, and more into a mode of, “Well, show me what the data says about where I can change things in order to do my bit alongside my colleagues to make sure we’re at least breaking even in this new system.”
 
Let’s take this real wide, and talk about what you see happening in the pursuit of value-based care in the next 5 years.
 
Here’s how I’d characterize the market as it stands today. We’re still in the early adopter phase, and the reality is that most CFOs and CEOs still need to optimize for the fee-for-service world. As a result, the adoption of both value-based payment models and the technology and analytics we’ve been talking about to effectively operate under those models is obviously slower than some of us on this side of the table might have wished for.
 
That said, as we go market-by-market around the country, there’s been a marked shift. Two years ago people were still saying “well, let’s see what happens to these programs.” Now, everyone is saying that it isn’t a question of “if”, it’s “at what pace?” Interestingly, it’s very different specialty by specialty. In orthopedics, where CMS has really pushed this quite hard, we’re finding very entrepreneurial surgeons who are often co-owners of ambulatory surgery centers, starting to go to commercial payers in their own markets and offering a slightly better deal than the physicians and hospitals next door, asking to be preferentially sent volume. They’re beginning to volunteer themselves into commercial bundles.
 
The signs are there for this to pick up speed. I do think that if you get 30% or 40% of your book of business on these models, it creates a tipping point where we’ll rapidly accelerate to more like 70% or 80%. If you had to ask me when that tipping point will occur, it’s probably more in the 5-to-10-year timeframe. In some ways, it might be an advantage to be a startup, because as long as you’re able to find enough early adopters, you can grow alongside them. If you’re an established player, you usually have to keep investing in the here and now, so the unit that is working on the new bundles is often starved of capital. It’ll be interesting to see how that element plays out.
 
