
Ep. 13: Kevin Ross, CEO of Precision Driven Health. Topic: Ethics in Machine Learning

Kathy: Hi everyone, and welcome to episode 13 of the Smarter Healthcare Podcast. Our guest today is Kevin Ross, CEO of Precision Driven Health. Kevin is here to talk to us about machine learning in healthcare and the ethical considerations that come with implementing it. It's a timely and fascinating conversation. I hope you enjoy.


Kathy: Thank you, Kevin, for joining the podcast today.

Kevin: Thanks, Kathy.

Kathy: First could you talk to me about your organization, Precision Driven Health? What’s your mission?

Kevin: Precision Driven Health is a partnership in New Zealand, and our mission is to create the capability to optimize health decisions for every individual and their family (we call that whānau in New Zealand) by combining the data and the models that are available. So we pull together people from the health sector who are involved in healthcare delivery, along with software developers, academics, and even the government, to help understand how decisions are made and how we can improve them.

Kathy: We’ve heard a lot about machine learning in healthcare. How advanced are we right now?

Kevin: Healthcare is in a lot of ways behind the rest of the experience we have as individuals, especially as consumers. We're getting much more used to data being used in different ways in how we shop, how we bank, all these other areas, and health always feels somewhat behind. At the same time, there's a huge amount of research going on in health, and health has always been a very research-led industry, so machine learning and artificial intelligence are going to come at a great pace over the next few years because of all the work that's been going on in the background to integrate these concepts from other industries into the health sector. But it's fair to say that health is slower to move. In a lot of ways that's a good thing: we're making very important decisions about individuals and society, and we don't want to make those in an ill-informed way. We want to retain the really high level of trust between people and those who provide care and advice to them. Bringing machines into that conversation can introduce dynamics that we know can be really beneficial, but it can also add some risk.

Kathy: And what are some of the ethical challenges when it comes to machine learning?

Kevin: One of the core pieces of machine learning is really understanding how models are going to be used in practice. In healthcare in particular, we have a history of only changing the advice we give after a really thorough and rigorous process of understanding. We do clinical trials in certain ways because we know historically that you can make mistakes: just because something looks like a pattern in the short term doesn't mean it will hold for the rest of the examples you apply it to. With machine learning, you're almost going from a world of really slow, thorough, methodical research to what I think of as real-time research. You're seeing a patient in real time, and you're essentially getting a machine to help with the analysis that would usually take the form of a person racking their brain for everything they know about the individual in front of them and everything from their education, mixing those together, and saying, "The best thing for me to tell you right now is this piece of advice; this is my diagnosis, or this is my recommendation."

When you bring artificial intelligence or machine learning into that, you're asking to accelerate that whole process, to do that pattern matching in real time, and sometimes even to invite the machine to discover new things in real time. You're asking questions on the spot that no one has ever had a chance to think about in broader terms, and therefore no one has done a traditional clinical trial to think through everything that could go wrong. As soon as you ask a machine, "Based on the person in front of me, tell me about people like this," it could tell you all sorts of things, and that invites the possibility of unexpected consequences. The machine might recognize patterns that aren't true. Or the data might not be particularly reliable, either about the person you're advising or about the population the machine is comparing them against, and by doing that sort of real-time, on-the-fly research you could easily give advice that isn't actually appropriate for that person.

There are also all sorts of things that happen when you build models that need to be thought about carefully. In particular, models are only as good as the data that goes into them, and the incentives you give them. One of the classic questions in health is: are you aiming to give the best possible medical advice, or to treat in the most efficient way? Depending on the incentives the machine builds its models around, the advice can be nuanced by what it's actually trying to achieve. Is it trying to get someone out the door quickly, to get the best possible return for the consumer, or the best possible return for the organization that's funding the healthcare? There are a lot of different players involved in healthcare.
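To make that incentive point concrete, here is a minimal sketch in Python, with entirely hypothetical treatments, costs, and outcomes, showing how the objective a model is trained to optimize changes the advice it produces:

```python
# A minimal sketch, with entirely made-up numbers, of how the objective a
# model is trained to optimize changes the advice it gives. Nothing here
# reflects a real clinical dataset or a real decision rule.
from collections import defaultdict

# Historical records: (treatment, cost, health outcome on a 0-1 scale)
history = [
    ("drug_a", 100, 0.70),
    ("drug_a", 120, 0.72),
    ("drug_b", 400, 0.85),
    ("drug_b", 380, 0.88),
]

def recommend(records, objective):
    """Recommend the treatment with the best average score under `objective`."""
    totals, counts = defaultdict(float), defaultdict(int)
    for treatment, cost, outcome in records:
        totals[treatment] += objective(cost, outcome)
        counts[treatment] += 1
    return max(totals, key=lambda t: totals[t] / counts[t])

# Trained to maximize health outcome: recommends the dearer drug.
print(recommend(history, lambda cost, outcome: outcome))   # -> drug_b

# Trained to minimize cost: recommends the cheaper drug.
print(recommend(history, lambda cost, outcome: -cost))     # -> drug_a
```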

Kathy: And are there examples you could give of machine learning going wrong in healthcare?

Kevin: One of the biggest challenges in applying machine learning in health is that we often have data that's either missing or that we have to make inferences from. One area where that's particularly challenging is when people try to approximate the level of need in a population. At one level, people who see medical professionals a lot are considered high-need, and those who don't see them very often are considered low-need. But if you think of that from an access perspective, those who are unable to get treatment will also appear among those who see their medical professionals less often. There have been studies that tried to build models comparing, for example, different ethnic groups in terms of their actual need around a particular health condition, and because they used the frequency with which people present with that condition as a proxy for how prevalent it is in that population, you've seen Black Americans in certain circumstances come up as not a very high-need group for a particular medical need, where the reality appears to be that the reason they haven't presented very often is access: they haven't had access to that treatment, so they don't get to the doctor as often and don't receive those services as frequently.

From a machine learning perspective, we often have to put these interpretations across the data. We want to say, when we see a patient for the first time, that we want to compare them with people like them. If we use some of these proxies for need, based on the population history we have, then even doing all the good data analysis you would normally do, the machine can make the same mistake a human makes just as easily. It could say, "Well, people like you don't seem to have this condition, so it's probably not that," when actually people like you haven't been able to get treatment for this condition at a particularly high rate. That happens all the time in health these days: people who are not well represented in the historical data tend not to get the same level of accuracy from the models being deployed.

Another one we come up against, being in New Zealand, is that we're often translating research done in the United States or Europe that has almost exclusively been tuned for populations with a different makeup from New Zealand's. We've seen proposals to bring in models that have never been tested against our Māori population, for example, the Indigenous people of New Zealand, who make up 15 to 20% of our population. They're much less likely to benefit from those models, and models brought in from international studies are likely to miss particular needs in their health, not through any specific design flaw, but simply because the research was done on a population that doesn't include people like them at all. So when we bring in machine learning models, whether they translate a radiology image to assess cancer risk, or a dermatology image to assess skin cancer risk, or more sophisticated models that read someone's medical record and ask whether the notes are representative of a certain risk category, because those models were built on another population, they won't work as well for a new population.

We see those sorts of things starting to come through all the time. We haven't seen too many major missteps in that area, probably because health is quite conservative about automating anything: you'll often bring something in relatively slowly and keep human oversight as part of the process, so in general these models are being brought in one step at a time. But the potential for them to be causing harms we don't even know about is always there, because again, you're doing these things at an incredibly high pace.
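As an illustration of the proxy problem Kevin describes, here is a minimal sketch with synthetic numbers: two groups have identical true prevalence of a condition, but when presentation frequency stands in for need, the low-access group appears far healthier than it really is (the group names and rates are hypothetical):

```python
import random
random.seed(0)

# A minimal sketch with synthetic numbers: two groups have the SAME true
# prevalence of a condition but different access to care. A model that uses
# "how often people present" as its proxy for need will wrongly conclude
# the low-access group has low need. All names and rates are hypothetical.

TRUE_PREVALENCE = 0.20                        # identical in both groups
ACCESS = {"high_access_group": 0.90,          # chance a sick person
          "low_access_group": 0.40}           # actually reaches a doctor

def observed_presentation_rate(group, n=100_000):
    presented = sum(
        1 for _ in range(n)
        if random.random() < TRUE_PREVALENCE   # truly has the condition
        and random.random() < ACCESS[group]    # ...and manages to present
    )
    return presented / n

for group in ACCESS:
    rate = observed_presentation_rate(group)
    print(f"{group}: observed {rate:.3f} vs true prevalence {TRUE_PREVALENCE}")
# The low-access group appears to have roughly half the need of the
# high-access group, purely because of access, not biology.
```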

Kathy: Right, and it seems like there's a big risk of relying too much on the machine as opposed to relying on what the provider knows and has learned, and maybe losing some of the nuance that comes from asking more detailed questions.

Kevin: Yeah, that's true. We should always keep the alternative in mind, though. Machines can make the current process much more efficient and effective; they can process data far faster than humans ever will, and that gap is only going to get bigger. We just have to be very careful that when we ask machines to do that, we're not simply replicating something that is less than ideal. The last thing we want to do is more efficiently make the wrong decisions, or widen what we think of as the equity gap, because the models work really well for people who are already quite well off and not so well for those in minority groups who, by nature of being in the minority, aren't well represented in what the machine can understand. That already happens with humans: humans already have a bias toward what they're used to seeing. Machines will automate a lot of that, and we need to make sure we use that amazing power in a way that maximizes the benefit we get from it while protecting us from the potential harms.
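A minimal sketch of that equity-gap failure mode, with synthetic data: an overall performance number can look fine while a per-group breakdown shows the model missing sick patients in an underrepresented group (all groups, scores, and distributions here are made up):

```python
import random
random.seed(1)

# A minimal sketch with synthetic data: a single decision threshold tuned,
# in effect, on the majority group silently misses sick patients in a
# minority group whose risk scores are distributed differently.

def make_patients(n, group, shift):
    # (group, risk_score, truly_sick): sick patients score ~1.0 higher,
    # but the minority group's scores are shifted down by `shift`.
    patients = []
    for _ in range(n):
        sick = random.random() < 0.3
        score = random.gauss((1.0 if sick else 0.0) - shift, 0.7)
        patients.append((group, score, sick))
    return patients

patients = (make_patients(9000, "majority", shift=0.0)
            + make_patients(1000, "minority", shift=1.0))

THRESHOLD = 0.5   # works well for the pooled, majority-dominated data

def sensitivity(pts):
    """Fraction of truly sick patients the model actually flags."""
    flags = [score > THRESHOLD for _, score, sick in pts if sick]
    return sum(flags) / len(flags)

print("overall :", round(sensitivity(patients), 2))
for group in ("majority", "minority"):
    subset = [p for p in patients if p[0] == group]
    print(group, ":", round(sensitivity(subset), 2))
# Overall sensitivity looks acceptable, but the per-group breakdown shows
# the model catches far fewer sick patients in the minority group.
```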

Kathy: So where is it in the process that we have to consider some of the ethics? Is it when we’re designing the algorithms? Is it when we are implementing the algorithms in some kind of healthcare setting?

Kevin: The reality is that everyone has a responsibility right throughout this whole process, and I think that's one of the things that's really come through in the last few years in machine learning ethics and artificial intelligence: a recognition that everyone has a role to play, right from the very start (am I asking the right question, with the right understanding?), through the choice of where to get data, how to understand and interpret that data, what types of models are appropriate, and the decisions made along the way, right through to "I've got an algorithm I'd like to try on real people, and I want to understand how it actually works."

The main responsibility in all of it is transparency: being really clear about our assumptions. We will always have to make assumptions. The reality is that in health we always give advice on incomplete information. Any time you see a nurse or a doctor, they only know a fraction of what's potentially knowable about you or your condition. That doesn't take away their responsibility to give the best advice they can, and it doesn't mean they have to know everything about everything before they can say anything. We need to be really up front about that and understand that health is an inexact science: people will always give the best possible advice given incomplete information, and the best thing we can do is make that really clear. This is where this model was developed, this is why it was developed, this is the setting we had in mind when we developed it. Here's the data we used; here's what we did with missing data, because we didn't have everything we wanted; here are the decisions we made about the model itself, and why. And at every step, ask: who could benefit from this, who could be harmed by it, how might it be used as intended, and how might it be used if someone doesn't fully understand it and puts the model into a decision-making process it wasn't designed for? In all of this you want really good transparency, really open understanding, and, at the deployment end, really good tools to explain what's happening.

One of the trade-offs we often find when we build machine learning models into practice is that we could come up with a more accurate model, in other words one more likely to detect the risk than not, but we can't explain it. Would we rather have something slightly less accurate but more explainable? Often that takes really sitting down and working through it: is the value of going from 85 to 86% accuracy greater than staying at 85% but being able to explain the reasons the model came up with its answer? In a lot of healthcare contexts the answer is that you want to be able to explain as much as you can.
And perhaps part of the explanation is saying, here's another model we can't explain, and it would say this instead, and having that conversation with the patient, or with the organization, in some contexts. It means really thinking through: what difference would it make to get it wrong a bit more often but be able to explain why we got it wrong, versus getting it right slightly more often but, on the cases we got wrong, not being able to explain why?
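Here is a minimal sketch of the explainability trade-off Kevin outlines, with hypothetical models and the 85-versus-86% accuracy figures from the conversation, showing how a team might weigh a small accuracy gain against losing the ability to explain predictions:

```python
# A minimal sketch of the accuracy-vs-explainability trade-off. The models,
# rules, and accuracy figures are all hypothetical.

def transparent_model(patient):
    """A rule a clinician can read: every prediction carries its reason."""
    if patient["age"] > 65 and patient["systolic_bp"] > 140:
        return True, "age over 65 and systolic blood pressure over 140"
    return False, "did not meet the age/blood-pressure rule"

def black_box_model(patient):
    """Stand-in for an opaque model: no rationale for its output."""
    score = 0.03 * patient["age"] + 0.011 * patient["systolic_bp"] - 3.1
    return score > 0, None

# Suppose validation gave: transparent 85% accurate, black box 86%.
ACCURACY = {"transparent": 0.85, "black_box": 0.86}

def choose_model(explainability_required, min_gain=0.02):
    # In many clinical contexts a one-point accuracy gain does not
    # outweigh being unable to explain the cases the model gets wrong.
    gain = ACCURACY["black_box"] - ACCURACY["transparent"]
    if explainability_required and gain < min_gain:
        return "transparent"
    return "black_box"

patient = {"age": 70, "systolic_bp": 150}
print(transparent_model(patient))                   # (True, 'age over 65 ...')
print(choose_model(explainability_required=True))   # transparent
```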

Kathy: Right. I think there's something very scary about not knowing what's in that black box. You want to be able to explain it somehow.

Kevin: Yeah, that's certainly true, although to be honest we don't ask that question when we read "people like you buy this book" or "people like you are likely to watch this movie." People are getting more and more aware of, if not comfortable with, the idea that it's just based on a lot of data and a lot of comparisons happening out there, and I think there will be cases in healthcare where that's totally fine. It's different when you're sitting down with someone you consider a real specialist expert in the field, whose training is much more on the physical and biological side than the data side, and they're the one you're trusting to give you advice. Often it comes down to the type of relationship you have there: they'll be saying, look, this is what I can tell you from the medical knowledge, this is what I can tell you from the data, let's make an informed decision between us.

Kathy: Where do you think the greatest innovations are in machine learning right now?

Kevin: Machine learning is gradually making the move from highly structured areas to much more unstructured ones. I think of the trajectory like this. We used to think of analysis as: I know exactly what the question is, I've got data that gives me the inputs and the outputs, and we get an expert to choose the right statistical model in between to link those two things together. Machine learning takes you the next step: I've got all the inputs, I've got all the outputs, and I'm asking the machine to find the relationship and build the most accurate model. Ultimately we'll get to a point where the machine is figuring out what questions even need to be asked.

The areas where that's particularly important are things like a chat interface. Someone starts out with "I'm not feeling too well," and the system takes them through the conversation a human might take them through in healthcare, an unpredictable set of exchanges that might include all sorts of inputs: some really structured information, like "I've had this test and this was the number that came out of it," along with the tone of the words used in the conversation, and potentially even "I'll take a photo of it now and upload it into the system." Machine learning is gradually taking us to a world where it can replicate the interaction you have with an expert. It's going to be a long time before it really replaces that; it's more like a third person in the room. You've got your patient and you've got your adviser, and that adviser might go from having to be a real expert to being a generalist, from a generalist to a general carer, to a family member. You can lower the level of expertise someone is required to have if a machine is helping guide the process. So I think of it not as a replacement but as an aid in that context. To me those are the really exciting advances.

The things that are already there at some level are areas like image processing. You can train a computer really well on images, again taking into account that you'll have a biased training set from the past and you have to make sure you understand that. But image processing is a reasonably clear yes-or-no scenario: does this need further attention? Other structured areas include patterns within large populations, in terms of lab results and so on. You see this with COVID-19: there are so many tests out there with so many results that we'll start being able to analyze them in a way where a machine can find nuances that would take a human a long time to find. But to me the really exciting ones are the much more unstructured areas, where conversational systems are coming along.

Kathy: And where do you think we’ll be in five to ten years as it relates to machine learning and healthcare?

Kevin: I think there will be some disruptors, in the sense of new services that come from the outside. It's one of those areas where the system itself changes very slowly, but someone can come from outside the system, and by that I mean technology companies that come in with a very specific area of expertise where they can really help people. I think we'll see a whole range of those that people will start to go to first rather than second. At the moment you'll still find most people trust their PCP to give them guidance on where to go; they'll often Google things or look a few things up first, but then they'll go through their person. I think you'll see the order of decision-making shift, and you'll see a lot more applications that are very precise for a particular type of patient. So I think what you'll see is a massive market evolve, and those offerings will cluster toward groups of models and organizations that share their capabilities.

Again, it's really fraught to predict anything as far out as ten years; think back over the number of things that have changed in the last ten. I'd draw the parallel with other service industries: think of what they were like ten years ago. Health might be about ten years behind, so you'll see a similar sort of transformation in that time, whether the comparison is banking or retail or travel, areas where we're now very comfortable and much less inclined to always go to the travel agent than we were, though that's probably more like twenty years ago now. There will be a lot more computer-first thinking.

Kathy: Great, thank you so much, Kevin, this was a great conversation.

Kevin: Sure, most welcome, thanks very much for your time.


Kathy: Thank you for joining me for this episode of the Smarter Healthcare Podcast.

To learn more about Kevin’s work at Precision Driven Health, you can follow the company on Twitter @HealthPrecision.

You can also follow me on Twitter @ksucich or @smarthcpodcast. Feel free to get in touch with comments or guest suggestions.

To listen to more episodes, visit our website at www.smarthcpodcast.com or find us on your favorite podcast app. I'd appreciate it if you would subscribe, rate, and review.

Thanks for listening!

 
