The way we practice medicine today is broken. We prioritize the business and treat the patient as nothing more than a cog in our great machine. According to today’s guest, “we have an emotional breakdown, with disenchanted patients largely disconnected from burned-out, depressed doctors.” Dr. Eric Topol calls this epidemic “shallow medicine.” It’s driven by runaway healthcare costs and an insatiable drive for increased efficiency and profits, and unfortunately, it is largely the way we deliver healthcare today. Because of this, Topol says, “patients exist in a world of insufficient data, insufficient time, insufficient context, and insufficient presence.”

But, Dr. Topol is optimistic. He believes that Artificial Intelligence (AI), while still finding its place in medicine, and still mostly unproven in real-world clinical situations, can create the space we need to change course.

In his new book, Deep Medicine, Dr. Topol makes an optimistic but realistic argument for how artificial intelligence can make healthcare human again. He tells us that it’s early, and this is definitely a race with no finish line, but we’ve seen great progress so far and there’s plenty of evidence that the tools will have a profound impact on every part of healthcare. Dr. Topol believes that we will get there, and that AI will create new efficiencies and workflows that can be used to make things either really good or really bad. He tells us that “the increased efficiency and workflow could either be used to squeeze clinicians more, or the gift of time could be turned back to patients – to use the future to bring back the past.”

On this episode, Dr. Topol explains his thesis and we explore the potential paths from where we are now, to where we might go. Dr. Topol makes a compelling case for breaking that inertia and putting the priority back on the patient.

The Path to Deep Medicine

The path to Deep Medicine is by no means a slam dunk, but there is a path if we choose to take it. There are obvious business cases for applying AI to what Dr. Topol calls “Doctors with Patterns”. Algorithms can be trained via machine learning to see things that humans cannot and will never see. This isn’t an indictment of our clinicians and their immense hard-earned skills, but rather an admission of our human limitations and a willingness to seek out tools to move past them. Topol believes the trend will start in radiology, ophthalmology, and pathology, where machines can read images far faster, and in many cases more accurately, than humans can.

The application of AI will ultimately expand to aid all clinicians. A simple, but profound example will be “liberation from the keyboard”. Voice recognition and natural language processing will allow doctors to maintain eye contact and physical touch with their patients as their conversation is captured in real-time and converted into medical charts automatically. Additionally, according to Topol, AI will level the medical knowledge landscape and put a new premium on doctors with emotional intelligence. Topol says this is the opportunity to “restore the precious and time-honored connection and trust – the human touch – between patients and doctors.”

Breaking the Status Quo

Today we blindly apply diagnostics and technology to the “average patient”, who Topol says “does not exist”. We use surrogate measures with flimsy evidence because that’s the best we can do with our current knowledge, data, and limitations. This leads to over-testing, false positives, and a missed opportunity to treat the individual patient based on their very specific situation. The rise of AI will give clinicians a “new partner” to help them do just that.
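A quick back-of-the-envelope Bayes calculation helps show why testing the “average patient” produces mostly false positives, while targeting a high-risk group makes the same test useful. The numbers below are purely illustrative assumptions (not figures from the book), and the `ppv` helper is just my own sketch:

```python
# Why screening the "average patient" backfires: a base-rate sketch.
# All numbers are illustrative assumptions, not figures from Deep Medicine.

def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability that a positive test result is a true positive (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Screening everyone: the condition is rare in the general population.
print(f"{ppv(0.01, 0.90, 0.95):.0%}")   # ~15% -- most positives are false alarms

# Screening only a high-risk subgroup identified by deep phenotyping.
print(f"{ppv(0.20, 0.90, 0.95):.0%}")   # ~82% -- the same test becomes useful
```

With the exact same test, concentrating it on the patients most likely to have the condition flips it from noise generator to genuinely informative, which is the “new partner” idea in a nutshell.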

Data Challenges

One of our biggest challenges will be gathering the data necessary to enable the new algorithms. Topol notes, and I emphatically agree, that the tech companies getting involved in healthcare have underestimated the difficulty of assembling and aggregating that data. I’d go further: very few people, inside or outside of healthcare, fully appreciate the challenges we face on this front. This lack of appreciation is the primary reason we’re still having a national conversation about the lack of interoperability, and it will be one of the most important obstacles to overcome if we’re ever going to realize the true potential of AI in medicine (more on my position here).

Man Plus Machine

Topol does not believe that clinicians will be replaced by AI. On the contrary, he sees the future of healthcare as one where clinicians welcome the new algorithms as valuable partners in delivering care. And it’s not just about doing what we do now more quickly and efficiently; it will enable us to do new things that just aren’t possible today. It will enable PCPs to more adequately address issues that require a specialist today (e.g., dermatology) and it will enable non-clinicians to take on more of the grunt work of medicine. Rather than replacing clinicians, this will free them up to deal with the more important issues. It will enable them to spend more time addressing the critical issues of their patients, and pondering the “why” behind what they do, rather than the “how”.

Deep Liabilities and Fear

There are well-founded concerns that AI can also lead to bad things. Insurance companies using it to deny coverage or raise premiums ranks high on that list. And with the deep phenotyping that these algorithms will require, there will be a very rich set of data on every patient and that leads to obvious concerns around privacy and security. Further, a “bad” algorithm can quickly scale physical harm to patients or make inequities worse.

Additionally, there is the issue that we can’t yet explain how many of the algorithms work. How will regulators deal with that uncertainty? And how will patients and doctors feel about applying algorithms that they can’t explain to make critical medical decisions?

What we have here is a bit of a marketing problem (as is often the case). First, we have this expectation that what doctors do today is not “black box”. That is just flat out wrong, but we are more willing to accept a human black box than an artificial one. Interestingly, Topol makes the point that we may someday be able to explain the algorithms better than we’ll ever be able to explain why humans do what they do.

In making this assessment, we ignore the fact that every doctor is prone to human biases and the limitations of their own experience. Topol breaks this down and skillfully applies many of the ideas from Daniel Kahneman’s Thinking, Fast and Slow to explain where this occurs. Topol points out that we’ve been doing similar things for a long time, but in the old days we simply labeled it “computer-aided”, rather than AI. Sure, AI sounds sexier for marketers of AI tools, but maybe going back to that “computer-aided” label would prevent some of the fear. Or we could try Topol’s term: “A more human medicine enabled by machine support”.

Finding Deep Empathy

Topol makes a compelling case for how we can use AI to return humanity to medicine, and I hope it plays out that way. To me, the big question is this: How will we override current inertia so that we don’t use the new efficiencies to increase patient throughput and drive revenue even higher? How can we make the case that deep empathy and patient-centeredness are not only good for doctors and patients, but for the business too? If we can find that alignment, then, with time, I think Dr. Topol’s vision will become a reality.

This is a well-researched, well-written book, and I got tremendous value from reading it. I strongly recommend it to anyone working in healthcare innovation, AI, or who is simply interested in finding new ways for our healthcare system to move forward. ~ Don Lee



About Dr. Eric Topol

Eric Topol, MD, is Executive Vice President and Professor of Molecular Medicine at Scripps Research, and the Founder and Director of the Scripps Research Translational Institute.

Voted as the #1 Most Influential Physician Leader in the United States in 2012 in a national poll conducted by Modern Healthcare, Dr. Topol studies technologies that are changing the future of medicine.

A longtime practicing cardiologist, he was widely credited for leading the Cleveland Clinic to become the #1 center for heart care. While there, he also started a new medical school, led many worldwide clinical trials to advance care for patients with heart disease, and spearheaded the discovery of multiple genes that increase susceptibility for heart attacks.

Since 2006, he has led the flagship NIH grant-supported Scripps Translational Science Institute in La Jolla, California. He has published more than 1,100 peer-reviewed articles, has over 230,000 citations, was elected to the National Academy of Medicine, and was named in GQ Magazine as one of the Rock Stars of Science. From 2017 to 2019, he was commissioned by the UK government to lead a team assessing technology and planning the future of the National Health Service. He is also the Editor-in-Chief of Medscape. His previous books, The Creative Destruction of Medicine and The Patient Will See You Now, were both published by Basic Books.

He lives in La Jolla with his family.

(Photo © John Arispizabal)

Trying to drive change within your healthcare organization? Launching a new product? Having trouble getting decision makers’ attention and buy-in?

We’ll help you understand the whole picture so that you can align your innovation with the things decision makers care about. And then we’ll help you execute. It’s not easy, but it’s possible, and we’ll help you get there. Sign up here and we’ll keep you up to date on healthcare industry news with podcasts, blog posts, conference announcements and more. No fluff. No hype. Just the valuable (and often not-so-obvious) information you need to get things done.


The #HCBiz Show! is produced by Glide Health IT, LLC in partnership with Netspective Media.

Music by StudioEtar

Transcription: Deep Medicine with Dr. Eric Topol

Don Lee: [00:00:00] You’re listening to The HCBiz Show, the podcast dedicated to unraveling the business of healthcare. I’m your host Don Lee and I’m welcoming back to the show today my cohost Shahid Shah. Welcome back, sir.

Shahid Shah: [00:00:18] Hey, thanks. Looking forward to this really exciting conversation with Doctor Topol. Everybody knows him from the industry but they don’t always get to have a nice fireside chat with him so I’m looking forward to having this conversation with Eric today and diving deep on artificial intelligence and machine learning in healthcare.

Don Lee: [00:00:37] Yeah, absolutely. Every once in a while we really get treated to a superb guest that is going to get to come on and talk about some interesting topics with us. Today is absolutely one of those cases. As you mentioned, we are welcoming to the show Doctor Eric Topol. Welcome, sir.

Dr Eric Topol: [00:00:51] Thanks very much. I’m glad to be with you both, Don and Shahid.

Don Lee: [00:00:54] We are going to be talking about your forthcoming book, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. That’s coming out next week, I believe, on March 12th. Is that correct?

Dr Eric Topol: [00:01:07] That’s it, a week from today.

Don Lee: [00:01:08] Awesome. This will be live on the day of that book launch so that’ll sound kind of silly that I’m talking about the future but that’s okay. This is your third book that you’re coming at us with and a lot of stuff that you’ve been writing about, the future of healthcare and how technology plays a role and really kind of driving us forward in the way that we’re thinking about things. I wanted to open up a little bit there, if you could tell us, I guess the simplest way I could put the question is: what are you trying to accomplish with these books? You wrote The Creative Destruction of Medicine. Then you wrote The Patient Will See You Now and now you’re talking about artificial intelligence in Deep Medicine here. What is the progression of your message to the industry or to the world, if you will? What are you trying to accomplish with these books?

Dr Eric Topol: [00:01:53] Yeah, I’m glad you asked that, Don. Basically healthcare is so behind in technology. It’s actually pathetically behind, so the first book I wrote was about how we need to digitize, and that’s really only now starting to come into play with respect to things like more use of sensors and genomics and all the things that we can do to bring digital infrastructure into healthcare, and that’s against the background of the absolute horror show of the electronic health records. That gave it a terrible start.

Then the second book I did was about democratization because once data was eminently portable, because it was digitized and because people were going to be generating their own data through sensors, that was going to reset the whole way the healthcare interactions would proceed. The new book of Deep Medicine is really circling back and how we can use this technology, the third D, deep learning, to enhance the humanistic side of healthcare in medicine because that’s what’s really been steadily eroding over many decades and we’ve got to turn the clock back. We’ve got to use the future to bring back the past when there was a deep and important relationship between doctors and their patients.

Don Lee: [00:03:17] That’s interesting. Because you’re using the term Deep Medicine right in the title of the book there, obviously there’s this opposite thing that is shallow medicine, that is the thing we’ve devolved into, it sounds like you’re suggesting, and that we need to move away from. Just a little more definition there for the audiences, what do you mean when you say shallow medicine?

Dr Eric Topol: [00:03:35] That’s unfortunately the major way we practice today. Very few minutes with each patient. There’s hardly a way to develop or sustain a meaningful relationship, but also with that, we are not even looking at patients because we have keyboards and screens to distract us, and they’re mutually hated by both clinicians and patients, these keyboards and lack of eye contact. Beyond the shallowness of the interaction, we have such a bad track record of mistakes, more than 12 million serious diagnostic errors per year, and lots of treatment errors, one of the leading causes of death, in fact. This is a big problem in the way healthcare is practiced. It’s terribly inefficient.

It’s wasteful but most importantly of all, it’s led to the burnout of clinicians. There’s 50% or more of doctors suffering from burnout, 20% from clinical depression, a record number of suicides, of people leaving the healthcare profession. That actually doubles errors. That’s been shown, so we have this kind of vicious cycle in this shallowness where the people who went into healthcare in the first place have lost their way. They wanted to care for patients and they can’t provide care. They can’t execute the mission that attracted them or lured them to go into medicine. We’ve got to fix this problem. I think we have a way forward and that’s why I wrote this book.

Don Lee: [00:05:13] Very good. When you do, I think the answer to this one probably seems kind of obvious to people right now but just to hear it in your own words is then, what is that deep medicine? What is the difference between what you just described and what we’re doing today? I guess we’re kind of putting bookends on the conversation, then we’ll work in between those but what is deep medicine? What do you think that future looks like if we do this right?

Dr Eric Topol: [00:05:35] Yeah, that’s the essence of it, Don. I think what we’re talking about is to use the deep phenotyping, the deep understanding of each human being, all the different layers that make up the medical essence of a person, so we’re not treating people the same but rather specifically to their story, their biology, their anatomy, their physiology, their environment. Then using deep learning, AI, we can connect the dots between all that data, which no human being can assimilate, to deep empathy. What I mean by that is having the gift of time, making the human performance of clinicians far more efficient, keyboard liberation, and taking all that data that doctors couldn’t handle, that overwhelmed them, and using machines, which essentially have an insatiable hunger for data. By offloading so much of the administrative and data-interpretation burden to algorithms and by enhancing the interactions, the human side, the human touch, we can get this deep empathy, which should be, I believe, the most far-reaching and important goal of the AI era in healthcare.

Shahid Shah: [00:06:58] Yeah, that makes sense. Eric, here I loved your progression. Digitize first, then democratize and now go to deep medicine. With the horror show, as you said, given the EHR and now with the interactions being reset, it seems like physicians are certainly going to be ready if systems manufacturers and software engineers like Don and myself can actually solve the right problems at the right time with the right patients in front of them. Given the complexity of what you’re seeing, where are the specialties that you think could begin this deep medicine first? Is it primary care? Is it a particular specialty? Obviously not everyone will move at the same pace so where might there be some, quote, low-hanging fruit, unquote, for some certain specialties where we could apply what you’re thinking about?

Dr Eric Topol: [00:07:47] Right, Shahid. I think the key point here is that it applies to every type of clinician, from primary care, every specialty, paramedics, nurses, pharmacists, but the first front of this wave will hit, and already has, radiologists, because all scans are digitized now, so it lends itself perfectly well to algorithmic interpretation. That’s a big step forward because the machines can be trained to find things that humans can’t. We already have a number of FDA-approved algorithms in the US for enhanced radiologic performance. Ophthalmology interestingly is the other specialty that’s really in the forefront. The ability to diagnose diabetic retinopathy is such that the receptionist in a primary care office could do that now rather than a doctor, no less an ophthalmologist, and as you know, more than half of diabetics, patients with diabetes, never have screening of their retina, and that’s a leading cause of blindness, and it’s preventable. This is something that we’re seeing already today.

These are FDA-approved indications but the biggest thing that will have wide scale impact is the keyboard liberation because voice recognition is so extraordinary now and there’s over 20 companies that are already working on the keyboard liberation mission and that will be a welcome change to everyone where we have the notes synthesized from the conversation and also archived for the patient to review as well as edit the note. This is going to be a step forward not just in getting rid of the keyboard and the screens but also getting the patient activated and involved in making the whole experience far better.

Shahid Shah: [00:09:47] Yeah and I think that that’s probably where, especially physicians like yourself who are literally years ahead of everybody else, could actually help everyone else and that is understanding that you don’t have to start with the entirety of medicine, though the entirety of medicine will be impacted. For example, dermatology is another area where you can actually take pictures of skin and be able to do a quick diagnosis. When you look at these kind of point solutions, as it were, doing the retinopathy scan, doing things like dermatology, certainly radiology, as you look at these, are there other specialties that you can say, “If patients aren’t doing X now, they might as well start,” and there might be some certain applications or certain things that you’re seeing out in the market which seem ripe and ready today, not two or three years from now? Dermatology, we talked about the ophthalmology, we talked about radiology. Anything else that comes to mind?

Dr Eric Topol: [00:10:39] Dermatology, as you mentioned, is very attractive but there isn’t any approved algorithm for AI yet. Hopefully that’ll be imminent, and that’s particularly been validated for melanoma and skin cancers, not for all the other types of skin rashes and lesions, and we haven’t seen the prospective study yet for that, so that’s important. It’s still wanting, if you will.

I think what’s really interesting is gastroenterology, the machine vision during colonoscopy, because small or diminutive polyps are quite important and they’re frequently missed. With machine vision, there’s now been multiple studies to show that they aren’t missed, which is pretty extraordinary. If you’re going to go through that tough procedure with all the prep and whatnot, it’d be nice to know that you have a full recognition of any polyp pathology. That one I think is going to be another one of the newer ways that this gets integrated but, as I mentioned, there isn’t any specialty that’s spared of the potential impact of AI.

Shahid Shah: [00:11:45] Yeah, I completely agree. What about the practice of medicine itself? A lot of what we see in the marketplace today are digital health tools that are very basic and that EHR horror show, as you rightfully put it, has caused enough distress that physicians and especially innovation shops and hospitals and other areas, when you go in with a new idea and say, “Hey, I’ve got this new digital health tool, I’ve got this cool AI,” there are walls that are up there saying, “Not now. I’ve got enough to do keeping my EHR up to date, I’ve got enough to do with my current technology.” What do you see as a way of cutting through that noise to say, “Look, that’s IT. What we’re talking about here is new clinical, new medicine, new clinical science,” something that says it’s not just more IT. It’s something else.

Dr Eric Topol: [00:12:34] You’re bringing up the other big part of the story and that is the patient side. There’s already been, as you know, a deep learning algorithm approved by FDA for detecting heart arrhythmia, particularly atrial fibrillation, through the Apple Smart Watch. That is a beginning of so many more AI algorithms that are going to be approved for the public. The reason why that’s important is because that’s another way that important data can be generated by people themselves with algorithmic support, further decompressing the load, the burden on clinicians and doctors. This is the flywheel, if you will, that I would refer to, because you have not only the decompression directly from AI making the performance and productivity of doctors far more efficient, but you also have the offloading of many responsibilities, particularly the data gathering and the initial interpretation, which of course will often require doctor oversight. You have the two of these going on simultaneously. While today it’s just a first heart rhythm story, there’ll be many, many more in the years ahead.

Don Lee: [00:13:56] In the book when you were talking about the Apple Watch and the atrial fibrillation in particular, you also were alluding to the concerns out there about doing too much screening and you told some really interesting stories. The one that really jumped out at me, I believe it was in Korea, where they were doing advanced screening for thyroid cancer, and not surprisingly when they did it for the whole country, they increased the number of thyroid cancer diagnoses, and over time there was no impact on the mortality caused by thyroid cancer. Basically it was like they found something that was true but it wasn’t helpful to us to find it, and you were kind of talking about it around this concept of tools like Apple, the atrial fibrillation watch and all that. I was just curious, as we move forward with this, how do we kind of combat that, too, where we have all of these opportunities, too, of things we can go after? How do we make sure we’re going after the ones that are the right ones, and then how do we measure over time to make sure that we’ve made the right choices?

Dr Eric Topol: [00:14:58] That’s a great question, too, Don. The point here is that we should practice individualized medicine by bringing all the data together rather than dumbed-down mass medicine, which is that shallowness we talked about earlier. I’ll give you an example. We now have a genomic risk score for atrial fibrillation, so you would only recommend that people do screening of that heart rhythm disturbance if they have higher risk and they have symptoms, rather than putting it out for the entire public, just like the scans for cancer, for mammography. Why do we recommend mammography in all women when only 12% will ever have breast cancer and there’s a polygenic risk score for breast cancer? Of course there are other clinical features, like family history and risk factors, that are known. The same for heart disease and so many common conditions, type two diabetes.

If we start to bring the layers of data together, the genomics, the clinical features, the sensor data and on and on, we can deliver medicine, that’s the deep phenotyping part that I referred to, far more wisely and parsimoniously at a time when healthcare economics are so totally out of whack.

Don Lee: [00:16:21] That whole section that really struck me, and it was one of the areas that I think I got the most value out of the book was this concept because it’s one of the things that I hear people saying. Particularly on Twitter there’s a lot of times where you’ll see doctors saying, “Don’t just go get tested for everything because the risk of false positives, et cetera, et cetera,” and I kind of surface-level understood it but this really drove it home for me and I guess just to make sure I’m getting it, the example that I have in my head, again sticking with this Apple Watch, is there’s stories right now about how they’re talking about putting them on Medicare patients to monitor for atrial fibrillation and I’ve seen this Twitter discussion where, “No, that’s a bad idea. Too many false positives.” What you’re suggesting is you don’t put a watch on every Medicare patient. You do this phenotyping and you identify the small percentage of Medicare patients who are at the highest risk for atrial fibrillation and put a watch on them. Is that what you’re saying?

Dr Eric Topol: [00:17:13] That’s right, and of course if they have symptoms or they’ve had a stroke or a mini stroke, TIA, there are certain reasons why you would do it. You wouldn’t use any technology indiscriminately, which is how we, particularly in the US, work today. Now we know from work coming out of the Mayo Clinic that you can tell just from a 12-lead cardiogram who is especially going to be at risk for atrial fibrillation. We can come at that particular question many different ways. That brings up a fundamental issue, which is that machines can be trained with deep learning, deep neural nets, to see things that humans can’t see, will never see.

I can’t look at a cardiogram, I’ve been a cardiologist for 35 years. I can’t look at a cardiogram and say, “This person’s going to develop atrial fibrillation with X percent probability.” The fact that we can do that now, the fact that we can, with a machine, train to see a retina picture and say whether it’s a man or a woman whereas retinal specialists, it’s a guess. It’s a coin toss where machines are 97% accurate for that question. That, I think, is one of the fundamental axioms of why algorithmic medicine can benefit human performance, because our eyes, our brains can’t do things machines can and vice versa. We [have] context and judgment and all the other qualities of empathy, communication and wisdom. That’s where this synergy really comes into play.

Shahid Shah: [00:18:53] Yeah, I think that synergy probably, Eric, is the most important thing. I’ve done probably about a half dozen lectures on artificial intelligence in medicine in the last few months, and the biggest struggle that I see when speaking with physicians, and even the CIOs and CTOs and the technical groups of the hospitals, is that they’re always struggling to figure out: does this mean we’re obviating someone or does it mean something else? Really the way I try to explain it is antilock brake systems did not cause drivers to stop driving cars. They just made cars safer. It gave you a little bit more time to react to dangers. Same thing with automated steering, the kinds of materials and automation we have in cars and aircraft, et cetera. It doesn’t obviate the need for the driver or the pilot, et cetera, at the moment. Obviously we’re striving to get rid of drivers soon because they’re a little bit more dangerous than they need to be.

How do you explain this world to your fellow physicians saying, “This is not to be feared. It’s not just to be embraced, run towards it because if you run towards and get these tools in, get the innovators to pull these things that are obviously the ones that have already been accepted and have some studies behind them, don’t try to reject them. Don’t try to react to them negatively because if you get them in, then you get more time with patients. Your notes get to be much, much better. Patients can do a little bit more self care so there’s a lot of benefits,” but I struggle to work with physicians often who they think that I’m trying to automate them out of the way and I’m like, “No, I’m trying to make you better. Please let me help you.”

Dr Eric Topol: [00:20:30] Right. I agree with all your points but I think the central thing to emphasize is that we’re very early in this whole AI world and very few things have been validated, especially prospectively in a clinical environment, no less during surveillance when implemented. It’s right to be circumspect. So much of this is long on promise and potential, very short on real data, so it’s appropriate that the clinical community is circumspect until things are iced. The reason here is that if you have an algorithm that is faulty, it can hurt a lot of people really quickly and it could actually make things worse. It could make inequities worse. It could make a lot of things worse and in fact, if it’s used by administrators in the default way, it’ll be used to squeeze doctors and nurses and other clinicians more because you’ve got more productivity: I want you to see more patients more quickly. I want you to read more scans and more slides and more everything. That’s of course what’s broken the back of clinicians and why there’s serious dismay and disillusion that’s set in.

We want to invest more in the necessary validation work to secure the place of AI. The attitude could be one of improvement rather than the fear that you’re alluding to. We can’t have any exceptionalism and embrace it too early, and that’s, I think, something that can’t be underscored enough.

Shahid Shah: [00:22:03] That’s a great point and there’s a group that I work with called NODE Health. That’s the Network of Digital Evidence. That’s a nonprofit that’s been working for the last few years on this idea of: where does the evidence come from? How do we validate these things? Of course we know FDA and their new Pre-Cert program wants to do a little bit with artificial intelligence algorithms, et cetera. Where do you think we are in this state of validation? Are we in the 50s where drug companies were, and we still have decades to go before we figure out how to test these things, or are we much farther along and it’s not as bad as it seems? As I work on software, many of us including Don and I work with teams. One of the biggest issues we come up with is, “Okay, I’ve got this great idea. My algorithm is finished. The work I need now is to get data in and have it flesh through the algorithm.”

Then finally you come up with a model, and one of the biggest problems with a machine learning model, or an AI model in more common parlance, is that if you actually understand what your model is doing, the model probably is not working, because the idea is that input data generates output data. You’ve created deep learning. You’ve created proper neural nets, et cetera. Just like you and I don’t understand exactly how our brains work, there’s a model in there, but we know that the outputs work. Where do you think we are on this problem of how do you validate, how do you verify? If I go to my doctor and say, “Trust me, it works,” it won’t be just as simple as saying, “It’s FDA-cleared,” because the FDA can’t clear something when they don’t even understand how it works. How do you see this going forward?
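Shahid’s point that a model whose workings you fully understand “probably is not working” can be made concrete with a toy sketch. The network and weights below are invented for illustration; even at this deliberately tiny scale, the outputs are verifiably right while the raw weights read as arbitrary numbers, not human rules.

```python
# A tiny 2-2-1 network whose hand-set weights happen to compute XOR.
# The outputs check out, yet nothing about the weights "explains" XOR
# the way a rule like "exactly one input is on" would.

def relu(x):
    return max(0.0, x)

W1 = [[1.0, 1.0], [1.0, 1.0]]  # input -> hidden weights
b1 = [0.0, -1.0]               # hidden biases
W2 = [1.0, -2.0]               # hidden -> output weights

def predict(x1, x2):
    hidden = [relu(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    return W2[0] * hidden[0] + W2[1] * hidden[1]

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", predict(a, b))  # prints 0.0, 1.0, 1.0, 0.0
```

Scale this up to millions of weights learned from data rather than set by hand, and the “we know the outputs work but not why” situation Shahid describes follows naturally.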

Dr Eric Topol: [00:23:45] This is a really interesting controversy that you’re alluding to, and it’s about black box algorithms and the explainability of AI. Just as you point out, there are a lot of things we do in medicine that are totally unexplainable, or at least as yet unexplained. The algorithms are in a similar state. Most of them, we don’t have them deconstructed. They’re not transparent, and so the question, of course, is whether to accept them if they’re totally validated in a clinical environment with large numbers of patients, diverse and with all the qualities that we want, replication, or do we hold them to: well, if it’s not fully explained, we’re not going to use it? That debate hasn’t really been settled, but the good part is there’s more AI work being done now to basically dissect the algorithms, to find out what the features are, through the artificial neuron layers, that make them work.

I think over time the explainability aspects of AI will be improved. That doesn’t mean we’re going to understand all the things in medicine that we haven’t understood for decades, either. We have this kind of interesting demand, or threshold, where we’re holding AI in some respects to a higher level of accountability and explainability than we are for so much of what we do in medicine today.
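One of the simpler techniques behind the algorithm-dissection work Dr. Topol mentions is perturbation-based attribution: nudge each input feature and see how much the output moves. This is a minimal sketch of that general idea, not a method from the book; the model and its weights are stand-ins.

```python
# Perturb each feature of a black-box model and record the change in
# output; large changes flag the features the model actually relies on.

def model(features):
    # Stand-in for a trained, opaque model (invented weights).
    return 0.8 * features[0] + 0.1 * features[1] + 0.0 * features[2]

def attributions(features, eps=1.0):
    base = model(features)
    scores = []
    for i in range(len(features)):
        bumped = list(features)
        bumped[i] += eps       # nudge one feature at a time
        scores.append(round(model(bumped) - base, 6))
    return scores

print(attributions([1.0, 1.0, 1.0]))  # feature 0 dominates: [0.8, 0.1, 0.0]
```

The attraction is that it treats the model purely as input-process-output, so it works even when the internals are opaque; the drawback is that it probes one feature at a time and can miss interactions between features.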

Shahid Shah: [00:25:11] That’ll be very similar to the way we’re going to treat automobiles, right? When the first driverless cars kill two or three or four people in an accident, we’re going to treat them much differently than the thousands of human drivers who kill thousands of other people every day, and that is a challenge. Are you seeing anybody? You don’t have to mention the name of a company specifically, but do you know of any groups that are doing a pretty good job on how to define the intended use, as it were? In the old parlance we used to say, “This is my medical device, or this is my biologic. Here’s its intended use. Here are its indications for use.” What are you seeing that’s working well in that area in terms of machine learning and AI?

Dr Eric Topol: [00:25:50] There’s no shortage of entries here. Every tech titan is all over this space, and then there are just hundreds of startups, so I don’t know that there’s one general answer. It’s just that, more and more, we’re finally seeing the convergence of the computer science and data science world with clinicians, and that’s a good thing: to identify the unmet needs, to be able to make a call as to whether something is ready to go into real clinical trials, and what those clinical trials should look like. This is, again, relatively nascent. Only in recent years have the tech titans been hiring physicians and other clinicians in any reasonable numbers. We’re finally getting this into high gear, and I think we’ll certainly see the output of that in the near term.

Don Lee: [00:26:42] Eric, where do you stand on that whole debate? If there is evidence that something is going to work but we don’t necessarily understand how, where do you draw the line between when we should start to think about incorporating it into the actual practice of medicine and when it needs more validation? I know this is an area you addressed a lot in the book, too, when you were talking about some of the really outsized claims that come out after studies and are maybe a little bit exaggerated. I’m just curious, how do you evaluate these things? When do we start to think about putting something into practice?

Dr Eric Topol: [00:27:16] It’s interesting. During the years of research I did on the book, I spent time with some of the real AI experts around the world, and one of them was Pedro Domingos from the University of Washington, who wrote The Master Algorithm, which is an exceptional book. When I talked to Pedro about this, he said he doesn’t care whether it’s explainable if it’ll help him or his family with their health. He’s a leading computer scientist, and he would be happy to have the algorithm used. I can identify with that. That is, there are so many mistakes today in medicine, as we discussed, in diagnosis and in treatment, that if an algorithm can take a person’s data and come up with a better answer, and it isn’t fully explained, that might be worth consideration. Unlike Pedro, I’m really used to high-quality medical research; that’s not his field, of course, and I’ll still be requiring that. That is, if it’s validated in that diverse, large population, unquestionably and replicated, even though it’s unexplained, I’ll definitely seriously consider using it on myself or patients, family, whatever.

We haven’t gotten to that dilemma you’re bringing up yet. I think we will get there, but I also think the computer science world is really getting serious about deconstructing algorithms, and we’re starting to see that more. One of the best examples is at Moorfields Eye Hospital in London, the leading eye institute in the world. They used OCT, optical coherence tomography images of the retina, in thousands of people to determine, across more than 50 different conditions, whether people needed an urgent referral. This was, of course, a massive retrospective data set that was highly labeled and annotated carefully and accurately with ground truths, and what they did was spend a lot of effort to explain how the algorithm was working, because it was essentially perfect: it didn’t miss one urgent referral.

If we see what they did, and that was with DeepMind and Pearse Keane and his team at Moorfields, if we see that more, that would be what we’d love to embrace: not just the validation, as they showed at least retrospectively, but also the precise way it was accomplished through the algorithm.

Don Lee: [00:29:51] Yeah, and you talk a lot about that validation gap, too, where most of the studies that we see and read and hear about in the news are done, as that one was, retrospectively on cleaned-up data sets, things that were really prepared for them, and in some cases they’ve been able to, quote, outperform the physicians. Then there’s the challenge that … I wish I could remember who you quoted, but someone in the book said that if you take that same algorithm and put it out in the real world prospectively, it’ll probably underperform the physicians. Obviously that’s anecdotal, but it draws that interesting line you were just talking about.

Dr Eric Topol: [00:30:28] That will improve. It may even be that some day we surpass the human side of this in medicine, where we can explain the algorithms better than we can explain many of the things we do on a routine basis.

Shahid Shah: [00:30:39] Eric, as you think about keyboard liberation and voice recognition and those kinds of things, I know we don’t have to do only one thing or the other, but look at where the digital part of note-taking and other information gathering is going: removing the horror show of the EHR and getting that better, versus these better algorithms to actually focus on clinical science and medicine, et cetera. Which of the two do you think has, or should have, more importance right now? Do we try to optimize clinicians’ current productivity because of the damage we’ve done with information gathering, or do we focus on new algorithms, new mechanisms for better observations, better diagnostics, and things like that? Again, we can walk and chew gum at the same time, but which one do you think is more important?

Dr Eric Topol: [00:31:29] I don’t know that it’s an either/or. I think what we have is health information companies like Epic and Cerner, enterprise companies that are just basically gouging health systems with this horrendous software, and this whole notion of a patient-centric world is a farce. The first thing we could do is give patients all their data. They should rightfully own all their data, and that’s going to be vital in the AI era, because if you don’t have all the inputs, and no one in this country has all their data, then you don’t have good outputs, and you’re not going to get the benefits of deep learning for yourself, let alone at a population level. We need a complete reboot of how we take care of medical data in this country, and it isn’t the way things have been going for the last 20-plus years.

As far as voice recognition, that is a step in the right direction, because the voice archive that’s synthesizing the note will have a powerful synthetic capability that far surpasses the notes generated today by Epic and Cerner and all this other clunky IT software, which is just unacceptable. So the tech companies that have come into this space are a welcome event, because if we really do provide the data to each person, they rightfully should own it and be able to control and parse it: to give it to clinicians as needed, to use it for medical research as they see fit, or, if they want to sell their data or part of their data, that should be their right. But that isn’t the way it is right now, and that has to get fixed.

Shahid Shah: [00:33:19] In order to fix that, one of the biggest challenges we’re seeing across the IT community and the innovator community at large is prioritization, which is why I’m asking these questions about what to prioritize in as many different ways as I can. If you have 1,000 things that are important, it means nothing is important, so there’s got to be something we can say to our chief technology officers and chief information officers. The security officers are busy trying to keep more stuff from coming in, so if you were to give some advice to chief executive officers to help provide proper prioritization to their chief information and technology officers, how would you say it? We know all of these things are really important. Would we say, “Hey, work on X before Y before Z,” or do we need to give more outcomes-oriented guidance and not provide specifics about prioritization? Do you agree that prioritization is a problem, or is that a cop-out, where people just don’t want to do something and that’s why they ask for priorities?

Dr Eric Topol: [00:34:22] The priorities are not at the patient level; that’s the problem. The priorities are at the business level, and that accounts for why we’re seeing the steady erosion of the doctor-patient relationship and the rise of companies like Epic, Cerner, and the rest of them that don’t cater to patients at all. The priority has to go back to the patient, and we should have that as the fundamental driving force. AI can certainly help by giving the data and the algorithmic support to patients, to the public, so they can do a lot of things on their own. Many conditions, like the skin rashes and ear infections and routine things you mentioned that drive so many interactions with patients, can be handled by bypassing the current health system path. The priority ought to be to liberate, to support autonomy, to give as much support as we can to the patient-centric mission, which hasn’t really been the case.

Don Lee: [00:35:26] Shahid, thinking back to this conversation, and overall to the premise of the book, the deep medicine premise, if you will, that Doctor Eric Topol brought to us and talked about here today: step one is to take these machine learning and AI algorithms and apply them to what he calls doctors with patterns. That’s your radiologists, dermatologists, and the like, where there is, at least in the studies, evidence to suggest that a machine can do more radiological studies in less time than an individual human radiologist can. If that proves out to be true, then it makes perfect sense to me from a business standpoint why you would implement it. There’s plenty of incentive for a business to do so. I can get there.

Then what Doctor Topol is suggesting is that we hit an inflection point where either we let current inertia take us to the place where we use that new space to be more profitable and keep more money in the health system, or we take the option, which obviously he thinks is better and certainly sounds better, of giving the time back to the doctor-patient relationship, which, in the long run, he believes will produce better outcomes and ultimately produce things that are better for the business. It’s that gap, though, and it’s the conversation we just had at the end there about creating tools that will promote autonomy for patients and empower them, or promote autonomy for patient-doctor diagnostics and empower them as a team.

Outside of that sounding really good, what I’m stuck on is: what is going to make the health system prioritize that work? What is going to make someone say, “I’m going to break from this current inertia, and I’m going to go focus on creating autonomy for patients, and creating autonomy for patients and doctors as a team”? How does that work? How do we make that leap?

Shahid Shah: [00:37:21] Obviously we’re about the business of healthcare. That’s what The HCBiz is all about, and it’s a very, very important question to answer. That’s why a lot of these kinds of ideas end up taking a lot longer to implement: because we haven’t said, “Here’s how you restructure your business. Here’s how you restructure things.” So here are a few places where it could have some real, meaningful impact. If we said that these doctors with patterns, as Eric mentioned in the book, are the ones we’re focusing on, it means we can help reduce physician burden and clinician burnout by tying the work to very specific CMS mandates, as it were. We know there’s a physician burden reduction mandate, so if you have a cool new idea, if you’ve got a strategy, if you’ve got a tool that you want to take to a hospital, a health system, or someone, and you can say, “My algorithm helps reduce physician burden in these very specific ways,” that is measurable, and that is something that is sellable, because people want to buy things that reduce clinician burden generally, and physician burden more specifically. That’s one area.

Another area is just old-fashioned quality metrics. If we are doing a proper job improving doctors with patterns and related kinds of work, it means we should be able to demonstrate either higher productivity or much better diagnostics through some reasonable set of quality measures. I am by no means saying we should invent new ones, but if you can tie your tool or technology to existing quality measures from NCQA, the National Quality Forum, or a payer like CMS, there are scores that get computed, and if our algorithms can improve those scores, then there’s a way of getting to it.

Finally, there is the general idea Eric mentions in his book of patient-centeredness. We know that the FDA, CMS, and a bunch of other government agencies are all promoting the idea of direct patient connectivity, the idea of making sure that we are preparing tools that give patients choice in algorithms or choice in therapies. That becomes very important.

If we can hit significant burden reduction, or we can hit quality scores, or we can hit this patient-centeredness through autonomy, I think there’s an easy … I shouldn’t say easy, a quick case to be made. Whether these cases fit the priorities in our CIOs’ and CEOs’ strategies is a whole other question, and that’s where, when I asked Eric which hospitals are doing this well, he basically said none of them at the moment, which means there’s a lot of space for good hospitals to start declaring that they are good at autonomy, good at true patient self-service, et cetera, and to make it a useful unique selling proposition.

Don Lee: [00:40:19] Right on. I think those are the things. You’ve got to grab onto those key tangible, measurable things, which is obviously what we’re always talking about on the show, so that you can start to implement this stuff. There are grandiose claims out there, and there are these almost silly conversations going on, like, “We’re going to replace doctors with machines,” which makes the doctors mad, so they resist. Those conversations are understandable from a “we’re all human” standpoint, but the real thing is that there is evidence of very real opportunity here. It’s early on, obviously, and a lot of it is unproven, but there’s definitely evidence of it.

Normally I don’t get into AI and machine learning stuff a lot, or any of the really future-looking stuff; I don’t necessarily dig into that often. I tend to stay on a very practical, what-am-I-doing-at-work-on-Monday approach, but reading this book and thinking about some of these concepts, I’m actually starting to think that this stuff is closer than I thought it was before, and I think it could turn quickly.

Shahid Shah: [00:41:29] That’s a great point, and you really know it’s closer. One of the reasons I brought up dermatology when he brought up ophthalmology is that I wanted Eric to talk about how readiness here doesn’t mean the tech is not ready. Readiness means we don’t know how to validate it and prove its efficacy in clinical environments to the FDA and other regulatory bodies, which is really what’s holding us up. There aren’t very many areas where we don’t understand the tech. It’s that we don’t understand the tech well enough, and how it applies to digital biology and digital chemistry, to express how we did our machine teaching so that we can tell the FDA and regulators, “Hey, you’re not just trusting us. Here’s how we evaluated it.”

Today, especially in the medical devices world, where this sits … AI and ML will be more akin to medical devices than to digital therapeutics, biologics, and drugs. Because of that, we have to reconcile one major point, and that is that medical devices and existing software systems are validated through the idea of determinism. I know in advance what my intended use is. I know in advance how I’ve indicated the use of this particular thing that I’ve created. Here are all the inputs that I had. This is my failure modes and effects analysis, FMEA, or this is my ISO 13485 standard, where I can validate everything. If it’s deterministic, I have a unique set of inputs, an algorithm runs, and I have a unique set of outputs.

That’s how we evaluate things today. But many machine learning and artificial intelligence capabilities, even though they are digital, seem on the outside to be non-deterministic: one input might give you a different output tomorrow, because the weather might be different, or other sensor data might have come in differently than it did today. At its core it’s still deterministic, because it is digital in most cases, but it still feels non-deterministic. That is our fundamental problem: if I’m building an algorithm, how do I move from a non-deterministic-looking thing to a deterministic-looking thing that I can take to regulators and say, “Here’s what I’ve got, and it works”? That’s really where the problem is on the clinical side.

The good news is this problem doesn’t exist on the administrative side. We can still work on administrative AI and ML, for eligibility and pricing and other things, while we’re trying to solve the clinical side.
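Shahid’s “same input today, different output tomorrow” point can be sketched in a few lines. The scoring function and its coefficients are invented for illustration: the function itself never changes, but an environmental input the observer isn’t tracking does.

```python
# A fully deterministic scoring function that looks non-deterministic
# if you only watch one of its inputs.

def risk_score(heart_rate, ambient_temp):
    # Same inputs always yield the same score; there is no randomness.
    return round(0.02 * heart_rate + 0.5 * max(0.0, ambient_temp - 25.0), 2)

# The "visible" input (heart rate) is identical on both days; only a
# hidden sensor reading differs.
today = risk_score(80, ambient_temp=22.0)     # 1.6
tomorrow = risk_score(80, ambient_temp=31.0)  # 4.6

print(today, tomorrow)
```

Anyone watching only the heart rate sees the same input produce two different outputs, which is exactly the “feels non-deterministic” problem a validator has to untangle.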

Don Lee: [00:43:55] Yeah, I’m just curious, taking a step back: what is it about AI and ML algorithms that makes them feel non-deterministic? As an engineer with very little background in these technologies, outside of an artificial intelligence course I took at the University of Buffalo about 20 years ago, what you just described is exactly what I intuitively thought. It’s like, no, there’s code in there, and the code is doing what somebody told it to do. It’s not magic. There is no magic. If it’s input, process, output, which is all of software, including artificial intelligence, and it sounds like you just confirmed that, then what is it that makes it feel non-deterministic? Is it that the massive amounts of inputs are just more than we can wrap our heads around, and that’s what makes it feel different, or what is it? What creates that gap?

Shahid Shah: [00:44:48] It’s the latter. When you have determinism with a single flow, just imagine, for example, that you had a table and a little steel ball on that table. If you push the steel ball across the table, and the table is flat with no surface impediments, no friction or anything, you could precisely predict, based on how hard you push it and force equals mass times acceleration, how long that ball will take to roll across the flat table.

 Now imagine that you didn’t know in advance that the flat table had some ripples on it, so there’s some friction. And the wind is blowing; there’s a fan blowing on it, and the ball is going to get some resistance from that as well. Now what have we done? We’ve taken the same flat surface, same ball, same everything, and added just two pieces of data: friction on the table and air coming toward the ball. That changes everything, right? It doesn’t make it any less deterministic. It just feels non-deterministic, because we don’t know all the things we thought we knew in advance.

If you could truly know all the data in advance, every problem would be deterministic. The issue is, if I had sensors today and the same sensors tomorrow, say weather sensors or GPS and location data, et cetera, and nothing changed between today and tomorrow, of course it would look deterministic, but that’s very rare. The patient changes from system to system. Sweat glands, for example: if you’re going to measure sweat with a device, it will look different based on whose sweat it is, because some people sweat more and some people sweat less. That doesn’t make it any less deterministic. It just means that the data is different and we don’t know how to test all these variations.

 I really like what Eric said: we have to get over the idea that we could somehow always know in advance how to deterministically prove these algorithms. If we assume they’re better than what we have today, could that be a valid case? Having been in the med device world for a long time, I cannot imagine talking to a regulator and saying, “Trust me, this will work better than what we have today.” That for sure is not going to work. That’s one extreme. The other extreme, of course, is spending years or decades gathering all the data to make the proof possible. That’s also not reasonable, so the answer is somewhere in the middle, closer to: hey, is my algorithm going to do any worse damage than I would do if I didn’t have the algorithm? We don’t know how to regulate that. We don’t know how to say it’s okay to put something out there as long as it doesn’t make matters worse. That’s the piece our regulators need to learn a little more about, I think.
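Shahid’s ball-on-a-table analogy can be put into rough numbers (all constants here are invented): the push is identical in both runs, and each run is perfectly deterministic, yet the outcomes differ because of a friction term the observer didn’t know about.

```python
# Stopping distance for a pushed ball, from the kinematics identity
# v^2 = v0^2 - 2*a*d (the ball stops when v reaches 0).

def rolling_distance(initial_velocity, friction_decel):
    return initial_velocity ** 2 / (2 * friction_decel)

push = 2.0  # m/s, the same push both times

smooth = rolling_distance(push, friction_decel=0.1)   # near-flat table
rippled = rolling_distance(push, friction_decel=0.4)  # hidden ripples

print(smooth, rippled)  # ~20 m vs ~5 m from the identical push
```

Nothing random happened in either run; the only difference is a piece of data the pusher didn’t have in advance, which is exactly the validation problem for algorithms fed by changing sensor data.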

Don Lee: [00:47:26] Yeah, and you’re never going to avoid the sheer fact that we’re all going to freak out when, some day, an algorithm causes a problem, even if it was demonstrably better by a factor of 1,000 than what we had been doing before, because it sounds scary and we’re programmed to fear machines. We talked about this with the gentleman from the Nicholson Center, whose name is escaping me right now; I’m embarrassed, but it is escaping me. There’s almost a marketing problem around robotics: we’ve got movies about robots turning on us and becoming our overlords, so people just have this weird fear. It’s the same thing here with artificial intelligence.

The example that comes to mind is a car accident involving one of the self-driving cars in Arizona, where the headline read, “Self-driving car involved in three-car injury accident,” or something like that. Most people don’t read the stories. They read the headline and they’re like, “See, the self-driving cars are out there causing mayhem.” But if you read the story, it was something like a woman driving her car crossed over a six-lane highway, jumped a curb, and T-boned the self-driving car as it was driving by on a different road, which is not what the headline would suggest in any way, shape, or form. We’re going to be programmed to freak out about this stuff. That’s always going to be a hurdle to get over, no matter what we say or do and no matter what the regulators say or do, and these are elected officials, or at least people who are responsible to elected officials, so that kind of stuff matters.

Shahid Shah: [00:48:55] It does, but also let’s keep in mind, let’s not kid ourselves, that our current biologics and medical devices, et cetera, are somehow completely safe. Med devices kill people regularly. They harm people regularly, not in the thousands, of course, but fairly routinely, and we’ve been living with that. Until we figure out how to get over this hump of the idea that you have a closed system, and most medical devices are operating in a closed system, and get over the idea that we have to be able to deterministically prove all the different failure modes that could ever happen, we’re stuck, because not every scenario is knowable. Even the FAA, with avionics systems on aircraft, doesn’t always treat it that way. That’s why in contracts we say, quote, acts of God. All that means is, “I’m not smart enough to know what happened, so I’ll just call it an act of God.”

I think that as we work with regulators, and especially the larger companies, because they have more influence on regulators and how they will perceive these kinds of inputs, and as we start to have more people on future podcasts about AI and ML, we really should center our thinking around this: sure, you’ve got a great idea, but, one, how do you move beyond the assumption that it operates in a closed system, and, two, how do you make the non-deterministic look more deterministic so it can get evaluated and approved by regulators? I’m less concerned personally, because as a builder of things, as an innovator, you can’t go by what might come in a newspaper article. You absolutely can go by what might come in a question from a regulator, though, so that’s definitely where we should focus our attention.

Don Lee: [00:50:35] Just one last thing to pile onto that whole segment: he mentioned this notion that in many ways we are holding the machines more accountable than we hold the humans. You just alluded to that with med devices and things like that, but he opens the whole book with a big, long breakdown of all of the biases and heuristic problems that already exist today in medical diagnosis that we just put up with. I’m not taking shots at anybody. This is really hard work, and, again, we’re all human, and these are things that we all succumb to. The experience of any individual doctor is very limited; even someone who’s been doing it for 20, 30, 40 years, who is at the top of their game, top of their field, has still seen a relatively small number of patients. You know what I mean? The instances of variance they’ve seen are still relatively small, and the biases are going to take hold and impact their decision making.

When we think about it from the machine standpoint, we’ve got to remember that stuff. I think it parallels the whole medical device conversation we just had, so it’s definitely fascinating stuff.

Shahid Shah: [00:51:44] Exactly. If we get down to brass tacks and say, “Well, how do I move forward on all this?”, these ideas that Eric has outlined are great. We obviously need to give everyone, especially the doctors with patterns, some help: give them back their time so they can spend more time with patients. Hopefully the time we give back to them doesn’t mean they have to see more patients; it means they’ll have more quality time. I appreciated his comment on that, because it didn’t occur to me that if I gave you back more time, it doesn’t mean you’re going to spend more of it with the patient. It just means you’re going to use another slot for another patient, so-

Don Lee: [00:52:19] Inertia right now, for sure.

Shahid Shah: [00:52:20] Absolutely, yeah. That’s really where, as we start to think about the innovator side, maybe what we need to do in an upcoming talk is bring in some of the regulators. We’ll grab the FDA and other standards bodies working with the FDA and ask, “What are you thinking about this idea of closed systems and determinism, and is there any way to get around it? If not, we’re just not going to see high-performing algorithms coming in and making the case, because they’re going to be just as hard to get past the regulators.”

Don Lee: [00:52:52] Yeah, this is a fascinating subject area for sure. I think we could dig into so many aspects of it, and honestly, if I just took this book and kept breaking it apart and digging into each of the subject areas he addressed, I could probably keep myself busy for the next couple of years very easily. I will say to those of you listening: definitely a solid read. It’s a complicated subject matter, and I would say that Doctor Topol took a definitely optimistic but realistic take on this stuff, too. Throughout the book he points out where things that people are saying don’t quite make sense and are kind of fantasy, and then on the flip side he points to things that are starting to work and that he thinks could get traction.

He definitely stops short of trying to suggest when and how and if these things are really going to take hold, but he’s definitely of the mindset, my interpretation would be, that in the next couple of years you’re going to start to see some of this stuff. As I mentioned earlier, I had previously been of the mindset that we were probably 10 or 20 years away from seeing anything significant, but because of this read and some of the research I’ve done, and a couple of other things I’ve read recently, not even necessarily about this but just generally about the progression of humans over time, I’m starting to come around. I really do believe this could happen, that there could be significant advances that actually impact healthcare in the next couple of years. I think it’s closer than I was giving it credit for.

I would say: great book, absolutely worth the read. It’s well-written, it’s well-researched, and it will definitely get you thinking about things in some different ways. We’ll link up the book and information about Eric Topol in the show notes, as always. Shahid, I definitely think we should come back to this and break it down from a couple of different angles in the coming weeks and months.

Awesome. Everyone, you can check us out at TheHCBiz.com. As always, thank you for coming and letting us learn with you and we’ll be back again soon with more on The HCBiz Show.

End Interview