AI will eventually drive healthcare, but not anytime soon

A merging of artificial intelligence and healthcare is tougher than many realize.

TechCrunch recently published a guest post from Vinod Khosla with the headline “Do We Need Doctors or Algorithms?” Khosla is an investor and engineer, but he is a little out of his depth in some of his conclusions about health IT.

Let me concede and endorse his main point that doctors will become bionic clinicians by teaming with smart algorithms. He is also right that eventually the best doctors will be artificial intelligence (AI) systems — software minds rather than human minds.

That said, I disagree with Khosla on almost all of the details. Khosla has accidentally embraced a perspective that too many engineers and software guys bring to health IT.

Bear with me — I am the guy trying to write the “House M.D.” AI algorithms that Khosla wants. It’s harder than he thinks because of two main problems that he’s not considering: The search space problem and the good data problem.

The search space problem

Any person even reasonably informed about AI knows about Go, an ancient game with simple rules. Those simple rules hide the fact that Go is a very complex game indeed. For a computer, it is much harder to play than chess.

Almost since the dawn of computing, chess was regarded as something that required intelligence and was therefore a good test of AI. In 1997, the world chess champion was beaten by a computer. In the year after, a professional Go player beat the best Go software in the world with a 25-stone handicap. Artificial intelligence experts study Go carefully precisely because it is so hard for computers. The approach that computers take toward being smart — thinking of lots of options really fast — stops working when the number of options skyrockets, and the number of potentially right answers also becomes enormous. Most significantly, Go can always be made more computationally difficult by simply expanding the board.
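The scale gap between the two games can be sketched with a back-of-envelope calculation. The branching factors below (roughly 35 legal moves per chess position, roughly 250 per Go position) are commonly cited averages, not figures from this article:

```python
# Rough game-tree sizes using commonly cited average branching
# factors: ~35 legal moves per chess position, ~250 per Go position.
def tree_size(branching_factor: int, depth: int) -> int:
    """Number of distinct move sequences of the given depth."""
    return branching_factor ** depth

chess = tree_size(35, 10)   # ten plies (half-moves) of chess
go = tree_size(250, 10)     # ten plies of Go

print(f"chess, 10 plies: {chess:.2e}")
print(f"go,    10 plies: {go:.2e}")
print(f"go/chess ratio:  {go // chess:,}")
```

Even at a depth of only ten half-moves, Go yields hundreds of millions of times more sequences than chess, and expanding the board only widens the gap.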

Make no mistake, the diagnosis and treatment of human illness is like Go. It’s not like chess. Khosla is making a classic AI mistake, presuming that because he can discern the rules easily, it means the game is simple. Chess has far more complex rules than Go, but it ends up being a simpler game for computers to play.

To be great at Go, software must learn to ignore possibilities, rather than searching through them. In short, it must develop “Go instincts.” The same is true for any software that could claim to be a diagnostician.

How can you tell when software diagnosticians are having search problems? When they cannot tell the difference between all of the “right” answers to a particular problem. The average doctor does not need to be told “could it be Zebra Fever?” by a computer that cannot tell that it should have ignored any zebra-related possibilities because it is not physically located in Africa. (No zebras were harmed in the writing of this article, and I do not believe there is a real disease called Zebra Fever.)
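The point about ignoring possibilities can be illustrated with a toy sketch. Every name and rule here is hypothetical, including the fictional “zebra fever”: the idea is that context prunes candidates before any scoring happens, so the software never proposes diagnoses it should have ignored.

```python
# Toy sketch (all names and rules hypothetical): prune candidate
# diagnoses by patient context *before* ranking them, rather than
# scoring every possibility and sorting the results.
CANDIDATES = {
    "influenza":   {"regions": {"global"}},
    "zebra fever": {"regions": {"africa"}},  # fictional, per the article
}

def plausible(candidates: dict, patient_region: str) -> list:
    """Keep only diagnoses whose known regions include the patient's."""
    return [
        name for name, meta in candidates.items()
        if "global" in meta["regions"] or patient_region in meta["regions"]
    ]

print(plausible(CANDIDATES, "north_america"))  # zebra fever is pruned
```

The hard part, of course, is that real “instincts” cannot be hand-coded as a region table; they have to be learned, which is exactly what makes the search space problem hard.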

The good data problem

The second problem is the good data problem, which is what I spend most of my time working on.

Almost every time I get over-excited about the Direct Project or other health data exchange progress, my co-author David Uhlman brings me back to earth:

What good is it to have your lab results transferred from hospital A to hospital B using secure SMTP and XML? They are going to re-do the labs anyway because they don’t trust the other lab.

While I still have hope for health information exchange in the long term, David is right in the short term. Healthcare data is not remotely solid or trustworthy. A good majority of the time, it is total crap. The reason that doctors insist on having labs done locally is not because they don’t trust the competitor’s lab; it’s more of a “devil that you know” effect. They do not trust their own labs either, but they have a better understanding of how and when their own labs screw up. That is not a good environment for medical AI to blossom.
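A minimal sketch of the trust problem, using made-up numbers: even a crude consistency check shows how two labs reporting the same test can disagree enough that the receiving doctor would rather just re-run it.

```python
# Toy sketch (hypothetical data): flag lab results for the same test
# that disagree beyond a tolerance -- the kind of inconsistency that
# erodes trust in records transferred between hospitals.
def conflicting(results: list, tolerance: float = 0.15) -> bool:
    """True if any two results differ by more than `tolerance`
    as a fraction of their mean."""
    for i, a in enumerate(results):
        for b in results[i + 1:]:
            mean = (a + b) / 2
            if mean and abs(a - b) / mean > tolerance:
                return True
    return False

# Two hospitals report the same potassium panel differently.
print(conflicting([4.1, 4.2]))  # False: within tolerance
print(conflicting([4.1, 5.6]))  # True: re-test before trusting either
```

A real system would need per-test reference ranges and units, but the underlying question is the same one the doctor asks: do I trust this number enough to act on it?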

The simple reality is that doctors have good reason to be dubious about the contents of an EHR record, not least because the codes entered there are often not diagnostically helpful or valid.

Non-healthcare geeks presume that the dictionaries and ontologies used to encode healthcare data are automatically valid. But in fact, the best assumption is that ontologies consistently lead to dangerous diagnostic practices, as they shepherd clinicians into choosing a label for a condition rather than a true diagnosis. Once a patient’s chart has a given label, either for diagnosis or for treatment, it can be very difficult to reassess that patient effectively. There is even a name for this problem: clinical inertia. Clinical inertia is an issue with or without computer software involved, but it is very easy for an ontology of diseases and treatments to make clinical inertia worse. The fact is, medical ontologies must be constantly policed to ensure that they do not make things worse rather than better.

It simply does not matter how good the AI algorithm is if your healthcare data is both incorrect and described with a faulty healthcare ontology. My personal experiences with health data on a wide scale? It’s like having a conversation with a habitual liar who has a speech impediment.

So Khosla is not “wrong” per se; he’s just focused on solving the wrong parts of the problem. As a result, his estimations of when certain things will happen are pretty far off.

I believe that we will not have really good diagnostic software until after the singularity and until after we can ensure that healthcare data is reliable. I actually spend most of my time on the second problem, which is really a sociological problem rather than a technology problem.

Imagine if we had a “House AI” before we were able to feed it reliable data. Ironically, it would be very much like the character on TV: constantly annoyed that everyone around him keeps screwing up and getting in his way.

Anyone who has seen the show knows that the House character is constantly trying to convince the other characters that the patients are lying. The reality is that the best diagnosticians typically assume that the chart is lying before they assume that the patient is lying. With notable exceptions, the typical patient is highly motivated to get a good diagnosis and is, therefore, honest. The chart, on the other hand, be it paper or digital, has no motivation whatsoever, and it will happily mix in false lab reports and record inane diagnoses from previous visits.

The average doctor doubts the patient chart but trusts the patient story. For the foreseeable future, that is going to work much better than an algorithmically focused approach.

Eventually, Khosla’s version of the future (which is typical of forward-thinking geeks in health IT) will certainly happen, but I think it is still 30 years away. The technology will be ready far earlier. Our screwed-up incentive systems and backward corporate politics will be holding us back. I hardly have to make this argument, however, since Hugo Campos recently made it so well.

Eventually, people will get better care from AI. For now, we should keep the algorithms focused on the data that we know is good and keep the doctors focused on the patients. We should be worried about making patient data accurate and reliable.

I promise you we will have the AI problem finished long before we have healthcare data that is reliable enough to train it.

Until that happens, imagine how Watson would have performed on “Jeopardy” if it had been trained on “Lord of the Rings” and “The Cat in the Hat” instead of encyclopedias. Until we have healthcare data that is more reliable than “The Cat in the Hat,” I will keep my doctor, and you can keep your algorithms, thank you very much.



  • Louizos Alexander

    As a doctor, I would greatly appreciate a decision support system because of the too-much-data problem (all these data that shout for my attention). This would help relieve the burden of the easy cases and keep my intuition for the difficult cases.

  • Nat Torkington

    Bang on, mate. Patients complain about doctors continually asking for details again and again, and hope that electronic records will prevent it. But this implies that authoritative, accurate info exists and transfers perfectly into and out of the record. The ER folks I spoke to said they keep asking because people omit things the first time, and situations change and new pains develop. You’re generally seen by the most qualified person last: do you want the first person’s interpretation of your story to become the golden one in your permanent record?

  • Lindsay Edmunds

    I read the Khosla blog and while I agreed with his basic point about AI, my gut reaction was “no no no, that is NOT how it works.”

    Your point about good data is important. No, more than important: central. Also, there is the human element. The interaction between patient and doctor is not the same as the interaction between patient and home healthcare diagnostic device. In fact, every interaction is different. And these different interactions affect the data collected.

    I am neither a healthcare professional nor an AI expert, but a great deal of medical research crosses my desk in my work. Also, like everyone, I am a patient from time to time.

  • Alasdair McLeod

    I can’t comment on possible solutions to the search space issue because I don’t have the relevant expertise.

    There are possible approaches to at least improving the data quality problem. Many of the variations in lab results are due to things like variations in sample handling in transit to the lab, process issues in the lab, etc. A great deal of this can be resolved by automation and moving the analysis as close to the doctor’s office as possible, using technological approaches that are either available now or can be soon.

    The problem of bad data in published research is very real – see the link to the work of John Ioannidis in Khosla’s article.

    Ultimately the answer is to allow and encourage health consumers (patients needing treatment and healthy people hoping to stay that way) to take charge of their own destiny.

  • Jon-Kenneth haugen

    I don’t know much about healthcare, but I understand the problem with reliable data.

    Like so many others, I think you fail to see the power of exponential growth. And I think you start in the wrong place. It probably is true that AI will be good enough to use a large base of data before we have reliable data to use. Well then, we have to use AI to find that reliable data, instead of using unreliable data.

    Like you say, Watson had reliable data to work with when playing Jeopardy. One thing Watson does when learning from this huge amount of information is to see connections between different pieces of information. If we gave Watson all the unreliable data, a large piece of the data would be correct. As computing power continues to skyrocket, and Watson and similar systems get better and smarter, the systems will see when data in different documents and research is inconsistent. If we see all this information as one huge equation, the correct data will often be hidden in it, recoverable by seeing the connections and differences between different pieces of information. In the future, Watson and similar systems will be able to see these solutions in a way humans never could.

    Another important thing is that AI in combination with advanced sensors and nanotechnology will be doing the actual research. Here too, the systems will be able to see connections between data collected throughout the body in a way that humans never can. This is not because the AI has more intelligence than humans, but because of the way computers work in comparison to the brain. Our brains are all about inaccurate and unreliable data. That’s the reason we don’t have reliable data available. We have to let AI do the research first, then let AI use that information to make diagnoses.

    We can’t predict future technologies and development by looking at the past. The rate of paradigm shifts is doubling every decade. That’s more powerful than most people understand.

  • Mike Lacey

    Excellent stuff – very thought provoking.