Over the last few decades, medical research has shifted from treating transient illnesses to curing long-term disease. This work, which built on the efforts of men like Lister, Pasteur, and Salk, has been slow and difficult, with many promising drugs and treatments ultimately failing their clinical trials. The heyday of antibiotics is waning, but we still have designs on eradicating disease. What’s next?
AI stands poised to act as a force multiplier across every field of medicine: rather than being useful against one kind of ailment – like antibiotics or radiation – AI can work alongside humans to make better day-to-day decisions, regardless of the use case. In the same way that antimicrobial agents are the corollary and companion of germ theory, there’s every reason to believe that AI is what will enable us to apply our knowledge of “omics” (genomics, proteomics, metabolomics, etc.) to human health. We’ve started to interact directly with the information contained in the genome, so it stands to reason that the next big leap will deal with information processing.
Multivariate analysis is by far AI’s greatest strength: it allows the kind of contextual decision-making we associate with the human mind, while also drawing on the eidetic memory of a hard disk. There is no emotional noise to parse through, and there are no attentional omissions. AI doesn’t need sleep, and it doesn’t fatigue after focusing on one topic for too long. At the same time, AI has the benefit of massively parallel processing. The ability to handle huge volumes of data is of increasing value, and AI can drink from the firehose. With enough memory and processing power, a medical AI could hold a whole family tree’s worth of medical records in context, scour databases for pertinent diagnostic information, and call up banks of medical and social resources – all at the same time.
For the purposes of this discussion, I’m defining AI as a computerized system that can perform tasks usually requiring human intelligence, like speech and image recognition, translation between languages, or decision-making. But such systems vary in sophistication, and they can be granted more or less autonomy depending on what humans can currently ask computers to do in polynomial time. We don’t yet trust AI enough to let it operate fully autonomously; you’ll notice that even planes with autopilot carry trained human aviators. Still, there are smart systems with varying degrees of intelligence and automation operating in real time – like Google’s self-driving car. Weighted decision-making is one technique that lets software inch closer to human-level situational awareness, even in silico. A system doesn’t have to be HAL to be AI. (Given how that worked out, it probably shouldn’t be.)
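To make the idea of weighted decision-making concrete, here is a minimal sketch: each candidate action gets a score that is a weighted sum of its feature values, and the highest-scoring action wins. The feature names, weights, and driving scenario are entirely hypothetical, chosen only for illustration.

```python
# Weighted decision-making sketch: score options as weighted sums of
# feature values and pick the best. All numbers here are made up.

def score(option, weights):
    """Weighted sum of an option's feature values."""
    return sum(weights[f] * option.get(f, 0.0) for f in weights)

def choose(options, weights):
    """Return the name of the highest-scoring option."""
    return max(options, key=lambda name: score(options[name], weights))

# Hypothetical scenario: an obstacle appears; brake or swerve?
weights = {"safety": 0.6, "comfort": 0.1, "progress": 0.3}
options = {
    "brake":  {"safety": 0.9, "comfort": 0.4, "progress": 0.1},
    "swerve": {"safety": 0.5, "comfort": 0.2, "progress": 0.8},
}
best = choose(options, weights)  # braking scores 0.61, swerving 0.56
```

Tuning the weights shifts the system’s priorities – raise the weight on progress enough and it will swerve instead – which is one simple way software can encode the trade-offs a human driver makes intuitively.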
The health applications of software AI seem to stem mainly from its ability to remember and relate things, but also from its ability to personalize medicine, work fluently in natural language, and handle big data. Humans use context to determine the meaning of otherwise ambiguous words or events, and with natural language processing, so can AI. And these systems are in use today. A couple of worthwhile examples are the partnership between IBM’s Watson and Sloan-Kettering, and a medical AI called Praxis.
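A toy illustration of that context-driven disambiguation: given an ambiguous word like “cold,” pick the sense whose associated vocabulary overlaps most with the surrounding words. The cue lists below are hypothetical; real NLP systems use far richer statistical models, but the principle is the same.

```python
# Toy word-sense disambiguation: choose the sense of an ambiguous word
# by counting overlaps between its context and hand-picked cue words.

SENSE_CUES = {
    "cold": {
        "illness":     {"patient", "fever", "cough", "symptoms", "virus"},
        "temperature": {"weather", "winter", "ice", "freezing", "degrees"},
    },
}

def disambiguate(word, context_words):
    """Return the sense whose cue set overlaps most with the context."""
    senses = SENSE_CUES[word]
    return max(senses, key=lambda s: len(senses[s] & set(context_words)))

sense = disambiguate("cold", ["the", "patient", "reported", "a", "cough"])
```

Here “patient” and “cough” outvote the temperature cues, so the system reads “cold” as an illness – the same inference a triage nurse makes without thinking.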
Watson has been in the news because of its winning performance on Jeopardy! (the famous chess victories belonged to its IBM predecessor, Deep Blue). It’s well versed in game play, but it’s also capable of learning and analyzing new information, and now it’s applying its talents as a diagnostician. Watson is also working with a group called Wellpoint, whose Samuel Nussbaum has said that in tests, Watson achieved a 90% correct diagnosis rate for lung cancer, while human doctors managed only 50%. IBM, Sloan-Kettering, and Wellpoint are trying to train Watson as a cloud-based diagnostic aid, available to any doctor or hospital willing to pay.
But even Watson, with its formidable talents, wasn’t built for medicine. To see a medical AI in the field, look to Praxis: a piece of medical records handling software, built around a concept processing AI. It uses a learning model that records a doctor’s vocal or typed input, and then classifies it into a net of semantic nodes, based on how closely the words or phrases are related to concepts the program has already seen. Praxis remembers those relationships, too, so as it gets more use, it gets smarter and faster.
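The learning loop described above can be sketched in a few lines. To be clear, this is not Praxis’s actual algorithm – just a minimal, hypothetical model of the idea: phrases map to concept nodes, and concepts that appear together in a note get their links strengthened, so the network grows denser with use.

```python
# Minimal "concept processing" sketch (hypothetical, not Praxis's code):
# phrases map to concept nodes; co-occurring concepts get linked, and
# repeated co-occurrence strengthens the link.

from collections import defaultdict
from itertools import combinations

class ConceptNet:
    def __init__(self):
        self.phrase_to_concept = {}  # learned vocabulary: phrase -> concept
        self.links = defaultdict(lambda: defaultdict(int))  # edge weights

    def learn(self, phrase, concept):
        """Associate a phrase with a concept node."""
        self.phrase_to_concept[phrase.lower()] = concept

    def observe(self, note):
        """Find known concepts in a note and strengthen their links."""
        text = note.lower()
        found = {c for p, c in self.phrase_to_concept.items() if p in text}
        for a, b in combinations(sorted(found), 2):
            self.links[a][b] += 1
            self.links[b][a] += 1
        return found

net = ConceptNet()
net.learn("shortness of breath", "dyspnea")
net.learn("chest pain", "angina")
concepts = net.observe("Patient reports chest pain and shortness of breath.")
```

Every note a doctor dictates reinforces the relationships seen before, which is the sense in which such a system “gets smarter and faster” with use.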
If you’ve ever wondered whether there’s a way to do what 23andMe wanted to do – fit patient care to the risk factors found in the genome – there may be. 23andMe was very ambitious in what it tried to claim, which is why it ended up in trouble with the FDA, but the basic premise is sound. Genetically personalized medicine can already account for single-nucleotide mutations that impair a drug’s function, as demonstrated by the design of different drugs for different stages in the progression of CML, a form of leukemia. The Geisinger hospital system in Pennsylvania, which treats about three million people, is partnering with a company called Regeneron on a huge longitudinal genomics study that will work with anonymized data on patient exomes from DNA samples they’ve volunteered. They intend to use that data to tailor health care to the patients in the study. As pioneers in the field, they’ll no doubt experience problems and setbacks, but the example Geisinger sets will be an important proof of concept.