Personalized Medicine and Machine Learning
Have you ever had a friend recommend a book or a diet or a workout program because “it changed my life!”, only to try it and think … “meh”? Very often, what works really well for one person doesn’t work for another, because we’re all different in ways that are sometimes obvious but more often subtle and difficult or impossible to observe directly. To make matters more complicated, figuring out which of the myriad differences (assuming we could write them all down) is responsible for you loving that book and me not making it past the first chapter is like looking for a needle in a haystack. That’s both the promise and the peril of personalized medicine: if we can figure out what’s unique and causally relevant about how you respond to medical treatment, you’ll get much better outcomes, but it’s often easier to treat the haystack instead of the needle.
To be a bit less metaphorical about it, the problem boils down to sample size. Lots of people take lots of different medications every day. Let’s assume half are male and half are female. We’ve got big samples of each, from which we can make accurate inferences about how men and women respond differently to medicine. Now suppose that ⅔ are adults and ⅓ are children. Now we’ve got four smaller buckets of people - male adults (⅓), female adults (⅓), male children (⅙), and female children (⅙). But what about age, and race, and where you live, and health history, and lifestyle, and family history, and so on? You quickly wind up with ever-smaller fractions of the original patient pool.
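To see how fast the buckets shrink, here’s a back-of-the-envelope sketch. The starting pool size and the split fractions are all made up for illustration; the point is just that each feature we condition on multiplies the fractions together.

```python
# Hypothetical illustration: every feature we condition on shrinks the
# bucket that "people like you" fall into. All numbers are made up.
pool = 1_000_000  # assumed starting patient pool

splits = [
    ("sex (female)", 1 / 2),
    ("age group (child)", 1 / 3),
    ("region (one of 5)", 1 / 5),
    ("health history (one of 10 profiles)", 1 / 10),
]

remaining = float(pool)
for name, fraction in splits:
    remaining *= fraction
    print(f"after conditioning on {name}: ~{remaining:,.0f} patients")
```

Four fairly coarse features already cut a million-patient pool down to a few thousand comparable people; add a dozen more features and you’re quickly down to a bucket of one.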
In machine learning terms, there’s an old, simple, and shockingly powerful algorithm known as k-nearest neighbors that is often used in cases like this. I represent you and me as a bunch of numbers (age, gender, etc.) and look in a very large pool of other people for your “nearest neighbors”, i.e., the people with the most similar numbers. Then I ask what treatments worked well for them and predict that the same treatments will work well for you. The problems are that:
- we don’t know which features are important, and they can change depending on the medical condition we’re treating, so we include lots and lots of features;
- because there are lots of features, most of them are irrelevant for any one condition, so your nearest neighbors may be similar to you along lots of irrelevant dimensions;
- we could easily be missing the key feature entirely, either because we didn’t know it was relevant or because it was too expensive to observe.
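The second problem above is easy to demonstrate on toy data. In this sketch (all data is synthetic, and only feature 0 actually matters by construction), we compare the neighbors k-nearest neighbors finds when it measures distance on just the relevant feature versus on all 50 features at once:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: 200 "patients", each described by 50 features.
# By construction, only feature 0 matters for treatment response;
# the other 49 are irrelevant noise.
n_patients, n_features = 200, 50
pool = rng.normal(size=(n_patients, n_features))
response = pool[:, 0] > 0  # treatment works iff the relevant feature is high

you = rng.normal(size=n_features)  # a new patient to match

def k_nearest(query, data, k, dims):
    """Indices of the k pool members closest to `query`, measured only
    along the feature dimensions listed in `dims`."""
    distances = np.linalg.norm(data[:, dims] - query[dims], axis=1)
    return np.argsort(distances)[:k]

k = 5
relevant_only = k_nearest(you, pool, k, [0])                  # oracle distance
all_features = k_nearest(you, pool, k, list(range(n_features)))  # naive distance

print("neighbor outcomes, relevant feature only:", response[relevant_only])
print("neighbor outcomes, all 50 features:      ", response[all_features])
```

With the oracle distance, the neighbors genuinely resemble you where it counts, so their outcomes agree. With all 50 features, the 49 noise dimensions dominate the distance, and the “nearest” neighbors can be people who match you on everything irrelevant and differ on the one thing that matters.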
What’s the solution? First, more data is better: more people in the pool, and more information about each person. That may seem to contradict what I said above, but the more people in the pool, the finer I can slice it up and still have a large enough sample to make generalizations about you and me. Second, we’ll use more sophisticated learning algorithms that don’t treat all features the same, but that weight some features more than others (like support vector machines) or that construct new features from raw data (like deep neural networks). Third, multitask and transfer learning can help. Rather than treating each medical condition as independent, we can leverage what we’ve learned about treating one condition when learning to treat another, related condition.
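The core of the second idea can be shown in a few lines. This is a minimal sketch with synthetic data and hand-set weights: a weighted distance lets a learner down-weight irrelevant features instead of treating every feature equally the way plain nearest-neighbor distance does. (Real algorithms learn these weights from data; here we just set them to make the contrast visible.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic patients with 20 features each (illustrative only).
X = rng.normal(size=(100, 20))

def weighted_distance(a, b, w):
    """Euclidean distance with each feature dimension scaled by weight w."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

uniform = np.ones(20) / 20                  # plain k-NN: every feature equal
learned = np.zeros(20); learned[0] = 1.0    # suppose only feature 0 matters

a, b = X[0], X[1]
print("uniform-weight distance:", weighted_distance(a, b, uniform))
print("learned-weight distance:", weighted_distance(a, b, learned))
```

Under the uniform weights, two patients can look far apart because of differences that don’t matter; under the learned weights, distance reflects only the feature that drives the outcome. Support vector machines and deep networks do something analogous, but they discover the weighting (or build entirely new features) automatically from the data.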
The bottom line is that as more and more patient data becomes available and we become increasingly clever about how to leverage that data, the promise of personalized medicine will become more fully realized. Rather than getting a “meh” medical treatment, you’ll be more likely to get an “it changed my life!” treatment.