Welcome. This is Functional Medicine Update, July 2014, and this is an epic issue. If you hear excitement in my voice, it’s real. I am totally excited about this time we’re going to spend together with our researcher of the month, an individual I had the privilege of meeting, oh, I guess nearly 10 years ago, and whose work I’ve followed very, very closely. It’s pioneering, groundbreaking, paradigm-shifting, mind-expanding, and—if I was to be really realistic—I think it comes the closest to forecasting where medicine and health care are going in the 21st century. It’s part of a movement of individuals who are really creating the new medicine. I’m speaking about Dr. Eric Schadt. Dr. Schadt is Professor and Chair of Genetics and Genomic Sciences and Director of the Icahn Institute for Genomics and Multiscale Biology. Don’t you love that term? Multiscale biology. I think that says a lot in itself.
He’s an expert in bioinformatics, computational neuroscience, epigenetics, and both human genomics and genetics. But more than that, he’s an expansive thinker. This is one of those visionaries who turns theory into news-to-use, into practice. He and his colleague, Stephen Friend, have really, I think, pioneered a landscape within health care that is uncharted territory.
Just to give you a little of his background, because it is certainly symbolic of the breadth and expansiveness of his thinking: he has a somewhat non-traditional background, with degrees in mathematics before going on to earn a doctoral degree at UCLA, where he was also on the faculty. From there he moved on to do a whole bunch of things, including Rosetta Inpharmatics, where he was Director of Information Sciences, and then a series of further steps you’ll hear more about from him. Prior to joining Mount Sinai in 2011, he was Chief Scientific Officer at Pacific Biosciences, and through all of this work at Rosetta, at Pacific Biosciences, and at Merck (Rosetta, as you probably know, was a company owned by Merck), he has left an irreversible impact on the field. His publication record of over 200 papers in top-tier journals since 1998-99 is legendary. And his work with Dr. Stephen Friend has really opened up a new concept of how we even do big data analysis: by collaboration, by opening up the blinders, by getting rid of academic pedagogy and self-protection, and by moving to the democratization of ideas and the sharing of data and information. It’s a novel concept in the academic world of discovery, but they have been pioneering it very successfully. If you look at a number of his papers, they are authored by more than 20 authors from collaborative centers.
Clues from the Resilient: Looking at Genetics in a New Way
We’re very fortunate to actually have an article that was just published in Science magazine, authored by Stephen Friend and Eric Schadt, which, when I read it, just blew me away. It stopped me in my tracks. This was in the May 30 issue of Science, volume 344, and the title is “Clues from the Resilient.” I don’t want to steal the thunder—I’d like you to hear about this from the brain and mouth of the convener and author of this paper, but this, to me, is the “ah-ha” paper, because we have so focused our concepts of genomics on disease. Our whole culture is tied to disease. It’s all a fear-based mentality of skirting around the edges and hoping your number doesn’t get pulled by the Monte Carlo effect of life, ending in a disease diagnosis, when locked in our genes is extraordinary symphonic information related to health and to function. If it wasn’t there, we as a species would not have survived. So this penchant to always deal with the disease construct and the fear-based mentality, which by the way is the economic driver of our system, is turned on its head when talking about resilience: how is it that people who may carry genes encoding what we consider disease susceptibility don’t express those characteristics, as a consequence of other governors on their genetic expression? And this is the article, recently authored in Science by Dr. Schadt and Dr. Friend, that I think is the “ah-ha.”
With that as a probably longer-winded introduction, Eric, than you ever got, thanks so much for being part of us here at Functional Medicine Update.
Researcher of the Month
Eric Schadt, PhD
Professor and System Chair, Genetics and Genomic Sciences
Director, Icahn Institute for Genomics and Multiscale Biology
Mount Sinai School of Medicine
One Gustave Levy Place
New York, NY 10029
ES: Thank you, Jeff, certainly for all those kind words. Hopefully I’ll live up to that height and thanks for the opportunity to be on your program.
JB: Could you tell our listeners about the basis and the understanding of this “Clues from the Resilient” article, because I think it’s a good way to contextualize where we’re going to go in our discussion.
ES: Sure. Well, this idea of studying resilience in the genetic context grew more out of a frustration from having been part of the pharmaceutical industry and seeing how diseases were being approached from—very much as you said—a very disease-oriented perspective, where only diseased individuals are studied and healthy subjects are brought in only as controls to better assess the diseased individuals, and how that hasn’t really delivered the amazing therapeutics one would have imagined could come out of it. So it’s not as if using that kind of approach has led to dramatic improvements in well-being through pharmaceutical interventions, and the reason for that, I believe, is the types of hits that happen to a system. If you take, at the extreme, the rare Mendelian disorders that are highly penetrant and lead to catastrophic disease, these diseases are typically occurring not because of a gain of function but because of a loss of function. And when you have a loss of function (a protein that’s broken), coming up with a small molecule that can fix that broken protein is a very, very hard problem. Unable to achieve that, we have fallen short, I believe, in addressing a lot of these human conditions.
On the other hand, if you can identify individuals who harbor these highly penetrant, highly deleterious mutations, yet nature has found a way to circumvent and buffer against those mutations, then however that individual has buffered, that becomes the therapeutic. While it’s sort of obvious when you hear it, it’s not something that the human genetics community and beyond has addressed on a systematic level, so our idea was: Let’s look for those individuals who have something either in their DNA or in the environment in which they are part of that has enabled them to circumvent these hard-hitting mutations, identify what those are, and then pursue that as a preventative measure.
JB: That would seemingly bring into the discussion some very, very interesting questions about where these buffering messages might reside within our genome. Do they reside in coding regions, or in noncoding regions? It would also beg the question of whether there are signals from the environment—diet, lifestyle, chemical exposures, whatever—that transduce these messages into functional phenotype. So this opens the door to an extraordinarily robust, different way of approaching health and disease, it would appear to me.
There is No “Bad” DNA
ES: Exactly, and I think it’s a different perspective. In the disease community, we think of DNA as having mutations that are bad, right? They increase your susceptibility to a given disease, so we think in terms of good and bad DNA: if you have these mutations, that’s bad, and that gets to the whole fear-mongering you were talking about. We’re doing testing on individuals, and what they expect to get back is: do you have some bad DNA with the potential to cause bad things in you? Our view is more that there really is no bad DNA, per se. It is all about, for your genetic background, how you create the right environment that maximizes the potential built into your DNA. Whether it is taking a certain therapeutic that may change microenvironments in certain cells or tissue types and enable you to live a normal life, or changes in your diet, your exercise patterns, or other behavioral changes, you can create the right environment and maximize what the DNA is able to do for you. That’s the way we’ve started thinking about it: it’s all about how we create the right microenvironments and macroenvironments in an individual’s life to optimize the gift they have been given through their DNA.
JB: So you authored…well, you’ve authored many articles—over 200 of them—but a previous article, before the one on the resilient, appeared in Science in 2012. I love the title: “A GPS for Navigating DNA.” If I were to look at the body of your work over the last 20 years of active publishing, that would be a good sound bite—an elevator speech—for it: you are helping us understand how to navigate our genome and its complexity. Tell us a little bit about this “A GPS for Navigating DNA” article that you authored with Rui Chang. I think it was a very interesting insight.
ES: Yes, so it is built on the idea that coinciding with your life course you have your health course, and the health course is a very complicated trajectory based on the DNA you’re born with, the environmental context in which you’re living, your lifestyle choices, and many, many different variables that define all your different susceptibilities to diseases, your protections against disease, and so on.
What we really want to provide individuals isn’t the subfraction of a percentage of medical advice they get when they happen to visit their doctor or a medical center one time during the year. You know, if you think about it, the amount of time you’re in a doctor’s office or a medical center is, for most of us, probably less than one percent of your time, and that’s not really giving you the right kind of snapshot of what is going on in yourself. So if we can instead take into account what’s going on in you at the molecular level, what’s going on in your environment, and all the other variables we can collect on an individual longitudinally, can we define the landscape in which they are operating?
If you imagine your health course as this vast expanse—this vast landscape—of peaks and valleys that may represent disease peaks or valleys of wellness, you’re moving through that complicated landscape over your life course. What we want to be able to do, at any given point in time, is place you on that landscape and identify the trajectory you are on: whether it is leading to improved overall wellness and protection against disease, whether it is going to lead with high probability to certain diseases, or whether you’re on a disease-state trajectory we need to bounce you out of. The main idea is: can we more accurately identify the trajectory you are on, whether it’s good or bad? And then, using quantitative, probabilistic methods that take into account the vast expanse of data we can collect on individuals—building these predictive models—can we identify the features in you, whether it’s your behavior, a therapeutic intervention, and so on, that will either maintain you on a positive trajectory so you realize all its benefits, or move you off a trajectory that is heading toward disease or keeping you in a disease state? So it’s more continuous-time monitoring. It’s not going into the doctor’s office once a year to have your blood pressure, glucose levels, and so forth checked; it’s continuous monitoring, and continuous feedback to you, that can help you progress.
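The trajectory idea Dr. Schadt describes can be sketched, very loosely, as a probabilistic state model. Everything below (the three coarse states, the transition probabilities, the time scale) is invented for illustration; a real model would be learned from high-dimensional longitudinal data:

```python
# Hypothetical sketch: a health "trajectory" as a Markov chain over three
# coarse states. The transition matrix P is made up for illustration; an
# intervention would correspond to changing these transition rates.
states = ["well", "at_risk", "disease"]
P = [
    [0.90, 0.09, 0.01],   # from "well"
    [0.20, 0.70, 0.10],   # from "at_risk"
    [0.00, 0.10, 0.90],   # from "disease"
]

def step(dist, P):
    # One time step: multiply the state distribution by the transition matrix.
    return [sum(dist[i] * P[i][j] for i in range(len(dist)))
            for j in range(len(dist))]

dist = [0.0, 1.0, 0.0]    # an individual currently "at risk"
for _ in range(10):       # project the trajectory 10 steps ahead
    dist = step(dist, P)

# The projected distribution still sums to 1, with probability mass now
# spread across "well" and "disease": the trajectory, made quantitative.
print([round(p, 3) for p in dist])
```

Changing a single row of the matrix (say, lowering the at-risk-to-disease rate through a behavioral change) and re-projecting is the toy version of asking which features move you onto a better trajectory.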
We Fine Tune the Performance of Machines—Can We Do the Same for Humans?
And I’ll say this isn’t something that’s foreign, right, to people who want to tune machines. Look at Oracle’s America’s Cup team: on the very sophisticated boat they race in the America’s Cup, they have over 300 sensors monitoring every aspect of the boat’s operation, including the crew. They are sampling data at roughly ten hertz, generating around 300 gigabytes of data a day on that boat, crunching that data in real time, and making real-time decisions on how to tune parameters to optimize the function of that boat. The Honda team does the same thing, and the Indy car teams have the same sort of setup on their cars for monitoring all these features. If we can do that on cars and advanced machines, why can’t we do it on people, and why can’t we do the same kind of modeling to tune the performance of every individual so that their maximum potential is achieved?
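As a back-of-envelope check on those telemetry numbers (300 sensors sampled at roughly ten hertz, about 300 gigabytes a day), the sample count and the implied record size work out as follows; the byte figure is simple arithmetic, not a claim about the team's actual logging format:

```python
# Back-of-envelope check of the America's Cup telemetry numbers quoted above.
SENSORS = 300           # sensors on the boat
RATE_HZ = 10            # samples per second per sensor
SECONDS_PER_DAY = 24 * 60 * 60

samples_per_day = SENSORS * RATE_HZ * SECONDS_PER_DAY
print(f"samples/day: {samples_per_day:,}")        # 259,200,000

# If ~300 GB/day really is logged, each sample record must carry roughly a
# kilobyte of payload (timestamps, metadata, derived channels, video, etc.):
DAILY_BYTES = 300e9
bytes_per_sample = DAILY_BYTES / samples_per_day
print(f"implied bytes/sample: {bytes_per_sample:.0f}")
```

A quarter of a billion samples a day from one boat gives a sense of the scale continuous human monitoring would have to handle.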
JB: Holy mackerel!
ES: Sorry if I’m getting carried away, here.
JB: I love it! No, you’re not. We want you to be carried away. That’s the vision Eric Schadt has really stood for. I guess the question is: these are lofty, aspirational goals, but a person might ask, well, that sounds like a huge amount of data; it sounds like it would bring supercomputers to their knees. Can we really do this? Is this technology really possible?
ES: It’s a perfect question, and I would say that maybe ten or twenty years ago this sort of vision would have been great but very difficult, if not impossible, to carry out. What we have seen over the last decade-plus is such an amazing advance in technologies—in the biotech arena and beyond—for assaying individuals in a very comprehensive fashion, whether it’s whole-genome sequencing, whole-transcriptome sequencing, or proteomic or metabolomic profiling. These assays can now be carried out cheaply enough and comprehensively enough to be done on a more routine basis, so the ability to generate very large-scale, high-dimensional data on individuals, and longitudinally, is now possible. And you’re seeing companies like Theranos, which is now appearing in Walgreens—you can go right down to your Walgreens and, with a drop of blood, choose from a panel of tests that can be run for very low cost, a lot of which are covered by your insurance provider. So you’re already seeing this go direct-to-consumer.
Complementing that molecular revolution is the physiological-sensing revolution. Many of your listeners probably have Fitbit devices, or Jawbone Ups, or Flex or Fuel bands, or whatever—a variety of these wearable devices that today take measures that are maybe somewhat primitive, but the next generation coming is getting more and more sophisticated. Whereas today a device may be measuring your activity and sleep cycles, there are now hand-held EKG devices that snap right onto your iPhone and in 30 seconds deliver an EKG that rivals what you get in a hospital, and that device costs fifteen dollars. There are wearable devices that simultaneously measure blood pressure, skin conductance, bioimpedance, pulse oximetry—all these different physiological measures that absolutely can be informative as to your current state. And if you are in a disease condition, there are mobile glucometers, Bluetooth-enabled inhalers, portable spirometers for people with COPD and asthma—all of these devices are getting cheaper and cheaper, to where you can have them in the home, wear them on a routine basis, and collect large-scale amounts of data without thinking about it. And Apple just announced its foray into the health arena with what it calls HealthKit, a whole set of tools enabling the building of apps that will aggregate data from all these wearable devices and your electronic medical records.
JB: The possibility of measuring physiological metabolites by the same mechanism?
ES: Exactly. So I think the ability to generate these really large-scale and informative sets of data on individuals over time is no longer an obstacle. In fact, that will be the next big wave that hits the consumer market, and it’s already emerging. How you make sense of that data, of course, is the game. To manage very large scales of data, to place it in the context of the digital universe of information so that you’re using as much data and knowledge as is available to make interpretations on an individual—that does require supercomputing hardware; it requires people who know how to manage that scale of data; it requires people who know how to integrate the data and build these predictive models; and it requires knowing how to apply those models in a clinical setting to produce results that a physician will buy into—that they can interact with, understand, and use to guide their thinking as well as the thinking of the consumer. I don’t know if we’re there yet for the mass market, but one of the reasons for my move to Mount Sinai was to figure this out. This is a medical center—the fourth largest network of hospitals in the US—with a very large patient population and a leadership committed to figuring out how we take all this information and build those models to improve the well-being of our patient population. So we’ve hired the right kinds of teams to help flesh this out, and I think in the next five years you’re going to see real examples of this absolutely aiding more appropriate diagnosis and treatment of patients, improving patient outcomes and also reducing the overall healthcare burden. We’re in the midst of that revolution right now.
JB: Let’s move from this extraordinary vision level down to the ground level for a second and talk about 23andMe, because I think that’s a very interesting model for discussing how culture, regulatory environments, and maybe even people are going to integrate these opportunities into our social system. We know we can do GWAS, exome analysis, and whole-genome sequencing, and there is even now starting to be epigenetic analysis of methylated promoter regions of various genes. Then we take that down to a $99 test of specific SNPs offered by 23andMe, and the FDA’s incursion into what it considers a company providing a medical device without properly checking the boxes. Tell us a little bit about how you see the 23andMe example being a history lesson for us as we move forward.
ES: Yes, it’s somewhat of a complicated landscape. I think the regulatory agencies have been completely overwhelmed by the pace of technology; it hit them all very hard and very fast, to the point where they sort of just didn’t know what they should do. They’re used to looking at tests as very simple: here’s the measure you’re taking, this measure can be over this threshold or under it, and based on that you’re going to make a recommendation as to the diagnosis or treatment of a patient. Now we’re moving into a space where it is no longer a single feature that’s predicting; it’s constellations of hundreds, thousands, or maybe even tens of thousands of features, and it’s more probabilistic, and it’s more dynamic, in that the models that exist today are rapidly evolving on a daily basis as we learn more and as we apply them to patient populations, look at their outcomes, and refine those models. That sort of adaptive learning, in a probabilistic context, is not something the FDA was fully able to comprehend and take up. So companies like 23andMe were definitely revolutionary in reaching out to the consumer and getting this kind of information generated, and they performed, I think, a really big service in showing how to convey complicated information to a population that doesn’t understand any of the underlying science or complexity—conveying it in ways people understand, can appreciate, and can act on. That’s a really hard problem, and one I think 23andMe has handled really well and effectively, and they now have probably one of the largest collections of individuals with DNA assays on the planet. So that has been a very big positive.
On the regulatory side, though, if you’re going to use this information to convey risks or treatment courses to a patient, there does need to be some bar you go over, both to protect the patients and to protect the medical community with respect to how patients’ treatment is being driven. So probably one of the mistakes of 23andMe was providing more and more risk information without the appropriate validation, without showing clinical utility, and without showing that they had really protected patients’ interests and had all the protections in place to address a patient’s concern on learning they were at high risk of a disease like Alzheimer’s, where there isn’t necessarily an effective treatment to prevent the slide into that disease, and without being able to tell that patient what to do. You don’t want to cause more damage—more anxiety—to the patient than you have to, especially if you can’t do anything about what you’re telling them. And if you’re going to give a patient their breast cancer or ovarian cancer risk score, you want to base that on all the known risk factors for those diseases. If you’re giving breast cancer risk information without understanding the variation in, say, BRCA1 and 2, that’s a very misleading result. You shouldn’t be conveying that kind of risk information without taking the most important risk factors into account. So I think there were a number of missteps that way, both in how risk information was being conveyed and in how the regulatory agencies were being integrated into the process, and the bottom line is that today you need to be working within that framework to push forward, while maybe simultaneously pushing for the disruption and transformation of these regulatory agencies so they fully appreciate all that’s coming.
JB: So with that in mind, it begs the question: who helps be the navigator for the consumer through this landscape? I know you co-authored a very nice paper that recently appeared in Genome Medicine on informed decision-making among medical students analyzing their personal genomes in a whole-genome sequencing course, and how educating them becomes very important, because if they’re going to be the providers at the interface with patients, they had better be informed. So this question of who teaches 21st-century medicine based on this genomic revolution is a very, very powerful one. I have the privilege of being in a study club with a variety of practicing physicians, and I’ve been amazed at how vigorous, aggressive, and dedicated they are to their own self-education: reading papers, and every week a discussion leader takes us through concepts related to how these genomic discoveries bear on patient management. So there is at least a body politic within medicine that wants to be informed; it’s not just medical geneticists. Tell us a little bit about how we’re going to transfer this information properly to the docs who are going to be the interface with patients in terms of translation.
Clinical Utility of Genomics Needs to be Demonstrated to Physicians
ES: Yes, I think there are a couple of things related to that. First, in order to get physician buy-in—to demonstrate to a physician that they should be paying attention—you have to answer their biggest criticism of the genomics revolution more generally: that for the variants being identified that predispose you to any sort of common human disease, the effect sizes are so small, even when taken in aggregate over all the variants found, that the increase or decrease in risk they confer has too little clinical utility for a physician to take into account in managing a patient. So I think we have to be really thoughtful about how we push things forward into the medical community, ensuring that what we push forward will have some actionable benefit. There are definitely genetic findings—a lot of them relating to the metabolism of drugs, for example—that are immediately actionable and should be standard of practice today, even though they’re not. But then we also have to guard against giving risk for obesity, and diabetes, and Alzheimer’s based on genetic findings that may not increase your risk enough to have any meaningful clinical utility. So teaching not just the physicians but the practicing geneticists who push these models where that balance is, what kinds of studies you have to do, and to what extent you have to build the evidence to provide to the physicians is important.
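The "small effect sizes, even in aggregate" point can be illustrated with a toy polygenic score: sum each variant's risk-allele count times the log of its per-allele odds ratio, then exponentiate. The variants and odds ratios below are made up for illustration, not real findings:

```python
import math

# Hypothetical toy example (invented variants, not real ones): aggregate many
# small-effect SNPs into a polygenic score and express it as a combined odds
# ratio relative to the population baseline.
# Each entry: (risk-allele count for this person: 0/1/2, per-allele odds ratio).
variants = [(1, 1.08), (2, 1.05), (0, 1.12), (1, 1.03), (2, 1.10)]

# Polygenic score on the log scale: sum of allele counts times log odds ratios.
log_score = sum(alleles * math.log(or_) for alleles, or_ in variants)
combined_or = math.exp(log_score)
print(f"combined odds ratio: {combined_or:.2f}")   # ~1.48
```

Even with several risk alleles, the combined odds ratio here stays under 1.5, which for a disease with low baseline prevalence rarely changes how a physician would manage the patient; that is the clinical-utility objection in miniature.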
But equally important is how you educate the next generation of physicians and other healthcare professionals. And it’s not just the genomic data, although that’s maybe the most well-known, comprehensive, and widely available today. In addition to the genomic information, there are other high-dimensional data that can be assayed now—RNA sequencing, metabolomics, proteomics, the microbiome (what are the bugs living in and on you, and how predictive are they of your current health condition?). All of that information, how it gets integrated, the models that get built, and how that modeling can be leveraged in the clinic to actually base decisions about a patient’s care on—those are all things that are going to require pretty significant changes to the curriculum in medical schools and other related disciplines to appropriately train the next generation to accommodate this.
A Novel (and Controversial) Experiment: A Personal Genome Interpretation Course for Medical Students
One of the things we attempted toward that end was the first course of its kind in the country: a personal genome interpretation course, in which students—medical students, genetic counselors, and other healthcare professionals in training—could have their genome completely sequenced and, through the course, learn how to analyze and interpret it. It’s a way to directly engage that next generation on all aspects, from the generation of the whole-genome data, to the analysis, to the interpretation, to how you appropriately counsel people based on what you’re learning, and the fact that they experience that first hand, I think, gives some advantages.
Of course, that course was not without significant controversy. There were many thought leaders in the genetics community who were very, very opposed to students sequencing their own genomes and learning about themselves in that way through an actual course. But our thought was that this information is going to become so prevalent and so cheap to generate—you can already go to 23andMe (if you don’t live in the state of New York) and get that information generated without anybody’s approval—so why should you be left on your own to figure out what you’re seeing? That doesn’t seem like the most efficient way to educate people on how to interact with that data, so we think this type of course could be very instrumental in helping change the thinking and the mindset about what’s going to be needed in the practice of medicine in the future.
JB: Well, it’s my deep hope that that course, as it moves forward, will become an e-learning opportunity. That could be one of the great contributions to all our education. There’d be no better person that I think than you to help arrange the right kinds of resource people to really make that available. That could be a paradigm-shifting opportunity.
ES: Yes, and I’ll just add that one of the most interesting findings from the course—and we have a paper coming on this as a follow-on to the genomic medicine one—is that the students who had their own genome sequenced spent much more time analyzing the genome and learning how to identify things of concern or things that were protective. Just the investment—the amount of time they spent learning, playing with the data, and so on—was much higher when it was their own genome versus some anonymous reference they had no personal interest in. So we think that even from the pedagogical standpoint there is going to be an argument there: you’re just going to pay more attention to something that’s informing on you.
JB: Let me, if I can, shift from bench to bedside—to some of the clinical applications that come out of your and your colleagues’ work. I’d like to take a little walk down memory lane with you. I can’t say that I’ve read all of your 200 or more publications, but I have gone back and read a good portion of them, so I’d like to cherry-pick a few to help our listeners understand the landscape of how intellectual thoughts and discoveries evolve. Let’s go back—this is not the start of your publications, but an earlier stage—to 2001 and Toxicology and Applied Pharmacology, and the article titled “Clustering of Hepatotoxins Based on Mechanism of Toxicity Using Gene Expression Profiles.” I think that’s a very interesting way of addressing pharmacogenomics, and also the concept of toxicogenomics and how various substances may influence different people in different ways based upon gene expression patterns. In fact, in the close of that paper you say something like, “The results suggest that microarray assays may provide a highly sensitive technique for safety screening, not only for drug candidates but also for environmental toxins.” Tell us a little bit about how you think that can apply to this future application of the concepts.
ES: So the RNA data holds a special place in my heart, because DNA is fixed—at least largely fixed. We’re learning more and more that it may be more dynamic than we appreciated, but basically you’re born with all these different variants, with a blueprint that then defines much of what you become at the physical level, whereas RNA is changing in real time. It reflects what’s going on in your system—in a given cell, in a given tissue—at any given point in time. Because of that, RNA is just an exquisite sensor for what is happening in a system at any moment. Whether you smoke or not, whether you eat fruits and nuts or not, whether you take a particular vitamin—all of these different environmental stresses or perturbations ripple through your system in very unique ways. Whether it’s a toxin or a helpful natural product in a food or whatever, it will have a certain impact—a certain signature on your system—that we can then map to these interpretive models and tie back to the landscape map we were talking about earlier. I can take all of the toxins NIEHS has profiled across hundreds of cell lines, understand the RNA-based perturbation each toxin induces in each type of cell, map that back to the landscape map, and ask: is the trajectory you’re on going to be promoted by any of these toxins, or will they push you onto another trajectory that may be worse or better? And not just for toxins—we can do that for any marketed drug, for any food type, or for any natural product that occurs in foods. Using RNA as the intermediate to map between the action of a compound and the effect it has on your system—that’s how I view that earliest work.
It was the first representation of how we can use RNA as an exquisite sensor to determine, on a much more rigorous basis, whether a given compound or other such element was having a favorable or unfavorable impact on that part of the system.
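The clustering idea behind that 2001 paper can be pictured in a few lines of code: treat each compound’s expression profile as a vector and group compounds whose profiles co-vary. This is a toy sketch with invented data and hypothetical compound classes, not the study’s actual pipeline:

```python
# Toy sketch: cluster compounds by the similarity of their
# gene-expression signatures (invented data, not the 2001 study's).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Rows = compounds, columns = log-ratio expression of 50 genes.
# Two hypothetical mechanism classes each share a common signature.
mech_a = rng.normal(0, 0.2, 50) + np.where(np.arange(50) < 10, 2.0, 0.0)
mech_b = rng.normal(0, 0.2, 50) + np.where(np.arange(50) >= 40, -2.0, 0.0)
profiles = np.vstack([
    mech_a + rng.normal(0, 0.3, 50),  # compound 1 (class A)
    mech_a + rng.normal(0, 0.3, 50),  # compound 2 (class A)
    mech_b + rng.normal(0, 0.3, 50),  # compound 3 (class B)
    mech_b + rng.normal(0, 0.3, 50),  # compound 4 (class B)
])

# Correlation distance (1 - Pearson r) groups profiles by shape,
# not magnitude, so compounds with co-varying signatures cluster.
dist = pdist(profiles, metric="correlation")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # compounds 1-2 and 3-4 land in separate clusters
```

Correlation distance is a common choice for expression data because two compounds that up- and down-regulate the same genes cluster together even if the amplitudes of their responses differ.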
JB: So for the listeners, we’ve been talking about gene expression and how it regulates the phenotype for many, many years, and we’ve used the simple sound bite that food is information to our genes. We don’t just eat calories and nutrients, we eat information, and so this is one application of that construct you’re describing here: using RNA expression as a marker for how the pluripotential of the genome gets translated into some message or signal that controls our phenotype.
JB: Very powerful. So let me move from there to 2005—a fantastic paper titled “Embracing Complexity, Inching Closer to Reality.” I quote from this article that you and Stephen Friend authored, in which you say, “Drugs designed against targets and presumably simple linear signaling pathways found to be associated with disease are often less effective than predicted. One reason for this is the overly simplistic view of the molecular mechanisms underlying common human diseases. This viewpoint is a consequence of biological reductionism brought about by the need to form a basic understanding of the fundamental attributes of biological systems and by limitations in the set of tools available for the analysis of biological systems.” Now you go on to say, “However, complex biological systems are best modeled as… fluid systems.” So tell us a little bit about moving from targets to networks.
ES: So that’s probably one of my more favorite papers because it appeared at a time when the idea that biological pathways aren’t linearly ordered with respect to the action of any given enzyme or receptor, but instead operate within highly nonlinear, network-based structures, just wasn’t something many people thought was true or wanted to accept as true, and a lot of biology was still done in a very reductionist way. So it’s one of my favorite papers because it began challenging that status quo: the most important pathways we study, whether metabolic, signaling, or whatever, are not simple, linearly ordered pathways. Those simple, linearly ordered pathways occur in the context of thousands, or hundreds of thousands, of different variables: different proteins, metabolites, RNA species, constituent components of cells, and so on. It’s a very, very complicated network of interacting parts, and if we really want to understand how a given pathway or feature operates, we need to understand the context in which it is occurring.
So we know a given enzyme is going to catalyze some event that goes on to produce something meaningful to the cell, but if we really want to understand the operation of that pathway, we need to understand the context in which it is operating: what are the other genes that are modulating the levels or the activity of that particular enzyme, and how can they affect the flow of that pathway? What are all the different features impinging on it? And so it introduced the fact that even the simplest construct of an enzyme producing some action was occurring in the context of thousands of different variables, and that if we understood those thousands of variables, we could understand how it was operating in a given context, whether that was a normally functioning context or a disease context precipitated by increased stress, or some toxin, or something in your diet. Because we can now model what state these constituent pieces are operating in, we can build a more holistic representation that, at the end of the day, is just a better reflection of the system. I don’t think anybody would deny that our systems are really complicated and that there are literally hundreds of thousands of variables at play in a given cell, and these cells are interacting with each other to form tissues, and the tissues are forming organs, and the organs are communicating through complex signaling—the endocrine system, the nervous system, the immune system. All of those pieces show a degree of connectivity, a degree of integration, that from an engineering standpoint you have to model if you want to best understand the system and how to manipulate it to achieve a good impact.
So I think that shift in thinking away from the reductionist ideas of earlier biology, which were really driven by the reagents, the tools we had available to query biology, toward these systems-level views that now complement them, is in the end going to lead to a better understanding of living systems.
JB: Okay, so let’s take that beautiful, paradigm-shifting concept to how you start applying it as we go to 2005. Again, I’m just cherry-picking a few of the tremendous number of publications you’ve authored. This one is in Nature Genetics, titled “Integrating Genotypic and Expression Data in a Segregating Mouse Population to Identify 5-Lipoxygenase as a Susceptibility Gene for Obesity and Bone Traits.” Now, the reason I chose this article is that it ties together different disciplines of medicine. We might say orthopedics or endocrinology owns one aspect, internal medicine and diabetology owns another, and immunology owns lipoxygenase and inflammation. Gee whiz, it seems like we’re all pushed together into one network of thinking, so this paper seems like an application of what you were describing in the previous work.
ES: Yes, exactly. One of the cool concepts to come out of that paper, I think, was this: if you have all this complexity, all of this interaction going on, how do we resolve down to something that’s actually actionable? The only way you can do that is through understanding causality. In this vast network of interactions, of correlations, how do we understand which is the driver and which is the responder or passenger? How do we start resolving that? In that paper we laid out that naturally occurring variations in DNA could be considered a systematic source of perturbations, as opposed to an unnatural perturbation, which would be something like knocking out a gene on purpose, or overexpressing it at ten thousand times the normal level, or chemically perturbing it. Here what we said is that we can leverage naturally occurring DNA variation as a perturbation source and resolve causality between variables in a very data-driven fashion.
We don’t need to get into the complexity of that kind of modeling, but just know that we’re now making causal connections in a completely data-driven fashion—we’re not accepting anything as true a priori. We’re saying we’re going to generate the data, we’re going to let the data speak, and then come up with these relationships in a completely de novo fashion. And what came out of that was this concept that pathways aren’t simple, linearly ordered constructs; they are very complex, integrated, nonlinear constructs that connect things previously thought to be independent or unconnected. So whether it’s the impact certain genes can have on obesity as well as bone growth, or whether it’s some of our later papers linking diabetes with Alzheimer’s, these different conditions—these different perturbations—that hit our system aren’t generally isolated to one part of the system. Generally, they affect multiple parts of the system, and we’ll start linking diseases together into classes that, before the network view, would have been considered completely independent.
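The idea of using naturally occurring DNA variation to orient causality can be illustrated with a toy simulation (my own construction for illustration, not the paper’s actual model): if a variant L drives trait T1, which in turn drives trait T2, then conditioning on T1 should abolish the association between L and T2, while conditioning on T2 should not abolish the association between L and T1.

```python
# Toy sketch: a DNA variant as a perturbation source that orients
# causality between two correlated traits (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n = 20000

L = rng.integers(0, 3, n).astype(float)   # genotype: 0/1/2 alleles
T1 = 0.8 * L + rng.normal(0, 1, n)        # the variant drives trait 1 ...
T2 = 0.8 * T1 + rng.normal(0, 1, n)       # ... which drives trait 2

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    zc = np.column_stack([np.ones_like(z), z])
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Marginally, the variant is associated with both traits ...
print(np.corrcoef(L, T2)[0, 1])   # clearly nonzero
# ... but conditioning on T1 screens off the L-T2 association
# (consistent with L -> T1 -> T2), while conditioning on T2
# does NOT screen off the L-T1 association.
print(partial_corr(L, T2, T1))    # near zero
print(partial_corr(L, T1, T2))    # still clearly nonzero
```

The asymmetry between the two conditional tests is what lets the genotype act as a natural perturbation: it points "into" the trait network from outside, so the ordering of the traits downstream of it becomes statistically decidable.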
JB: Yes, and I could spend at least an hour talking with you about this specific paper because it also integrates basic biology. We now know that 5-lipoxygenase, which controls the production of a series of inflammatory mediators, interconnects the immune system with cell types like the adipocytes, so those mediators have an effect on what I call angry obesity, the inflammation-related, disease-related form of obesity. It ties together with osteoporosis and the osteolytic/osteoporotic changes driven by inflammatory signals in the bone, and it ties together with other conditions associated with inflammation. So you have demonstrated, through this model of integrating gene expression with genotypic data in a segregating population, that we can in fact predict biology in ways that you can’t by looking at monogenetic traits.
Objectivity is the Strength of a Data-Driven, Network-Based Approach
ES: Exactly right. Some of the findings can be so surprising. I just like to tell this little story because it pre-dated the 2005 paper. It was all the way back in ’99, when I was at Roche, where I started before I went on to Rosetta. We sort of pioneered what we called the genetics and gene expression approach there, studying allergic asthma. One of the genes I identified using this approach was complement component 5—C5—and its receptor, and so on. That was nearly heretical to the asthma field at the time, and I remember giving the presentation at Roche on my analysis of this allergic asthma mouse population, where we applied, for the first time ever, this new gene chip from Affymetrix to assay all the genes being expressed in the state of allergic asthma.
So I was able to put together this picture where I said this complement gene is really coming up as the top hit, and the response from the asthma experts in the room at the time was: if you knew anything about asthma, you would know that the complement system is absolutely not involved; therefore you must not understand anything you’re talking about, and so we don’t believe your conclusions. It was sort of a depressing thing, but several years later every pharma company thinking about asthma had a complement program going, because ultimately that was all validated and shown to be one of the key mechanisms involved in allergic asthma. So again, the ability to be completely objective, completely data-driven, making completely novel connections, is really the great strength of this network-based approach.
Perturbagens: Food and Pharmaceuticals Can Push Molecular States Toward or Away From Disease
JB: Let’s move on quickly, and I think people will see the intellectual lineage of the discovery here. We move on to articles like the 2007 article on the pharmacogenetics of metformin response, subtitled “A Step in the Path Toward Personalized Medicine.” That article ties together with a number of your subsequent publications, but one I want to talk about specifically appeared in Human Molecular Genetics in 2010: “The Effect of Food Intake on Gene Expression in Human Peripheral Blood.” The reason I’m tying these two together is that you point out in the metformin discussion that the drugs used to manage type 2 diabetes all have different mechanisms of action. They hit different genes and different gene expression patterns, but diet and lifestyle play roles in modulating those genes and expression patterns as well. So if we’re really going to personalize or tailor treatment—because if you have a hundred type 2 diabetic patients, you have a hundred different diabetic patients; they may have the same diagnosis, but they have different molecular etiologies—you start asking how things like food, in that individual, affect mRNA expression, and how that could influence the progression of what we later call type 2 diabetes. I think the theme of this series of articles is really the transformation of medicine: opening the therapeutic window to the whole array of things that perturb or influence gene expression patterns.
ES: Yes, I love it. I think that’s exactly right, and I think it comes down to what I was saying earlier in response to your characterization of the toxicogenomic work: the RNA is not only a key driver, playing a role in defining the processes that carry out functions in your cells, but also this amazing sensor. Your diet pushes the molecular states of the system, whether in a favorable way that protects against disease or in a way that encourages disease. The things we are eating are perturbagens, just like a drug is a perturbagen, albeit food is more complicated. But the beautiful thing is that through these advanced technologies we can understand the molecular response of the system to different types of diet, different food groups, different types of drugs. And again, back to the landscape map: we have the ability to project these different perturbations onto the same map to understand how they are connected. Which parts of the system are they hitting? Are they hitting those parts of the system in ways that promote or protect against disease? And they define the different subtypes, as you were getting at. They stratify patient populations into different subgroups because the underlying molecular mechanisms can be directly observed. All of that combined takes you more toward molecular-based, precision-based medicine that enables you to connect the molecular biology to the physiology. If there’s one thing we learned in molecular biology through the one-protein targeting approach, it is that if you ignore the physiology as a system—if you’re not connecting the molecular biology with the physiology—your ability to impact clinical medicine is severely limited. So what I see in all of these maps we talk about—these different connections—is the ability to directly link molecular biology to the physiology of the system, and through that to have the right impact on clinical medicine.
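One way to picture projecting perturbations onto a shared map is connectivity-map-style signature matching: score each perturbagen’s expression signature against a disease signature, where a strongly negative score suggests the perturbation pushes the system away from the disease state. The gene names and all numbers below are invented for illustration:

```python
# Toy sketch: score perturbation signatures against a disease
# signature via cosine similarity (invented numbers, illustrative).
import numpy as np

# Gene order for all signatures below (hypothetical panel).
genes = ["IL6", "TNF", "ALOX5", "ADIPOQ", "PPARG"]

# Disease signature: log-fold changes observed in the disease state.
disease = np.array([1.5, 1.2, 0.9, -1.1, -0.8])

# Hypothetical perturbagen signatures measured in the same gene space.
perturbagens = {
    "drug_X":  np.array([-1.3, -1.0, -0.7, 0.9, 0.6]),   # opposes disease
    "toxin_Y": np.array([1.1, 0.8, 1.0, -0.9, -0.5]),    # mimics disease
    "food_Z":  np.array([0.1, -0.2, 0.05, 0.1, -0.1]),   # roughly neutral
}

def connectivity_score(sig, ref):
    """Cosine similarity: +1 mimics the reference state, -1 opposes it."""
    return float(sig @ ref / (np.linalg.norm(sig) * np.linalg.norm(ref)))

for name, sig in perturbagens.items():
    print(name, round(connectivity_score(sig, disease), 2))
# drug_X scores strongly negative (pushes away from the disease state),
# toxin_Y strongly positive, food_Z near zero.
```

Because every signature lives in the same gene space, drugs, toxins, and foods can all be scored against the same disease map, which is the sense in which food and pharmaceuticals become comparable perturbagens.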
JB: So I hope everyone is catching this. This is a kaleidoscope of how a great medical/industrial complex changes overnight. These are absolutely frame-shifting, perspective-changing, reference-point-altering perspectives. I think this article you published in Pharmacoeconomics in 2011 used a term I find very apt because it ties together much of what we’ve been talking about, and that’s “integrated genomics”: elucidating the complexity of response to an environmental agent. In this case you’re looking at the complexity of a drug response, showing variation from person to person. We now know that the CYPs, the cytochrome P450s, may vary in their functional state from one person to another by a factor of a thousand—three orders of magnitude. These are tremendously wide variations in biological response, processed by the same named enzyme, based upon both genomic SNPs and different expression patterns. So integrated genomics is really forming a new type of health care system, a heuristic that relates to integrated physiology, which relates to function. All of what we later call pathology changes in time and space based upon the lens we use to define it. What doesn’t change is the perturbation of function that ultimately gives rise to the dysfunction we later call disease. I think this article on integrated genomics, which was the first time I saw you use that term in print, is a very powerful placeholder.
Future of Medicine: Mastering Information and Integration
ES: Yes, and I think maybe the one thing in the earlier days that drew me to the work you were doing in the great state of Washington was having exactly the right kind of vision: viewing things from that functional perspective and recognizing the need to integrate these different dimensions of data to achieve a more comprehensive view of how functions were being impacted, and whether they were being impacted in the right or the wrong direction. That was just the right kind of thinking at the time, in my mind, and it’s awesome to see, over the decade since, a lot of this play out on a level that is surprising even to me. I think the future of medicine is clearly going to belong to those who can master this kind of information and integration, and the patients who are being interpreted in that way are going to have far better outcomes, and so that will keep driving this revolution.
JB: Well, obviously I could go on and on because your work has been so expansive, but being sensitive to your time, let me bring this to a close by saying that your studies, Eric, are quite remarkable because of their collaborative nature. I’m thinking of one of them, “Novel Loci for Adiponectin Levels and their Influence on Type 2 Diabetes and Metabolic Traits,” a multi-ethnic meta-analysis of 45,891 individuals. So if people say there’s not enough data, I think you’re covering a pretty wide swath, and I look at the number of co-authors on this paper (I didn’t count them up, but there must be over a hundred, from different international consortia), so you’re really creating a new science. You’re not just creating a new paradigm; you’re creating a new way of gathering data, using data, analyzing data, and creating solutions to complex problems for which one-pill-for-one-ill is not going to work. Those models are over, a new model has to emerge, and you’re actually generating that new model. You’re looking at how genomics plays a role in protection against, or increased relative risk of, macromolecular damage, and how that relates to biological aging and age-related diseases. You’ve done collaborative studies on hypertension, breast cancer, ovarian cancer, coronary heart disease, diabetes, obesity, increased BMI and cardiometabolic disease, osteoporosis, and the list goes on. This model that you designed and are working on with your colleagues is a scalable model that is transformative in terms of health care. It takes a while to get with it and understand it when we come from a pill-for-an-ill mentality—you know, the antibiotic mentality of the past century, where you had a drug so selective that you could interrupt a specific differential effect on bacterial cell wall biosynthesis and solve the disease entity.
This complex model requires a different stretching of our imagination, our thought, and our information gathering, but you’re doing it. You’re doing it in a very, very logical step-wise fashion by collaborating with some really broad thinkers, mining the human phenome so that we can actually use the Lille scores to look at these biological effects, and analyzing our exome and whole-genome sequencing so we can really apply these to clinical problems. I just want to applaud you and give you one small voice in the wilderness, here—an extraordinary “attaboy.” I just think what you’re up to is actually going to be the medicine of our 21st century that is going to reduce the burden of unnecessary disease and provide solutions to these complex chronic conditions by integrating the best information that we can use, which will be lifestyle, environment, diet, new drugs, new biologics, new ways of thinking about disease that empower people toward wellness. Thank you, is my answer.
ES: Thank you, Jeff, for that very kind interpretation. You know, we always feel like we’re maybe not quite nailing it, and it is very complex, and there’s a long way to go, but I think we’re on the right trajectory, and groups like yours are as well, and I think the future is going to play out such that we improve outcomes in a dramatic way by generating and interpreting information in this fashion.
JB: Well, we wish the very, very best of success to you and your colleagues. Be it known that the functional medicine field is following right along with you, and we are some of your strongest cheerleaders.
ES: Thank you, Jeff.