Thank you, Nigel. And thank you--Agilent--for letting me put in my $0.02. Actually, I'm going to backtrack a little bit from what Nigel described about using a lot of these multi-omics in toxicology. And since this is something like my fifth talk in two days, I decided to take a bit of perspective and give more of a philosophical presentation--some opinions from what I've learned over the 12 to 14 years I've been doing omics in toxicology, and some perspective on how I believe some of these newer omic technologies, and the integration of those, will essentially be able to inform--there we go, cool--toxicology and risk assessment.
So, that said--and some of these are intentionally going to be somewhat provocative--this is really what I've learned over the past 12 to 14 years about how we can apply these particular tools.
So, my first perspective--and this talk is really broken out into three perspectives, and it's going to be short. I'm going to give you data describing why I take each position and each perspective.
But, essentially, I believe--at least from the last 12 to 14 years--that using omics, whichever omics you prefer, to try to predict hazard is a fool's errand. And that's despite the fact that I've done exactly that considerably over the past 12 years, and the fact that I'm doing it now, using omics to try to predict the local lymph node assay, right?
And I do believe that using these types of omic approaches to predict a hazard per se--high-dose hazard in animals--is like Sisyphus pushing that boulder up the mountain, only to have it roll back down and have to push it up again, throughout his lifetime.
And why do I say that? Well, first, these large-scale transcriptomic studies--myself included--that try to predict complex chronic toxicity in animal models, let alone in vitro, have really only achieved 70 to 80 percent accuracy, at least for those studies that have incorporated a large number of chemicals.
And this is one example of what I did in my lab, trying to use transcriptomics to predict chemically induced lung tumor formation in a mouse model. All right? So, we took a number of different chemicals--both positive and negative carcinogens--exposed mice to them for 90 days, and used transcriptomics to try to predict what the tumor response would be at the two-year time point.
And ultimately, we came up with five different statistical models that predicted with a sensitivity of about 70 percent and a specificity of about 80 to 90 percent. And that's the best we could do. And that's about the average of what people have been showing for predicting chemically induced tumors in rats with the iconics [sp] dataset.
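As a reminder of what those performance figures mean, sensitivity is the fraction of true positives the model calls positive, and specificity the fraction of true negatives it calls negative. A minimal sketch--the confusion-matrix counts below are invented for illustration, chosen only to land near the quoted numbers, not the study's actual results:

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts are invented placeholders, not the study's data.

def sensitivity(tp, fn):
    """Fraction of true positives the model calls positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives the model calls negative."""
    return tn / (tn + fp)

# e.g., 14 of 20 positive carcinogens flagged, 17 of 20 negatives cleared
print(sensitivity(tp=14, fn=6))   # 0.7
print(specificity(tn=17, fp=3))   # 0.85
```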
Not only that, there are too many rare toxicities with not enough chemicals to build predictive models. If you look at chemically induced tumors as just one example, a complete set of disease-specific biomarkers for the cancer bioassay would require covering 36 tissues that had at least one positive chemical.
And if you don't lump those tissues by organ, you actually have 45 tissues for which you would have to build these predictive signatures--and that's just for carcinogenesis, let alone neurotoxicity, developmental and reproductive toxicity, immunotoxicity, and toxicity in your left toe.
So, if you look at the cancer bioassay, of those 45 tissues--or 36 when you group them by organ--only 24 tissues have at least five positive chemicals in one species and one sex.
And you do need a robust set of probably 30 to 50 chemicals in order to build a biomarker set that predicts hazard, right? So, what are you going to do with those toxicities for which you only have maybe two or three chemicals that produce them?
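To make that sparsity problem concrete, here is a minimal sketch that counts, per tissue, how many chemicals were positive, and shows how few tissues clear even a modest bar. The tissue names and counts are made-up placeholders, not the actual bioassay tallies:

```python
# Illustrative data sparsity check: how many tissues have enough
# positive chemicals to support a predictive signature?
# Counts below are invented examples, not real bioassay tallies.

positives_per_tissue = {
    "liver": 120, "lung": 40, "kidney": 12, "thyroid": 9,
    "bladder": 6, "skin": 4, "zymbal_gland": 3, "clitoral_gland": 2,
}

def tissues_with_enough_chemicals(counts, minimum):
    """Tissues with at least `minimum` positive chemicals, sorted."""
    return sorted(t for t, n in counts.items() if n >= minimum)

# A robust signature might need 30-50 positive chemicals per tissue...
print(tissues_with_enough_chemicals(positives_per_tissue, 30))
# ...but most tissues have only a handful of positives.
print(tissues_with_enough_chemicals(positives_per_tissue, 5))
```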
So, the next point: to build a robust predictive model, one needs multiple chemicals that act through redundant paths, right? Even if you have 10 chemicals that produce a toxicity and you think that's enough to build a predictive signature, consider that something like carcinogenesis probably occurs through 100 different paths, right?
And you're going to need chemicals redundantly in each of those paths--two, three, four chemicals per path--in order to have the statistical power to identify that path as one significantly leading to carcinogenesis, or to neurotoxicity, or to developmental or reproductive toxicity.
And finally, the last point: do you really want to predict high-dose animal studies anyway? This is the point that Thomas made--that's kind of a fool's errand as well.
So, my next perspective, perspective number two: using omics to infer mode of action is currently a challenge for environmental chemicals and unlikely to be achieved consistently in the near future. And what do I mean by that? All right. There are a number of efforts saying that, you know, we can use omics to infer mode of action.
Well, regardless, I've collected a number of different omic studies on a number of different chemicals, in time and dose series. And I have yet to be able to say conclusively, based solely on omic data, how a particular chemical produces its effect--whether it's by oxidative stress, by damaging mitochondria, or by other routes of toxicity.
And the reason, I believe, is that the majority of environmental chemicals--probably a good 60 to 70 percent or more, and this is more true of environmental and industrial chemicals than of pharmaceuticals--act to cause toxicity through weak, nonspecific interactions: essentially more than one mode of action. So, by the time you're producing toxicity through these weak, nonspecific interactions, you're producing it through four or five different modes of action, four or five different mechanisms. You're mucking up the mitochondria. You're messing with the cell membranes at the same time. You're messing with the proteasome, right?
And so, how are you going to infer and tease apart all those mixed modes of action with one broad transcriptomic signature? That's also going to be a challenge, particularly for these nonspecific chemicals.
The resulting toxicity occurs through multiple modes of action, and it's a challenge to decipher all the key events--not that we can't get there, and not that I don't think we should try. But I think that, in the near future, this is going to be one of the major challenges we face.
And even for potent and highly specific interactions, we currently don't do a good job of identifying mode of action. Dioxin, for example: we still don't know conclusively how dioxin produces liver tumors, right, despite the fact that we've investigated this compound for more than 30 years and have multiple omic experiments on it, receptor knockout animals, and so on and so forth. We still don't know conclusively what the pathway of toxicity is for dioxin.
And this just kind of summarizes a workshop--I think it was last year, or two years ago--where we were trying to describe the mode of action for dioxin. And ultimately, the result of that expert workshop, which included multiple lectures on dioxin, boiled down to the fact that it binds to the Ah receptor, which we already knew; it increases cell proliferation; and it decreases apoptosis.
So, essentially, that's currently the state of the art for our understanding of dioxin's mode of action.
But, I do believe that some strategies, such as using these cross-species differences in toxicity, as well as integrated omic approaches, can significantly help address that particular challenge. We still have a long way to go, though.
For example, we did a study trying to use cross-species differences to infer the mode of action of chloroprene-induced lung tumors. In this particular study, we looked at female mice, which are sensitive to chloroprene-induced lung tumors. You can see that in this two-year bioassay on chloroprene, even a low dose induced lung tumors, whereas rats did not get significant increases in lung tumors.
And if you took the cross-species differences--which pathways were enriched in the mouse lung at tumorigenic doses but were not enriched in the rat lung at any dose we exposed them to--there were actually very few pathways that came out enriched.
And they were all consistent with what we believed the mode of action for chloroprene to be. These were things like NRF2 regulation of oxidative stress--which is induced not just by oxidative stress but also by other sorts of reactive metabolites--and glutathione pathways, which are also induced by reactive metabolites. So, using these cross-species differences and these omic approaches to identify the induction of such pathways can put forward modes of action for these chemicals, which can then be followed up with additional targeted studies.
All right. And perspective number three, the most practical--this is the one I'm going to end on. And this is kind of where I've evolved in my omics and toxicology work, particularly over the last five years.
The most practical near-term application of omics in toxicology testing and risk assessment is really to identify the region of the dose-response curve with no excess perturbations--essentially a NOTEL, or no-omic-effect level: a "NOMEL," so to speak.
And why do I think that? Well, the one thing that omics is extremely good at, right, is casting the broadest net possible. You're going to interrogate every gene in the genome, the entire transcriptome; you're going to interrogate all the different metabolites. So, it's casting, in our particular profession, the broadest net possible to capture all the different perturbations you're going to see, right?
And what the research I've shown over the last three years says is that this no-transcriptional-effect level--the NOMEL, or no-omic-effect level--is highly correlated with apical responses. For example, this was a study we did with about 15 different chemicals spanning about four different target organs--everything from bladder to liver, thyroid, lung--in two species, rat and mouse. We exposed these animals for 90 days.
And we looked at the no-transcriptional-effect level on a pathway basis. Essentially, within a particular pathway--say one with 10 genes--we looked at where you began to perturb that pathway as a function of dose. Okay?
And we looked not just at the most correlated pathways but essentially at the most sensitive pathway, right? Then we took the transcriptional perturbation of that most sensitive pathway and compared it to what's called a tumor-based point of departure, which is essentially the dose at which you begin to see tumors in animals.
And in fact, where you began to see transcriptional perturbations correlated with where you began to see tumors at about 0.94. And not only that, it predicted it within a factor of two: on average, there was a 1.5-fold--1.54-fold--difference between when you began to see transcriptional perturbation and when you began to see tumors, right?
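The pathway-level point-of-departure (POD) idea can be sketched as follows. Everything here is an illustrative placeholder--the pathway names, the dose series, the perturbed-gene fractions, the simple threshold rule (a stand-in for whatever formal dose-response modeling the actual analysis used), and the tumor POD:

```python
# Hypothetical sketch: derive a "most sensitive pathway" transcriptional
# POD and compare it to an apical (tumor-based) POD.
# All data and the threshold rule are illustrative, not the study's.

def pathway_pod(doses, perturbed_fraction, threshold=0.05):
    """Lowest dose where the fraction of perturbed genes in a pathway
    exceeds the threshold; None if it never does."""
    for dose, frac in zip(doses, perturbed_fraction):
        if frac > threshold:
            return dose
    return None

doses = [0, 1, 3, 10, 30, 100]  # mg/kg-day, illustrative
pathways = {  # fraction of genes perturbed at each dose (made up)
    "NRF2 oxidative stress": [0.0, 0.0, 0.1, 0.4, 0.7, 0.9],
    "glutathione metabolism": [0.0, 0.0, 0.0, 0.2, 0.5, 0.8],
    "cell cycle": [0.0, 0.0, 0.0, 0.0, 0.3, 0.6],
}

# Transcriptional POD = POD of the most sensitive pathway.
pods = {name: pathway_pod(doses, frac) for name, frac in pathways.items()}
transcriptional_pod = min(p for p in pods.values() if p is not None)

tumor_pod = 5.0  # illustrative tumor-based POD, mg/kg-day
fold_diff = max(tumor_pod, transcriptional_pod) / min(tumor_pod, transcriptional_pod)
print(transcriptional_pod, round(fold_diff, 2))
```

With these made-up numbers, the most sensitive pathway (NRF2) sets the transcriptional POD, and it lands within the factor-of-two window of the apical POD described above.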
And I think a lot of this is driven by the fact that the majority of our environmental chemicals out there produce toxicity through these weak, nonspecific interactions. And so, where you begin to muck up the cell is also where you begin to see those pathologies.
And since I knew Thomas was going to be presenting ahead of me: recently, we also ran three of those chemicals in vitro--the three liver-specific toxicants. Okay. And what we did is compare, on the Y axis--in this case it was non-cancer effects--essentially the point of departure for non-cancer effects in the liver. Okay?
And on the X axis we put what's called the most-sensitive-pathway point of departure, looking at the relationship between where you began to see transcriptional perturbation in primary rodent hepatocytes and where you began to see pathological effects for those chemicals in vivo. Okay?
And we ran that. We exposed these primary hepatocytes at concentrations in the tissue-culture well that were equivalent to blood concentrations for those chemicals from the in vivo studies. So, we essentially dose-matched what we were exposing our primary hepatocytes to against what was being dosed in vivo.
And we looked at the alignment at five days of exposure in vivo, two weeks, four weeks, and 13 weeks of exposure in vivo.
And we compared an in vivo transcriptomic point of departure with in vitro transcriptional points of departure at 12 hours and at five days. All right? And the apical pathological response is obviously the same for each particular chemical.
But, what I'd point out is that the yellow dots and the red dots are pretty well correlated, regardless of whether you're looking at these transcriptional perturbations in vivo or in vitro.
What that's actually saying--at least for an N of three, and for primary hepatocytes--is that you can get a pretty good estimate of where you're going to begin to see pathological apical responses in vivo just by looking at where you begin to see transcriptional perturbations in vitro, all right, regardless of what time point you look at.
So, I think that applying this no-transcriptional-effect level--the NOMEL--in a margin-of-exposure framework is probably going to triage a significant percentage of chemicals as we do toxicity testing.
And what do I mean by that? Well, you take these in vitro tools, as well as the short-term in vivo transcriptomic studies, and use each of those technologies to estimate the region of safety--the point of departure: the dose at which you depart from that region of safety and begin to see pathological effects.
I'm not going to tell you what that pathological effect is going to be. I can't predict whether it's going to be liver toxicity or liver tumors, or whether you're going to get biliary hyperplasia or necrosis. But I can tell you at what dose you're going to begin seeing it.
And if you take that point of departure, compare it with estimates of human exposure, and work within a margin-of-exposure framework, you're going to be able to triage about 40 percent of chemicals--if you're willing to accept a margin of exposure of about 100.
If you're more conservative and want a margin of exposure of 1,000, you can triage at this particular step probably about 25 percent of chemicals. Okay? And then, for those chemicals with a margin of exposure of less than 100, you go into more refined in vivo models and use in vivo transcriptomics to get within a factor of two of your point of departure, and you'll be able to triage an additional 50 percent of the chemicals.
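The triage rule described here is just margin of exposure (MOE) = point of departure / estimated human exposure, compared against a chosen threshold. A minimal sketch--the chemical names, PODs, and exposures are invented for illustration:

```python
# Margin-of-exposure triage: MOE = POD / estimated human exposure.
# Chemicals with MOE at or above the threshold are triaged out of
# further testing. All names and numbers below are invented.

chemicals = {
    # name: (transcriptional POD, estimated exposure), both mg/kg-day
    "chem_A": (10.0, 0.001),   # MOE 10,000 -> triaged at either threshold
    "chem_B": (5.0, 0.1),      # MOE 50     -> goes on to refined testing
    "chem_C": (50.0, 0.2),     # MOE 250    -> triaged only at MOE >= 100
}

def margin_of_exposure(pod, exposure):
    return pod / exposure

def triaged(chemicals, threshold):
    """Names of chemicals whose MOE clears the threshold, sorted."""
    return sorted(name for name, (pod, exp) in chemicals.items()
                  if margin_of_exposure(pod, exp) >= threshold)

for threshold in (100, 1000):
    print(threshold, triaged(chemicals, threshold))
```

A stricter threshold (1,000 instead of 100) triages fewer chemicals at this step, mirroring the 40-percent-versus-25-percent trade-off described above.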
So, essentially, what you end up sending on to traditional bioassays and traditional testing is probably only about 5 percent of your chemicals overall, which gives you considerable cost savings down the road.
So, to show the economic and animal-sparing capabilities of this type of multi-omic approach, we did a comparison of what the economics would be. This is our proposed tiered testing scheme, with the breakdown of the fraction of chemicals in each tier and the approximate cost per chemical--say about $10,000 for the tier-one test, a bit more for tier two, and so on.
And if you're going to test 10,000 chemicals, it's going to cost you in the neighborhood of almost $2 billion--and, very unfortunately, a large number of animals. But if you benchmark that against what you're currently forced to do in Europe under REACH, all right, it's a considerable cost savings.
Currently, REACH is broken down by tonnage, with a fraction of chemicals binned into each tonnage class. And it would cost you, for the same 10,000 chemicals, close to $5 billion--and four times as many animals. All right?
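The cost comparison is simple arithmetic: for each tier, the fraction of chemicals reaching it, times the per-chemical cost, times the total chemical count. The tier fractions and per-chemical costs below are illustrative placeholders tuned to land near the quoted ~$2 billion, not the actual figures from the proposed scheme:

```python
# Back-of-the-envelope program cost for a tiered testing scheme.
# Tier fractions and per-chemical costs are illustrative placeholders,
# not the actual figures behind the ~$2B estimate.

N_CHEMICALS = 10_000

# (fraction of chemicals reaching the tier, approx. cost per chemical, $)
tiers = [
    (1.00, 10_000),     # tier 1: in vitro transcriptomic screen on all
    (0.60, 50_000),     # tier 2: short-term in vivo transcriptomics
    (0.05, 3_000_000),  # tier 3: traditional two-year bioassay
]

def program_cost(tiers, n_chemicals):
    """Total program cost: sum over tiers of fraction * cost * count."""
    return sum(frac * cost * n_chemicals for frac, cost in tiers)

total = program_cost(tiers, N_CHEMICALS)
print(f"tiered scheme: ${total / 1e9:.2f}B")
```

The point is that only the small fraction of chemicals surviving triage ever reaches the expensive final tier, which is where the savings relative to a REACH-style tonnage-band program come from.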
So, just this kind of tiered testing scheme--using these omics to work out where your point of departure is going to be--is economical, animal-sparing, and much quicker than a number of the traditional in vivo models.
So, that, in my opinion, is going to be the most practical near-term application of these technologies--until, and I still say this, we can really get better at integrating these technologies to identify mode of action.
I do believe mode of action is ultimately where we want to go, because being able to understand a chemical's mode of action is really going to help us understand which toxicities are going to be rat-specific and which are going to manifest themselves in humans.
So, I think the mode-of-action argument is really important for understanding the cross-species extrapolation of these toxicities. But, at least until we get better at inferring mode of action from these multi-omic responses, we can do something in the near term by avoiding that transcriptional perturbation.
And with that, I'd like to thank everybody in my lab, as well as the funding agencies that supported this work. And of course, I'll take any questions.