What would an ideal drug discovery/drug development process look like?


Given an attrition rate of >90% for new drug candidates, what would an ideal drug discovery/development process look like? Is it a failure to understand disease biology that drives the attrition rate, a failure to recruit the appropriate target patient population for clinical trials, or something more systemic?

As a person who has made drugs and is planning on making a career out of it, I hope that a lot of things change by the time I reach the middle of my career. The ideal drug development process would, and should, look very different from the current system in place.

The interesting thing is that most of the interactions that already exist should stay in place. A large part of the current inefficiency of the drug discovery / development process is that the incentives and goals are misaligned for all of the individual players in the drug-making process. It’s certainly an acknowledged problem, and everyone in the drug industry talks about it. To describe how deep a hole we’re in, I give you Eroom’s law (Moore backwards), which suggests that the cost of developing a new drug doubles every nine years.

Eroom’s Law [1]

This is pretty unsustainable, and Pharma knows it, so there are a lot of experiments and proposals to change the drug development process so that failures occur earlier and successes are identified early and pushed through.
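
To make the compounding concrete, here is a minimal sketch of what a nine-year doubling time implies; the starting cost is a hypothetical round number, not a sourced figure.

```python
# Minimal sketch of Eroom's law: development cost doubles every ~9 years.
# The $1B starting cost is a hypothetical round number, not a sourced figure.
DOUBLING_TIME_YEARS = 9

def projected_cost(cost_now, years_from_now):
    """Cost after the given number of years under a fixed doubling time."""
    return cost_now * 2 ** (years_from_now / DOUBLING_TIME_YEARS)

# One 36-year career of this trend turns $1B per drug into $16B per drug.
for years in (0, 9, 18, 27, 36):
    print(years, projected_cost(1.0, years))  # in $B: 1, 2, 4, 8, 16
```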

Everything I will describe tackles the underlying goal of reducing the attrition rate and costs of the drug development pipeline. To briefly outline MY OWN opinions on what needs to happen to have a cost-effective system, I’m going to dive into:

  • Adjusting the goals of academic research to focus on clinically relevant areas
  • Changing how medical research appropriately informs and guides that basic research
  • Bridging the gap between academic and industrial research through both academic efforts and increased funding from drug companies
  • Diversifying risk between the various stages of drug development by focusing on individual strengths
  • Revamping the clinical trial process, drug approval system, and influence of marketing to allow for smaller but faster trials
  • Integrating the drug distribution system into the healthcare network
  • Creating and rewiring the feedback loops between all of these systems


The Role of Basic Science

Let’s start from the beginning. Academics are certainly the seed and start of medical discovery and innovation. They ask the questions we don’t know the answers to, and they find the answers; from their understanding of biology, chemistry, and medicine, drug companies can take things to the next level.

Unfortunately, the targets that are hot topics in academia will very often be undruggable, or at least very hard to hit: targets like protein-protein interactions, transcription factors, protein aggregates, ubiquitin modifiers, RNA, and epigenetic regulators.[2] A great example of a protein target that receives a lot of attention but is completely undruggable is p53, as described in Can p53 be synthesized into a drug to cure cancer? It’s a target that is extremely dynamic, has a multitude of interactions, and has plenty of off-target effects. People may get the impression that maybe one day we’ll figure out how to turn it into a useful drug, but in reality we’ll probably never hit it, and it remains merely a very interesting topic of study in molecular biology (with great scientific importance, I would add).

The result is that only ~2% of human genes are actually druggable, which largely separates what academics work on and say we can cure from what drug companies can actually cure. [3] I describe this more in my answers to Human Genome Project: Was all the promise publicized by the media during the mapping of the human genome simply hype? and Why has genomics been so unsuccessful in the discovery of new medicines?

Every now and then, one of those “undruggable” targets becomes druggable with the invention of new technologies or chemistries. Things like recombinant technology, antibodies, stapled peptides, and PEGylation have made it possible to attack new targets. Indeed, one of the widely assumed “undruggable” targets, K-RAS, was recently hit by a team from Max Planck using structure-based drug design.[4] Yet there is still a separation between what academics are working on and what companies can actually do.

This was well stated by Stuart Schreiber (who, I note, gave me much of the structure of this portion of the argument): “Academic research … might have a greater impact if it were redirected to developing methods that change our view of what is doable.” [5] While there is much talk about target-based drug discovery, the modern era hasn’t produced much in terms of drugs, and a large part of that is the failure of basic science researchers to choose good targets. [6]

The fix for this issue would be to:

  1. Have real MSTPs. A good number of MD/PhDs don’t end up going into research or they lose too much momentum because of residency. Having true hybrid scientists will help bridge that gap between what patients need and what is actually possible.
  2. Clearer discussions on what is and isn’t “druggable”. As others have mentioned, scientists should do a better job of working backwards from the final product rather than casting about for applications of their most recent discovery.
  3. Improvement on target identification. People need to recognize signal from noise and unfortunately there is a lot of noise.
  4. Commitments to developing new chemistries and technologies to target the “undruggable” space.
  5. Better academia/pharma interactions.

The first two are cultural things that academics need to be less stubborn about. The third is an area ripe for progress. As mentioned by Taffy Williams and Mike Thompson, the advent of personalized medicine drastically improves the ability to relate a disease to its molecular mechanism of action. Furthermore, HTS technologies are being designed to be more amenable to disease-based drug discovery rather than target- or gene-based approaches. To better connect academic research with disease, we need to go further down the chain to medical research.



Connecting Physicians and Scientists

Drug discovery really starts with observations in the clinic. Doctors observe patients and, by recognizing patterns, develop a better idea of what makes up a disease and maybe begin to form an idea of the underlying mechanism. I go into this more in my answer to How do pharmaceutical companies go about finding cures for diseases?

The problem with this model is that doctors are notoriously bad at doing science and statistics: they either see patterns out of nothing or run trivial investigator-initiated trials that are under-powered, biased, non-randomized, non-placebo-controlled, and poorly designed. Again, this is a large reason why better medical scientists are required in medicine.

The ideal situation is better data collection using Electronic Medical Records and releasing that data from the EMR companies so that information about patient habits and disease diagnoses can be used. Unfortunately, this information is very difficult to get unless we have a complete overhaul of the healthcare system, which I will go into later. I illustrate the value of having this data with the alternative route.

For now, I suggest reading the five-part series How to build a good EMR by Jae Won Joh.

The existing model for data collection is the use of patient communities like Susan G. Komen and the Cystic Fibrosis Foundation, which have been extremely helpful to both doctors and pharmaceutical companies. They help link symptoms to the underlying cause of disease and pool together patient populations, so companies can understand the epidemiology and identify which drugs will have the broadest effect.

As seen in the questions below, there are very good reasons for pharmaceutical companies to create a strong patient community: to better understand the disease and to help with clinical trial enrollment.

This has been extremely effective in the rare disease community, where collecting and sharing data better informs patients, doctors, and drug companies. However, a shared worldwide network that better captures all of the variance of a disease would vastly improve physicians’ ability to systematically track trends and maintain a consistent standard of care. Furthermore, in post-approval studies, this type of network allows us to better identify side effects and drug-tolerant patient populations.



The Bridge to Pharmaceutica

Obviously this isn’t merely academia’s fault. Pharmaceutical companies need to carry their weight on the drug discovery side. Given the large amount of money that is already dumped into research, it is important to prioritize research funding. Currently, however, those funds are poorly utilized.

I go more into the economics in the marketing section, but for now it is important to realize that there are large, non-trivial cost barriers to translating an idea from academia into a company. As every startup knows, there is something called the “valley of DEATH”.[7]

Ignoring the y-axis “cumulative profit/loss” and replacing it with “expected value”, the graph is essentially the same in the eyes of venture capital and investors. During the early stages of drug development, the probability of success is extremely low, and the expected value of the drug is equally low. Only after a lot of time and money does “commercialization”, or proof of concept, occur and a drug becomes worth investing in.

Unfortunately for the biotech industry, the valley of death usually coincides with a Phase II clinical trial, which takes ~$20-100 million to get to, as the next figure demonstrates [8].
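
To see the investor’s math behind that valley, here is a minimal expected-value sketch; every probability and dollar figure below is a hypothetical illustration, not sourced data.

```python
# Minimal sketch of the valley of death in expected-value terms.
# All probabilities and dollar figures are hypothetical illustrations.

def expected_value(p_success, payoff, sunk_cost):
    """Success-weighted payoff minus the money already spent (all in $M)."""
    return p_success * payoff - sunk_cost

payoff = 2000.0  # hypothetical value of an approved drug, $M

# Preclinical: success is improbable while costs keep mounting -> underwater.
print(expected_value(p_success=0.01, payoff=payoff, sunk_cost=30.0))   # -10.0
# After a Phase II proof of concept: the odds jump and the program is fundable.
print(expected_value(p_success=0.30, payoff=payoff, sunk_cost=150.0))  # 450.0
```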

So either VCs need to start doing Series A rounds earlier in the process and regularly fund companies pre-IND and pre-Phase III, or another large player needs to step in. In addition, academic groups need to do a better job connecting their publications to the final product to reduce uncertainty and risk.

This is probably the most exciting current area of drug development, as it requires the least amount of momentum to achieve large, meaningful results. Universities, drug companies, and VCs are all experimenting with how to tackle this gap.



Academics making their drug fundable

I’ll start with the academics. I mentioned earlier that academics tend to work on problems that don’t usually yield tangible results. A deeper issue, however, is not realizing the disconnect between a successful publication and the commercialization of that idea.

The inability to draw in a licensing deal or VC funding can be summarized by:

  • A poor understanding of the economics of the disease
  • Lack of meaningful clinically relevant data
  • An inability in academia to weed out false positives.

A poor understanding of the economics of the disease
Since most PhDs aren’t MBAs, they really have no clue how health insurance works or how much drugs actually cost. Typically, the way research is funded is:

  1. Find something cool
  2. Find what that cool thing does
  3. See if what that cool thing does is useful
  4. Justify doing more research on that cool thing based on what it does

This is a totally reasonable way of doing research, but it’s also the reason why the NSF is getting in trouble with Lamar Smith. Essentially, most biological research is driven by finding random applications of the science rather than finding the appropriate application and making involved hypotheses to guide that science.

For academics to seriously make an impact, they must first check in with physicians to see what actually happens in the disease they are interested in, and then adjust the drug in a manner that is suitable for that disease.

For instance, several “cures” for HIV, including bone marrow transplants and aggressive antibody treatments, are impractical since a handful of “inexpensive” oral drugs will essentially do the same thing and be safer.

Lack of meaningful clinically relevant data
Everyone has seen the article “X cures cancer”. What most people forget to do is read the small text: “this might be useful as a drug in 15-20 years”. Typically these high-impact publications go along the lines of demonstrating efficacy in an early model system and then following up those observations with the next logical series of experiments.

The common saying in the drug discovery world is that “you get what you screen for”. As critics of the pharmaceutical industry will say, we’re good at curing mice. While we face the same drug development issues even when we attempt to treat mice, the result remains the same: our drug discovery pipeline isn’t optimized for finding drugs that treat human diseases. That is, things like chemical-based screening and target-based screening don’t necessarily produce clinically relevant results. As mentioned later, the major sources of failure are lack of efficacy and toxicity, which basically suggest that you’ve chosen the wrong target to attack.

The alternative is to design screens that identify clinically relevant compounds from the start. Using disease-specific cell-based assays is one method; using several filters for activity is another. There are also several efforts to build better mouse models that actually have human immune systems and die from human cancers. The world of iPSCs also opens the door to the creation of immortalized cells that come directly from a diseased patient.

The ideal scenario is to change drug discovery from a linear process to an integrated research pipeline which eliminates false positives from the start. I’ll go more into the research integration later.

Proposal for bridging the valley of death [9]

An inability in academia to weed out false positives.
Certainly a sensitive topic in research is the question Is most medical research wrong? Why or why not?

A classic paper, Why Most Published Research Findings Are False by John Ioannidis, suggests that there is an unfortunate tendency for publications to select for positive data. In my own answer, I claim that this falsehood comes from the misinterpretation of data, and the answers by Manish Kothari and Michael W. Long also go along those lines. Surprisingly, it isn’t because of fraud or data manipulation; it’s more that people are pressured into seeing what they want to see and making the wrong analysis.

This issue also reflects the very difficult task of reproducing research. As a personal example, one company was trying to replicate our data using a similar experimental setup and failing to do so. In fact, they had to send “experts” to directly observe my labmate doing his experiment, and they even made him use their own reagents to confirm the result. Ultimately, we narrowed it down to them using a poor source for a few reagents, along with forgetting to mix certain chemicals in a certain order. Unfortunately, the person who figured this all out left, so we had to teach it all over again.

A lesser-known study by Bayer and Amgen formed validation teams that essentially spent a year trying to reproduce other people’s data. Their conclusion: only ~20-25% of the data was reproducible, and in ~2/3 of the data there were inconsistencies. [10]

Again, this isn’t because of fraud or data manipulation. In many of these cases, the teams had to replace cell lines or change the assay formats to get the hypothesis to work, and even then there were inconsistencies. There is a lot of variation in biology, and there are several factors that may cause a false positive.

The moral is: just because your paper was accepted in Nature doesn’t mean it’s scientifically sound enough to spend $1 billion on. To validate your idea to the point where a company is willing to take the risk requires several confirmations. If the drug works in an assay, use a new assay; if it works in another assay, use a cell-based assay; if it works in a cell-based assay, use another cell-based assay; if it works in that cell-based assay, use a mouse; if it works in a mouse, use a rat; and so on. See How do drug researchers address effects that only occur in rats?

As intellectual pursuits, these studies may not be particularly rewarding, but they are the scientifically correct thing to do and ultimately bring in investors. There is also the whole revamping of the publishing model, which I will also go into later.

I conclude this chapter with a brief telling of The Sirtuin Saga regarding Resveratrol.[11]

It was triggered by a 2003 study by David Sinclair suggesting that the molecule in red wine extended the lifespan of yeast cells, and at one point it appeared to reduce aging in mice. This resulted in the founding of the biotech company Sirtris, which was ultimately acquired by GSK for $720 million. However, later studies suggested that the in vitro assays showing this activity had artifacts due to the presence of the fluorescent molecule used in the experiment. Recent data suggests that the assay only worked under specific conditions, but it did work. In the end, the scientists did isolate real activity; they just started a ~$1 billion company off the wrong lead compound. [12] [13]

This debate itself has led to multiple Quora questions.

If you ask Alex K. Chen for his opinion, the answer is maybe.

Academic pipelines
In order to get researchers to recognize these pitfalls, universities have created internal pipelines to help academically minded people solidify business-friendly science that can be licensed out.

Most of these groups help professors and students get through the hurdles mentioned above and use industrial expertise to indicate what risks remain with the proposed technology. Ultimately, the ideas become mature enough to be licensed or spun out into a company, allowing professors to get back to their professing.

A few of these programs already exist, and the best examples include:

  • Stanford SPARK
  • MIT NEWDIGS
  • Northwestern CMIDD
  • Emory Institute for Drug Development
  • U Toronto MaRS
  • UCSF CDDS


Drug companies funding academics

As the academics reduce their risk, companies and VCs need to do a better job taking risks. There are usually two schools of thought on how to approach this problem:

  1. Drug companies should start pulling out their checkbooks and fund early-stage research through aggressive M&A or partnerships
  2. Diversify the risk to Contract Research Organizations and let them handle early-stage clinical development

Those who believe in virtual and lean startups will tell you to go with option 2, and since they have an MBA, they are probably right. I will tell you to go with option 1, since only pharmaceutical companies have the long-term discipline and vision to prevent further fragmentation of an already shaky potential drug. The current reality is something in the middle, since Pharma companies are too unwieldy to move quickly through development and small biotechs are too desperate to do good science.

The ideal model is a Pharma-funded early-stage pipeline program that operates independently of the parent company but has the financial and intellectual capital to succeed.

Good examples of these early pipeline programs include:

  • Genentech (gRED) / Roche
  • Chorus / Lilly
  • CORTEX / Pfizer
  • Centocor / JnJ

Bad examples of early pipeline programs that weren’t independent include:

  • Groton / Pfizer
  • Sandwich / Pfizer
  • Kalamazoo / Pfizer
  • Wyeth / Pfizer
  • Kent / Pfizer
  • Sirtris / GSK
  • Research Triangle / GSK
  • Harlow / GSK
  • Whitehouse Station / Merck

Basically, don’t be Pfizer. What essentially happened was that pharmaceutical management interfered with early R&D and started outsourcing certain functions to other countries with “expertise” for the sake of “efficiency”. What that actually means is waiting for a chemist in China to ship their compound to the assay development team in North Carolina, which uses a protein created in Switzerland. It’s pretty much guaranteed not to work. What you really want is a small, well-funded mini-biotech that cranks out a bunch of compounds.

GSK shifted to a Therapy Area Units (TAUs) system, but they are in trouble since they keep changing the model every 8 years, whenever they get a new CEO. Novartis uses the NIBR model; JnJ never bothered to integrate their units; Roche has pRED and gRED; Merck stuck with MRL, but the new R&D chief is proposing an aggressive reshuffling at the time of writing. [14]

Pfizer has changed its research model from independent research labs to “Centers of Therapeutic Innovation”, which collaborate heavily with several universities. They are essentially outsourcing all of their R&D to academic labs. Probably a wise move, but probably not worth imploding their research units over.

In summary: pharmaceutical managers need to stop moving units around every ten years, especially when the products involved take 15 years to develop.

Part of this is a disciplined approach to outsourcing, which gets back to the MBA’s approach to doing research. There is a lot of value in contracting out research:

  1. It allows companies to focus on what they are good at
  2. It reduces training time of new hires
  3. It spreads out capital costs (especially with contract manufacturing)

Where this quickly goes wrong is when expertise gets lost and communication gets severed. I mentioned my own outsourcing story earlier, and Derek Lowe has several deep and bitter discussions on the problems of outsourcing. When outsourcing causes you to spend more time troubleshooting your supply chain than doing science, you’re sacrificing time and money. [15]



Making Marketing Departments SHUT UP

According to Adithya Balasubramanian’s answer to What is the detailed cost breakdown of an expensive clinical trial?, ~90% of the cost to approve a single drug comes during the Phase III clinical trial. Phase III trials are expensive and unfortunately still fail 40% of the time.

The major reasons behind failure: lack of efficacy, and toxicity. [16]

At some point we had information from Phase II trials telling us that the drug had a pretty good shot at working. As indicated in the cost breakdown, on top of already costing a lot, Phase III trials are getting more expensive because they are getting longer and more complicated, with lower patient retention and enrollment rates. All in all, we are being too aggressive in the way we design clinical trials and push compounds into Phase III.
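
As a back-of-the-envelope illustration of what that 40% failure rate does to the economics (the per-trial cost below is a hypothetical round number):

```python
# Back-of-the-envelope: a 40% Phase III failure rate compounds directly into
# the cost of each approval. The per-trial cost is a hypothetical round number.
phase3_cost = 300.0  # $M per Phase III program (hypothetical)
p_success = 0.60     # ~40% of Phase III trials fail

trials_per_approval = 1 / p_success           # ~1.67 programs per approval
cost_per_approval = phase3_cost * trials_per_approval
print(round(cost_per_approval))               # ~500 ($M) vs. 300 if none failed
```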

A good example of getting impatient and going blindly into a Phase III trial was the recent Pfizer, JnJ, and Elan effort in Alzheimer’s, which blew over $1 billion on two trials of Bapineuzumab despite very indifferent Phase II data. See Where did most of the money in the failed Bapi phase III trial go? The companies had their eye on the $5 billion / year Alzheimer’s market but didn’t allow the science to dictate their strategy, and they placed a bad gamble. [17]

My hypothesis is that we rush to Phase III too quickly and design the trials to be too broad. If the Phase II data indicates that the drug works in half of the patients, we should be testing the drug in the responsive half. The marketing team will say: that’s too complicated; let’s test all of the patients and make twice as much money with a blockbuster.

The unfortunate result is that to adequately power such a broad trial, you need a larger subject population and a longer enrollment period. That ultimately costs several times more and has a higher chance of failure than a smaller, well-powered trial. This comes back to efficacy and toxicity: if you have a good idea which patients will likely show the best efficacy and least toxicity, you should design your trial for those patients. As mentioned earlier, failure is partially due to choosing an incorrect target, but enrolling patients who don’t have the right target is also a sure way to get a dud.
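
Here is a minimal sketch of that trade-off using a standard two-proportion power calculation (via statsmodels); the response rates are hypothetical placeholders, not data from any real trial.

```python
# Sketch: patients per arm for an all-comers trial vs. an enriched trial.
# Response rates are hypothetical placeholders, not data from any real drug.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Patients per arm needed to detect the given response-rate difference."""
    effect = proportion_effectsize(p_treatment, p_control)
    return NormalIndPower().solve_power(effect_size=effect, alpha=alpha,
                                        power=power, alternative='two-sided')

# All-comers: half the enrollees can't respond, so the observed effect dilutes.
print(n_per_arm(0.20, 0.30))  # ~146 patients per arm
# Enriched: biomarker-selected responders show the full effect.
print(n_per_arm(0.20, 0.40))  # ~40 patients per arm
```

The enriched design needs roughly a quarter of the patients for the same statistical power, which is essentially the logic behind the Herceptin example below.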

The clinical trial that went against this tide was Herceptin’s, which attacked a gene overexpressed in only ~20% of breast cancers. With a disciplined scientific approach, Genentech resisted the pressure to increase their market size 5x and narrowed in on the smaller group of HER2-positive patients. Doing so allowed them to use a significantly smaller population (10-20 times smaller) and get approval more quickly.

However, there is a reasonable question of whether the marketing people were right. Recent reports do suggest that Herceptin works even for certain HER2-negative patients, and a non-trivial proportion of doctors don’t pay attention to HER2 status prior to prescribing the drug. In the end, you want the broadest indication, since it gets you the most money in the brief period of time the drug patent exists. Marketing got their blockbuster anyway.

You can’t blame Pharma companies for thinking this way. I’m sure some MBA has shown that taking these types of aggressive gambles should actually make more money in the short term despite the higher failure rate. As a result, to incentivize a trend towards smaller but better designed trials, we need an overhaul of the drug approval process.



A new interaction between the FDA, Insurance, and Pharmacy

While a lot of drugs fail simply because we didn’t identify failures early, as in the case of Bapi, there are some drugs that failed because too many non-responders were enrolled. In my non-medical opinion, drugs like Vioxx and Avandia should probably be on the market since they do work, despite what Steve Nissen says. Their problem is that they are being marketed to the wrong people.

Despite the large controversy in data reporting for Rofecoxib (Vioxx), it was an extremely effective drug for some patients. It certainly had risks, but advisory panels in both the US and Canada voted in favor of allowing the drug to return, judging that the benefits outweighed the risks. However, the publicity hit had already happened, and Merck decided to permanently withdraw the drug.

Another good drug with devastating side effects was Thalidomide, the drug that triggered the strengthening of the FDA in the first place. The drug infamously caused numerous birth defects and was quickly withdrawn from the market. As we can see in Can Thalidomide — a drug with an exceptionally controversial history — actually be used to treat multiple diseases as claimed in the article cited below?, the once dangerous drug has reemerged as a potential Multiple Myeloma drug.

This brings us to the need to change the drug approval process. Pharmacogenomics can significantly change our ability to rationally identify patients who will respond well to drugs, which allows us to better design clinical trials and enroll the right patients.

The risk of these trials is still high, and there are real concerns about generating enough of a profit to bridge the valley of death.

The appropriate proposal is the use of adaptive licensing, which takes advantage of accelerated approvals to start charging patients, under extremely restrictive conditions, to recoup the costs of drug development. While it may cost more per patient initially, it lowers the barrier to entry and reduces the time spent in the valley of death.

Depiction of Adaptive licensing [18]

A good example is the recent approval of Lomitapide, which ran a tiny 29-patient Phase III trial for the ultra-rare genetic disease homozygous familial hypercholesterolaemia and got FDA and EMA approval for only that indication. However, the compound has potential efficacy in heterozygous patients, and with the initial approval in the small patient population, the company should have enough cash flow to initiate the larger Phase III trials.

Much of this falls on the FDA, whose role is to ensure that efficacy and safety are in place. With that in mind, they need to reassure companies that they won’t treat these early, small Phase III trials as mere “proof of concept” trials and that they are willing to look at earlier NDA filings.

It should be acknowledged that reducing the patient population does complicate enrollment, and as indicated by several answerers in What are some of the biggest challenges with setting up and conducting clinical trials?, enrollment is one of the hardest steps in clinical trials. However, by tapping into patient communities and using smaller trial designs, I am hopeful that this dilemma can be resolved.



Closing the Feedback Loop

Typically, when you see something about drug development, you see a funnel like this:

I’ve always hated this diagram.

It makes the entire drug development process seem extremely linear, as if the secret to getting a drug is simply taking more shots. It also assumes that failure is built into the process.

The real drug development process looks more like the next two diagrams [19]:


The key thing that makes these proposed systems work is the ability to use current data to better design future experiments. Rather than working on several compounds and removing them by a process of elimination, you’re working on a single product that gradually gets refined and polished by the time it reaches approval. Failures should lead to new hypotheses and guide the development rather than close the door. For this loop to be complete, several things need to change.

Doctors and Insurance
These folks were blamed before, but now they are getting blamed again. For adaptive approvals to work, doctors will have to restrain one of their most powerful tools: off-label use. At the same time, insurance companies need to do a better job enabling off-label use when it is appropriate.

As mentioned, doctors need to do a better job observing and reporting patient outcomes. With the increased role of Phase IV monitoring, this becomes even more important. Doctors will also need to adjust to the increased role of companion diagnostics and personalized genomic information. For instance, an abnormally large percentage of doctors prescribe Herceptin without checking their patient’s HER2 status. While the next wave of doctors is beginning to be trained with this mindset, a full overhaul of medical practice won’t occur for at least another 30 years, when the veterans finally die out (however, we still want our doctors on Quora to live forever). Even the current medical education given to people like Jae Won Joh and James Pan doesn’t fully integrate a mindset of using personalized medicine.

Insurance companies will also need to shift from a high-deductible mindset to a preventative mindset. Drugs in the US are still extremely expensive to the end user, and insurance companies aren’t doing enough to negotiate those prices down appropriately. They will also need to shift from a disease-based model to a target-based model: we can no longer treat breast cancer as just breast cancer but must instead treat HER2-positive vs. EGFR-positive cancers. With these systems in place, drug repurposing would become easier to recognize and push through.

Completing this side of the feedback loop will be a key step. For this to happen, Electronic Medical Records will have to be commonplace, and better, more systematic data collection needs to be implemented.

The interesting arenas of clinical trials with personalized medicine are the MD Anderson BATTLE trial and the British Columbia Cancer Agency’s Utilization of Genomic Information to Augment Chemotherapy Decision-making for People With Incurable Malignancies, in combination with PREDICT. These efforts use full genome sequencing from single patients in an attempt to personalize cancer treatment. However, according to Marco Marra (I saw him at a conference), there are all sorts of logistical hurdles, including biopsy collection and access to off-label drugs. There is also the broader inability to make meaningful connections between genomic datasets and the root cause of cancer.
