Science has one fundamental mantra: standing on the shoulders of giants. Scientific discoveries are meant to build on the information obtained in prior studies. We are building a mountain of knowledge, where the next person in the scientific line adds another stone to the top, getting us closer and closer to understanding (if that is ever attainable, but that's the goal). However, for this system to work, we have to make one giant assumption: the studies we base our work on must be reproducible.
One of the better meta-analyses of premiere scientific research found that only 11% of cancer research studies are reproducible. C. Glenn Begley, the Head of Global Cancer Research at Amgen, identified 53 highly regarded studies from top-tier journals that could potentially lead to new cancer therapeutics. Before starting to build upon the work, he set about replicating each of the studies. Only 11% (6 studies out of 53) could be replicated. When the team of 100 scientists at Amgen couldn't replicate the studies, they contacted the authors directly. Some of these experiments were repeated under the original authors' direction. Still, 47 of these studies could not be replicated.
"Just because an idea is true doesn't mean it can be proved. And just because an idea can be proved doesn't mean it's true." ― Jonah Lehrer
Reproducibility is essential if we are to develop therapies based upon findings in basic research.
The biotechnology and pharmaceutical industry depends on discoveries from the academic community. These studies provide insight into the appropriate drug targets. Keep in mind that developing a new therapy can cost north of $1B in development and regulatory hurdles (when accounting for all the failures as well). Imagine the catastrophic impact of going through this development process based on high-impact science that cannot be reproduced.
We take scientific discoveries at face value and represent them as fact. Should we not trust science?
Science is very hard. That's not quite right. It's finicky. Gone are the days when individual investigations led to massively and broadly impactful results. All the big, simple questions with big answers are (mostly) resolved. Generally speaking, we understand how gravity works. We have a pretty good idea that blunt trauma to the head can lead to cognitive and memory issues. Aspirin and penicillin are probably two of the best medications known to man, even though they were discovered approximately a century ago. However, the more we've learned, the more we realize how little we understand. Science has become extraordinarily complex. It's now a game of subtlety. I believe this is the problem. It's like trying to replicate a recipe with 1,000 ingredients, where the timing of when each ingredient is introduced is essential, and the oven temperature fluctuates as a function of the weather.
What would help is a resource for what doesn't work. Publishing scientific experiments that don't work should be just as valued and important as publishing successes. We need a place where scientists and engineers can openly discuss their failures in great detail. And be rewarded for it. Right now, nothing like this exists (although Mark Hahnel & Figshare are making a go of it). In fact, it is discouraged. Having a resource that describes what doesn't work would probably move science forward much faster than our descriptions of success.