
The Reproducibility Crisis

If you are in the world of science, or simply interested in it, you have undoubtedly heard about the reproducibility crisis. The crisis refers to a recurring pattern: a lab publishes big claims based on some interesting results, but when others try to repeat the experiments and get the same outcomes, they can't, and those miraculous findings turn out to be a one-off event.


In 2016, the journal Nature ran an article titled “1,500 scientists lift the lid on reproducibility.” Of the 1,576 researchers surveyed, 52% said they believe there is a significant crisis. One of the core tenets of science is that work answering a question using the scientific method should be reproducible by others. So how is it that, after all these years of advancement and methodological improvement, we so often get confusing and non-reproducible results? The answer is a bit complicated.


To start, it's important to note that the idea of reproducibility is itself a bit confusing; there doesn’t seem to be a consensus on what it means or what it should be. For our purposes, we will define reproducibility as the ability to recapitulate the results and analysis outcomes of a specific, clearly defined set of experiments in a publication. To attempt to reproduce scientific results, you need access to a few different types of information:

  • The sample population (animals, humans, or cells)

  • The methods used in the experiments (what kinds of tests were run, the variables, and the hypothesis being tested)

  • The analyses and statistics applied to the data (how the authors decided whether their results were significant)

You typically also want access to the data the group collected; this lets you rerun the statistics yourself if you have any questions about what they are reporting (a short sketch of what that can look like follows the list below). Problems tend to arise in a few ways:

  1. Incomplete reporting of the sample population, methods, or statistics

  2. Use of an inappropriate sample population, method, or statistical approach

  3. Blatant misreporting or outright falsification of data
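
To make the “rerun the statistics yourself” idea concrete, here is a minimal sketch of what a quick reanalysis check can look like. Everything in it is hypothetical: the column names, the data values, and the choice of test are placeholders standing in for whatever a real paper and its shared dataset would actually provide.

```python
# Minimal sketch of re-checking a reported result against shared data.
# All names and numbers below are hypothetical placeholders, not from any real paper.
import pandas as pd
from scipy import stats

# In practice you would load the authors' shared file, e.g.:
#   data = pd.read_csv("shared_dataset.csv")
# Here, a tiny made-up dataset stands in for it.
data = pd.DataFrame({
    "group":   ["treatment"] * 5 + ["control"] * 5,
    "outcome": [12.1, 14.3, 13.8, 15.0, 12.9, 10.2, 11.5, 9.8, 10.9, 11.1],
})

treatment = data.loc[data["group"] == "treatment", "outcome"]
control = data.loc[data["group"] == "control", "outcome"]

# Recompute the comparison described in the methods section
# (here, Welch's t-test, which does not assume equal variances)
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

# Compare these against the values reported in the paper
print(f"recomputed: t = {t_stat:.2f}, p = {p_value:.3f}")
```

If the recomputed numbers match what the paper reports, great. If they don't, that is exactly the kind of discrepancy readers should be able to chase down, and that is only possible when the data and methods are fully shared in the first place.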

The first two problems on that list are the more common ones. Some papers will fail to mention results that don't fully support the hypothesis, or results that don't further the narrative. It's good to remember that even negative data is important data!


Similarly, scientists may use a method that isn't well suited to answering their question, or apply the wrong analytic techniques to get an answer more in line with their hypothesis. We have a fairly recent example of questionable statistics in the highly sensationalized paper “Increased global integration in the brain after psilocybin therapy for depression”, in which statistical techniques were misapplied to support some fairly large claims. This led to a highly publicized response from respected scientists pointing out the statistical missteps. Fortunately, the public back-and-forth around that particular paper has since settled.
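
I won't rehash the specific critiques here, but one generic illustration of how an analysis choice can inflate findings, offered as a toy example rather than a claim about that particular paper, is running many statistical tests without correcting for multiple comparisons. The sketch below uses made-up p-values to show how several “significant” results can stop looking significant once a standard Bonferroni correction is applied.

```python
# Toy illustration of multiple-comparisons correction (made-up numbers,
# not a reanalysis of any real paper).

# Suppose an analysis ran 10 separate tests and obtained these p-values
p_values = [0.011, 0.034, 0.048, 0.21, 0.37, 0.44, 0.52, 0.61, 0.73, 0.88]

alpha = 0.05
n_tests = len(p_values)

# Naive reading: count every p < 0.05 as a "finding"
uncorrected_hits = [p for p in p_values if p < alpha]

# Bonferroni correction: each test must clear alpha divided by the number of tests
corrected_hits = [p for p in p_values if p < alpha / n_tests]

print(f"'significant' without correction: {len(uncorrected_hits)}")  # 3
print(f"significant after correction:     {len(corrected_hits)}")    # 0
```

The point is not that correction always makes findings disappear; it's that analysis choices can make the difference between a headline result and a null one, which is why methods and statistics need to be reported in full.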


Although I would say that #3, the falsification of data, is the least common of the three, a meta-analysis from 2009 found that 1-2% of scientists admitted to falsifying or fabricating their data! That sounds like a negligible number, but the impact of falsified data cannot be overstated. Very recently, it came to light that a prominent Alzheimer’s researcher may have fabricated images that significantly shaped the direction of Alzheimer’s research. If the falsification is confirmed, the impact will be monumental, as those data directly informed the amyloid-beta plaque theory of how Alzheimer’s develops. Stuff like this is what nightmares are truly made of (mine, at least).


As scientists, it is our job to present data to the scientific community and the public in an objective and honest way. If we can’t be honest about our data, then really, what’s the point? One problem in academic research is the pressure to produce, which is certainly part of why some scientists feel the need to present data in a rushed or even fabricated way. But we need to move toward treating all results, null results included, as just as important as positive ones.


Science is a collaborative art that builds on the work of others, so it is our job to give our peers and the public the clearest and most accurate representation of our work that we can.
