Our trust in science is based on the assumed replicability of research results; however, scientific research is in the midst of a replication or reproducibility crisis following the realization that many individual findings are, in fact, not replicable. The field of neuroimaging is no exception.
What does ‘reproducibility’ mean?
Replication means independently repeating the methodology of a previous study and obtaining the same results. Computational reproducibility, its narrower cousin, means rerunning the same analysis code on the same data and obtaining the same results.
According to Ludmer Centre researcher Dr Jean-Baptiste (JB) Poline at the Montreal Neurological Institute and Hospital (The Neuro) and McGill University, investigations into the problem are highlighting the importance of paying attention to details, particularly those related to process. Researchers need to be meticulous in replicating the same, not merely similar, methodologies, standards, conditions, and inputs. The investigator's foundational training also plays a role. Institutions have tried, with limited success, to address this need by adding short courses or boot camps to existing curricula. In Dr Poline's view, a new training approach is needed, one that he and his colleagues K. Jarrod Millman (UC Berkeley), Matthew Brett (University of Birmingham), and Ross Barnowski (Lawrence Berkeley National Laboratory) laid out in a June 2018 paper, "Teaching computational reproducibility for neuroimaging".
The co-authors noted that rigor and clarity are hard to retrofit. Adding them as a post-training adjunct is doomed to failure, because students are then building on an insecure foundation. Once young researchers have learned to work in an informal and undisciplined way, it is difficult for them to switch to a process that demands more rigor and thought.
The authors tested the hypothesis that both the tools for efficient, reproducible work and the principles of neuroimaging can be taught effectively by building a course around a substantial collaborative project, one that puts the tools into immediate practice. The students in the study had little to no prior exposure to neuroimaging or to the relevant computational tools and processes. They were given the open-ended task of designing and executing a project, either a replication or an extension of a published neuroimaging analysis, built from code they had written themselves.
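The paper does not prescribe specific tooling, but the kind of computational reproducibility the course targets can be sketched in a few lines: a fully scripted analysis with pinned randomness, so that two reruns on the same inputs produce identical, verifiable outputs. Everything below (the toy data, the permutation test, the `fingerprint` helper) is a hypothetical illustration, not the students' actual code.

```python
import hashlib
import json
import random

def run_analysis(data, seed=0):
    # Hypothetical stand-in for a neuroimaging analysis step:
    # a seeded permutation test on a simple group difference.
    rng = random.Random(seed)  # pinned randomness -> identical reruns
    mean = lambda xs: sum(xs) / len(xs)
    observed = mean(data["patients"]) - mean(data["controls"])
    pooled = data["patients"] + data["controls"]
    n_patients = len(data["patients"])
    n_perm = 1000
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = mean(pooled[:n_patients]) - mean(pooled[n_patients:])
        if abs(diff) >= abs(observed):
            count += 1
    return {"observed": round(observed, 6), "p": count / n_perm}

def fingerprint(result):
    # A stable hash of the result lets a collaborator confirm
    # that their rerun matched exactly.
    payload = json.dumps(result, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

data = {"patients": [2.1, 2.5, 1.9, 2.8], "controls": [1.4, 1.6, 1.8, 1.5]}
first = run_analysis(data, seed=42)
second = run_analysis(data, seed=42)
assert fingerprint(first) == fingerprint(second)  # same inputs, same outputs
```

The design choice worth noting is that nothing depends on manual steps: the data, the seed, and the analysis all live in the script, which is what makes an independent rerun, and hence a meaningful comparison of results, possible at all.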
The authors' final analysis of the new training program demonstrated that some of the issues with reproducibility can be better addressed by rethinking how we approach training. Read the paper to learn more.
K. Jarrod Millman, Matthew Brett, Ross Barnowski, Jean-Baptiste Poline. Teaching computational reproducibility for neuroimaging. arXiv:1806.06145, June 15, 2018.

Dr Poline is a leader and long-time member of the international organizations guiding neuroimaging and neuroinformatics developments, notably the International Neuroinformatics Coordinating Facility (INCF); the Organization for Human Brain Mapping (OHBM); and the Findable, Accessible, Interoperable, and Reusable (FAIR) data-sharing initiative. Dr Poline is the co-Chair of NeuroHub and Chair of the Technical Steering Committee for the Canadian Open Neuroscience Platform (CONP) at the Montreal Neurological Institute and Hospital (The Neuro), and a Principal Investigator at the Ludmer Centre for Neuroinformatics & Mental Health.