Replication, Replication

Being a good undergrad, I walk around with the smug face of a scientific zealot – it’s easy to criticise research practices when you aren’t running any studies yourself.1 The critical approach to experimental methods and statistical analyses, however, was also one factor that attracted me towards psychology. My Intro to Psych teacher started the first class with a picture of Sigmund Freud: “You know this guy? I’m gonna trash him!” And so he did, and the entire field of personality research with him. Other lecturers have been more subtle, but most have encouraged me to examine research findings carefully and with a critical eye.2

Last semester, being one of the rare people who find true joy in the depths of spreadsheets, I took a course called ‘Advanced Statistics’. It was taught by Dr. Maurits de Klepper, Amsterdam University College’s Teacher of the Year – and deservedly so. Maurits is not just an amazingly committed educator; he’s also a fervent defender of scientific integrity. So while the class covered some of the more advanced statistical methods, we also teamed up to put our skills to use by replicating the analysis of a previously published paper of our choice.

There was a curious atmosphere about the class – who, after all, would take a non-compulsory applied statistics course, if not the greatest geeks? – such that none of the groups stopped at merely replicating the analysis of their chosen paper; all of them improved on it. And improvement was necessary: it appeared that none of the six papers – from psychology, economics, and public health – lived up to the standards Maurits had taught us (see my paper with my friend Zuzanna Fimińska here). If a bunch of undergrads with a liking for statistics can so easily criticise researchers who are, in part, highly regarded, doesn’t that point to a severe problem?

So this is what I had in the back of my mind when reading the latest issue of Perspectives on Psychological Science (no subscription required), which is entirely devoted to replication. The editors describe a dire situation of rising doubts in the discipline, fuelled by a series of highly publicised cases of fraud and by studies showing just how widespread improper practices are:

“These doubts emerged and grew as a series of unhappy events unfolded in 2011: the Diederik Stapel fraud case, the publication in a major social psychology journal of an article purporting to show evidence of extrasensory perception followed by widespread public mockery, reports by Wicherts and colleagues that psychologists are often unwilling or unable to share their published data for reanalysis, and the publication of an important article in Psychological Science showing how easily researchers can, in the absence of any real effects, nonetheless obtain statistically significant differences through various questionable research practices (QRPs) such as exploring multiple dependent variables or covariates and only reporting these when they yield significant results.”3 The list continues.
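To make the last of those points concrete, here is a minimal simulation sketch – my own illustration, not taken from any of the cited papers, with all names and numbers invented for the example – of how one questionable research practice inflates false positives: measure several dependent variables where no real effect exists, and ‘report’ a study as successful whenever any one of them happens to come out significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_sims=10_000, n_per_group=20, n_dvs=3, alpha=0.05):
    """Share of simulated studies that find 'something' significant,
    even though the two groups never differ on any dependent variable."""
    hits = 0
    for _ in range(n_sims):
        a = rng.standard_normal((n_per_group, n_dvs))  # group A, no true effect
        b = rng.standard_normal((n_per_group, n_dvs))  # group B, no true effect
        p_values = [stats.ttest_ind(a[:, j], b[:, j]).pvalue for j in range(n_dvs)]
        if min(p_values) < alpha:  # the QRP: report whichever DV 'worked'
            hits += 1
    return hits / n_sims

print(false_positive_rate())  # roughly 0.14 instead of the nominal 0.05
```

With three independent dependent variables, the nominal 5% error rate already climbs to roughly 1 − 0.95³ ≈ 14%; add optional stopping or flexible covariates, and it rises further still – which is exactly the mechanism the Psychological Science article quoted above demonstrates.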

What makes a cure even more elusive: psychology has its very own problem with replication – it is becoming ever rarer.4 Even for graduate students, merely replicating existing findings is a thankless endeavour, scarcely rewarded and discouraged by what Christopher J. Ferguson and Moritz Heene call an “aversion to the null” in psychology.5 If it ain’t significant, it ain’t gonna be published. The problem goes so far that it appears appealing to let undergraduates do the work.6 But when discussing my capstone project, even I was discouraged from merely replicating known findings.

It could be disheartening to see how far psychological research is from the ideal I am being taught, especially as it appears that many don’t exactly want to change anything about it. One paper by Heather M. Fuchs, Mirjam Jenny, and Susann Fiedler investigated how researchers feel about stricter requirements for studies.7 The paper is entitled “Psychologists Are Open to Change, yet Wary of Rules”, and it shows that many researchers endorse stricter good practices – but not rules for publication. So what change is that supposed to be? It gets even worse if you look at the statements themselves. For instance, not even half of the respondents (n = 1292, so this is not just a quick asking-around-in-the-department) thought it should be good practice that “Authors must collect at least 20 observations per cell or else provide a compelling cost-of-data-collection justification.” This is one of those famous ‘you learn that in your first statistics class’ statements that you would hope to be deeply ingrained in the mind of anybody who made it as far as grad school, not to speak of tenure.8 Yes, it might not be feasible in some disciplines, especially neuropsychology9, but denying that it is good practice?
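And 20 observations per cell really is a floor rather than an ambition. A quick power calculation – my own back-of-the-envelope sketch, not anything from the Fuchs et al. paper – shows what such a sample buys you in the simplest two-group comparison:

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sample t-test with 20 observations per group,
# alpha = .05, two-sided, for Cohen's small / medium / large effects.
analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    power = analysis.power(effect_size=d, nobs1=20, alpha=0.05, ratio=1.0)
    print(f"d = {d}: power = {power:.2f}")
# d = 0.2: power = 0.09
# d = 0.5: power = 0.34
# d = 0.8: power = 0.69
```

In other words, even with 20 observations per cell a genuine medium-sized effect will be missed about two times out of three – hardly a requirement anyone should want to water down further.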

Then again, there are also signs of positive change – not least the current issue of Perspectives on Psychological Science and the ongoing debate that has motivated it. Elsewhere, the medical researcher John Ioannidis has made an iconoclastic career out of his claim that “most published research findings are false”.10 And the Internet, too, opens up new opportunities. The Reproducibility Project is one: an attempt at large-scale, open collaboration to reproduce prominent findings, pooling together small efforts across what might become hundreds of institutes.

  1. Which of course is not exactly true in my case.
  2. This, by the way, was also what deterred me most strongly from studying economics: there appears to be little space in undergraduate curricula for research methodology, let alone critical questioning. Not only are assumptions often made rather implicitly; when I asked, I rarely found them defended with arguments.
  3. Pashler, H. & Wagenmakers, E.-J. (2012). Editors’ Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence? Perspectives on Psychological Science, 7(6), 528-530. doi:10.1177/1745691612465253
  4. Ferguson, C. J. & Heene, M. (2012). A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null. Perspectives on Psychological Science, 7(6), 555-561. doi:10.1177/1745691612459059
  5. ibid.
  6. Grahe, J. E., Reifman, A., Hermann, A. D., Walker, M., Oleson, K. C., Nario-Redmond, M., & Wiebe, R. P. (2012). Harnessing the Undiscovered Resource of Student Research Projects. Perspectives on Psychological Science, 7(6), 605-607. doi:10.1177/1745691612459057
  7. Fuchs, H. M., Jenny, M., & Fiedler, S. (2012). Psychologists Are Open to Change, yet Wary of Rules. Perspectives on Psychological Science, 7(6), 639-642. doi:10.1177/1745691612459521
  8. Another one that comes up time and again is statistical significance. How often have I read sentences à la “X was faster/higher/stronger than Y, but the difference was not significant”. If you write this, you have either slept through Stats 101, or you are trying to imply something your data simply does not support.
  9. Then again, it might spare us certain neuropsychology studies and the associated tabloid headlines…
  10. Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. doi:10.1371/journal.pmed.0020124
