Being a good undergrad, I walk around with the smug face of a scientific zealot – it’s easy to criticise research practices when you aren’t running any studies yourself. The critical approach to experimental methods and statistical analyses, however, was also one factor that attracted me towards psychology. My Intro to Psych teacher started the first class with a picture of Sigmund Freud: “You know this guy? I’m gonna trash him!” And so he did, and the entire field of personality research with him. Other lecturers have been more subtle, but most have encouraged me to examine research findings carefully and with a critical eye.
Last semester, being one of the rare people who find true joy in the depths of spreadsheets, I took a course called ‘Advanced Statistics’. It was taught by Dr. Maurits de Klepper, Amsterdam University College’s Teacher of the Year – and deservedly so. Maurits is not just an amazingly committed educator; he’s also a fervent defender of scientific integrity. So while the class covered some of the more advanced statistical methods, we also teamed up to put our skills to use by replicating the analysis of a previously published paper of our choice.
There was a curious atmosphere about the class – who, after all, would take a non-compulsory applied statistics class, if not the greatest geeks? – such that none of the groups stopped at just replicating the analysis of their chosen target; all improved on it. And improvement was necessary, because it turned out that none of the six papers – from psychology, economics, and public health – lived up to the standards Maurits had taught us (see my paper with my friend Zuzanna Fimińska here). If a bunch of undergrads with a liking for statistics can so easily find fault with, in some cases, highly regarded researchers, doesn’t that point to a severe problem?
So this is what I had in the back of my mind when reading the latest issue of Perspectives on Psychological Science (no subscription required), which is entirely devoted to replication. The editors describe a dire situation of rising doubts in the discipline, fuelled by a series of highly publicised cases of fraud and by studies showing just how widespread improper practices are:
“These doubts emerged and grew as a series of unhappy events unfolded in 2011: the Diederik Stapel fraud case, the publication in a major social psychology journal of an article purporting to show evidence of extrasensory perception followed by widespread public mockery, reports by Wicherts and colleagues that psychologists are often unwilling or unable to share their published data for reanalysis, and the publication of an important article in Psychological Science showing how easily researchers can, in the absence of any real effects, nonetheless obtain statistically significant differences through various questionable research practices (QRPs) such as exploring multiple dependent variables or covariates and only reporting these when they yield significant results.” The list continues.
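How easily this particular QRP produces spurious significance is simple to check numerically. Below is a minimal simulation of my own (a sketch, not taken from the article the editors cite): both groups are drawn from the same distribution, so any “effect” is pure noise, yet measuring several dependent variables and counting a study as successful if *any* of them reaches p < .05 inflates the false-positive rate well above the nominal 5%. The function names and parameters are mine, and the t-test uses a normal approximation for simplicity.

```python
import math
import random

def t_test_p(a, b):
    """Two-sided two-sample t-test p-value, normal approximation (fine for n >= 20)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    z = abs(ma - mb) / se
    # two-sided p from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def false_positive_rate(n_dvs, n_per_group=20, trials=4000, alpha=0.05, seed=1):
    """Fraction of null studies that find at least one 'significant' DV."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # The null is true: both groups come from the same distribution,
        # and the DVs are independent for simplicity.
        significant = False
        for _ in range(n_dvs):
            a = [rng.gauss(0, 1) for _ in range(n_per_group)]
            b = [rng.gauss(0, 1) for _ in range(n_per_group)]
            if t_test_p(a, b) < alpha:
                significant = True
        if significant:
            hits += 1
    return hits / trials

print(false_positive_rate(1))  # close to the nominal 5%
print(false_positive_rate(3))  # roughly triple that, with no real effect anywhere
```

With independent DVs the inflation is just 1 − (1 − α)^k; in real studies the DVs are correlated, so the inflation is smaller but still substantial, which is the point the quoted article makes.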
What makes a cure even more elusive is that psychology has its very own problem with replication: it is becoming ever rarer. Even for graduate students, merely replicating existing findings is a thankless endeavour, scarcely rewarded and discouraged by what Christopher J. Ferguson and Moritz Heene call an “aversion to the null” in psychology. If it ain’t significant, it ain’t gonna be published. The problem goes so far that delegating replication work to undergraduates starts to look appealing. Yet when discussing my capstone project, even I was discouraged from merely replicating known findings.
It can be disheartening to see how far psychological research is from the ideals I am taught – especially since many researchers don’t exactly seem to want to change anything about it. One paper, by Heather M. Fuchs, Mirjam Jenny, and Susann Fiedler, investigated how researchers feel about stricter requirements for studies. It is entitled “Psychologists Are Open to Change, yet Wary of Rules”, and it shows that many researchers endorse stricter good practices – but not rules for publication. What change is that supposed to be, then? It gets even worse if you look at the statements themselves. For instance, not even half of the respondents (n = 1292, so this is not just a quick poll around the department) thought it should be good practice that “Authors must collect at least 20 observations per cell or else provide a compelling cost-of-data-collection justification.” This is one of those famous ‘you learn that in your first statistics class’ statements that you would hope to be deeply ingrained in the mind of anybody who made it as far as grad school, not to speak of tenure. Yes, it might not be feasible in some disciplines, especially neuropsychology, but denying that it is even good practice?
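For a sense of why 20 observations per cell is a floor rather than a luxury, here is a back-of-the-envelope power calculation – my own illustrative sketch, using a normal approximation instead of the exact t distribution, and not taken from the Fuchs et al. paper. With a medium effect of Cohen’s d = 0.5 and 20 observations per group, a two-sample test detects the effect only about a third of the time.

```python
import math
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate chance of detecting a true standardised effect d
    in a two-sided two-sample test, normal approximation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)             # e.g. ~1.96 for alpha = .05
    noncentrality = d * math.sqrt(n_per_group / 2)
    return nd.cdf(noncentrality - z_crit)          # ignores the tiny opposite tail

print(round(approx_power(0.5, 20), 2))  # roughly 0.35
print(round(approx_power(0.5, 64), 2))  # about 0.8, the conventional target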
Then again, there are also signs of positive change; not least the current issue of Perspectives on Psychological Science, and the ongoing debate that has motivated it. Elsewhere, the medical researcher John Ioannidis has made an iconoclastic career out of his claim that “most published research findings are false” And the Internet, too, opens up new opportunities. The Reproducibility Project is one: an attempt at large-scale, open collaboration to reproduce prominent findings, pooling together small efforts at what might become hundreds of institutes. #