Two years ago, the team of Brian Nosek (University of Virginia) published its now world-famous replication study. The paper, which attempted to replicate 100 (predominantly) psychological experiments, caused an international uproar: the vast majority of previously published results could not be reproduced. ScienceGuide spoke to him about the future of open science during the World Conference on Research Integrity in Amsterdam.
When your replication study gets reported on by journalists, what bothers you the most about the coverage?
To be honest, it is the same thing that happens in science itself: overinterpretation. There is a craving for definitiveness, rather than facing the challenge of actually delving into the harder message of how difficult it is to do science. If we want to involve the broader community in understanding science, then the real thing people need to understand is the process. I think it is a misconception to assume that the public isn’t interested in this part, and we should include it in science communication.
Science is a gradual process that attempts to reduce uncertainty, and it is tedious and very difficult. I am not saying we should communicate the method sections of papers directly, but in science communication we should talk about how we arrive at knowledge and what it tells us. That was the real challenge with the reproducibility article, to be honest: sending out the right lesson to be drawn from these 100 experiments. It wasn’t ‘the studies are irreproducible’; it was ‘the studies could not be reproduced this time’.
Let’s delve a bit deeper into that topic then: are we in a replication crisis at the moment?
Honestly, I don’t know. It really depends on what that word means. Is there a crisis of confidence? I would say that people are much more worried about replication and reproducibility than they were before, and in a productive way. Are things worse than they used to be? How would we know?
The main drivers behind the replication crisis, that is, the drivers behind the lack of reproducibility, haven’t changed all that much. Ego is a huge driver for scientific research and always has been, reasoning biases are the same as they were in the past, and the motivation to find something new is pretty much unchanged. What is different is the degree of competitiveness.
The number of people applying for the same job, or competing for the same grant money, has increased, which could exacerbate the problem. But at the same time it is much easier to reproduce things: there are many more journals, and it is easier to generate data than it used to be. It could work both ways. To be honest, it doesn’t really matter whether things are better or worse than they used to be. What we know is that we can and should do better.
Is science, as a social structure, strong enough to deal with the inside and outside pressures you describe?
It is, to the extent that it can live up to the claim of being self-correcting. Science is not really that good at eliminating bias upfront, but it does have strong critical checks and balances that test whether there is evidence for a certain claim.
Where we are not self-correcting in the way we thought we were is in the way we communicate with each other. We live in a world where we think, or pretend, that the report I write is sufficient to critique my claim, but really it isn’t. It is my presentation of what I think is important in what I did and what I found. David Donoho of Stanford University calls scientific publications ‘an advertisement for the scholarship’.
That isn’t sufficient for the mechanisms of self-correction, because it is my version of what I think is important. The process behind my idea, my methodology, my research and my analysis rarely makes it into a paper. Even the data itself, or the scripts used to analyse the data, are often not included.
The aspiration of full transparency might be an honourable cause, but can we truly contain an individual’s dislike of being proven wrong?
I do think we tend to forget that researchers are ‘just human’, and they are bound by the same limitations as other people. Even more than is demanded from, say, a judge, society demands ‘objectivity’ from researchers, but people simply aren’t objective. Researchers have egos and crave recognition just like anybody else. The difference should be in the process, in transparency.
There’s this unspoken rule in science that bad theories tend to stick around until the professor who came up with them passes away. I think there is some truth to that statement. I recently read a study showing that as researchers pass away, the citation scores of their publications tend to drop dramatically.
And it is hard to correct false claims. Especially when you are in a close relationship, even a power relationship, with someone, there are no good mechanisms that allow, for example, a grad student to safely confront their advisor and say: ‘what you found, what you are famous for and have built your career upon, is not true’. That’s just a human reality we have to deal with.
As a system, science is decentralised enough to allow for this type of correction, but it isn’t easy. And still, in the current process, there are many ways in which the original author can stifle perceived opponents. There are, however, new mechanisms that are reducing these possibilities. I’m thinking of preprint publications, for example, where I can get public vetting even if my article doesn’t make it into a ‘top journal’.
You are one of the founders of the Center for Open Science, which promotes openness and integrity in science. What developments can we expect in the near future?
I don’t think a single revolution will do the trick. The way forward is incremental, and we’ll eventually get there. Right now I am focused on making the techniques required for more transparency publicly available.
The first step is making the preprint method available to everyone, which will dramatically increase the speed of sharing information. After that, I think ‘versioning’ is the next logical step. Given the nature of science, it doesn’t make sense to regard scientific publications as definitive, and versioning allows us to track the development of ideas within a field. We have now tackled both of these, and it’s time to make them available to everyone.
The next step is to add review service layers and other tools that facilitate the peer review process. That way, scientific communities can start their own publication channels. Right now the only thing the journals still have to offer is valuation. Once we reclaim this for the community and create democratic evaluation, it is out of the hands of the publishers, and the field can decide for itself what it deems important. In that way the publisher is no longer the gatekeeper of valuation.
To conclude: quite a lot of researchers aspire to be published in the ‘top journals’. How do you convince the sceptics to switch gears?
We won’t. The last thing I want to do is waste my time trying to convince the sceptics. We want to work with the idealists, the early adopters, and make sure the rest know what they are missing out on. For example, experiencing the preprint process will show people what it is really like and take away potential objections, such as the fear of being ‘scooped’. And once a significantly large part of the community starts working with these more transparent practices, I’m convinced the rest will want to join.