DFG Statement on the Replicability of Research Results


The German Research Foundation (DFG) published a statement today on the state of replicability of research results, in which they take what is, to me, a surprisingly weak position on replicability, saying for example that replicability is just one of several ways of “testing empirical knowledge claims” (among the others being modelling and simulation, which frankly doesn’t make sense to me). They also mention that “there are forms of research that have reached such a degree of complexity in their experimental methodology that replicative repetition can be difficult” and argue that there are systemic reasons for quality issues in science, such as the pressure to publish.

While I agree with some of the arguments in there, it is statements like “Scientific results can be replicable, but they need not be.” that disturb me. I would be interested in your opinions on this.


For the record, the citation for the statement is:

DFG. (April 2017). Replicability of research results: a statement by the German Research Foundation. Bonn, Germany: Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). Five pages.



I agree the statement is not as strong as one might hope. It certainly does not address many of our openmod interests in relation to open data and open code. It does, however, say:

The DFG will continue to pay particular attention to questions of research data management
and current challenges that emerge from digitalisation

That is quite a useful statement, but I think they should have added the phrase “… current challenges and opportunities …”

Do you think we should write to the DFG and put our position?



There are indeed several useful statements in there, but the overall conclusion isn’t the one I was expecting. To be fair, I haven’t been following the mentioned discussion in detail, so I’m not aware which arguments have been brought up in the past; for that reason I wouldn’t feel in the right position to write a response at the moment.

Part of my confusion stems from imprecise terminology: there is a lack of consensus on what reproducibility and replicability mean [Blog post of Konrad Hinsen] [The Practice of Reproducible Research]. So it would have been better to define in the statement what exactly they mean when they talk about replicability and to distinguish it from reproducibility. I assume that by replicability they mean other researchers using different data (and the same or a different experimental setup) arriving at the same results, which indeed can be tricky in some cases. But that doesn’t affect the “reproducibility” of these studies – meaning same data, same setup, repeated experiment, yielding close enough results – which must still be given.
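To make that distinction concrete, reproducibility in this narrow sense can be treated as a mechanical check: rerun the same analysis on the same data with the same setup and verify the results agree within a tolerance. A minimal Python sketch, assuming a hypothetical seeded analysis function (`run_analysis` and the tolerance are illustrative, not from the DFG statement):

```python
import random

def run_analysis(data, seed=42):
    # Hypothetical analysis step: a seeded bootstrap mean of the input data.
    # Fixing the seed pins down the "same setup" part of the definition.
    rng = random.Random(seed)
    samples = [rng.choice(data) for _ in range(1000)]
    return sum(samples) / len(samples)

data = [1.2, 3.4, 2.2, 5.1, 4.8, 2.9]

# Reproducibility check: same data, same setup, repeated run,
# results must agree within a small tolerance.
first = run_analysis(data)
second = run_analysis(data)
assert abs(first - second) < 1e-9
```

Replicability, by contrast, would mean a different team collecting different `data` (and possibly a different `run_analysis`) and still reaching the same conclusion, which no single script can guarantee.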



Similar care is required when advocating for open data and open models. In terms of “openness”, some authors distinguish between transparency and comprehensibility:

[we] consider open source approaches to be an extreme case of transparency that does not automatically facilitate the comprehensibility of studies for policy advice (Cao et al. 2016, p. 4)

In some sense, reproducibility and replicability are scientific doctrines, whereas transparency and comprehensibility are open government ideals.

The openmod works at the intersection between science and policy, so I guess all these concepts apply to some degree. It really depends on how you wish to frame your argument.

Cao, Karl-Kiên, Felix Cebulla, Jonatan J Gómez Vilchez, Babak Mousavi, and Sigrid Prehofer. (28 September 2016). “Raising awareness in model-based energy scenario studies — a transparency checklist”. Energy, Sustainability and Society. 6: 28–47. ISSN 2192-0567. doi:10.1186/s13705-016-0090-z. Open access.