John A List, Zacharias Maniadis, Fabio Tufano

In his comment, Mitesh Kataria (2014) makes three main points about a specific part of our paper (Maniadis, Tufano, and List 2014), namely Tables 2 and 3. In our paper, we employ these tables to illustrate the idea that novel, surprising findings may yield highly inconclusive post-study probabilities that a tested phenomenon is true. The main arguments in Kataria (2014) are the following: "First, if P(H0) is unknown, as is often the case with economic applications, the post-study probability can lead to even worse inference than the Classical significance test, depending on the quality of the prior. Second, the simulation in Maniadis et al. (2014) ignores previous assessments of P(H0) and instead utilizes a selective empirical setup that favors the use of post-study probabilities. [Third,] contrary to what Maniadis et al. (2014) argue, their results do not allow for drawing general recommendations about which approach is the most appropriate" (Kataria 2014, abstract). We believe that Kataria may have misunderstood our work. Moreover, some of his claims do not appear to be supported by relevant empirical evidence.
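To make the post-study probability logic concrete, the sketch below computes the probability that a tested phenomenon is true given a single statistically significant result, using the standard Bayesian relation among the prior, the significance level, and statistical power. The priors, α, and power values here are illustrative assumptions for exposition, not the exact figures from Tables 2 and 3 of Maniadis, Tufano, and List (2014).

```python
def post_study_probability(prior, alpha=0.05, power=0.8):
    """Probability the tested phenomenon is true, given one
    statistically significant finding. Standard Bayesian formula;
    the alpha and power defaults are illustrative assumptions."""
    true_positive = power * prior          # P(significant result & H1 true)
    false_positive = alpha * (1 - prior)   # P(significant result & H1 false)
    return true_positive / (true_positive + false_positive)

# A novel, surprising finding plausibly has a low prior that H1 is true:
for prior in (0.01, 0.1, 0.5):
    print(f"prior={prior:.2f} -> PSP={post_study_probability(prior):.3f}")
```

Under these illustrative values, a finding with a one-in-a-hundred prior yields a post-study probability of only about 0.14 despite its statistical significance, which is the kind of inconclusiveness the tables are meant to convey.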
John A List, Zacharias Maniadis, Fabio Tufano

The sciences are in an era of an alleged "credibility crisis". In this study, we discuss the reproducibility of empirical results, focusing on economics research. By combining theory and empirical evidence, we discuss the importance of replication studies and whether they improve our confidence in novel findings. The theory sheds light on the value of replications, even when the replications themselves are subject to bias. We then present a pilot meta-study of replication in experimental economics, a subfield serving as a positive benchmark for investigating the credibility of economics. Our meta-study highlights certain difficulties that arise when applying meta-research methods (Ioannidis et al., 2015) and systematizing the economics literature. A short sketch of how replications update confidence in a finding follows below.
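As a rough illustration of why replications can restore confidence in novel findings, the sketch below applies Bayes' rule once per successful replication. It assumes independent replications with identical α and power; these are simplifying assumptions for exposition, not the paper's exact model, which also allows for biased replications.

```python
def psp_after_replications(prior, n_significant, alpha=0.05, power=0.8):
    """Post-study probability after n_significant independent
    significant results, applying Bayes' rule once per result.
    Independence and the alpha/power values are illustrative assumptions."""
    p = prior
    for _ in range(n_significant):
        p = power * p / (power * p + alpha * (1 - p))
    return p

# Even a surprising finding (prior = 0.01) becomes credible quickly:
for n in range(1, 5):
    print(f"{n} significant result(s) -> PSP={psp_after_replications(0.01, n):.3f}")
```

Under these assumptions, the post-study probability rises from roughly 0.14 after the original study to above 0.95 after two or three successful replications, which conveys the sense in which replications matter even for initially implausible results.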