Working Paper: NBER ID: w31548
Authors: Abel Brodeur; Scott E. Carrell; David N. Figlio; Lester R. Lusher
Abstract: We use unique data from journal submissions to identify and unpack publication bias and p-hacking. We find that initial submissions display significant bunching, suggesting that the distribution of test statistics among published papers cannot be fully attributed to publication bias in peer review. Desk-rejected manuscripts display greater heaping than those sent for review, i.e., marginally significant results are more likely to be desk rejected. Reviewer recommendations, in contrast, are positively associated with statistical significance. Overall, the peer review process has little effect on the distribution of test statistics. Lastly, we track rejected papers and present evidence that publication bias is perhaps not as prevalent as feared.
Keywords: publication bias; p-hacking; peer review; test statistics
JEL Codes: A0
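The bunching and heaping findings summarized in the abstract rest on comparing the mass of test statistics just below versus just above conventional significance cutoffs (for example, |z| ≈ 1.96 at the 5% level). The sketch below illustrates that kind of comparison on simulated z-statistics; the window width, the injected excess mass, and the binomial test are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

# Simulated |z|-statistics standing in for a pool of reported test statistics;
# the extra draws just above 1.96 mimic bunching at the 5% threshold.
rng = np.random.default_rng(0)
z = np.abs(rng.normal(loc=0.8, scale=1.0, size=5000))
z = np.concatenate([z, rng.uniform(1.96, 2.06, size=150)])

threshold = 1.96  # two-sided 5% cutoff
window = 0.10     # illustrative bandwidth around the cutoff

below = int(np.sum((z >= threshold - window) & (z < threshold)))
above = int(np.sum((z >= threshold) & (z < threshold + window)))

# Under a smooth distribution, statistics this close to the cutoff should fall
# on either side in roughly equal numbers; a one-sided binomial test flags
# excess mass just above the threshold.
result = stats.binomtest(above, n=above + below, p=0.5, alternative="greater")
print(f"just below: {below}, just above: {above}, p-value: {result.pvalue:.4f}")
```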
In the accompanying causal-graph visualization, edges evidenced by causal inference methods are shown in orange and the rest in light blue; the cause-effect pairs are listed below.
Cause | Effect |
---|---|
initial submissions display significant bunching around statistical significance thresholds (C46) | distribution of test statistics among published papers (C46) |
marginally significant results (C12) | greater likelihood of desk rejection rather than being sent for review (C92) |
more favorable reviewer recommendations for statistically significant results (Y30) | excess mass in the distribution of test statistics around significance thresholds (C46) |
peer review process has little effect on the distribution of test statistics (C46) | persistence of p-hacking in the published literature (C90) |
rejected papers later published elsewhere show less bunching at the 10% threshold but greater bunching at the 5% threshold (C46) | assessment of the prevalence of publication bias (C46) |
perceived publication biases (D91) | 30% of surveyed authors refrained from submitting papers after finding null results (C90) |
researcher actions that skew the distribution of test statistics, such as p-hacking (C46) | statistical bunching among published manuscripts (C46) |
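Several edges above contrast the degree of heaping across groups, for instance desk-rejected manuscripts versus those sent to reviewers. A minimal sketch of such a between-group comparison is given below, using hypothetical pools of test statistics and a two-proportion z-test; the group labels, the bandwidth around |z| = 1.96, and the test itself are assumptions for illustration rather than the paper's estimator.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def heaped_count(z_stats, threshold=1.96, window=0.10):
    """Count |z|-statistics landing in a narrow band at or just above the cutoff."""
    z_stats = np.abs(np.asarray(z_stats))
    in_band = (z_stats >= threshold) & (z_stats < threshold + window)
    return int(in_band.sum()), z_stats.size

# Hypothetical pools of test statistics for the two groups.
rng = np.random.default_rng(1)
desk_rejected = rng.normal(0.9, 1.0, size=2000)
sent_to_review = rng.normal(0.9, 1.0, size=2000)

counts, totals = zip(*(heaped_count(g) for g in (desk_rejected, sent_to_review)))

# Two-proportion z-test: is the heaped share larger among desk rejections?
stat, pval = proportions_ztest(list(counts), list(totals), alternative="larger")
print(f"heaped shares: {counts[0]/totals[0]:.3f} vs {counts[1]/totals[1]:.3f}, p = {pval:.3f}")
```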