Working Paper: NBER ID: w30726
Authors: Emily Silcock; Luca D'Amico-Wong; Jinglin Yang; Melissa Dell
Abstract: Identifying near duplicates within large, noisy text corpora has a myriad of applications that range from de-duplicating training datasets, reducing privacy risk, and evaluating test set leakage, to identifying reproduced news articles and literature within large corpora. Across these diverse applications, the overwhelming majority of work relies on N-grams. Limited efforts have been made to evaluate how well N-gram methods perform, in part because it is unclear how one could create an unbiased evaluation dataset for a massive corpus. This study uses the unique timeliness of historical news wires to create a 27,210-document dataset, with 122,876 positive duplicate pairs, for studying noise-robust de-duplication. The time-sensitivity of news makes comprehensive hand labelling feasible, despite the massive overall size of the corpus, as duplicates occur within a narrow date range. The study then develops and evaluates a range of de-duplication methods: hashing and N-gram overlap (which predominate in the literature), a contrastively trained bi-encoder, and a "re-rank" style approach combining a bi- and cross-encoder. The neural approaches significantly outperform hashing and N-gram overlap. We show that the bi-encoder scales well, de-duplicating a 10 million article corpus on a single GPU card in a matter of hours. We also apply our pre-trained model to the RealNews and patent portions of C4 (Colossal Clean Crawled Corpus), illustrating that a neural approach can identify many near duplicates missed by hashing, in the presence of various types of noise. The public release of our NEWS-COPY de-duplication dataset, de-duplicated RealNews and patent corpora, and the pre-trained models will facilitate further research and applications.
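As a point of reference for the N-gram overlap baselines that the abstract says predominate in the literature, the sketch below computes word n-gram Jaccard overlap between two articles and shows how OCR noise erodes shared n-grams. It is a minimal illustration, not the paper's configuration: the helper names, the n-gram size, and the example texts are assumptions made for exposition.

```python
def word_ngrams(text: str, n: int = 3) -> set:
    """Lowercase, split on whitespace, and return the set of word n-grams."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def ngram_overlap(doc_a: str, doc_b: str, n: int = 3) -> float:
    """Jaccard similarity between the n-gram sets of two documents."""
    a, b = word_ngrams(doc_a, n), word_ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


if __name__ == "__main__":
    wire = "The president said on Tuesday that the bill would pass this week"
    # The same wire article after simulated OCR noise and abridgement.
    reprint = "The presldent said on Tuesday that the blll would pass"
    # Each corrupted or dropped token destroys up to n shared n-grams,
    # which is why this kind of noise degrades n-gram overlap scores.
    print(f"overlap = {ngram_overlap(wire, reprint, n=3):.2f}")
```

In this toy example the two articles describe the same wire story, yet their trigram Jaccard overlap falls to roughly 0.2, illustrating why threshold-based N-gram methods struggle on noisy reproductions.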
Keywords: deduplication; noisy text corpora; neural methods; n-gram methods; machine learning
JEL Codes: C81
The table below lists the cause-effect edges extracted from the paper. In the original interactive graph, edges evidenced by causal inference methods were shown in orange and the rest in light blue.
| Cause | Effect |
|---|---|
| Noise (OCR errors and text abridgement) (Y50) | Performance degradation of traditional N-gram methods (C69) |
| Traditional N-gram methods (C45) | Inadequate for robust de-duplication (L15) |
| Neural methods (contrastively trained bi-encoder) (C45) | Superior performance in de-duplication (L15) |
| Neural methods (C45) | More adept at identifying duplicates overlooked by traditional methods (C52) |
| Choice of method (neural methods vs. traditional N-gram methods) (C45) | Efficiency of de-duplication (H21) |
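For the neural side of the table, a minimal sketch of the "retrieve with a bi-encoder, re-rank with a cross-encoder" idea described in the abstract might look as follows. The checkpoints (`all-MiniLM-L6-v2`, `cross-encoder/stsb-roberta-base`), the neighbour count, and the 0.9 threshold are generic placeholders rather than the paper's released NEWS-COPY models or settings; the sketch assumes the `sentence-transformers` and `faiss` libraries are installed.

```python
import faiss
from sentence_transformers import SentenceTransformer, CrossEncoder

articles = [
    "WASHINGTON (AP) - The Senate voted Tuesday to advance the spending bill.",
    "WASHINGTON, Oct. 4 (AP) - The Senate voted Tuesday to advance the spendlng bill.",
    "LONDON (Reuters) - Oil prices fell sharply on Monday amid demand concerns.",
]

# 1) Bi-encoder: embed every article once; cost is linear in corpus size.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
emb = bi_encoder.encode(articles, convert_to_numpy=True, normalize_embeddings=True)

# 2) Nearest-neighbour search over the embeddings with FAISS
#    (inner product equals cosine similarity on normalized vectors).
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)
k = 2  # neighbours to retrieve per article, excluding the article itself
_dists, neighbours = index.search(emb, k + 1)

# 3) Cross-encoder: re-score only the retrieved candidate pairs.
candidate_pairs = sorted({
    tuple(sorted((i, int(j))))
    for i in range(len(articles))
    for j in neighbours[i]
    if int(j) != i
})
cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")  # placeholder checkpoint
pair_scores = cross_encoder.predict([(articles[i], articles[j]) for i, j in candidate_pairs])

THRESHOLD = 0.9  # illustrative cut-off for flagging a near-duplicate pair
duplicates = [pair for pair, score in zip(candidate_pairs, pair_scores) if score >= THRESHOLD]
print(duplicates)
```

The scalability point in the abstract follows from this structure: only the embedding and nearest-neighbour steps touch the full corpus, while the more expensive cross-encoder scores just the retrieved candidate pairs. In practice, flagged pairs would then typically be grouped into duplicate clusters, for example via connected components over the pair graph.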