Methods for Evaluating Text Extraction Toolkits: An Exploratory Investigation

By Timothy Allison, Paul Herceg

Text extraction tools are vital for obtaining the textual content of computer files and for using that electronic text in a wide variety of applications, including search and natural language processing. However, when extraction tools fail, they turn once-reliable electronic text into garbled text, or no text at all. Techniques and tools for validating the accuracy of these text extraction tools are conspicuously absent from both academia and industry. This paper contributes to closing that gap. We discuss an exploratory investigation into a method and a set of tools for evaluating a text extraction toolkit. Although this effort focuses on the popular open source Apache Tika toolkit and the govdocs1 corpus, the method applies generally to other text extraction toolkits and corpora.
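The abstract does not describe the paper's actual evaluation metrics, but the core idea of detecting garbled or missing extraction output can be illustrated with a minimal sketch. The function below (a hypothetical example, not the paper's method) compares an extractor's output against reference text and reports the fraction of reference tokens the extractor recovered; a score near 1.0 suggests clean extraction, while garbling or dropped content drives the score toward 0.

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word tokens; a deliberately simple tokenizer."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def token_overlap(reference: str, extracted: str) -> float:
    """Fraction of reference tokens (by count) that also appear in the
    extractor's output -- a rough recall-style signal of missing or
    garbled text. Returns a value in [0.0, 1.0]."""
    ref, ext = tokenize(reference), tokenize(extracted)
    if not ref:
        return 1.0
    common = sum((ref & ext).values())  # multiset intersection
    return common / sum(ref.values())

# A clean extraction scores 1.0; garbled output scores low.
print(token_overlap("the quick brown fox", "the quick brown fox"))  # 1.0
print(token_overlap("the quick brown fox", "th3 qu!ck br0wn f0x"))  # 0.0
```

In practice, an evaluation over a corpus like govdocs1 would aggregate a metric of this kind (or more robust ones, such as edit-distance or language-model-based measures) across thousands of files and file formats to compare extractor versions.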