Abstract
Digitization and the large-scale preservation of digitized content have engendered new ways of accessing and analyzing collections, concurrent with other data mining and extraction efforts. Distant reading refers to the analysis of entire collections rather than the close reading of individual items, such as a single physical book or electronic document. The steps performed in distant reading are often common across various types of data collections, such as books, journals, or web archives — sources that are highly valuable to scholars yet have often been neglected as Big Data. We have extended our tool ArchiveSpark, originally designed to efficiently process Web archives, to support arbitrary data collections, served from either local or remote data sources, through the use of metadata proxies. The ability to share and reuse researcher workflows across disciplines with very different datasets makes ArchiveSpark a universal distant reading framework. In this paper, we describe ArchiveSpark's design extensions along with an example of how it can be leveraged to analyze symptoms of polio mentioned in journals from the Medical Heritage Library. Our experiments demonstrate how users can reuse large portions of their job pipeline to accomplish a specific task across diverse data types and sources. Migrating an ArchiveSpark job to process a different dataset introduces an additional average code complexity of only 4.8%. Its expressiveness, scalability, extensibility, reusability, and efficiency have the potential to advance novel and rich methods of scholarly inquiry.