2018 IEEE International Symposium on Workload Characterization (IISWC)

Abstract

Automata processing is an important kernel for many application domains, yet it is challenging to accelerate on general-purpose, von Neumann computers. New research into accelerators for automata processing motivated the creation of ANMLZoo, a standardized automata benchmark suite that reflects many modern use cases for automata processing. While researchers have adopted ANMLZoo as the de facto benchmark suite for measuring improvements in automata processing, several drawbacks have emerged after years of use. In this work, we first examine opportunities for improvement over ANMLZoo's benchmarking methodology. We then propose a new benchmarking methodology that aims to generate benchmarks better suited to modern automata processing research, and we present AutomataZoo, a new automata processing benchmark suite built with this methodology. AutomataZoo is composed of 24 automata processing benchmarks drawn from well-known domains as well as from newly published and novel use cases for automata processing. AutomataZoo benchmarks use open-source tools to generate full automata applications and provide standard, large input stimuli. Where arbitrary design choices are involved, we provide multiple benchmarks that vary important design parameters in order to help architects evaluate application-level trade-offs affecting performance/power and/or chip capacity. This paper demonstrates the advantages of these new properties by performing previously difficult-to-perform experiments and comparisons.
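To make the workload concrete, the sketch below (not taken from the paper or from AutomataZoo) shows a minimal symbol-at-a-time NFA simulation of the kind that automata processing accelerators target; the per-symbol set expansion it performs is what makes this kernel costly on von Neumann machines. The state names, transition table, and sample input are hypothetical and chosen only for illustration.

```python
# Minimal sketch (not from the paper) of symbol-at-a-time NFA simulation.
# State names, transitions, and the sample input are hypothetical.

def simulate_nfa(transitions, start_states, accept_states, input_bytes):
    """Advance all active NFA states one input symbol at a time.

    transitions: dict mapping (state, symbol) -> set of next states
    Returns a list of (input position, accepting state) report events.
    """
    active = set(start_states)
    matches = []
    for pos, symbol in enumerate(input_bytes):
        next_active = set()
        for state in active:
            next_active |= transitions.get((state, symbol), set())
        # Re-enable the start states every cycle ("all-start" matching),
        # so patterns may begin at any offset in the input stream.
        active = next_active | set(start_states)
        matches.extend((pos, s) for s in active if s in accept_states)
    return matches

# Hypothetical two-transition automaton that reports the pattern "ab".
transitions = {("q0", "a"): {"q1"}, ("q1", "b"): {"q2"}}
print(simulate_nfa(transitions, {"q0"}, {"q2"}, "xabyab"))
# -> [(2, 'q2'), (5, 'q2')]
```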
