25th International Conference on Computer Design, ICCD 2007
Abstract

In single-processor architectures, computationally intensive functions are typically accelerated using hardware accelerators, which exploit the concurrency in the function code to achieve a significant speedup over software. Increased design constraints from power density and signal delay have shifted processor architectures in general towards multi-core designs. The migration to multi-core designs introduces the possibility of sharing hardware accelerators between cores. In this paper, we propose the concept of a hardware library, a pool of accelerated functions that are accessible by multiple cores. We find that sharing provides significant reductions in the area, logic usage, and leakage power required for hardware acceleration. Contention for these units may exist in certain cases; however, the savings in chip area are more appealing for many applications, particularly in the embedded domain. We study the performance implications of our proposal using various multi-core arrangements, with actual implementations in FPGA fabrics. FPGAs are particularly appealing due to their cost effectiveness, and the attained area savings enable designers to easily add functionality without significant chip revision. Our results show that it is possible to save up to 37% of a chip's available logic and interconnect resources with a negligible impact (< 3%) on performance.
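To make the sharing model concrete, the sketch below shows one way a core might invoke a function from a shared hardware library: a software-visible lock arbitrates access among cores, so contention appears as time spent waiting for the lock before the accelerator is started. The register layout, base address (`ACCEL_BASE`), and spinlock arbitration scheme are illustrative assumptions, not the interface or arbitration mechanism described in the paper.

```c
#include <stdint.h>
#include <stdatomic.h>

/* Hypothetical memory-mapped register block for one shared accelerated
 * function; the layout and addresses are assumptions for illustration. */
typedef struct {
    volatile uint32_t ctrl;    /* bit 0: start */
    volatile uint32_t status;  /* bit 0: done  */
    volatile uint32_t arg;     /* input operand */
    volatile uint32_t result;  /* output value  */
} accel_regs_t;

#define ACCEL_BASE ((accel_regs_t *)0x40000000u)  /* assumed base address */

/* One lock per shared accelerator arbitrates access between cores. */
static atomic_flag accel_lock = ATOMIC_FLAG_INIT;

/* Invoke the shared accelerated function from any core.
 * Contention shows up as time spent spinning on the lock. */
uint32_t hw_lib_call(uint32_t input)
{
    while (atomic_flag_test_and_set(&accel_lock))
        ;                               /* another core holds the unit */

    ACCEL_BASE->arg  = input;
    ACCEL_BASE->ctrl = 1u;              /* start the accelerator */
    while ((ACCEL_BASE->status & 1u) == 0u)
        ;                               /* poll until computation is done */
    uint32_t out = ACCEL_BASE->result;

    atomic_flag_clear(&accel_lock);     /* release for the next core */
    return out;
}
```

Because each core calls the same wrapper, adding another core to the system requires no new accelerator instance, which is the source of the area and leakage savings; the cost is the occasional wait in the lock loop when two cores request the unit at once.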