A Configurable and Efficient Memory Hierarchy for Neural Network Hardware Accelerator

Conference: MBMV 2024 - 27th Workshop
02/14/2024 - 02/15/2024 in Kaiserslautern

Proceedings: ITG-Fb. 314: MBMV 2024

Pages: 10
Language: English
Type: PDF

Authors:
Bause, Oliver; Palomero Bernardo, Paul; Bringmann, Oliver (Chair for Embedded Systems, University of Tübingen, Germany)

Abstract:
As machine learning applications continue to evolve, the demand for efficient hardware accelerators specifically tailored for deep neural networks (DNNs) becomes increasingly vital. In this paper, we propose a configurable memory hierarchy framework tailored to the per-layer adaptive memory access patterns of DNNs. The hierarchy requests data on demand from the off-chip memory and provides it to the accelerator’s compute units. The objective is to strike an optimized balance between minimizing the required memory capacity and maintaining high accelerator performance. The framework is characterized by its configurability, allowing the creation of a tailored memory hierarchy with up to five levels. Furthermore, the framework incorporates an optional shift register as the final level to increase the flexibility of the memory management process. A comprehensive loop-nest analysis of DNN layers shows that the framework can efficiently execute the access patterns of most loop unrolls. Synthesis results and a case study of the DNN accelerator UltraTrail indicate a possible reduction in chip area of up to 62.2%, as smaller memory modules can be used, while the performance loss is limited to 2.4%.
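To make the configuration space described in the abstract more concrete, the sketch below models a hierarchy of up to five memory levels with an optional shift register as the final stage. It is a minimal illustrative assumption in Python; the class and field names (`MemoryLevel`, `MemoryHierarchyConfig`, `use_shift_register`, etc.) are hypothetical and do not correspond to the paper's actual framework interface.

```python
# Illustrative sketch only: models the configuration space outlined in the
# abstract (up to five memory levels plus an optional shift register as the
# final level). Names and parameters are assumptions, not the authors' API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MemoryLevel:
    name: str          # e.g. "L1" .. "L5"
    depth_words: int   # capacity of this level in words
    word_width: int    # data width in bits


@dataclass
class MemoryHierarchyConfig:
    levels: List[MemoryLevel] = field(default_factory=list)
    use_shift_register: bool = False   # optional final level
    shift_register_length: int = 0

    def add_level(self, name: str, depth_words: int, word_width: int) -> None:
        # The framework supports a tailored hierarchy with at most five levels.
        if len(self.levels) >= 5:
            raise ValueError("at most five memory levels are supported")
        self.levels.append(MemoryLevel(name, depth_words, word_width))


# Example: a three-level hierarchy with a shift register as the final stage.
cfg = MemoryHierarchyConfig(use_shift_register=True, shift_register_length=8)
cfg.add_level("L1", depth_words=256, word_width=8)
cfg.add_level("L2", depth_words=1024, word_width=8)
cfg.add_level("L3", depth_words=4096, word_width=8)
print(cfg)
```

The intent of such a configuration is the trade-off named in the abstract: smaller per-level capacities reduce chip area, while enough buffering must remain to serve the compute units with on-demand data and keep the performance loss small.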