
Title:
Minimizing Global Buffer Access in a Deep Learning Accelerator Using a Local Register File with a Rearranged Computational Sequence.
Authors:
Lee M; Department of Electronic Engineering, Hanyang University, Seoul 04763, Korea.
Zhang Z; Department of Electronic Engineering, Hanyang University, Seoul 04763, Korea.
Choi S; Department of Electronic Engineering, Hanyang University, Seoul 04763, Korea.
Choi J; Department of Electronic Engineering, Hanyang University, Seoul 04763, Korea.
Source:
Sensors (Basel, Switzerland) [Sensors (Basel)] 2022 Apr 18; Vol. 22 (8). Date of Electronic Publication: 2022 Apr 18.
Publication Type:
Journal Article
Language:
English
Imprint Name(s):
Original Publication: Basel, Switzerland : MDPI, c2000-
MeSH Terms:
Deep Learning*
Grant Information:
2020-0-01297 Institute of Information & Communications Technology Planning & Evaluation(IITP) grant funded by the Korea government (MSIT)
Contributed Indexing:
Keywords: deep learning accelerator; field-programmable gate array (FPGA); local register file; rearrangement of computational sequence
Entry Date(s):
Date Created: 20220423 Date Completed: 20220426 Latest Revision: 20220429
Update Code:
20240105
PubMed Central ID:
PMC9032599
DOI:
10.3390/s22083095
PMID:
35459079
Academic Journal
We propose a method for minimizing global buffer access within a deep learning accelerator for convolution operations by maximizing data reuse through a local register file, thereby substituting local register file accesses for power-hungry global buffer accesses. To fully exploit the merits of data reuse, this study proposes a rearrangement of the computational sequence in a deep learning accelerator. Once input data are read from the global buffer, repeated reads of the same data are served only by the local register file, saving significant power. Furthermore, unlike prior works that equip each computation unit with its own local register file, the proposed method shares a local register file along each column of the 2D computation array, saving both resources and control overhead. The proposed accelerator is implemented on an off-the-shelf field-programmable gate array to verify its functionality and resource utilization. Then, the performance improvement of the proposed method is demonstrated relative to popular deep learning accelerators. Our evaluation indicates that the proposed deep learning accelerator reduces the number of global buffer accesses by nearly 86.8%, consequently saving up to 72.3% of the power consumption for input data memory access with a minor increase in resource usage compared to a conventional deep learning accelerator.
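The data-reuse idea in the abstract can be pictured with a small loop-reordering sketch. The C program below is only an illustration under assumed sizes (N, K) and a hypothetical gb_read() access counter, not the authors' accelerator design: a naive output-ordered convolution fetches every input element from the global buffer once per kernel tap, while the rearranged sequence fetches each element once, holds it in a local register, and reuses it for all partial sums that need it.

/* Illustrative sketch only (not the paper's RTL): rearranging the
 * convolution loop turns repeated global-buffer reads into reuse of
 * a value held in a local register. N, K, and gb_read() are
 * hypothetical names chosen for this example. */
#include <stdio.h>

#define N 16            /* input length (example size) */
#define K 3             /* kernel taps (example size)  */

static int gb_reads;    /* counts simulated global-buffer accesses */

static float gb_read(const float *buf, int idx) {
    gb_reads++;         /* each call models one global-buffer access */
    return buf[idx];
}

int main(void) {
    float in[N], w[K], out_naive[N - K + 1] = {0}, out_reuse[N - K + 1] = {0};
    for (int i = 0; i < N; i++) in[i] = (float)i;
    for (int k = 0; k < K; k++) w[k] = 1.0f;

    /* Naive order, no reuse: each input element is fetched from the
     * global buffer once per kernel tap that touches it. */
    gb_reads = 0;
    for (int o = 0; o < N - K + 1; o++)
        for (int k = 0; k < K; k++)
            out_naive[o] += gb_read(in, o + k) * w[k];
    printf("naive:      %d global-buffer reads\n", gb_reads);

    /* Rearranged sequence: fetch each input element once, hold it in a
     * local register, and drive every partial sum that needs it. */
    gb_reads = 0;
    for (int i = 0; i < N; i++) {
        float reg = gb_read(in, i);          /* single global-buffer read */
        for (int k = 0; k < K; k++) {
            int o = i - k;                   /* outputs that consume in[i] */
            if (o >= 0 && o < N - K + 1)
                out_reuse[o] += reg * w[k];  /* reuse from the register */
        }
    }
    printf("rearranged: %d global-buffer reads\n", gb_reads);
    return 0;
}

For N = 16 and K = 3 the naive order performs 42 simulated global-buffer reads, while the rearranged order performs only 16; in the accelerator described above, the register holding the reused value is additionally shared along a column of the 2D computation array.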