Neuron Grouping and Mapping Methods for 2D-Mesh NoC-based DNN Accelerators


Nacar F., Cakin A., Dilek S., Tosun S., Chakrabarty K.

JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, art. no. 104949, pp. 1-42, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Article Number: 104949
  • Publication Date: 2024
  • DOI: 10.1016/j.jpdc.2024.104949
  • Journal Name: JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, zbMATH
  • Pages: pp. 1-42
  • Hacettepe University Affiliated: Yes

Abstract

Deep Neural Networks (DNNs) have gained widespread adoption in various fields; however, their computational cost is often prohibitively high due to the large number of layers and neurons communicating with each other. Furthermore, DNNs can consume a significant amount of energy due to the large volume of data movement and computation they require. To address these challenges, there is a need for new architectures to accelerate DNNs. In this paper, we propose novel neuron grouping and mapping methods for 2D-mesh Network-on-Chip (NoC)-based DNN accelerators, considering both fully connected and partially connected DNN models. We present Integer Linear Programming (ILP) and simulated annealing (SA)-based neuron grouping solutions with the objective of minimizing the total volume of data communication among the neuron groups. After determining a suitable graph representation of the DNN, we also apply ILP and SA methods to map the neurons onto a 2D-mesh NoC fabric with the objective of minimizing the total communication cost of the system. We conducted several experiments on various benchmarks and DNN models with different pruning ratios and achieved a 40-50% improvement in communication cost on average.
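The abstract does not spell out implementation details, but the mapping step it describes, placing communicating neuron groups onto tiles of a 2D-mesh NoC so that total communication cost is minimized, can be illustrated with a small simulated-annealing sketch. Everything below is an illustrative assumption rather than the paper's actual formulation: the cost model (data volume between two groups multiplied by the Manhattan hop distance of their tiles), the swap-based neighbor move, the toy traffic matrix, and the annealing schedule.

```python
import math
import random

def manhattan(p, q):
    """Hop distance between two mesh tiles given as (row, col)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def total_cost(mapping, traffic, tiles):
    """Sum over group pairs of (communication volume x hop distance)."""
    return sum(vol * manhattan(tiles[mapping[g1]], tiles[mapping[g2]])
               for (g1, g2), vol in traffic.items())

def sa_map(traffic, rows, cols, iters=20000, t0=1000.0, alpha=0.999):
    """Simulated-annealing placement of neuron groups onto a rows x cols mesh."""
    tiles = [(r, c) for r in range(rows) for c in range(cols)]
    groups = sorted({g for pair in traffic for g in pair})
    assert len(groups) <= len(tiles)
    # Start from a random one-to-one assignment of groups to tiles.
    mapping = dict(zip(groups, random.sample(range(len(tiles)), len(groups))))
    best = dict(mapping)
    cost = best_cost = total_cost(mapping, traffic, tiles)
    t = t0
    for _ in range(iters):
        # Neighbor move: swap the tiles assigned to two randomly chosen groups.
        a, b = random.sample(groups, 2)
        mapping[a], mapping[b] = mapping[b], mapping[a]
        new_cost = total_cost(mapping, traffic, tiles)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = dict(mapping), cost
        else:
            mapping[a], mapping[b] = mapping[b], mapping[a]  # undo the swap
        t *= alpha  # geometric cooling
    return best, best_cost

if __name__ == "__main__":
    # Toy traffic matrix: (group_i, group_j) -> data volume exchanged.
    traffic = {(0, 1): 50, (1, 2): 30, (2, 3): 20, (0, 3): 10, (1, 3): 5}
    mapping, cost = sa_map(traffic, rows=2, cols=2)
    print("tile index per group:", mapping, "cost:", cost)
```

In this sketch, the grouping step that precedes mapping could in principle reuse the same annealing loop with a different objective (data volume crossing group boundaries instead of hop-weighted cost); how the paper combines the ILP and SA formulations for the two steps is described in the full text.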