IMPROVING THE EFFICIENCY OF DEEP LEARNING HARDWARE THROUGH ARCHITECTURAL OPTIMIZATION
Abstract
Deep learning hardware efficiency is becoming increasingly important as deep learning sees wider adoption; efficient hardware is essential to make the best use of these powerful algorithms. Architectural optimization is one way to improve the performance and efficiency of deep learning hardware. It involves designing hardware specifically for deep learning workloads rather than for general-purpose computing. This can include specialized memory and compute architectures, as well as new form factors that allow for more efficient power consumption. Additionally, techniques such as bypassing cached memory or avoiding random memory accesses can be employed to increase hardware efficiency. By applying these and other architectural optimizations to deep learning hardware, performance can be improved and power consumption reduced, yielding a more cost-effective and efficient deep learning system.
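The abstract's point about avoiding random memory accesses can be illustrated with a minimal sketch (a hypothetical example, not taken from the paper): the two functions below compute the same sum, but the sequential traversal lets a hardware prefetcher stream consecutive cache lines, while the shuffled traversal tends to miss in cache on real hardware.

```python
import random
import time

def sequential_sum(data):
    # Sequential traversal: consecutive addresses, so the hardware
    # prefetcher can stream cache lines and most accesses hit in cache.
    total = 0
    for i in range(len(data)):
        total += data[i]
    return total

def random_sum(data, order):
    # Identical arithmetic, but the access pattern is unpredictable,
    # so on real hardware many accesses miss in cache.
    total = 0
    for i in order:
        total += data[i]
    return total

n = 1_000_000
data = list(range(n))
order = list(range(n))
random.shuffle(order)

t0 = time.perf_counter()
s1 = sequential_sum(data)
t1 = time.perf_counter()
s2 = random_sum(data, order)
t2 = time.perf_counter()

assert s1 == s2  # same result; only the access pattern differs
print(f"sequential: {t1 - t0:.3f}s  random: {t2 - t1:.3f}s")
```

The measured gap depends on the machine (cache sizes, interpreter overhead), but the principle is the one the abstract names: restructuring an algorithm so its memory accesses are predictable is an architectural optimization that costs nothing in arithmetic.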

Authors
K.C. Avinash Khatri¹, Krishna Bikram Shah²
¹University of East London, United Kingdom; ²Nepal Engineering College, Nepal

Keywords
Deep Learning, Hardware, Efficiency, Algorithm, Consumption
Published By: ICTACT
Published In: ICTACT Journal on Data Science and Machine Learning (Volume: 4, Issue: 3, Pages: 456-460)
Date of Publication: June 2023

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.