Deep learning hardware efficiency is becoming increasingly important as the use of deep learning grows, making efficient hardware essential for exploiting these powerful algorithms. Architectural optimization is one way to improve the performance and efficiency of deep learning hardware: it involves designing hardware specifically for deep learning workloads rather than for general-purpose applications. This can include specialized memory and compute architectures, as well as new form factors that allow for more efficient power consumption. In addition, techniques such as bypassing the cache hierarchy or avoiding random memory accesses can be employed to increase hardware efficiency. By applying these and other architectural optimizations to deep learning hardware, performance can be improved and power consumption reduced, yielding a more cost-effective and efficient deep learning system.
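The cost of random memory accesses mentioned above can be illustrated in software as well as hardware: sequential traversal of an array benefits from caching and prefetching, while a shuffled access pattern defeats them. A minimal sketch (this microbenchmark is illustrative only and not from the paper; absolute timings depend on the interpreter and machine):

```python
import random
import time

N = 1_000_000
data = list(range(N))

# Sequential access: cache-friendly, predictable for hardware prefetchers
start = time.perf_counter()
seq_total = 0
for i in range(N):
    seq_total += data[i]
seq_time = time.perf_counter() - start

# Random access: same work, but a shuffled index order defeats
# caching and prefetching
indices = list(range(N))
random.shuffle(indices)
start = time.perf_counter()
rand_total = 0
for i in indices:
    rand_total += data[i]
rand_time = time.perf_counter() - start

# Both traversals compute the same sum; only the access pattern differs
print(f"sequential: {seq_time:.4f}s, random: {rand_time:.4f}s")
```

Both loops perform identical arithmetic, so any timing gap is attributable purely to the memory access pattern, which is the effect that accelerator designs try to exploit by favoring streaming, sequential data layouts.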
K.C. Avinash Khatri¹, Krishna Bikram Shah²
¹University of East London, United Kingdom; ²Nepal Engineering College, Nepal
Keywords: Deep Learning, Hardware, Efficiency, Algorithm, Consumption
Published By: ICTACT
Published In: ICTACT Journal on Data Science and Machine Learning (Volume: 4, Issue: 3, Pages: 456-460)
Date of Publication: June 2023