EXPLORING NEUROMORPHIC COMPUTING IN VLSI FOR EFFICIENT AI INFERENCE
Abstract
The demand for efficient, accelerated inference in artificial intelligence (AI) has spurred the exploration of neuromorphic computing paradigms implemented in Very Large Scale Integration (VLSI) systems. This study addresses the growing need for energy-efficient, high-performance AI inference by examining the potential of neuromorphic VLSI architectures. As AI applications proliferate, traditional computing architectures struggle to meet rising computational demands while maintaining energy efficiency. Neuromorphic computing, inspired by the neural networks of the human brain, offers a promising alternative by mimicking their parallel processing and event-driven communication. Current AI inference systems grapple with power consumption and latency, hindering real-time operation and scalability, and innovative solutions are needed to optimize these parameters without compromising accuracy or performance. Although neuromorphic computing in VLSI has shown potential, a comprehensive exploration of its efficacy against the specific challenges of AI inference is lacking. This study bridges that gap by investigating neuromorphic VLSI architectures and their impact on inference efficiency. The research employs a two-fold methodology: the design and implementation of neuromorphic VLSI architectures, followed by rigorous performance evaluation. Customized neural network models are adapted to exploit the distinctive features of the proposed VLSI designs, targeting favorable trade-offs among accuracy, speed, and power consumption. The results demonstrate a significant improvement in AI inference efficiency, underscoring the potential of neuromorphic VLSI architectures.
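
Illustrative note: the abstract refers to event-driven, brain-inspired processing but does not reproduce the paper's architectures or models. The sketch below is a minimal software illustration of a leaky integrate-and-fire (LIF) neuron, the kind of event-driven element neuromorphic VLSI designs typically realize in analog or digital circuitry. The class name LIFNeuron and parameters such as tau_m and v_thresh are assumptions chosen for clarity, not details taken from the paper.

# Illustrative only: a minimal leaky integrate-and-fire (LIF) neuron model.
# This is NOT the paper's architecture; names and parameters are assumptions.
# Neuromorphic VLSI implements this dynamic in circuits rather than software.

class LIFNeuron:
    def __init__(self, tau_m=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
        self.tau_m = tau_m        # membrane time constant (ms)
        self.v_rest = v_rest      # resting potential
        self.v_thresh = v_thresh  # firing threshold
        self.v_reset = v_reset    # potential after a spike
        self.dt = dt              # simulation time step (ms)
        self.v = v_rest           # current membrane potential

    def step(self, input_current):
        """Advance one time step; return True only when a spike (event) occurs."""
        # Leaky integration: decay toward rest plus accumulation of input.
        dv = (-(self.v - self.v_rest) + input_current) * (self.dt / self.tau_m)
        self.v += dv
        if self.v >= self.v_thresh:
            self.v = self.v_reset  # fire and reset
            return True            # event emitted only on threshold crossing
        return False               # no event: downstream logic stays idle

if __name__ == "__main__":
    neuron = LIFNeuron()
    # With a constant input, the neuron emits sparse spike events rather than
    # an output every cycle.
    spikes = [t for t in range(200) if neuron.step(input_current=1.5)]
    print("spike times (ms):", spikes)

Because the neuron produces an output only when its membrane potential crosses threshold, downstream computation and communication are triggered sparsely rather than on every clock cycle; this event-driven sparsity is the general mechanism behind the power and latency advantages the abstract attributes to neuromorphic VLSI.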

Authors
J. Muralidharan1, B. Srinivasa Rao2, Davinder Kumar3, T. Lakshmi Narayana4
1KPR Institute of Engineering and Technology, India; 2Gokaraju Rangaraju Institute of Engineering and Technology, India; 3Micron Technology, Telangana, India; 4KLM College of Engineering for Women, India

Keywords
Neuromorphic Computing, VLSI, AI Inference, Efficiency, Parallel Processing, Event-driven Communication
Published By: ICTACT
Published In: ICTACT Journal on Microelectronics (Volume 9, Issue 3, Pages 1620-1627)
Date of Publication: October 2023

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.