SECURED HARDWARE PLATFORM FOR FPGA AI MODEL PROTECTION
Abstract
The widespread adoption of Field-Programmable Gate Arrays (FPGAs) for deploying Artificial Intelligence (AI) models has brought substantial gains in computational efficiency. However, the vulnerabilities of these platforms raise concerns about protecting sensitive AI models from malicious attacks. As FPGAs become integral to AI deployment, the risk of unauthorized access and tampering grows, and existing security measures often fall short of comprehensive protection, leaving AI models open to exploitation. This study addresses the need for a secured hardware platform that safeguards FPGA-based AI models, employing Deep Neural Networks (DNNs) as the defense mechanism. While previous studies have explored FPGA-based AI models and security measures independently, a significant research gap remains in integrating DNNs specifically tailored to protect these models. This work fills that gap with a holistic solution that combines the adaptability of FPGAs with the robustness of DNNs to create a secure and resilient hardware platform. The research follows a two-fold methodology: first, the design and implementation of a secure FPGA architecture; second, the integration of DNNs into the hardware platform to detect and respond to potential security threats. The detection model is trained on diverse datasets to ensure adaptability across AI applications. Preliminary results show a significant improvement in the security posture of FPGA-based AI models: the integrated DNNs effectively identify and mitigate potential threats, providing a robust layer of defense against unauthorized access and tampering.
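The abstract does not specify the detector's architecture or inputs; purely as a rough illustration, the sketch below shows a minimal DNN-based threat classifier over FPGA runtime telemetry (all feature names, layer sizes, and the PyTorch-style workflow are assumptions, not details taken from the paper).

```python
# Minimal illustrative sketch of a DNN threat detector for FPGA telemetry.
# Feature names, layer sizes, and the two-class labeling are assumptions;
# the paper's actual model and training setup are not described in the abstract.
import torch
import torch.nn as nn

class ThreatDetector(nn.Module):
    """Small feed-forward classifier: telemetry features -> benign / threat."""
    def __init__(self, num_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 2),  # logits for {benign, potential threat}
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

if __name__ == "__main__":
    model = ThreatDetector(num_features=16)
    # Placeholder batch of telemetry vectors (e.g. configuration-port access
    # counts, on-chip voltage/temperature readings); real data would be
    # collected from the secured FPGA platform.
    batch = torch.randn(8, 16)
    logits = model(batch)
    predictions = logits.argmax(dim=1)  # 0 = benign, 1 = potential threat
    print(predictions)
```

In a deployment matching the abstract's description, a quantized version of such a network would sit on the FPGA alongside the protected AI model so that detection and response occur on-device rather than on a host.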

Authors
S. Aramuthakannan1, S. Lokesh2, R. Kumar3, P. Gajendran4
PSG Institute of Technology and Applied Research, India1,2,4; Sri Ramakrishna Institute of Technology, India3

Keywords
FPGA, AI Model Protection, Deep Neural Networks, Hardware Security, Threat Detection
Published By:
ICTACT
Published In:
ICTACT Journal on Microelectronics (Volume: 9, Issue: 3, Pages: 1628-1633)
Date of Publication:
October 2023

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.