ENSEMBLE STRATEGY TO MITIGATE ADVERSARIAL ATTACK IN FEDERATED LEARNING
Abstract
Privacy concerns are central to today's data-driven healthcare industry. Federated Learning (FL) reduces the risk of data breaches by enabling collaborative model training without exchanging raw patient data. Differential Privacy (DP) strengthens FL's decentralized approach by adding noise to model updates so that individual patient records cannot be inferred. The combination is particularly useful for applications such as early cardiovascular disease detection, where accurate models must be built while preserving privacy. Hospitals train models locally and share only their updates with a central server, which aggregates them into a refined global model. Key challenges include achieving model convergence and managing communication overhead, and ongoing research aims to optimize these processes to deliver secure, privacy-preserving healthcare solutions.
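The workflow summarized above can be illustrated with a minimal sketch: each hospital computes a local model update, updates are clipped and perturbed with Gaussian noise for differential privacy, and a central server averages them into a global model. The function names (local_update, dp_aggregate), the logistic-regression objective, and the clipping and noise parameters below are illustrative assumptions, not the paper's exact algorithm.

    # Minimal federated averaging with Gaussian-noise DP (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(global_w, X, y, lr=0.1, epochs=5):
        """One hospital: a few epochs of logistic-regression gradient descent on local data."""
        w = global_w.copy()
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted risk
            grad = X.T @ (p - y) / len(y)         # cross-entropy gradient
            w -= lr * grad
        return w - global_w                       # send only the model *update*

    def dp_aggregate(updates, clip=1.0, sigma=0.5):
        """Central server: clip each update, add Gaussian noise, then average."""
        clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
        noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * clip, size=updates[0].shape)
        return noisy_sum / len(updates)

    # Synthetic stand-in for per-hospital cardiovascular data (features, binary label).
    hospitals = []
    for _ in range(4):
        X = rng.normal(size=(200, 8))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)
        hospitals.append((X, y))

    w_global = np.zeros(8)
    for rnd in range(10):                         # communication rounds
        updates = [local_update(w_global, X, y) for X, y in hospitals]
        w_global += dp_aggregate(updates)

Raising sigma strengthens the privacy guarantee but slows convergence, which mirrors the convergence and communication trade-offs noted in the abstract.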

Authors
R. Anusuya¹, D. Karthika Renuka², Ashok Kumar³, S.K. Prithika⁴, S. Mridula⁵, T. Subhaashini⁶, R. Tharsha⁷
¹ ² ⁴ ⁵ ⁶ ⁷ PSG College of Technology, India; ³ Thiagarajar College of Engineering, India

Keywords
Federated Learning (FL), Differential Privacy (DP), Data-driven Healthcare, Privacy-preserving Solutions, Early Cardiovascular Disease Detection, Model Convergence, Communication Overhead
Published By: ICTACT
Published In: ICTACT Journal on Soft Computing (Volume 15, Issue 3, Pages 3646-3652)
Date of Publication: January 2025

License: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.