TRUST-AWARE FEDERATED LEARNING WITH SOFT COMPUTING FOR PRIVACY-PRESERVING HEALTHCARE ANALYTICS

ICTACT Journal on Soft Computing (Volume 16, Issue 4)

Abstract

The rapid adoption of data-driven healthcare analytics has raised serious concerns about patient privacy, data integrity, and collaborative intelligence across distributed medical institutions. Traditional centralized learning relies on extensive data sharing, which increases the risk of data leakage and regulatory non-compliance. Federated learning has emerged as a promising paradigm that enables collaborative model training without direct data exchange; however, the presence of unreliable or malicious participants limits its practical deployment in real-world healthcare environments. Although federated learning preserves data locality, it does not fully address trust among participating clients. Low-quality or adversarial updates degrade global model performance and compromise clinical reliability, and existing aggregation strategies ignore behavioral uncertainty and contextual trust, resulting in biased or unstable healthcare predictions. This study proposes a trust-aware federated learning framework that integrates soft computing techniques for adaptive client evaluation. A fuzzy logic-based trust model assesses each participant using historical update consistency, model divergence, and communication reliability, and the resulting trust scores dynamically weight local updates during aggregation. A privacy-preserving mechanism incorporating differential noise further strengthens data confidentiality. The framework is validated on distributed healthcare datasets representing diagnostic classification tasks under heterogeneous data distributions.
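The abstract does not give implementation details, but the core aggregation idea can be sketched as follows. All function names, the simple weighted-mean stand-in for the fuzzy inference step, and the Gaussian noise choice are illustrative assumptions, not the authors' method:

```python
import numpy as np

def fuzzy_trust_score(consistency, divergence, reliability):
    """Combine three normalized indicators (each in [0, 1]) into a trust
    score. A real fuzzy logic system would use membership functions and
    inference rules; a weighted mean stands in here as a placeholder."""
    # Lower divergence means higher trust, so invert that indicator.
    score = 0.4 * consistency + 0.3 * (1.0 - divergence) + 0.3 * reliability
    return float(np.clip(score, 0.0, 1.0))

def trust_weighted_aggregate(updates, trust_scores, noise_scale=0.0, rng=None):
    """Aggregate client model updates weighted by trust scores, optionally
    adding Gaussian noise as a rough differential-privacy mechanism."""
    rng = rng or np.random.default_rng(0)
    weights = np.asarray(trust_scores, dtype=float)
    weights = weights / weights.sum()        # normalize trust into weights
    stacked = np.stack(updates)              # shape: (num_clients, num_params)
    global_update = (weights[:, None] * stacked).sum(axis=0)
    if noise_scale > 0:
        global_update += rng.normal(0.0, noise_scale, global_update.shape)
    return global_update

# A distrusted client (score 0.1) contributes little to the global update.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
scores = [0.9, 0.1]
print(trust_weighted_aggregate(updates, scores))  # → [1.2 2.2]
```

The key property is that a client with a low trust score is not excluded outright (as in trimmed-mean aggregation) but smoothly down-weighted, which matches the soft computing framing of the paper.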
The experimental evaluation demonstrates that the proposed trust-aware federated learning framework achieves a classification accuracy of 0.94 and an F1-score of 0.94 at 200 iterations, outperforming Federated Averaging, Differentially Private Federated Learning, and Trimmed Mean aggregation by 10–15%. The framework reduces convergence time to 95 rounds, compared with 140–175 rounds for the existing methods. These results confirm that trust-guided aggregation improves robustness, accelerates convergence, and preserves privacy in distributed healthcare analytics.

Authors

Ojasvi Pattanaik1, R. Gayathri2
FH Aachen University of Applied Sciences, Germany1, Dr. D.Y. Patil Institute of Technology, India2

Keywords

Federated Learning, Trust Management, Soft Computing, Healthcare Analytics, Privacy Preservation

Published By
ICTACT
Date of Publication
January 2026
Pages
4139 - 4144