Abstract
This paper investigates professionals’ perceptions of bias in AI-powered performance evaluation tools and strategies to mitigate it. The research analyzed survey data collected from 260 respondents across multiple roles and industries to examine whether users’ roles relate to their concerns about bias and their views on the characteristics of AI tools. A chi-square test showed no significant association between bias concerns and primary role (χ²(24) = 11.2, p = 0.987), indicating that such concerns are widespread across all positions. However, there was a strong positive association between the level of concern that AI would perpetuate bias and the perceived importance of human oversight in curbing it (β = 0.499, p < .001, R² = 0.381). Overall, respondents rated the bias potential of AI tools as moderate to high, reported limited-to-moderate trust, and perceived transparency as generally lacking, while human oversight of AI tools and specific training were overwhelmingly supported. The results highlight key ways to enhance fairness, transparency, and trust in AI used for HR decisions.
Authors
Ramya Vishwanath Acharya, P. Srikanth
RV Institute of Management, India
Keywords
AI Bias, Performance Evaluation, Algorithmic Fairness, HR Technology, Mitigation Strategies