BiasTrap: Runtime Detection of Biased Prediction in Machine Learning Systems

Mamman, H. and Basri, S. and Balogun, A.O. and Imam, A.A. and Kumar, G. and Capretz, L.F. (2024) BiasTrap: Runtime Detection of Biased Prediction in Machine Learning Systems. Journal of Advanced Research in Applied Sciences and Engineering Technology, 40 (2). pp. 127-139.

Full text not available from this repository.
Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

Machine Learning (ML) systems are now widely used across fields such as hiring, healthcare, and criminal justice, but they are prone to unfairness and discrimination, which can have serious consequences for individuals and society. Although various fairness testing methods have been developed to tackle this issue, they lack a mechanism for continuously monitoring ML system behaviour at runtime. This study proposes a runtime verification tool called BiasTrap to detect and prevent discrimination in ML systems. The tool combines data augmentation and bias detection components to create and analyse instances with different sensitive attributes, enabling the detection of discriminatory behaviour in the ML model. The simulation results demonstrate that BiasTrap can effectively detect discriminatory behaviour in ML models trained on different datasets using various algorithms. BiasTrap is therefore a valuable tool for ensuring fairness in ML systems in real time. © 2024, Semarak Ilmu Publishing. All rights reserved.
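The full text is not available here, but the abstract's core idea, augmenting an incoming instance by varying its sensitive attributes and checking whether the model's prediction changes, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation; the function names (`bias_trap`, `biased_model`) and the toy decision rule are hypothetical.

```python
def bias_trap(model, instance, sensitive_attrs):
    """Runtime check in the spirit of BiasTrap (illustrative sketch).

    Data augmentation: generate variants of `instance` with each
    sensitive attribute flipped to its alternative values.
    Bias detection: flag the input if the model's prediction changes
    on any variant, i.e. the decision depends on a sensitive attribute.
    """
    baseline = model(instance)
    for attr, values in sensitive_attrs.items():
        for value in values:
            if value == instance[attr]:
                continue  # skip the original value
            variant = dict(instance, **{attr: value})
            if model(variant) != baseline:
                return True  # discriminatory behaviour detected
    return False


# Hypothetical model that (unfairly) uses 'sex' in its decision.
def biased_model(x):
    return int(x["income"] > 50 or x["sex"] == "male")


applicant = {"income": 40, "sex": "female"}
print(bias_trap(biased_model, applicant, {"sex": ["male", "female"]}))  # True
```

In a deployed system the check would wrap the model's prediction endpoint so every request is screened before the decision is released; the abstract does not specify the intervention taken when discrimination is detected.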

Item Type: Article
Additional Information: Cited by 0
Depositing User: Mr Ahmad Suhairi UTP
Date Deposited: 04 Jun 2024 14:19
Last Modified: 04 Jun 2024 14:19
URI: https://khub.utp.edu.my/scholars/id/eprint/19563
