Artificial Intelligence (AI) and Machine Learning (ML) have made tremendous advances in recent decades, and AI/ML models are now used in demographic research to gain insights into specific populations and research questions. While these models can provide novel and in-depth analysis, bias and fairness remain major challenges. To identify and mitigate bias, AI/ML models must be designed with fairness and trustworthiness as core components. Toward that goal, explainable AI (XAI) has garnered interest for filling the gaps where traditional AI/ML models fall short, since explainability plays a central role in ensuring that models are fair and trustworthy. In this paper, we highlight the use of XAI to identify bias in AI/ML models and in the datasets used to train them. We present use-case examples that apply post-hoc explainability to traditional AI/ML classification models to expose bias. Using an open-source dataset, we built six ML classification models and evaluated their performance on accuracy, precision, recall, and F1 score. We then applied XAI via SHAP to the top-performing model, generating feature-relevance and beeswarm plots that highlight bias in both the model and the data. We conclude with a robust discussion of how policymakers and practitioners can use XAI to mitigate bias by taking a data-driven, explainable approach to policy decision making.
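The workflow described in the abstract, training several classifiers, comparing them on accuracy, precision, recall, and F1, and then explaining the winner with SHAP, can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the synthetic dataset, the particular three models, and the choice of F1 as the tie-breaking metric are all assumptions made for the example.

```python
# Hypothetical sketch of the paper's pipeline: train several classifiers,
# score each on accuracy/precision/recall/F1, then select the top performer
# as the candidate for SHAP-based post-hoc explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Stand-in data; the paper uses an open-source demographic dataset instead.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The paper compares six models; three are shown here for brevity.
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
}

scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "f1": f1_score(y_te, pred),
    }

# Pick the top performer; F1 is used here as a single summary metric.
best = max(scores, key=lambda n: scores[n]["f1"])

# Post-hoc explanation of the chosen model (requires `pip install shap`):
# import shap
# explainer = shap.Explainer(models[best], X_tr)
# shap_values = explainer(X_te)
# shap.plots.beeswarm(shap_values)  # beeswarm / feature-relevance plot
```

The beeswarm plot then shows, per feature, how high and low feature values push individual predictions, which is what lets sensitive attributes that drive the model's decisions stand out as potential sources of bias.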