Author

Date of Graduation

5-2025

Degree Type

Dissertation/Thesis

Degree Name

Bachelor of Science (BS)

Major

Computer Science

Advisor(s)

Zakirul Alam Bhuiyan

Abstract

With the rise of generative AI infiltrating many areas of our everyday lives, it is essential to acknowledge that AI can be trained on discriminatory data. When AI systems are trained on racist information, they produce answers that carry a racial bias, which further perpetuates and reinforces the systemic inequalities that minorities experience. Prior research establishes that racial bias is a problem in the use of AI and provides case studies of how racial bias has played out in large-scale projects, but it does not explain or acknowledge the long-term effects of racial bias on minorities. This thesis investigates different discriminatory algorithms used to train AI models, focusing on how these algorithms disproportionately impact minority communities and exploring ways to address those impacts. The key findings are that a lack of representation in the data selected for AI models, together with historical underrepresentation in that data, causes algorithms to produce inaccurate results and exclude some of the very demographics they are meant to serve. To make AI more inclusive of all demographics and reduce racial bias, mitigation techniques, including preprocessing, model selection, and postprocessing, work toward incorporating fairness metrics into model results. Incorporating these mitigation techniques and fairness metrics into future AI systems is critical to preventing further discrimination against minorities, who have already been significantly harmed by biased AI in areas such as policing and healthcare.
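As an illustration of the kind of fairness metric such mitigation techniques aim to satisfy, the short Python sketch below computes a demographic parity difference between two groups. It is not taken from the thesis; the predictions, group labels, and function name are hypothetical and serve only to make the concept concrete.

# Illustrative sketch (not from the thesis): a simple fairness metric,
# the demographic parity difference, computed over hypothetical predictions.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates between demographic groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g., "A" or "B"), aligned with predictions
    """
    rates = {}
    for label in set(groups):
        members = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(members) / len(members)
    # A value near 0 means similar positive rates across groups;
    # a large gap signals potential disparate impact.
    values = list(rates.values())
    return max(values) - min(values)


if __name__ == "__main__":
    # Hypothetical predictions and group memberships, for illustration only.
    preds = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5

In practice, a postprocessing step might adjust decision thresholds per group until this difference falls below a chosen tolerance, while preprocessing approaches reweight or resample the training data before the model is fit.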
