Specks: Spectrogram of Large Scale Knowledgebase for Interpreting Stereotypes in Dataset
Abstract
In this study, we propose a novel approach for identifying stereotypical biases in text using deep graph neural networks (DGNNs). Our approach uses graph-based representations of text data to capture complex relationships between words and demographics. The DGNN is trained with a set of novel loss functions that impose specific constraints on the properties of individual operators so that the underlying biases in the text data can be identified effectively. Experimental results on a text dataset show that our approach achieves high accuracy in identifying biases. Because the approach is novel and explicitly designed to identify biases in text data, we do not compare it against a baseline model. It is important to note, however, that our approach identifies biases rather than reducing them. The high loss we observed may be indicative of overfitting; future work could address this by adding regularization techniques such as dropout and by evaluating the model's performance on a held-out test set. Our study provides a new approach to identifying stereotypical biases in text and has the potential to inform future research on mitigating biases in natural language processing (NLP) systems. The use of graph-based representations and attention mechanisms in DGNNs improves performance in identifying biases in text, but the risk of overfitting must be considered and appropriately mitigated when using these models. Further research is required to address the reduction of biases.
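To make the kind of model the abstract describes concrete, the following is a minimal sketch of a graph-based bias classifier combining graph convolutions over a word/demographic graph with an attention-based pooling step. The class name, layer sizes, graph construction, and the constraint penalty are all illustrative assumptions, not the dissertation's actual architecture or loss functions.

```python
# Hypothetical sketch in the spirit of the abstract: a GNN over a
# graph-structured text representation with attention pooling.
# All names, dimensions, and the penalty term are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphBiasClassifier(nn.Module):
    def __init__(self, num_node_features: int, hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.lin1 = nn.Linear(num_node_features, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim, 1)      # scores nodes for graph-level pooling
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, num_node_features) word/demographic node features
        # adj: (num_nodes, num_nodes) normalized adjacency, e.g. a co-occurrence graph
        h = F.relu(adj @ self.lin1(x))            # first graph convolution
        h = F.relu(adj @ self.lin2(h))            # second graph convolution
        a = torch.softmax(self.attn(h), dim=0)    # attention weights over nodes
        g = (a * h).sum(dim=0)                    # attention-weighted graph embedding
        return self.head(g)                       # logits: biased vs. not biased

# Illustrative stand-in for the "constraints on operator properties" mentioned
# in the abstract: a simple weight-norm penalty added to the training loss.
def operator_constraint_penalty(model: nn.Module, weight: float = 1e-3) -> torch.Tensor:
    return weight * sum(p.pow(2).sum() for p in model.parameters())
```

A sketch like this would be trained with a standard classification loss plus the penalty term; the dissertation's actual loss functions and graph construction are not specified in the abstract.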
Subject Area
Computer science|Artificial intelligence|Information science
Recommended Citation
Ryan, Kyle John, "Specks: Spectrogram of Large Scale Knowledgebase for Interpreting Stereotypes in Dataset" (2023). ETD Collection for Fordham University. AAI30522008.
https://research.library.fordham.edu/dissertations/AAI30522008