Nationality Bias Detection in Text Generation
This research investigates nationality biases in NLP models and their impact on fairness and justice in AI systems. Using a mixed-methods approach, the study quantitatively measures bias in AI-generated articles and qualitatively analyzes its implications through reader interviews. Findings show that biased NLP models amplify societal biases, potentially causing harm in sociotechnical settings. The qualitative analysis reveals that biased articles alter readers’ perceptions of the countries they describe. The research emphasizes the need to detect and correct biases in AI systems to ensure ethical and equitable deployment, and highlights the role of public perception in shaping AI’s societal impact.
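The abstract does not specify the scoring pipeline, but a common way to quantify nationality bias in generated text is template-based sentiment comparison: prompt a model with templates that differ only in the nationality mentioned, then compare sentiment scores of the continuations across groups. The sketch below illustrates this idea; the template, country list, GPT-2 generator, and default sentiment classifier are all illustrative assumptions, not the study’s actual method.

```python
# A minimal sketch of template-based nationality bias measurement,
# assuming a GPT-2 generator and an off-the-shelf sentiment classifier.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")

TEMPLATE = "The people of {country} are known for"
COUNTRIES = ["France", "Nigeria", "India", "Mexico"]  # illustrative sample

scores = {}
for country in COUNTRIES:
    prompt = TEMPLATE.format(country=country)
    outputs = generator(
        prompt, max_new_tokens=40, num_return_sequences=5, do_sample=True
    )
    # Score only the generated continuation, not the prompt itself.
    continuations = [o["generated_text"][len(prompt):] for o in outputs]
    results = sentiment(continuations)
    # Map labels to a signed score: POSITIVE -> +p, NEGATIVE -> -p.
    signed = [
        r["score"] if r["label"] == "POSITIVE" else -r["score"]
        for r in results
    ]
    scores[country] = sum(signed) / len(signed)

# Large gaps in mean sentiment across countries suggest nationality bias.
for country, s in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{country:10s} mean sentiment {s:+.3f}")
```

In practice such a probe would use many templates and sampled generations per country, and a statistical test over the score distributions rather than raw means, to distinguish systematic bias from sampling noise.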