Olumide Ogunmakinwa

MIT Department: Electrical Engineering and Computer Science
Faculty Mentor: Prof. Negar Reiskarimian
Undergraduate Institution: Howard University
Website:
Biography
Olumide T. Ogunmakinwa is a rising sophomore and Karsh STEM Scholar at Howard University, pursuing a bachelor's degree in Computer Engineering. Inspired by technology's potential to solve real-world problems, he focuses on ethical AI development and hardware acceleration through VLSI design. His coursework in Python programming and engineering fundamentals provides the technical foundation for his research interests in transparent machine learning systems and energy-efficient computing architectures.

As a Karsh STEM Scholar, Olumide is passionate about increasing diversity in STEM fields. He actively mentors peers and advocates for inclusive innovation that weighs both technical excellence and social impact. His independent research into AI alignment challenges has strengthened his commitment to developing trustworthy technologies, and he plans to pursue a Ph.D. to advance responsible computing solutions that bridge technical innovation with societal needs.

Beyond academics, Olumide enjoys gaming and exploring superhero narratives, which continue to inspire his creative approach to problem-solving in technology. These interests reinforce his belief in technology's power to create equitable futures while entertaining and connecting people across cultures.
Abstract
Beyond Sentiment: Detecting and Mitigating Cultural Bias in Emotion Classification
Olumide Ogunmakinwa1 and Negar Reiskarimian2
1Department of Electrical, Computer and Energy Engineering, Howard University
2Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
Emotion recognition systems often fail to account for linguistic and cultural diversity, leading to biased predictions that disproportionately impact marginalized communities. To address this issue, we investigate cultural bias in emotion classification by developing a system that detects and mitigates disparities in predictions across dialects, with a focus on African American Vernacular English (AAVE). Our approach leverages DistilBERT-based models for multi-label emotion classification, trained on a dataset combining American English and AAVE social media text. We implement bias detection techniques to identify vulnerable prediction pathways, along with mitigation strategies such as class reweighting and adversarial training. The system also incorporates a continuous improvement pipeline with user feedback functionality, integrating cultural awareness throughout the machine learning lifecycle, from data collection to real-time bias monitoring. This work establishes a framework for developing equitable emotion recognition systems that better account for linguistic diversity.
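One of the mitigation strategies named above, class reweighting, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a hypothetical multi-label setup in which each text is tagged with several emotions, and shows how inverse-frequency per-class weights can be computed and folded into a weighted binary cross-entropy loss, so that rarer emotion labels contribute more to training.

```python
import numpy as np

def class_weights(label_matrix):
    """Inverse-frequency weight per emotion label (hypothetical helper).

    label_matrix: (n_samples, n_labels) binary array of emotion annotations.
    """
    freq = label_matrix.mean(axis=0)           # fraction of samples carrying each label
    weights = 1.0 / np.clip(freq, 1e-6, None)  # rarer labels receive larger weights
    return weights / weights.mean()            # normalize so the average weight is 1

def weighted_bce(y_true, y_prob, weights):
    """Multi-label binary cross-entropy with per-class weights."""
    eps = 1e-7
    y_prob = np.clip(y_prob, eps, 1 - eps)
    per_label = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    return float((per_label * weights).mean())

# Toy label matrix: 4 samples x 3 emotions (e.g. joy, anger, sadness).
Y = np.array([[1, 0, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 0, 1]])
w = class_weights(Y)  # anger and sadness, being rarer, get larger weights than joy
```

In a DistilBERT pipeline the same idea is typically applied by passing per-class weights (e.g. via PyTorch's `pos_weight` argument to `BCEWithLogitsLoss`) rather than computing the loss by hand; the sketch above only shows the weighting logic itself.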