Morayo Adeyemi

MIT Department: Electrical Engineering and Computer Science
Faculty Mentor: Prof. Paul Liang
Undergraduate Institution: Howard University
Biography

Morayo Adeyemi is an incoming master’s student in Data Science and Analytics at Howard University and a Cohort 6 Karsh STEM Scholar. She completed her B.S. in Computer Science with a minor in Mathematics in three years and is continuing her studies to prepare for a Ph.D. in Computer Science. She has interned at Google, conducted human-computer interaction research at Stanford advised by Dr. James Landay, and studied computer vision at Brown University under Dr. Daniel Ritchie. At Howard, she has conducted natural language processing research in Dr. Anietie Andy’s lab and is currently researching AI and human behavior in Dr. Paul Liang’s Multisensory Intelligence Group at MIT. Morayo is the founder of BisonBytes, Howard’s first computer science hackathon, and has served as a teaching assistant for CS0 and Fundamentals of Advanced Algorithms. Her research focuses on AI-powered multimodal systems that support human understanding, and she hopes to become a professor leading a human-centered research lab.

Abstract

Personalized Multimodal AI Home-Based Speech Therapy for Children with ASD

Morayo D. Adeyemi1 and Paul Pu Liang2
1Department of Physics and Astronomy, Howard University
2MIT Media Lab and the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Children with autism spectrum disorder (ASD) often struggle with expressive language, and while Augmentative and Alternative Communication (AAC) tools offer support, they are frequently abandoned due to complexity and limited engagement. We present a multimodal AI system that delivers personalized, home-based speech therapy by embedding familiar caregivers into interactive, story-driven visual scenarios based on the child’s daily life. Using minimal caregiver input, the platform generates engaging, context-aware narratives designed to guide children through step-by-step situations that build descriptive language and social communication skills. To promote self-expression, we prioritized a neurodiversity-affirming continuous storyline that connects scenarios to reduce masking, a learned response in which autistic individuals suppress natural behaviors to fit societal expectations. An explainable AI module offers transparent reasoning and routes all generated content through a parent and therapist approval process to ensure therapeutic relevance. The interface is designed with autism-centered UI/UX principles and adaptively tuned to each child’s sensory and cognitive needs. This work contributes to a human-centered framework for scalable, personalized speech therapy that supports communication development in neurodiverse children through familiar context and continuous narrative structure.
