
Tanisha Shende

MIT Department: Media Arts and Sciences
Faculty Mentor: Prof. Canan Dagdeviren
Research Supervisors: Jason Hou, Shrihari Viswanath
Undergraduate Institution: Oberlin College

Biography

Tanisha Shende is a rising senior at Oberlin College, double majoring in Computer Science and Mathematics with a minor in Sociology and an integrative concentration in Data Science. Tanisha is a human-computer interaction researcher in Dr. Shiri Azenkot and Dr. Andrea Stevenson-Won’s labs at Cornell University, studying ways to make technology, especially extended reality, more accessible to disabled people. Last summer, she conducted research at Gallaudet University under Dr. Abraham Glasser, exploring the effectiveness of augmented reality in the technical education of d/Deaf and hard-of-hearing students. Tanisha supports students’ STEM success as a student worker in the Office of Undergraduate Research and as chair of the Mathematics Majors Committee. She also studies technology and foreign affairs through political research sanctioned by the U.S. State Department and as a Student Delegate to the Athens Democracy Forum. Tanisha is broadly interested in applying extended reality and artificial intelligence to healthcare, education, and accessibility while ensuring responsible technology development. She plans to pursue a PhD in Computer Science alongside a JD or a technology policy degree.

Abstract

A Mixed Reality System for Real-Time 3D Ultrasound: Improving Usability and Accuracy

Tanisha Shende1, Jason Hou2, Shrihari Viswanath2, and Canan Dagdeviren2

1Department of Computer Science, Oberlin College

2Media Lab, Massachusetts Institute of Technology

Ultrasound imaging is widely used for diagnosis and guidance due to its safety, cost-effectiveness, and real-time capabilities. However, traditional systems rely on 2D slices, demanding extensive training and spatial reasoning and leading to interpretation errors and operator variability. While 3D and 4D imaging have been introduced, interfaces for intuitive, real-time interpretation of volumetric ultrasound data remain lacking. We present a novel mixed reality interface that displays real-time 3D ultrasound data, aiming to enhance interpretability and usability. To evaluate its effectiveness, we are conducting a user study with both novices and experts in ultrasound imaging, comparing our system to conventional 2D, screen-based 3D, and mixed reality 2D interfaces. Participants perform object identification and structure reconstruction tasks; performance is measured by accuracy, speed, perceived workload (NASA Task Load Index), and qualitative feedback from semi-structured interviews. Results will inform improvements in user experience and system accuracy, with the goal of advancing ultrasound imaging interfaces toward greater interpretability and clinical utility.