
Ojas Sanghi


MIT Department: Electrical Engineering and Computer Science
Faculty Mentor: Prof. Priya Donti
Research Supervisor: Alvaro Carbonero, Anvita Bhagavathula
Undergraduate Institution: University of Arizona

Biography

Ojas Sanghi is a rising senior at the University of Arizona (U of A) studying Computer Science and Future Earth Resilience. He is the Tucson Co-Lead of the AZ Youth Climate Coalition and the Vice President of U Arizona Divest. In those roles, he has driven community-wide change, including a successful campaign to get Tucson’s largest school district to adopt America’s most comprehensive school climate action resolution. He serves on the City of Tucson’s Climate Commission and is the Chair of the re-election campaign for Tucson Council Member Kevin Dahl. He advances research applying AI to clean energy solutions with Dr. Priya Donti at MIT, Dr. Jennifer Braid at Sandia National Laboratories, and Dr. Adam Printz at the U of A’s Printz Research Group. In his free time, he performs weekly improv comedy with Comedy Corner at the U of A, the country’s oldest collegiate improv comedy club. He is a 2024 Udall Scholar and a 2025 Truman Scholar.

Abstract

Developing Generalizable Graph Neural Networks for Power Systems to Enable the Clean Energy Transition

Ojas Sanghi1, Alvaro Carbonero2, Anvita Bhagavathula2, and Priya Donti2,3

1Department of Computer Science, University of Arizona

2Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Energy systems face increasing strain from the surge in clean energy generation and climate change-induced weather anomalies. To manage these changes and prevent blackouts, grid operators must drastically increase the number of simulations of how power would flow on the grid. However, such an increase in these power flow (PF) simulations is currently infeasible due to the time-consuming nature of traditional solvers. To successfully transition to clean energy and maintain a stable grid in a more volatile climate, this computational bottleneck must be addressed. Machine learning (ML) models such as Graph Neural Networks (GNNs) offer a potentially scalable alternative. However, GNNs currently fail to generalize across different grid sizes, one of many limitations preventing their real-world deployment. Here, we study various architectural components, model hyperparameters, and data setups to understand what drives generalizability across grid sizes. We find that training on a larger grid size—e.g., 500 nodes—generalizes well to a smaller grid size—e.g., 30 nodes—but not vice versa. We additionally find that training on a set of mixed grid sizes—e.g., 118 and 500 nodes—generalizes better to smaller grid sizes than training on any single size, thus informing future research directions for ML researchers and grid operators.
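The property the abstract relies on—that one trained model can even be evaluated on grids of different sizes—comes from weight sharing in message passing: a GNN layer's parameters are tied to feature dimensions, not to a particular number of buses. The sketch below is a minimal, hypothetical illustration of this in NumPy (it is not the authors' model or architecture): a single mean-aggregation message-passing layer whose shared weights run unchanged on a 5-node and a 30-node graph.

```python
import numpy as np

def gnn_layer(x, edges, w_self, w_nbr):
    """One message-passing layer: each node mixes its own features with
    the mean of its neighbors' features. Because w_self and w_nbr are
    shared across all nodes, the same layer applies to a graph of any
    size -- the property that makes cross-grid-size evaluation possible.
    """
    n, _ = x.shape
    agg = np.zeros_like(x)
    deg = np.zeros(n)
    for i, j in edges:                 # undirected: pass messages both ways
        agg[i] += x[j]
        agg[j] += x[i]
        deg[i] += 1
        deg[j] += 1
    agg /= np.maximum(deg, 1)[:, None]         # mean over neighbors
    return np.tanh(x @ w_self + agg @ w_nbr)   # shared weights, any n

rng = np.random.default_rng(0)
d = 4  # per-node feature dimension (hypothetical)
w_self, w_nbr = rng.normal(size=(d, d)), rng.normal(size=(d, d))

# The exact same weights run on a small and a larger line graph.
small = gnn_layer(rng.normal(size=(5, d)),
                  [(0, 1), (1, 2), (2, 3), (3, 4)], w_self, w_nbr)
large = gnn_layer(rng.normal(size=(30, d)),
                  [(i, i + 1) for i in range(29)], w_self, w_nbr)
print(small.shape, large.shape)  # (5, 4) (30, 4)
```

Weight sharing makes cross-size inference *possible*, but, as the abstract reports, it does not by itself make it *accurate*—which training grid sizes (and mixes of sizes) yield transferable weights is exactly the empirical question the study addresses.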