Jeremiah Bailey

MIT Department: Electrical Engineering and Computer Science
Faculty Mentor: Prof. Tess Smidt
Research Supervisor: Ryley McConkey, Julia Balla
Undergraduate Institution: Howard University

Biography

Jeremiah Bailey is a sophomore (class of 2028) at Howard University with a 3.93 GPA, pursuing a B.S. in Computer Science. His academic interests center on machine learning, high-performance computing, and network simulation, with an emphasis on super-resolution of turbulent fluid flows and wireless system modeling. During the 2025 MIT Summer Research Program, Jeremiah developed and evaluated 3D super-resolution convolutional neural networks that predict high-resolution turbulent fluid fields from coarse inputs, addressing computational constraints in meteorology and aerodynamics. He also explored isotropic versus anisotropic behavior under the Kolmogorov hypothesis, implemented equivariant neural architectures, and built data-augmentation pipelines. Previously, as a research intern at Lawrence Berkeley National Laboratory (2022–2023), he parallelized numerical simulations using MPI and OpenMP, reducing runtimes by 30%, and integrated mixed-precision training for more efficient deep-learning experiments. Beyond research, Jeremiah volunteers in his community, enjoys golfing and listening to music, and is passionate about translating computational insights into real-world impact. His blend of analytical rigor and collaborative experience positions him to contribute meaningfully to interdisciplinary research teams.

Abstract

Rotational Equivariance in Turbulent Superresolution

Jeremiah Bailey1, Ryley McConkey2, Julia Balla2, and Dr. Tess Smidt2

1Department of Computer Science, Howard University

2Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Machine-learning-based superresolution of turbulent flow fields offers a viable way to overcome the computational constraints of sparse experimental measurements and high-fidelity simulations. In this work, we examine the interaction between data augmentation and model inductive biases in producing rotational equivariance in learned superresolution mappings. Kolmogorov’s local isotropy hypothesis states that small-scale turbulence becomes statistically isotropic despite large-scale anisotropy. To explore this, we train traditional convolutional neural networks (CNNs) on multiscale isotropic and anisotropic turbulence datasets. By measuring equivariance error across spatial scales, we quantify how rotational symmetry is enforced by equivariant model design versus implicitly learned through randomized augmentation. Our findings show that CNNs trained only with rotational data augmentation exhibit low equivariance error primarily at the smallest scales, consistent with Kolmogorov’s theory, while equivariant networks preserve symmetry across all resolved scales. These results support the development of physics-aware machine learning models that honor fluid-dynamic invariances and highlight the scale-dependent nature of learned symmetries in turbulent superresolution.
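To make the abstract's central measurement concrete, below is a minimal, hypothetical sketch in PyTorch. It is not the study's code: a toy 2D super-resolution CNN stands in for the 3D networks, each channel is treated as a scalar field (a true rotation of a velocity field would also rotate the vector components), and the wavenumber bands are arbitrary. It computes the relative rotational equivariance error ||f(Rx) − Rf(x)|| / ||f(x)|| within each spatial-frequency band, the kind of scale-resolved diagnostic the abstract describes.

```python
# Hypothetical sketch (not the study's code): per-scale rotational
# equivariance error of a super-resolution model.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Toy 2x super-resolution CNN, a stand-in for the trained models."""
    def __init__(self, channels=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),  # (B, 4C, H, W) -> (B, C, 2H, 2W)
        )

    def forward(self, x):
        return self.body(x)

def rot90(field, k):
    """Rotate fields of shape (B, C, H, W) by k * 90 degrees in-plane."""
    return torch.rot90(field, k, dims=(-2, -1))

def bandpass(field, lo, hi):
    """Keep only Fourier modes with radial wavenumber in [lo, hi)."""
    f_hat = torch.fft.fft2(field)
    h, w = field.shape[-2:]
    ky = torch.fft.fftfreq(h) * h
    kx = torch.fft.fftfreq(w) * w
    kmag = torch.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    mask = ((kmag >= lo) & (kmag < hi)).to(field.dtype)
    return torch.fft.ifft2(f_hat * mask).real

@torch.no_grad()
def equivariance_error(model, x, k=1, bands=((0, 4), (4, 16), (16, 64))):
    """Relative error ||f(Rx) - Rf(x)|| / ||f(x)|| per wavenumber band,
    where R is a k * 90-degree rotation. Low error only in the highest
    band would suggest symmetry learned at the smallest scales."""
    y = model(x)
    y_rot = rot90(model(rot90(x, k)), -k)  # rotate output back to compare
    errors = {}
    for lo, hi in bands:
        num = torch.linalg.vector_norm(bandpass(y_rot - y, lo, hi))
        den = torch.linalg.vector_norm(bandpass(y, lo, hi)) + 1e-12
        errors[(lo, hi)] = (num / den).item()
    return errors

model = TinySR()
x = torch.randn(1, 2, 64, 64)  # coarse two-component field, e.g. velocity
print(equivariance_error(model, x))
```

A scale-banded diagnostic like this is what distinguishes the two regimes reported above: an augmentation-only CNN would show low error mainly in the highest band, while an equivariant architecture would show low error in every band.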
