|MIT Department: Electrical Engineering and Computer Science
Faculty Mentor: Prof. Aleksander Mądry
Undergraduate Institution: Beloit College
My name is Jerry, and I use he/him pronouns. My hometown is Can Tho, Vietnam. Currently, I’m a rising senior at Beloit College pursuing a double major in Computer Science and Math. My previous research experience involved applied mathematics, number theory, machine learning, and deep learning. Although I’m still exploring new areas, my current interests include representation learning and generative models. After graduation, I plan to pursue a Ph.D. in machine learning and related fields. Outside of the classroom, I enjoy spending time with friends, playing games, practicing guitar, and playing badminton.
The Effect of Data Augmentation on Deep Representations
Phuc Ngo1, Dimitris Tsipras2, Saachi Jain3, and Aleksander Mądry4
1Department of Computer Science and Mathematics, Beloit College
2, 3, 4Department of Electrical Engineering & Computer Science, Massachusetts Institute of Technology
Data augmentation is a simple and common technique that increases a model’s robustness to class-preserving transformations. However, our understanding of how data augmentation affects deep representations is limited. In this work, we study this question and hypothesize two possible mechanisms. The earlier layers of the model could map augmented inputs to representations similar to those of their standard counterparts. Alternatively, the model could use an entirely different set of prediction rules to classify augmented samples. To test these hypotheses, we trained standard and augmented models and analyzed the similarity between their predictions and representations. Our results suggest that data augmentation has a range of effects on deep representations: depending on the severity of the augmentation, models can vary between learning invariance and learning entirely separate augmented subpopulations.
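The core measurement described above can be illustrated with a toy sketch: compare a model’s representation of an input to its representation of an augmented version of that input, and see how the similarity changes with augmentation severity. Everything here is an assumption for illustration (a random linear-plus-ReLU feature map standing in for a trained network, additive noise standing in for an augmentation); the actual study uses deep networks and real augmentations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained feature extractor: a fixed random linear map
# followed by a ReLU. (Illustrative only -- not the models from the study.)
W = rng.normal(size=(16, 8))

def features(x):
    """Map an input vector to its internal 'representation'."""
    return np.maximum(W @ x, 0.0)

def augment(x, severity):
    """Class-preserving perturbation, modeled here as additive noise."""
    return x + severity * rng.normal(size=x.shape)

def cosine_sim(a, b):
    """Similarity between two representations."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

x = rng.normal(size=8)
sims = [cosine_sim(features(x), features(augment(x, s)))
        for s in (0.01, 0.1, 1.0)]
# Mild augmentations leave representations nearly unchanged;
# severe ones push them apart.
print(sims)
```

Under this sketch, the similarity for the mildest perturbation stays near 1, while the severe one drops noticeably, mirroring the paper’s observation that behavior depends on augmentation severity.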