|MIT Department: Electrical Engineering and Computer Science
Faculty Mentor: Prof. Aleksander Madry
Undergraduate Institution: New York University Tandon School of Engineering, New York
I am currently a junior at NYU Tandon School of Engineering, majoring in Computer Science and minoring in Mathematics, Data Science, and Cybersecurity. I am from Patna, a small city in India. Since both my parents are doctors, I’ve always had a keen interest in medicine, but I also discovered my love for technology in high school. I love anything and everything related to machine learning and hope to study its applications, particularly in the medical field. I plan on pursuing a PhD after graduating and hope to become a professor and researcher later in life. Outside of school, I love heights and scary rides, dancing, playing VR games, and painting, and something I’ve always wanted to do is travel (a lot)!
Feature Selection in Image Classification
Mohini Anand1, Kai Xiao2, Shibani Santurkar2, Aleksander Madry2
1Computer Science, NYU Tandon School of Engineering
2Electrical Engineering and Computer Science, MIT
Image classification is a machine learning task in which a model predicts the class of the object present in an image. Numerous models with strong performance have been developed, yet there is little understanding of how they arrive at their predictions. The aim of this work is to investigate whether these models focus on the important features of an image when making predictions. To this end, we leverage saliency maps, a standard primitive in the field of model interpretability, to gain insight into how important each pixel of the image is to the model’s prediction. We compare these saliency maps with pre-annotated segmentation maps, in which the objects in each image have been manually labelled by humans beforehand, for both standard and adversarially robust models. Our analysis shows that classifiers trained with standard empirical risk minimization (ERM) do not always focus on the object of interest, but also attend to other features in the object’s surroundings. While we find preliminary evidence that this problem is slightly alleviated by adversarially training classifiers, significant room for improvement remains.
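The comparison described above can be sketched in code. The snippet below is a minimal, hypothetical NumPy illustration (not the authors' actual pipeline): it assumes a linear classifier, for which the input-gradient saliency map is simply the absolute value of the weights (a deep network would instead backpropagate the class score to the input pixels), and it measures agreement with a human-annotated segmentation mask as the fraction of total saliency mass falling inside the mask. The function names and toy data are illustrative assumptions.

```python
import numpy as np

def saliency_map(weights: np.ndarray) -> np.ndarray:
    # For a linear score s = w . x, the gradient of s with respect to
    # each input pixel is the corresponding weight, so the saliency map
    # is |w|. (For a deep model, one would backpropagate to the input.)
    return np.abs(weights)

def mask_alignment(saliency: np.ndarray, seg_mask: np.ndarray) -> float:
    # Fraction of total saliency mass that lies inside the pre-annotated
    # object mask; 1.0 means the model attends only to the object.
    return float(saliency[seg_mask].sum() / saliency.sum())

# Toy 4x4 "image": the annotated object occupies the top-left 2x2 block.
seg_mask = np.zeros((4, 4), dtype=bool)
seg_mask[:2, :2] = True

# Hypothetical classifier weights: mostly on the object, but with one
# unit of weight on a background pixel (a spurious surrounding feature).
weights = np.zeros((4, 4))
weights[:2, :2] = 1.0   # attends to the object...
weights[3, 3] = 1.0     # ...but also to the background

sal = saliency_map(weights)
print(round(mask_alignment(sal, seg_mask), 2))  # prints 0.8
```

An alignment of 0.8 here reflects exactly the failure mode the abstract describes: one fifth of the model's attention falls outside the labelled object, onto its surroundings.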