Carson Sobolewski
MIT Department: Mechanical Engineering
Faculty Mentor: Prof. Navid Azizan
Research Supervisor: Young Jin Park
Undergraduate Institution: University of Florida
Hometown: Ponte Vedra, Florida
Website: LinkedIn, Website
Biography
Carson Sobolewski is a rising senior at the University of Florida majoring in Computer Engineering. At a young age, several of his family members were severely injured in car accidents. As a result, he has a strong interest in developing technologies that help reduce road collisions. More specifically, he is interested in robust autonomous control systems, uncertainty quantification, and meta-learning. At the University of Florida, he is a member of the Trustworthy Engineered Autonomy (TEA) lab, where he works on safety-chance prediction via conformal prediction and on control systems using the F1TENTH platform. He also has a strong passion for encouraging other students to get involved in research and serves as a peer advisor for the UF Center for Undergraduate Research. This summer, he is researching uncertainty quantification for object detection transformers (DETRs) under Prof. Navid Azizan at the MIT Schwarzman College of Computing. After graduating, he hopes to pursue a PhD and continue researching uncertainty quantification and statistical guarantees for autonomous vehicle systems.
Abstract
UQ-DETR: Uncertainty Quantification for Object Detection Transformers
Carson Sobolewski1, Young Jin Park2 and Navid Azizan2
1Department of Electrical and Computer Engineering, University of Florida
2Department of Mechanical Engineering, Massachusetts Institute of Technology
Recent work has proposed object detection transformers (DETRs), but these models are frequently overconfident in their predictions. When using such models in safety-critical applications, it is crucial to know how trustworthy a model is with respect to a particular object. If we can evaluate the reliability of a model's predictions, this opens the possibility of automated image labeling with minimal human assistance. This project first investigates the scenarios where standard uncertainty quantification methods, including softmax probability, are unreliable. In addition, the project aims to provide a framework that can identify overconfident object predictions at both the local and image-wide scales. At the local, or query, scale, we investigate the probability distributions of the model's outputs across all layers of the decoder, rather than just the final layer. By examining all layers, we are better able to distinguish between positive and negative predictions. To identify these trends, we performed numerical analysis on the in-distribution COCO object detection dataset and while transferring to the out-of-distribution Cityscapes and Foggy Cityscapes datasets. By leveraging this separation between positive and negative predictions, our method achieves better uncertainty quantification than existing methods.
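To illustrate the layer-wise idea described above, the following is a minimal sketch of aggregating classification confidence across decoder layers rather than reading only the final layer. It assumes access to the auxiliary classification logits that DETR-style decoders emit at every layer; the function names, array shapes, and the simple layer-averaging rule here are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def layerwise_confidence(decoder_logits):
    """Compare final-layer confidence with confidence averaged over layers.

    decoder_logits: array of shape (num_layers, num_queries, num_classes + 1)
        holding classification logits from each decoder layer, with the last
        class index reserved for 'no object' (the DETR convention).
    Returns (all_layer_conf, final_layer_conf), each of shape (num_queries,).
    """
    probs = softmax(decoder_logits, axis=-1)
    # Max object-class probability per layer and query ('no object' excluded).
    max_obj_prob = probs[..., :-1].max(axis=-1)   # (num_layers, num_queries)
    all_layer_conf = max_obj_prob.mean(axis=0)    # average across layers
    final_layer_conf = max_obj_prob[-1]           # final layer only
    return all_layer_conf, final_layer_conf

# Toy example: 6 decoder layers, 4 queries, 3 object classes + background.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 4, 4))
avg_conf, final_conf = layerwise_confidence(logits)
```

A query whose confidence is high at the final layer but unstable across earlier layers would be flagged as a candidate overconfident prediction under a scheme like this, whereas final-layer softmax alone cannot make that distinction.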