Sicheng Zhu

✉ sczhu[at]umd[dot]edu · Google Scholar · Twitter

I am a Ph.D. student in the Computer Science Department at the University of Maryland, College Park, advised by Prof. Furong Huang.

Research Interest

My goal is to build trustworthy machine learning models.

When you are sitting in a self-driving car controlled by such a machine learning model, you know it can recognize trucks because it can identify the wheels, body, and cab that make up a truck, and it can recognize the wheels because it notices contours, tire materials, and hubs (interpretability). It can identify trucks from various angles, under different lighting conditions, and even under adversarial stickers (robustness). It can also recognize a wide variety of trucks, from heavy trailers to Cybertrucks (generalization). You can therefore confidently hand over the steering wheel and take a nap on your way to work, without worrying that the car will inexplicably crash into an overturned white truck.

My approach to achieving this goal is to endow machines with common sense about the physical world, with a current focus on modeling the symmetries of objects under various changes, which could potentially reduce learning complexity. For example, when we apply angle transformations, material changes, or certain pattern changes to a wheel in an image, it still appears as a wheel to humans and retains its function. My current work aims to incorporate these symmetries into the model.

Bio

Previously, I was a visiting scholar at the University of Virginia, where I was fortunate to be advised by Prof. David Evans. I received my M.E. from the Institute of Electronics, Chinese Academy of Sciences, and my B.S. from the University of Electronic Science and Technology of China.

Preprints
Benchmarking the Robustness of Image Watermarks
Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang
arXiv 2401.08573
[arXiv]
AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models
Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Zichao Wang, Furong Huang, Ani Nenkova, Tong Sun
arXiv 2310.15140
[arXiv] [Webpage] [3rd-Party Code]
On the Possibilities of AI-Generated Text Detection
Souradip Chakraborty*, Amrit Singh Bedi*, Sicheng Zhu, Bang An, Dinesh Manocha, Furong Huang
arXiv 2304.04736
[arXiv]
Publications

More Context, Less Distraction: Visual Classification by Inferring and Conditioning on Contextual Attributes
Bang An*, Sicheng Zhu*, Michael-Andrei Panaitescu-Liess, Chaithanya Kumar Mummadi, Furong Huang
ICLR 2024
[arXiv] [Code]
Like Oil and Water: Group Robustness Methods and Poisoning Defenses Don't Mix
Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Sicheng Zhu, Furong Huang, Tudor Dumitras
ICLR 2024
[Link]
Learning Unforeseen Robustness from Out-of-distribution Data Using Equivariant Domain Translator
Sicheng Zhu, Bang An, Furong Huang, Sanghyun Hong
ICML 2023
[Link] [Code]
Understanding the Generalization Benefit of Model Invariance from a Data Perspective
Sicheng Zhu*, Bang An*, Furong Huang
NeurIPS 2021
[Link] [arXiv] [Code]
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
Sicheng Zhu*, Xiao Zhang*, David Evans
ICML 2020
[Link] [arXiv] [Code]


Website template credit