
Jiefeng Chen

Ph.D. Candidate
Department of Computer Sciences
University of Wisconsin-Madison

Google Scholar · LinkedIn · GitHub · Twitter · Email


About


I am an Applied Scientist at AWS AI Labs, working on foundation models. I obtained my Ph.D. from the Department of Computer Sciences at the University of Wisconsin-Madison, where I was co-advised by Prof. Yingyu Liang and Prof. Somesh Jha and supported by the Center for Trustworthy Machine Learning (CTML). My research focused on trustworthy machine learning, addressing questions such as "How can we produce models that are robust to imperceptible perturbations?", "How can we train models that produce robust interpretations of their behavior?", and "How can we build out-of-distribution detectors that are robust to small adversarial perturbations?". I obtained my Bachelor's degree in Computer Science from Shanghai Jiao Tong University (SJTU).


News


06/19/2023: I successfully defended my Ph.D. thesis and earned the title of Dr. Chen. Please note that this website is no longer being updated.
04/24/2023: Our paper Stratified Adversarial Robustness with Rejection was accepted by ICML 2023.
01/20/2023: Our paper The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning was accepted by ICLR 2023 as a Spotlight presentation.
01/20/2023: Our paper Is Forgetting Less a Good Inductive Bias for Forward Transfer? was accepted by ICLR 2023. This work was done while I was interning at DeepMind.
02/28/2022: Our paper Revisiting Adversarial Robustness of Classifiers With a Reject Option received a Best Paper Award at an AAAI 2022 workshop.
01/20/2022: Our paper Towards Evaluating the Robustness of Neural Networks Learned by Transduction was accepted by ICLR 2022.
11/25/2021: Wrote a blog post about our paper Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles, which was accepted by NeurIPS 2021.
06/18/2021: Our paper ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining was accepted by ECML 2021 (acceptance rate: 21%).
06/01/2020: Our paper Concise Explanations of Neural Networks using Adversarial Training was accepted by ICML 2020.
10/31/2019: Wrote a blog post about our paper Robust Attribution Regularization, which was accepted by NeurIPS 2019.
02/19/2019: Our paper Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks was accepted by EuroS&P 2019.
05/11/2018: Our paper Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training was accepted by ICML 2018.