I research multiplicity in machine learning: the phenomenon that many different models may perform equally well on a task according to standard accuracy metrics. Multiplicity poses a problem for machine learning because individual decisions, if they could change under an equally good alternative model, become arbitrary. This arbitrariness may be unavoidable, but it is often hidden because alternative models and decisions are not considered.

My work specifically focuses on how datasets impact multiplicity: how different datasets may be equally well suited to a prediction task, yet yield models that behave differently in practice. I use techniques from formal methods and machine learning to computationally measure the impact that dataset multiplicity has on machine learning robustness. I am also branching out into techniques from human-computer interaction to gain a deeper understanding of how multiplicity in machine learning impacts fairness.
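As a minimal (and entirely hypothetical) illustration of the core idea, the sketch below builds two toy threshold classifiers that achieve identical accuracy on the same small dataset yet disagree on a particular individual; the data, thresholds, and test point are invented for illustration only.

```python
# Toy illustration of predictive multiplicity: two models with equal
# accuracy on a dataset can still disagree on an individual's prediction.

# Hypothetical data: (feature, label) pairs.
data = [(0.1, 0), (0.3, 0), (0.45, 1), (0.55, 0), (0.7, 1), (0.9, 1)]

# Two threshold classifiers with different decision boundaries.
def model_a(x):
    return int(x >= 0.4)

def model_b(x):
    return int(x >= 0.6)

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

# Both models misclassify exactly one point, so their accuracies match...
print(accuracy(model_a), accuracy(model_b))
# ...yet an individual at x = 0.5 receives opposite decisions.
print(model_a(0.5), model_b(0.5))
```

Standard accuracy metrics cannot distinguish between these two models, so the decision for the individual at x = 0.5 is, in this sense, arbitrary.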

I’m currently in my fourth year in the Computer Sciences PhD program at the University of Wisconsin-Madison. I’m part of the MadPL group and am co-advised by Aws Albarghouthi and Loris D’Antoni.

News

  • March 2024 - This May, I will attend the DMLR workshop at ICLR to present our paper, Verified Training for Counterfactual Explanation Robustness under Data Shift.
  • Fall 2023 - As part of the STEM Public Service Fellows program, I am working with UW-Madison’s Data Science Hub to develop a workshop on fair and explainable machine learning.
  • Summer 2023 - I taught a course (CS 220: Data Science Programming 1) during UW-Madison’s summer session.

Publications

(+) Equal contribution

On Minimizing the Impact of Dataset Shifts on Actionable Explanations
Anna P. Meyer (+), Dan Ley (+), Suraj Srinivas, and Himabindu Lakkaraju
UAI 2023 (Oral Presentation)
[pdf] [code]
The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions
Anna P. Meyer, Aws Albarghouthi, and Loris D’Antoni
FAccT 2023
[pdf] [video] [code]
Certifying Robustness to Programmable Data Bias in Decision Trees
Anna P. Meyer, Aws Albarghouthi, and Loris D’Antoni
NeurIPS 2021
[pdf] [slides] [video] [code]
Preprints and workshop papers

Verified Training for Counterfactual Explanation Robustness under Data Shift
Anna P. Meyer (+), Yuhao Zhang (+), Aws Albarghouthi, and Loris D’Antoni
DMLR workshop at ICLR 2024
[pdf] [code]