Misha

I am a fellow at Princeton Language & Intelligence (PLI), where I study foundations and applications of machine learning. Most recently, I have been working on incorporating AI tools into algorithm design, scientific computing, and statistical estimation. My email is mkhodak@princeton.edu.

In the past, I have worked on fundamental theory for modern meta-learning (scalable methods that "learn-to-learn" using multiple learning tasks as data) and on end-to-end guarantees for learning-augmented algorithms (algorithms that incorporate learned predictions about their instances to improve performance). These results rest on a set of theoretical tools that port the idea of surrogate upper bounds from supervised learning to the setting of learning algorithmic cost functions. In addition to providing natural measures of task-similarity, this approach often yields effective and practical methods, for example in personalized federated learning and scientific computing. I have also led efforts to develop automated ML methods for diverse tasks and have worked on efficient deep learning, neural architecture search, and natural language processing.

Prior to PLI, I completed a PhD in computer science at CMU, where I was a TCS Presidential Fellow and a Facebook PhD Fellow. I will join UW-Madison as an assistant professor of CS in Fall 2025 and am recruiting students during the 2024-2025 graduate admissions cycle.

 
Recent Work:

Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances. ICLR 2024.

Mikhail Khodak, Edmond Chow, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [code]

 
Selected Papers:

Cross-Modal Fine-Tuning: Align then Refine. ICML 2023.

Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar.
[paper] [arXiv] [code] [slides]

Learning Predictions for Algorithms with Predictions. NeurIPS 2022.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar, Sergei Vassilvitskii.
[paper] [arXiv] [poster] [talk]

Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing. NeurIPS 2021.

Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, Ameet Talwalkar.
[paper] [arXiv] [code] [poster] [slides] [talk]

Rethinking Neural Operations for Diverse Tasks. NeurIPS 2021.

Nicholas Roberts*, Mikhail Khodak*, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar.
[paper] [arXiv] [code] [slides] [talk] [Python package]

Adaptive Gradient-Based Meta-Learning Methods. NeurIPS 2019.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [poster] [slides] [code] [blog] [talk]

A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019.

Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi.
[paper] [arXiv] [poster] [slides] [data] [blog] [talk]

A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors. ACL 2018.

Mikhail Khodak*, Nikunj Saunshi*, Yingyu Liang, Tengyu Ma, Brandon Stewart, Sanjeev Arora.
[paper] [arXiv] [slides] [code] [data] [blog] [talk] [R package]