ORIE 7790 (Spring 2020)
Final Project Instructions

The final project can be completed individually or in groups of two. The project can be any of the following:

  • Literature review: Critical summary of one or several papers related to the topics studied.
  • Original research: It can be either theoretical or experimental (ideally a mix of the two).

We particularly welcome projects that may be extended for submission to a peer-reviewed journal or conference (e.g., MOR/AoS/T-IT/COLT/ICML/NeurIPS/ICLR). Project topics must be approved by the instructor.

You are expected to submit a final project report summarizing your findings. The report may have up to 5 pages of main text (we encourage conciseness), plus an appendix of unlimited length. Due May 23.

Below are a few suggested (theoretical) papers for literature review. (Updated 4/13/2020. You are more than welcome to propose a paper of your own interest.)

Statistical Learning

  1. "Learning with Semi-Definite Programming: new statistical bounds based on fixed point analysis and excess risk curvature," Stéphane Chrétien, Mihai Cucuringu, Guillaume Lecué, Lucie Neirac, 2020
  2. "Reconciling modern machine learning practice and the bias-variance trade-off," Mikhail Belkina, Daniel Hsu, Siyuan Maa, and Soumik Mandala, 2019
  3. "Two models of double descent for weak features," Mikhail Belkin, Daniel Hsu, and Ji Xu, 2019
  4. "How Many Variables Should Be Entered in a Regression Equation?" L. Breiman, and D. Freedman
  5. "SLOPE is adaptive to unknown sparsity and asymptotically minimax," W. Su and E. Candes, The Annals of Statistics, 2016
  6. "Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima," P. Loh and M. Wainwright, 2013
  7. "The landscape of empirical risk for non-convex losses," S. Mei, Y. Bai, and A. Montanari, 2016
  8. "Phase transitions in semidefinite relaxations," A. Javanmard, A. Montanari, and F. Ricci-Tersenghi, Proceedings of the National Academy of Sciences, 2016
  9. "Singularity, Misspecification, and the Convergence Rate of EM," Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Michael I. Jordan, Martin J. Wainwright, Bin Yu, 2018
  10. "Randomly initialized EM algorithm for two-component gaussian mixture achievesnear optimality in O(√n) iterations," Y. Wu and H. H. Zhou, 2019
  11. "Spectral methods meet EM: A provably optimal algorithm for crowdsourcing," Y. Zhang, X. Chen, D. Zhou, and M. Jordan, Advances in Neural Information Processing Systems, 2014
  12. "Spectral algorithms for tensor completion," A. Montanari, N. Sun, 2016
  13. "Tensor SVD: Statistical and Computational Limits," A. Zhang, D. Xia, 2020

Neural Networks

  1. "What Can ResNet Learn Efficiently, Going Beyond Kernels?" Z. Allen-Zhu, Y. Li, 2019
  2. "Can SGD Learn Recurrent Neural Networks with Provable Generalization?" Z. Allen-Zhu, Y. Li, 2019
  3. "A Mean Field View of the Landscape of Two-Layers Neural Networks," S. Mei, A. Montanari, P. Nguyen, 2018
  4. "Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data," Y. Li, Y. Liang, 2018
  5. "On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization," S. Arora, N. Cohen, E. Hazan, 2018
  6. "On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition," M. Mondelli, A. Montanari, 2018
  7. "Learning One-hidden-layer Neural Networks with Landscape Design," R. Ge, J. Lee, T. Ma, 2017
  8. "Approximability of Discriminators Implies Diversity in GANs," Y. Bai, T. Ma, A. Risteski, 2018
  9. "Plug-and-Play Methods Provably Converge with Properly Trained Denoisers," E. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, W. Yin, 2019

Non-convexity in Learning and Statistics

  1. "Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence," V. Charisopoulos, Y. Chen, D. Davis, M. Diaz, L. Ding, D. Drusvyatskiy, 2019
  2. "On the Optimization Landscape of Tensor Decompositions," R. Ge and T. Ma, 2016
  3. "No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis," R. Ge, C. Jin, Y. Zheng, 2017
  4. "Characterizing Implicit Bias in Terms of Optimization Geometry," S. Gunasekar, J. Lee, D. Soudry, N. Srebro, 2018
  5. "Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion and Blind Deconvolution," C. Ma, K. Wang, Y. Chi, and Y. Chen, 2017
  6. "Gradient Descent Learns Linear Dynamical Systems," M. Hardt, T. Ma, B. Recht, 2016
  7. "Model-free Nonconvex Matrix Completion: Local Minima Analysis and Applications in Memory-efficient Kernel PCA," J. Chen, X. Li, 2017
  8. "Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent," A. Dalalyan, Conference on Learning Theory, 2017

Reinforcement Learning

  1. "Value function estimation in Markov reward processes: Instance-dependent L∞-bounds for policy evaluation," Ashwin Pananjady, Martin J. Wainwright, 2019
  2. "Variance-reduced Q-learning is minimax optimal," M. Wainwright, 2019
  3. "Provably Efficient Reinforcement Learning with Linear Function Approximation," Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan
  4. "Provably Efficient Exploration in Policy Optimization," Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang, 2019
  5. "Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium," Qiaomin Xie, Yudong Chen, Zhaoran Wang, and Zhuoran Yang, 2020

Optimization

The following papers are not directly related to this course, but you may still choose one of them provided that you first discuss your choice with the instructor.

  1. "How to Escape Saddle Points Efficiently," C. Jin, R. Ge, P. Netrapalli, S. Kakade, M. Jordan, 2017
  2. "Natasha 2: Faster Non-Convex Optimization Than SGD," Z. Allen-Zhu, 2017
  3. "An Alternative View: When Does SGD Escape Local Minima?" R. Kleinberg, Y. Li, Y. Yuan, 2018
  4. "Sharp analysis for nonconvex SGD escaping from saddle points," C. Fang, Z. Lin, and T. Zhang, 2019
  5. "Gradient Descent Can Take Exponential Time to Escape Saddle Points," S. Du, C. Jin, J. Lee, M. Jordan, B. Poczos, A. Singh, 2017
  6. "Stochastic Cubic Regularization for Fast Nonconvex Optimization," N. Tripuraneni, M. Stern, C. Jin, J. Regier, M. Jordan, 2017
  7. "On the Sublinear Convergence of Randomly Perturbed Alternating Gradient Descent to Second Order Stationary Solutions," S. Lu, M. Hong, Z. Wang, 2018
  8. "Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solutions for Nonconvex Distributed Optimization," M. Hong, J. Lee, M. Razaviyayn, 2018
  9. "Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems," M. Hong, Z.Q. Luo, and M. Razaviyayn, 2016
  10. "On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems," T. Lin, C. Jin, and M. Jordan, 2019
  11. "Stochastic methods for composite and weakly convex optimization problems," J. Duchi, F. Ruan, SIAM Journal on Optimization, 2018
  12. "Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets," D. Garber, E. Hazan, 2014
  13. "Mirror descent in non-convex stochastic programming," Z. Zhou, P. Mertikopoulos, N. Bambos, S. Boyd, P. Glynn, 2017