Schedule

Classes will generally be held on MWF each week. Most lectures will be 60 minutes, but a few may run to 70 minutes. (I will be absent on a number of class days, and the longer lectures will make up for these absences.)
General Course Information

Prerequisite
    Linear Algebra, some Analysis. See guidebook for specifics.
                        
The course will involve some programming to test algorithms. One useful option is to use Matlab with the free add-on cvx. A second option (particularly appealing if you took 524 recently) is to use Julia with the JuMP optimization toolbox. A third option is to use Python, but I can provide less support for this.
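For those who choose the Python route, the sketch below shows roughly what a small test script might look like; it uses scipy.optimize (one of several possible Python toolboxes, and not one specifically endorsed above) to minimize the two-variable Rosenbrock function. The test problem, starting point, and solver choice are illustrative assumptions only, not course requirements.

    # Minimal sketch (not part of the course materials): testing an optimizer in Python.
    import numpy as np
    from scipy.optimize import minimize

    def rosenbrock(x):
        # Classic nonconvex test function with minimizer at (1, 1).
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

    def rosenbrock_grad(x):
        # Hand-coded gradient, so the solver need not resort to finite differences.
        return np.array([
            -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
            200.0 * (x[1] - x[0]**2),
        ])

    result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]),
                      jac=rosenbrock_grad, method="BFGS")
    print(result.x, result.nit)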
Text
    J. Nocedal and S. J. Wright, Numerical Optimization, Second Edition, Springer, 2006. (It's essential to get the second edition!) Here is the current list of typos.
References
    D. P. Bertsekas, with A. Nedic and A. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003.
    Y. Nesterov, Introductory Lectures on Convex Optimization, Kluwer, 2004.
    S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press. Available here.
    D. P. Bertsekas, Nonlinear Programming, Second Edition, Athena Scientific, Belmont, MA, 1999.
    R. Fletcher, Practical Methods of Optimization, Second Edition, Wiley, Chichester & New York, 1987.
    R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, 1998. (This is a more advanced book and an invaluable reference.)
    A. Ruszczynski, Nonlinear Optimization, Princeton University Press, 2006.
    S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, 1997.
Course Outline

This will likely be adapted as the semester proceeds, but most of the following topics will be covered.
Introduction
    Optimization paradigms and applications
    Mathematical background: convex sets and functions, linear algebra, topology, convergence rates

Smooth unconstrained optimization: Background
    Taylor's theorem
    Optimality conditions

First-Order Methods
    Steepest descent. Convergence for convex and nonconvex cases. (A minimal sketch follows this outline.)
    Accelerated gradient. Convergence for convex case.
    Line search methods based on descent directions
    Conjugate gradient methods
    Conditional gradient for optimization over closed convex sets

Higher-order methods
    Newton's method
    Line-search Newton
    Trust-region Newton and cubic regularization
    Conjugate gradient-Newton
    Quasi-Newton methods
    Limited-memory quasi-Newton

Stochastic optimization
    Basic methods and their convergence properties
    Reduced-variance approaches

Differentiation
    Adjoint calculations
    Automatic differentiation

Least-squares and nonlinear equations
    Linear least squares: direct and iterative methods
    Nonlinear least squares: Gauss-Newton, Levenberg-Marquardt
    Newton's method for nonlinear equations
    Merit functions for nonlinear equations, and line searches

Optimization with linear constraints
    Normal cones to convex sets
    Farkas Lemma and first-order optimality conditions (KKT)
    Gradient projection algorithms
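As a concrete illustration of the first-order methods listed above, here is a minimal sketch of steepest descent with a backtracking (Armijo) line search, written in Python with NumPy. The quadratic test function, starting point, step-size constants, and stopping tolerance are all illustrative choices, not specifications from the course.

    # Minimal sketch (not course material): steepest descent with backtracking line search.
    import numpy as np

    A = np.diag([1.0, 10.0])          # ill-conditioned diagonal Hessian (illustrative)

    def f(x):
        # Strongly convex quadratic: f(x) = 0.5 * x' A x, minimized at the origin.
        return 0.5 * x @ A @ x

    def grad_f(x):
        return A @ x

    x = np.array([2.0, 1.0])          # arbitrary starting point
    for k in range(200):
        g = grad_f(x)
        if np.linalg.norm(g) < 1e-6:  # stop when the gradient is small
            break
        d = -g                        # steepest-descent direction
        alpha = 1.0
        # Backtracking: halve alpha until the Armijo sufficient-decrease condition holds.
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d

    print("iterations:", k, "solution:", x)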