Optimization
06 Dec 2022 00:27
A more than usually inadequate placeholder.
See also: Computation; Control Theory; Decision Theory; Economics; Evolutionary Computation; Learning in Games; Learning Theory; Low-Regret Learning; Math I Ought to Learn; Planned Economies; Stochastic Approximation
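Pending a real write-up, one concrete anchor for the reading list below: most of it elaborates on iterative first-order methods. A minimal sketch of gradient descent on a smooth convex function (the quadratic objective, step size, and iteration count here are all arbitrary illustrative choices, not anything canonical):

```python
# Minimal gradient descent on a smooth convex function, f(x) = (x - 3)^2.
# Illustrative only: step size and iteration count are arbitrary choices.

def gradient_descent(grad, x0, step=0.1, iters=100):
    """Iterate x <- x - step * grad(x); return the final point."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# d/dx (x - 3)^2 = 2 * (x - 3), so the minimizer is x = 3.
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

For this objective each step contracts the error by a constant factor, which is the geometric ("linear") convergence rate that the strongly-convex analyses in the references make precise.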
- Recommended, big picture:
- Aharon Ben-Tal and Arkadi Nemirovski, Lectures on Modern Convex Optimization [PDF via Prof. Nemirovski]
- Cristopher Moore and Stephan Mertens, The Nature of Computation [Cris and Stephan were kind enough to let me read this in manuscript; it's magnificent. Review: Intellects Vast and Warm and Sympathetic]
- Recommended, close-ups:
- Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, Martin J. Wainwright, "Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization", arxiv:1009.0571
- Aureli Alabert, Alessandro Berti, Ricard Caballero, Marco Ferrante, "No-Free-Lunch Theorems in the continuum", arxiv:1409.2175
- Léon Bottou and Olivier Bousquet, "The Tradeoffs of Large Scale Learning", in Sra et al. (eds.), below [PDF reprint via Dr. Bottou]
- Venkat Chandrasekaran and Michael I. Jordan, "Computational and Statistical Tradeoffs via Convex Relaxation", Proceedings of the National Academy of Sciences (USA) 110 (2013): E1181--E1190, arxiv:1211.1073
- Paul Mineiro, Nikos Karampatziakis, "Loss-Proportional Subsampling for Subsequent ERM", arxiv:1306.1840
- Maxim Raginsky, Alexander Rakhlin, "Information-based complexity, feedback and dynamics in convex programming", arxiv:1010.2285
- Suvrit Sra, Sebastian Nowozin and Stephen J. Wright (eds.), Optimization for Machine Learning
- Martin L. Weitzman, Income, Wealth, and the Maximum Principle
- Margaret H. Wright, "The interior-point revolution in optimization: History, recent developments, and lasting consequences", Bulletin of the American Mathematical Society (New Series) 42 (2005): 39--56
- To read:
- Dennis Amelunxen, Martin Lotz, Michael B. McCoy, Joel A. Tropp, "Living on the edge: Phase transitions in convex programs with random data", arxiv:1303.6672
- G. Ausiello, Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties
- Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, "Optimization with Sparsity-Inducing Penalties", Foundations and Trends in Machine Learning 4 (2011): 1--106, arxiv:1108.0775
- Aharon Ben-Tal, Laurent El Ghaoui and Arkadi Nemirovski, Robust Optimization
- Dimitri P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods [PDF reprint via Prof. Bertsekas]
- Yatao Bian, Xiong Li, Yuncai Liu, Ming-Hsuan Yang, "Parallel Coordinate Descent Newton Method for Efficient $\ell_1$-Regularized Minimization", arxiv:1306.4080
- Jan Brinkhuis and Vladimir Tikhomirov, Optimization: Insights and Applications
- Sébastien Bubeck, Introduction to Online Optimization
- Sébastien Bubeck, "Convex Optimization: Algorithms and Complexity", Foundations and Trends in Machine Learning 8 (2015): 231--357, arxiv:1405.4980
- C. Cartis, N. I. M. Gould and Ph. L. Toint, "On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization" [PDF preprint via Dr. Cartis]
- Giuseppe C. Calafiore and Laurent El Ghaoui, Optimization Models
- Miguel Á. Carreira-Perpiñán and Weiran Wang, "Distributed optimization of deeply nested systems", arxiv:1212.5921 [Remarkable if true]
- Paulo Cortez, Modern Optimization with R
- Giorgio Giorgi and Tinne Hoff Kjeldsen (eds.), Traces and Emergence of Nonlinear Programming
- George B. Dantzig, Linear Programming and Extensions
- B. Guenin, J. Könemann and L. Tunçel, A Gentle Introduction to Optimization
- I. Gumowski and C. Mira, Optimization in Control Theory and Practice
- Elad Hazan, Satyen Kale, "Projection-free Online Learning", arxiv:1206.4657
- Philipp Hennig and Martin Kiefel, "Quasi-Newton Methods: A New Direction", arxiv:1206.4602
- Prateek Jain, Purushottam Kar, "Non-convex Optimization for Machine Learning", arxiv:1712.07897 (195 pp. review)
- Adel Javanmard, Andrea Montanari, and Federico Ricci-Tersenghi, "Phase transitions in semidefinite relaxations", Proceedings of the National Academy of Sciences 113 (2016): E2218--E2223
- Soeren Laue, "A Hybrid Algorithm for Convex Semidefinite Optimization", arxiv:1206.4608
- Mehrdad Mahdavi, Rong Jin, "MixedGrad: An O(1/T) Convergence Rate Algorithm for Stochastic Smooth Optimization", arxiv:1307.7192
- Mehrdad Mahdavi, Rong Jin, Tianbao Yang, "Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints", arxiv:1111.6082
- N. Parikh and S. Boyd, "Proximal Algorithms", Foundations and Trends in Optimization 1 (2014): 123--231 [PDF reprint and other resources via Prof. Boyd]
- Debasish Roy and G. Visweswara Rao, Stochastic Dynamics, Filtering and Optimization
- Andrzej Ruszczynski, Nonlinear Optimization
- Ohad Shamir, Tong Zhang, "Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes", arxiv:1212.1824
- James C. Spall, Introduction to Stochastic Search and Optimization
- Rangarajan K. Sundaram, A First Course in Optimization Theory
- Rachael Tappenden, Peter Richtárik, Jacek Gondzio, "Inexact Coordinate Descent: Complexity and Preconditioning", arxiv:1304.5530
- William Thomas, Rational Action: The Sciences of Policy in Britain and America, 1940--1960
- Matthew D. Zeiler, "ADADELTA: An Adaptive Learning Rate Method", arxiv:1212.5701
- Lijun Zhang, Tianbao Yang, Rong Jin, Xiaofei He, "O(log T) Projections for Stochastic Optimization of Smooth and Strongly Convex Functions", arxiv:1304.0740
- Hui Zhang, Wotao Yin, "Gradient methods for convex minimization: better rates under weaker conditions", arxiv:1303.4645