Provable Non-convex Optimization for Learning Parametric Models

Author: Kai Zhong (Ph. D.)
Publisher:
Total Pages: 866
Release: 2018
Genre:
ISBN:



Non-convex optimization plays an important role in recent advances in machine learning. A large number of machine learning tasks are performed by solving a non-convex optimization problem, which is generally NP-hard. Heuristics, such as stochastic gradient descent, are employed to solve non-convex problems and work decently well in practice despite the lack of general theoretical guarantees. In this thesis, we study a series of non-convex optimization strategies and prove that they lead to the globally optimal solution for several machine learning problems, including mixed linear regression, one-hidden-layer (convolutional) neural networks, non-linear inductive matrix completion, and low-rank matrix sensing. At a high level, we show that the non-convex objectives formulated in the above problems have a large basin of attraction around the global optima when the data has benign statistical properties. Therefore, local search heuristics, such as gradient descent or alternating minimization, are guaranteed to converge to the global optima if initialized properly. Furthermore, we show that spectral methods can efficiently initialize the parameters such that they fall into the basin of attraction. Experiments on synthetic datasets and real applications are carried out to justify our theoretical analyses and illustrate the superiority of our proposed methods.
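
To make the two-stage strategy in this abstract concrete, here is a minimal numerical sketch (not code from the thesis) for one of the listed problems, low-rank matrix sensing: a spectral method initializes the factor, and plain gradient descent on the non-convex factored objective then refines it. The problem sizes, step size, and iteration count are illustrative choices.

```python
# Sketch: spectral initialization + factored gradient descent for low-rank matrix sensing.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 2000                      # dimension, rank, number of measurements

U_true = rng.normal(size=(n, r))
M_true = U_true @ U_true.T                 # ground-truth PSD rank-r matrix
A = rng.normal(size=(m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2         # symmetric Gaussian sensing matrices
y = np.einsum('kij,ij->k', A, M_true)      # linear measurements <A_k, M*>

# Spectral initialization: top-r eigenpairs of (1/m) * sum_k y_k A_k.
M_hat = np.einsum('k,kij->ij', y, A) / m
w, V = np.linalg.eigh(M_hat)
idx = np.argsort(w)[::-1][:r]
U = V[:, idx] * np.sqrt(np.maximum(w[idx], 1e-12))

# Local search: gradient descent on f(U) = (1/2m) * sum_k (<A_k, U U^T> - y_k)^2.
step = 0.25 / np.linalg.norm(M_hat, 2)
for _ in range(500):
    res = np.einsum('kij,ij->k', A, U @ U.T) - y
    grad = 2.0 / m * np.einsum('k,kij->ij', res, A) @ U
    U -= step * grad

print('relative error:', np.linalg.norm(U @ U.T - M_true) / np.linalg.norm(M_true))
```

With enough Gaussian measurements the spectral estimate typically lands inside the basin of attraction, so the printed relative error is driven toward zero.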

Non-convex Optimization for Machine Learning

Author: Prateek Jain
Publisher: Foundations and Trends in Machine Learning
Total Pages: 218
Release: 2017-12-04
Genre: Machine learning
ISBN: 9781680833683



Non-convex Optimization for Machine Learning takes an in-depth look at the basics of non-convex optimization with applications to machine learning. It introduces the rich literature in this area and equips the reader with the tools and techniques needed to apply and analyze simple but powerful procedures for non-convex problems. Non-convex Optimization for Machine Learning is as self-contained as possible while not losing focus on the main topic of non-convex optimization techniques. The monograph begins with entire chapters devoted to a tutorial-like treatment of basic concepts in convex analysis and optimization, as well as their non-convex counterparts. It concludes with a look at four interesting applications in the areas of machine learning and signal processing, exploring how the non-convex optimization techniques introduced earlier can be used to solve these problems. For each of the topics discussed, the monograph also contains exercises and figures designed to engage the reader, as well as extensive bibliographic notes pointing toward classical works and recent advances. Non-convex Optimization for Machine Learning can be used for a semester-length course on the basics of non-convex optimization with applications to machine learning. Alternatively, individual portions, such as the chapter on sparse recovery or the one on the EM algorithm, can be cherry-picked for inclusion in a broader course. Courses in machine learning, optimization, and signal processing may all benefit from the inclusion of such topics.
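
As a taste of the "simple but powerful procedures" the blurb refers to, the sketch below (not taken from the monograph) implements projected gradient descent for sparse recovery, often called iterative hard thresholding: a gradient step on the least-squares objective followed by projection onto the non-convex set of s-sparse vectors. The dimensions and step size are illustrative.

```python
# Sketch: iterative hard thresholding for sparse recovery from Gaussian measurements.
import numpy as np

rng = np.random.default_rng(1)
n, d, s = 100, 400, 5                      # measurements, ambient dimension, sparsity

x_true = np.zeros(d)
x_true[rng.choice(d, s, replace=False)] = rng.normal(size=s)
A = rng.normal(size=(n, d)) / np.sqrt(n)   # Gaussian design with roughly unit-norm columns
y = A @ x_true

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

x = np.zeros(d)
eta = 1.0                                  # step size, reasonable since A^T A is near-isometric on sparse vectors
for _ in range(200):
    x = hard_threshold(x + eta * A.T @ (y - A @ x), s)

print('support recovered:', set(np.flatnonzero(x)) == set(np.flatnonzero(x_true)))
print('relative error:', np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```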

Topics in Non-convex Optimization and Learning

Author: Hongyi Zhang (Ph. D.)
Publisher:
Total Pages: 186
Release: 2019
Genre:
ISBN:



Non-convex optimization and learning play an important role in data science and machine learning, yet so far they elude our understanding in many aspects. In this thesis, I study two important aspects of non-convex optimization and learning: Riemannian optimization and deep neural networks. In the first part, I develop an iteration complexity analysis for Riemannian optimization, i.e., optimization problems defined on Riemannian manifolds. By bounding the distortion introduced by the curvature of the metric, iteration complexities of Riemannian (stochastic) gradient descent methods are derived. I also show that some fast first-order methods in Euclidean space, such as Nesterov's accelerated gradient descent (AGD) and stochastic variance reduced gradient (SVRG), have Riemannian counterparts that are also fast under certain conditions. In the second part, I challenge two common practices in deep learning, namely empirical risk minimization (ERM) and normalization. Specifically, I show that (1) training on convex combinations of samples improves model robustness and generalization, and (2) a good initialization is sufficient for training deep residual networks without normalization. The method in (1), called mixup, is motivated by a data-dependent Lipschitzness regularization of the network. The method in (2), called ZeroInit, makes the scale of the network's updates invariant to its depth at initialization.
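
The mixup idea mentioned in (1) is easy to state in code: each training step draws a random convex combination of two examples and of their labels. The sketch below is an illustrative reimplementation, not the thesis code; the Beta(alpha, alpha) mixing distribution and the helper name mixup_batch are conventional choices rather than details taken from the text.

```python
# Sketch: forming a mixup batch (convex combinations of sample pairs and of their labels).
import numpy as np

rng = np.random.default_rng(0)

def mixup_batch(x, y, alpha=0.2):
    """Return convex combinations of a batch and a shuffled copy of itself."""
    lam = rng.beta(alpha, alpha)           # mixing coefficient lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]  # labels are mixed with the same lambda
    return x_mix, y_mix

# Toy usage: x is a batch of inputs, y a batch of one-hot labels.
x = rng.normal(size=(8, 32))
y = np.eye(4)[rng.integers(0, 4, size=8)]
x_mix, y_mix = mixup_batch(x, y)
# The mixed batch (x_mix, y_mix) is fed to the usual training step in place of
# (x, y); the loss is computed against the soft labels y_mix.
```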

Non-Linear Parametric Optimization

Author: B. Bank
Publisher: Birkhäuser
Total Pages: 227
Release: 2013-12-21
Genre: Science
ISBN: 3034863284



Modern Nonconvex Nondifferentiable Optimization

Author: Ying Cui
Publisher: Society for Industrial and Applied Mathematics (SIAM)
Total Pages: 0
Release: 2022
Genre: Convex functions
ISBN: 9781611976731



"This monograph serves present and future needs where nonconvexity and nondifferentiability are inevitably present in the faithful modeling of real-world applications of optimization"--

First-order and Stochastic Optimization Methods for Machine Learning

Author: Guanghui Lan
Publisher: Springer Nature
Total Pages: 591
Release: 2020-05-15
Genre: Mathematics
ISBN: 3030395685



This book covers not only foundational material but also the most recent progress made during the past few years in machine learning algorithms. In spite of intensive research and development in this area, there has been no systematic treatment introducing the fundamental concepts and recent progress in machine learning algorithms, especially those based on stochastic optimization methods, randomized algorithms, nonconvex optimization, distributed and online learning, and projection-free methods. This book will benefit a broad audience in the machine learning, artificial intelligence, and mathematical programming communities by presenting these recent developments in a tutorial style, starting from the basic building blocks and progressing to the most carefully designed and complicated algorithms for machine learning.
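
As a reminder of the basic building block such treatments start from, here is a minimal, illustrative stochastic gradient method (not an excerpt from the book) applied to a least-squares problem, using single-sample unbiased gradient estimates and a small constant step size.

```python
# Sketch: stochastic gradient descent on f(x) = (1/2n) * ||Ax - b||^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.normal(size=(n, d)) / np.sqrt(d)   # rows a_i with roughly unit norm
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

x = np.zeros(d)
step = 0.05
for _ in range(20000):
    i = rng.integers(n)                    # sample one index uniformly at random
    g = (A[i] @ x - b[i]) * A[i]           # unbiased estimate of the full gradient
    x -= step * g

print('relative error:', np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```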

Non-Linear Parametric Optimization

Author: B. Bank
Publisher: Walter de Gruyter GmbH & Co KG
Total Pages: 228
Release: 1982-12-31
Genre: Mathematics
ISBN: 3112706919



No detailed description available for "Non-Linear Parametric Optimization".

Parametric Optimization: Singularities, Pathfollowing and Jumps

Author: J. Guddat
Publisher: Vieweg+Teubner Verlag
Total Pages: 191
Release: 1990-12-01
Genre: Technology & Engineering
ISBN: 9783519021124



This volume is intended for readers who, whether they be mathematicians, workers in other fields, or students, are familiar with the basic approaches and methods of mathematical optimization. The subject matter is concerned with optimization problems in which some or all of the individual data involved depend on one parameter. Such problems are called one-parametric optimization problems. Solution algorithms for such problems are of interest for several reasons. We consider here mainly applications of solution algorithms for one-parametric optimization problems in the following fields: (i) globally convergent algorithms for nonlinear, in particular non-convex, optimization problems, (ii) global optimization, and (iii) multi-objective optimization. The main tools for solving a one-parametric optimization problem are the so-called pathfollowing methods (also called continuation or homotopy methods) (cf. Chapters 3 and 4). Classical methods on the set of stationary points are extended to the set of all generalized critical points. This is helpful because the path of stationary points may stop within the former set, whereas it can be continued in the broader set of generalized critical points. However, it is shown that pathfollowing methods alone are not successful in every case. This is why we propose to jump from one connected component in the set of local minimizers (or generalized critical points) to another (Chapter 5).
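
A toy computation (not from the book) illustrates both pathfollowing and the difficulty it can run into. For the one-parametric problem f(x, t) = x^4/4 - x^2/2 + t*x, the code below follows the branch of stationary points starting at the local minimizer x = 1 by warm-started Newton corrections as t increases; near t ≈ 0.38 the branch ends in a fold, which is exactly the situation in which a jump to another connected component becomes necessary.

```python
# Sketch: naive pathfollowing on the stationarity condition f_x(x, t) = x^3 - x + t = 0.
import numpy as np

def f_x(x, t):
    return x**3 - x + t        # stationarity condition in x

def f_xx(x, t):
    return 3 * x**2 - 1        # second derivative in x

x = 1.0                        # local minimizer of f(., 0) on the right-hand branch
for t in np.linspace(0.0, 0.6, 61):
    x_prev = x
    for _ in range(100):       # corrector: plain Newton iteration, warm-started at x_prev
        x = x - f_x(x, t) / f_xx(x, t)
    if abs(f_x(x, t)) > 1e-6 or abs(x - x_prev) > 0.5 or f_xx(x, t) <= 0:
        # The followed branch of local minimizers ends in a fold; continuing
        # would require a jump to another connected component.
        print(f'branch can no longer be followed near t = {t:.2f}')
        break
    print(f't = {t:.2f}  stationary point x = {x:.4f}')
```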

Convex Optimization Meets Formal Methods

Author: Murat Cubuktepe
Publisher:
Total Pages: 456
Release: 2021
Genre:
ISBN:



This dissertation studies the applicability of convex optimization to the formal verification and synthesis of systems that exhibit randomness or stochastic uncertainties. These systems can be represented by a general family of uncertain, partially observable, and parametric Markov decision processes (MDPs). Such models have found applications in artificial intelligence, planning, autonomy, and control theory and can accurately characterize dynamic, uncertain environments. The synthesis of policies for this family of models has long been regarded as theoretically and empirically intractable. The goal of this dissertation is to develop theoretically sound and computationally efficient synthesis algorithms that provably satisfy formal high-level task specifications in temporal logic. The first part develops convex-optimization-based techniques for parameter synthesis in parametric Markov decision processes, where the transition probabilities are functions of real-valued parameters. The second part builds on the formulations of the first part and utilizes sampling-based methods for verification and optimization in uncertain MDPs, which allow the probabilistic transition function to belong to a so-called uncertainty set. The third part develops inverse reinforcement learning algorithms in partially observable MDPs to address several limitations of existing techniques, which do not take the information asymmetry between the expert and the agent into account. Finally, the fourth part synthesizes policies for uncertain partially observable MDPs, in which both the probabilistic transition and observation functions may be uncertain. In each part, a unifying theme is that the resulting algorithms approximate the underlying optimization problem by a convex optimization problem. Additionally, by combining techniques from convex optimization and formal methods, the algorithms come with strong performance guarantees with respect to the task specifications. The computational efficiency and applicability of the resulting algorithms are demonstrated in numerous domains, such as aircraft collision avoidance, spacecraft and unmanned aerial vehicle motion planning, and joint active perception and planning in urban environments.
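
To make the parameter-synthesis question from the first part concrete, here is a deliberately tiny, brute-force illustration (it is not the dissertation's convex-optimization approach): a three-state parametric Markov chain whose transition probabilities are functions of a parameter p, for which we compute the probability of reaching a goal state and check a threshold specification. All states, probabilities, and the threshold are invented for the example.

```python
# Sketch: checking a reachability threshold for a parametric Markov chain by sweeping p.
import numpy as np

def reach_probability(p, n_iter=500):
    # States: 0 = start, 1 = goal (absorbing), 2 = fail (absorbing).
    # From the start state: reach the goal with probability p, fail with
    # probability (1 - p)/2, stay in the start state with probability (1 - p)/2.
    P = np.array([[(1 - p) / 2, p,   (1 - p) / 2],
                  [0.0,         1.0, 0.0],
                  [0.0,         0.0, 1.0]])
    v = np.array([0.0, 1.0, 0.0])          # indicator of the goal state
    for _ in range(n_iter):                # fixed-point iteration for reachability values
        v = P @ v
    return v[0]

threshold = 0.9                            # specification: Pr(reach goal) >= 0.9
feasible = [p for p in np.linspace(0.01, 0.99, 99)
            if reach_probability(p) >= threshold]
print(f'parameters with Pr(reach goal) >= {threshold}: '
      f'p in [{feasible[0]:.2f}, {feasible[-1]:.2f}]')
```

Grid search scales poorly; the point of the convex formulations described above is to answer such synthesis questions without enumerating parameter values.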

Inverse Parametric Optimization For Learning Utility Functions From Optimal and Satisficing Decisions

Author: Elaheh Hosseiniiraj
Publisher:
Total Pages: 0
Release: 2021
Genre:
ISBN:



Inverse optimization is a method for determining the parameters of an optimization model from observed decisions. Despite being a learning method, inverse optimization is not part of the data scientist's toolkit in practice, especially as many general-purpose machine learning packages are widely available as an alternative. In this dissertation, we examine and remedy two aspects of inverse optimization that prevent it from becoming more widely used by practitioners: the alternative-based approach in inverse optimization modeling and the assumption that observations must be optimal. In the first part of the dissertation, we position inverse optimization as a learning method in analogy to supervised machine learning, providing a starting point toward identifying the characteristics that make inverse optimization more efficient than general out-of-the-box supervised machine learning approaches, with a focus on the problem of imputing the objective function of a parametric convex optimization problem. The second part provides an attribute-based perspective on inverse optimization modeling. Inverse attribute-based optimization imputes the importance of the decision attributes that result in minimally suboptimal decisions, rather than the importance of the decisions themselves. This perspective expands the range of problems to which inverse optimization applies. We demonstrate that it facilitates the application of inverse optimization to assortment optimization, where changing product selections is a defining feature and accurate predictions of demand are essential. Finally, in the third part of the dissertation, we extend inverse parametric optimization to a more general setting in which the assumption that the observations are optimal is relaxed to requiring only feasibility. The proposed inverse satisfaction method can deal with both feasible and minimally suboptimal solutions. We mathematically prove that the inverse satisfaction method provides statistically consistent estimates of the unknown parameters and can learn from both optimal and feasible decisions.
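
A minimal sketch of the learning problem described above, under simplifying assumptions: each observed decision is a choice among finitely many alternatives described by attribute vectors, and we impute a linear utility over those attributes so that every observed choice is (minimally sub)optimal. The formulation and the SciPy linear program below are illustrative, not the dissertation's models.

```python
# Sketch: imputing linear utility weights from observed choices via a small LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, K, T = 4, 5, 30                          # attributes, alternatives per decision, observations
c_true = rng.dirichlet(np.ones(d))          # unknown utility weights we try to impute
alts = rng.random((T, K, d))                # attribute vectors of the alternatives
chosen = (alts @ c_true).argmax(axis=1)     # observed decisions: utility-maximizing choices

# Variables z = (c, e), with e_t the suboptimality slack of observation t.
# minimize sum_t e_t  s.t.  c.(a_tj - a_t,chosen) <= e_t,  sum(c) = 1,  c >= 0, e >= 0.
obj = np.concatenate([np.zeros(d), np.ones(T)])
A_ub, b_ub = [], []
for t in range(T):
    for j in range(K):
        row = np.zeros(d + T)
        row[:d] = alts[t, j] - alts[t, chosen[t]]
        row[d + t] = -1.0
        A_ub.append(row)
        b_ub.append(0.0)
A_eq = [np.concatenate([np.ones(d), np.zeros(T)])]
res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=[1.0], bounds=(0, None))
c_hat = res.x[:d]
print('true weights   :', np.round(c_true, 3))
print('imputed weights:', np.round(c_hat, 3))
```

The imputed weights need not coincide with the true ones: any weight vector that rationalizes all observed choices solves the LP, and the set of such vectors shrinks as more decisions are observed.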