Optimal Control, Expectations and Uncertainty
Author: Sean Holly
Publisher: Cambridge University Press
Total Pages: 258
Release: 1989-07-20
Genre: Business & Economics
ISBN: 0521264448


An examination of how the rational expectations revolution and game theory have enhanced the understanding of how an economy functions.

Uncertain Optimal Control
Author: Yuanguo Zhu
Publisher: Springer
Total Pages: 211
Release: 2018-08-29
Genre: Technology & Engineering
ISBN: 9811321345


This book introduces the theory and applications of uncertain optimal control and establishes two types of models: expected value uncertain optimal control and optimistic value uncertain optimal control. Both models have continuous-time and discrete-time forms and are solved by dynamic programming. The theory covers equations of optimality, uncertain bang-bang optimal control, optimal control of switched uncertain systems, and optimal control of uncertain systems with time delay. Applications include portfolio selection, engineering, and games. The book is a useful resource for researchers, engineers, and students in mathematics, cybernetics, operations research, industrial engineering, artificial intelligence, economics, and management science.
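To make the setup concrete, here is a minimal sketch of the expected value model in the continuous-time case. The notation is generic placeholder notation, not necessarily the book's own: f is a running reward, G a terminal reward, and C_s the canonical (Liu) process from uncertainty theory.

```latex
% Expected value uncertain optimal control, continuous-time sketch.
% X_s: state, u_s: control, C_s: canonical (Liu) process from
% uncertainty theory; E denotes the expected value operator.
\[
  J(t,x) \;=\; \sup_{u}\;
    \mathrm{E}\!\left[\int_t^T f(s, X_s, u_s)\,\mathrm{d}s
      + G(T, X_T)\right]
\]
\[
  \text{subject to}\quad
  \mathrm{d}X_s = \mu(s, X_s, u_s)\,\mathrm{d}s
    + \sigma(s, X_s, u_s)\,\mathrm{d}C_s,
  \qquad X_t = x .
\]
```

Dynamic programming then characterizes J through an equation of optimality analogous to the Hamilton-Jacobi-Bellman equation of stochastic control; the optimistic value model replaces the expected value operator with an optimistic value (quantile-type) criterion.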

Suboptimal Control Policies Via Convex Optimization
Author: Brendan O'Donoghue
Publisher:
Total Pages:
Release: 2012
Genre:
ISBN:


In this dissertation we consider several suboptimal policies for the stochastic control problem with a discounted objective and full state information. In the general case this problem is difficult to solve exactly, but solutions can be found for certain special cases. When the state and action spaces are finite, for example, the problem is readily solved [13]. Another such case is when the state and action spaces are finite-dimensional real vector spaces, the system dynamics are linear, the cost function is convex quadratic, and there are no constraints on the action or the state. In this case the optimal control policy is affine in the state variable, with coefficients that are readily computable [13, 16, 25, 98].

One general method for finding the optimal control policy is dynamic programming (DP). DP relies on characterizing the value function of the stochastic control problem: the value function evaluated at a state is the expected cost incurred by an optimal policy starting from that state, and the optimal policy can be expressed as the solution of an optimization problem involving the value function [13, 16, 18]. However, due to the 'curse of dimensionality', even representing the value function can be intractable when the state or action spaces are infinite or, as a practical matter, when the number of states or actions is very large. Even when the value function can be represented, evaluating the optimal policy may still be intractable. In such cases a common alternative is approximate dynamic programming (ADP) [19, 143, 179], in which we replace the true value function with an approximate value function in the expression for the optimal policy. The goal is to choose the approximate value function so that the performance of the resulting policy is close to optimal, or at least good.

In the first part of this dissertation we develop two ADP control policies, which we refer to as the min-max ADP policy and the iterated approximate value function (AVF) policy. Both policies rely on our ability to parameterize a family of lower bounds on the true value function of the stochastic control problem. The condition we use to parameterize this family is related to the 'linear programming approach' to ADP, first introduced by Manne in 1960 [118] and extended to approximate dynamic programming in [47, 159]. The basic idea is that any function satisfying the Bellman inequality is a pointwise lower bound on the true value function [13, 16]. The min-max ADP policy uses the pointwise supremum of the family of lower bounds as a surrogate value function; evaluating the control policy then involves the solution of a min-max or saddle-point problem. For the iterated AVF policy we first develop a sequence of approximate value functions optimized along a trajectory of states, and then perform control at each time step using the corresponding member of the sequence. The trajectory and the approximate value functions are generated simultaneously, as the solutions and dual variables of a single convex optimization problem. For the class of problems we consider, finding the control action under either policy requires solving a convex optimization problem: the min-max ADP policy requires the solution of a semidefinite program (SDP) [26, 30] at each time step, while the iterated AVF policy requires solving a single SDP offline and then a much smaller convex problem at each iteration.
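The unconstrained linear-quadratic special case mentioned in the abstract is easy to make concrete. The following Python sketch is illustrative only, not taken from the dissertation; the matrices A, B, Q, R and the discount factor are made-up data. It computes the optimal state-feedback gain for a discounted, unconstrained LQR problem by value iteration on the Riccati recursion:

```python
import numpy as np

# Discounted, unconstrained LQR: value iteration on the Riccati
# recursion. All problem data are made-up for illustration.
np.random.seed(0)
n, m, gamma = 4, 2, 0.95
A = np.random.randn(n, n)
A /= np.max(np.abs(np.linalg.eigvals(A)))   # normalize spectral radius to 1
B = np.random.randn(n, m)
Q = np.eye(n)          # state cost (positive semidefinite)
R = 0.1 * np.eye(m)    # input cost (positive definite)

P = np.zeros((n, n))   # quadratic value function V(x) = x' P x
for _ in range(1000):
    # Bellman update: P <- Q + gamma * A'P(A - B G), where
    # G = (R + gamma * B'PB)^{-1} gamma * B'PA is the feedback gain.
    G = np.linalg.solve(R + gamma * B.T @ P @ B, gamma * B.T @ P @ A)
    P_next = Q + gamma * A.T @ P @ (A - B @ G)
    if np.max(np.abs(P_next - P)) < 1e-10:
        P = P_next
        break
    P = P_next

K = -np.linalg.solve(R + gamma * B.T @ P @ B, gamma * B.T @ P @ A)
print("optimal feedback gain K (policy u = K x):")
print(K)
```

With zero-mean additive noise and no linear cost terms the optimal policy is linear, u = Kx; in the general affine case an offset term appears, computed in the same way.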
Model predictive control (MPC) is a widespread technique for generating a suboptimal control policy that often performs very well in practice [37, 64, 76, 105, 117, 125, 162, 184]. In the second part of this dissertation we introduce a new algorithm for solving optimal control problems, of which model predictive control is a special case. The algorithm, which we refer to as operator splitting for control (OSC), solves MPC problems quickly and robustly. In many cases the resulting algorithm can be implemented in fixed-point arithmetic and is thus suitable for embedded applications. The algorithm relies on an operator splitting technique referred to as the alternating direction method of multipliers (ADMM), or as Douglas-Rachford (D-R) splitting [28, 57, 62, 63, 70].

The third part of this document investigates the efficacy and computational burden of the suboptimal policies developed in earlier sections through an in-depth multi-period portfolio optimization example [1, 55, 94, 141, 166, 191]. We exhibit a lower bound on the performance (our problem statement involves minimizing an objective) against which we compare two of the policies detailed in the previous chapters. We present timing results and demonstrate that, for several numerical instances, the performance of both policies is very close to the lower bound and thus very close to optimal.
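For readers unfamiliar with MPC, the receding-horizon idea is easy to state in code. The sketch below is a generic box-constrained linear-quadratic MPC loop written with CVXPY; it illustrates plain MPC only, solved with a general-purpose convex solver rather than the dissertation's OSC/ADMM method, and all problem data are made-up:

```python
import numpy as np
import cvxpy as cp

# Generic receding-horizon MPC loop. Illustrative only: this is not the
# OSC/ADMM method, and the problem data (A, B, costs, bounds, horizon)
# are made-up.
np.random.seed(1)
n, m, T = 4, 2, 10                       # state dim, input dim, horizon
A = np.random.randn(n, n)
A /= np.max(np.abs(np.linalg.eigvals(A)))
B = np.random.randn(n, m)
u_max = 0.5                              # box constraint on the inputs

x = cp.Variable((n, T + 1))
u = cp.Variable((m, T))
x0 = cp.Parameter(n)                     # current state, reset each step

cost, constr = 0, [x[:, 0] == x0]
for t in range(T):
    cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
    constr += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
               cp.norm(u[:, t], "inf") <= u_max]
prob = cp.Problem(cp.Minimize(cost), constr)

state = np.ones(n)
for k in range(20):                      # closed-loop simulation
    x0.value = state
    prob.solve()
    u0 = u[:, 0].value                   # MPC applies the first planned input
    state = A @ state + B @ u0 + 0.01 * np.random.randn(n)
print("final state norm:", np.linalg.norm(state))
```

At each step the same fixed-horizon problem is re-solved from the new state; only the parameter x0 changes between solves, which is one reason specialized operator-splitting solvers can be made fast and embedded-friendly.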

Illinois Agricultural Economics Staff Paper
Author: University of Illinois at Urbana-Champaign. Dept. of Agricultural Economics
Publisher:
Total Pages: 458
Release: 1977
Genre:
ISBN:


Optimization and Optimal Control
Author: Panos M. Pardalos
Publisher: World Scientific
Total Pages: 380
Release: 2003
Genre: Mathematics
ISBN: 9812775366


This volume presents the latest advances in optimization and optimal control, two central areas of applied mathematics. It covers a variety of topics in optimization, optimal control, and operations research.

Applied Optimal Control
Author: Alain Bensoussan
Publisher: North-Holland
Total Pages: 216
Release: 1978
Genre: Business & Economics
ISBN:


Cybernetics And Systems '94 - Proceedings Of The 12th European Meeting On Cybernetics And Systems Research (In 2 Volumes)
Author: Robert Trappl
Publisher: World Scientific
Total Pages: 1964
Release: 1994-03-15
Genre:
ISBN: 9814550949


The papers in these volumes reflect the most recent research findings in cybernetics and systems research. They were selected from 298 draft final papers submitted to the conference by authors from more than 30 countries on five continents.