Approximate dynamic programming: Solving the curses of dimensionality


Author: Warren B. Powell

Edition: 1

Series: Wiley Series in Probability and Statistics

Publisher: Wiley

Publishing Year: 2007

Language: English

ISBN: 0470171553, 9780470171554, 9780470182956

Pages: 487

Size: 3 MB (3255234 bytes)

A complete and accessible introduction to the real-world applications of approximate dynamic programming
With the growing levels of sophistication in modern-day operations, it is vital for practitioners to understand how to approach, model, and solve complex industrial problems. Approximate Dynamic Programming is the result of the author's decades of experience working in large industrial settings to develop practical, high-quality solutions to problems that involve making decisions in the presence of uncertainty. This groundbreaking book uniquely integrates four distinct disciplines (Markov decision processes, mathematical programming, simulation, and statistics) to demonstrate how to successfully model and solve a wide range of real-life problems using the techniques of approximate dynamic programming (ADP). The reader is introduced to the three curses of dimensionality that impact complex problems, and is shown how the post-decision state variable allows classical algorithmic strategies from operations research to be applied to complex stochastic optimization problems.
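To make the post-decision idea concrete, here is a minimal sketch (in Python) of a generic ADP forward pass organized around the post-decision state, illustrated on a toy inventory problem. The problem data and names (PRICE, COST, the uniform demand, the stepsize) are illustrative assumptions, not an example taken from the book; the three commented steps mirror the optimization/simulation/statistics decomposition highlighted below.

```python
import random

# Toy inventory problem illustrating a generic ADP forward pass around the
# post-decision state. All data (price, cost, capacity, demand distribution)
# are illustrative assumptions, not an example taken from the book.
PRICE, COST, GAMMA, CAPACITY = 4.0, 2.0, 0.9, 20
V = [0.0] * (CAPACITY + 1)  # lookup-table approximation over post-decision states

s, s_post_prev, revenue_prev = 0, None, 0.0
for n in range(1, 5001):
    # (1) Classical optimization: solve the deterministic decision problem
    #     at the current pre-decision state using the approximation V.
    x = max(range(CAPACITY + 1 - s), key=lambda q: -COST * q + V[s + q])
    v_hat = -COST * x + V[s + x]  # sampled value of the pre-decision state s

    # (3) Classical statistics: v_hat, plus the revenue observed on the way
    #     into s, is a sample of the value of the PREVIOUS post-decision
    #     state; smooth it into V with a declining stepsize.
    if s_post_prev is not None:
        alpha = 1.0 / n
        V[s_post_prev] = ((1 - alpha) * V[s_post_prev]
                          + alpha * (revenue_prev + GAMMA * v_hat))

    # (2) Classical simulation: step to the post-decision state, then sample
    #     the exogenous information (demand) to get the next pre-decision state.
    s_post_prev = s + x
    demand = random.randint(0, 10)
    sales = min(s_post_prev, demand)
    revenue_prev, s = PRICE * sales, s_post_prev - sales
```

Note that the decision problem in step (1) is deterministic given V, which is exactly what lets classical optimization tools be applied; the expectation over the exogenous demand never has to be computed explicitly.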
Designed as an introduction and assuming no prior training in dynamic programming of any form, Approximate Dynamic Programming contains dozens of algorithms that are intended to serve as a starting point in the design of practical solutions for real problems. The book provides detailed coverage of implementation challenges including: modeling complex sequential decision processes under uncertainty, identifying robust policies, designing and estimating value function approximations, choosing effective stepsize rules, and resolving convergence issues.
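Stepsize rules get a full chapter (Chapter 6) because the smoothing step above is only as good as its stepsize. As a small illustration, here is a sketch of two standard deterministic recipes; the tunable parameter a in the harmonic rule is an assumed value chosen purely for illustration.

```python
def one_over_n(n):
    """The 1/n rule: weights all observations equally (a running average);
    provably convergent, but often declines too quickly in practice."""
    return 1.0 / n

def harmonic(n, a=10.0):
    """Generalized harmonic rule a / (a + n - 1): still declines like 1/n
    asymptotically, but a larger a keeps the stepsize large for longer.
    The default a=10.0 is an illustrative choice, not a recommendation."""
    return a / (a + n - 1)

# Smoothing a stream of noisy observations into a single estimate:
estimate = 0.0
for n, observation in enumerate([10.0, 12.0, 11.0, 9.0], start=1):
    alpha = harmonic(n)
    estimate = (1 - alpha) * estimate + alpha * observation
```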
With a focus on modeling and algorithms in conjunction with the language of mainstream operations research, artificial intelligence, and control theory, Approximate Dynamic Programming:
Models complex, high-dimensional problems in a natural and practical way, drawing on years of industrial projects
Introduces and emphasizes the power of estimating a value function around the post-decision state, allowing solution algorithms to be broken down into three fundamental steps: classical simulation, classical optimization, and classical statistics
Presents a thorough discussion of recursive estimation, including fundamental theory and a number of issues that arise in the development of practical algorithms
Offers a variety of methods for approximating dynamic programs that have appeared in previous literature, but that have never been presented in the coherent format of a book
Motivated by examples from modern-day operations research, Approximate Dynamic Programming is an accessible introduction to dynamic modeling and a valuable guide for developing high-quality solutions to problems in operations research and engineering. The clear and precise presentation makes this an appropriate text for advanced undergraduate and beginning graduate courses, as well as a reference for researchers and practitioners. A companion Web site offers readers additional exercises, solutions to exercises, and data sets that reinforce the book's main concepts.

Table of contents:
Preface……Page 13
Acknowledgments……Page 17
1 The challenges of dynamic programming……Page 19
1.1 A dynamic programming example: a shortest path problem……Page 20
1.2 The three curses of dimensionality……Page 21
1.3 Some real applications……Page 24
1.4 Problem classes……Page 28
1.5 The many dialects of dynamic programming……Page 30
1.6 What is new in this book?……Page 32
1.7 Bibliographic notes……Page 34
2 Some illustrative models……Page 35
2.1 Deterministic problems……Page 36
2.2 Stochastic problems……Page 41
2.3 Information acquisition problems……Page 54
2.4 A simple modeling framework for dynamic programs……Page 58
Problems……Page 61
3 Introduction to Markov decision processes……Page 65
3.1 The optimality equations……Page 66
3.2 Finite horizon problems……Page 71
3.3 Infinite horizon problems……Page 73
3.4 Value iteration……Page 75
3.5 Policy iteration……Page 79
3.7 The linear programming method for dynamic programs……Page 81
3.8 Monotone policies……Page 82
3.9 Why does it work?……Page 88
3.10 Bibliographic notes……Page 103
Problems……Page 104
4 Introduction to approximate dynamic programming……Page 109
4.1 The three curses of dimensionality (revisited)……Page 110
4.2 The basic idea……Page 111
4.3 Sampling random variables……Page 118
4.4 ADP using the post-decision state variable……Page 119
4.5 Low-dimensional representations of value functions……Page 125
4.6 So just what is approximate dynamic programming?……Page 128
4.7 Experimental issues……Page 130
4.8 Dynamic programming with missing or incomplete models……Page 136
4.9 Relationship to reinforcement learning……Page 137
4.10 But does it work?……Page 138
4.11 Bibliographic notes……Page 140
Problems……Page 141
5 Modeling dynamic programs……Page 147
5.1 Notational style……Page 149
5.2 Modeling time……Page 150
5.3 Modeling resources……Page 153
5.4 The states of our system……Page 157
5.5 Modeling decisions……Page 165
5.6 The exogenous information process……Page 169
5.7 The transition function……Page 177
5.8 The contribution function……Page 184
5.9 The objective function……Page 187
5.10 A measure-theoretic view of information……Page 188
Problems……Page 191
6 Stochastic approximation methods……Page 197
6.1 A stochastic gradient algorithm……Page 199
6.2 Deterministic stepsize recipes……Page 201
6.3 Stochastic stepsizes……Page 208
6.4 Computing bias and variance……Page 213
6.5 Optimal stepsizes……Page 215
6.6 Some experimental comparisons of stepsize formulas……Page 222
6.7 Convergence……Page 226
6.8 Why does it work?……Page 228
6.9 Bibliographic notes……Page 238
Problems……Page 239
7 Approximating value functions……Page 243
7.1 Approximation using aggregation……Page 244
7.2 Approximation methods using regression models……Page 255
7.3 Recursive methods for regression models……Page 264
7.4 Neural networks……Page 271
7.5 Value function approximation for batch processes……Page 275
7.6 Why does it work?……Page 281
7.7 Bibliographic notes……Page 283
Problems……Page 285
8 ADP for finite horizon problems……Page 289
8.1 Strategies for finite horizon problems……Page 290
8.2 Q-learning……Page 294
8.3 Temporal difference learning……Page 297
8.4 Policy iteration……Page 300
8.5 Monte Carlo value and policy iteration……Page 302
8.6 The actor-critic paradigm……Page 303
8.7 Bias in value function estimation……Page 304
8.8 State sampling strategies……Page 308
8.9 Starting and stopping……Page 312
8.10 A taxonomy of approximate dynamic programming strategies……Page 314
8.12 Bibliographic notes……Page 316
Problems……Page 317
9 Infinite horizon problems……Page 321
9.2 Algorithmic strategies……Page 322
9.3 Stepsizes for infinite horizon problems……Page 331
9.4 Error measures……Page 333
9.6 Finite horizon models for steady-state applications……Page 335
9.8 Bibliographic notes……Page 337
Problems……Page 338
10 Exploration vs. exploitation
10.1 A learning exercise: the nomadic trucker……Page 341
10.2 Learning strategies……Page 344
10.3 A simple information acquisition problem……Page 348
10.4 Gittins indices and the information acquisition problem……Page 350
10.5 Variations……Page 355
10.6 The knowledge gradient algorithm……Page 357
10.7 Information acquisition in dynamic programming……Page 360
Problems……Page 364
11 Value function approximations for special functions……Page 369
11.1 Value functions versus gradients……Page 370
11.2 Linear approximations……Page 371
11.3 Piecewise linear approximations……Page 373
11.4 The SHAPE algorithm……Page 377
11.5 Regression methods……Page 380
11.6 Cutting planes……Page 383
11.7 Why does it work?……Page 395
11.8 Bibliographic notes……Page 401
Problems……Page 402
12 Dynamic resource allocation problems……Page 405
12.1 An asset acquisition problem……Page 406
12.2 The blood management problem……Page 410
12.3 A portfolio optimization problem……Page 419
12.4 A general resource allocation problem……Page 422
12.5 A fleet management problem……Page 434
12.6 A driver management problem……Page 439
Problems……Page 445
13 Implementation challenges
13.1 Will ADP work for your problem?……Page 451
13.2 Designing an ADP algorithm for complex problems……Page 452
13.3 Debugging an ADP algorithm……Page 454
13.4 Convergence issues……Page 455
13.5 Modeling your problem……Page 456
13.6 On-line vs. off-line models……Page 458
13.7 If it works, patent it!……Page 459
Index……Page 475
