ISE 416 - Dynamic Programming
Stochastic Dynamic Programming, or just Dynamic Programming, is the principle governing
how optimal sequential decisions are made in the face of uncertainty. Applications are found in
operations research (inventory management, supply chain, resource allocation, maintenance),
transportation (fleet operations), robotics (optimal control), economics (consumer models),
finance (option pricing, risk management), etc. The curse of dimensionality forces the modeler to adopt a compact,
state-based representation of the decision process. Approximate dynamic programming, however, leverages Monte Carlo
sampling and statistical modeling to tackle higher-dimensional problems.
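The Monte Carlo idea underlying approximate dynamic programming can be made concrete with a minimal sketch: estimating the discounted expected cumulative cost of a fixed policy by averaging sampled trajectories. The two-state chain, costs, transition probabilities, and discount factor below are illustrative assumptions, not material from the course.

```python
# A minimal sketch of Monte Carlo policy evaluation: estimate the
# discounted expected cumulative cost of a fixed policy by simulating
# trajectories. All numbers below are illustrative assumptions.
import random

random.seed(0)
gamma = 0.9                     # discount factor
cost = [1.0, 0.5]               # cost incurred in each of two states
P = [[0.9, 0.1], [0.2, 0.8]]    # P[s][s2]: transition probabilities under the fixed policy

def simulate(s, horizon=200):
    """Discounted cumulative cost of one sampled trajectory from state s."""
    total, disc = 0.0, 1.0
    for _ in range(horizon):
        total += disc * cost[s]
        disc *= gamma
        s = random.choices([0, 1], weights=P[s])[0]
    return total

# Average over many independent trajectories from state 0.
n = 2000
estimate = sum(simulate(0) for _ in range(n)) / n
print(estimate)
```

Averaging independent sampled trajectories sidesteps the need to enumerate the full state space, which is exactly what makes Monte Carlo methods attractive for higher-dimensional problems.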
The course introduces the dynamic programming modeling framework (the basic steps for modeling a sequential decision-making problem under uncertainty) and the main algorithmic approaches for solving stochastic dynamic programming models. It also covers the approximation techniques used to circumvent the curse of dimensionality, including stochastic optimization and machine learning.
ISE-316 Optimization Models and Applications is the listed prerequisite; it can be waived by filling out an online form sent upon request by the ISE graduate coordinator. Linear algebra, regression, and linear and convex optimization are used, but in practice in a black-box fashion, through MATLAB tools explained in class.
Why is it interesting for Probabilistic Modelers?
Decision processes are an important class of stochastic processes and play a major role in many design problems.
For instance, traffic patterns, operations, and all kinds of human behavior are best modeled as decision processes
that adapt to random events (e.g. realizations of demand, availability of resources) while using existing infrastructure
(e.g. locations of facilities) to optimize a certain objective (e.g. long-run expected service performance).
Correctly modeling probability distributions influenced by decision processes involves identifying an underlying objective and solving the dynamic optimization problem embedded in the decision process. One of the simplest yet most meaningful classes of decision processes is the class of Markov Decision Processes under a discounted expected cumulative cost objective, which are solved by dynamic programming. More advanced models involve risk-sensitive criteria, constraints, etc.; these are also covered in class given their importance in applications.
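How dynamic programming solves a discounted-cost Markov Decision Process can be sketched with value iteration on a toy two-state model. The states, actions, costs, transition probabilities, and discount factor below are illustrative assumptions, not examples taken from the course.

```python
# A minimal value-iteration sketch for a toy Markov Decision Process
# with a discounted expected cumulative cost objective.
# States: 0, 1. Actions: 0, 1. All numbers are illustrative assumptions.
cost = [[1.0, 4.0],   # cost[s][a]: immediate cost of action a in state s
        [3.0, 0.5]]
P = [[[0.9, 0.1], [0.2, 0.8]],   # P[s][a][s2]: transition probabilities
     [[0.5, 0.5], [0.1, 0.9]]]
gamma = 0.9  # discount factor

def value_iteration(cost, P, gamma, tol=1e-8):
    n = len(cost)
    V = [0.0] * n
    while True:
        # Bellman update: V(s) = min_a [ c(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
        V_new = [min(cost[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(n))
                     for a in range(len(cost[s])))
                 for s in range(n)]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new

V = value_iteration(cost, P, gamma)
# Greedy policy extracted from the converged value function.
policy = [min(range(2),
              key=lambda a: cost[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(2)))
          for s in range(2)]
print(V, policy)
```

Because the discount factor makes the Bellman update a contraction, the iteration converges to the unique fixed point, and the greedy policy read off from it is optimal for the toy model.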
Since its inception, the course has been offered each year during the Fall Semester. The schedule varies according to departmental-level decisions; in Fall 2016 the course met Monday and Wednesday, 4:30pm-5:45pm.