Tuesday, December 13, 2016 at 11:45am - 12:45pm
Andrzej Ruszczynski, Rutgers University: We shall focus on modeling risk in dynamical systems and discuss fundamental properties of dynamic measures of risk. Special attention will be paid to the local property and the property of time consistency. We shall then turn to risk-averse control of discrete-time Markov systems: we shall refine the concept of time consistency for such systems, introduce the class of Markovian risk measures, and derive their structure. This will allow us to derive a risk-averse counterpart of the dynamic programming equations. Next, we shall extend these ideas to partially observable systems and continuous-time Markov chains, deriving the structure of risk measures and dynamic programming equations in these cases as well. In the last part of the talk, we shall discuss risk-averse control of diffusion processes and present a risk-averse counterpart of the Hamilton-Jacobi-Bellman equation. Finally, we shall review some solution methods for risk-averse control problems.
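To give a flavor of the risk-averse dynamic programming equations mentioned in the abstract, the following is an illustrative sketch in standard notation (the symbols v_t, c_t, U(x), Q, and sigma_t are not from the abstract itself): the conditional expectation in the classical finite-horizon Bellman equation is replaced by a one-step conditional (Markovian) risk measure.

```latex
% Classical finite-horizon Bellman equation for a Markov decision process:
% the cost-to-go is evaluated by conditional expectation.
v_t(x) = \min_{u \in U(x)}
  \Big\{ c_t(x,u)
  + \mathbb{E}\big[\, v_{t+1}(X_{t+1}) \,\big|\, X_t = x,\ u \,\big] \Big\},
\qquad v_T(x) = c_T(x).

% Risk-averse counterpart (illustrative): the expectation is replaced by a
% one-step conditional Markovian risk measure \sigma_t applied to the
% value function v_{t+1} under the controlled transition kernel Q(x,u).
v_t(x) = \min_{u \in U(x)}
  \Big\{ c_t(x,u)
  + \sigma_t\big( v_{t+1},\, x,\, Q(x,u) \big) \Big\},
\qquad v_T(x) = c_T(x).
```

When sigma_t is the conditional expectation, the second equation reduces to the first; choosing, for example, a mean-semideviation or Average Value-at-Risk mapping yields a genuinely risk-averse policy.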
Bio: Andrzej Ruszczynski received his PhD and habilitation degrees in control engineering from Warsaw University of Technology in 1976 and 1983, respectively. He has held positions at Warsaw University of Technology (Poland), the University of Zurich (Switzerland), the International Institute for Applied Systems Analysis (Laxenburg, Austria), Princeton University, the University of Wisconsin-Madison, and Rutgers University. Dr. Ruszczynski is one of the creators of and main contributors to the field of risk-averse optimization. He is the author of "Nonlinear Optimization" (Princeton University Press, 2006), co-author of "Lectures on Stochastic Programming" (Society for Industrial and Applied Mathematics, 2009) and "Stochastic Programming" (Elsevier, 2003), and author of more than 100 articles in the area of optimization. Dr. Ruszczynski has been a plenary speaker at several major international conferences and has held positions in large scientific societies.