The project pursues both advancing knowledge and improving industrial applications by leveraging model-based optimal control approaches to scheduling, planning, and learning under uncertainty, combining approximate dynamic programming concepts with state-of-the-art nonlinear, stochastic, and robust model predictive control tools. The goal is to extend the range and applicability of planning techniques in industrial systems (scheduling, robot trajectories, …) by, on the one hand, incorporating as much low-level physical modelling as possible and, on the other hand, guiding real-time optimisation (nonlinear predictive control, subject to computation-time and memory limits) with dynamic programming value function estimates. Model identification from available data, together with estimation of the model's uncertainty, will be instrumental to the approach. The application domains in which the developed techniques will be tested are production/maintenance scheduling, nonlinear predictive control, and robotic motion planning.
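The idea of guiding a short-horizon predictive controller with a dynamic programming value function estimate can be sketched as follows. This is a minimal illustration, not the project's actual method: it assumes a hypothetical one-dimensional system x(k+1) = x(k) + u(k) with quadratic stage cost, a grid-based value iteration to obtain the ADP estimate, and exhaustive search over discretised action sequences in place of a real NMPC solver.

```python
import numpy as np
from itertools import product

# Hypothetical 1-D system: x(k+1) = x(k) + u(k), stage cost x^2 + 0.1 u^2.
# Step 1: approximate the value function on a state grid (value iteration).
xs = np.linspace(-5.0, 5.0, 201)   # state grid (assumed bounds)
us = np.linspace(-1.0, 1.0, 21)    # discretised inputs (assumed bounds)
gamma = 0.95                       # discount factor
V = np.zeros_like(xs)
for _ in range(200):
    # Bellman backup over all grid states and discretised actions
    Xn = np.clip(xs[:, None] + us[None, :], xs[0], xs[-1])
    Q = xs[:, None] ** 2 + 0.1 * us[None, :] ** 2 + gamma * np.interp(Xn, xs, V)
    V = Q.min(axis=1)

def mpc(x0, horizon=3):
    """Short-horizon predictive control using the ADP value estimate as
    terminal cost; brute-force search stands in for an NMPC solver."""
    best_cost, best_u0 = np.inf, 0.0
    for seq in product(us, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            cost += x ** 2 + 0.1 * u ** 2
            x = np.clip(x + u, xs[0], xs[-1])
        cost += np.interp(x, xs, V)  # terminal cost from dynamic programming
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Closed loop from x = 4: the state is driven toward the origin even though
# the optimisation horizon is much shorter than the regulation task.
x = 4.0
for _ in range(10):
    x = np.clip(x + mpc(x), xs[0], xs[-1])
```

The terminal value estimate is what lets the short horizon avoid the myopic behaviour (and, in nonconvex settings, the local minima) that a truncated cost alone would induce.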
1) Learning optimal policies: efficient approximate dynamic programming (ADP) solutions; mixed model/data-based reinforcement learning.
2) Planning problems under uncertainty in industry: applications of stochastic and chance-constrained optimization, large-scale problems, scheduling, partially observable Markov decision processes. Influence and compensation of time delays.
3) Combination of ADP solutions with nonlinear predictive control (NMPC) under uncertainty to avoid local minima. Uncertainty estimates of solutions, cautious policies.
4) Applications: improving computational efficiency and robustness under uncertainty in industrial planning and scheduling problems, including unstable and delayed processes, robotic systems, (uncertain) computer vision data, etc.
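As a small illustration of the chance-constrained optimization mentioned in objective 2, the sketch below uses a sample-based (scenario) approximation: given sampled demand scenarios, it picks the smallest capacity whose empirical probability of violation stays below a tolerance. The demand model and all numbers are assumptions for the example only.

```python
import numpy as np

# Scenario approximation of a chance constraint: choose the smallest
# capacity c such that P(demand > c) <= eps, estimated from samples.
rng = np.random.default_rng(0)
demand = rng.normal(loc=100.0, scale=15.0, size=5000)  # assumed demand model
eps = 0.05
capacity = np.quantile(demand, 1.0 - eps)   # empirical (1 - eps)-quantile
violation = float(np.mean(demand > capacity))
print(f"capacity = {capacity:.1f}, empirical violation = {violation:.3f}")
```

The same construction scales to scheduling decisions with many uncertain parameters, at the price of the sample size needed for a reliable quantile estimate.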
Other entities principal investigator (PI): Leopoldo Armesto Ángel
Other entities participants: González Sorribes, Antonio; Leopoldo Armesto Ángel