63rd IEEE Conference on Decision and Control
December 16-19, 2024
MiCo, Milan, Italy



Tutorial Sessions


Data-Driven Learning of Stable Control Laws for Adaptive Robot Planning: A Dynamical Systems Perspective

Organizer(s) Aude Billard, Bernardo Fichera, Aradhana Nayak
Date and Time MoA01 - Monday 10:00-12:00
Location Auditorium

Abstract: Control laws based on Autonomous Dynamical Systems (DS) provide a robust and reactive framework for high-level control in robotics, ensuring stability of the vector field. Recent advancements, particularly in geometry-based DS shaping techniques, leverage differential geometry tools to enhance expressivity and adaptability while maintaining stability assurances. This tutorial focuses on the theoretical foundations for data-driven learning of DS-based control laws, with examples from robotic applications. Key topics include estimating nonlinear dynamical systems, enabling highly reactive obstacle avoidance, and leveraging differential geometry for controllability of nonlinear dynamics. Hands-on exercises in MATLAB and real-world video illustrations will cover safe navigation in crowds and robust control of robotic manipulators around human co-workers.
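For orientation, stability in this setting is usually imposed by pairing the learned vector field with a Lyapunov function; the quadratic choice below is a minimal illustrative sketch on our part, not necessarily the parametrization used in the session:

    \dot{x} = f(x), \quad f(x^*) = 0, \qquad V(x) = (x - x^*)^\top P (x - x^*), \; P \succ 0,
    \dot{V}(x) = \nabla V(x)^\top f(x) < 0 \quad \text{for all } x \neq x^*,

so that trajectories of the learned dynamical system converge to the attractor x^* regardless of how flexible the regressor used for f is.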




Hybrid Feedback Control Design

Organizer(s) Ricardo G. Sanfelice, Pedro Casau, Francesco Ferrante
Date and Time MoB01 - Monday 13:30-15:30
Location Auditorium

Abstract: The goal of this tutorial session is to present a self-contained introduction to hybrid feedback control and to the design tools that make it a powerful design method. The session introduces hybrid feedback control through an examination of hybrid control systems modeled by the combination of differential and difference equations with constraints. Using multiple examples, it illustrates the power of hybrid feedback control, which stems from the integration of continuous and discrete dynamics, where state variables update instantaneously at specific events while flowing continuously otherwise. The session introduces hybrid closed-loop systems as interconnected hybrid plants and controllers with designated inputs and outputs and formalizes their solutions. It summarizes key properties of hybrid systems and reviews various control strategies, including supervisory control with logic variables to select feedback controllers, event-triggered control to minimize control input updates, and strategies using multiple Lyapunov-like functions for stabilization. Pointers to further reading and other strategies in the literature are provided.
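For readers new to the notation, the combination of differential and difference equations with constraints described above is commonly written in the following standard flow/jump form (shown here only as a minimal sketch):

    \dot{x} = f(x, u), \quad (x, u) \in C   (flow)
    x^+ = g(x, u), \quad (x, u) \in D   (jump)

where the flow set C constrains the continuous evolution and the jump set D specifies the events at which the state is updated instantaneously.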




Half a Century of Multi-Pass Systems: Iterative Learning Control and Repetitive Processes - A Retrospective Tutorial

Organizer(s) Kevin L. Moore, Eric Rogers, Ying Tan, Tom Oomen, Bing Chu, Yuxin Wu
Date and Time TuA01 - Tuesday 10:00-12:00
Location Auditorium

Abstract: Multi-pass systems are those that repeat or iterate their basic system operation, often from the same starting conditions. Such systems arise naturally from the system dynamics (repetitive processes) and through operational procedures and control actions (iterative learning control). Research in multi-pass systems dates from the mid-1970s to the present. In this tutorial session we will present an overview of the primary results in the field, along with comments on future research directions. The session will be self-contained, so that it is suitable for systems and control researchers and practitioners who may not be familiar with the concepts of multi-pass systems, while also being of interest to those with some background in the field.
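As a concrete anchor, a prototypical first-order iterative learning control update, given here purely as an illustrative sketch, modifies the input from trial k to trial k+1 using the error recorded over the finite pass interval [0, T]:

    u_{k+1}(t) = u_k(t) + L\, e_k(t), \qquad e_k(t) = r(t) - y_k(t), \quad t \in [0, T],

where, in a lifted (trial-to-trial) representation with plant operator G, the error evolves as e_{k+1} = (I - GL) e_k, so the learning operator L is designed to make this map a contraction.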




Model predictive control for tracking using artificial references: Fundamentals, recent results and practical implementation

Organizer(s) Daniel Limon, Melanie N. Zeilinger, Antonio Ferramosca, Johannes Köhler, Pablo Krupa, Ignacio Alvarado, Teodoro Alamo
Date and Time TuB01 - Tuesday 13:30-15:30
Location Auditorium

Abstract: We propose a comprehensive tutorial on a family of recent Model Predictive Control (MPC) formulations, known as MPC for tracking, which are characterized by their use of an artificial reference. These formulations have several benefits over classical tracking MPC formulations, including guaranteed recursive feasibility under online reference changes, as well as asymptotic stability and an enlarged domain of attraction. The tutorial introduces the concept of using artificial references in MPC, together with the benefits and theoretical guarantees their use provides, by first presenting the original linear MPC for tracking formulation. We then cover more recent advances and variations of the original formulation, including its nonlinear extension, its application to robust control, and its extension to tracking periodic reference trajectories, among others. We also discuss optimization aspects related to the implementation of MPC for tracking and its application to learning-based MPC. The goal of this tutorial is to increase the visibility of this family of MPC controllers, whose use is steadily growing across the control community.
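For orientation, and only as a rough sketch with terminal ingredients and constraint details omitted, the original linear MPC for tracking optimizes jointly over the predicted inputs and an artificial equilibrium (x_s, u_s), penalizing the distance of the prediction to the artificial reference and, through an offset cost, the distance of the artificial reference to the desired target x_r:

    \min_{u_0,\dots,u_{N-1},\,(x_s,u_s)} \; \sum_{k=0}^{N-1}\big(\|x_k - x_s\|_Q^2 + \|u_k - u_s\|_R^2\big) + V_f(x_N - x_s) + V_O(x_s - x_r)
    \text{s.t.}\;\; x_{k+1} = A x_k + B u_k, \qquad x_s = A x_s + B u_s, \qquad (x_k, u_k) \in \mathcal{Z}.

Because (x_s, u_s) is a decision variable, the problem remains feasible when the target x_r changes online, which is the mechanism behind the recursive feasibility and enlarged domain of attraction mentioned above.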




Forty plus years of model reduction and still learning

Organizer(s) Carolyn L. Beck, Henrik Sandberg, Jacquelien M.A. Scherpen, Alessandro Astolfi
Date and Time WeA01 - Wednesday 10:00-12:00
Location Auditorium

Abstract: The construction of reduced order models for dynamical systems has long been considered an important research problem, not only in the field of control, but also in economics, image processing, circuit analysis and statistical mechanics, to cite just a few examples. The classic problem is this: given a model of a dynamical system, determine a simplified model that facilitates tractable system analysis and controller synthesis where none may have existed before. For example, optimal controller synthesis may grow in computational complexity faster than O(n^3), where n represents the state dimension of the model or the order of the differential equations describing the dynamical behavior of the system. Any particular reduction algorithm is then judged on the level of complexity reduction achieved, how closely the reduced model captures the original system behavior, and the complexity of the reduction process itself. This classic formulation of the model reduction problem has been of interest in the control community for over 40 years, beginning perhaps with the linear system realization problem proposed by Kalman and Ho, or more directly with the balancing and principal component analysis perspective of Moore. As time passes, the mathematical models considered relevant in control problems have increased in both dimension and complexity, involving distributed and interconnected systems of dynamical systems, switching systems, and complex nonlinear dynamics. In this tutorial session, we begin with a historical overview of the classic problem formulation and review the timeline of developments in linear, nonlinear, and data-based model reduction methods.
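To make the balancing perspective concrete, the following sketch implements square-root balanced truncation for a stable, minimal LTI system in Python; the function name and interface are ours for illustration, and positive definite Gramians are assumed:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balanced_truncation(A, B, C, r):
        """Reduce a stable, minimal LTI system (A, B, C) to order r (illustrative sketch)."""
        # Controllability and observability Gramians:
        #   A P + P A^T + B B^T = 0,    A^T Q + Q A + C^T C = 0
        P = solve_continuous_lyapunov(A, -B @ B.T)
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        # Square-root balancing: SVD of the product of Cholesky factors
        Lp = cholesky(P, lower=True)
        Lq = cholesky(Q, lower=True)
        U, hsv, Vt = svd(Lq.T @ Lp)        # hsv are the Hankel singular values
        S = np.diag(hsv[:r] ** -0.5)
        T = Lp @ Vt[:r].T @ S              # right projection (n x r)
        Ti = S @ U[:, :r].T @ Lq.T         # left projection (r x n), Ti @ T = identity
        return Ti @ A @ T, Ti @ B, C @ T   # reduced-order matrices (Ar, Br, Cr)

The discarded Hankel singular values hsv[r:] then quantify the "how closely" criterion above: for balanced truncation the H-infinity model error is bounded by twice their sum.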




Bringing Quantum Systems under Control

Organizer(s) Julian Berberich, Robert L. Kosut, Thomas Schulte-Herbrueggen
Date and Time WeB01 - Wednesday 13:30-15:30
Location Auditorium

Abstract: Quantum computing has the potential to push computational boundaries in various domains, including cryptography, simulation, optimization, and machine learning. By exploiting the principles of quantum mechanics, new algorithms can be developed with capabilities beyond those of classical computers. However, the experimental realization of quantum devices is an active field of research with enormous open challenges, including robustness against noise and scalability. While systems and control theory play a crucial role in tackling these challenges, the principles of quantum physics lead to a (perceived) high entry barrier for entering the field of quantum computing. This tutorial session aims to lower that barrier by introducing the basic concepts required to understand and solve research problems in quantum systems. First, we introduce fundamentals of quantum algorithms, ranging from basic ingredients such as qubits and quantum logic gates to prominent examples and more advanced concepts, e.g., variational quantum algorithms. Next, we formalize some engineering questions for building quantum devices in the real world, which requires the careful manipulation of microscopic quantities obeying quantum effects. To this end, for N-level systems we introduce basic concepts of (bilinear) quantum systems and control theory, including controllability, observability, and optimal control, in a unified framework. Finally, we address the problem of noise in real-world quantum systems via robust quantum control, which relies on a set-membership uncertainty description frequently employed in control. A key goal of this tutorial session is to demystify engineering aspects of quantum computing by emphasizing that its mathematical description mainly involves linear algebra (for quantum algorithms) and the handling of bilinear control systems (for quantum systems and control theory), and does not require detailed knowledge of quantum physics.
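To make the bilinear-systems remark concrete, the controlled dynamics of a closed N-level system are typically written (with ħ = 1) as a bilinear control system on the unitary group; this standard textbook form is given only for orientation:

    \dot{U}(t) = -\mathrm{i}\Big(H_0 + \sum_j u_j(t)\, H_j\Big) U(t), \qquad U(0) = I,

with drift Hamiltonian H_0, control Hamiltonians H_j, and real-valued controls u_j(t); controllability is then assessed through the Lie algebra generated by \{\mathrm{i}H_0, \mathrm{i}H_j\} (the Lie algebra rank condition), which is where the bilinear systems and control machinery mentioned above enters.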




Recent advances in partially observed Markov decision processes: regularity, existence, approximations, and learning with agent-state policies

Organizer(s) Amit Sinha, Ali Devran Kara, Aditya Mahajan, Serdar Yuksel
Date and Time ThA01 - Thursday 10:00-12:00
Location Auditorium

Abstract: Partially observed Markov decision processes (POMDPs) provide a versatile and appropriate model for many applications, but this richness is accompanied by challenging and fundamental mathematical problems. Many practical physical models with control involve hidden, unobservable state dynamics controlled by a decision maker who has access only to partial information on the hidden state. Despite their practical significance, research on POMDPs still entails fundamental open questions, and the past few decades have seen significant breakthroughs in our understanding of POMDPs, in view of regularity results such as continuity, existence, and filter stability, and the implications of these regularity results for robustness, approximations, learning-theoretic aspects, and practical reinforcement learning algorithms. Our goal in this tutorial is to present to the control theory community such recent findings within the broader panorama of prior knowledge and open problems for the future.
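As a concrete reference point for the regularity and filter-stability themes, recall the standard reduction of a (finite) POMDP to a fully observed MDP on the belief simplex via the nonlinear filter recursion, stated here only for orientation:

    b_{t+1}(x') \;=\; \frac{P(y_{t+1} \mid x') \sum_{x} P(x' \mid x, u_t)\, b_t(x)}{\sum_{\tilde{x}} P(y_{t+1} \mid \tilde{x}) \sum_{x} P(\tilde{x} \mid x, u_t)\, b_t(x)},

where b_t is the conditional distribution of the hidden state given the information available at time t; continuity properties of this belief-MDP and the stability of the filter underpin the approximation and learning guarantees, including those for agent-state policies, discussed in the session.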




A Jump Start to Stock Trading Research for the Uninitiated Control Scientist: A Tutorial

Organizer(s) B. Ross Barmish, Simone Formentin
Date and Time ThB01 - Thursday 13:30-15:30
Location Auditorium

Abstract: The target audience for this tutorial session is control systems researchers with an interest in algorithmic stock trading but without substantial background in related topic areas such as finance and economics. To this end, the speakers will explain basic ideas relevant to stock trading from a control-theoretic point of view. Once the market mechanics and the stock-price models of interest are set up and explained in the first two talks, our "bread and butter" tools such as feedback and optimization will come into play. To motivate tutorial attendees and get them rapidly "up to speed," we will consider a selected subset of the literature which can be easily understood by control community members without the need to invest an unduly large amount of time in the underlying financial theory motivating the formulations.