CSS Bode Lecture and Plenary Lectures


The CSS Bode Lecture:
A Quest to Tame Complex Control Systems: from Theory towards Applications

Speaker: Jacquelien Scherpen
University of Groningen, the Netherlands
Date and Time: Friday, December 12, 12:45-13:45
Location: Asia I+II+III+IV

Abstract: Continuous innovation has resulted, and is still resulting, in a dominant trend towards increasingly complex control systems. The systems of systems needed, for example, for the energy transition call for novel ways to handle this growing complexity. This Bode lecture will first give a brief overview of the theory of balanced realizations, which is based on input-output structure, and of the model reduction it enables. Other relevant structures will then be discussed, such as dissipativity, the multi-physics structure arising from the port-Hamiltonian, Brayton-Moser, and Euler-Lagrange frameworks, network structure, uncertainty structure, and structure in data. Computational aspects, the relation with abstraction and uncertainty, and error analysis will be addressed, as will the recent increase in available data and its impact on these methods. The theory will be illustrated on large-scale energy systems and on various high-tech systems. The presentation highlights joint work with many collaborators, including students and post-docs.
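
As a concrete, much-simplified illustration of the balanced-realization idea in the linear case, the sketch below performs square-root balanced truncation of a stable LTI system with NumPy/SciPy. The random system, the reduction order r, and the helper name balanced_truncation are illustrative assumptions, not the speaker's (far more general, nonlinear and structure-preserving) methods.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balanced_truncation(A, B, C, r):
        # Controllability/observability Gramians: A P + P A^T = -B B^T,  A^T Q + Q A = -C^T C
        P = solve_continuous_lyapunov(A, -B @ B.T)
        Q = solve_continuous_lyapunov(A.T, -C.T @ C)
        # Square-root balancing: SVD of the product of the Cholesky factors
        Lp = cholesky(P, lower=True)
        Lq = cholesky(Q, lower=True)
        U, s, Vt = svd(Lq.T @ Lp)                 # s = Hankel singular values
        Sr = np.diag(s[:r] ** -0.5)
        T = Lp @ Vt.T[:, :r] @ Sr                 # truncated balancing transformation
        Ti = Sr @ U[:, :r].T @ Lq.T
        return Ti @ A @ T, Ti @ B, C @ T, s       # reduced (A_r, B_r, C_r) and Hankel singular values

    # Example: reduce a random stable 10-state, 2-input, 1-output system to 3 states
    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 10)) - 12.0 * np.eye(10)   # shifted to be Hurwitz
    B = rng.standard_normal((10, 2))
    C = rng.standard_normal((1, 10))
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
    print("Hankel singular values:", np.round(hsv, 4))

The decay of the Hankel singular values indicates how many states can be discarded with small input-output error, which is the structure the lecture's balanced-realization theory exploits.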

Biography: Jacquelien Scherpen received her M.Sc. and Ph.D. degrees from the University of Twente, the Netherlands, in 1990 and 1994, respectively. She then joined Delft University of Technology and moved in 2006 to the University of Groningen (UG), where she joined the Engineering and Technology institute Groningen (ENTEG), Faculty of Science and Engineering. She was scientific director of ENTEG and Director of Engineering at the UG, and since September 2023 she has been rector magnificus of the UG. Furthermore, she has been Captain of Science of the Dutch top sector High Tech Systems and Materials (HTSM). Jacquelien has held various visiting research positions, e.g., at the University of Tokyo, Kyoto University, Old Dominion University, Université de Compiègne, and SUPÉLEC.

She has served on the editorial boards of several international journals, among which the IEEE Transactions on Automatic Control and the International Journal of Robust and Nonlinear Control. She received the 2017-2020 Automatica Best Paper Prize. In 2019 she received a royal distinction as Knight in the Order of the Netherlands Lion, and she is a Fellow of the IEEE. She was also recently elected a Fellow of IFAC. In 2023 she was awarded the Prince Friso Prize for Engineer of the Year in the Netherlands. She has been active in the International Federation of Automatic Control (IFAC) and the IEEE Control Systems Society. She was president of the European Control Association (EUCA) and has chaired the SIAM Activity Group on Control and Systems Theory.

Her current research interests include model reduction methods for networks, nonlinear model reduction methods, nonlinear control methods, and modeling and control of physical systems, with applications to electrical circuits, electromechanical systems, mechanical systems, and smart energy systems.




Using Transient Controllers to Satisfy High Level Multi-Robot Tasks

Speaker: Dimos Dimarogonas
KTH Royal Institute of Technology, Sweden
Date and Time: Wednesday, December 10, 8:00-9:00
Location: Asia I+II+III+IV

Abstract: Multi-robot task planning and control under temporal logic specifications has been gaining increasing attention in recent years due to its applicability in, among others, autonomous systems, manufacturing systems, service robotics, and intelligent transportation. Initial approaches considered qualitative logics, such as Linear Temporal Logic, whose automata representations facilitate the direct use of model-checking tools for correct-by-design control synthesis. In many real-world applications, however, there is a need to quantify spatial and temporal constraints, e.g., in order to include deadlines and separation-assurance bounds. This led to the use of quantitative logics, such as Metric Interval Temporal Logic and Signal Temporal Logic, to impose such spatiotemporal constraints. However, the lack of direct automata representations for such specifications hinders the use of standard verification tools from computer science, such as model checking. Motivated by this, the need for transient control methodologies that fulfil the aforementioned spatiotemporal constraints becomes evident. In this talk, we review some of our recent results in applying transient control techniques, in particular control barrier functions, prescribed performance control, and model predictive control, to high-level robot task planning under spatiotemporal specifications, treating both the single-agent and the multi-agent case. We further review approaches to task decomposition and consider the case when there are discrepancies between the task and communication graph topologies. The results are supported by relevant experimental validations.
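
As a minimal illustration of one of the transient control tools named above, the sketch below implements a control-barrier-function safety filter for a single-integrator robot with a single spatial constraint. The dynamics, gains, and scenario are illustrative assumptions and not the speaker's multi-robot, spatiotemporal-logic framework.

    # A minimal CBF safety filter for a single integrator x_dot = u: keep distance d
    # from an obstacle while a nominal go-to-goal controller drives the robot.
    import numpy as np

    def cbf_filter(x, u_nom, x_obs, d, alpha=1.0):
        """Minimally modify u_nom so that h(x) = ||x - x_obs||^2 - d^2 stays nonnegative."""
        h = np.dot(x - x_obs, x - x_obs) - d**2
        grad_h = 2.0 * (x - x_obs)
        # CBF condition for x_dot = u:  grad_h . u + alpha * h >= 0
        slack = grad_h @ u_nom + alpha * h
        if slack >= 0.0:
            return u_nom                            # nominal input is already safe
        # Closed-form solution of the one-constraint QP: smallest correction to u_nom
        return u_nom - (slack / (grad_h @ grad_h)) * grad_h

    # Simulate: drive toward the origin while avoiding a disk of radius 0.5 around (1, 0)
    x, goal, x_obs = np.array([2.5, 0.1]), np.zeros(2), np.array([1.0, 0.0])
    for _ in range(400):
        u_nom = -1.0 * (x - goal)                   # nominal proportional controller
        u = cbf_filter(x, u_nom, x_obs, d=0.5)
        x = x + 0.01 * u                            # forward-Euler step
    print("final position:", np.round(x, 3))

The same filtering idea, with barrier functions encoding time-varying spatiotemporal constraints, is what allows transient controllers to enforce quantitative task specifications without an automata-based synthesis step.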

Biography: Dimos V. Dimarogonas was born in Athens, Greece. He received the Diploma in Electrical and Computer Engineering in 2001 and the Ph.D. in Mechanical Engineering in 2007, both from the National Technical University of Athens (NTUA). Between 2007 and 2010, he held postdoctoral positions at the Department of Automatic Control, KTH Royal Institute of Technology, and at the Laboratory for Information and Decision Systems (LIDS), MIT. He is currently Professor at the Division of Decision and Control Systems, School of Electrical Engineering and Computer Science, KTH. His current research interests include multi-agent systems, hybrid systems and control, robot navigation and manipulation, human-robot interaction, and networked control. He serves on the Editorial Boards of Automatica and the IEEE Transactions on Control of Network Systems. He received an ERC Starting Grant in 2014, an ERC Consolidator Grant in 2019, and a Knut och Alice Wallenberg Academy Fellowship in 2015, and is a Fellow of the IEEE, class of 2023.




AI-Human Games

Speaker: Asu Ozdaglar
Massachusetts Institute of Technology, USA
Date and Time: Thursday, December 11, 8:00-9:00
Location: Asia I+II+III+IV

Abstract: We are at the beginning of an AI revolution, in which digital tools far more capable than what people could imagine just a few years ago are being developed and are already deployed in almost every sector of the economy and every aspect of our social lives. These AI agents promise to perform many tasks autonomously and much more efficiently than humans, to provide complementary information and knowledge to professional decision-makers, and to become flexible, usable assistants to most humans. AI agents are predicted to be everywhere. Yet a key question is rarely asked: how will they interact with humans and, even more fundamentally, with each other, given that the humans they advise or act on behalf of are locked in myriad social relations?

In this talk we focus on the decision-making performance of AI agents. We propose new metrics for evaluating the performance of AI agents in sequential decision-making and show that, under a single-layer self-attention parametrization, an AI model trained to minimize regret implements a no-regret algorithm, which offers insightful connections to game-theoretic settings. In the second part of the talk, I will focus on how AI recommendations can be combined with human expertise. I will formulate the human-AI collaboration problem as an incomplete-information communication game and highlight what types of miscommunication can arise between AI agents and humans and how they can be fixed.
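
As a generic illustration of the no-regret machinery referred to above (and not the talk's self-attention-based construction), the sketch below lets two Hedge (multiplicative-weights) learners play rock-paper-scissors in self-play; their time-averaged strategies approach the mixed equilibrium (1/3, 1/3, 1/3), which is the classical link between no-regret learning and game-theoretic equilibria. The step size and horizon are illustrative choices.

    import numpy as np

    A = np.array([[0., -1., 1.],
                  [1., 0., -1.],
                  [-1., 1., 0.]])              # row player's payoffs (zero-sum game)
    eta, T = 0.1, 5000
    w_row, w_col = np.ones(3), np.ones(3)
    avg_row, avg_col = np.zeros(3), np.zeros(3)

    for t in range(T):
        p = w_row / w_row.sum()                # row player's current mixed strategy
        q = w_col / w_col.sum()                # column player's current mixed strategy
        avg_row += p
        avg_col += q
        # Expected payoff of each pure action against the opponent's current mixture
        u_row = A @ q
        u_col = -A.T @ p
        # Hedge update: exponentially reweight actions by payoff, then renormalize
        w_row *= np.exp(eta * u_row); w_row /= w_row.sum()
        w_col *= np.exp(eta * u_col); w_col /= w_col.sum()

    print("time-averaged row strategy:", np.round(avg_row / T, 3))
    print("time-averaged column strategy:", np.round(avg_col / T, 3))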

Biography: Asu Ozdaglar is the MathWorks Professor of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology (MIT). She is the department head of EECS and deputy dean of academics of the Schwarzman College of Computing at MIT. Her research expertise includes optimization, machine learning, economics, and networks. Her recent research focuses on designing incentives and algorithms for data-driven online systems with many diverse human and machine participants. She has extensive work on distributed optimization and control, and has investigated issues of data ownership and markets, the spread of misinformation on social media, economic and financial contagion, and social learning.

Professor Ozdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching Award, the NSF CAREER Award, the 2008 Donald P. Eckman Award of the American Automatic Control Council, the 2014 Spira Teaching Award, and the Keithley, Distinguished School of Engineering, and MathWorks professorships. She is an IEEE Fellow and an IFAC Fellow, and was selected as an invited speaker at the International Congress of Mathematicians. She received her Ph.D. degree in electrical engineering and computer science from MIT in 2003.




Generalization in Reinforcement Learning: From Foundational Results to New Frontiers

Speaker: Csaba Szepesvari
University of Alberta, Canada
Date and Time: Friday, December 12, 8:00-9:00
Location: Asia I+II+III+IV

Abstract: Reinforcement learning (RL) and optimal control share a deep intellectual heritage, centered on the design and analysis of algorithms for sequential decision-making under uncertainty. This talk will provide a high-level overview of the theoretical foundations of modern RL, focusing on the significant progress that has been made in understanding generalization, sample efficiency, and computational tractability. We will examine foundational results on efficient learning algorithms and their fundamental limits, concluding with a look at new frontiers and posing the question of what role control theory can play in advancing the capabilities of large language models at the heart of modern AI systems.
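
As a small, hedged illustration of the kind of sample-efficient exploration that such foundational results formalize, the sketch below runs the classical UCB1 algorithm on a toy Bernoulli bandit; the bandit instance and horizon are illustrative assumptions and not material from the talk.

    import numpy as np

    rng = np.random.default_rng(1)
    means = np.array([0.3, 0.5, 0.7])          # unknown arm means (toy instance)
    T = 10_000
    counts = np.zeros(len(means))
    sums = np.zeros(len(means))

    for t in range(1, T + 1):
        if t <= len(means):
            arm = t - 1                        # pull each arm once to initialize
        else:
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))          # optimism in the face of uncertainty
        reward = rng.binomial(1, means[arm])
        counts[arm] += 1
        sums[arm] += reward

    regret = T * means.max() - sums.sum()      # realized reward shortfall vs. the best arm
    print("pulls per arm:", counts.astype(int), " regret estimate:", round(regret, 1))

The logarithmic growth of this regret with the horizon is one of the fundamental limits alluded to in the abstract.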

Biography: Csaba Szepesvari (Ph.D. '99) is the lead of DeepMind's Foundations team, in addition to serving as a Professor of Computing Science at the University of Alberta and as a Principal Investigator of the Alberta Machine Intelligence Institute. He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) (2023) for his significant contributions to adaptive control, reinforcement learning, and the theory of bandit problems, and a Fellow of the IEEE (2024) for his contributions to the theory of sequential decision-making under uncertainty. Prof. Szepesvari is best known for his theoretical work on reinforcement learning and as the co-inventor of the Monte-Carlo tree search method "UCT," which serves as the basis for much subsequent work. He has published three books: one on control theory, one on reinforcement learning, and the most recent on the theory of bandit algorithms. His current editorial duties include serving as an Editorial Board Member of Foundations and Trends in Machine Learning and as an Action Editor for the Journal of Machine Learning Research. Since 2021, he has enthusiastically co-organized and co-hosted the weekly "Reinforcement Learning Theory" virtual seminar, aimed at everyone in the world who wants to learn about the latest advances in this fast-moving field.