CDC'23 Special Session on Control with Learning for Autonomous Robots

16:00-18:00, Track 5, Simpor Junior 4912, Dec. 13, 2023, Singapore


Control system theory plays an instrumental role in enabling reliable autonomous operations, including SLAM (simultaneous localization and mapping), path planning, and trajectory tracking. In particular, physics-based optimization policies such as the LQR (linear quadratic regulator), MPC (model predictive control), and MHE (moving horizon estimation) have been the powerhouse behind many advanced robotic applications. On the other hand, deep learning methods can often achieve state-of-the-art performance and generalizability by training on large amounts of data in a model-free fashion.

There is increasing interest in combining the two to achieve the best of both worlds: efficient training, fast adaptation to new environments, safety and robustness guarantees, semantic understanding, and agile and dexterous performance. For example, incorporating system models can significantly reduce the number of training samples and the training time. In particular, recent works have developed efficient analytical gradients for differentiating through MPC and MHE policies, enabling these optimal policies to be embedded seamlessly into deep neural networks and achieving superior control performance.
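As a concrete illustration (not drawn from any of the papers below), the following minimal JAX sketch differentiates a tracking loss through a finite-horizon LQR policy, treating the diagonal state-cost weights as learnable parameters. The double-integrator dynamics, horizon, and cost values are assumed placeholders, and plain autodiff through the Riccati recursion stands in for the analytical MPC/MHE gradients mentioned above.

# Minimal sketch (illustrative only): differentiate a tracking loss through a
# finite-horizon discrete-time LQR policy with JAX autodiff, treating the
# diagonal state-cost weights as learnable parameters.
import jax
import jax.numpy as jnp

A = jnp.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator dynamics
B = jnp.array([[0.005], [0.1]])
R = jnp.array([[0.01]])                    # fixed control-cost weight
H = 50                                     # planning horizon

def lqr_gains(q_diag):
    """Backward Riccati recursion; returns the time-varying feedback gains."""
    Q = jnp.diag(q_diag)
    def step(P, _):
        K = jnp.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_new = Q + A.T @ P @ (A - B @ K)
        return P_new, K
    _, Ks = jax.lax.scan(step, Q, None, length=H)
    return Ks[::-1]                        # gains ordered t = 0, ..., H-1

def rollout_loss(log_q, x0, x_ref):
    """Roll out the closed loop and penalize deviation from a reference state."""
    Ks = lqr_gains(jnp.exp(log_q))         # exp keeps the cost weights positive
    def step(x, K):
        u = -K @ x
        x_next = A @ x + B @ u
        return x_next, x_next
    _, xs = jax.lax.scan(step, x0, Ks)
    return jnp.mean(jnp.sum((xs - x_ref) ** 2, axis=-1))

x0 = jnp.array([1.0, 0.0])
x_ref = jnp.zeros(2)
log_q = jnp.zeros(2)                       # learnable cost parameters
grad_fn = jax.grad(rollout_loss)
print(grad_fn(log_q, x0, x_ref))           # gradient of the loss w.r.t. the cost weights

In the same spirit, such a differentiable optimal-control layer can sit inside a larger learning pipeline, with its gradients backpropagated into upstream neural-network components.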

However, fundamental challenges remain in pushing the performance boundaries of both areas further. This invited session aims to provide a platform for a diverse group of researchers and practitioners to report their recent research and application progress on learning and control for autonomous robots.

Topics of interest include, but are not limited to:

  • Learning-based control
  • Safe reinforcement learning
  • Joint optimization of MPC, MHE with neural networks
  • Hyperparameter optimization and bilevel optimization
  • Hybrid model-free and model-based optimal control
  • Distributed learning and optimization for autonomous robots
  • Meta-learning, continual learning for autonomous robots
  • Neural ODE for adaptive optimal control

Accepted Papers and Schedule

16:00 - 16:20: Learning for Online Mixed-Integer Model Predictive Control with Parametric Optimality Certificates
Presenters: Nair, Siddharth (University of California, Berkeley); Russo, Luigi (Università del Sannio); Glielmo, Luigi (University of Napoli Federico II); Borrelli, Francesco (University of California at Berkeley)

16:20 - 16:40: Nonlinear MPC for Quadrotors in Close-Proximity Flight with Neural Network Downwash Prediction
Presenters: Li, Jinjie (Beihang University); Han, Liang (Beihang University); Yu, Haoyang (Beihang University); Lin, Yuheng (Beihang University); Li, Qingdong (Beihang University); Ren, Zhang (Beijing University of Aeronautics and Astronautics)

16:40 - 17:00: Optimal Scheduling for Remote Estimation with an Auxiliary Transmission Scheme
Presenters: Li, Zitian (Guangdong University of Technology); Yang, Lixin (Queensland University of Technology); Jia, Yijin (Guangdong University of Technology); Huang, Zenghong (Guangdong University of Technology); Lv, Weijun (Guangdong University of Technology); Xu, Yong (Guangdong University of Technology)

17:00 - 17:20: Deriving Rewards for Reinforcement Learning from Symbolic Behaviour Descriptions of Bipedal Walking
Presenters: Harnack, Daniel (German Research Center for Artificial Intelligence, DFKI); Lüth, Christoph (Deutsches Forschungszentrum für Künstliche Intelligenz); Gross, Lukas (DFKI); Kumar, Shivesh (German Research Center for Artificial Intelligence, DFKI GmbH); Kirchner, Frank (Robotics Innovation Center, DFKI and Department of Mathematics and Informatics, University of Bremen)

17:20 - 17:40: Optimizing Field-of-View for Multi-Agent Path Finding via Reinforcement Learning: A Performance and Communication Overhead Study
Presenters: Cheng, Hoi Chuen (The Hong Kong University of Science and Technology); Shi, Ling (The Hong Kong University of Science and Technology); Yue, Chik Patrick (The Hong Kong University of Science and Technology)

17:40 - 18:00: Learning Koopman Operators with Control Using Bi-level Optimization
Presenters: Huang, Daning (Pennsylvania State University); Prasetyo, Muhammad Bayu (Pennsylvania State University); Yu, Yin (Penn State University); Geng, Junyi (Pennsylvania State University)

Organizers

Lin Zhao (National University of Singapore)
Chen Wang (University at Buffalo)
Guanya Shi (Carnegie Mellon University)
Changliu Liu (Carnegie Mellon University)
Shaoshuai Mou (Purdue University)