Imperative learning (IL) is a self-supervised neural-symbolic learning framework for robot autonomy.
A prototype of IL first appeared in the iSLAM paper, and it was later formally defined in the following article:
Imperative Learning: A Self-supervised Neural-Symbolic Learning Framework for Robot Autonomy.
Chen Wang, Kaiyi Ji, Junyi Geng, Zhongqiang Ren, Taimeng Fu, Fan Yang, Yifan Guo, Haonan He, Xiangyu Chen, Zitong Zhan, Qiwei Du, Shaoshu Su, Bowen Li, Yuheng Qiu, Yi Du, Qihang Li, Yifan Yang, Xiao Lin, Zhipeng Zhao.
arXiv preprint arXiv:2406.16087, 2024.
SAIR Lab Recommended
@article{wang2024imperative,
title = {Imperative Learning: A Self-supervised Neural-Symbolic Learning Framework for Robot Autonomy},
author = {Wang, Chen and Ji, Kaiyi and Geng, Junyi and Ren, Zhongqiang and Fu, Taimeng and Yang, Fan and Guo, Yifan and He, Haonan and Chen, Xiangyu and Zhan, Zitong and Du, Qiwei and Su, Shaoshu and Li, Bowen and Qiu, Yuheng and Du, Yi and Li, Qihang and Yang, Yifan and Lin, Xiao and Zhao, Zhipeng},
journal = {arXiv preprint arXiv:2406.16087},
year = {2024},
url = {https://arxiv.org/abs/2406.16087},
code = {https://github.com/sair-lab/iSeries},
website = {https://sairlab.org/iseries},
addendum = {SAIR Lab Recommended}
}
Wang, Chen and Ji, Kaiyi and Geng, Junyi and Ren, Zhongqiang and Fu, Taimeng and Yang, Fan and Guo, Yifan and He, Haonan and Chen, Xiangyu and Zhan, Zitong and Du, Qiwei and Su, Shaoshu and Li, Bowen and Qiu, Yuheng and Du, Yi and Li, Qihang and Yang, Yifan and Lin, Xiao and Zhao, Zhipeng, "Imperative Learning: A Self-supervised Neural-Symbolic Learning Framework for Robot Autonomy," arXiv preprint arXiv:2406.16087, 2024.
The iSeries collects articles from the SAIR Lab; it is named after the leading letter “i” in “imperative learning”. In the iSeries collection, IL has been applied to various tasks, including path planning, feature matching, and multi-robot routing.
The list of iSeries articles
iWalker: Imperative Visual Planning for Walking Humanoid Robot.
Xiao Lin, Yuhao Huang, Taimeng Fu, Xiaobin Xiong, Chen Wang.
arXiv preprint arXiv:2409.18361, 2024.
SAIR Lab Recommended
@article{lin2024iwalker,
title = {{iWalker}: Imperative Visual Planning for Walking Humanoid Robot},
author = {Lin, Xiao and Huang, Yuhao and Fu, Taimeng and Xiong, Xiaobin and Wang, Chen},
journal = {arXiv preprint arXiv:2409.18361},
year = {2024},
url = {https://arxiv.org/abs/2409.18361},
video = {https://youtu.be/FPV74PznzTU},
website = {https://sairlab.org/iwalker},
addendum = {SAIR Lab Recommended}
}
Lin, Xiao and Huang, Yuhao and Fu, Taimeng and Xiong, Xiaobin and Wang, Chen, "iWalker: Imperative Visual Planning for Walking Humanoid Robot," arXiv preprint arXiv:2409.18361, 2024.
iMatching: Imperative Correspondence Learning.
Zitong Zhan, Dasong Gao, Yun-Jou Lin, Youjie Xia, Chen Wang.
European Conference on Computer Vision (ECCV), 2024.
SAIR Lab Recommended
@inproceedings{zhan2024imatching,
title = {{iMatching}: Imperative Correspondence Learning},
author = {Zhan, Zitong and Gao, Dasong and Lin, Yun-Jou and Xia, Youjie and Wang, Chen},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2024},
url = {https://arxiv.org/abs/2312.02141},
code = {https://github.com/sair-lab/iMatching},
website = {https://sairlab.org/iMatching},
addendum = {SAIR Lab Recommended}
}
Zhan, Zitong and Gao, Dasong and Lin, Yun-Jou and Xia, Youjie and Wang, Chen, "iMatching: Imperative Correspondence Learning," European Conference on Computer Vision (ECCV), 2024.
iMTSP: Solving Min-Max Multiple Traveling Salesman Problem with Imperative Learning.
Yifan Guo, Zhongqiang Ren, Chen Wang.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
SAIR Lab Recommended
@inproceedings{guo2024imtsp,
title = {{iMTSP}: Solving Min-Max Multiple Traveling Salesman Problem with Imperative Learning},
author = {Guo, Yifan and Ren, Zhongqiang and Wang, Chen},
booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2024},
url = {https://arxiv.org/abs/2405.00285},
code = {https://github.com/sair-lab/iMTSP},
video = {https://youtu.be/h0oflFcvPSc},
website = {https://sairlab.org/iMTSP},
addendum = {SAIR Lab Recommended}
}
Guo, Yifan and Ren, Zhongqiang and Wang, Chen, "iMTSP: Solving Min-Max Multiple Traveling Salesman Problem with Imperative Learning," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
iSLAM: Imperative SLAM.
Taimeng Fu, Shaoshu Su, Yiren Lu, Chen Wang.
IEEE Robotics and Automation Letters (RA-L), 2024.
SAIR Lab Recommended
@article{fu2024islam,
title = {{iSLAM}: Imperative {SLAM}},
author = {Fu, Taimeng and Su, Shaoshu and Lu, Yiren and Wang, Chen},
journal = {IEEE Robotics and Automation Letters (RA-L)},
year = {2024},
url = {https://arxiv.org/abs/2306.07894},
code = {https://github.com/sair-lab/iSLAM/},
video = {https://youtu.be/rtCvx0XCRno},
website = {https://sairlab.org/iSLAM},
addendum = {SAIR Lab Recommended}
}
Fu, Taimeng and Su, Shaoshu and Lu, Yiren and Wang, Chen, "iSLAM: Imperative SLAM," IEEE Robotics and Automation Letters (RA-L), 2024.
iA*: Imperative Learning-based A* Search for Pathfinding.
Xiangyu Chen, Fan Yang, Chen Wang.
arXiv preprint arXiv:2403.15870, 2024.
SAIR Lab Recommended
@article{chen2024iastar,
title = {{iA*}: Imperative Learning-based A* Search for Pathfinding},
author = {Chen, Xiangyu and Yang, Fan and Wang, Chen},
journal = {arXiv preprint arXiv:2403.15870},
year = {2024},
url = {https://arxiv.org/abs/2403.15870},
addendum = {SAIR Lab Recommended}
}
Chen, Xiangyu and Yang, Fan and Wang, Chen, "iA*: Imperative Learning-based A* Search for Pathfinding," arXiv preprint arXiv:2403.15870, 2024.
iPlanner: Imperative Path Planning.
Fan Yang, Chen Wang, Cesar Cadena, Marco Hutter.
Robotics: Science and Systems (RSS), 2023.
SAIR Lab Recommended
@inproceedings{yang2023iplanner,
author = {Yang, Fan and Wang, Chen and Cadena, Cesar and Hutter, Marco},
title = {{iPlanner}: Imperative Path Planning},
booktitle = {Robotics: Science and Systems (RSS)},
url = {https://arxiv.org/abs/2302.11434},
code = {https://github.com/sair-lab/iPlanner},
year = {2023},
website = {https://sairlab.org/iPlanner/},
addendum = {SAIR Lab Recommended}
}
Yang, Fan and Wang, Chen and Cadena, Cesar and Hutter, Marco, "iPlanner: Imperative Path Planning," Robotics: Science and Systems (RSS), 2023.
This blog briefly explains IL from a high-level perspective; readers can find a more in-depth explanation in the paper.
Readers may also find a slide deck at this link, which provides a more interactive format.
IL is designed to alleviate the challenges of existing robot learning frameworks such as reinforcement learning and imitation learning.
Why do we need Neural-Symbolic AI?
To combine the advantages of both neural and symbolic methods.
To overcome the challenges of existing robot learning frameworks.
What is Neural-Symbolic AI?
There is still NO consensus on Neural-Symbolic (NeSy) AI.
We give a narrow and a broader definition, where the difference lies mainly in the scope of “symbols”.
Examples of existing Neural-Symbolic AI?
Although many methods do not explicitly say so, they can be viewed as Neural-Symbolic AI.
Why do we need Imperative Learning?
Imperative learning is a self-supervised neural-symbolic learning framework.
It is designed to overcome the four challenges with a single design based on bilevel optimization (BLO).
The framework of imperative learning (IL) consists of three primary modules: a neural perceptual network, a symbolic reasoning engine, and a general memory system.
IL is formulated as a special bilevel optimization, enabling reciprocal learning and mutual correction among the three modules.
Denote the neural system as \(\boldsymbol z = f({\boldsymbol{\theta}}, \boldsymbol{x})\), where \(\boldsymbol{x}\) represents the sensor measurements, \({\boldsymbol{\theta}}\) represents the perception-related learnable parameters, and \(\boldsymbol z\) represents the neural outputs such as semantic attributes; the reasoning engine as \(g(f, M, {\boldsymbol{\mu}})\) with reasoning-related parameters \({\boldsymbol{\mu}}\) and the memory system as \(M({\boldsymbol{\gamma}}, {\boldsymbol{\nu}})\), where \({\boldsymbol{\gamma}}\) is perception-related memory parameters and \({\boldsymbol{\nu}}\) is reasoning-related memory parameters. Therefore, imperative learning (IL) is formulated as a special BLO:
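In a schematic form (see the paper for the exact formulation), the bilevel structure reads:
\[
\begin{aligned}
\min_{\boldsymbol{\psi}} \quad & U\big(f, g, M;\ \boldsymbol{\phi}^*(\boldsymbol{\psi})\big), \\
\textrm{s.t.} \quad & \boldsymbol{\phi}^*(\boldsymbol{\psi}) = \arg\min_{\boldsymbol{\phi}} \ L\big(f, g, M;\ \boldsymbol{\psi}\big), \quad \textrm{s.t.} \ \ \xi(f, g, M) = \boldsymbol{0} \ \ \text{or} \ \ \xi(f, g, M) \leq \boldsymbol{0},
\end{aligned}
\]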
where \(\xi\) is a general constraint (either an equality or an inequality); \(U\) and \(L\) are the upper-level (UL) and lower-level (LL) cost functions; \(\boldsymbol \psi \doteq [{\boldsymbol{\theta}}^\top, {\boldsymbol{\gamma}}^\top]^\top\) denotes the stacked UL variables; and \(\boldsymbol \phi \doteq [{\boldsymbol{\mu}}^\top, {\boldsymbol{\nu}}^\top]^\top\) denotes the stacked LL variables.
Alternatively, \(U\) and \(L\) are also referred to as the neural cost and symbolic cost, respectively.
The term “imperative” is used to denote the passive nature of the learning process:
Once optimized, the neural system \(f\) in the UL cost will be driven to align with the LL reasoning engine \(g\), e.g., a logical, physical, or geometrical reasoning process with constraint \(\xi\).
Therefore, IL can learn to generate logically, physically, or geometrically feasible semantic attributes or predicates.
In some applications, \(\boldsymbol \psi\) and \(\boldsymbol \phi\) are also referred to as neuron-like and symbol-like parameters, respectively.
Self-supervised Nature
Many symbolic reasoning engines, including geometric, physical, and logical reasoning, can be optimized or solved without providing labels.
For example, A\(^*\) search, geometrical reasoning such as bundle adjustment (BA), and physical reasoning like model predictive control (MPC) can all be optimized without labels.
The IL framework leverages this property and jointly optimizes the three modules via bilevel optimization, which enforces mutual correction among them.
Consequently, all three modules can learn and evolve in a self-supervised manner by observing the world.
Although IL is designed for self-supervised learning, it can easily adapt to supervised or weakly supervised learning by involving labels either in UL or LL cost functions or both.
Overcoming the Other Challenges
The symbolic module offers better Interpretability and Generalization Ability due to its explainable design.
Optimality is brought by bilevel optimization, compared to training the neural and symbolic modules separately.
Optimization Challenge
The solution to IL mainly involves solving the UL parameters \({\boldsymbol{\theta}}\) and \({\boldsymbol{\gamma}}\) and the LL parameters \({\boldsymbol{\mu}}\) and \({\boldsymbol{\nu}}\).
Intuitively, the UL parameters, which are often neuron-like weights, can be updated with the gradients of the UL cost \(U\):
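In a generic form (the exact update is in the paper), the chain rule gives
\[
\frac{\mathrm{d} U}{\mathrm{d} \boldsymbol{\psi}} = \frac{\partial U}{\partial \boldsymbol{\psi}} + \frac{\partial U}{\partial \boldsymbol{\phi}^*} {\color{blue}\frac{\partial \boldsymbol{\phi}^*}{\partial \boldsymbol{\psi}}}, \qquad \boldsymbol{\psi} \leftarrow \boldsymbol{\psi} - \eta \, \frac{\mathrm{d} U}{\mathrm{d} \boldsymbol{\psi}},
\]
where \(\eta\) is a learning rate.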
Since \(U\), \(L\), \(M\), \(g\), and \(f\) are often well defined, the challenge is to compute the derivative of lower-level (symbol-like) parameters w.r.t the upper-level (neuron-like) parameters, \(\color{blue}\frac{\partial \boldsymbol \phi^*}{\partial \boldsymbol \psi}\), which takes the form:
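Expanding the stacked variables (a direct expansion of the definitions above), this Jacobian consists of four blocks:
\[
\frac{\partial \boldsymbol{\phi}^*}{\partial \boldsymbol{\psi}} =
\begin{bmatrix}
\frac{\partial \boldsymbol{\mu}^*}{\partial \boldsymbol{\theta}} & \frac{\partial \boldsymbol{\mu}^*}{\partial \boldsymbol{\gamma}} \\
\frac{\partial \boldsymbol{\nu}^*}{\partial \boldsymbol{\theta}} & \frac{\partial \boldsymbol{\nu}^*}{\partial \boldsymbol{\gamma}}
\end{bmatrix}.
\]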
There are generally two ways to compute it, i.e., unrolled differentiation and implicit differentiation; see the paper for more details.
Since \(\boldsymbol \phi \doteq [{\boldsymbol{\mu}}^\top, {\boldsymbol{\nu}}^\top]^\top\) are LL parameters, the solution depends on the specific LL tasks.
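To make this concrete, below is a minimal sketch of the unrolled-differentiation option in PyTorch, assuming a generic differentiable LL cost; the names (neural_net, ll_cost, ul_cost, inner_steps) are placeholders rather than the actual iSeries code.

import torch

# Minimal sketch of unrolled differentiation for IL (placeholder names, not the
# actual iSeries implementation). The neural module predicts z = f(theta, x); the
# symbol-like parameters phi are refined by a few inner gradient steps on the LL
# cost L (assumed to return a scalar); the UL cost U is then backpropagated
# through the unrolled inner steps to update the neuron-like parameters.
def imperative_step(neural_net, x, phi_init, ll_cost, ul_cost,
                    inner_steps=5, inner_lr=1e-2, outer_lr=1e-3):
    z = neural_net(x)                                  # neural prediction
    phi = phi_init.detach().clone().requires_grad_(True)

    # Lower level: unrolled gradient steps on the symbolic cost L(z, phi).
    for _ in range(inner_steps):
        grad_phi, = torch.autograd.grad(ll_cost(z, phi), phi, create_graph=True)
        phi = phi - inner_lr * grad_phi                # keep the graph for UL grads

    # Upper level: evaluate U at the (approximate) LL optimum and update the weights.
    loss = ul_cost(z, phi)
    loss.backward()
    with torch.no_grad():
        for p in neural_net.parameters():
            if p.grad is not None:
                p -= outer_lr * p.grad
                p.grad = None
    return loss.item()

Implicit differentiation instead applies the implicit function theorem at the LL optimum, avoiding the memory cost of storing the unrolled graph; see the paper for the trade-offs.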
Applications and Examples
The paper provides five distinct examples covering the different cases of LL tasks.
Path Planning
In the case where the LL tasks have closed-form solutions, we provide examples in both global and local path planning.
Global Path Planning
A\(^*\) is widely used due to its optimality, but it often suffers from low efficiency because of its large search space.
Therefore, we could leverage a neural module to predict a confined search space, leading to overall improved efficiency.
We take A\(^*\) as the symbolic reasoning engine and train the neural module in a self-supervised way based on IL.
This results in a new framework, which is referred to as iA\(^*\).
Due to the confined search space and the generalization ability inherited from A\(^*\), iA\(^*\) outperforms both classic and other learning-based methods.
The following figure shows the qualitative results of path planning algorithms on datasets, including MP, Maze, and Matterport3D.
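To give a flavor of the idea (not the actual iA\(^*\) implementation), the sketch below runs a plain grid A\(^*\) inside a search region predicted by a network; here the region mask is hand-made to stand in for the neural prediction.

import heapq
import itertools
import numpy as np

def astar(grid, start, goal):
    # Plain 4-connected A* on a boolean occupancy grid (True = blocked),
    # with a Manhattan-distance heuristic.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                      # tie-breaker for the heap
    open_set = [(h(start), 0, next(tie), start, None)]
    parents, closed = {}, set()
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        parents[cur] = parent
        if cur == goal:                          # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and not grid[nxt] and nxt not in closed):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, next(tie), nxt, cur))
    return None

# iA*-style idea (simplified): a neural module predicts a confined search region;
# cells outside the region are treated as blocked, so A* explores far fewer nodes.
occupancy = np.zeros((64, 64), dtype=bool)       # toy map, all free
region_mask = np.zeros_like(occupancy)
region_mask[0:16, :] = True                      # hand-made stand-in for the prediction
confined = occupancy | ~region_mask              # block everything outside the region
path = astar(confined, (0, 0), (10, 60))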
Local Path Planning
End-to-end local path planning has recently attracted considerable interest, particularly for its potential to enable efficient inference.
Reinforcement learning-based methods often suffer from sample inefficiency and difficulties in directly processing depth images.
Imitation learning-based methods rely heavily on the availability and quality of labeled trajectories.
To address these problems, we leverage a neural module to predict sparse waypoints, leading to overall improved efficiency.
The waypoints are then interpolated using a trajectory optimization engine based on a cubic spline.
We use IL to train this new framework, which is referred to as iPlanner.
The following figure shows a real-world experiment of local path planning using iPlanner on a legged robot.
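As an illustration of this structure (not the actual iPlanner code), the sketch below interpolates network-predicted waypoints with a differentiable Catmull-Rom cubic, used here as a stand-in for the cubic-spline trajectory optimization, and backpropagates a self-supervised path cost to the waypoints.

import torch

def catmull_rom(waypoints, samples_per_seg=10):
    # Differentiable cubic (Catmull-Rom) interpolation of sparse waypoints.
    # waypoints: (K, 2) tensor predicted by the network; returns a dense (N, 2) path.
    p = torch.cat([waypoints[:1], waypoints, waypoints[-1:]], dim=0)  # pad endpoints
    t = torch.linspace(0, 1, samples_per_seg, device=waypoints.device).view(-1, 1)
    pts = []
    for i in range(1, p.shape[0] - 2):
        p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
        a = 2 * p1
        b = p2 - p0
        c = 2 * p0 - 5 * p1 + 4 * p2 - p3
        d = -p0 + 3 * p1 - 3 * p2 + p3
        pts.append(0.5 * (a + b * t + c * t**2 + d * t**3))
    return torch.cat(pts, dim=0)

def path_cost(path, goal, obstacle, safe_dist=0.5):
    # Self-supervised LL-style cost: reach the goal, stay short, avoid the obstacle.
    goal_cost = (path[-1] - goal).norm()
    length_cost = (path[1:] - path[:-1]).norm(dim=-1).sum()
    clearance = (path - obstacle).norm(dim=-1)
    obstacle_cost = torch.relu(safe_dist - clearance).sum()
    return goal_cost + 0.1 * length_cost + 5.0 * obstacle_cost

# Because the interpolation and cost are differentiable, gradients flow back to
# whatever network produced the waypoints, so no trajectory labels are needed.
waypoints = torch.tensor([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, 0.0]],
                         requires_grad=True)          # stand-in for a network output
cost = path_cost(catmull_rom(waypoints), goal=torch.tensor([3.0, 0.0]),
                 obstacle=torch.tensor([1.5, 0.2]))
cost.backward()     # waypoints.grad now carries the self-supervised training signal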
Logical Reasoning
In the case where the LL task needs first-order optimization, we provide an example in inductive logical reasoning.
Existing works focus only on toy examples, such as Visual Sudoku, and binary vector representations in BlocksWorld.
They cannot simultaneously perform grounding (of high-dimensional data) and rule induction.
Based on IL, we use a neural network for concept and relationship prediction, and a neural logic machine (NLM) for rule induction.
We denote this new framework as iLogic.
In the following figure, iLogic performs rule induction with the perceived groundings and the constraining rules shown on the right, and finally obtains the accurate action prediction shown on the left.
Optimal Control
In the case where the LL task needs constrained optimization, we provide an example of UAV attitude control based on an IMU.
Differentiable model predictive control (MPC) combines physics-based modeling with data-driven methods, enabling dynamic models and control policies to be learned in an end-to-end manner.
However, many prior studies depend on expert demonstrations or labeled data for supervised learning.
They often suffer from challenging conditions such as unseen environments and external disturbances.
Based on IL, we use a neural network to denoise the IMU measurements and to predict the hyperparameters for MPC.
We denote this new framework as iMPC.
We evaluate the control performance under wind disturbance to validate the robustness of the proposed approach.
Visual Odometry
In the case where the LL task needs second-order optimization, we provide an example of simultaneous localization and mapping (SLAM).
Existing SLAM systems only have a one-directional connection from the front-end odometry to the back-end pose graph optimization.
This leads to sub-optimal solutions since there is no feedback from the back-end to the front-end.
We propose to optimize the entire SLAM system based on IL, leading to self-supervised reciprocal correction between the front-end and the back-end.
We refer to this new framework as iSLAM.
As shown in the following figure, the front-end odometry keeps improving with more training iterations.
Multi-agent Routing
In the case where the LL task needs discrete optimization, we provide an example of the multiple traveling salesman problem (MTSP).
Traditional methods for MTSP need combinatorial optimization, i.e., discrete optimization over a very large space.
Classic MTSP solvers, such as Google's OR-Tools routing library, have difficulties with large-scale problems (>500 cities).
Based on IL, we use a neural network to allocate cities to agents and then use single-TSP solvers for the divided smaller problems.
To compute the gradients in this discrete space, we introduce a surrogate network that estimates the gradient based on a control variate.
We refer to this new framework as iMTSP.
Due to the generalization abilities of IL, iMTSP outperforms both classic solvers and RL-based methods.
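To illustrate the flavor of this estimator (in the spirit of the description above; the actual iMTSP estimator differs in details), the sketch below uses a score-function (REINFORCE-style) gradient with a learned control variate around a black-box, non-differentiable cost; all names (alloc_net, surrogate, tsp_cost) are placeholders.

import torch
import torch.nn as nn

def tsp_cost(cities, assignment, num_agents):
    # Placeholder "solver": per-agent path length in visiting order, standing in
    # for a real single-TSP solver (which is a black box to autograd anyway).
    total = 0.0
    for k in range(num_agents):
        pts = cities[assignment == k]
        if len(pts) > 1:
            total += (pts[1:] - pts[:-1]).norm(dim=-1).sum().item()
    return total

num_cities, num_agents = 50, 3
cities = torch.rand(num_cities, 2)
alloc_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, num_agents))
surrogate = nn.Sequential(nn.Linear(2 * num_cities, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam([*alloc_net.parameters(), *surrogate.parameters()], lr=1e-3)

logits = alloc_net(cities)                          # (num_cities, num_agents)
dist = torch.distributions.Categorical(logits=logits)
assignment = dist.sample()                          # discrete city-to-agent allocation
cost = tsp_cost(cities, assignment, num_agents)     # black-box, non-differentiable
baseline = surrogate(cities.flatten())              # learned control variate

# Score-function gradient with the control variate reducing its variance;
# the surrogate itself is regressed toward the observed cost.
policy_loss = (cost - baseline.detach()) * dist.log_prob(assignment).sum()
surrogate_loss = (baseline - cost) ** 2
(policy_loss + surrogate_loss).backward()
opt.step()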
Please refer to the iSeries articles for more technical details!