PyPose is officially released

A Library for Robot Learning with Physics-based Optimization

Deep learning has had remarkable success in robotic perception, but its data-centric nature makes it hard to generalize to ever-changing environments. By contrast, physics-based optimization generalizes better, but it does not perform as well in complicated tasks due to the lack of high-level semantic information and the reliance on manual parametric tuning. To take advantage of these two complementary worlds, we present PyPose: a robotics-oriented, PyTorch-based library that combines deep perceptual models with physics-based optimization techniques. Our design goal for PyPose is to make it user-friendly, efficient, and interpretable, with a tidy and well-organized architecture. With an imperative-style interface, it can be easily integrated into real-world robotic applications.


PyPose is highly efficient and supports parallel computation of Jacobians for Lie group and Lie algebra operations. See the comparison below.
Efficiency comparisons of Lie group operations on CPU and GPU (we take Theseus performance as 1×).

More details on the efficiency comparison can be found in the PyPose paper.
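
As a quick illustration of what these operations look like in code, here is a minimal sketch (with a toy batch size and random values) of batched LieTensor operations; a LieTensor behaves like an ordinary PyTorch tensor, so it works with autograd and can run on CPU or GPU:

```python
import torch
import pypose as pp

# A batch of two random so(3) Lie algebra elements that require gradients.
r = pp.randn_so3(2, requires_grad=True)

R = r.Exp()              # exponential map: so(3) -> SO(3)
p = R @ torch.randn(3)   # rotate a random 3D point with the whole batch
q = R.Inv().Log()        # inverse rotation, then log map back to so(3)

# Gradients flow through all Lie group operations like any other PyTorch op.
q.sum().backward()
print(r.grad.shape)      # torch.Size([2, 3])
```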


To boost future research, we provide concrete examples across several fields of robotics, including SLAM, inertial navigation, planning, and control.

SLAM

To showcase PyPose’s ability to bridge learning and optimization tasks, we develop a method for learning the SLAM front-end in a self-supervised manner. As shown in Fig. 1 (a), matching accuracy (reprojection error \(\leq\) 1 pixel) increased by up to 77.5% on unseen sequences after self-supervised fine-tuning. We also show the resulting trajectory and point cloud on a test sequence in Fig. 1 (right). While the pretrained model quickly loses track, the fine-tuned model runs to completion with an ATE of 0.63 m. This verifies the feasibility of using PyPose for optimization in the SLAM back-end.

Fig. 1. An example of visual SLAM using the PyPose library.
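
Such a back-end can be built on PyPose’s second-order optimizers. Below is a minimal, self-contained sketch, adapted from the library’s optimizer examples: the `PoseInv` module and the random SE(3) inputs are illustrative toys rather than the actual SLAM back-end, and simply show how a Levenberg-Marquardt optimizer refines LieTensor parameters.

```python
import torch
import pypose as pp

class PoseInv(torch.nn.Module):
    """Toy residual model: learn poses whose composition with the input is identity."""
    def __init__(self, *dim):
        super().__init__()
        self.pose = pp.Parameter(pp.randn_SE3(*dim))

    def forward(self, input):
        # Residual in the tangent space; the optimizer drives it towards zero.
        return (self.pose @ input).Log().tensor()

input = pp.randn_SE3(2, 2)
model = PoseInv(2, 2)
optimizer = pp.optim.LM(model, strategy=pp.optim.strategy.Constant(damping=1e-4))

for step in range(10):
    loss = optimizer.step(input)   # one Levenberg-Marquardt iteration
    print(f'step {step}: loss = {loss}')
```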

Planning

The PyPose library can be used to develop a novel end-to-end planning policy that maps depth sensor inputs directly to kinodynamically feasible trajectories. As shown in Fig. 2 (a), our method achieves around a \(3\times\) speedup on average compared to a traditional planning framework, which uses a combined pipeline of geometry-based terrain analysis and motion-primitive-based planning (Fig. 2 (b)). The efficiency of this method benefits from both the end-to-end planning pipeline and the efficiency of the PyPose library for training and deployment. Furthermore, this end-to-end policy has been integrated and tested on a real legged robot, ANYmal. A planning instance during the field test, using the current depth observation shown in Fig. 2 (d), is shown in Fig. 2 (c).

Fig. 2. An example of end-to-end planning using the PyPose library.

Control

PyPose also integrates dynamics and control tasks into the end-to-end learning framework. We demonstrate this capability with a learning-based MPC on an imitation learning problem, Diff-MPC, in which both the expert and the learner employ a linear-quadratic regulator (LQR) and the learner tries to recover the dynamics using only expert controls. We treat MPC as a generic policy class with parameterized cost functions and dynamics, which can be learned by automatic differentiation (AD) through the LQR optimization. Our backward pass is consistently faster because Diff-MPC needs to solve an additional LQR iteration in its backward pass. This computational advantage may become more prominent as more complex optimization problems are involved in learning.

Fig. 3. An example of MPC with imitation learning using the PyPose library.
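
For reference, here is a minimal sketch of PyPose’s LQR module on a random discrete-time LTI system; the system matrices, cost, and dimensions below are arbitrary placeholders, and the returned state and control trajectories are differentiable, so cost or dynamics parameters can be learned end-to-end.

```python
import torch
import pypose as pp

n_batch, T = 2, 5                 # batch size and horizon
n_state, n_ctrl = 4, 3
n_sc = n_state + n_ctrl

# Random positive semi-definite quadratic cost over [state, control].
Q = torch.randn(n_batch, T, n_sc, n_sc)
Q = Q.mT @ Q
p = torch.randn(n_batch, T, n_sc)

# Random discrete-time LTI dynamics x' = Ax + Bu, observation y = Cx + Du.
A = torch.randn(n_batch, n_state, n_state)
B = torch.randn(n_batch, n_state, n_ctrl)
C = torch.tile(torch.eye(n_state), (n_batch, 1, 1))
D = torch.tile(torch.zeros(n_state, n_ctrl), (n_batch, 1, 1))
lti = pp.module.LTI(A, B, C, D)

x_init = torch.randn(n_batch, n_state)
lqr = pp.module.LQR(lti, Q, p, T)
x, u = lqr(x_init)                # optimal state and control trajectories
print(x.shape, u.shape)
```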

IMU preintegration

To boost future research in this field, PyPose provides an IMUPreintegrator module for differentiable IMU preintegration with covariance propagation. It supports batched operation, cumulative product, and integration on the manifold. In the example shown in Fig. 4, we train an IMU calibration network that denoises the IMU signals; we then integrate the denoised signals and compute the IMU’s state, expressed with LieTensor, including position, orientation, and velocity. To learn the parameters of the deep neural network (DNN), we supervise the integrated pose and back-propagate the gradient through the integrator module. Our method achieves a significant improvement in both orientation and translation compared to the traditional method.

Fig. 4. The framework of the IMU calibration network using PyPose’s `IMUPreintegrator` with `LieTensor`.
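
The snippet below is a minimal sketch of the integrator API on a single made-up IMU sample; a real pipeline would feed batched, denoised sequences from the calibration network and supervise the integrated pose as described above.

```python
import torch
import pypose as pp

# Initial IMU state: position, orientation (an SO(3) LieTensor), and velocity.
pos = torch.zeros(3)
rot = pp.identity_SO3()
vel = torch.zeros(3)
integrator = pp.module.IMUPreintegrator(pos, rot, vel)

# A single (made-up) IMU sample: time interval, angular rate, and acceleration.
dt  = torch.tensor([0.002])
ang = torch.tensor([0.1, 0.1, 0.1])
acc = torch.tensor([0.1, 0.1, 9.8])

# Propagate the state on the manifold; the result holds the integrated position,
# orientation, velocity, and the propagated covariance, all differentiable.
states = integrator(dt, ang, acc)
print(states)
```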

For more usage, see the Documentation. For more applications, see the Examples.

Citing PyPose

If you use PyPose, please cite the paper below. You may also download it here.

@article{wang2022pypose,
  title   = {PyPose: A Library for Robot Learning with Physics-based Optimization},
  author  = {Wang, Chen and Gao, Dasong and Xu, Kuan and Geng, Junyi and Hu, Yaoyu and Qiu, Yuheng and Li, Bowen and Yang, Fan and Moon, Brady and Pandey, Abhinav and Aryan and Xu, Jiahe and Wu, Tianhao and He, Haonan and Huang, Daning and Ren, Zhongqiang and Zhao, Shibo and Fu, Taimeng and Reddy, Pranay and Lin, Xiao and Wang, Wenshan and Shi, Jingnan and Talak, Rajat and Cao, Kun and Du, Yi and Wang, Han and Yu, Huai and Wang, Shanzhao and Chen, Siyu and Kashyap, Ananth  and Bandaru, Rohan and Dantu, Karthik and Wu, Jiajun and Xie, Lihua and Carlone, Luca and Hutter, Marco and Scherer, Sebastian},
  journal = {arXiv preprint arXiv:2209.15428},
  year    = {2022}
}