Build sparse BA with normal Python control flow
Research Paper
BAE brings bundle adjustment and pose graph optimization into eager-mode PyTorch with sparsity-aware autodiff and GPU-accelerated sparse linear algebra, so research code stays debuggable, composable, and fast as scenes grow.
Why it exists
The paper frames BAE around eager execution: researchers can prototype, debug, and adapt optimization code in the same interactive style that made PyTorch popular.
What is new
BAE combines sparse-Jacobian-aware autodiff with GPU sparse linear algebra so second-order optimization stays both flexible in eager mode and scalable on real problems.
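To make the idea concrete, here is a hypothetical, self-contained sketch (not BAE's actual API) of one damped Gauss-Newton step on a toy least-squares problem whose Jacobian is block-sparse, mimicking bundle adjustment, where each reprojection residual touches only one camera and one point. NumPy/SciPy stand in for BAE's GPU sparse kernels; all names and values are illustrative.

```python
import numpy as np
from scipy.sparse import lil_matrix, identity
from scipy.sparse.linalg import spsolve

# Toy problem: "cameras" are parameters 0-2, "points" are parameters 3-5.
# Every residual couples exactly one camera with one point, so the
# Jacobian has the bipartite sparsity pattern characteristic of BA.
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 0.7])
pairs = [(0, 3), (0, 4), (1, 3), (1, 5), (2, 4), (2, 5)]

def residuals(x):
    # r_k = x_a * x_b - 1: a stand-in for a reprojection error.
    return np.array([x[a] * x[b] - 1.0 for a, b in pairs])

r = residuals(x)
J = lil_matrix((len(pairs), len(x)))      # sparse Jacobian: 2 nonzeros per row
for k, (a, b) in enumerate(pairs):
    J[k, a] = x[b]                        # d r_k / d x_a
    J[k, b] = x[a]                        # d r_k / d x_b
J = J.tocsr()

lam = 1e-3                                # Levenberg-Marquardt damping
H = (J.T @ J + lam * identity(len(x))).tocsc()
step = spsolve(H, -J.T @ r)               # damped normal equations, solved sparsely
x_new = x + step
```

The point of the sparsity-aware autodiff is that `J` is never materialized densely: a real BA Jacobian has millions of rows but only a handful of nonzeros per row, so both its assembly and the normal-equations solve can stay sparse end to end.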
What it unlocks
The same LM-style optimizer patterns extend from bundle adjustment to pose graph optimization and support downstream reconstruction systems.
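The carry-over is easy to see in a sketch: the same damped-normal-equations pattern applied to a pose graph instead of a BA problem. This is an illustrative toy (not BAE's API) with scalar 1D poses; each edge measures the offset between two poses, and pose 0 is anchored with a strong prior to fix the gauge.

```python
import numpy as np

poses = np.array([0.0, 0.9, 2.2, 2.8])   # initial pose guesses
# (i, j, measured offset from pose i to pose j)
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 3.0)]

def solve_once(poses, lam=1e-6):
    n, m = len(poses), len(edges)
    r = np.zeros(m)
    J = np.zeros((m, n))                  # sparse in practice: 2 nonzeros per row
    for k, (i, j, meas) in enumerate(edges):
        r[k] = (poses[j] - poses[i]) - meas
        J[k, i], J[k, j] = -1.0, 1.0
    H = J.T @ J + lam * np.eye(n)         # damped normal equations, as in BA
    H[0, 0] += 1e6                        # anchor pose 0 so the system is well posed
    return poses + np.linalg.solve(H, -J.T @ r)

poses = solve_once(poses)                 # this toy graph is linear: one step suffices
```

Only the residual and Jacobian change between BA and PGO; the optimizer loop, the damping, and the sparse solve are shared, which is what lets one eager-mode code path serve both problems.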
Performance
Built as a PyTorch-native, sparsity-aware BA library with GPU-accelerated sparse linear algebra, BAE's runtime advantage over baseline solvers widens as optimization problems scale.
Runtime comparison chart: bar lengths shown on a logarithmic scale.
Interactive compute graphs for Bundle Adjustment and Pose Graph Optimization.
Our work powers BA and global positioning in the downstream system InstantSfM.
Citation
If BAE is helpful for your research, please consider citing the paper below.
@article{zhan2025bundle,
  title   = {Bundle Adjustment in the Eager Mode},
  author  = {Zhan, Zitong and Xu, Huan and Fang, Zihang and Wei, Xinpeng and Hu, Yaoyu and Wang, Chen},
  journal = {arXiv preprint arXiv:2409.12190},
  year    = {2025},
  url     = {https://arxiv.org/abs/2409.12190}
}