Research Paper

Sparse second-order optimization that feels native to PyTorch

BAE brings bundle adjustment and pose graph optimization into eager-mode PyTorch with sparsity-aware autodiff and GPU-accelerated sparse linear algebra, so research code stays debuggable, composable, and fast as scenes grow.

Paper

Bundle Adjustment In The Eager Mode

PyTorch-native eager mode · Sparsity-aware autodiff · GPU sparse linear algebra · Flexible compute graphs beyond BA & PGO

Why it exists

Build sparse BA with normal Python control flow

The paper frames BAE around eager execution: researchers can prototype, debug, and adapt optimization code in the same interactive style that made PyTorch popular.
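To make the eager-mode style concrete, here is a minimal Gauss-Newton loop on a toy nonlinear least-squares problem written with ordinary Python control flow. This is a generic PyTorch sketch of the workflow the paper describes, not the BAE API.

```python
import torch

# Toy nonlinear least squares solved with eager-mode Gauss-Newton.
# Model: y = a * exp(b * t). The loop is plain Python, so you can
# print, branch, or drop into a debugger at any step.
t = torch.linspace(0, 1, 20)
y = 2.0 * torch.exp(0.5 * t)  # synthetic observations

def residual(params):
    a, b = params
    return a * torch.exp(b * t) - y

params = torch.tensor([1.0, 0.0])
for it in range(10):  # ordinary Python control flow
    J = torch.autograd.functional.jacobian(residual, params)  # (20, 2)
    r = residual(params)
    # Gauss-Newton step via the normal equations J^T J dx = -J^T r.
    step = torch.linalg.solve(J.T @ J, -J.T @ r)
    params = params + step
    if r.norm() < 1e-8:  # early exit is just a Python `if`
        break

print(params)  # converges toward [2.0, 0.5]
```

Because nothing is traced or compiled ahead of time, changing the model, the stopping rule, or the solver is a one-line Python edit.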

What is new

Sparsity-aware autodiff and GPU sparse linear algebra

BAE combines sparse-Jacobian-aware autodiff with GPU sparse linear algebra so second-order optimization stays both flexible in eager mode and scalable on real problems.
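The sparsity being exploited comes from the problem structure: each 2-D reprojection residual touches only one camera and one point, so Jacobian rows have a handful of nonzeros no matter how large the scene is. A small sketch with illustrative sizes (using scipy.sparse on CPU to show the idea, not BAE's GPU internals):

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix, identity
from scipy.sparse.linalg import spsolve

# Each observation's 2-D residual depends on one camera (6 params) and
# one point (3 params): 9 nonzero columns per row, regardless of scale.
n_cams, n_pts, n_obs = 10, 100, 500
n_params = 6 * n_cams + 3 * n_pts  # 360
rng = np.random.default_rng(0)

J = lil_matrix((2 * n_obs, n_params))
for k in range(n_obs):
    c = rng.integers(n_cams)
    p = rng.integers(n_pts)
    J[2*k:2*k+2, 6*c:6*c+6] = rng.standard_normal((2, 6))
    col = 6 * n_cams + 3 * p
    J[2*k:2*k+2, col:col+3] = rng.standard_normal((2, 3))
J = csr_matrix(J)

density = J.nnz / (J.shape[0] * J.shape[1])
print(f"Jacobian density: {density:.1%}")  # prints "Jacobian density: 2.5%"

# Damped (LM-style) normal equations, kept sparse end to end.
H = (J.T @ J + 1e-3 * identity(n_params)).tocsc()
g = J.T @ rng.standard_normal(2 * n_obs)
step = spsolve(H, g)
```

A dense Jacobian here would store 360,000 entries where only 9,000 are nonzero, and the gap grows with scene size, which is why both the autodiff and the linear solver need to be sparsity-aware.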

What it unlocks

One stack for BA, PGO, and downstream systems

The same LM-style optimizer patterns extend from bundle adjustment to pose graph optimization and support downstream reconstruction systems.
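The key point is that the Levenberg-Marquardt loop is factor-agnostic: the same damped-update-with-acceptance pattern works whether the residual function stacks reprojection errors (BA) or relative-pose errors (PGO). A minimal sketch in plain torch, with a toy 1-D "pose graph" (not the BAE API):

```python
import torch

def lm_solve(residual, x, iters=20, lam=1e-3):
    """Generic Levenberg-Marquardt loop: residual-agnostic by design."""
    for _ in range(iters):
        J = torch.autograd.functional.jacobian(residual, x)
        r = residual(x)
        H = J.T @ J + lam * torch.eye(x.numel())
        step = torch.linalg.solve(H, -J.T @ r)
        if residual(x + step).norm() < r.norm():
            x, lam = x + step, lam * 0.5  # accept step, relax damping
        else:
            lam *= 10.0                   # reject step, damp harder
    return x

# Toy pose graph on a line: recover positions from relative offsets.
target = torch.tensor([0.0, 1.0, 3.0])
meas = target[1:] - target[:-1]  # relative measurements

def residual(x):
    prior = x[:1] - target[:1]   # anchor one node to fix gauge freedom
    return torch.cat([prior, (x[1:] - x[:-1]) - meas])

x = lm_solve(residual, torch.zeros(3))
```

Swapping in reprojection residuals over camera and point parameters, instead of relative-offset residuals, reuses the same loop unchanged; that is the sense in which one optimizer pattern spans BA, PGO, and downstream systems.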

Performance

Speedups that widen as scenes grow

Built as a PyTorch-native, sparsity-aware BA library with GPU-accelerated sparse linear algebra, it widens the runtime gap as optimization problems scale.

Relative runtime (ours is fastest):
GTSAM: 18.5× slower
g2o: 22× slower
Ceres: 23× slower

Bar lengths are shown on a logarithmic scale.

Log-scale speedup trend across increasing problem sizes

Sparsity-aware Jacobian Construction

Interactive compute graphs for Bundle Adjustment and Pose Graph Optimization.

Our work powers BA and global positioning in the downstream system InstantSfM.

Broad Downstream Usage

Scene galleries: Bonsai, Counter, Garden, Kitchen, Room

Citation

BibTeX

If BAE is helpful for your research, please consider citing the paper below.

@article{zhan2025bundle,
  title = {Bundle Adjustment in the Eager Mode},
  author = {Zhan, Zitong and Xu, Huan and Fang, Zihang and Wei, Xinpeng and Hu, Yaoyu and Wang, Chen},
  journal = {arXiv preprint arXiv:2409.12190},
  year = {2025},
  url = {https://arxiv.org/abs/2409.12190}
}