The Theme of SAIR Lab in 2025 is Transform
The year 2025 marked a period of fundamental transformation for the SAIR Lab. Across research, open-source infrastructure, community engagement, and student development, the lab advanced toward a more unified, scalable, and impactful vision of neuro-symbolic robot autonomy. This post highlights the people, ideas, and milestones that defined SAIR Lab in 2025.
1. SAIR STAR Awards 2025
To recognize outstanding contributions across the lab, SAIR Lab annually presents the SAIR STAR Awards, based on anonymous voting by all lab members. We are proud to announce the 2025 award recipients.
SAIR Rising STAR
The SAIR Rising STAR Award is the highest honor for junior researchers in SAIR Lab.
The recipient receives a certificate and a $3,000 research award.
Winner: Qiwei Du
For a fascinating demonstration of efficient task planning, showcasing both technical depth and system-level thinking.
SAIR STAR Award
The SAIR STAR Award is the highest overall honor in SAIR Lab, recognizing research depth, leadership, and broader impact.
The recipient receives a trophy, medal, certificate, and a $7,000 research award.
Winner: Yi Du
For outstanding leadership and impactful contributions to vision-language navigation research.
Congratulations to both award recipients!
2. Highlighted Research Achievements
Five PhD students successfully passed their Qualifying Exams, marking a major milestone in their doctoral journeys and a significant step toward independent research. Congratulations to Yi Du, Shaoshu Su, Zitong Zhan, Zhipeng Zhao, and Taimeng Fu!
In 2025, SAIR Lab continued to push forward its core research agenda on Imperative Learning (IL), a unified neuro-symbolic framework for robot autonomy. Our flagship IL work has now been formally accepted to The International Journal of Robotics Research (IJRR), marking a major milestone for the lab. Below is the list of iSeries articles accepted in 2025; a toy sketch of IL's bilevel structure follows the list.
iA*: Imperative Learning-based A* Search for Path Planning.
Xiangyu Chen, Fan Yang, Chen Wang.
IEEE Robotics and Automation Letters (RA-L), vol. 10, no. 12, pp. 12987–12994, 2025.
Highlighted as an iSeries Article in Path Planning
@article{chen2025iastar,
title = {{iA*}: Imperative Learning-based A* Search for Path Planning},
author = {Chen, Xiangyu and Yang, Fan and Wang, Chen},
journal = {IEEE Robotics and Automation Letters (RA-L)},
year = {2025},
volume = {10},
number = {12},
pages = {12987-12994},
url = {https://arxiv.org/abs/2403.15870},
code = {https://github.com/sair-lab/iAstar},
website = {https://sairlab.org/iastar/},
addendum = {Highlighted as an iSeries Article in Path Planning}
}
iWalker: Imperative Visual Planning for Walking Humanoid Robot.
Xiao Lin, Yuhao Huang, Taimeng Fu, Xiaobin Xiong, Chen Wang.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2025.
Highlighted as an iSeries Article in Humanoid Robots; Best Workshop Paper Award
@inproceedings{lin2025iwalker,
title = {{iWalker}: Imperative Visual Planning for Walking Humanoid Robot},
author = {Lin, Xiao and Huang, Yuhao and Fu, Taimeng and Xiong, Xiaobin and Wang, Chen},
booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2025},
url = {https://arxiv.org/abs/2409.18361},
video = {https://youtu.be/FPV74PznzTU},
website = {https://sairlab.org/iwalker},
addendum = {Highlighted as an iSeries Article in Humanoid Robots; Best Workshop Paper Award}
}
Imperative Learning: A Self-supervised Neuro-Symbolic Learning Framework for Robot Autonomy.
Chen Wang, Kaiyi Ji, Junyi Geng, Zhongqiang Ren, Taimeng Fu, Fan Yang, Yifan Guo, Haonan He, Xiangyu Chen, Zitong Zhan, Qiwei Du, Shaoshu Su, Bowen Li, Yuheng Qiu, Yi Du, Qihang Li, Yifan Yang, Xiao Lin, Zhipeng Zhao.
International Journal of Robotics Research (IJRR), 2025.
Unifying Neuro-Symbolic Robot Autonomy via a Unified Paradigm
@article{wang2025imperative,
title = {Imperative Learning: A Self-supervised Neuro-Symbolic Learning Framework for Robot Autonomy},
author = {Wang, Chen and Ji, Kaiyi and Geng, Junyi and Ren, Zhongqiang and Fu, Taimeng and Yang, Fan and Guo, Yifan and He, Haonan and Chen, Xiangyu and Zhan, Zitong and Du, Qiwei and Su, Shaoshu and Li, Bowen and Qiu, Yuheng and Du, Yi and Li, Qihang and Yang, Yifan and Lin, Xiao and Zhao, Zhipeng},
journal = {International Journal of Robotics Research (IJRR)},
year = {2025},
url = {https://arxiv.org/abs/2406.16087},
code = {https://github.com/sair-lab/iSeries},
website = {https://sairlab.org/iseries},
addendum = {Unifying Neuro-Symbolic Robot Autonomy via a Unified Paradigm}
}
iKap: Kinematics-aware Planning with Imperative Learning.
Qihang Li, Zhuoqun Chen, Haoze Zheng, Haonan He, Shaoshu Su, Junyi Geng, Chen Wang.
IEEE International Conference on Robotics and Automation (ICRA), pp. 10164–10170, 2025.
Highlighted as an iSeries Article in Quadruped Robots
@inproceedings{li2025ikap,
title = {{iKap}: Kinematics-aware Planning with Imperative Learning},
author = {Li, Qihang and Chen, Zhuoqun and Zheng, Haoze and He, Haonan and Su, Shaoshu and Geng, Junyi and Wang, Chen},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2025},
pages = {10164--10170},
url = {https://arxiv.org/abs/2412.09496},
video = {https://youtu.be/7HPAMFbHc4U},
website = {https://sairlab.org/iKap},
addendum = {Highlighted as an iSeries Article in Quadruped Robots}
}
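To make the common recipe behind these iSeries papers concrete, here is a minimal, illustrative PyTorch sketch of IL's bilevel structure: a neural network at the upper level predicts the parameters of a symbolic optimization at the lower level, and a self-supervised upper-level loss backpropagates through the lower-level solution. The names and the toy quadratic lower-level problem are hypothetical simplifications for illustration (the actual papers plug in planners, solvers, and physics engines at the lower level); this is not the lab's released code.

import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Upper-level neural module: predicts positive cost weights from observations."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(),
            nn.Linear(32, dim), nn.Softplus(),
        )

    def forward(self, obs):
        return self.net(obs) + 1e-3  # keep costs strictly positive

def lower_level_solve(costs, goal):
    # Toy symbolic lower level: argmin_mu 0.5 * costs * mu^2 - goal * mu,
    # solved in closed form as mu* = goal / costs. Because the solution is
    # differentiable in costs, the upper-level loss backpropagates through it.
    return goal / costs

torch.manual_seed(0)
net = PerceptionNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-2)
obs = torch.randn(16, 8)   # toy observations
goal = torch.randn(16, 8)  # toy task specification

for step in range(200):
    costs = net(obs)                     # upper level: neural prediction
    mu = lower_level_solve(costs, goal)  # lower level: optimization/reasoning
    loss = (mu - goal).pow(2).mean()     # self-supervised residual, no labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()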
Additional First-Authored Publications
Beyond the iSeries, many lab members published or submitted their first-authored papers to top-tier robotics and vision venues, reflecting the breadth of SAIR Lab's research.
SuperPC: A Single Diffusion Model for Point Cloud Completion, Upsampling, Denoising, and Colorization.
Yi Du, Zhipeng Zhao, Shaoshu Su, Sharath Golluri, Haoze Zheng, Runmao Yao, Chen Wang.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16953–16964, 2025.
@inproceedings{du2025superpc,
title = {{SuperPC}: A Single Diffusion Model for Point Cloud Completion, Upsampling, Denoising, and Colorization},
author = {Du, Yi and Zhao, Zhipeng and Su, Shaoshu and Golluri, Sharath and Zheng, Haoze and Yao, Runmao and Wang, Chen},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
url = {https://arxiv.org/abs/2503.14558},
website = {https://sairlab.org/superpc/},
year = {2025},
pages = {16953--16964}
}
Enhancing Scene Coordinate Regression with Efficient Keypoint Detection and Sequential Information.
Kuan Xu, Zeyu Jiang, Haozhi Cao, Shenghai Yuan, Chen Wang, Lihua Xie.
IEEE Robotics and Automation Letters (RA-L), vol. 10, no. 10, pp. 9932–9939, 2025.
@article{xu2025enhancing,
title = {Enhancing Scene Coordinate Regression with Efficient Keypoint Detection and Sequential Information},
author = {Xu, Kuan and Jiang, Zeyu and Cao, Haozhi and Yuan, Shenghai and Wang, Chen and Xie, Lihua},
journal = {IEEE Robotics and Automation Letters (RA-L)},
year = {2025},
volume = {10},
number = {10},
pages = {9932-9939},
url = {https://arxiv.org/abs/2412.06488},
code = {https://github.com/sair-lab/SeqACE},
video = {https://youtu.be/5OcR5KeO5nc}
}
AirRoom: Objects Matter in Room Reidentification.
Runmao Yao, Yi Du, Zhuoqun Chen, Haoze Zheng, Chen Wang.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1385–1394, 2025.
@inproceedings{yao2025airroom,
title = {{AirRoom}: Objects Matter in Room Reidentification},
author = {Yao, Runmao and Du, Yi and Chen, Zhuoqun and Zheng, Haoze and Wang, Chen},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
url = {https://arxiv.org/abs/2503.01130},
website = {https://sairlab.org/airroom/},
year = {2025},
pages = {1385--1394},
code = {https://github.com/21yrm/AirRoom}
}
AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System.
Kuan Xu, Yuefan Hao, Shenghai Yuan, Chen Wang, Lihua Xie.
IEEE Transactions on Robotics (T-RO), vol. 41, pp. 1673–1692, 2025.
Real-time Visual SLAM at 70Hz with Superior Accuracy
@article{xu2025airslam,
title = {{AirSLAM}: An Efficient and Illumination-Robust Point-Line Visual SLAM System},
author = {Xu, Kuan and Hao, Yuefan and Yuan, Shenghai and Wang, Chen and Xie, Lihua},
journal = {IEEE Transactions on Robotics (T-RO)},
year = {2025},
volume = {41},
pages = {1673-1692},
url = {https://arxiv.org/abs/2408.03520},
code = {https://github.com/sair-lab/AirSLAM},
video = {https://youtu.be/5OcR5KeO5nc},
website = {https://sairlab.org/airslam},
addendum = {Real-time Visual SLAM at 70Hz with Superior Accuracy}
}
Vision-Language Memory for Spatial Reasoning.
Zuntao Liu, Yi Du, Taimeng Fu, Shaoshu Su, Cherie Ho, Chen Wang.
arXiv preprint arXiv:2511.20644, 2025.
@article{liu2025vlm2,
title = {Vision-Language Memory for Spatial Reasoning},
author = {Liu, Zuntao and Du, Yi and Fu, Taimeng and Su, Shaoshu and Ho, Cherie and Wang, Chen},
year = {2025},
journal = {arXiv preprint arXiv:2511.20644},
url = {https://arxiv.org/abs/2511.20644},
website = {https://sairlab.org/vlm2/}
}
Fast Task Planning with Neuro-Symbolic Relaxation.
Qiwei Du, Bowen Li, Yi Du, Shaoshu Su, Taimeng Fu, Zitong Zhan, Zhipeng Zhao, Chen Wang.
arXiv preprint arXiv:2507.15975, 2025.
@article{du2025fast,
title = {Fast Task Planning with Neuro-Symbolic Relaxation},
author = {Du, Qiwei and Li, Bowen and Du, Yi and Su, Shaoshu and Fu, Taimeng and Zhan, Zitong and Zhao, Zhipeng and Wang, Chen},
year = {2025},
journal = {arXiv preprint arXiv:2507.15975},
url = {https://arxiv.org/abs/2507.15975},
website = {https://sairlab.org/flax/},
video = {https://youtu.be/_4DYcqwycnQ}
}
Bundle Adjustment in the Eager Mode.
Zitong Zhan, Huan Xu, Zihang Fang, Xinpeng Wei, Yaoyu Hu, Chen Wang.
arXiv preprint arXiv:2409.12190, 2025.
A GPU Implementation Achieving 20x Speedup
@article{zhan2025bundle,
title = {Bundle Adjustment in the Eager Mode},
author = {Zhan, Zitong and Xu, Huan and Fang, Zihang and Wei, Xinpeng and Hu, Yaoyu and Wang, Chen},
journal = {arXiv preprint arXiv:2409.12190},
year = {2025},
url = {https://arxiv.org/abs/2409.12190},
addendum = {A GPU Implementation Achieving 20x Speedup}
}
VL-Nav: Real-time Vision-Language Navigation with Spatial Reasoning.
Yi Du, Taimeng Fu, Zhuoqun Chen, Bowen Li, Shaoshu Su, Zhipeng Zhao, Chen Wang.
arXiv preprint arXiv:2502.00931, 2025.
Deployment-Ready Neuro-Symbolic Vision-Language Navigation
@article{du2025vlnav,
title = {{VL-Nav}: Real-time Vision-Language Navigation with Spatial Reasoning},
author = {Du, Yi and Fu, Taimeng and Chen, Zhuoqun and Li, Bowen and Su, Shaoshu and Zhao, Zhipeng and Wang, Chen},
year = {2025},
journal = {arXiv preprint arXiv:2502.00931},
url = {https://arxiv.org/abs/2502.00931},
website = {https://sairlab.org/vlnav/},
addendum = {Deployment-Ready Neuro-Symbolic Vision-Language Navigation}
}
AnyNav: Visual Neuro-symbolic Friction Learning for Off-road Navigation.
Taimeng Fu, Zitong Zhan, Zhipeng Zhao, Shaoshu Su, Xiao Lin, Ehsan Tarkesh Esfahani, Karthik Dantu, Souma Chowdhury, Chen Wang.
arXiv preprint arXiv:2501.12654, 2025.
A Self-Supervised Friction Estimation Framework
@article{fu2025anynav,
title = {{AnyNav}: Visual Neuro-symbolic Friction Learning for Off-road Navigation},
author = {Fu, Taimeng and Zhan, Zitong and Zhao, Zhipeng and Su, Shaoshu and Lin, Xiao and Esfahani, Ehsan Tarkesh and Dantu, Karthik and Chowdhury, Souma and Wang, Chen},
journal = {arXiv preprint arXiv:2501.12654},
year = {2025},
url = {https://arxiv.org/abs/2501.12654},
website = {https://sairlab.org/anynav/},
addendum = {A Self-Supervised Friction Estimation Framework}
}
Collaborative Research with External Groups
SAIR Lab also collaborated closely with external research groups, resulting in several high-impact, top-tier publications.
Resilient Odometry via Hierarchical Adaptation.
Shibo Zhao, Sifan Zhou, Yuchen Zhang, Ji Zhang, Chen Wang, Wenshan Wang, Sebastian Scherer.
Science Robotics, vol. 10, no. 109, 2025.
Selected as the Top Featured Image by Science Robotics
@article{zhao2025resilient,
title = {Resilient Odometry via Hierarchical Adaptation},
author = {Zhao, Shibo and Zhou, Sifan and Zhang, Yuchen and Zhang, Ji and Wang, Chen and Wang, Wenshan and Scherer, Sebastian},
journal = {Science Robotics},
volume = {10},
number = {109},
year = {2025},
publisher = {American Association for the Advancement of Science},
url = {https://doi.org/10.1126/scirobotics.adv1818},
video = {https://youtu.be/xpRZGgGaFRA},
website = {https://superodometry.com/},
code = {https://github.com/superxslam/SuperOdom},
addendum = {Selected as the Top Featured Image by Science Robotics}
}
Spatially-Enhanced Recurrent Memory for Long-Range Mapless Navigation via End-to-End Reinforcement Learning.
Fan Yang, Per Frivik, David Hoeller, Chen Wang, Cesar Cadena, Marco Hutter.
International Journal of Robotics Research (IJRR), 2025.
@article{yang2025improving,
title = {Spatially-Enhanced Recurrent Memory for Long-Range Mapless Navigation via End-to-End Reinforcement Learning},
author = {Yang, Fan and Frivik, Per and Hoeller, David and Wang, Chen and Cadena, Cesar and Hutter, Marco},
journal = {International Journal of Robotics Research (IJRR)},
year = {2025},
url = {https://arxiv.org/abs/2506.05997}
}
BridgeDepth: Bridging Monocular and Stereo Reasoning with Latent Alignment.
Tongfan Guan, Jiaxin Guo, Chen Wang, Yun-Hui Liu.
International Conference on Computer Vision (ICCV), pp. 27681–27691, 2025.
Selected as a Highlight Paper at ICCV 2025
@inproceedings{guan2025bridgedepth,
title = {BridgeDepth: Bridging Monocular and Stereo Reasoning with Latent Alignment},
author = {Guan, Tongfan and Guo, Jiaxin and Wang, Chen and Liu, Yun-Hui},
booktitle = {International Conference on Computer Vision (ICCV)},
year = {2025},
pages = {27681--27691},
url = {https://www.arxiv.org/abs/2508.04611},
code = {https://github.com/aeolusguan/BridgeDepth},
addendum = {Selected as a Highlight Paper at ICCV 2025}
}
MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions.
Cherie Ho, Seungchan Kim, Brady Moon, Aditya Parandekar, Narek Harutyunyan, Chen Wang, Katia Sycara, Graeme Best, Sebastian Scherer.
IEEE International Conference on Robotics and Automation (ICRA), pp. 13074–13080, 2025.
@inproceedings{ho2025mapex,
title = {{MapEx}: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions},
author = {Ho, Cherie and Kim, Seungchan and Moon, Brady and Parandekar, Aditya and Harutyunyan, Narek and Wang, Chen and Sycara, Katia and Best, Graeme and Scherer, Sebastian},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2025},
pages = {13074--13080},
url = {https://arxiv.org/abs/2409.15590}
}
Conference Participation and Recognition
Lab members actively participated in major international conferences, including CVPR, ICRA, and IROS, presenting research and engaging with the broader robotics community.
Notably, the paper iWalker, first-authored by Xiao Lin in collaboration with Prof. Xiaobin Xiong, received the Best Workshop Paper Award. Congratulations to Xiao Lin on this outstanding achievement!
3. Long-Term Impact on the Open-Source Community
In 2025, SAIR Lab continued to make sustained investments in open-source infrastructure to support reproducible, scalable, and accessible robotics research. A flagship example is PyPose, our open-source library for differentiable robotics and optimization on manifolds. Over the past year alone, PyPose accumulated more than 150,000 downloads, reflecting its growing adoption by the global robotics and machine learning communities across academia and industry.
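For readers unfamiliar with PyPose, the snippet below sketches the kind of differentiable manifold operations it offers. It assumes the randn_SE3, Act, Log, and Exp calls from PyPose's public documentation, and is a minimal illustration rather than a complete usage pattern.

import torch
import pypose as pp

# A batch of two random SE(3) poses with gradient tracking enabled.
T = pp.randn_SE3(2, requires_grad=True)

# Transform a 3D point with each pose.
points = torch.randn(2, 3)
moved = T.Act(points)

# Map the poses to the tangent space se(3) and back onto the manifold.
xi = T.Log()
T_back = xi.Exp()

# Gradients flow through manifold operations like ordinary tensors.
loss = moved.norm() + xi.norm()
loss.backward()
print(T.grad.shape)  # one gradient per pose, matching the SE3 storage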
4. Outreach and Community Building
ICRA 2025 Workshop on Foundation Models and Neuro-Symbolic AI for Robotics
RoboRanking: A Robotics Faculty Hub and University Ranking System
In 2025, we released RoboRanking, a new platform designed to promote transparency, connectivity, and data-driven insights within the robotics academic community. The platform has been widely welcomed by prospective PhD applicants as a valuable resource for exploring research groups, faculty profiles, and institutional strengths.
5. SAIR Lab Activity Memo
Beyond research and publications, 2025 was also a year of strong internal growth, collaboration, and shared experiences within SAIR Lab.
Robotics Day 2025
The SAIR Lab welcomed K-12 students, undergraduates, teachers, and families from across Western New York for Robotics Day 2025, a vibrant outreach event that introduced young learners to the exciting world of robotics, artificial intelligence, and autonomous systems.
Lab Community and Social Activities
Throughout 2025, SAIR Lab organized a variety of social activities, fostering a supportive, collaborative, and inclusive lab culture beyond day-to-day research.
6. Looking Ahead
The Theme of SAIR Lab in 2026 will be Insight
Building on the transformations of 2025, SAIR Lab enters 2026 with a renewed focus on Insight, both in the technical sense and in defining a clearer, more ambitious long-term direction for robot autonomy research.