Depth completion is crucial for many robotic tasks such as autonomous driving, 3-D reconstruction, and manipulation. However, existing methods are typically designed for opaque objects and are computationally heavy, so they often fail on transparent objects and cannot meet the real-time requirements of low-power robots. To address these problems, we propose a Fast Depth Completion framework for Transparent objects (FDCT), which also benefits downstream tasks such as object pose estimation.
![](/img/posts/2023-07-29-fdct/task.jpg)
To exploit local information while avoiding overfitting when it is integrated with global information, we design a new fusion branch with shortcuts that exploit low-level features, along with a loss function that suppresses overfitting.
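The fusion idea can be sketched roughly as follows: a low-level feature map is carried forward via a shortcut and combined with the upsampled global features before predicting dense depth. This is a hypothetical PyTorch sketch, not the actual FDCT implementation; the module name, channel counts, and layer choices are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionBranch(nn.Module):
    """Illustrative fusion of low-level (local) and global features.

    Hypothetical sketch -- channel counts and ops are assumptions,
    not the published FDCT architecture.
    """
    def __init__(self, low_ch=32, high_ch=64, out_ch=1):
        super().__init__()
        # Shortcut: project low-level features to the global channel width
        self.shortcut = nn.Conv2d(low_ch, high_ch, kernel_size=1)
        # Fuse the concatenated features and predict a dense depth map
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * high_ch, high_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(high_ch, out_ch, kernel_size=1),
        )

    def forward(self, low_feat, high_feat):
        # Upsample coarse global features to the low-level resolution
        high_up = nn.functional.interpolate(
            high_feat, size=low_feat.shape[-2:],
            mode="bilinear", align_corners=False)
        fused = torch.cat([self.shortcut(low_feat), high_up], dim=1)
        return self.fuse(fused)

low = torch.randn(1, 32, 120, 160)   # fine-resolution local features
high = torch.randn(1, 64, 30, 40)    # coarse global features
depth = FusionBranch()(low, high)
print(depth.shape)  # torch.Size([1, 1, 120, 160])
```

The shortcut lets fine spatial detail (object edges, transparent-surface boundaries) bypass the deep encoder, which is what makes the recovered depth sharp at low compute cost.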
![](/img/posts/2023-07-29-fdct/framework.jpg)
This results in an accurate and easy-to-use depth rectification framework that can recover a dense depth map from RGB-D images alone. Extensive experiments demonstrate that FDCT runs at about 70 FPS with higher accuracy than state-of-the-art methods.
![](/img/posts/2023-07-29-fdct/real-demo.jpg)
We also demonstrate that FDCT can improve object pose estimation in grasping tasks.
![](/img/posts/2023-07-29-fdct/comparison.jpg)