# UDepFusion

*An Object-Aware Indoor Semantic Mapping Framework*

## Introduction
UDepFusion is an object-aware indoor RGB-D semantic mapping system built on the instance segmentation network YOLACT++ and the deep-learning depth estimation model FRCN. The system builds upon ElasticFusion[^1] and MaskFusion[^2].
UDepFusion can:

- Perform geometric and semantic segmentation on RGB-D input image sequences (or raw RGB sequences when depth is missing).
- Assign model IDs to the reconstructed models and track each model individually.
- Remove dynamic objects from the final semantic map by filtering.
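As a rough illustration of the dynamic-object filtering step above, the sketch below drops segmentation masks whose predicted class belongs to a set of typically dynamic indoor categories. The `Mask` structure, class names, and `DYNAMIC_CLASSES` set are assumptions for illustration only, not UDepFusion's actual API.

```python
from dataclasses import dataclass

# Classes that commonly move in indoor scenes (illustrative assumption).
DYNAMIC_CLASSES = {"person", "cat", "dog"}

@dataclass
class Mask:
    class_name: str   # semantic label predicted by the instance segmenter
    model_id: int     # id of the reconstructed model this mask belongs to

def filter_dynamic(masks):
    """Keep only masks of static classes for fusion into the semantic map."""
    return [m for m in masks if m.class_name not in DYNAMIC_CLASSES]

masks = [Mask("chair", 0), Mask("person", 1), Mask("monitor", 2)]
static = filter_dynamic(masks)
# static now holds only the chair and monitor masks
```

In a real pipeline this decision would typically also consider per-model motion cues from tracking, not just the class label.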
## Additional Depth Estimation

When no depth channel is available, depth is estimated from the RGB input with FRCN before fusion.
## Evaluation on the TUM RGB-D Benchmark
| Method | APE (rmse) | RPE (rmse) |
|---|---|---|
| Co-Fusion | 0.6738 | 0.0835 |
| MaskFusion | 0.7211 | 2.1530 |
| UDepFusion | 0.6943 | 2.1688 |
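For reference, APE (rmse) is the root-mean-square of per-frame translational errors between the aligned estimated and ground-truth trajectories. The sketch below shows the arithmetic on made-up positions; the values are illustrative and unrelated to the table above.

```python
import math

# Toy ground-truth and estimated camera positions (x, y, z) per frame.
# Values are illustrative only, not taken from the TUM evaluation above.
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.1, 0.0, 0.0), (1.0, 0.2, 0.0), (2.0, 0.0, -0.2)]

def ape_rmse(gt, est):
    """RMSE of per-frame Euclidean translation errors (trajectories pre-aligned)."""
    sq = [sum((a - b) ** 2 for a, b in zip(g, e)) for g, e in zip(gt, est)]
    return math.sqrt(sum(sq) / len(sq))

print(round(ape_rmse(gt, est), 4))  # → 0.1732
```

Tools such as the TUM benchmark scripts compute the same quantity after SE(3) alignment of the trajectories.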
[^1]: ElasticFusion paper: http://www.roboticsproceedings.org/rss11/p01.pdf
[^2]: MaskFusion paper: https://arxiv.org/abs/1804.09194