UDepFusion

An Object-Aware Indoor Semantic Mapping Framework

Introduction

UDepFusion is an object-aware indoor RGB-D semantic mapping system built on YOLACT++ instance segmentation and the deep-learning-based depth-estimation model FRCN. The system builds upon ElasticFusion [1] and MaskFusion [2].

UDepFusion can:

  • Perform geometric and semantic segmentation on RGB-D input image sequences (falling back to raw RGB when depth is missing).
  • Assign model IDs to the reconstructed models and track each model individually.
  • Remove dynamic objects from the final semantic map by filtering.
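The per-frame handling of missing depth and dynamic objects described above can be sketched roughly as follows. This is a minimal illustration, not UDepFusion's actual code: the function names, the invalid-depth convention (zero or NaN), and the dynamic class set are all assumptions made for the example.

```python
import numpy as np

def fuse_depth(sensor_depth, predicted_depth):
    """Fill invalid sensor-depth pixels with network-predicted depth.

    Sketch of the fallback idea: trust the sensor where it gives a valid
    reading; use the learned depth estimate only in the holes.
    (Convention assumed here: 0 or NaN marks a missing reading.)
    """
    fused = np.array(sensor_depth, dtype=float, copy=True)
    invalid = ~np.isfinite(fused) | (fused <= 0.0)
    fused[invalid] = np.asarray(predicted_depth, dtype=float)[invalid]
    return fused

# Illustrative set of classes treated as dynamic; the real system's list
# would come from its segmentation label space.
DYNAMIC_CLASSES = {"person", "cat"}

def static_mask(instance_labels, class_of):
    """Boolean mask that is True where the pixel's instance is static.

    `class_of` maps an instance ID to a class name; unknown IDs are
    treated as static background. Pixels masked False would be excluded
    from map integration.
    """
    lookup = np.vectorize(
        lambda i: class_of.get(i, "static") not in DYNAMIC_CLASSES
    )
    return lookup(instance_labels)
```

For example, a depth map with a zero and a NaN pixel gets those two pixels replaced from the prediction, and pixels labelled with a "person" instance are masked out before fusion.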

Additional Depth Estimation

Results of depth fusion

Evaluation on the TUM RGB-D benchmark

             APE (RMSE)   RPE (RMSE)
Co-Fusion    0.6738       0.0835
MaskFusion   0.7211       2.1530
UDepFusion   0.6943       2.1688
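APE and RPE are the standard trajectory-error metrics on TUM sequences; full evaluations normally use the TUM scripts or the evo toolkit over complete SE(3) poses with trajectory alignment. As a translation-only sketch of what the RMSE numbers mean (assuming already-aligned (N, 3) position arrays, which is a simplification):

```python
import numpy as np

def ape_rmse(est, gt):
    """RMSE of absolute (translational) pose error.

    est, gt: (N, 3) arrays of camera positions, assumed already aligned.
    """
    err = np.linalg.norm(np.asarray(est, float) - np.asarray(gt, float), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

def rpe_rmse(est, gt):
    """RMSE of relative pose error over consecutive frame-to-frame motions.

    Compares the per-frame translation deltas rather than absolute
    positions, so it measures drift instead of global offset.
    """
    d_est = np.diff(np.asarray(est, float), axis=0)
    d_gt = np.diff(np.asarray(gt, float), axis=0)
    err = np.linalg.norm(d_est - d_gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

A trajectory can have low APE but high RPE (or vice versa): the table's RPE column, for instance, penalises frame-to-frame jitter that a globally well-placed trajectory can still exhibit.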
Chengkun (Charlie) Li
MSc student in Robotics