Chen 2018 Review of VI SLAM


Source: http://www.mdpi.com/2218-6581/7/3/45
Authors: Chen et al.

Abstract

  • Survey on visual-inertial SLAM over the last 10 years
  • Aspects covered: filtering- vs optimisation-based approaches, camera type, sensor-fusion type
  • Explains core theory of SLAM, feature extraction, feature tracking, loop closure
  • Experimental comparison of filtering-based and optimisation-based methods
  • Research trends for VI-SLAM

Works of possible interest

Contents/Chapters

SLAM

SLAM: build a real-time map of the unknown environment based on sensor data, while the sensor (robot) itself is traversing the environment

Growing prevalence of visual SLAM: cameras are rich in information yet low-cost compared to other sensors

Filtering-based:

  • Loose vs tight coupling

  • Feature extraction, feature tracking (a minimal tracking sketch follows this list)

  • Basic method framework (three steps): propagation, image registration, update (a toy filter loop is sketched after this list)

  • Algorithms: MSCKF, Maplab

  • Loosely-coupled: usually only fuses the IMU output to estimate part of the state, not the full pose

  • Tightly-coupled: camera and IMU states are fused jointly into a common motion and observation equation → more common
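
As a concrete picture of the feature extraction and tracking step, here is a minimal sparse front-end sketch using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade optical flow. It is a generic illustration, not the pipeline of any particular surveyed system; the frame file names and parameter values are placeholder assumptions.

```python
import cv2
import numpy as np

# Placeholder file names: two consecutive grayscale frames.
prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Feature extraction: up to 500 Shi-Tomasi corners, at least 10 px apart.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)

# Feature tracking: pyramidal KLT optical flow into the next frame.
pts_curr, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None,
                                                 winSize=(21, 21), maxLevel=3)

# Keep only features that were tracked successfully.
good_prev = pts_prev[status.flatten() == 1]
good_curr = pts_curr[status.flatten() == 1]
print(f"tracked {len(good_curr)} of {len(pts_prev)} features")
```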

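The three-step propagation / image registration / update loop can be pictured as an interleaved predict-correct filter. The sketch below is a deliberately stripped-down, EKF-style toy assuming a position-and-velocity-only state, a camera that directly observes position, and hand-picked noise values; it is not MSCKF, which additionally tracks orientation, IMU biases, and a sliding window of camera poses.

```python
import numpy as np

dt = 0.005                                    # IMU period (assumed 200 Hz)
x = np.zeros(6)                               # state: [position (3), velocity (3)]
P = np.eye(6) * 1e-3                          # state covariance

F = np.eye(6)
F[0:3, 3:6] = np.eye(3) * dt                  # constant-velocity transition: p += v*dt
Q = np.eye(6) * 1e-4                          # process noise (placeholder value)

H = np.hstack([np.eye(3), np.zeros((3, 3))])  # camera "observes" position only
R = np.eye(3) * 1e-2                          # measurement noise (placeholder value)

def propagate(x, P, accel):
    """Propagation: push the state forward with one IMU sample."""
    x = F @ x
    x[3:6] += accel * dt
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Update: correct the state with a camera-derived position measurement
    (a stand-in for the image registration step)."""
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Interleave high-rate IMU propagation with low-rate camera updates.
for k in range(400):
    x, P = propagate(x, P, accel=np.array([0.1, 0.0, 0.0]))
    if k % 40 == 0:                           # camera runs much slower than the IMU
        z = x[0:3] + np.random.randn(3) * 0.1 # synthetic measurement for the demo
        x, P = update(x, P, z)
```
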
Optimisation-based:

  • front- vs back-end division (map construction vs pose optimisation)
  • Loop closure (odometry-based or appearance-based), IMU preintegration (a preintegration sketch follows this list)
  • Algorithms: OKVIS (stereo), VIORB, VINS-Mono
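
To make the preintegration idea concrete, the sketch below accumulates gyro and accelerometer samples between two keyframes into relative rotation, velocity and position increments, so the back-end can relinearise keyframe states without re-integrating raw IMU data. Bias correction and noise-covariance propagation are omitted, and the sample values are synthetic assumptions.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_exp(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate IMU samples between two keyframes into relative
    rotation dR, velocity dv, and position dp increments.
    Bias handling and covariance propagation are omitted for brevity."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt**2
        dv += (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp

# Example: 100 synthetic samples at 200 Hz of constant rotation rate
# and acceleration, just to exercise the function.
gyro = [np.array([0.0, 0.0, 0.1])] * 100
accel = [np.array([0.2, 0.0, 9.81])] * 100
dR, dv, dp = preintegrate(gyro, accel, dt=0.005)
```

In a full optimisation-based system these increments, together with their covariance, become a single IMU factor between consecutive keyframes, which keeps the optimisation cheap even when keyframe states are relinearised.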

Takeaway

| Filtering-based | Optimisation-based |
| --- | --- |
| More advantageous w.r.t. computing resources | Good localisation accuracy with lower memory utilisation |