One may ask: why use a visual-inertial sensor? The main reason is the complementary nature of the two sensing modalities. The camera provides high-density external measurements of the environment, while the IMU measures the internal ego-motion of the sensor platform. The IMU is crucial for providing robustness to the estimator, and it also recovers metric scale in the case of a monocular camera.
However, there are some challenges when leveraging an IMU in estimation. An IMU requires estimating additional bias terms and needs a highly accurate calibration between the camera and IMU. Additionally, small errors in the relative timestamps of the sensors can quickly degrade performance on dynamic trajectories. Within the OpenVINS project we address these challenges by advocating online estimation of the extrinsic and time-offset parameters between the cameras and IMU.
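As a rough illustration of what this looks like in practice, the sketch below toggles the corresponding options in an OpenVINS-style estimator configuration; the key names (`calib_cam_extrinsics`, `calib_cam_timeoffset`) and the file layout are assumptions that may differ between versions, so treat it as a sketch only.

```bash
# Sketch: enable online camera-IMU calibration in an OpenVINS-style estimator
# config. Key names and file layout are assumptions and may differ by version.
cat >> estimator_config.yaml <<EOF
calib_cam_extrinsics: true   # refine the camera-to-IMU transform online
calib_cam_timeoffset: true   # refine the camera-to-IMU time offset online
EOF
```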
This first video walks through the process of performing visual-inertial sensor calibration and gives a complete overview of the text below. It was recorded in a single session from start to finish, so please use the chapters to skip to the sections of interest. The sensor used is the Intel Realsense D455 color camera and its internal IMU. The key software packages used are Kalibr and allan_variance_ros.
This video takes you from having a sensor, through collecting data and performing calibration, to finally processing that data live with OpenVINS to recover a 6-DOF pose estimate. The latter part is similar to the Simple Tutorial, but uses a live sensor. First we create a launch file for the Intel Realsense T265 sensor, after which we perform calibration. Finally, we use the calibration to process data with OpenVINS and demo the recovered trajectory.
The first task is to calibrate the camera intrinsic values such as the focal length, camera center, and distortion coefficients. Our group often uses the Kalibr [Furgale2013IROS] calibration toolbox to perform both intrinsic and extrinsic offline calibrations, by proceeding through the following steps:

1. Focus the camera (you can use the `kalibr_camera_focus` utility or just do it manually).
2. Record a rosbag of the calibration board, then run `kalibr_calibrate_cameras` with your specified topic.
3. You can downsample the bag with `--bag-freq 10.0` to reduce the processing time.

A minimal command sketch is shown below, and an example script `calibrate_camera_static.sh`, dataset, and configuration can be found in our group's ar_table_dataset repository.
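The following is a rough sketch of these commands, assuming a single color camera; the topic, bag, and target file names are placeholders for your own setup.

```bash
# Check the camera focus before recording (topic name is a placeholder)
rosrun kalibr kalibr_camera_focus --topic /camera/color/image_raw

# Record the static calibration board from many viewpoints
rosbag record -O static_calib.bag /camera/color/image_raw

# Intrinsic calibration; --bag-freq downsamples the bag to speed up processing
rosrun kalibr kalibr_calibrate_cameras \
    --bag static_calib.bag --bag-freq 10.0 \
    --topics /camera/color/image_raw \
    --models pinhole-radtan \
    --target aprilgrid.yaml
```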
The other important intrinsic calibration is to compute the inertial sensor's noise characteristics, which are needed both in the batch optimization that calibrates the camera-to-IMU transform and in any VINS estimator, so that we can properly probabilistically fuse the images and inertial readings. Specifically, we are estimating the following noise parameters:
Parameter | YAML element | Symbol | Units |
---|---|---|---|
Gyroscope "white noise" | `gyroscope_noise_density` | \f$\sigma_{g}\f$ | \f$\frac{rad}{s}\frac{1}{\sqrt{Hz}}\f$ |
Accelerometer "white noise" | `accelerometer_noise_density` | \f$\sigma_{a}\f$ | \f$\frac{m}{s^2}\frac{1}{\sqrt{Hz}}\f$ |
Gyroscope "random walk" | `gyroscope_random_walk` | \f$\sigma_{b_g}\f$ | \f$\frac{rad}{s^2}\frac{1}{\sqrt{Hz}}\f$ |
Accelerometer "random walk" | `accelerometer_random_walk` | \f$\sigma_{b_a}\f$ | \f$\frac{m}{s^3}\frac{1}{\sqrt{Hz}}\f$ |
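These four values are typically written into a small IMU yaml that Kalibr (and later the estimator) reads. The sketch below is only illustrative, assuming a Kalibr-style file: the topic, update rate, and numeric values are placeholders to be replaced with the results of your own analysis.

```bash
# Sketch of a Kalibr-style imu.yaml; all numeric values are placeholders and
# must be replaced with the results of your own Allan-variance analysis.
cat > imu.yaml <<EOF
rostopic: /camera/imu
update_rate: 200.0
gyroscope_noise_density: 1.7e-04      # [rad/s * 1/sqrt(Hz)]
gyroscope_random_walk: 1.9e-05        # [rad/s^2 * 1/sqrt(Hz)]
accelerometer_noise_density: 2.0e-03  # [m/s^2 * 1/sqrt(Hz)]
accelerometer_random_walk: 3.0e-03    # [m/s^3 * 1/sqrt(Hz)]
EOF
```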
The standard way, as explained in [IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Interferometric Fiber Optic Gyros (page 71, section C)], is to compute an Allan variance plot of the sensor readings over different observation times (see below).
As shown in the above figure, if we compute the Allan deviation (the square root of the Allan variance), we can fit a line with a -1/2 slope to the left side of the plot and read its value at \f$\tau=1\f$ to get the white noise of the sensor. Similarly, a line with a +1/2 slope fitted to the right side can be evaluated at \f$\tau=3\f$ to get the random walk noise.
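Concretely, under the standard model the two fitted lines give the noise coefficients directly:

\f[
\sigma(\tau) \approx \frac{N}{\sqrt{\tau}} \;\Rightarrow\; N = \sigma(1), \qquad\qquad
\sigma(\tau) \approx K\sqrt{\frac{\tau}{3}} \;\Rightarrow\; K = \sigma(3),
\f]

where \f$\sigma(\tau)\f$ is the fitted Allan deviation line, \f$N\f$ is the white-noise density, and \f$K\f$ is the random-walk coefficient, which is why the lines are read off at \f$\tau=1\f$ and \f$\tau=3\f$ seconds respectively.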
We recommend using the allan_variance_ros project to recover these parameters experimentally.
1. Collect a long rosbag of the IMU left completely stationary and process it with the `allan_variance` command.
2. Fit the noise parameters with the `analysis.py` script.

A minimal command sketch is shown below, and an example script `calibrate_imu.sh`, dataset, and configuration can be found in our group's ar_table_dataset repository.
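The following sketch assumes the workflow described in the allan_variance_ros README; the topic names, paths, and exact arguments here are assumptions and may differ in your version of the package.

```bash
# Record a long, completely stationary IMU bag (several hours is typical)
rosbag record -O bag_folder/imu_static.bag /camera/imu

# Compute the Allan variance (takes the folder containing the bag and a config yaml)
rosrun allan_variance_ros allan_variance ./bag_folder/ ./imu_config.yaml

# Fit the noise parameters and generate plots from the produced CSV
rosrun allan_variance_ros analysis.py --data ./bag_folder/allan_variance.csv
```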
After obtaining the intrinsic calibration of both the camera and IMU, we can now perform dynamic calibration of the transform between the two sensors. For this we again leverage the [Kalibr calibration toolbox]. When collecting these datasets, it is important to minimize motion blur in the camera while still exciting all axes of the IMU. One needs at least one translational motion along with orientation change about two axes for these calibration parameters to be observable (please see our recent paper on why: [Degenerate Motion Analysis for Aided INS With Online Spatial and Temporal Sensor Calibration]). We recommend having as much orientation change as possible to ensure convergence.
The calibration itself is performed by running `kalibr_calibrate_imu_camera` on the collected rosbag, together with the camera calibration (camchain) and IMU noise yaml files obtained in the previous steps. A minimal command sketch is shown below, and an example script `calibrate_camera_dynamic.sh`, dataset, and configuration can be found in our group's ar_table_dataset repository.
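The sketch below assumes the output files from the earlier steps; the topic, bag, and target names are placeholders for your own setup.

```bash
# Record a dynamic dataset with good excitation of all IMU axes
rosbag record -O dynamic_calib.bag /camera/color/image_raw /camera/imu

# Camera-IMU extrinsic calibration with Kalibr
rosrun kalibr kalibr_calibrate_imu_camera \
    --bag dynamic_calib.bag \
    --cam camchain.yaml \
    --imu imu.yaml \
    --target aprilgrid.yaml
```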