Before getting started with the state estimation nodes in robot_localization, it is important that users ensure their sensor data is well-formed. There are various considerations for each class of sensor data, and users are encouraged to read this tutorial in its entirety before attempting to use robot_localization.
For additional information, users are encouraged to watch this presentation from ROSCon 2015.
The two most important ROS REPs to consider are:
- REP-103 (Standard Units of Measure and Coordinate Conventions)
- REP-105 (Coordinate Frames for Mobile Platforms)
Users who are new to ROS or state estimation are encouraged to read over both REPs, as they will almost certainly aid you in preparing your sensor data. robot_localization attempts to adhere to these standards as much as possible.
Also, it will likely benefit users to look over the specifications for each of the supported ROS message types:
- nav_msgs/Odometry
- geometry_msgs/PoseWithCovarianceStamped
- geometry_msgs/TwistWithCovarianceStamped
- sensor_msgs/Imu
REP-105 specifies four principal coordinate frames: base_link, odom, map, and earth. The base_link frame is rigidly affixed to the robot. The map and odom frames are world-fixed frames whose origins are typically aligned with the robot’s start position. The earth frame is used to provide a common reference frame for multiple map frames (e.g., for robots distributed over a large area). The earth frame is not relevant to this tutorial.
The state estimation nodes of robot_localization produce a state estimate whose pose is given in the map or odom frame and whose velocity is given in the base_link frame. All incoming data is transformed into one of these coordinate frames before being fused with the state. The data in each message type is transformed as follows:
- nav_msgs/Odometry: all pose data (position and orientation) is transformed from the message header's frame_id into the coordinate frame specified by the world_frame parameter (typically map or odom); all twist data (linear and angular velocity) is transformed from the message's child_frame_id into the coordinate frame specified by the base_link_frame parameter (typically base_link).
- geometry_msgs/PoseWithCovarianceStamped: handled in the same fashion as the pose data in the Odometry message.
- geometry_msgs/TwistWithCovarianceStamped: handled in the same fashion as the twist data in the Odometry message.
- sensor_msgs/Imu: transformed into the frame specified by the base_link_frame parameter, with the sensor's mounting orientation accounted for as described below.
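As an illustration of what this transformation step involves (this is not robot_localization's internal code), the sketch below uses tf2 to move a pose from a sensor's frame into the odom frame. The node name and the frame names sensor_frame and odom are assumptions.

```python
#!/usr/bin/env python
# Illustration only: transform a PoseStamped from a sensor frame into the
# odom frame with tf2, mirroring the kind of transformation the state
# estimation nodes perform on incoming data. Frame names are assumptions.
import rospy
import tf2_ros
import tf2_geometry_msgs  # noqa: F401 -- registers PoseStamped with tf2
from geometry_msgs.msg import PoseStamped

rospy.init_node('transform_illustration')
buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buffer)

pose = PoseStamped()
pose.header.stamp = rospy.Time(0)       # use the latest available transform
pose.header.frame_id = 'sensor_frame'   # frame_id of the incoming message
pose.pose.orientation.w = 1.0           # identity orientation

# Raises an exception if no transform to odom is available within 1 second.
pose_in_odom = buffer.transform(pose, 'odom', timeout=rospy.Duration(1.0))
```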
The IMU may also be oriented on the robot in a position other than its “neutral” position. For example, the user may mount the IMU on its side, or rotate it so that it faces a direction other than the front of the robot. This offset is typically specified by a static transform from the base_link_frame parameter to the IMU message’s frame_id. The state estimation nodes in robot_localization will automatically correct for the orientation of the sensor so that its data aligns with the frame specified by the base_link_frame parameter.
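As a rough sketch of such a mounting offset, the snippet below publishes a static transform for an IMU rolled \(+90\) degrees onto its side. The frame name imu_link and the ROS 1 (rospy) setup are assumptions; many setups publish this transform from a URDF or via static_transform_publisher instead.

```python
#!/usr/bin/env python
# Hedged sketch: publish the static transform from base_link to the IMU's
# frame_id so robot_localization can rotate the IMU data into the body frame.
# The frame name imu_link and the +90 degree roll are assumptions.
import math

import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped
from tf.transformations import quaternion_from_euler

rospy.init_node('imu_static_tf')
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'base_link'  # must match the base_link_frame parameter
t.child_frame_id = 'imu_link'    # must match the IMU message's frame_id

# IMU mounted on its side: rolled +90 degrees about the body X axis.
qx, qy, qz, qw = quaternion_from_euler(math.pi / 2.0, 0.0, 0.0)
t.transform.rotation.x = qx
t.transform.rotation.y = qy
t.transform.rotation.z = qz
t.transform.rotation.w = qw

broadcaster.sendTransform(t)
rospy.spin()
```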
With the migration to tf2 as of ROS Indigo, robot_localization still allows for the use of the tf_prefix parameter, but, in accordance with tf2, all frame_id values will have any leading ‘/’ stripped.
Many robot platforms come equipped with wheel encoders that provide instantaneous translational and rotational velocity. Many also internally integrate these velocities to generate a position estimate. If you are responsible for this data, or can edit it, keep the following in mind (a minimal publishing sketch follows the notes below):
- If the odometry provides both position and linear velocity, fuse the linear velocity.
- If the odometry provides both orientation and angular velocity, fuse the orientation.
Note
If you have two sources of orientation data, then you'll want to be careful. If both produce orientations with accurate covariance matrices, it's safe to fuse the orientations. If, however, one or both under-reports its covariance, then you should only fuse the orientation data from the more accurate sensor. For the other sensor, use the angular velocity (if it's provided), or continue to fuse the absolute orientation data, but turn differential mode on for that sensor.
Note
See Configuring robot_localization and Migrating from robot_pose_ekf for more information.
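As a rough illustration of the points above, the sketch below publishes a well-formed nav_msgs/Odometry message carrying wheel encoder velocities. The topic name, frame names, velocity values, and covariance values are all assumptions; a real driver would compute these from its own hardware.

```python
#!/usr/bin/env python
# Hedged sketch: publish wheel encoder velocities as a nav_msgs/Odometry
# message that robot_localization can consume. Topic name, frame names, and
# covariance values are placeholders, not prescribed values.
import rospy
from nav_msgs.msg import Odometry

rospy.init_node('wheel_odometry')
pub = rospy.Publisher('wheel/odometry', Odometry, queue_size=10)
rate = rospy.Rate(30)

while not rospy.is_shutdown():
    msg = Odometry()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = 'odom'      # frame for the pose data
    msg.child_frame_id = 'base_link'  # frame for the twist data

    # Instantaneous velocities from the encoders (placeholder values).
    msg.twist.twist.linear.x = 0.25   # m/s; +x is forward per REP-103
    msg.twist.twist.angular.z = 0.10  # rad/s; +z is counter-clockwise

    # 6x6 row-major covariance; only the diagonal terms for the measured
    # variables are set here. Use realistic values, not zeros.
    msg.twist.covariance[0] = 0.01    # variance of linear x velocity
    msg.twist.covariance[35] = 0.02   # variance of yaw velocity

    pub.publish(msg)
    rate.sleep()
```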
In addition to the following, be sure to read the above section regarding coordinate frames and transforms for IMU data. The state estimation nodes in robot_localization assume that an IMU placed in its neutral, right-side-up position on a flat surface will do the following (a small sanity-check sketch follows the list):
- Measure \(+9.81\) meters per second squared for the \(Z\) axis.
- If the sensor is rolled \(+90\) degrees (left side up), the acceleration should be \(+9.81\) meters per second squared for the \(Y\) axis.
- If the sensor is pitched \(+90\) degrees (front side down), it should read \(-9.81\) meters per second squared for the \(X\) axis.
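A minimal sanity-check sketch for the first of these conventions, assuming the robot is level and stationary and that the IMU driver publishes on /imu/data (the topic name is an assumption; adjust it to match your driver):

```python
#!/usr/bin/env python
# Hedged sketch (not part of robot_localization): with the robot level and
# stationary, warn if the accelerometer does not report roughly +9.81 m/s^2
# on the Z axis. The topic name /imu/data is an assumption.
import rospy
from sensor_msgs.msg import Imu

GRAVITY = 9.80665  # m/s^2
TOLERANCE = 0.5    # loose bound; tighten for a well-calibrated IMU

def callback(msg):
    az = msg.linear_acceleration.z
    if abs(az - GRAVITY) > TOLERANCE:
        rospy.logwarn('Z acceleration is %.2f m/s^2; expected about +%.2f '
                      'when level. Check the sign convention and mounting.',
                      az, GRAVITY)

rospy.init_node('imu_gravity_check')
rospy.Subscriber('/imu/data', Imu, callback)
rospy.spin()
```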
For both geometry_msgs/PoseWithCovarianceStamped and geometry_msgs/TwistWithCovarianceStamped data, see the section on odometry.