Code Used in this Tutorial
The code used in this tutorial can be found in the moveit_tutorials repository, in the doc folder. Use the kinetic-devel branch.
This is the new benchmarking method, available from ROS Kinetic onward. Documentation for the previous benchmarking method (ROS Jade and earlier) can be found on the deprecated wiki.
To use this benchmarking method, you will need to download and install the ROS Warehouse plugin. It is currently not available as a Debian package, so at least some components require a source install. For source installation instructions, see this page.
The benchmarking package provides methods to benchmark motion planning algorithms and aggregate/plot statistics using the OMPL Planner Arena. The example below demonstrates how the benchmarking can be run for a Fanuc M-10iA.
An example is provided in the examples folder. The launch file requires a MoveIt! configuration package for the Fanuc M-10iA, available here.
Launch the Fanuc M-10iA:
roslaunch moveit_resources demo.launch db:=true
Within the Motion Planning RViz plugin, connect to the database by pressing the Connect button in the Context tab.
Save a scene on the Stored Scenes tab and name it Kitchen1 by double clicking the scene in the list.
Move the start and goal states of the Fanuc M-10iA by using the interactive markers.
Save an associated query for the Kitchen1 scene and name the query.
Also save a start state for the robot on the Stored States tab and give it a name.
The config file moveit_ros/benchmarks/examples/demo1.yaml refers to the scenes, queries, and start states used for benchmarking. Modify them appropriately.
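For illustration, a hypothetical excerpt of such a config might point the benchmarks at the scene, queries, and start states saved in RViz above (the host, port, and regex values here are examples, not the contents of the shipped demo1.yaml):

```yaml
benchmark_config:
    warehouse:
        host: 127.0.0.1
        port: 33829
        scene_name: Kitchen1   # the scene stored above
    parameters:
        queries: .*            # regex over stored query names
        start_states: .*       # regex over stored start-state names
```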
Bring down your previous launch file (demo.launch). Change the output_directory location to export the benchmarked files:
rosed moveit_ros_benchmarks demo_fanuc.launch
Run the benchmarks:
roslaunch moveit_ros_benchmarks demo_fanuc.launch
The benchmarks are executed and many interesting parameters are aggregated and written to a logfile. A script (moveit_benchmark_statistics.py) is supplied to parse this data and plot the statistics.
rosrun moveit_ros_benchmarks moveit_benchmark_statistics.py <path_of_logfile>
To generate a PDF of plots:
python moveit_benchmark_statistics.py -p <plot_filename> <path_of_logfile>
Alternatively, upload the database file generated by moveit_benchmark_statistics.py to plannerarena.org and interactively visualize the results.
Parameters of the BenchmarkOptions Class
This class reads in parameters and options for the benchmarks to run from the ROS parameter server. The format of the parameters is assumed to be in the following form:
benchmark_config:
    warehouse:
        host: [hostname/IP address of ROS Warehouse node]                          # Default localhost
        port: [port number of ROS Warehouse node]                                  # Default 33829
        scene_name: [Name of the planning scene to use for benchmarks]             # REQUIRED
    parameters:
        runs: [Number of runs for each planning algorithm on each request]         # Default 10
        group: [The name of the group to plan]                                     # REQUIRED
        timeout: [The maximum time for a single run; seconds]                      # Default 10.0
        output_directory: [The directory to write the output to]                   # Default is current working directory
        start_states: [Regex for the stored start states in the warehouse to try]  # Default ""
        path_constraints: [Regex for the path constraints to benchmark]            # Default ""
        queries: [Regex for the motion plan queries in the warehouse to try]       # Default .*
        goal_constraints: [Regex for the goal constraints to benchmark]            # Default ""
        trajectory_constraints: [Regex for the trajectory constraints to benchmark]  # Default ""
        workspace: [Bounds of the workspace the robot plans in. This is an AABB]   # Optional
            frame_id: [The frame the workspace parameters are specified in]
            min_corner: [Coordinates of the minimum corner of the AABB]
                x: [x-value]
                y: [y-value]
                z: [z-value]
            max_corner: [Coordinates of the maximum corner of the AABB]
                x: [x-value]
                y: [y-value]
                z: [z-value]
    planners:
        - plugin: [The name of the planning plugin the planners are in]            # REQUIRED
          planners:                                                                # REQUIRED
            - [A list of planners from the plugin above to benchmark the queries in]
        - plugin: ...
          ...
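As a concrete illustration, a filled-in configuration might look as follows (the group name, planner plugin, and planner IDs are assumptions taken from a typical OMPL setup; substitute the names from your own MoveIt! configuration):

```yaml
benchmark_config:
    warehouse:
        host: 127.0.0.1
        port: 33829
        scene_name: Kitchen1
    parameters:
        runs: 10
        group: manipulator                     # assumed planning group name
        timeout: 10.0
        output_directory: /tmp/moveit_benchmarks/
        queries: .*
        start_states: .*
    planners:
        - plugin: ompl_interface/OMPLPlanner   # assumed plugin name
          planners:
            - RRTConnectkConfigDefault         # assumed planner IDs
            - ESTkConfigDefault
```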
Parameters of the BenchmarkExecutor Class
This class creates a set of MotionPlanRequests that respect the parameters given in the supplied instance of BenchmarkOptions and then executes the requests on each of the planners specified. From the BenchmarkOptions, the goal_constraints and trajectory_constraints are treated as separate queries. If a set of start_states is specified, each query, goal_constraint, and trajectory_constraint is attempted with each start state (existing start states from a query are ignored). Similarly, the (optional) set of path_constraints is combined combinatorially with the start query and start goal_constraint pairs (existing path_constraints from a query are ignored). The workspace, if specified, overrides any existing workspace parameters.
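A minimal Python sketch of this combinatorial expansion (names and structure are illustrative, not the actual C++ implementation):

```python
from itertools import product

def expand_requests(queries, start_states, path_constraints):
    """Enumerate benchmark requests: every stored query is tried with
    every stored start state, and each of those combinations is tried
    with every path constraint. An empty set falls back to a single
    "as stored" slot (represented here by None)."""
    starts = start_states or [None]
    paths = path_constraints or [None]
    return [
        {"query": q, "start_state": s, "path_constraints": p}
        for q, s, p in product(queries, starts, paths)
    ]

# 2 queries x 2 start states x 1 (no path constraints) -> 4 requests
requests = expand_requests(["query1", "query2"], ["stateA", "stateB"], [])
```

Each resulting request would then be run once per planner, `runs` times.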
The benchmarking pipeline does not utilize MoveGroup, so PlanningRequestAdapters are not invoked.
It is possible to customize a benchmark run by deriving a class from
BenchmarkExecutor and overriding one or more of the virtual functions. Additionally, a set of functions exists for ease of customization in derived classes:
preRunEvent: invoked immediately before each call to solve
postRunEvent: invoked immediately after each call to solve
plannerSwitchEvent: invoked when the planner changes during benchmarking
querySwitchEvent: invoked before a new benchmark problem begins execution
Note, in the above, a benchmark is a concrete instance of a PlanningScene, start state, goal constraints / trajectory_constraints, and (optionally) path_constraints. A run is one attempt by a specific planner to solve the benchmark.
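The hook sequence described above can be sketched generically as follows (illustrative Python; the real BenchmarkExecutor is a C++ class, and these names and signatures are assumptions, not its API):

```python
class Executor:
    """Generic benchmark loop with pre/post-run and switch hooks."""

    # Hooks: no-ops by default, overridden in derived classes.
    def pre_run_event(self, request): pass
    def post_run_event(self, request, result): pass
    def planner_switch_event(self, planner): pass
    def query_switch_event(self, query): pass

    def benchmark(self, queries, planners, runs_per_query):
        results = []
        for planner in planners:
            self.planner_switch_event(planner)      # planner changes
            for query in queries:
                self.query_switch_event(query)      # new benchmark problem
                for _ in range(runs_per_query):
                    self.pre_run_event(query)       # just before solve
                    result = self.solve(planner, query)  # one "run"
                    self.post_run_event(query, result)   # just after solve
                    results.append(result)
        return results

    def solve(self, planner, query):
        # Placeholder for the actual planning call.
        return (planner, query)

class LoggingExecutor(Executor):
    """Customization: override only the hooks you need."""
    def pre_run_event(self, request):
        print("about to solve", request)
```

A derived class overrides only the hooks it needs, e.g. to log, collect extra metrics, or reset simulation state between runs.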
Open Source Feedback
See something that needs improvement? Please open a pull request on this GitHub page.