laser_interface implements a laser pointer detector that takes in images from a stereo pair and outputs a 3D point which can be used as a mouse cursor in the world. It works by first filtering the stereo pair based on image intensity and image motion. Large connected components are then thrown out, leaving smaller components representative of laser points. Each remaining component is fed to a random forest classifier to decide whether the detection is a laser point or not (a rough sketch of this pipeline is given after the parameter notes below).

To gather positive and negative training examples for the classifier, use the interface launched by the user_interface_node. By default the laser_pointer_detector_node gathers negative training examples, so just point the camera at an area containing motion for it to grab negative samples. To gather positive examples, point the camera at a static scene where the laser point is the only moving object, then switch to 'positive' mode with the GUI spawned by user_interface_node. The modes offered by the interface are:
All parameters for the algorithm are stored in params.xml. For triangulation, this code uses the camera calibrations provided by cameras.xml.
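As an illustration of the steps above, a per-frame detection pass might look roughly like the sketch below. This is not the package's actual code: the OpenCV and scikit-learn calls, thresholds, and patch size are assumptions standing in for the values configured in params.xml.

  # Illustrative sketch only; thresholds and patch size are assumptions
  # standing in for the values configured in params.xml.
  import cv2
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier

  def candidate_components(frame_gray, prev_gray,
                           intensity_thresh=200, motion_thresh=15, max_area=50):
      """Return small, bright, moving connected components as laser-point candidates."""
      # Intensity filter: keep only very bright pixels.
      _, bright = cv2.threshold(frame_gray, intensity_thresh, 255, cv2.THRESH_BINARY)
      # Motion filter: keep only pixels that changed since the previous frame.
      _, moving = cv2.threshold(cv2.absdiff(frame_gray, prev_gray),
                                motion_thresh, 255, cv2.THRESH_BINARY)
      mask = cv2.bitwise_and(bright, moving)
      # Connected components; throw out large blobs, keep the small ones.
      n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
      candidates = []
      for i in range(1, n):  # label 0 is the background
          if stats[i, cv2.CC_STAT_AREA] <= max_area:
              x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
              w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
              candidates.append((centroids[i], frame_gray[y:y+h, x:x+w]))
      return candidates

  def train_classifier(pos_patches, neg_patches, patch_size=(9, 9)):
      """Fit a random forest on patches gathered in 'positive'/'negative' mode."""
      X = [cv2.resize(p, patch_size).astype(np.float32).ravel()
           for p in pos_patches + neg_patches]
      y = [1] * len(pos_patches) + [0] * len(neg_patches)
      return RandomForestClassifier(n_estimators=50).fit(X, y)

  def classify(candidates, clf, patch_size=(9, 9)):
      """Keep only the candidates the random forest labels as laser points."""
      keep = []
      for centroid, patch in candidates:
          feat = cv2.resize(patch, patch_size).astype(np.float32).ravel()
          if clf.predict([feat])[0] == 1:
              keep.append(centroid)
      return keep

The surviving 2D detections in each camera would then be triangulated into a single 3D point using the calibrations from cameras.xml.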
To run:
$ roslaunch launch.xml
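Once launched, another node can consume the detector's 3D output. The sketch below shows a minimal subscriber; the topic name 'cursor3d' and the geometry_msgs/PointStamped message type are assumptions, so check the interface lists below for the actual names.

  # Hypothetical consumer of the detector's 3D output; the topic name and
  # message type are assumptions, not taken from this package.
  import rospy
  from geometry_msgs.msg import PointStamped

  def on_cursor(msg):
      # msg.point holds the triangulated 3D laser-pointer position.
      rospy.loginfo("cursor at (%.3f, %.3f, %.3f)",
                    msg.point.x, msg.point.y, msg.point.z)

  if __name__ == '__main__':
      rospy.init_node('laser_cursor_listener')
      rospy.Subscriber('cursor3d', PointStamped, on_cursor)
      rospy.spin()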
Subscribes to (name/type):
Publishes to (name/type):