#include <algos.h>
Public Attributes

double              clustering_threshold
int                 debug_verify_tricks
int                 do_alpha_test
double              do_alpha_test_thresholdDeg
int                 do_compute_covariance
int                 do_visibility_test
double              epsilon_theta
double              epsilon_xy
double              first_guess[3]
double              gpm_extend_range_deg
int                 gpm_interval
double              gpm_theta_bin_size_deg
struct hsm_params   hsm
double              laser[3]
LDP                 laser_ref
LDP                 laser_sens
double              max_angular_correction_deg
double              max_correspondence_dist
int                 max_iterations
double              max_linear_correction
double              max_reading
double              min_reading
int                 orientation_neighbourhood
double              outliers_adaptive_mult
double              outliers_adaptive_order
double              outliers_maxPerc
int                 outliers_remove_doubles
int                 restart
double              restart_dt
double              restart_dtheta
double              restart_threshold_mean_error
double              sigma
int                 use_corr_tricks
int                 use_ml_weights
int                 use_point_to_line_distance
int                 use_sigma_weights
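The attributes above are the configuration of the scan matcher. As a rough sketch of how they might be filled in: the field values below are purely illustrative, and sm_icp() together with struct sm_result (and its valid/x fields) are assumed to be the usual CSM entry point and output; check the headers of your version.

    #include <stdio.h>
    #include <string.h>
    #include <csm/csm_all.h>   /* assumed CSM umbrella header */

    /* Illustrative configuration of sm_params. laser_ref/laser_sens are LDP
       scans built elsewhere; guess is a first estimate (x, y, theta), e.g.
       from odometry. Remaining fields are left at 0 for brevity; a real
       application should set them all. */
    void match_scans(LDP laser_ref, LDP laser_sens, const double guess[3]) {
        struct sm_params params;
        memset(&params, 0, sizeof(params));

        params.laser_ref  = laser_ref;
        params.laser_sens = laser_sens;

        params.first_guess[0] = guess[0];
        params.first_guess[1] = guess[1];
        params.first_guess[2] = guess[2];

        /* Example values only, not the library's defaults. */
        params.max_iterations             = 50;
        params.max_correspondence_dist    = 0.5;    /* m */
        params.max_angular_correction_deg = 45.0;
        params.max_linear_correction      = 0.5;    /* m */
        params.epsilon_xy                 = 1e-4;   /* stop when the step is this small (m) */
        params.epsilon_theta              = 1e-4;   /* ... and this small (rad) */
        params.outliers_maxPerc           = 0.9;
        params.use_point_to_line_distance = 1;
        params.do_compute_covariance      = 0;

        struct sm_result result;
        sm_icp(&params, &result);   /* assumed CSM entry point */

        if (result.valid)
            printf("x = %f  y = %f  theta = %f\n",
                   result.x[0], result.x[1], result.x[2]);
    }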
int sm_params::do_compute_covariance

Use the method in http://purl.org/censi/2006/icpcov to compute the matching covariance.
I believe this trick is documented in one of the papers by Guttman (but I can't find the reference). Or perhaps I was told by him directly.
int sm_params::do_visibility_test

If you already have a guess of the solution, you can compute the polar angle of the points of one scan in the new position. If the polar angle is not a monotone function of the reading index, it means that the surface is not visible in the next position. If it is not visible, then we don't use it for matching.
This is confusing without a picture! To understand what's going on, make a drawing in which a surface is not visible in one of the poses.
Implemented in the function visibilityTest().
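As a rough illustration of the idea (this is not the actual visibilityTest(); the direction of the transform and the assumption that the bearing increases with the reading index are choices of this sketch):

    #include <math.h>

    /* `t` is the roto-translation that expresses the scan points in the frame
       of the other pose (whether that is the first guess or its inverse
       depends on your convention). Points whose polar angle does not keep
       increasing with the reading index are marked as not visible and should
       be skipped during matching. */
    static void mark_visible(int n, const double x[], const double y[],
                             const double t[3], int visible[]) {
        const double c = cos(t[2]), s = sin(t[2]);
        double prev_phi = -INFINITY;
        for (int i = 0; i < n; i++) {
            double px = c * x[i] - s * y[i] + t[0];
            double py = s * x[i] + c * y[i] + t[1];
            double phi = atan2(py, px);    /* polar angle seen from the other pose */

            visible[i] = (phi >= prev_phi);
            if (visible[i]) prev_phi = phi;
        }
    }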
double sm_params::epsilon_theta

double sm_params::epsilon_xy

double sm_params::first_guess[3]

struct hsm_params sm_params::hsm

double sm_params::laser[3]

double sm_params::max_reading

double sm_params::min_reading
Parameters describing a simple adaptive algorithm for discarding outliers (see the sketch below):

1) Order the errors.
2) Choose the percentile according to outliers_adaptive_order (if it is 0.7, take the 70th percentile).
3) Define an adaptive threshold by multiplying outliers_adaptive_mult by the error value at the chosen percentile.
4) Discard correspondences whose error is over the threshold.

This is useful to be conservative, yet still remove the biggest errors.
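A minimal sketch of the four steps above; the helper name adaptive_threshold is made up for illustration, only the percentile-and-multiplier logic mirrors the description:

    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b) {
        double da = *(const double *)a, db = *(const double *)b;
        return (da > db) - (da < db);
    }

    /* Sort the correspondence errors, read the error at the chosen percentile
       (order = 0.7 -> 70th percentile), and scale it by `mult`
       (outliers_adaptive_mult). A correspondence i is then kept only if
       errors[i] <= the returned threshold. */
    static double adaptive_threshold(const double errors[], int n,
                                     double order, double mult) {
        double *sorted = malloc(n * sizeof *sorted);
        if (!sorted || n == 0) { free(sorted); return 0; }

        for (int i = 0; i < n; i++) sorted[i] = errors[i];
        qsort(sorted, (size_t)n, sizeof *sorted, cmp_double);

        int k = (int)(order * (n - 1));       /* index of the percentile */
        double threshold = mult * sorted[k];

        free(sorted);
        return threshold;
    }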
double sm_params::outliers_maxPerc

double sm_params::restart_dt

double sm_params::restart_dtheta

double sm_params::sigma
int sm_params::use_ml_weights

If 1, the field "true_alpha" is used to compute the incidence angle beta, and the factor 1/cos^2(beta) is used to weight the impact of each correspondence. This works fabulously when doing localization, i.e. when the first scan has no noise. If "true_alpha" is not available, "alpha" is used instead.
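A minimal sketch of the weighting factor: the helper incidence_factor and the formula beta = alpha - phi are assumptions of this illustration, not necessarily the library's exact convention, and how the factor enters the error term is left to the matcher.

    #include <math.h>

    /* `alpha` is the surface orientation stored in true_alpha (or alpha) at
       the matched point, `phi` is the bearing of the ray that hit it. The
       incidence angle is taken here as beta = alpha - phi, and the returned
       factor is 1/cos^2(beta) as described above. */
    static double incidence_factor(double alpha, double phi) {
        double c = cos(alpha - phi);
        if (fabs(c) < 1e-6)
            c = 1e-6;              /* guard against division by zero */
        return 1.0 / (c * c);
    }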