Classes
| class | AsymmetricCauchy |
| class | AsymmetricTukey |
| class | Base |
| class | Cauchy |
| class | Custom |
| class | DCS |
| class | Fair |
| class | GemanMcClure |
| class | Huber |
| class | L2WithDeadZone |
| class | Null |
| class | Tukey |
| class | Welsch |
Typedefs
| using | CustomLossFunction = std::function<double(double)> |
| using | CustomWeightFunction = std::function<double(double)> |
The mEstimator namespace contains all robust error functions. It mirrors the exposition at https://members.loria.fr/MOBerger/Enseignement/Master2/Documents/ZhangIVC-97-01.pdf, which discusses minimizing \sum \rho(r_i), where \rho is a loss function of choice.
To illustrate, let's consider the least-squares (L2), L1, and Huber estimators as examples:
Name        Symbol            Least-Squares    L1-norm    Huber
Loss        \rho(x)           0.5 x^2          |x|        0.5 x^2 if |x| < k, 0.5 k^2 + k(|x| - k) otherwise
Derivative  \phi(x)           x                sgn(x)     x if |x| < k, k sgn(x) otherwise
Weight      w(x) = \phi(x)/x  1                1/|x|      1 if |x| < k, k/|x| otherwise
With these definitions, D(\rho(x), p) = \phi(x) D(x, p) = w(x) x D(x, p) = w(x) D(L2(x), p), and hence minimizing \sum \rho(r_i) can be solved as the equivalent weighted least-squares problem \sum w(r_i) r_i^2, with the weights w(r_i) recomputed from the current residuals at each iteration (iteratively reweighted least squares).
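As a sanity check, the Huber column of the table and the identity \phi(x) = w(x) x can be reproduced in a few lines of self-contained C++; this sketch is illustrative only and does not use the GTSAM classes listed above:

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

// Huber loss, influence function (derivative), and weight from the table.
// k is the tuning threshold (1.345 is a common choice).
double huberLoss(double x, double k) {
  return std::abs(x) < k ? 0.5 * x * x : 0.5 * k * k + k * (std::abs(x) - k);
}
double huberDerivative(double x, double k) {  // phi(x)
  return std::abs(x) < k ? x : (x > 0 ? k : -k);
}
double huberWeight(double x, double k) {  // w(x) = phi(x) / x
  return std::abs(x) < k ? 1.0 : k / std::abs(x);
}

int main() {
  const double k = 1.345;
  for (double x : {-3.0, -0.5, 0.7, 2.0}) {
    // The identity phi(x) = w(x) * x underlies the weighted
    // least-squares reformulation described above.
    assert(std::abs(huberDerivative(x, k) - huberWeight(x, k) * x) < 1e-12);
    std::printf("x=%5.2f  loss=%.4f  weight=%.4f\n",
                x, huberLoss(x, k), huberWeight(x, k));
  }
  return 0;
}
```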
Each M-estimator in the mEstimator namespace simply implements the functions above.
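In practice, a loss function is typically used by wrapping a standard Gaussian noise model in a Robust noise model. A minimal usage sketch based on GTSAM's noise-model factory functions follows; see the individual class references for exact signatures:

```cpp
#include <gtsam/linear/NoiseModel.h>
#include <iostream>

using namespace gtsam;

int main() {
  // Huber m-estimator with threshold k = 1.345 (a common default).
  auto huber = noiseModel::mEstimator::Huber::Create(1.345);

  // Residuals beyond k are down-weighted instead of dominating the objective.
  std::cout << "weight(0.5)  = " << huber->weight(0.5) << "\n";   // 1.0
  std::cout << "weight(10.0) = " << huber->weight(10.0) << "\n";  // k/10 = 0.1345
  std::cout << "loss(10.0)   = " << huber->loss(10.0) << "\n";

  // Wrap an isotropic Gaussian model; factors constructed with `robust`
  // are then reweighted by the Huber weights during optimization.
  auto gaussian = noiseModel::Isotropic::Sigma(3, 0.1);
  auto robust = noiseModel::Robust::Create(huber, gaussian);
  (void)robust;
  return 0;
}
```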
| using gtsam::noiseModel::mEstimator::CustomLossFunction = std::function<double(double)> |
Definition at line 548 of file LossFunctions.h.
| using gtsam::noiseModel::mEstimator::CustomWeightFunction = std::function<double(double)> |
Definition at line 549 of file LossFunctions.h.
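These typedefs are the signatures expected when supplying your own loss and weight to the Custom estimator listed above. The sketch below fills them with lambdas that reproduce the Huber column of the table; the argument order of Custom's constructor (weight, loss, reweight scheme, name) is an assumption here, so consult the Custom class reference before relying on it:

```cpp
#include <gtsam/linear/LossFunctions.h>
#include <cmath>

using namespace gtsam::noiseModel::mEstimator;

int main() {
  const double k = 1.345;

  // Lambdas matching the CustomLossFunction / CustomWeightFunction
  // signatures (double -> double), reproducing the Huber column above.
  CustomLossFunction loss = [k](double x) {
    return std::abs(x) < k ? 0.5 * x * x : 0.5 * k * k + k * (std::abs(x) - k);
  };
  CustomWeightFunction weight = [k](double x) {
    return std::abs(x) < k ? 1.0 : k / std::abs(x);
  };

  // Assumed constructor order (weight, loss, reweight scheme, name);
  // verify against the Custom class documentation.
  Custom customHuber(weight, loss, Base::Block, "CustomHuber");
  (void)customHuber;
  return 0;
}
```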