|
void | change_feat_id (size_t id_old, size_t id_new) |
| Changes the ID of an actively tracked feature to another one. More...
|
|
virtual void | display_active (cv::Mat &img_out, int r1, int g1, int b1, int r2, int g2, int b2, std::string overlay="") |
| Shows features extracted in the last image. More...
|
|
virtual void | display_history (cv::Mat &img_out, int r1, int g1, int b1, int r2, int g2, int b2, std::vector< size_t > highlighted={}, std::string overlay="") |
| Shows a "trail" for each feature (i.e. its history) More...
|
|
virtual void | feed_new_camera (const CameraData &message)=0 |
| Process a new image. More...
|
|
std::shared_ptr< FeatureDatabase > | get_feature_database () |
| Get the feature database with all the track information. More...
|
|
std::unordered_map< size_t, std::vector< size_t > > | get_last_ids () |
| Getter method for active features in the last frame (ids per camera) More...
|
|
std::unordered_map< size_t, std::vector< cv::KeyPoint > > | get_last_obs () |
| Getter method for active features in the last frame (observations per camera) More...
|
|
int | get_num_features () |
| Getter method for number of active features. More...
|
|
void | set_num_features (int _num_features) |
| Setter method for number of active features. More...
|
|
| TrackBase (std::unordered_map< size_t, std::shared_ptr< CamBase >> cameras, int numfeats, int numaruco, bool stereo, HistogramMethod histmethod) |
| Public constructor with configuration variables. More...
|
|
virtual | ~TrackBase () |
|
|
std::unordered_map< size_t, std::shared_ptr< CamBase > > | camera_calib |
| Camera object which has all calibration in it. More...
|
|
std::map< size_t, bool > | camera_fisheye |
| Whether each camera uses a fisheye model or not. More...
|
|
std::atomic< size_t > | currid |
| Master ID for this tracker (atomic to allow for multi-threading) More...
|
|
std::shared_ptr< FeatureDatabase > | database |
| Database with all our current features. More...
|
|
HistogramMethod | histogram_method |
| Which histogram equalization method we should use to pre-process images with. More...
|
|
std::unordered_map< size_t, std::vector< size_t > > | ids_last |
| Set of IDs of each current feature in the database. More...
|
|
std::map< size_t, cv::Mat > | img_last |
| Last set of images (use map so all trackers render in the same order) More...
|
|
std::map< size_t, cv::Mat > | img_mask_last |
| Last set of image masks (use map so all trackers render in the same order) More...
|
|
std::vector< std::mutex > | mtx_feeds |
| Mutexes for our last set of image storage (img_last, pts_last, and ids_last) More...
|
|
std::mutex | mtx_last_vars |
| Mutex for editing the *_last variables. More...
|
|
int | num_features |
| Number of features we should try to track frame to frame. More...
|
|
std::unordered_map< size_t, std::vector< cv::KeyPoint > > | pts_last |
| Last set of tracked points. More...
|
|
boost::posix_time::ptime | rT1 |
|
boost::posix_time::ptime | rT2 |
|
boost::posix_time::ptime | rT3 |
|
boost::posix_time::ptime | rT4 |
|
boost::posix_time::ptime | rT5 |
|
boost::posix_time::ptime | rT6 |
|
boost::posix_time::ptime | rT7 |
|
bool | use_stereo |
| If we should use binocular tracking or stereo tracking for multi-camera. More...
|
|
Visual feature tracking base class.
This is the base class for all our visual trackers. The goal is to provide a common interface so that all underlying trackers can hide away their complexities. We have something called the "feature database" which holds all the tracking information. The user can ask this database for features, which can then be used in an MSCKF or batch-based setting. The feature tracks store both the raw (distorted) and the undistorted/normalized values. Right now we support only two camera models; see undistort_point_brown() and undistort_point_fisheye(). A minimal usage sketch follows.
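The sketch below is only an illustration of the intended workflow, not code from this library: MyTracker is a hypothetical minimal subclass (a real tracker would do actual detection and matching), the include path and the HistogramMethod::HISTOGRAM value name are assumptions, and otherwise only the members documented on this page are used.

#include <memory>
#include <unordered_map>
// #include "track/TrackBase.h"   // assumed include path; brings in TrackBase, CameraData, CamBase, ...

// Hypothetical minimal subclass: a real tracker (KLT, descriptor, ArUco) would
// perform detection and matching inside feed_new_camera() and fill the database.
class MyTracker : public TrackBase {
public:
  using TrackBase::TrackBase; // reuse the documented public constructor
  void feed_new_camera(const CameraData &message) override {
    // ... detect/track features here and update the feature database ...
  }
};

void track_one_message(const std::unordered_map<size_t, std::shared_ptr<CamBase>> &cameras,
                       const CameraData &message) {
  // Arguments follow the TrackBase constructor documented above; the
  // HistogramMethod::HISTOGRAM enumerator name is an assumption.
  auto tracker = std::make_shared<MyTracker>(cameras, /*numfeats*/ 200, /*numaruco*/ 0,
                                             /*stereo*/ false,
                                             TrackBase::HistogramMethod::HISTOGRAM);

  // Process the new image; the tracker updates its internal feature database.
  tracker->feed_new_camera(message);

  // Ask the database for the tracked features (e.g. for an MSCKF or batch update).
  std::shared_ptr<FeatureDatabase> db = tracker->get_feature_database();
  (void)db; // query db for feature tracks as needed
}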
- A Note on Multi-Threading Support
- There is some support for asynchronous multi-threaded feature tracking of independent cameras. The key assumption in the implementation is that the user will not try to track the same camera in parallel, but will instead call on different cameras. For example, with two cameras one can either call the feed function sequentially, or spin each call into its own thread and wait for both to return (see the sketch below). The currid counter is atomic so that multiple threads can access it without issue and all features receive unique id values. We also have mutexes guarding access to the calibration and to the previous images and tracks (used during visualization). Note that if one thread calls visualization, it or the feed thread may block while acquiring the mutex for that specific camera id / feed.
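To make the supported pattern concrete, here is a small sketch (the tracker pointer and the two CameraData messages, one per camera, are assumed to be built elsewhere):

#include <memory>
#include <thread>

// Feed two *different* cameras in parallel; feeding the same camera from two
// threads at once is not supported. Unique feature IDs are still guaranteed
// because the master ID counter (currid) is atomic.
void feed_two_cameras(const std::shared_ptr<TrackBase> &tracker,
                      const CameraData &msg_cam0, const CameraData &msg_cam1) {
  std::thread t0([&] { tracker->feed_new_camera(msg_cam0); });
  std::thread t1([&] { tracker->feed_new_camera(msg_cam1); });

  // Wait for both feeds to finish before using the results.
  t0.join();
  t1.join();
}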
This base class also handles most of the heavy lifting for visualization, but sub-classes can override it with their own logic if they want (e.g. TrackAruco has its own visualization logic). The visualization needs access to the prior images and their tracks, and thus must synchronize when multi-threading is used. This shouldn't impact performance in general, but high-frequency visualization calls can degrade it. An example call is sketched below.
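A periodic visualization call might look like the following sketch (the window name and the two RGB colors are arbitrary choices, and this call may briefly block a feed thread on the shared mutexes):

#include <memory>
#include <opencv2/highgui.hpp>

// Draw the feature "trails" into an image using the documented display_history().
void show_tracks(const std::shared_ptr<TrackBase> &tracker) {
  cv::Mat img_tracks;
  // Two RGB colors are passed; the optional highlighted ids and overlay string use their defaults.
  tracker->display_history(img_tracks, 255, 255, 0, 255, 255, 255);
  if (!img_tracks.empty()) {
    cv::imshow("feature tracks", img_tracks); // window name is arbitrary
    cv::waitKey(1);
  }
}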
Definition at line 72 of file TrackBase.h.