|
void | feed_new_camera (const CameraData &message) override |
| Process a new image. More...
|
|
| TrackDescriptor (std::unordered_map< size_t, std::shared_ptr< CamBase >> cameras, int numfeats, int numaruco, bool stereo, HistogramMethod histmethod, int fast_threshold, int gridx, int gridy, int minpxdist, double knnratio) |
| Public constructor with configuration variables. More...
|
|
void | change_feat_id (size_t id_old, size_t id_new) |
| Changes the ID of an actively tracked feature to another one. More...
|
|
virtual void | display_active (cv::Mat &img_out, int r1, int g1, int b1, int r2, int g2, int b2, std::string overlay="") |
| Shows features extracted in the last image. More...
|
|
virtual void | display_history (cv::Mat &img_out, int r1, int g1, int b1, int r2, int g2, int b2, std::vector< size_t > highlighted={}, std::string overlay="") |
| Shows a "trail" for each feature (i.e. its history) More...
|
|
std::shared_ptr< FeatureDatabase > | get_feature_database () |
| Get the feature database with all the track information. More...
|
|
std::unordered_map< size_t, std::vector< size_t > > | get_last_ids () |
| Getter method for active features in the last frame (ids per camera) More...
|
|
std::unordered_map< size_t, std::vector< cv::KeyPoint > > | get_last_obs () |
| Getter method for active features in the last frame (observations per camera) More...
|
|
int | get_num_features () |
| Getter method for number of active features. More...
|
|
void | set_num_features (int _num_features) |
| Setter method for number of active features. More...
|
|
| TrackBase (std::unordered_map< size_t, std::shared_ptr< CamBase >> cameras, int numfeats, int numaruco, bool stereo, HistogramMethod histmethod) |
| Public constructor with configuration variables. More...
|
|
virtual | ~TrackBase () |
|
|
void | feed_monocular (const CameraData &message, size_t msg_id) |
| Process a new monocular image. More...
|
|
void | feed_stereo (const CameraData &message, size_t msg_id_left, size_t msg_id_right) |
| Process new stereo pair of images. More...
|
|
void | perform_detection_monocular (const cv::Mat &img0, const cv::Mat &mask0, std::vector< cv::KeyPoint > &pts0, cv::Mat &desc0, std::vector< size_t > &ids0) |
| Detects new features in the current image. More...
|
|
void | perform_detection_stereo (const cv::Mat &img0, const cv::Mat &img1, const cv::Mat &mask0, const cv::Mat &mask1, std::vector< cv::KeyPoint > &pts0, std::vector< cv::KeyPoint > &pts1, cv::Mat &desc0, cv::Mat &desc1, size_t cam_id0, size_t cam_id1, std::vector< size_t > &ids0, std::vector< size_t > &ids1) |
| Detects new features in the current stereo pair. More...
|
|
void | robust_match (const std::vector< cv::KeyPoint > &pts0, const std::vector< cv::KeyPoint > &pts1, const cv::Mat &desc0, const cv::Mat &desc1, size_t id0, size_t id1, std::vector< cv::DMatch > &matches) |
| Find matches between two keypoint+descriptor sets. More...
|
|
void | robust_ratio_test (std::vector< std::vector< cv::DMatch >> &matches) |
|
void | robust_symmetry_test (std::vector< std::vector< cv::DMatch >> &matches1, std::vector< std::vector< cv::DMatch >> &matches2, std::vector< cv::DMatch > &good_matches) |
|
|
std::unordered_map< size_t, cv::Mat > | desc_last |
|
int | grid_x |
|
int | grid_y |
|
double | knn_ratio |
|
cv::Ptr< cv::DescriptorMatcher > | matcher = cv::DescriptorMatcher::create("BruteForce-Hamming") |
|
int | min_px_dist |
|
cv::Ptr< cv::ORB > | orb0 = cv::ORB::create() |
|
cv::Ptr< cv::ORB > | orb1 = cv::ORB::create() |
|
boost::posix_time::ptime | rT1 |
|
boost::posix_time::ptime | rT2 |
|
boost::posix_time::ptime | rT3 |
|
boost::posix_time::ptime | rT4 |
|
boost::posix_time::ptime | rT5 |
|
boost::posix_time::ptime | rT6 |
|
boost::posix_time::ptime | rT7 |
|
int | threshold |
|
std::unordered_map< size_t, std::shared_ptr< CamBase > > | camera_calib |
| Camera object which has all calibration in it. More...
|
|
std::map< size_t, bool > | camera_fisheye |
| If we are a fisheye model or not. More...
|
|
std::atomic< size_t > | currid |
| Master ID for this tracker (atomic to allow for multi-threading) More...
|
|
std::shared_ptr< FeatureDatabase > | database |
| Database with all our current features. More...
|
|
HistogramMethod | histogram_method |
| Histogram equalization method we should pre-process images with. More...
|
|
std::unordered_map< size_t, std::vector< size_t > > | ids_last |
| Set of IDs of each current feature in the database. More...
|
|
std::map< size_t, cv::Mat > | img_last |
| Last set of images (use map so all trackers render in the same order) More...
|
|
std::map< size_t, cv::Mat > | img_mask_last |
| Last set of image masks (use map so all trackers render in the same order) More...
|
|
std::vector< std::mutex > | mtx_feeds |
| Mutexes for our last set of image storage (img_last, pts_last, and ids_last) More...
|
|
std::mutex | mtx_last_vars |
| Mutex for editing the *_last variables. More...
|
|
int | num_features |
| Number of features we should try to track frame to frame. More...
|
|
std::unordered_map< size_t, std::vector< cv::KeyPoint > > | pts_last |
| Last set of tracked points. More...
|
|
bool | use_stereo |
| If we should use binocular tracking or stereo tracking for multi-camera. More...
|
|
Descriptor-based visual tracking.
Here we use descriptor matching to track features from one frame to the next. We track both temporally and across stereo pairs to get stereo constraints. Right now we use ORB descriptors, as we have found they are the fastest to compute. Tracks are then rejected based on a ratio test and RANSAC.
Definition at line 37 of file TrackDescriptor.h.
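For orientation, below is a minimal monocular usage sketch. It is not taken from the library documentation: the header paths, the CamRadtan(width, height) constructor and set_value() calibration setter, the CameraData field names (timestamp, sensor_ids, images, masks), and the TrackBase::HistogramMethod::HISTOGRAM enumerator are assumptions about the surrounding ov_core API and may need adjusting to your version; the constructor arguments themselves follow the signature listed above.

#include <memory>
#include <unordered_map>
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include "cam/CamRadtan.h"         // header paths assumed
#include "track/TrackDescriptor.h"
#include "utils/sensor_data.h"     // CameraData (path assumed)

int main() {
  using namespace ov_core;

  // One camera model (CamRadtan ctor and set_value() are assumed API).
  std::unordered_map<size_t, std::shared_ptr<CamBase>> cameras;
  Eigen::MatrixXd calib(8, 1);
  calib << 458.0, 457.0, 367.0, 248.0, -0.28, 0.07, 0.0, 0.0; // fx fy cx cy d1..d4
  auto cam = std::make_shared<CamRadtan>(752, 480);
  cam->set_value(calib);
  cameras.insert({0, cam});

  // Constructor arguments follow the signature listed above.
  TrackDescriptor tracker(cameras, /*numfeats*/ 200, /*numaruco*/ 1024, /*stereo*/ false,
                          TrackBase::HistogramMethod::HISTOGRAM,
                          /*fast_threshold*/ 20, /*gridx*/ 5, /*gridy*/ 5,
                          /*minpxdist*/ 10, /*knnratio*/ 0.7);

  // Feed one monocular frame (CameraData field names are assumed).
  CameraData message;
  message.timestamp = 0.0;
  message.sensor_ids.push_back(0);
  message.images.push_back(cv::Mat(480, 752, CV_8UC1, cv::Scalar(127)));
  message.masks.push_back(cv::Mat::zeros(480, 752, CV_8UC1));
  tracker.feed_new_camera(message);

  // All tracked features end up in the shared feature database.
  std::shared_ptr<FeatureDatabase> database = tracker.get_feature_database();
  return 0;
}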
void TrackDescriptor::perform_detection_monocular (const cv::Mat &img0, const cv::Mat &mask0, std::vector< cv::KeyPoint > &pts0, cv::Mat &desc0, std::vector< size_t > &ids0) [protected]
Detects new features in the current image.
Parameters
img0 | image we will detect features on |
mask0 | mask marking regions where we do not want to extract features |
pts0 | vector of extracted keypoints |
desc0 | vector of the extracted descriptors |
ids0 | vector of all new IDs |
Given a set of images and their currently extracted features, this will try to add new features. We return all extracted descriptors here since we DO NOT need to do stereo tracking left to right. Our vector of IDs will later be overwritten when we match features temporally to the previous frame's features. See robust_match() for the matching.
Definition at line 355 of file TrackDescriptor.cpp.
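For intuition, here is a simplified sketch of the kind of logic this step performs, written with plain OpenCV calls rather than the library's internals (the real implementation additionally grids the image into gridx by gridy cells so features are spread evenly); the function name and parameter list are hypothetical.

#include <atomic>
#include <vector>
#include <opencv2/opencv.hpp>

// Sketch only: FAST corners, masked regions rejected, a minimum pixel distance
// enforced with an occupancy grid, ORB descriptors computed on the survivors,
// and fresh IDs drawn from the tracker's atomic counter.
void detect_monocular_sketch(const cv::Mat &img0, const cv::Mat &mask0,
                             std::vector<cv::KeyPoint> &pts0, cv::Mat &desc0,
                             std::vector<size_t> &ids0, int fast_threshold,
                             int min_px_dist, std::atomic<size_t> &currid) {
  // 1) Extract candidate corners.
  std::vector<cv::KeyPoint> candidates;
  cv::FAST(img0, candidates, fast_threshold, true);

  // 2) Reject candidates inside the mask or too close to an accepted point.
  cv::Mat occupancy = cv::Mat::zeros(img0.rows / min_px_dist + 1,
                                     img0.cols / min_px_dist + 1, CV_8UC1);
  std::vector<cv::KeyPoint> accepted;
  for (const auto &kp : candidates) {
    int x = (int)kp.pt.x, y = (int)kp.pt.y;
    if (mask0.at<uchar>(y, x) > 127)
      continue; // inside the "do not detect here" region
    int gx = x / min_px_dist, gy = y / min_px_dist;
    if (occupancy.at<uchar>(gy, gx))
      continue; // another feature already occupies this cell
    occupancy.at<uchar>(gy, gx) = 1;
    accepted.push_back(kp);
  }

  // 3) Describe the surviving keypoints with ORB (compute() may drop border
  //    keypoints, keeping `accepted` and `desc0` row-aligned).
  cv::Ptr<cv::ORB> orb = cv::ORB::create();
  orb->compute(img0, accepted, desc0);

  // 4) Give every new feature a unique ID; these get overwritten later when
  //    the feature matches temporally against the previous frame.
  for (const auto &kp : accepted) {
    pts0.push_back(kp);
    ids0.push_back(currid++);
  }
}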
void TrackDescriptor::perform_detection_stereo (const cv::Mat &img0, const cv::Mat &img1, const cv::Mat &mask0, const cv::Mat &mask1, std::vector< cv::KeyPoint > &pts0, std::vector< cv::KeyPoint > &pts1, cv::Mat &desc0, cv::Mat &desc1, size_t cam_id0, size_t cam_id1, std::vector< size_t > &ids0, std::vector< size_t > &ids1) [protected]
Detects new features in the current stereo pair.
Parameters
img0 | left image we will detect features on |
img1 | right image we will detect features on |
mask0 | left mask marking regions where we do not want to extract features |
mask1 | right mask marking regions where we do not want to extract features |
pts0 | left vector of new keypoints |
pts1 | right vector of new keypoints |
desc0 | left vector of extracted descriptors |
desc1 | right vector of extracted descriptors |
cam_id0 | id of the first camera |
cam_id1 | id of the second camera |
ids0 | left vector of all new IDs |
ids1 | right vector of all new IDs |
This does the same logic as the perform_detection_monocular() function, but we also enforce stereo constraints. We also perform stereo matching from left to right and only return good matches that are found in both the left and right images. Our vector of IDs will later be overwritten when we match features temporally to the previous frame's features. See robust_match() for the matching.
Definition at line 403 of file TrackDescriptor.cpp.
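To make the stereo constraint concrete, the sketch below shows the kind of left/right association this step performs: a Lowe-style kNN ratio test in both directions, a symmetry check, and one shared ID per surviving pair. It is a simplified illustration, not the library implementation (the real matching also includes a RANSAC check, as noted in the class description), and the function name and parameter list are hypothetical.

#include <atomic>
#include <cstddef>
#include <vector>
#include <opencv2/opencv.hpp>

// Sketch only: given keypoints/descriptors already detected independently in
// the left (0) and right (1) images, keep only mutual matches and assign each
// accepted stereo pair a single shared feature ID.
void detect_stereo_sketch(const std::vector<cv::KeyPoint> &cand0, const cv::Mat &cdesc0,
                          const std::vector<cv::KeyPoint> &cand1, const cv::Mat &cdesc1,
                          double knn_ratio, std::atomic<size_t> &currid,
                          std::vector<cv::KeyPoint> &pts0, std::vector<cv::KeyPoint> &pts1,
                          cv::Mat &desc0, cv::Mat &desc1,
                          std::vector<size_t> &ids0, std::vector<size_t> &ids1) {
  cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce-Hamming");

  // k-nearest-neighbor matching in both directions.
  std::vector<std::vector<cv::DMatch>> knn01, knn10;
  matcher->knnMatch(cdesc0, cdesc1, knn01, 2);
  matcher->knnMatch(cdesc1, cdesc0, knn10, 2);

  // Ratio test: best match must be clearly better than the second best.
  auto passes_ratio = [&](const std::vector<cv::DMatch> &m) {
    return m.size() == 2 && m[0].distance < knn_ratio * m[1].distance;
  };

  // Symmetry test: keep a match only if left->right and right->left agree.
  for (const auto &m : knn01) {
    if (!passes_ratio(m))
      continue;
    const cv::DMatch &fwd = m[0];
    const std::vector<cv::DMatch> &back = knn10[fwd.trainIdx];
    if (!passes_ratio(back) || back[0].trainIdx != fwd.queryIdx)
      continue;

    // Good stereo match: store it on both sides with one shared ID.
    size_t id = currid++;
    pts0.push_back(cand0[fwd.queryIdx]);
    pts1.push_back(cand1[fwd.trainIdx]);
    desc0.push_back(cdesc0.row(fwd.queryIdx));
    desc1.push_back(cdesc1.row(fwd.trainIdx));
    ids0.push_back(id);
    ids1.push_back(id);
  }
}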