void feed_new_camera (const CameraData &message) override
    Process a new image.

TrackKLT (std::unordered_map< size_t, std::shared_ptr< CamBase >> cameras, int numfeats, int numaruco, bool stereo, HistogramMethod histmethod, int fast_threshold, int gridx, int gridy, int minpxdist)
    Public constructor with configuration variables.

void change_feat_id (size_t id_old, size_t id_new)
    Changes the ID of an actively tracked feature to another one.

virtual void display_active (cv::Mat &img_out, int r1, int g1, int b1, int r2, int g2, int b2, std::string overlay="")
    Shows features extracted in the last image.

virtual void display_history (cv::Mat &img_out, int r1, int g1, int b1, int r2, int g2, int b2, std::vector< size_t > highlighted={}, std::string overlay="")
    Shows a "trail" for each feature (i.e. its history).

std::shared_ptr< FeatureDatabase > get_feature_database ()
    Get the feature database with all the track information.

std::unordered_map< size_t, std::vector< size_t > > get_last_ids ()
    Getter method for active features in the last frame (ids per camera).

std::unordered_map< size_t, std::vector< cv::KeyPoint > > get_last_obs ()
    Getter method for active features in the last frame (observations per camera).

int get_num_features ()
    Getter method for number of active features.

void set_num_features (int _num_features)
    Setter method for number of active features.

TrackBase (std::unordered_map< size_t, std::shared_ptr< CamBase >> cameras, int numfeats, int numaruco, bool stereo, HistogramMethod histmethod)
    Public constructor with configuration variables.

virtual ~TrackBase ()

void feed_monocular (const CameraData &message, size_t msg_id)
    Process a new monocular image.

void feed_stereo (const CameraData &message, size_t msg_id_left, size_t msg_id_right)
    Process new stereo pair of images.

void perform_detection_monocular (const std::vector< cv::Mat > &img0pyr, const cv::Mat &mask0, std::vector< cv::KeyPoint > &pts0, std::vector< size_t > &ids0)
    Detects new features in the current image.

void perform_detection_stereo (const std::vector< cv::Mat > &img0pyr, const std::vector< cv::Mat > &img1pyr, const cv::Mat &mask0, const cv::Mat &mask1, size_t cam_id_left, size_t cam_id_right, std::vector< cv::KeyPoint > &pts0, std::vector< cv::KeyPoint > &pts1, std::vector< size_t > &ids0, std::vector< size_t > &ids1)
    Detects new features in the current stereo pair.

void perform_matching (const std::vector< cv::Mat > &img0pyr, const std::vector< cv::Mat > &img1pyr, std::vector< cv::KeyPoint > &pts0, std::vector< cv::KeyPoint > &pts1, size_t id0, size_t id1, std::vector< uchar > &mask_out)
    KLT track between two images, and do RANSAC afterwards.
int grid_x
int grid_y
std::map< size_t, cv::Mat > img_curr
std::map< size_t, std::vector< cv::Mat > > img_pyramid_curr
std::map< size_t, std::vector< cv::Mat > > img_pyramid_last
int min_px_dist
int pyr_levels = 5
int threshold
cv::Size win_size = cv::Size(15, 15)

std::unordered_map< size_t, std::shared_ptr< CamBase > > camera_calib
    Camera object which has all calibration in it.

std::map< size_t, bool > camera_fisheye
    If we are a fisheye model or not.

std::atomic< size_t > currid
    Master ID for this tracker (atomic to allow for multi-threading).

std::shared_ptr< FeatureDatabase > database
    Database with all our current features.

HistogramMethod histogram_method
    What histogram equalization method we should pre-process images with?

std::unordered_map< size_t, std::vector< size_t > > ids_last
    Set of IDs of each current feature in the database.
std::map< size_t, cv::Mat > img_last
    Last set of images (use map so all trackers render in the same order).

std::map< size_t, cv::Mat > img_mask_last
    Last set of image masks (use map so all trackers render in the same order).
std::vector< std::mutex > mtx_feeds
    Mutexes for our last set of image storage (img_last, pts_last, and ids_last).

std::mutex mtx_last_vars
    Mutex for editing the *_last variables.

int num_features
    Number of features we should try to track frame to frame.

std::unordered_map< size_t, std::vector< cv::KeyPoint > > pts_last
    Last set of tracked points.

boost::posix_time::ptime rT1
boost::posix_time::ptime rT2
boost::posix_time::ptime rT3
boost::posix_time::ptime rT4
boost::posix_time::ptime rT5
boost::posix_time::ptime rT6
boost::posix_time::ptime rT7

bool use_stereo
    If we should use binocular tracking or stereo tracking for multi-camera.
|
KLT tracking of features.
This is the implementation of a KLT visual frontend for tracking sparse features. We can track monocular cameras across time (temporally), and we can also track stereo cameras across time while additionally tracking features from the left image into the right image to recover the stereo correspondence information. The KLT tracking itself uses the OpenCV calcOpticalFlowPyrLK function.
Definition at line 39 of file TrackKLT.h.
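As a rough usage sketch (not taken from the generated documentation), the tracker is constructed with the configuration variables listed above, fed images through feed_new_camera(), and queried through its feature database. The CameraData field names (timestamp, sensor_ids, images, masks), the HistogramMethod::HISTOGRAM enumerator, the ov_core namespace, and the include path are assumptions that are not documented on this page:

```cpp
#include <memory>
#include <unordered_map>

#include <opencv2/core.hpp>

#include "track/TrackKLT.h" // assumed include path within ov_core

using namespace ov_core;

// Construct the tracker and feed it a single monocular image.
void track_one_image(std::unordered_map<size_t, std::shared_ptr<CamBase>> cameras, double timestamp,
                     const cv::Mat &image) {

  // 200 features, no ArUco tags, monocular (stereo=false), histogram equalization,
  // FAST threshold of 20, a 5x5 extraction grid, and a 10 pixel minimum feature distance
  TrackKLT tracker(cameras, 200, 0, false, TrackBase::HistogramMethod::HISTOGRAM, 20, 5, 5, 10);

  // Package the image into a CameraData message (field names are an assumption)
  CameraData message;
  message.timestamp = timestamp;
  message.sensor_ids.push_back(0);
  message.images.push_back(image);
  message.masks.push_back(cv::Mat::zeros(image.rows, image.cols, CV_8UC1)); // empty mask = extract anywhere

  // Process the image; all resulting tracks end up in the shared feature database
  tracker.feed_new_camera(message);
  std::shared_ptr<FeatureDatabase> database = tracker.get_feature_database();
  (void)database;
}
```

The concrete constructor values (200 features, FAST threshold of 20, 5x5 grid, 10 px minimum distance) are illustrative only.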
void TrackKLT::perform_detection_stereo (const std::vector< cv::Mat > &img0pyr, const std::vector< cv::Mat > &img1pyr, const cv::Mat &mask0, const cv::Mat &mask1, size_t cam_id_left, size_t cam_id_right, std::vector< cv::KeyPoint > &pts0, std::vector< cv::KeyPoint > &pts1, std::vector< size_t > &ids0, std::vector< size_t > &ids1) [protected]
Detects new features in the current stereo pair.
Parameters
    img0pyr: left image we will detect features on (first level of pyramid)
    img1pyr: right image we will detect features on (first level of pyramid)
    mask0: mask which has what ROI we do not want features in
    mask1: mask which has what ROI we do not want features in
    cam_id_left: first camera sensor id
    cam_id_right: second camera sensor id
    pts0: left vector of currently extracted keypoints
    pts1: right vector of currently extracted keypoints
    ids0: left vector of feature ids for each currently extracted keypoint
    ids1: right vector of feature ids for each currently extracted keypoint
This does the same logic as the perform_detection_monocular() function, but we also enforce stereo constraints. We detect features in the left image and then KLT track them onto the right image; if a track is valid, we have both the keypoint in the left image and its matching point in the right image. This will try to always have "max_features" features being tracked through KLT at each timestep.
Definition at line 530 of file TrackKLT.cpp.
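Only the prose above comes from the documentation; the following is a minimal, self-contained OpenCV sketch of the same detect-then-track idea, not the actual TrackKLT implementation. The helper name and its parameters are hypothetical, the feature ID bookkeeping and grid bucketing are omitted, and the pyramids are assumed to have been built with cv::buildOpticalFlowPyramid using pyr_levels levels:

```cpp
#include <vector>

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/video/tracking.hpp>

// Hypothetical helper mirroring the detect-in-left, track-to-right logic described above.
void detect_stereo_sketch(const std::vector<cv::Mat> &img0pyr, const std::vector<cv::Mat> &img1pyr,
                          const cv::Mat &mask0, int fast_threshold, cv::Size win_size, int pyr_levels,
                          std::vector<cv::KeyPoint> &pts0, std::vector<cv::KeyPoint> &pts1) {

  // 1) Detect candidate corners on the full-resolution (first) level of the left pyramid
  std::vector<cv::KeyPoint> kpts;
  cv::FAST(img0pyr.at(0), kpts, fast_threshold, true);

  // 2) Drop detections that fall inside the "do not extract here" mask
  std::vector<cv::Point2f> pts_left;
  for (const auto &kp : kpts) {
    if (mask0.at<uchar>((int)kp.pt.y, (int)kp.pt.x) > 127)
      continue;
    pts_left.push_back(kp.pt);
  }
  if (pts_left.empty())
    return;

  // 3) KLT-track the left detections into the right image to get stereo correspondences
  std::vector<cv::Point2f> pts_right = pts_left; // start the search at the same pixel location
  std::vector<uchar> status;
  std::vector<float> err;
  cv::calcOpticalFlowPyrLK(img0pyr, img1pyr, pts_left, pts_right, status, err, win_size, pyr_levels - 1,
                           cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 30, 0.01),
                           cv::OPTFLOW_USE_INITIAL_FLOW);

  // 4) Keep only the points that produced a valid left-to-right track
  for (size_t i = 0; i < pts_left.size(); i++) {
    if (!status[i])
      continue;
    pts0.push_back(cv::KeyPoint(pts_left[i], 1));
    pts1.push_back(cv::KeyPoint(pts_right[i], 1));
  }
}
```

Here the stereo constraint is enforced simply by requiring each left-image detection to produce a successful KLT track in the right image.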
void TrackKLT::perform_matching (const std::vector< cv::Mat > &img0pyr, const std::vector< cv::Mat > &img1pyr, std::vector< cv::KeyPoint > &pts0, std::vector< cv::KeyPoint > &pts1, size_t id0, size_t id1, std::vector< uchar > &mask_out) [protected]
KLT track between two images, and do RANSAC afterwards.
Parameters
    img0pyr: starting image pyramid
    img1pyr: image pyramid we want to track into
    pts0: starting points
    pts1: points we have tracked
    id0: id of the first camera
    id1: id of the second camera
    mask_out: what points had valid tracks
This will track features from the first image into the second image. The two point vectors will be of equal size, but the mask_out variable will specify which points are good or bad. If the second vector is non-empty, it will be used as an initial guess of where the keypoints are in the second image.
Definition at line 829 of file TrackKLT.cpp.
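As a hedged illustration of this "KLT then RANSAC" pattern (again, not the actual implementation), the sketch below tracks one point set into the second pyramid with cv::calcOpticalFlowPyrLK and then rejects outliers with a RANSAC fundamental-matrix check. The helper name and signature are hypothetical, and the camera ids (id0, id1) along with any calibration handling tied to them are omitted:

```cpp
#include <vector>

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/video/tracking.hpp>

// Hypothetical helper: KLT-track pts0 from the first pyramid into the second,
// then mark outliers with a RANSAC fundamental-matrix check.
void klt_then_ransac_sketch(const std::vector<cv::Mat> &img0pyr, const std::vector<cv::Mat> &img1pyr,
                            std::vector<cv::Point2f> &pts0, std::vector<cv::Point2f> &pts1,
                            cv::Size win_size, int pyr_levels, std::vector<uchar> &mask_out) {

  if (pts0.empty())
    return;

  // 1) KLT: if pts1 already holds an initial guess of the new locations keep it,
  //    otherwise start the search from the old locations.
  if (pts1.size() != pts0.size())
    pts1 = pts0;
  std::vector<uchar> status_klt;
  std::vector<float> err;
  cv::calcOpticalFlowPyrLK(img0pyr, img1pyr, pts0, pts1, status_klt, err, win_size, pyr_levels - 1,
                           cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 30, 0.01),
                           cv::OPTFLOW_USE_INITIAL_FLOW);

  // 2) RANSAC: reject tracks that violate the epipolar constraint between the two views
  std::vector<uchar> status_rsc(pts0.size(), (uchar)1);
  if (pts0.size() >= 8)
    cv::findFundamentalMat(pts0, pts1, cv::FM_RANSAC, 1.0, 0.999, status_rsc);

  // 3) A point is a valid track only if both the KLT and the RANSAC checks accept it
  mask_out.resize(pts0.size());
  for (size_t i = 0; i < pts0.size(); i++)
    mask_out[i] = (uchar)(status_klt[i] && status_rsc[i]);
}
```

A point survives only if both the KLT status flag and the RANSAC inlier mask accept it, which mirrors the "mask_out specifies which points are good or bad" behaviour described above.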