Class PointPerceptionsOpsView

Nested Relationships

Nested Types

  • struct VoxelKey

  • struct VoxelKeyHash

Class Documentation

class PointPerceptionsOpsView

Provides efficient, non-destructive, chainable operations over a set of point-based perceptions.

This view enables filtering, downsampling, fusion, and dimensional collapsing across multiple point clouds without duplicating the underlying perceptions. Most operations work lazily by keeping index-based selections and transformation options, and only materialize a new point cloud when required (for example in as_points()).
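A minimal end-to-end sketch of the intended usage (not taken from the reference itself): it assumes an existing PointPerceptions container named perceptions, and the target frame, bounds, and resolution are illustrative values.

    #include <limits>

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    const double nan = std::numeric_limits<double>::quiet_NaN();

    PointPerceptionsOpsView view(perceptions);          // non-owning view over the container
    view.fuse("base_link")                              // work in a common frame (lazy)
        .filter({-5.0, -5.0, nan}, {5.0, 5.0, nan})     // 10 m x 10 m box, z unbounded
        .downsample(0.1);                               // 10 cm voxel grid

    pcl::PointCloud<pcl::PointXYZ> cloud = view.as_points();   // materialize once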

Public Functions

explicit PointPerceptionsOpsView(const PointPerceptions &perceptions)

Constructs a non-owning view over an external container of perceptions.

Parameters:

perceptions – Constant reference to the source container.

explicit PointPerceptionsOpsView(const PointPerception &perception)

Constructs a view from a single PointPerception instance.

Creates an internal container owning the given perception and initializes the index set for all its points.

Parameters:

perception – The PointPerception to wrap in the view.

explicit PointPerceptionsOpsView(PointPerceptions &&perceptions)

Constructs a view taking ownership of the container.

Parameters:

perceptions – Rvalue container of perceptions to be owned by the view.

PointPerceptionsOpsView(const PointPerceptionsOpsView&) = delete

PointPerceptionsOpsView &operator=(const PointPerceptionsOpsView&) = delete

Copy construction and copy assignment are disabled.

PointPerceptionsOpsView(PointPerceptionsOpsView&&) = default

Move construction is allowed.

PointPerceptionsOpsView &filter(const std::vector<double> &min_bounds, const std::vector<double> &max_bounds, bool lazy_post_fuse = true)

Filters all point clouds by axis-aligned bounds.

When the view has no target frame configured (that is, fuse() has not been called), this method always performs an eager filtering step in the original frame of each sensor, updating the internal index sets in-place.

When a target frame has been configured via fuse(), the behaviour depends on lazy_post_fuse:

  • If lazy_post_fuse is true (default), the bounds are stored as a post-fuse filter in the target frame and are applied lazily during materialization (for example, in as_points()), without modifying the internal indices.

  • If lazy_post_fuse is false, the method applies the transform to the target frame immediately and performs an eager filtering step in that frame, updating the internal indices accordingly.

Components set to NaN in min_bounds or max_bounds leave the corresponding axis unbounded.

Parameters:
  • min_bounds – Minimum [x,y,z] values (use NaN to disable per axis).

  • max_bounds – Maximum [x,y,z] values (use NaN to disable per axis).

  • lazy_post_fuse – If true and a target frame is set, configure a lazy post-fuse filter in the target frame; if false, always filter eagerly (updating indices) in the current frame (sensor or target).

Returns:

Reference to *this to allow chaining.
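For instance, a sketch of an eager, per-sensor-frame filter that leaves the z axis unbounded (fuse() not yet called); the container perceptions and the bounds are illustrative.

    #include <limits>

    const double nan = std::numeric_limits<double>::quiet_NaN();

    PointPerceptionsOpsView view(perceptions);
    view.filter({-2.0, -2.0, nan},    // min x, min y; z has no lower bound
                {2.0, 2.0, nan});     // max x, max y; z has no upper bound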

PointPerceptionsOpsView &downsample(double resolution)

Downsamples each perception using a voxel grid.

Parameters:

resolution – Voxel size in meters.

Returns:

Reference to *this to allow chaining.
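For example (assuming a view built as in the earlier sketches), downsampling and then inspecting the result for a single perception; index 0 is illustrative.

    view.downsample(0.2);                      // 20 cm voxels per perception
    const auto &first = view.as_points(0);     // filtered cloud of the first perception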

PointPerceptionsOpsView &collapse(const std::vector<double> &collapse_dims, bool lazy = true)

Collapses dimensions to fixed values (for example, projection onto a plane).

Components set to NaN in collapse_dims keep the original values.

The behaviour depends on lazy:

  • If lazy is true (default), the collapse configuration is stored in the view and applied lazily when materializing point clouds (for example, in as_points()), without modifying the underlying perception data.

  • If lazy is false and the view owns its internal container, the collapse is applied eagerly by updating the coordinates of all stored points, so subsequent operations (filters, fusion, etc.) observe the collapsed geometry. On non-owning views, the eager mode is ignored to avoid modifying external data.

Parameters:
  • collapse_dims – Fixed values for each axis (use NaN to preserve original values).

  • lazy – If true, configure collapse lazily for output only; if false and the view is owning, apply the collapse immediately to the internal point data.

Returns:

Reference to *this to allow chaining.
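As an illustration (assuming an existing view as above), collapsing the z axis projects the selection onto the ground plane; NaN keeps x and y untouched.

    const double nan = std::numeric_limits<double>::quiet_NaN();

    // Lazy (default): the collapse is only applied when materializing output.
    view.collapse({nan, nan, 0.0});
    pcl::PointCloud<pcl::PointXYZ> flat = view.as_points();   // all output points have z == 0

    // Eager: only honoured when the view owns its container (e.g. constructed
    // from a single PointPerception or from an rvalue PointPerceptions).
    view.collapse({nan, nan, 0.0}, /*lazy=*/false);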

pcl::PointCloud<pcl::PointXYZ> as_points() const

Retrieves all selected points across perceptions as a single concatenated cloud. Any configured fusion transform, post-fuse filters, and lazy collapse are applied during this materialization.

Returns:

Concatenated point cloud.

const pcl::PointCloud<pcl::PointXYZ> &as_points(int idx) const

Retrieves the filtered point cloud for a specific perception.

Parameters:

idx – Index of the target perception in the underlying container.

Returns:

Const reference to the filtered point cloud.

PointPerceptionsOpsView &fuse(const std::string &target_frame)

Configures fusion of all perceptions into a common frame.

This method does not immediately build a fused point cloud. Instead, it stores the target frame and the required transforms so that subsequent operations (for example filter) and final materialization (as_points()) work in target_frame without duplicating the underlying data.

Parameters:

target_frame – Frame ID to which all clouds are conceptually transformed.

Returns:

Reference to *this to allow chaining.
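A sketch contrasting the two post-fuse filtering modes, again assuming an existing view; the frame ID and bounds are illustrative.

    view.fuse("map");   // store the target frame and transforms, no fused cloud yet

    // Default: bounds are kept as a post-fuse filter and evaluated in as_points().
    view.filter({0.0, 0.0, 0.0}, {50.0, 50.0, 5.0});

    // Eager alternative: transform to "map" now and update the internal indices.
    view.filter({0.0, 0.0, 0.0}, {50.0, 50.0, 5.0}, /*lazy_post_fuse=*/false);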

PointPerceptionsOpsView &add(const pcl::PointCloud<pcl::PointXYZ> points, const std::string &frame, rclcpp::Time stamp)

Adds a new perception to the current view.

This method extends the underlying set of perceptions managed by the view. It does not create a new PointPerceptionsOpsView instance; instead, it updates the current view in place and returns a reference to *this to allow method chaining.

The newly added perception becomes part of subsequent operations such as filtering, fusion, collapsing, and final materialization (as_points()).

Parameters:
  • points – Point cloud to include.

  • frame – Frame ID associated with points.

  • stamp – Timestamp associated with points.

Returns:

Reference to *this to allow chaining.
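A sketch of extending an existing view with an additional cloud; the frame ID, point values, and clock source are illustrative.

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <rclcpp/rclcpp.hpp>

    pcl::PointCloud<pcl::PointXYZ> extra;
    extra.push_back(pcl::PointXYZ(1.0f, 0.5f, 0.2f));

    rclcpp::Clock clock(RCL_ROS_TIME);
    view.add(extra, "lidar_front", clock.now())
        .downsample(0.05);   // the new perception takes part in later operations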

inline const PointPerceptions &get_perceptions() const

Provides a constant reference to the underlying perceptions container.

Returns:

Constant reference to the container.

struct VoxelKey

Discrete 3D voxel index used for downsampling.

Public Functions

inline bool operator==(const VoxelKey &other) const

Equality comparison.

Parameters:

other – Voxel key to compare against.

Returns:

true if all components are equal, otherwise false.

Public Members

int x

Discrete voxel index along the x axis.

int y

Discrete voxel index along the y axis.

int z

Discrete voxel index along the z axis.

struct VoxelKeyHash

Hash functor for VoxelKey.

Public Functions

inline std::size_t operator()(const VoxelKey &key) const

Computes the hash value.

Parameters:

key – Voxel key.

Returns:

Hash value for key.
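To show how these helpers fit together, a sketch of voxel binning keyed by VoxelKey; it assumes both structs are publicly accessible as nested types of PointPerceptionsOpsView, which may differ from the actual header.

    #include <cmath>
    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    using VoxelKey = PointPerceptionsOpsView::VoxelKey;
    using VoxelKeyHash = PointPerceptionsOpsView::VoxelKeyHash;

    // Group point indices by the voxel they fall into.
    std::unordered_map<VoxelKey, std::vector<std::size_t>, VoxelKeyHash>
    bin_points(const pcl::PointCloud<pcl::PointXYZ> &cloud, double resolution)
    {
      std::unordered_map<VoxelKey, std::vector<std::size_t>, VoxelKeyHash> voxels;
      for (std::size_t i = 0; i < cloud.size(); ++i) {
        const auto &p = cloud[i];
        VoxelKey key{static_cast<int>(std::floor(p.x / resolution)),
                     static_cast<int>(std::floor(p.y / resolution)),
                     static_cast<int>(std::floor(p.z / resolution))};
        voxels[key].push_back(i);   // equality uses VoxelKey::operator==, hashing uses VoxelKeyHash
      }
      return voxels;
    }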