Class PointPerceptionsOpsView
Defined in File PointPerception.hpp
Nested Relationships
Nested Types
Class Documentation
Provides efficient, non-destructive, chainable operations over a set of point-based perceptions.
This view enables filtering, downsampling, fusion, and dimensional collapsing across multiple point clouds without duplicating the underlying perceptions. Most operations work lazily by keeping index-based selections and transformation options, and only materialize a new point cloud when required (for example in as_points()).
Public Functions
Constructs a view over an external container of perceptions.
- Parameters:
perceptions – Constant reference to the source container.
Constructs a view from a single PointPerception instance.
Creates an internal container owning the given perception and initializes the index set for all its points.
- Parameters:
perception – The PointPerception to wrap in the view.
Constructs a view taking ownership of the container.
- Parameters:
perceptions – Rvalue container of perceptions to be owned by the view.
Filters all point clouds by axis-aligned bounds.
When the view has no target frame configured (that is, fuse() has not been called), this method always performs an eager filtering step in the original frame of each sensor, updating the internal index sets in place.
When a target frame has been configured via fuse(), the behaviour depends on lazy_post_fuse:
If lazy_post_fuse is true (default), the bounds are stored as a post-fuse filter in the target frame and are applied lazily during materialization (for example, in as_points()), without modifying the internal indices.
If lazy_post_fuse is false, the method applies the transform to the target frame immediately and performs an eager filtering step in that frame, updating the internal indices accordingly.
Components set to NaN in min_bounds or max_bounds leave the corresponding axis unbounded.
- Parameters:
min_bounds – Minimum [x, y, z] values (use NaN to disable per axis).
max_bounds – Maximum [x, y, z] values (use NaN to disable per axis).
lazy_post_fuse – If true and a target frame is set, configure a lazy post-fuse filter in the target frame; if false, always filter eagerly (updating indices) in the current frame (sensor or target).
- Returns:
Reference to *this to allow chaining.
Downsamples each perception using a voxel grid.
- Parameters:
resolution – Voxel size in meters.
- Returns:
Reference to *this to allow chaining.
Collapses dimensions to fixed values (for example, projection onto a plane).
Components set to NaN in collapse_dims keep the original values.
The behaviour depends on lazy:
If lazy is true (default), the collapse configuration is stored in the view and applied lazily when materializing point clouds (for example, in as_points()), without modifying the underlying perception data.
If lazy is false and the view owns its internal container, the collapse is applied eagerly by updating the coordinates of all stored points, so subsequent operations (filters, fusion, etc.) observe the collapsed geometry. On non-owning views, the eager mode is ignored to avoid modifying external data.
- Parameters:
collapse_dims – Fixed values for each axis (use NaN to preserve original values).
lazy – If true, configure the collapse lazily for output only; if false and the view is owning, apply the collapse immediately to the internal point data.
- Returns:
Reference to *this to allow chaining.
Retrieves all selected points across perceptions as a single concatenated cloud.
- Returns:
Concatenated point cloud.
Retrieves the filtered point cloud for a specific perception.
- Parameters:
idx – Index of the target perception in the underlying container.
- Returns:
Const reference to the filtered point cloud.
Configures fusion of all perceptions into a common frame.
This method does not immediately build a fused point cloud. Instead, it stores the target frame and the required transforms so that subsequent operations (for example, filter()) and final materialization (as_points()) work in target_frame without duplicating the underlying data.
- Parameters:
target_frame – Frame ID to which all clouds are conceptually transformed.
- Returns:
Reference to *this to allow chaining.
Adds a new perception to the current view.
This method extends the underlying set of perceptions managed by the view. It does not create a new PointPerceptionsOpsView instance; instead, it updates the current view in place and returns a reference to *this to allow method chaining.
The newly added perception becomes part of subsequent operations such as filtering, fusion, collapsing, and final materialization (as_points()).
- Parameters:
points – Point cloud to include.
frame – Frame ID associated with points.
stamp – Timestamp associated with points.
- Returns:
Reference to *this to allow chaining.
Provides a constant reference to the underlying perceptions container.
- Returns:
Constant reference to the container.
Discrete 3D voxel index used for downsampling.
Public Functions
Equality comparison.
- Parameters:
other – Voxel key to compare against.
- Returns:
true if all components are equal, otherwise false.
Public Members
Discrete coordinates (voxel indices).
Hash functor for VoxelKey.
Public Functions
Computes the hash value.
- Parameters:
key – Voxel key.
- Returns:
Hash value for key.