Class SpatialDetectionNetwork
Defined in File SpatialDetectionNetwork.hpp
Inheritance Relationships
Base Type
public dai::DeviceNodeCRTP< DeviceNode, SpatialDetectionNetwork, SpatialDetectionNetworkProperties > (Template Class DeviceNodeCRTP)
Class Documentation
-
class SpatialDetectionNetwork : public dai::DeviceNodeCRTP<DeviceNode, SpatialDetectionNetwork, SpatialDetectionNetworkProperties>
SpatialDetectionNetwork node. Runs neural inference on the input image and calculates spatial location data.
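A minimal end-to-end sketch is shown below. It follows typical DepthAI pipeline examples, so the camera/stereo setup, the preview size, the model file name, and the queue-reading pattern are illustrative assumptions rather than part of this class; only the SpatialDetectionNetwork calls (setBlobPath, setConfidenceThreshold, input, inputDepth, out) are taken from this page. Label-name lookup via getClasses() is sketched after the output members further below.

```cpp
#include <cstdio>

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    // Illustrative producer nodes; the exact camera/stereo configuration is an assumption.
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto monoLeft = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto sdn = pipeline.create<dai::node::SpatialDetectionNetwork>();

    camRgb->setPreviewSize(416, 416);  // must match the network input size
    monoLeft->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoRight->setBoardSocket(dai::CameraBoardSocket::CAM_C);
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);

    sdn->setBlobPath("detector.blob");  // hypothetical model file
    sdn->setConfidenceThreshold(0.5f);

    // Frames to run inference on, plus the depth map used to compute spatial data.
    camRgb->preview.link(sdn->input);
    stereo->depth.link(sdn->inputDepth);

    auto detections = sdn->out.createOutputQueue();

    pipeline.start();
    while(pipeline.isRunning()) {
        auto msg = detections->get<dai::SpatialImgDetections>();
        for(const auto& det : msg->detections) {
            // Spatial coordinates are reported in depth units (millimeters by default).
            std::printf("label %d at z = %.0f mm\n", static_cast<int>(det.label), det.spatialCoordinates.z);
        }
    }
    return 0;
}
```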
Public Functions
-
inline SpatialDetectionNetwork(std::unique_ptr<Properties> props)
-
inline SpatialDetectionNetwork(std::unique_ptr<Properties> props, bool confMode)
-
void setNNArchive(const NNArchive &nnArchive)
Set NNArchive for this Node. If the archive’s type is SUPERBLOB, use default number of shaves.
- Parameters:
nnArchive – NNArchive to set
-
void setFromModelZoo(NNModelDescription description, bool useCached = true)
Download model from zoo and set it for this Node.
- Parameters:
description – Model description to download
useCached – Use cached model if available
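A hedged sketch of the model-zoo path; the model identifier and the way NNModelDescription is populated are assumptions to be checked against the NNModelDescription reference:

```cpp
// Sketch only: the model name and the aggregate-initialization of NNModelDescription are assumptions.
dai::NNModelDescription description{"yolov6-nano"};     // assumed: model name as the first member
sdn->setFromModelZoo(description, /*useCached=*/true);  // reuse a previously downloaded model if present
```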
-
void setNNArchive(const NNArchive &nnArchive, int numShaves)
Set NNArchive for this Node, throws if the archive’s type is not SUPERBLOB.
- Parameters:
nnArchive – NNArchive to set
numShaves – Number of shaves to use
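A short sketch of the two setNNArchive overloads, assuming NNArchive can be constructed from a path to an archive file (check the NNArchive reference for the exact constructor); the file name is a placeholder:

```cpp
dai::NNArchive archive("detector.superblob.tar.xz");  // hypothetical archive file

// Either rely on the default shave count (valid for SUPERBLOB archives)...
sdn->setNNArchive(archive);
// ...or pin the shave count explicitly (throws if the archive is not a SUPERBLOB):
// sdn->setNNArchive(archive, /*numShaves=*/6);
```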
-
void setBlobPath(const std::filesystem::path &path)
Backwards compatibility interface. Load network blob into assets and use once pipeline is started.
- Throws:
Error – if file doesn’t exist or isn’t a valid network blob.
- Parameters:
path – Path to network blob
-
void setBlob(OpenVINO::Blob blob)
Load network blob into assets and use once pipeline is started.
- Parameters:
blob – Network blob
-
void setBlob(const std::filesystem::path &path)
Same functionality as setBlobPath(). Load network blob into assets and use once pipeline is started.
- Throws:
Error – if file doesn’t exist or isn’t a valid network blob.
- Parameters:
path – Path to network blob
-
void setModelPath(const std::filesystem::path &modelPath)
Load network file into assets.
- Parameters:
modelPath – Path to the model file.
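The blob and model-path setters compose as below; the file names are placeholders:

```cpp
// Load a compiled blob by path (throws if the file is missing or not a valid network blob):
sdn->setBlobPath("detector.blob");   // hypothetical file
// setBlob("detector.blob") behaves the same; setModelPath() instead loads a model file into assets:
sdn->setModelPath("detector.onnx");  // hypothetical file
```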
-
void setNumPoolFrames(int numFrames)
Specifies how many frames will be available in the pool
- Parameters:
numFrames – How many frames the pool will have
-
void setNumInferenceThreads(int numThreads)
How many threads should the node use to run the network.
- Parameters:
numThreads – Number of threads to dedicate to this node
-
void setNumNCEPerInferenceThread(int numNCEPerThread)
How many Neural Compute Engines should a single thread use for inference
- Parameters:
numNCEPerThread – Number of NCE per thread
-
void setNumShavesPerInferenceThread(int numShavesPerThread)
How many Shaves should a single thread use for inference
- Parameters:
numShavesPerThread – Number of shaves per thread
-
void setBackend(std::string backend)
Specifies backend to use
- Parameters:
backend – String specifying backend to use
-
void setBackendProperties(std::map<std::string, std::string> properties)
Set backend properties
- Parameters:
properties – backend properties map
-
int getNumInferenceThreads()
How many inference threads will be used to run the network
- Returns:
Number of threads, 0, 1 or 2. Zero means AUTO
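The pool and inference-resource setters are typically called together before the pipeline is started; the values below are illustrative starting points, not recommendations from this reference:

```cpp
sdn->setNumPoolFrames(4);                // frames available in the pool (illustrative)
sdn->setNumInferenceThreads(2);          // 0 = AUTO, otherwise 1 or 2
sdn->setNumShavesPerInferenceThread(6);  // SHAVE cores per inference thread (illustrative)
sdn->setNumNCEPerInferenceThread(1);     // Neural Compute Engines per inference thread (illustrative)

int threads = sdn->getNumInferenceThreads();  // 0, 1 or 2; zero means AUTO

// Backend selection is platform-specific; the string below is a hypothetical placeholder.
// sdn->setBackend("example-backend");
```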
-
void setConfidenceThreshold(float thresh)
Specifies confidence threshold at which to filter the rest of the detections.
- Parameters:
thresh – Detection confidence must be greater than specified threshold to be added to the list
-
float getConfidenceThreshold() const
Retrieves threshold at which to filter the rest of the detections.
- Returns:
Detection confidence
-
void setBoundingBoxScaleFactor(float scaleFactor)
Custom interface. Specifies scale factor for detected bounding boxes.
- Parameters:
scaleFactor – Scale factor must be in the interval (0,1].
-
void setDepthLowerThreshold(uint32_t lowerThreshold)
Specifies lower threshold in depth units (millimeters by default) for depth values which will be used to calculate spatial data.
- Parameters:
lowerThreshold – LowerThreshold must be in the interval [0,upperThreshold] and less than upperThreshold.
-
void setDepthUpperThreshold(uint32_t upperThreshold)
Specifies upper threshold in depth units (millimeters by default) for depth values which will be used to calculate spatial data.
- Parameters:
upperThreshold – UpperThreshold must be in the interval (lowerThreshold,65535].
-
void setSpatialCalculationAlgorithm(dai::SpatialLocationCalculatorAlgorithm calculationAlgorithm)
Specifies spatial location calculator algorithm: Average/Min/Max
- Parameters:
calculationAlgorithm – Calculation algorithm.
-
void setSpatialCalculationStepSize(int stepSize)
Specifies spatial location calculator step size for depth calculation. Step size 1 means that every pixel is taken into calculation, step size 2 means every second pixel, etc.
- Parameters:
stepSize – Step size.
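The detection and spatial-calculation setters combine as in the sketch below; the thresholds are illustrative values only:

```cpp
sdn->setConfidenceThreshold(0.5f);     // keep detections with confidence above 0.5 (illustrative)
sdn->setBoundingBoxScaleFactor(0.5f);  // scale the depth ROI to 50% of each detected box; must be in (0, 1]
sdn->setDepthLowerThreshold(100);      // ignore depth below 100 mm (illustrative)
sdn->setDepthUpperThreshold(10000);    // ignore depth above 10 m (illustrative)
sdn->setSpatialCalculationAlgorithm(dai::SpatialLocationCalculatorAlgorithm::AVERAGE);
sdn->setSpatialCalculationStepSize(1); // take every depth pixel inside the ROI into the calculation
```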
-
std::optional<std::vector<std::string>> getClasses() const
Get classes labels.
-
virtual void buildInternal() override
Function called from within the create function to build the node. This function is useful for initialization and for setting up inputs and outputs, i.e. things that cannot be performed in the constructor.
Public Members
-
Subnode<NeuralNetwork> neuralNetwork = {*this, "neuralNetwork"}
-
Subnode<DetectionParser> detectionParser = {*this, "detectionParser"}
-
std::unique_ptr<Subnode<ImageAlign>> depthAlign
-
Input &input
Input message with data to be inferred upon. Default queue is blocking with size 5.
-
Output &outNetwork
Outputs unparsed inference results.
-
Output &passthrough
Passthrough message on which the inference was performed.
Suitable when the input queue is set to non-blocking behavior.
-
Input inputDepth = {*this, {"inputDepth", DEFAULT_GROUP, false, 4, {{{DatatypeEnum::ImgFrame, false}}}, true}}
Input message with depth data used to retrieve spatial information about detected objects. Default queue is non-blocking with size 4.
-
Input inputImg = {*this, {"inputImg", DEFAULT_GROUP, true, 2, {{{DatatypeEnum::ImgFrame, false}}}, true}}
Input message with image data used to retrieve image transformation from the detected object. Default queue is blocking with size 1.
-
Input inputDetections = {*this, {"inputDetections", DEFAULT_GROUP, true, 5, {{{DatatypeEnum::ImgDetections, false}}}, true}}
Input message with input detections object. Default queue is blocking with size 1.
-
Output out = {*this, {"out", DEFAULT_GROUP, {{{DatatypeEnum::SpatialImgDetections, false}}}}}
Outputs SpatialImgDetections message that carries parsed detection results together with spatial location data.
-
Output boundingBoxMapping = {*this, {"boundingBoxMapping", DEFAULT_GROUP, {{{DatatypeEnum::SpatialLocationCalculatorConfig, false}}}}}
Outputs mapping of detected bounding boxes relative to depth map. Suitable for displaying remapped bounding boxes on the depth frame.
-
Output passthroughDepth = {*this, {"passthroughDepth", DEFAULT_GROUP, {{{DatatypeEnum::ImgFrame, false}}}}}
Passthrough message for depth frame on which the spatial location calculation was performed. Suitable when the input queue is set to non-blocking behavior.
-
Output spatialLocationCalculatorOutput = {*this, {"spatialLocationCalculatorOutput", DEFAULT_GROUP, {{{DatatypeEnum::SpatialLocationCalculatorData, false}}}}}
Output of SpatialLocationCalculator node, which is used internally by SpatialDetectionNetwork. Suitable when extra information is required from SpatialLocationCalculator node, e.g. minimum, maximum distance.
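getClasses() pairs naturally with the SpatialImgDetections coming out of out: if the parsed network carries label strings, the numeric label of each detection can be turned into a name. A small sketch, continuing the read loop from the example near the top of this page (det is a dai::SpatialImgDetection from that loop):

```cpp
// Fetch the label strings once, before the detection loop of the earlier sketch.
auto classes = sdn->getClasses();  // std::optional<std::vector<std::string>>

// Inside the loop, resolve the numeric label of each detection:
std::string name = "unknown";
if(classes && static_cast<size_t>(det.label) < classes->size()) {
    name = (*classes)[det.label];
}
std::printf("%s at (%.0f, %.0f, %.0f) mm, confidence %.2f\n", name.c_str(),
            det.spatialCoordinates.x, det.spatialCoordinates.y,
            det.spatialCoordinates.z, det.confidence);
```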
Public Static Attributes
-
static constexpr const char *NAME = "SpatialDetectionNetwork"
Protected Functions
-
inline DeviceNodeCRTP()
-
inline DeviceNodeCRTP(std::unique_ptr<Properties> props)
-
inline DeviceNodeCRTP(std::unique_ptr<Properties> props, bool confMode)
-
inline SpatialDetectionNetwork(std::unique_ptr<Properties> props)