Class NeuralNetwork

Inheritance Relationships

Base Type

  • public dai::DeviceNodeCRTP<DeviceNode, NeuralNetwork, NeuralNetworkProperties>

Class Documentation

class NeuralNetwork : public dai::DeviceNodeCRTP<DeviceNode, NeuralNetwork, NeuralNetworkProperties>

NeuralNetwork node. Runs neural inference on input data.

Public Functions

~NeuralNetwork() override
std::shared_ptr<NeuralNetwork> build(Node::Output &input, const NNArchive &nnArchive)

Build NeuralNetwork node. Connects the given output to this node’s input and calls setNNArchive() with the provided NNArchive.

Parameters:
  • input – Output to link

  • nnArchive – Neural network archive

Returns:

Shared pointer to NeuralNetwork node
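
A minimal sketch of this overload, assuming a DepthAI v3-style pipeline; the archive path, the Camera build call, and requestOutput()'s exact signature are assumptions for illustration:

#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;

    // Camera source; requestOutput() is assumed to return a Node::Output*
    // for the requested frame size.
    auto camera = pipeline.create<dai::node::Camera>()->build();
    auto* camOut = camera->requestOutput(std::make_pair(640, 480));

    // "model.nnarchive" is a placeholder path to an archive on disk.
    dai::NNArchive nnArchive("model.nnarchive");

    // build() links camOut to the node's input and calls setNNArchive().
    auto nn = pipeline.create<dai::node::NeuralNetwork>()->build(*camOut, nnArchive);

    auto queue = nn->out.createOutputQueue();
    pipeline.start();
    auto result = queue->get<dai::NNData>();  // first inference result
    return 0;
}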

std::shared_ptr<NeuralNetwork> build(const std::shared_ptr<Camera> &input, NNModelDescription modelDesc, std::optional<float> fps = std::nullopt)
std::shared_ptr<NeuralNetwork> build(const std::shared_ptr<Camera> &input, NNArchive nnArchive, std::optional<float> fps = std::nullopt)
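
The Camera-based overloads link the camera output internally. A sketch of the NNModelDescription variant; the model name is a placeholder, and the field layout of NNModelDescription is an assumption:

#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;
    auto camera = pipeline.create<dai::node::Camera>()->build();

    dai::NNModelDescription modelDesc;
    modelDesc.model = "yolov6-nano";  // placeholder zoo model name

    // The optional fps argument caps the rate of the camera output
    // feeding the network.
    auto nn = pipeline.create<dai::node::NeuralNetwork>()->build(camera, modelDesc, 15.0f);

    pipeline.start();
    return 0;
}
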
std::optional<std::reference_wrapper<const NNArchive>> getNNArchive() const

Get the archive owned by this Node.

Returns:

Constant reference to this Node’s archive

void setNNArchive(const NNArchive &nnArchive)

Set NNArchive for this Node. If the archive’s type is SUPERBLOB, the default number of shaves is used.

Parameters:

nnArchive – NNArchive to set

void setNNArchive(const NNArchive &nnArchive, int numShaves)

Set NNArchive for this Node; throws if the archive’s type is not SUPERBLOB.

Parameters:
  • nnArchive – NNArchive to set

  • numShaves – Number of shaves to use
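
A sketch contrasting the two setNNArchive() overloads; the path and the shave count are placeholders, and a sensible count depends on the device:

#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;
    auto nn = pipeline.create<dai::node::NeuralNetwork>();

    // Placeholder path; the numShaves overload requires a SUPERBLOB-type
    // archive and throws otherwise.
    dai::NNArchive archive("model.superblob.nnarchive");

    nn->setNNArchive(archive);     // SUPERBLOB: default number of shaves
    nn->setNNArchive(archive, 6);  // explicit shave count (6 is arbitrary)
    return 0;
}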

void setFromModelZoo(NNModelDescription description, bool useCached = true)

Download model from zoo and set it for this Node.

Parameters:
  • description – Model description to download

  • useCached – Use cached model if available
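
A sketch, reusing the placeholder model name from above; passing useCached = false would force a fresh download:

#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;
    auto nn = pipeline.create<dai::node::NeuralNetwork>();

    dai::NNModelDescription desc;
    desc.model = "yolov6-nano";  // placeholder zoo model name

    // Downloads the model, or reuses a cached copy, then applies it.
    nn->setFromModelZoo(desc, /*useCached=*/true);
    return 0;
}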

void setBlobPath(const std::filesystem::path &path)

Load network blob into assets and use once pipeline is started.

Throws:

Error – if file doesn’t exist or isn’t a valid network blob.

Parameters:

path – Path to network blob
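
A sketch with a placeholder blob path; the Error is raised at this call if the file is missing or not a valid blob:

#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;
    auto nn = pipeline.create<dai::node::NeuralNetwork>();

    // Placeholder path; throws if it doesn't exist or isn't a valid blob.
    nn->setBlobPath("models/mobilenet-ssd.blob");
    return 0;
}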

void setBlob(OpenVINO::Blob blob)

Load network blob into assets and use once pipeline is started.

Parameters:

blob – Network blob

void setBlob(const std::filesystem::path &path)

Same functionality as setBlobPath(): load network blob into assets and use once the pipeline is started.

Throws:

Error – if file doesn’t exist or isn’t a valid network blob.

Parameters:

path – Path to network blob

void setModelPath(const std::filesystem::path &modelPath)

Load network .xml and .bin files into assets.

Parameters:

modelPath – Path to the neural network model file.

void setNumPoolFrames(int numFrames)

Specifies how many frames will be available in the pool

Parameters:

numFrames – How many frames the pool will have

void setNumInferenceThreads(int numThreads)

Specifies how many threads the node should use to run the network.

Parameters:

numThreads – Number of threads to dedicate to this node

void setNumNCEPerInferenceThread(int numNCEPerThread)

Specifies how many Neural Compute Engines a single thread should use for inference

Parameters:

numNCEPerThread – Number of NCE per thread

void setNumShavesPerInferenceThread(int numShavesPerThread)

Specifies how many Shaves a single thread should use for inference

Parameters:

numShavesPerThread – Number of shaves per thread
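
These three setters partition device resources per inference thread. A sketch with arbitrary values; useful settings depend on the model and device:

#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;
    auto nn = pipeline.create<dai::node::NeuralNetwork>();

    nn->setNumInferenceThreads(2);          // 0 = AUTO, otherwise 1 or 2
    nn->setNumNCEPerInferenceThread(1);     // NCEs per thread (arbitrary)
    nn->setNumShavesPerInferenceThread(6);  // Shaves per thread (arbitrary)
    return 0;
}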

void setBackend(std::string backend)

Specifies backend to use

Parameters:

backend – String specifying backend to use

void setBackendProperties(std::map<std::string, std::string> properties)

Set backend properties

Parameters:

properties – Backend properties map

int getNumInferenceThreads()

How many inference threads will be used to run the network

Returns:

Number of threads: 0, 1, or 2. Zero means AUTO.

inline DeviceNodeCRTP()
inline DeviceNodeCRTP(const std::shared_ptr<Device> &device)
inline DeviceNodeCRTP(std::unique_ptr<Properties> props)
inline DeviceNodeCRTP(std::unique_ptr<Properties> props, bool confMode)
inline DeviceNodeCRTP(const std::shared_ptr<Device> &device, std::unique_ptr<Properties> props, bool confMode)

Public Members

Input input = {*this, {"in", DEFAULT_GROUP, DEFAULT_BLOCKING, DEFAULT_QUEUE_SIZE, {{{DatatypeEnum::Buffer, true}}}, true}}

Input message with data to be inferred upon

Output out = {*this, {"out", DEFAULT_GROUP, {{{DatatypeEnum::NNData, false}}}}}

Outputs NNData message that carries inference results

Output passthrough = {*this, {"passthrough", DEFAULT_GROUP, {{{DatatypeEnum::Buffer, true}}}}}

Passthrough message on which the inference was performed.

Suitable when the input queue is set to non-blocking behavior.
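
A sketch of consuming both outputs together; pairing each NNData with its passthrough frame is useful for overlaying results, and the queue API shown is the assumed v3 style:

// Assumes nn and pipeline were set up as in the earlier sketches.
auto resultQueue = nn->out.createOutputQueue();
auto frameQueue = nn->passthrough.createOutputQueue();

pipeline.start();
while(pipeline.isRunning()) {
    auto result = resultQueue->get<dai::NNData>();  // inference output
    auto frame = frameQueue->get<dai::ImgFrame>();  // frame it ran on
    // ... overlay result onto frame ...
}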

InputMap inputs = {*this, "inputs", {DEFAULT_NAME, DEFAULT_GROUP, false, 1, {{{DatatypeEnum::Buffer, true}}}, true}}

Inputs mapped to network inputs. Useful for inferring from separate data sources. By default, each input is non-blocking with queue size 1 and waits for messages.
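
A sketch of addressing named network inputs through the map; the tensor names are placeholders that must match the loaded model:

// Assumes sourceA and sourceB are existing Node::Output producers whose
// messages match the model's input tensors; "image" and "mask" are
// placeholder names from a hypothetical two-input model.
sourceA->link(nn->inputs["image"]);
sourceB->link(nn->inputs["mask"]);
// Each mapped input defaults to non-blocking with queue size 1.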

OutputMap passthroughs = {*this, "passthroughs", {"", DEFAULT_GROUP, {{{DatatypeEnum::Buffer, true}}}}}

Passthroughs which correspond to specified input

Public Static Attributes

static constexpr const char *NAME = "NeuralNetwork"