Create a synchronous or asynchronous inference task based on the input parameters. For an asynchronous inference task, the caller can use the returned taskHandle across functions and threads.
Parameters:
- taskHandle: Task handle pointer.
- output: Output of the inference task.
- input: Input of the inference task.
- dnnHandle: DNN handle pointer.

Return: 0 if the API executes successfully; otherwise the execution fails.

Notes:
- If taskHandle is set to nullptr, a synchronous task is created, and the interface returns only after the task completes.
- If *taskHandle is set to nullptr, an asynchronous task is created; the taskHandle returned by the interface can be used for subsequent blocking or callbacks.
- If *taskHandle is not null and points to a previously created but unsubmitted task, a new task is created and appended to it.
- Up to 32 coexisting model tasks are supported.
Create a ROI-based synchronous or asynchronous inference task based on the input parameters. For an asynchronous task, the caller can use the returned taskHandle across functions and threads.
Parameters:
- taskHandle: Task handle pointer.
- output: Output of the inference task.
- input: Input of the inference task.
- rois: Info of the ROI boxes.
- roiCount: Number of ROI boxes.
- dnnHandle: DNN handle pointer.

Return: 0 if the API executes successfully; otherwise the execution fails.

Concept Description:
- input_count: number of input branches of the model.
- output_count: number of output branches of the model.
- resizer_count: number of model input branches whose input source is the resizer (≤ input_count); each resizer input source must correspond to one ROI.
- roiCount: number of ROI boxes; its value must be an integer multiple of resizer_count.
- data_batch: number of data batches the model needs to infer, equal to roiCount / resizer_count.
- input: input of the inference task; the number of input tensors should be input_count * data_batch.
- output: the number of inference task outputs is consistent with output_count, and the memory required for each output is data_batch times the memory required by the corresponding tensor of the model.

Input/Output Description:
Taking a more complex multi-input model as an example, suppose the model has 3 input branches (2 resizer inputs and 1 ddr input) and 1 output branch. The model needs to process 3 batches of data with a total of 6 rois (i.e., each batch of data has 2 rois). Then the following information is available:
- input_count = 3
- output_count = 1
- resizer_count = 2
- roiCount = 6
- data_batch = roiCount / resizer_count = 3
- input = input_count * data_batch = 9
- output = output_count = 1

Additionally, suppose the static information for the model's inputs/outputs is as follows:
Then, the dynamic information during model inference would be:
input: 9 input tensors (3 batches × 3 input branches), with each resizer input tensor paired with one ROI.
output: 1 output tensor, whose memory is data_batch (3) times that of the model's output tensor.
Interface Limitation Description:
- taskHandle should be set to nullptr in advance, unless appending tasks to a specified taskHandle.
- stride must be a multiple of 32.
- An roi must not exceed the boundaries of the input image; this limit will be relaxed in subsequent versions.