# Executing the Script

In HAT, the components that users can use and modify directly are mainly the tools and the configs. The tools are the core functional modules, covering training, validation, and visualization, while the configs hold the options and parameters that can be configured when running those modules.

This tutorial covers the core functions provided by the tools, as well as the development conventions and usage of the configs.

In most cases, executing a tool requires a config as input; the exceptions are a few tools for dataset handling or single-image visualization. The general execution paradigm can therefore be summarized as follows:

```shell
python3 tools/${TOOLS} --config configs/${CONFIGS}
```
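The paradigm above can be sketched programmatically as well. The snippet below is illustrative only: the tool and config names are hypothetical placeholders, not files guaranteed to exist in the repository.

```python
import subprocess  # only needed if you actually launch the command

def build_command(tool, config):
    """Compose `python3 tools/<tool> --config configs/<config>` as an argv list."""
    return ["python3", f"tools/{tool}", "--config", f"configs/{config}"]

# Hypothetical example: launch training with a placeholder config.
cmd = build_command("train.py", "example_config.py")
print(" ".join(cmd))  # python3 tools/train.py --config configs/example_config.py
# subprocess.run(cmd, check=True)  # uncomment to actually launch
```

Building the argv as a list (rather than one shell string) avoids quoting issues when paths contain spaces.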

Here we mainly introduce the core functions and external interfaces of the tools.

## Tools

The `tools` directory currently contains several Python scripts, each providing a different function.

`train.py` is the training tool, with the following major parameters:

| Parameter | Description |
| --- | --- |
| `--stage {float, calibration, qat}` | Training/prediction stage to run. |
| `--config CONFIG, -c CONFIG` | Path to the config file. |
| `--device-ids DEVICE_IDS, -ids DEVICE_IDS` | List of GPUs to run on. |
| `--dist-url DIST_URL` | Server address for multi-machine runs; `auto` by default. |
| `--launcher {torch}` | Launch mode for multi-machine runs. |
| `--pipeline-test` | Whether to run the pipeline test. |
| `--opts` | Modify config options from the command line. |
| `--opts-overwrite` | Whether to allow `--opts` to overwrite existing config values. |
| `--level` | Logging level for ranks other than rank 0. |
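To make the `--opts` mechanism concrete, here is a minimal sketch of how alternating key/value pairs could override nested config entries via dotted keys. This mirrors the documented behavior in spirit only; it is not HAT's actual implementation, and the config keys shown are hypothetical.

```python
def apply_opts(config, opts):
    """Apply alternating key/value override pairs to a nested config dict.

    Dotted keys such as "solver.lr" descend into nested dicts; values
    stay as strings here (a real tool would also parse types).
    """
    for key, value in zip(opts[::2], opts[1::2]):
        node = config
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return config

# Hypothetical config and overrides, as if passed via `--opts solver.lr 0.01 epochs 20`
cfg = {"solver": {"lr": 0.1}, "epochs": 10}
apply_opts(cfg, ["solver.lr", "0.01", "epochs", "20"])
print(cfg)  # {'solver': {'lr': '0.01'}, 'epochs': '20'}
```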

`predict.py` is the prediction tool, with the following major parameters:

| Parameter | Description |
| --- | --- |
| `--stage {float, calibration, qat, int_infer}` | Training/prediction stage to run. |
| `--config CONFIG, -c CONFIG` | Path to the config file. |
| `--device-ids DEVICE_IDS, -ids DEVICE_IDS` | List of GPUs to run on. |
| `--dist-url DIST_URL` | Server address for multi-machine runs; `auto` by default. |
| `--backend` | Communication backend for multiple nodes or GPUs. |
| `--launcher {torch}` | Launch mode for multi-machine runs. |
| `--ckpt` | Checkpoint file of the model to predict with. |
| `--pipeline-test` | Whether to run the pipeline test. |
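Note that `train.py` and `predict.py` accept different `--stage` choices: only `predict.py` supports `int_infer`. A tiny dispatch table, with values copied from the parameter tables above, makes the difference explicit (illustration only, not HAT code):

```python
# Valid --stage choices per tool, taken from the documented parameter tables.
STAGES = {
    "train.py": {"float", "calibration", "qat"},
    "predict.py": {"float", "calibration", "qat", "int_infer"},
}

def check_stage(tool, stage):
    """Return True if `stage` is a valid --stage choice for `tool`."""
    return stage in STAGES.get(tool, set())

print(check_stage("predict.py", "int_infer"))  # True
print(check_stage("train.py", "int_infer"))    # False
```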

`model_checker.py` is a tool for checking whether a model can be executed on the BPU.

| Parameter | Description |
| --- | --- |
| `--config CONFIG, -c CONFIG` | Path to the config file. |

`validation_hbir.py` is an accuracy validation tool; it reports fixed-point accuracy fully aligned with on-board results. Its major parameters:

| Parameter | Description |
| --- | --- |
| `--config CONFIG, -c CONFIG` | Path to the config file. |
| `--stage {align_bpu}` | Prediction stage to run. |

`calops.py` calculates network OPs, with the following major parameters:

| Parameter | Description |
| --- | --- |
| `--config CONFIG, -c CONFIG` | Path to the config file. |
| `--input-shape` | Input shape. |
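The accepted format of the `--input-shape` value is not specified above; as one plausible convention, a comma-separated string could be parsed into an integer tuple like this (the format itself is an assumption, not taken from HAT's documentation):

```python
def parse_input_shape(spec):
    """Parse a comma-separated shape string, e.g. "1,3,224,224", into a tuple.

    The comma-separated convention is hypothetical; check the tool's
    --help output for the format it actually accepts.
    """
    return tuple(int(dim) for dim in spec.split(","))

print(parse_input_shape("1,3,224,224"))  # (1, 3, 224, 224)
```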

`compile_perf_hbir.py` is the compilation and performance tool, with the following major parameters:

| Parameter | Description |
| --- | --- |
| `--config CONFIG, -c CONFIG` | Directory of the config file. |
| `--opt {0,1,2,3}` | Compile-time optimization level. |
| `--jobs JOBS` | Number of threads used for compilation. |

`infer_hbir.py` performs single-image prediction, with the following major parameters:

| Parameter | Description |
| --- | --- |
| `--config CONFIG, -c CONFIG` | Path to the config file. |
| `--model-inputs` | The specified model inputs. |
| `--save-path` | Path where visualization results are saved. |

`create_data.py` pre-processes the Kitti3D LiDAR dataset, with the following major parameters:

| Parameter | Description |
| --- | --- |
| `--dataset` | Name of the dataset. |
| `--root-dir` | Path to the dataset. |

`export_onnx.py` exports the model to ONNX (for visualization only; inference is not supported), with the following major parameters:

| Parameter | Description |
| --- | --- |
| `--config CONFIG, -c CONFIG` | Path to the config file. |

The `datasets` directory holds dataset-related packaging and visualization tools.
