In HAT, the functionality that users can use and modify directly consists mainly of tools and configs. The tools are the core functional modules, covering training, validation, and visualization, while the configs contain the options and parameters that can be configured when running those modules.
This tutorial walks through the core functions included in the tools, as well as the development specifications and usage of configs.
In most cases, executing a tool requires a config file as input, except for some tools related to datasets or single-image visualization. Therefore, the general execution paradigm can be summarized as follows:
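As a sketch of this paradigm (the tool name, task, and config path below are placeholders, not actual files shipped with HAT):

```shell
# General paradigm: invoke a tool script and pass it a config file.
# <tool> and the config path are placeholders for illustration only.
python3 tools/<tool>.py --config configs/<task>/<model>_config.py
```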
Here we mainly introduce the core functions and external interfaces of the tools.
The tools currently consist of several Python scripts, each implementing a different function.
train.py is a training tool with the following major parameters:
| Parameter | Description |
|---|---|
--stage {float, calibration, qat} | Different training and prediction stages. |
--config CONFIG, -c CONFIG | Path to the config file. |
--device-ids DEVICE_IDS, -ids DEVICE_IDS | List of running GPUs. |
--dist-url DIST_URL | Server address for multi-machine operations, auto by default. |
--launcher {torch} | Launch mode for multi-machine operations. |
--pipeline-test | Whether to run the pipeline test. |
--opts | Modify config options using the command-line. |
--opts-overwrite | Whether to allow overwriting config options via --opts. |
--level | Logging level for ranks other than rank 0. |
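For example, a float-stage training run on four GPUs might look like the following; the config path is hypothetical, and any key passed to --opts must exist in the config being used:

```shell
# Train the float model on GPUs 0-3 (config path is a placeholder).
python3 tools/train.py \
  --stage float \
  --config configs/detection/example_config.py \
  --device-ids 0,1,2,3

# Override a config option from the command line; the key shown
# (solver.max_epochs) is hypothetical and depends on the config.
python3 tools/train.py \
  --stage qat \
  --config configs/detection/example_config.py \
  --opts solver.max_epochs 10 \
  --opts-overwrite
```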
predict.py is a predicting tool with the following major parameters:
| Parameter | Description |
|---|---|
--stage {float, calibration, qat, int_infer} | Different training and prediction stages. |
--config CONFIG, -c CONFIG | Path to the config file. |
--device-ids DEVICE_IDS, -ids DEVICE_IDS | List of running GPUs. |
--dist-url DIST_URL | Server address for multi-machine operations, auto by default. |
--backend | Communication backend for multiple nodes or GPUs. |
--launcher {torch} | Launch mode for multi-machine operations. |
--ckpt | Checkpoint file for the prediction model. |
--pipeline-test | Whether to run the pipeline test. |
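A typical invocation that evaluates a trained checkpoint might look like this; both the config and checkpoint paths are placeholders:

```shell
# Predict with a qat-stage checkpoint on GPU 0 (paths are hypothetical).
python3 tools/predict.py \
  --stage qat \
  --config configs/detection/example_config.py \
  --ckpt ckpts/qat-checkpoint-last.pth.tar \
  --device-ids 0
```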
model_checker.py is a checker tool that checks whether the model is executable on the BPU.
| Parameter | Description |
|---|---|
--config CONFIG, -c CONFIG | Path to the config file. |
validation_hbir.py is an accuracy validation tool that provides fixed-point accuracy results fully aligned with on-board execution, with the following major parameters:
| Parameter | Description |
|---|---|
--config CONFIG, -c CONFIG | Path to the config file. |
--stage {align_bpu} | Different prediction stages. |
calops.py is the network ops calculation tool, with the following major parameters:
| Parameter | Description |
|---|---|
--config CONFIG, -c CONFIG | Path to the config file. |
--input-shape | Input shape. |
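For instance, assuming --input-shape accepts a comma-separated NCHW shape (the exact format may differ):

```shell
# Count network OPs for a 1x3x512x512 input (config path is a placeholder).
python3 tools/calops.py \
  --config configs/detection/example_config.py \
  --input-shape 1,3,512,512
```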
compile_perf_hbir.py is the compilation and performance tool, with the following major parameters:
| Parameter | Description |
|---|---|
--config CONFIG, -c CONFIG | Path to the config file. |
--opt {0,1,2,3} | Compilation-time optimization options. |
--jobs JOBS | Number of threads for compilation. |
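A sketch of a compilation run at the highest optimization level (the config path is a placeholder):

```shell
# Compile with optimization level 3 using 4 compilation threads.
python3 tools/compile_perf_hbir.py \
  --config configs/detection/example_config.py \
  --opt 3 \
  --jobs 4
```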
infer_hbir.py is used to perform single-image prediction, with the following major parameters:
| Parameter | Description |
|---|---|
--config CONFIG, -c CONFIG | Path to the config file. |
--model-inputs | The specified model inputs. |
--save-path | The path where the visualization results are saved. |
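A sketch of a single-image inference run; the "name:path" form passed to --model-inputs is an assumption, and the paths are placeholders:

```shell
# Run single-image inference and save the visualization result.
# The img:demo/example.jpg input specification is illustrative only.
python3 tools/infer_hbir.py \
  --config configs/detection/example_config.py \
  --model-inputs img:demo/example.jpg \
  --save-path ./vis_results/
```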
create_data.py is used to pre-process the Kitti3D LiDAR dataset, with the following major parameters:
| Parameter | Description |
|---|---|
--dataset | Name of the dataset. |
--root-dir | Path to the dataset. |
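For example (the dataset name accepted by --dataset and the data directory below are assumptions):

```shell
# Pre-process the Kitti3D LiDAR data; name and path are illustrative.
python3 tools/create_data.py \
  --dataset kitti3d \
  --root-dir ./data/kitti3d
```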
export_onnx.py is used to export the model to ONNX (for visualization only; inference is not supported), with the following major parameters:
| Parameter | Description |
|---|---|
--config CONFIG, -c CONFIG | Path to the config file. |
The datasets directory contains dataset-related packaging and visualization tools.