How to Turn on AMP

When AMP (Automatic Mixed Precision) is enabled, PyTorch can automatically execute some operators (such as convolution and fully connected layers) in float16 during model execution, which increases computing speed and reduces memory usage. See the official PyTorch documentation for more details.
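Under the hood, this corresponds to PyTorch's autocast context. A minimal sketch of the mechanism using the standard `torch.autocast` API (plain PyTorch, not HAT-specific code; on CPU autocast defaults to bfloat16, on CUDA to float16):

```python
import torch

# Inputs are ordinary float32 tensors.
a = torch.randn(8, 16)
b = torch.randn(16, 4)

# Inside the autocast region, eligible ops such as matmul run in
# reduced precision automatically; tensors created outside keep float32.
with torch.autocast(device_type="cpu"):
    out = a @ b

print(a.dtype)    # torch.float32 (unchanged)
print(out.dtype)  # reduced-precision dtype chosen by autocast
```

Enabling `enable_amp` in the config wraps the model's forward computation in such a region, so no per-operator changes are needed in user code.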

AMP is ready to use in HAT. Users only need to set `enable_amp` to True in the `batch_processor` field of the config file.

Note

AMP is not required during model validation; to get accurate metrics, you can turn it off by setting `enable_amp` to False in the `val_batch_processor` field.

# configs/example.py

# Use of BasicBatchProcessor
batch_processor = dict(
    type="BasicBatchProcessor",
    need_grad_update=...,
    batch_transforms=...,
    enable_amp=True,
)

# Use of MultiBatchProcessor
batch_processor = dict(
    type="MultiBatchProcessor",
    need_grad_update=...,
    batch_transforms=...,
    loss_collector=...,
    enable_amp=True,
)
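Following the same pattern, AMP can be disabled for validation as described in the note above. A sketch assuming `val_batch_processor` accepts the same fields as `batch_processor` (the exact field list is illustrative):

```python
# configs/example.py (continued)

val_batch_processor = dict(
    type="BasicBatchProcessor",
    need_grad_update=False,   # assumption: validation needs no gradient updates
    batch_transforms=...,
    enable_amp=False,         # run validation in full precision for accurate metrics
)
```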