+--------------------------------------+-------------------------------------------------------------------------------+---------------------------------------------------------------------+----------------------------------------------------------+
| mod_name                             | op_type                                                                       | abnormal_info                                                       | advice                                                   |
|--------------------------------------+-------------------------------------------------------------------------------+---------------------------------------------------------------------+----------------------------------------------------------|
| head                                 | torch.Tensor.cpu                                                              | Total data range 7183591.5 maybe too large for quantization.        | Please change model structure or limit this output range |
| head.instance_bank.anchor_quant_stub | horizon_plugin_pytorch.quantization.stubs.QuantStub                           | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.1.key_cat                | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.1.attn                   | torch.Tensor.transpose                                                        | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.1.attn                   | torch.Tensor.reshape                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.3.point_mul              | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 130925.09375 maybe too large for quantization.     | Please change model structure or limit this output range |
| head.layers.3.point_mul              | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.3                        | torch.clamp                                                                   | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.3                        | torch.Tensor.flatten                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.8.key_cat                | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.8.attn                   | torch.Tensor.transpose                                                        | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.8.attn                   | torch.Tensor.reshape                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.10.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 127090.46875 maybe too large for quantization.     | Please change model structure or limit this output range |
| head.layers.10.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.10                       | torch.clamp                                                                   | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.10                       | torch.Tensor.flatten                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.15.key_cat               | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.15.attn                  | torch.Tensor.transpose                                                        | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.15.attn                  | torch.Tensor.reshape                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.17.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 125776.015625 maybe too large for quantization.    | Please change model structure or limit this output range |
| head.layers.17.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.17                       | torch.clamp                                                                   | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.17                       | torch.Tensor.flatten                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.21.attn.matmul           | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.matmul | Data range 301.9509582519531 maybe too large for int8 quantization. | Please try qint16 quantization.                          |
| head.layers.22.key_cat               | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.22.attn                  | torch.Tensor.transpose                                                        | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.22.attn                  | torch.Tensor.reshape                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.24.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 124370.4921875 maybe too large for quantization.   | Please change model structure or limit this output range |
| head.layers.24.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.24                       | torch.clamp                                                                   | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.24                       | torch.Tensor.flatten                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.29.key_cat               | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.29.attn                  | torch.Tensor.transpose                                                        | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.29.attn                  | torch.Tensor.reshape                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.31.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 125794.234375 maybe too large for quantization.    | Please change model structure or limit this output range |
| head.layers.31.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.31                       | torch.clamp                                                                   | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.31                       | torch.Tensor.flatten                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.35.attn.matmul           | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.matmul | Data range 304.5721435546875 maybe too large for int8 quantization. | Please try qint16 quantization.                          |
| head.layers.35.attn.matmul           | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.matmul | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.36.key_cat               | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.36.attn                  | torch.Tensor.transpose                                                        | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.36.attn                  | torch.Tensor.reshape                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.38.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 125331.140625 maybe too large for quantization.    | Please change model structure or limit this output range |
| head.layers.38.point_mul             | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.38                       | torch.clamp                                                                   | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
| head.layers.38                       | torch.Tensor.flatten                                                          | Current scale does not cover the data range                         | Please check whether fake quant enabled.                 |
+--------------------------------------+-------------------------------------------------------------------------------+---------------------------------------------------------------------+----------------------------------------------------------+
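The recurring warnings above follow from int8's limited dynamic range. A minimal back-of-the-envelope sketch (plain Python arithmetic, not horizon_plugin_pytorch's own API, and assuming the reported "data range" is used as a symmetric half-range) shows why the `head` output range of ~7.2e6 is flagged as too large for quantization, and why the checker suggests qint16 for the `attn.matmul` outputs with ranges around 302:

```python
def quant_step(data_range: float, bits: int) -> float:
    """Step size of symmetric quantization covering [-data_range, data_range].

    With b bits, the representable integer levels span
    [-(2**(b-1) - 1), 2**(b-1) - 1], so each quantized level
    covers data_range / (2**(b-1) - 1) in float units.
    """
    qmax = 2 ** (bits - 1) - 1
    return data_range / qmax

# 'head' output range reported in the table: each int8 level would
# cover ~56564 float units, so nearly all resolution is lost --
# hence "Please change model structure or limit this output range".
print(f"int8 step for range 7183591.5: {quant_step(7183591.5, 8):.1f}")

# attn.matmul range (~302): int8 step is ~2.38, int16 step is ~0.0092,
# which is why the checker recommends trying qint16 there instead.
print(f"int8  step for range 301.95: {quant_step(301.95, 8):.4f}")
print(f"int16 step for range 301.95: {quant_step(301.95, 16):.4f}")
```

The "Current scale does not cover the data range" rows are a separate symptom: the observer's learned scale is stale relative to the activations seen at check time, which is why the tool asks whether fake quant was actually enabled during calibration.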