+----------------------------+-------------------------------------------------------------------------------+---------------------------------------------------------------------+----------------------------------------------------------+
| mod_name                   | op_type                                                                       | abnormal_info                                                       | advice                                                   |
|----------------------------+-------------------------------------------------------------------------------+---------------------------------------------------------------------+----------------------------------------------------------|
| head                       | torch.Tensor.cpu                                                              | Total data range 7183568.0 maybe too large for quantization.        | Please change model structure or limit this output range |
| head.layers.3.point_mul    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 70310.390625 maybe too large for quantization.     | Please change model structure or limit this output range |
| head.layers.10.point_mul   | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 66436.9296875 maybe too large for quantization.    | Please change model structure or limit this output range |
| head.layers.17.point_mul   | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul    | Total data range 65842.6171875 maybe too large for quantization.    | Please change model structure or limit this output range |
| head.layers.28.attn.matmul | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.matmul | Data range 258.7089538574219 maybe too large for int8 quantization. | Please try qint16 quantization.                          |
| head.layers.35.attn.matmul | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.matmul | Data range 290.7585144042969 maybe too large for int8 quantization. | Please try qint16 quantization.                          |
+----------------------------+-------------------------------------------------------------------------------+---------------------------------------------------------------------+----------------------------------------------------------+
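To make the checker's advice concrete, here is a minimal sketch in plain Python of why the flagged ranges matter. It assumes a symmetric uniform quantizer whose step size (scale) is `data_range / (2**bits - 1)`; the exact observer and scheme horizon_plugin_pytorch uses may differ, and `quant_scale` is an illustrative helper, not part of the plugin's API.

```python
def quant_scale(data_range: float, bits: int) -> float:
    """Step size of a uniform quantizer whose codes span `data_range`.

    Larger step size means coarser rounding: every value inside one
    step-wide bucket collapses to the same quantized code.
    """
    return data_range / (2 ** bits - 1)

# head's output, range 7183568.0 (from the table above): the int8 step
# is ~28170.9, so int8 cannot resolve differences smaller than ~28k.
print(quant_scale(7183568.0, 8))    # ~28170.9
print(quant_scale(7183568.0, 16))   # ~109.6 -- still coarse; hence the
                                    # advice to limit this output range

# head.layers.28.attn.matmul, range 258.71: borderline for int8
# (step ~1.01) but fine for qint16 (step ~0.0039), which is why the
# tool suggests "Please try qint16 quantization" for the matmuls.
print(quant_scale(258.7089538574219, 8))    # ~1.0145
print(quant_scale(258.7089538574219, 16))   # ~0.0039
```

The pattern in the table follows from this arithmetic: ranges in the tens of thousands and above draw the "change model structure or limit this output range" advice, because even 16-bit quantization leaves a coarse step, while ranges of a few hundred only overflow int8's resolution and can usually be rescued by switching those ops to qint16.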