+---------+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------+---------------+-----------+--------------+---------------+--------------+------------------+----------------------------------+
| Index   | Op Name                                                                     | Mod Name                                          | Attr                | Dtype         | Scale     | Min          | Max           | Mean         | Var              | Shape                            |
|---------+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------+---------------+-----------+--------------+---------------+--------------+------------------+----------------------------------|
| 0       | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | backbone.quant                                    | input               | torch.float32 |           | -0.8671875   | 0.8359375     | -0.1171943   | 0.0536020        | torch.Size([12, 3, 256, 704])    |
| 0       | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | backbone.quant                                    | output              | qint8         | 0.0078125 | -0.8671875   | 0.8359375     | -0.1171943   | 0.0536020        | torch.Size([12, 3, 256, 704])    |
| 1       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.0.0                          | input               | qint8         | 0.0078125 | -0.8671875   | 0.8359375     | -0.1171943   | 0.0536020        | torch.Size([12, 3, 256, 704])    |
| 1       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.0.0                          | weight              | torch.float32 |           | -5.7051187   | 6.2699337     | -0.0242346   | 2.8382275        | torch.Size([32, 3, 3, 3])        |
| 1       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.0.0                          | bias                | torch.float32 |           | -1.1260719   | 0.8556141     | -0.0936056   | 0.2422427        | torch.Size([32])                 |
| 1       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.0.0                          | output              | torch.float32 |           | -11.4682589  | 10.2932444    | -0.0118096   | 0.4531450        | torch.Size([12, 32, 128, 352])   |
| 2       | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.patch_embed.0.1                          | input               | torch.float32 |           | -11.4682589  | 10.2932444    | -0.0118096   | 0.4531450        | torch.Size([12, 32, 128, 352])   |
| 2       | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.patch_embed.0.1                          | output              | torch.float32 |           | -11.4682589  | 10.2932444    | -0.0118096   | 0.4531450        | torch.Size([12, 32, 128, 352])   |
| 3       | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | backbone.patch_embed.0.2                          | input               | torch.float32 |           | -11.4682589  | 10.2932444    | -0.0118096   | 0.4531450        | torch.Size([12, 32, 128, 352])   |
| 3       | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | backbone.patch_embed.0.2                          | output              | qint8         | 0.0525488 | 0.0000000    | 6.6737013     | 0.2090752    | 0.1562170        | torch.Size([12, 32, 128, 352])   |
| 4       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.1.0                          | input               | qint8         | 0.0525488 | 0.0000000    | 6.6737013     | 0.2090752    | 0.1562170        | torch.Size([12, 32, 128, 352])   |
| 4       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.1.0                          | weight              | torch.float32 |           | -0.5002961   | 0.2541434     | -0.0059152   | 0.0021482        | torch.Size([64, 32, 3, 3])       |
| 4       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.1.0                          | bias                | torch.float32 |           | -1.6605425   | 1.2818030     | 0.3666089    | 0.3750002        | torch.Size([64])                 |
| 4       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.patch_embed.1.0                          | output              | torch.float32 |           | -15.8234053  | 6.4241562     | 0.0061644    | 0.5156285        | torch.Size([12, 64, 64, 176])    |
| 5       | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.patch_embed.1.1                          | input               | torch.float32 |           | -15.8234053  | 6.4241562     | 0.0061644    | 0.5156285        | torch.Size([12, 64, 64, 176])    |
| 5       | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.patch_embed.1.1                          | output              | torch.float32 |           | -15.8234053  | 6.4241562     | 0.0061644    | 0.5156285        | torch.Size([12, 64, 64, 176])    |
| 6       | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | backbone.patch_embed.1.2                          | input               | torch.float32 |           | -15.8234053  | 6.4241562     | 0.0061644    | 0.5156285        | torch.Size([12, 64, 64, 176])    |
| 6       | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | backbone.patch_embed.1.2                          | output              | qint8         | 0.0314438 | 0.0000000    | 3.9933603     | 0.2563317    | 0.1091638        | torch.Size([12, 64, 64, 176])    |
| 7       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.dwconv.0                | input               | qint8         | 0.0314438 | 0.0000000    | 3.9933603     | 0.2563317    | 0.1091638        | torch.Size([12, 64, 64, 176])    |
| 7       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.dwconv.0                | weight              | torch.float32 |           | -9.5854597   | 7.1587496     | 0.0327995    | 2.1365983        | torch.Size([64, 1, 3, 3])        |
| 7       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.dwconv.0                | bias                | torch.float32 |           | -1.6667110   | 1.2828984     | -0.0091643   | 0.5738692        | torch.Size([64])                 |
| 7       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.dwconv.0                | output              | qint8         | 0.0527713 | -6.7547264   | 6.7019553     | 0.0351541    | 0.6646416        | torch.Size([12, 64, 64, 176])    |
| 8       | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.0.dwconv.1                | input               | qint8         | 0.0527713 | -6.7547264   | 6.7019553     | 0.0351541    | 0.6646416        | torch.Size([12, 64, 64, 176])    |
| 8       | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.0.dwconv.1                | output              | qint8         | 0.0527713 | -6.7547264   | 6.7019553     | 0.0351541    | 0.6646416        | torch.Size([12, 64, 64, 176])    |
| 9       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv1                 | input               | qint8         | 0.0527713 | -6.7547264   | 6.7019553     | 0.0351541    | 0.6646416        | torch.Size([12, 64, 64, 176])    |
| 9       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv1                 | weight              | torch.float32 |           | -0.3771044   | 0.4624698     | -0.0007910   | 0.0080298        | torch.Size([128, 64, 1, 1])      |
| 9       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv1                 | bias                | torch.float32 |           | -0.3051038   | 0.1933560     | -0.0611908   | 0.0079311        | torch.Size([128])                |
| 9       | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv1                 | output              | qint8         | 0.0313504 | -4.0128574   | 3.9815071     | -0.0721105   | 0.4688455        | torch.Size([12, 128, 64, 176])   |
| 10      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.0.act                     | input               | qint8         | 0.0313504 | -4.0128574   | 3.9815071     | -0.0721105   | 0.4688455        | torch.Size([12, 128, 64, 176])   |
| 10      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.0.act                     | output              | qint8         | 0.0237139 | -0.1659972   | 3.0116639     | 0.1129751    | 0.1221377        | torch.Size([12, 128, 64, 176])   |
| 11      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv2                 | input               | qint8         | 0.0237139 | -0.1659972   | 3.0116639     | 0.1129751    | 0.1221377        | torch.Size([12, 128, 64, 176])   |
| 11      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv2                 | weight              | torch.float32 |           | -0.2269347   | 0.2743329     | 0.0035677    | 0.0040366        | torch.Size([64, 128, 1, 1])      |
| 11      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv2                 | bias                | torch.float32 |           | -0.6593340   | 0.4300478     | 0.0252424    | 0.0407704        | torch.Size([64])                 |
| 11      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.0.pwconv2                 | output              | torch.float32 |           | -3.1403356   | 4.0719700     | 0.0474411    | 0.2142279        | torch.Size([12, 64, 64, 176])    |
| 12      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.0.layer_scale             | input               | torch.float32 |           | -3.1403356   | 4.0719700     | 0.0474411    | 0.2142279        | torch.Size([12, 64, 64, 176])    |
| 12      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.0.layer_scale             | output              | torch.float32 |           | -3.1403356   | 4.0719700     | 0.0474411    | 0.2142279        | torch.Size([12, 64, 64, 176])    |
| 13      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.0.add                     | input_0             | qint8         | 0.0314438 | 0.0000000    | 3.9933603     | 0.2563317    | 0.1091638        | torch.Size([12, 64, 64, 176])    |
| 13      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.0.add                     | input_1             | torch.float32 |           | -3.1403356   | 4.0719700     | 0.0474411    | 0.2142279        | torch.Size([12, 64, 64, 176])    |
| 13      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.0.add                     | output              | qint8         | 0.0335567 | -3.1543312   | 4.2617025     | 0.3037468    | 0.3837953        | torch.Size([12, 64, 64, 176])    |
| 14      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.0.extra_act               | input               | qint8         | 0.0335567 | -3.1543312   | 4.2617025     | 0.3037468    | 0.3837953        | torch.Size([12, 64, 64, 176])    |
| 14      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.0.extra_act               | output              | qint8         | 0.0335567 | -3.1543312   | 4.2617025     | 0.3037468    | 0.3837953        | torch.Size([12, 64, 64, 176])    |
| 15      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.dwconv.0                | input               | qint8         | 0.0335567 | -3.1543312   | 4.2617025     | 0.3037468    | 0.3837953        | torch.Size([12, 64, 64, 176])    |
| 15      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.dwconv.0                | weight              | torch.float32 |           | -3.7368960   | 5.5162377     | -0.0070013   | 0.7181063        | torch.Size([64, 1, 3, 3])        |
| 15      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.dwconv.0                | bias                | torch.float32 |           | -2.6346710   | 2.5816419     | 0.1193647    | 0.7623617        | torch.Size([64])                 |
| 15      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.dwconv.0                | output              | qint8         | 0.0527121 | -6.7471476   | 6.6944356     | -0.0077074   | 0.8253501        | torch.Size([12, 64, 64, 176])    |
| 16      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.1.dwconv.1                | input               | qint8         | 0.0527121 | -6.7471476   | 6.6944356     | -0.0077074   | 0.8253501        | torch.Size([12, 64, 64, 176])    |
| 16      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.1.dwconv.1                | output              | qint8         | 0.0527121 | -6.7471476   | 6.6944356     | -0.0077074   | 0.8253501        | torch.Size([12, 64, 64, 176])    |
| 17      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv1                 | input               | qint8         | 0.0527121 | -6.7471476   | 6.6944356     | -0.0077074   | 0.8253501        | torch.Size([12, 64, 64, 176])    |
| 17      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv1                 | weight              | torch.float32 |           | -0.6521709   | 0.5402265     | -0.0032623   | 0.0090059        | torch.Size([128, 64, 1, 1])      |
| 17      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv1                 | bias                | torch.float32 |           | -0.2690851   | 0.2633894     | -0.0974847   | 0.0095327        | torch.Size([128])                |
| 17      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv1                 | output              | qint8         | 0.0396791 | -5.0789218   | 5.0392427     | -0.1147480   | 0.6048589        | torch.Size([12, 128, 64, 176])   |
| 18      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.1.act                     | input               | qint8         | 0.0396791 | -5.0789218   | 5.0392427     | -0.1147480   | 0.6048589        | torch.Size([12, 128, 64, 176])   |
| 18      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.1.act                     | output              | qint8         | 0.0280909 | -0.1685455   | 3.5675468     | 0.1253823    | 0.1503712        | torch.Size([12, 128, 64, 176])   |
| 19      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv2                 | input               | qint8         | 0.0280909 | -0.1685455   | 3.5675468     | 0.1253823    | 0.1503712        | torch.Size([12, 128, 64, 176])   |
| 19      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv2                 | weight              | torch.float32 |           | -0.2849883   | 0.2750434     | 0.0016833    | 0.0052921        | torch.Size([64, 128, 1, 1])      |
| 19      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv2                 | bias                | torch.float32 |           | -0.3476788   | 0.5052569     | 0.0406722    | 0.0286444        | torch.Size([64])                 |
| 19      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.1.pwconv2                 | output              | torch.float32 |           | -3.3145347   | 3.7996991     | 0.0684215    | 0.2054916        | torch.Size([12, 64, 64, 176])    |
| 20      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.1.layer_scale             | input               | torch.float32 |           | -3.3145347   | 3.7996991     | 0.0684215    | 0.2054916        | torch.Size([12, 64, 64, 176])    |
| 20      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.1.layer_scale             | output              | torch.float32 |           | -3.3145347   | 3.7996991     | 0.0684215    | 0.2054916        | torch.Size([12, 64, 64, 176])    |
| 21      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.1.add                     | input_0             | qint8         | 0.0335567 | -3.1543312   | 4.2617025     | 0.3037468    | 0.3837953        | torch.Size([12, 64, 64, 176])    |
| 21      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.1.add                     | input_1             | torch.float32 |           | -3.3145347   | 3.7996991     | 0.0684215    | 0.2054916        | torch.Size([12, 64, 64, 176])    |
| 21      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.1.add                     | output              | qint8         | 0.0408227 | -4.6946115   | 5.1844845     | 0.3721478    | 0.7486000        | torch.Size([12, 64, 64, 176])    |
| 22      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.1.extra_act               | input               | qint8         | 0.0408227 | -4.6946115   | 5.1844845     | 0.3721478    | 0.7486000        | torch.Size([12, 64, 64, 176])    |
| 22      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.1.extra_act               | output              | qint8         | 0.0408227 | -4.6946115   | 5.1844845     | 0.3721478    | 0.7486000        | torch.Size([12, 64, 64, 176])    |
| 23      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.dwconv.0                | input               | qint8         | 0.0408227 | -4.6946115   | 5.1844845     | 0.3721478    | 0.7486000        | torch.Size([12, 64, 64, 176])    |
| 23      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.dwconv.0                | weight              | torch.float32 |           | -2.2920854   | 2.0023391     | 0.0041875    | 0.4041744        | torch.Size([64, 1, 3, 3])        |
| 23      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.dwconv.0                | bias                | torch.float32 |           | -2.0258489   | 1.2771499     | -0.0540585   | 0.4061824        | torch.Size([64])                 |
| 23      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.dwconv.0                | output              | qint8         | 0.0546444 | -6.9944849   | 6.9398403     | 0.0047156    | 1.0848008        | torch.Size([12, 64, 64, 176])    |
| 24      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.2.dwconv.1                | input               | qint8         | 0.0546444 | -6.9944849   | 6.9398403     | 0.0047156    | 1.0848008        | torch.Size([12, 64, 64, 176])    |
| 24      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.2.dwconv.1                | output              | qint8         | 0.0546444 | -6.9944849   | 6.9398403     | 0.0047156    | 1.0848008        | torch.Size([12, 64, 64, 176])    |
| 25      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv1                 | input               | qint8         | 0.0546444 | -6.9944849   | 6.9398403     | 0.0047156    | 1.0848008        | torch.Size([12, 64, 64, 176])    |
| 25      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv1                 | weight              | torch.float32 |           | -0.5646603   | 0.4319130     | -0.0004717   | 0.0100047        | torch.Size([128, 64, 1, 1])      |
| 25      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv1                 | bias                | torch.float32 |           | -0.3360012   | 0.1481226     | -0.1170983   | 0.0094382        | torch.Size([128])                |
| 25      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv1                 | output              | qint8         | 0.0550595 | -7.0476127   | 6.9925532     | -0.2377845   | 1.0312603        | torch.Size([12, 128, 64, 176])   |
| 26      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.2.act                     | input               | qint8         | 0.0550595 | -7.0476127   | 6.9925532     | -0.2377845   | 1.0312603        | torch.Size([12, 128, 64, 176])   |
| 26      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.2.act                     | output              | qint8         | 0.0532121 | -0.1596364   | 6.7579403     | 0.1580756    | 0.2708760        | torch.Size([12, 128, 64, 176])   |
| 27      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv2                 | input               | qint8         | 0.0532121 | -0.1596364   | 6.7579403     | 0.1580756    | 0.2708760        | torch.Size([12, 128, 64, 176])   |
| 27      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv2                 | weight              | torch.float32 |           | -0.3032996   | 0.3379996     | 0.0030602    | 0.0071195        | torch.Size([64, 128, 1, 1])      |
| 27      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv2                 | bias                | torch.float32 |           | -0.3535352   | 0.3696326     | 0.0459012    | 0.0274600        | torch.Size([64])                 |
| 27      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.2.pwconv2                 | output              | torch.float32 |           | -7.3813233   | 7.7187319     | 0.1261926    | 0.4543925        | torch.Size([12, 64, 64, 176])    |
| 28      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.2.layer_scale             | input               | torch.float32 |           | -7.3813233   | 7.7187319     | 0.1261926    | 0.4543925        | torch.Size([12, 64, 64, 176])    |
| 28      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.2.layer_scale             | output              | torch.float32 |           | -7.3813233   | 7.7187319     | 0.1261926    | 0.4543925        | torch.Size([12, 64, 64, 176])    |
| 29      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.2.add                     | input_0             | qint8         | 0.0408227 | -4.6946115   | 5.1844845     | 0.3721478    | 0.7486000        | torch.Size([12, 64, 64, 176])    |
| 29      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.2.add                     | input_1             | torch.float32 |           | -7.3813233   | 7.7187319     | 0.1261926    | 0.4543925        | torch.Size([12, 64, 64, 176])    |
| 29      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.2.add                     | output              | qint8         | 0.0520774 | -6.6659021   | 6.6138248     | 0.4983042    | 1.3447405        | torch.Size([12, 64, 64, 176])    |
| 30      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.2.extra_act               | input               | qint8         | 0.0520774 | -6.6659021   | 6.6138248     | 0.4983042    | 1.3447405        | torch.Size([12, 64, 64, 176])    |
| 30      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.2.extra_act               | output              | qint8         | 0.0520774 | -6.6659021   | 6.6138248     | 0.4983042    | 1.3447405        | torch.Size([12, 64, 64, 176])    |
| 31      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.dwconv.0                | input               | qint8         | 0.0520774 | -6.6659021   | 6.6138248     | 0.4983042    | 1.3447405        | torch.Size([12, 64, 64, 176])    |
| 31      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.dwconv.0                | weight              | torch.float32 |           | -1.4082469   | 1.4164271     | 0.0166782    | 0.1640987        | torch.Size([64, 1, 3, 3])        |
| 31      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.dwconv.0                | bias                | torch.float32 |           | -2.7496595   | 2.5150573     | 0.0056774    | 0.8893678        | torch.Size([64])                 |
| 31      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.dwconv.0                | output              | qint8         | 0.0523424 | -6.6998301   | 6.6474876     | -0.0024692   | 1.1882778        | torch.Size([12, 64, 64, 176])    |
| 32      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.3.dwconv.1                | input               | qint8         | 0.0523424 | -6.6998301   | 6.6474876     | -0.0024692   | 1.1882778        | torch.Size([12, 64, 64, 176])    |
| 32      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.3.dwconv.1                | output              | qint8         | 0.0523424 | -6.6998301   | 6.6474876     | -0.0024692   | 1.1882778        | torch.Size([12, 64, 64, 176])    |
| 33      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv1                 | input               | qint8         | 0.0523424 | -6.6998301   | 6.6474876     | -0.0024692   | 1.1882778        | torch.Size([12, 64, 64, 176])    |
| 33      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv1                 | weight              | torch.float32 |           | -0.3884685   | 0.4816757     | -0.0017783   | 0.0104277        | torch.Size([128, 64, 1, 1])      |
| 33      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv1                 | bias                | torch.float32 |           | -0.3522463   | 0.0540238     | -0.1331877   | 0.0076328        | torch.Size([128])                |
| 33      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv1                 | output              | qint8         | 0.0532339 | -6.8139377   | 6.7607036     | -0.2501832   | 1.0964160        | torch.Size([12, 128, 64, 176])   |
| 34      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.3.act                     | input               | qint8         | 0.0532339 | -6.8139377   | 6.7607036     | -0.2501832   | 1.0964160        | torch.Size([12, 128, 64, 176])   |
| 34      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.0.block.3.act                     | output              | qint8         | 0.0412116 | -0.1648464   | 5.2338724     | 0.1688455    | 0.2671965        | torch.Size([12, 128, 64, 176])   |
| 35      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv2                 | input               | qint8         | 0.0412116 | -0.1648464   | 5.2338724     | 0.1688455    | 0.2671965        | torch.Size([12, 128, 64, 176])   |
| 35      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv2                 | weight              | torch.float32 |           | -0.3284251   | 0.3101799     | 0.0006201    | 0.0084742        | torch.Size([64, 128, 1, 1])      |
| 35      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv2                 | bias                | torch.float32 |           | -0.2236055   | 0.1901882     | -0.0011887   | 0.0069269        | torch.Size([64])                 |
| 35      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.0.block.3.pwconv2                 | output              | torch.float32 |           | -8.5308514   | 6.2305284     | 0.0369930    | 0.4795334        | torch.Size([12, 64, 64, 176])    |
| 36      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.3.layer_scale             | input               | torch.float32 |           | -8.5308514   | 6.2305284     | 0.0369930    | 0.4795334        | torch.Size([12, 64, 64, 176])    |
| 36      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.0.block.3.layer_scale             | output              | torch.float32 |           | -8.5308514   | 6.2305284     | 0.0369930    | 0.4795334        | torch.Size([12, 64, 64, 176])    |
| 37      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.3.add                     | input_0             | qint8         | 0.0520774 | -6.6659021   | 6.6138248     | 0.4983042    | 1.3447405        | torch.Size([12, 64, 64, 176])    |
| 37      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.3.add                     | input_1             | torch.float32 |           | -8.5308514   | 6.2305284     | 0.0369930    | 0.4795334        | torch.Size([12, 64, 64, 176])    |
| 37      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.0.block.3.add                     | output              | qint8         | 0.0589087 | -7.5403147   | 7.4814057     | 0.5352739    | 1.9067529        | torch.Size([12, 64, 64, 176])    |
| 38      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.3.extra_act               | input               | qint8         | 0.0589087 | -7.5403147   | 7.4814057     | 0.5352739    | 1.9067529        | torch.Size([12, 64, 64, 176])    |
| 38      | torch.nn.modules.linear.Identity                                            | backbone.stages.0.block.3.extra_act               | output              | qint8         | 0.0589087 | -7.5403147   | 7.4814057     | 0.5352739    | 1.9067529        | torch.Size([12, 64, 64, 176])    |
| 39      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.0                             | input               | qint8         | 0.0589087 | -7.5403147   | 7.4814057     | 0.5352739    | 1.9067529        | torch.Size([12, 64, 64, 176])    |
| 39      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.0                             | weight              | torch.float32 |           | 0.0117894    | 0.2205137     | 0.0516751    | 0.0024542        | torch.Size([64])                 |
| 39      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.0                             | bias                | torch.float32 |           | -0.0642388   | 0.1026925     | 0.0011650    | 0.0004492        | torch.Size([64])                 |
| 39      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.0                             | running_mean        | torch.float32 |           | -1.0947821   | 2.1904764     | 0.6345965    | 0.4023141        | torch.Size([64])                 |
| 39      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.0                             | running_var         | torch.float32 |           | 0.7230649    | 3.3071532     | 1.4103394    | 0.2920620        | torch.Size([64])                 |
| 39      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.0                             | num_batches_tracked | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | nan              | torch.Size([])                   |
| 39      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.0                             | output              | qint8         | 0.0065952 | -0.8441840   | 0.8375888     | -0.0022974   | 0.0058313        | torch.Size([12, 64, 64, 176])    |
| 40      | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | backbone.up                                       | input               | qint8         | 0.0065952 | -0.8441840   | 0.8375888     | -0.0022974   | 0.0058313        | torch.Size([12, 64, 64, 176])    |
| 40      | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | backbone.up                                       | output              | qint8         | 0.0065952 | -0.8441840   | 0.8375888     | -0.0023060   | 0.0045197        | torch.Size([12, 64, 128, 352])   |
| 41      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.0.proj.0                | input               | qint8         | 0.0589087 | -7.5403147   | 7.4814057     | 0.5352739    | 1.9067529        | torch.Size([12, 64, 64, 176])    |
| 41      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.0.proj.0                | weight              | torch.float32 |           | -0.1137355   | 0.1251108     | 0.0000046    | 0.0003832        | torch.Size([128, 64, 2, 2])      |
| 41      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.0.proj.0                | bias                | torch.float32 |           | -1.6827896   | 1.7587558     | -0.0279865   | 0.4669739        | torch.Size([128])                |
| 41      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.0.proj.0                | output              | qint8         | 0.0422032 | -5.4020047   | 5.3598018     | 0.0123171    | 1.0560318        | torch.Size([12, 128, 32, 88])    |
| 42      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.downsample_block.0.proj.1                | input               | qint8         | 0.0422032 | -5.4020047   | 5.3598018     | 0.0123171    | 1.0560318        | torch.Size([12, 128, 32, 88])    |
| 42      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.downsample_block.0.proj.1                | output              | qint8         | 0.0422032 | -5.4020047   | 5.3598018     | 0.0123171    | 1.0560318        | torch.Size([12, 128, 32, 88])    |
| 43      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.dwconv.0                | input               | qint8         | 0.0422032 | -5.4020047   | 5.3598018     | 0.0123171    | 1.0560318        | torch.Size([12, 128, 32, 88])    |
| 43      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.dwconv.0                | weight              | torch.float32 |           | -1.3887808   | 1.2360568     | 0.0055723    | 0.2056874        | torch.Size([128, 1, 3, 3])       |
| 43      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.dwconv.0                | bias                | torch.float32 |           | -0.5518610   | 0.5826378     | -0.0071815   | 0.0450563        | torch.Size([128])                |
| 43      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.dwconv.0                | output              | qint8         | 0.0484290 | -6.1989117   | 6.1504827     | 0.0040160    | 1.1735305        | torch.Size([12, 128, 32, 88])    |
| 44      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.0.dwconv.1                | input               | qint8         | 0.0484290 | -6.1989117   | 6.1504827     | 0.0040160    | 1.1735305        | torch.Size([12, 128, 32, 88])    |
| 44      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.0.dwconv.1                | output              | qint8         | 0.0484290 | -6.1989117   | 6.1504827     | 0.0040160    | 1.1735305        | torch.Size([12, 128, 32, 88])    |
| 45      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv1                 | input               | qint8         | 0.0484290 | -6.1989117   | 6.1504827     | 0.0040160    | 1.1735305        | torch.Size([12, 128, 32, 88])    |
| 45      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv1                 | weight              | torch.float32 |           | -0.3914456   | 0.4410824     | 0.0011752    | 0.0094514        | torch.Size([256, 128, 1, 1])     |
| 45      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv1                 | bias                | torch.float32 |           | -0.3276729   | 0.0660501     | -0.1640906   | 0.0061752        | torch.Size([256])                |
| 45      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv1                 | output              | qint8         | 0.0462013 | -5.9137607   | 5.8675594     | -0.3238384   | 1.0551240        | torch.Size([12, 256, 32, 88])    |
| 46      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.1.block.0.act                     | input               | qint8         | 0.0462013 | -5.9137607   | 5.8675594     | -0.3238384   | 1.0551240        | torch.Size([12, 256, 32, 88])    |
| 46      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.1.block.0.act                     | output              | qint8         | 0.0408330 | -0.1633319   | 5.1857872     | 0.1358956    | 0.2399434        | torch.Size([12, 256, 32, 88])    |
| 47      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv2                 | input               | qint8         | 0.0408330 | -0.1633319   | 5.1857872     | 0.1358956    | 0.2399434        | torch.Size([12, 256, 32, 88])    |
| 47      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv2                 | weight              | torch.float32 |           | -0.2906360   | 0.3089400     | 0.0000804    | 0.0049468        | torch.Size([128, 256, 1, 1])     |
| 47      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv2                 | bias                | torch.float32 |           | -0.5209989   | 0.3338013     | 0.0037921    | 0.0280195        | torch.Size([128])                |
| 47      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.0.pwconv2                 | output              | torch.float32 |           | -8.0407324   | 8.0573835     | 0.0200724    | 0.6959179        | torch.Size([12, 128, 32, 88])    |
| 48      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.0.layer_scale             | input               | torch.float32 |           | -8.0407324   | 8.0573835     | 0.0200724    | 0.6959179        | torch.Size([12, 128, 32, 88])    |
| 48      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.0.layer_scale             | output              | torch.float32 |           | -8.0407324   | 8.0573835     | 0.0200724    | 0.6959179        | torch.Size([12, 128, 32, 88])    |
| 49      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.0.add                     | input_0             | qint8         | 0.0422032 | -5.4020047   | 5.3598018     | 0.0123171    | 1.0560318        | torch.Size([12, 128, 32, 88])    |
| 49      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.0.add                     | input_1             | torch.float32 |           | -8.0407324   | 8.0573835     | 0.0200724    | 0.6959179        | torch.Size([12, 128, 32, 88])    |
| 49      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.0.add                     | output              | qint8         | 0.0555393 | -7.1090307   | 7.0534916     | 0.0323674    | 1.9331328        | torch.Size([12, 128, 32, 88])    |
| 50      | torch.nn.modules.linear.Identity                                            | backbone.stages.1.block.0.extra_act               | input               | qint8         | 0.0555393 | -7.1090307   | 7.0534916     | 0.0323674    | 1.9331328        | torch.Size([12, 128, 32, 88])    |
| 50      | torch.nn.modules.linear.Identity                                            | backbone.stages.1.block.0.extra_act               | output              | qint8         | 0.0555393 | -7.1090307   | 7.0534916     | 0.0323674    | 1.9331328        | torch.Size([12, 128, 32, 88])    |
| 51      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.dwconv.0                | input               | qint8         | 0.0555393 | -7.1090307   | 7.0534916     | 0.0323674    | 1.9331328        | torch.Size([12, 128, 32, 88])    |
| 51      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.dwconv.0                | weight              | torch.float32 |           | -1.0748863   | 1.0881808     | -0.0088984   | 0.1311005        | torch.Size([128, 1, 3, 3])       |
| 51      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.dwconv.0                | bias                | torch.float32 |           | -1.2457412   | 0.8867583     | 0.0394011    | 0.0991965        | torch.Size([128])                |
| 51      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.dwconv.0                | output              | qint8         | 0.0531897 | -6.8082752   | 6.7550855     | -0.0013084   | 1.3131974        | torch.Size([12, 128, 32, 88])    |
| 52      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.1.dwconv.1                | input               | qint8         | 0.0531897 | -6.8082752   | 6.7550855     | -0.0013084   | 1.3131974        | torch.Size([12, 128, 32, 88])    |
| 52      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.1.dwconv.1                | output              | qint8         | 0.0531897 | -6.8082752   | 6.7550855     | -0.0013084   | 1.3131974        | torch.Size([12, 128, 32, 88])    |
| 53      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv1                 | input               | qint8         | 0.0531897 | -6.8082752   | 6.7550855     | -0.0013084   | 1.3131974        | torch.Size([12, 128, 32, 88])    |
| 53      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv1                 | weight              | torch.float32 |           | -0.4112096   | 0.4213794     | 0.0000915    | 0.0102508        | torch.Size([256, 128, 1, 1])     |
| 53      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv1                 | bias                | torch.float32 |           | -0.3662765   | 0.0691123     | -0.1543788   | 0.0071230        | torch.Size([256])                |
| 53      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv1                 | output              | qint8         | 0.0554996 | -7.1039462   | 7.0484467     | -0.3976242   | 1.2104877        | torch.Size([12, 256, 32, 88])    |
| 54      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.1.block.1.act                     | input               | qint8         | 0.0554996 | -7.1039462   | 7.0484467     | -0.3976242   | 1.2104877        | torch.Size([12, 256, 32, 88])    |
| 54      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.1.block.1.act                     | output              | qint8         | 0.0416667 | -0.1666668   | 5.2916718     | 0.1290638    | 0.2402428        | torch.Size([12, 256, 32, 88])    |
| 55      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv2                 | input               | qint8         | 0.0416667 | -0.1666668   | 5.2916718     | 0.1290638    | 0.2402428        | torch.Size([12, 256, 32, 88])    |
| 55      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv2                 | weight              | torch.float32 |           | -0.3238759   | 0.3630225     | 0.0008478    | 0.0062374        | torch.Size([128, 256, 1, 1])     |
| 55      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv2                 | bias                | torch.float32 |           | -0.3402917   | 0.3940509     | 0.0060393    | 0.0272719        | torch.Size([128])                |
| 55      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.1.pwconv2                 | output              | torch.float32 |           | -10.0656881  | 9.1972656     | 0.0317694    | 0.7171644        | torch.Size([12, 128, 32, 88])    |
| 56      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.1.layer_scale             | input               | torch.float32 |           | -10.0656881  | 9.1972656     | 0.0317694    | 0.7171644        | torch.Size([12, 128, 32, 88])    |
| 56      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.1.layer_scale             | output              | torch.float32 |           | -10.0656881  | 9.1972656     | 0.0317694    | 0.7171644        | torch.Size([12, 128, 32, 88])    |
| 57      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.1.add                     | input_0             | qint8         | 0.0555393 | -7.1090307   | 7.0534916     | 0.0323674    | 1.9331328        | torch.Size([12, 128, 32, 88])    |
| 57      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.1.add                     | input_1             | torch.float32 |           | -10.0656881  | 9.1972656     | 0.0317694    | 0.7171644        | torch.Size([12, 128, 32, 88])    |
| 57      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.1.add                     | output              | qint8         | 0.0738817 | -9.4568539   | 9.3829718     | 0.0641019    | 2.8711321        | torch.Size([12, 128, 32, 88])    |
| 58      | torch.nn.modules.linear.Identity                                            | backbone.stages.1.block.1.extra_act               | input               | qint8         | 0.0738817 | -9.4568539   | 9.3829718     | 0.0641019    | 2.8711321        | torch.Size([12, 128, 32, 88])    |
| 58      | torch.nn.modules.linear.Identity                                            | backbone.stages.1.block.1.extra_act               | output              | qint8         | 0.0738817 | -9.4568539   | 9.3829718     | 0.0641019    | 2.8711321        | torch.Size([12, 128, 32, 88])    |
| 59      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.dwconv.0                | input               | qint8         | 0.0738817 | -9.4568539   | 9.3829718     | 0.0641019    | 2.8711321        | torch.Size([12, 128, 32, 88])    |
| 59      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.dwconv.0                | weight              | torch.float32 |           | -0.7983105   | 0.8143296     | -0.0063120   | 0.0799373        | torch.Size([128, 1, 3, 3])       |
| 59      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.dwconv.0                | bias                | torch.float32 |           | -1.1406622   | 0.9870166     | 0.0115092    | 0.1416940        | torch.Size([128])                |
| 59      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.dwconv.0                | output              | qint8         | 0.0511533 | -6.5476251   | 6.4964719     | -0.0008168   | 1.3008772        | torch.Size([12, 128, 32, 88])    |
| 60      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.2.dwconv.1                | input               | qint8         | 0.0511533 | -6.5476251   | 6.4964719     | -0.0008168   | 1.3008772        | torch.Size([12, 128, 32, 88])    |
| 60      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.2.dwconv.1                | output              | qint8         | 0.0511533 | -6.5476251   | 6.4964719     | -0.0008168   | 1.3008772        | torch.Size([12, 128, 32, 88])    |
| 61      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv1                 | input               | qint8         | 0.0511533 | -6.5476251   | 6.4964719     | -0.0008168   | 1.3008772        | torch.Size([12, 128, 32, 88])    |
| 61      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv1                 | weight              | torch.float32 |           | -0.4869311   | 0.3686527     | 0.0017785    | 0.0100755        | torch.Size([256, 128, 1, 1])     |
| 61      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv1                 | bias                | torch.float32 |           | -0.3687980   | 0.1828431     | -0.1405843   | 0.0078767        | torch.Size([256])                |
| 61      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv1                 | output              | qint8         | 0.0482297 | -6.1734052   | 6.1251755     | -0.3489749   | 1.0822709        | torch.Size([12, 256, 32, 88])    |
| 62      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.1.block.2.act                     | input               | qint8         | 0.0482297 | -6.1734052   | 6.1251755     | -0.3489749   | 1.0822709        | torch.Size([12, 256, 32, 88])    |
| 62      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.1.block.2.act                     | output              | qint8         | 0.0406772 | -0.1627090   | 5.1660109     | 0.1318952    | 0.2245581        | torch.Size([12, 256, 32, 88])    |
| 63      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv2                 | input               | qint8         | 0.0406772 | -0.1627090   | 5.1660109     | 0.1318952    | 0.2245581        | torch.Size([12, 256, 32, 88])    |
| 63      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv2                 | weight              | torch.float32 |           | -0.3565927   | 0.3791376     | -0.0002235   | 0.0064779        | torch.Size([128, 256, 1, 1])     |
| 63      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv2                 | bias                | torch.float32 |           | -0.1682071   | 0.2197312     | 0.0018994    | 0.0036639        | torch.Size([128])                |
| 63      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.1.block.2.pwconv2                 | output              | torch.float32 |           | -7.6019006   | 8.2709017     | -0.0047825   | 0.7233713        | torch.Size([12, 128, 32, 88])    |
| 64      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.2.layer_scale             | input               | torch.float32 |           | -7.6019006   | 8.2709017     | -0.0047825   | 0.7233713        | torch.Size([12, 128, 32, 88])    |
| 64      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.1.block.2.layer_scale             | output              | torch.float32 |           | -7.6019006   | 8.2709017     | -0.0047825   | 0.7233713        | torch.Size([12, 128, 32, 88])    |
| 65      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.2.add                     | input_0             | qint8         | 0.0738817 | -9.4568539   | 9.3829718     | 0.0641019    | 2.8711321        | torch.Size([12, 128, 32, 88])    |
| 65      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.2.add                     | input_1             | torch.float32 |           | -7.6019006   | 8.2709017     | -0.0047825   | 0.7233713        | torch.Size([12, 128, 32, 88])    |
| 65      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.1.block.2.add                     | output              | qint8         | 0.0837117 | -10.7150927  | 10.6313810    | 0.0593153    | 3.9527898        | torch.Size([12, 128, 32, 88])    |
| 66      | torch.nn.modules.linear.Identity                                            | backbone.stages.1.block.2.extra_act               | input               | qint8         | 0.0837117 | -10.7150927  | 10.6313810    | 0.0593153    | 3.9527898        | torch.Size([12, 128, 32, 88])    |
| 66      | torch.nn.modules.linear.Identity                                            | backbone.stages.1.block.2.extra_act               | output              | qint8         | 0.0837117 | -10.7150927  | 10.6313810    | 0.0593153    | 3.9527898        | torch.Size([12, 128, 32, 88])    |
| 67      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.1                             | input               | qint8         | 0.0837117 | -10.7150927  | 10.6313810    | 0.0593153    | 3.9527898        | torch.Size([12, 128, 32, 88])    |
| 67      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.1                             | weight              | torch.float32 |           | 0.0262112    | 0.7064559     | 0.2382833    | 0.0347776        | torch.Size([128])                |
| 67      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.1                             | bias                | torch.float32 |           | -0.0895434   | 0.0831020     | -0.0037414   | 0.0007803        | torch.Size([128])                |
| 67      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.1                             | running_mean        | torch.float32 |           | -2.1813064   | 1.4853035     | 0.0402109    | 0.6425784        | torch.Size([128])                |
| 67      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.1                             | running_var         | torch.float32 |           | 1.3674419    | 7.8914661     | 2.6385241    | 0.5175550        | torch.Size([128])                |
| 67      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.1                             | num_batches_tracked | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | nan              | torch.Size([])                   |
| 67      | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.1                             | output              | qint8         | 0.0211009 | -2.7009161   | 2.6798151     | -0.0034986   | 0.1091889        | torch.Size([12, 128, 32, 88])    |
| 68      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.1.proj.0                | input               | qint8         | 0.0837117 | -10.7150927  | 10.6313810    | 0.0593153    | 3.9527898        | torch.Size([12, 128, 32, 88])    |
| 68      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.1.proj.0                | weight              | torch.float32 |           | -0.0401473   | 0.0409340     | -0.0000718   | 0.0000710        | torch.Size([192, 128, 2, 2])     |
| 68      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.1.proj.0                | bias                | torch.float32 |           | -1.0511650   | 1.2528142     | 0.0200040    | 0.2073513        | torch.Size([192])                |
| 68      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.1.proj.0                | output              | qint8         | 0.0433291 | -5.5461278   | 5.5027986     | -0.0217158   | 1.2607620        | torch.Size([12, 192, 16, 44])    |
| 69      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.downsample_block.1.proj.1                | input               | qint8         | 0.0433291 | -5.5461278   | 5.5027986     | -0.0217158   | 1.2607620        | torch.Size([12, 192, 16, 44])    |
| 69      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.downsample_block.1.proj.1                | output              | qint8         | 0.0433291 | -5.5461278   | 5.5027986     | -0.0217158   | 1.2607620        | torch.Size([12, 192, 16, 44])    |
| 70      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.dwconv.0.0              | input               | qint8         | 0.0433291 | -5.5461278   | 5.5027986     | -0.0217158   | 1.2607620        | torch.Size([12, 192, 16, 44])    |
| 70      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.dwconv.0.0              | weight              | torch.float32 |           | -1.8053808   | 1.6027147     | -0.0089043   | 0.3436099        | torch.Size([192, 1, 1, 5])       |
| 70      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.dwconv.0.0              | bias                | torch.float32 |           | -0.6191546   | 0.8098391     | 0.0105077    | 0.0645590        | torch.Size([192])                |
| 70      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.dwconv.0.0              | output              | qint8         | 0.0450672 | -5.7685986   | 5.7235312     | 0.0122327    | 0.8703461        | torch.Size([12, 192, 16, 44])    |
| 71      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.0.dwconv.0.1              | input               | qint8         | 0.0450672 | -5.7685986   | 5.7235312     | 0.0122327    | 0.8703461        | torch.Size([12, 192, 16, 44])    |
| 71      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.0.dwconv.0.1              | output              | qint8         | 0.0450672 | -5.7685986   | 5.7235312     | 0.0122327    | 0.8703461        | torch.Size([12, 192, 16, 44])    |
| 72      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv1.0               | input               | qint8         | 0.0450672 | -5.7685986   | 5.7235312     | 0.0122327    | 0.8703461        | torch.Size([12, 192, 16, 44])    |
| 72      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv1.0               | weight              | torch.float32 |           | -0.3090113   | 0.3081646     | -0.0006261   | 0.0043692        | torch.Size([384, 192, 1, 1])     |
| 72      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv1.0               | bias                | torch.float32 |           | -0.3488774   | 0.0704211     | -0.1746289   | 0.0044114        | torch.Size([384])                |
| 72      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv1.0               | output              | qint8         | 0.0601353 | -7.6973171   | 7.6371818     | -0.8319537   | 1.6075342        | torch.Size([12, 384, 16, 44])    |
| 73      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.0.pwconv1.1               | input               | qint8         | 0.0601353 | -7.6973171   | 7.6371818     | -0.8319537   | 1.6075342        | torch.Size([12, 384, 16, 44])    |
| 73      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.0.pwconv1.1               | output              | qint8         | 0.0415570 | -0.1662281   | 5.2777424     | 0.0678542    | 0.1924001        | torch.Size([12, 384, 16, 44])    |
| 74      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv2                 | input               | qint8         | 0.0415570 | -0.1662281   | 5.2777424     | 0.0678542    | 0.1924001        | torch.Size([12, 384, 16, 44])    |
| 74      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv2                 | weight              | torch.float32 |           | -0.2394255   | 0.2281027     | -0.0003773   | 0.0027870        | torch.Size([192, 384, 1, 1])     |
| 74      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv2                 | bias                | torch.float32 |           | -0.3724385   | 0.3247940     | -0.0013486   | 0.0136637        | torch.Size([192])                |
| 74      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.0.pwconv2                 | output              | torch.float32 |           | -8.3487129   | 7.1944060     | -0.0110288   | 0.5298706        | torch.Size([12, 192, 16, 44])    |
| 75      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.0.layer_scale             | input               | torch.float32 |           | -8.3487129   | 7.1944060     | -0.0110288   | 0.5298706        | torch.Size([12, 192, 16, 44])    |
| 75      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.0.layer_scale             | output              | torch.float32 |           | -8.3487129   | 7.1944060     | -0.0110288   | 0.5298706        | torch.Size([12, 192, 16, 44])    |
| 76      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.0.add                     | input_0             | qint8         | 0.0433291 | -5.5461278   | 5.5027986     | -0.0217158   | 1.2607620        | torch.Size([12, 192, 16, 44])    |
| 76      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.0.add                     | input_1             | torch.float32 |           | -8.3487129   | 7.1944060     | -0.0110288   | 0.5298706        | torch.Size([12, 192, 16, 44])    |
| 76      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.0.add                     | output              | qint8         | 0.0638573 | -8.1737394   | 8.1098824     | -0.0327473   | 1.9108361        | torch.Size([12, 192, 16, 44])    |
| 77      | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.0.extra_act               | input               | qint8         | 0.0638573 | -8.1737394   | 8.1098824     | -0.0327473   | 1.9108361        | torch.Size([12, 192, 16, 44])    |
| 77      | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.0.extra_act               | output              | qint8         | 0.0638573 | -8.1737394   | 8.1098824     | -0.0327473   | 1.9108361        | torch.Size([12, 192, 16, 44])    |
| 78      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.dwconv.0.0              | input               | qint8         | 0.0638573 | -8.1737394   | 8.1098824     | -0.0327473   | 1.9108361        | torch.Size([12, 192, 16, 44])    |
| 78      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.dwconv.0.0              | weight              | torch.float32 |           | -1.1196158   | 1.1111720     | -0.0122636   | 0.1747316        | torch.Size([192, 1, 5, 1])       |
| 78      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.dwconv.0.0              | bias                | torch.float32 |           | -1.2218522   | 1.0962701     | 0.0492770    | 0.1350441        | torch.Size([192])                |
| 78      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.dwconv.0.0              | output              | qint8         | 0.0451379 | -5.7776556   | 5.7325177     | 0.0182385    | 1.0605037        | torch.Size([12, 192, 16, 44])    |
| 79      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.1.dwconv.0.1              | input               | qint8         | 0.0451379 | -5.7776556   | 5.7325177     | 0.0182385    | 1.0605037        | torch.Size([12, 192, 16, 44])    |
| 79      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.1.dwconv.0.1              | output              | qint8         | 0.0451379 | -5.7776556   | 5.7325177     | 0.0182385    | 1.0605037        | torch.Size([12, 192, 16, 44])    |
| 80      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv1.0               | input               | qint8         | 0.0451379 | -5.7776556   | 5.7325177     | 0.0182385    | 1.0605037        | torch.Size([12, 192, 16, 44])    |
| 80      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv1.0               | weight              | torch.float32 |           | -0.3532308   | 0.3216734     | -0.0008758   | 0.0052960        | torch.Size([384, 192, 1, 1])     |
| 80      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv1.0               | bias                | torch.float32 |           | -0.3628721   | 0.0242217     | -0.1780612   | 0.0045064        | torch.Size([384])                |
| 80      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv1.0               | output              | qint8         | 0.0667666 | -8.5461216   | 8.4793549     | -0.9763855   | 2.1657963        | torch.Size([12, 384, 16, 44])    |
| 81      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.1.pwconv1.1               | input               | qint8         | 0.0667666 | -8.5461216   | 8.4793549     | -0.9763855   | 2.1657963        | torch.Size([12, 384, 16, 44])    |
| 81      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.1.pwconv1.1               | output              | qint8         | 0.0434954 | -0.1739816   | 5.5239153     | 0.1145481    | 0.2610558        | torch.Size([12, 384, 16, 44])    |
| 82      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv2                 | input               | qint8         | 0.0434954 | -0.1739816   | 5.5239153     | 0.1145481    | 0.2610558        | torch.Size([12, 384, 16, 44])    |
| 82      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv2                 | weight              | torch.float32 |           | -0.2762360   | 0.2889839     | 0.0003965    | 0.0041298        | torch.Size([192, 384, 1, 1])     |
| 82      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv2                 | bias                | torch.float32 |           | -0.3792332   | 0.2417495     | 0.0055696    | 0.0114180        | torch.Size([192])                |
| 82      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.1.pwconv2                 | output              | torch.float32 |           | -7.9616427   | 9.3826647     | 0.0063098    | 0.9034420        | torch.Size([12, 192, 16, 44])    |
| 83      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.1.layer_scale             | input               | torch.float32 |           | -7.9616427   | 9.3826647     | 0.0063098    | 0.9034420        | torch.Size([12, 192, 16, 44])    |
| 83      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.1.layer_scale             | output              | torch.float32 |           | -7.9616427   | 9.3826647     | 0.0063098    | 0.9034420        | torch.Size([12, 192, 16, 44])    |
| 84      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.1.add                     | input_0             | qint8         | 0.0638573 | -8.1737394   | 8.1098824     | -0.0327473   | 1.9108361        | torch.Size([12, 192, 16, 44])    |
| 84      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.1.add                     | input_1             | torch.float32 |           | -7.9616427   | 9.3826647     | 0.0063098    | 0.9034420        | torch.Size([12, 192, 16, 44])    |
| 84      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.1.add                     | output              | qint8         | 0.0797823 | -10.2121325  | 10.1323500    | -0.0264416   | 3.0118122        | torch.Size([12, 192, 16, 44])    |
| 85      | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.1.extra_act               | input               | qint8         | 0.0797823 | -10.2121325  | 10.1323500    | -0.0264416   | 3.0118122        | torch.Size([12, 192, 16, 44])    |
| 85      | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.1.extra_act               | output              | qint8         | 0.0797823 | -10.2121325  | 10.1323500    | -0.0264416   | 3.0118122        | torch.Size([12, 192, 16, 44])    |
| 86      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.dwconv.0.0              | input               | qint8         | 0.0797823 | -10.2121325  | 10.1323500    | -0.0264416   | 3.0118122        | torch.Size([12, 192, 16, 44])    |
| 86      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.dwconv.0.0              | weight              | torch.float32 |           | -0.8671168   | 1.1124765     | 0.0133938    | 0.1169963        | torch.Size([192, 1, 1, 5])       |
| 86      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.dwconv.0.0              | bias                | torch.float32 |           | -1.0464172   | 1.2038803     | 0.0141884    | 0.1114662        | torch.Size([192])                |
| 86      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.dwconv.0.0              | output              | qint8         | 0.0505373 | -6.4687710   | 6.4182339     | 0.0232683    | 1.0443941        | torch.Size([12, 192, 16, 44])    |
| 87      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.2.dwconv.0.1              | input               | qint8         | 0.0505373 | -6.4687710   | 6.4182339     | 0.0232683    | 1.0443941        | torch.Size([12, 192, 16, 44])    |
| 87      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.2.dwconv.0.1              | output              | qint8         | 0.0505373 | -6.4687710   | 6.4182339     | 0.0232683    | 1.0443941        | torch.Size([12, 192, 16, 44])    |
| 88      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv1.0               | input               | qint8         | 0.0505373 | -6.4687710   | 6.4182339     | 0.0232683    | 1.0443941        | torch.Size([12, 192, 16, 44])    |
| 88      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv1.0               | weight              | torch.float32 |           | -0.3219835   | 0.3560996     | -0.0006409   | 0.0051941        | torch.Size([384, 192, 1, 1])     |
| 88      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv1.0               | bias                | torch.float32 |           | -0.3400693   | 0.0379616     | -0.1582377   | 0.0047251        | torch.Size([384])                |
| 88      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv1.0               | output              | qint8         | 0.0723911 | -9.2660627   | 9.1936712     | -0.9687260   | 1.9127924        | torch.Size([12, 384, 16, 44])    |
| 89      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.2.pwconv1.1               | input               | qint8         | 0.0723911 | -9.2660627   | 9.1936712     | -0.9687260   | 1.9127924        | torch.Size([12, 384, 16, 44])    |
| 89      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.2.pwconv1.1               | output              | qint8         | 0.0412302 | -0.1649207   | 5.2362328     | 0.0795320    | 0.1954164        | torch.Size([12, 384, 16, 44])    |
| 90      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv2                 | input               | qint8         | 0.0412302 | -0.1649207   | 5.2362328     | 0.0795320    | 0.1954164        | torch.Size([12, 384, 16, 44])    |
| 90      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv2                 | weight              | torch.float32 |           | -0.3018730   | 0.3186386     | 0.0005609    | 0.0042065        | torch.Size([192, 384, 1, 1])     |
| 90      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv2                 | bias                | torch.float32 |           | -0.2679185   | 0.2253690     | 0.0019991    | 0.0100274        | torch.Size([192])                |
| 90      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.2.pwconv2                 | output              | torch.float32 |           | -6.3604918   | 8.2544718     | 0.0314360    | 0.6508299        | torch.Size([12, 192, 16, 44])    |
| 91      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.2.layer_scale             | input               | torch.float32 |           | -6.3604918   | 8.2544718     | 0.0314360    | 0.6508299        | torch.Size([12, 192, 16, 44])    |
| 91      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.2.layer_scale             | output              | torch.float32 |           | -6.3604918   | 8.2544718     | 0.0314360    | 0.6508299        | torch.Size([12, 192, 16, 44])    |
| 92      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.2.add                     | input_0             | qint8         | 0.0797823 | -10.2121325  | 10.1323500    | -0.0264416   | 3.0118122        | torch.Size([12, 192, 16, 44])    |
| 92      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.2.add                     | input_1             | torch.float32 |           | -6.3604918   | 8.2544718     | 0.0314360    | 0.6508299        | torch.Size([12, 192, 16, 44])    |
| 92      | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.2.add                     | output              | qint8         | 0.0928293 | -11.8821564  | 11.7893267    | 0.0050207    | 4.0303593        | torch.Size([12, 192, 16, 44])    |
| 93      | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.2.extra_act               | input               | qint8         | 0.0928293 | -11.8821564  | 11.7893267    | 0.0050207    | 4.0303593        | torch.Size([12, 192, 16, 44])    |
| 93      | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.2.extra_act               | output              | qint8         | 0.0928293 | -11.8821564  | 11.7893267    | 0.0050207    | 4.0303593        | torch.Size([12, 192, 16, 44])    |
| 94      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.dwconv.0.0              | input               | qint8         | 0.0928293 | -11.8821564  | 11.7893267    | 0.0050207    | 4.0303593        | torch.Size([12, 192, 16, 44])    |
| 94      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.dwconv.0.0              | weight              | torch.float32 |           | -0.8587278   | 0.8105494     | 0.0128680    | 0.0902263        | torch.Size([192, 1, 5, 1])       |
| 94      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.dwconv.0.0              | bias                | torch.float32 |           | -1.2032009   | 1.1422203     | -0.0321676   | 0.1259915        | torch.Size([192])                |
| 94      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.dwconv.0.0              | output              | qint8         | 0.0498528 | -6.3811598   | 6.3313069     | -0.0030652   | 1.1584494        | torch.Size([12, 192, 16, 44])    |
| 95      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.3.dwconv.0.1              | input               | qint8         | 0.0498528 | -6.3811598   | 6.3313069     | -0.0030652   | 1.1584494        | torch.Size([12, 192, 16, 44])    |
| 95      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.3.dwconv.0.1              | output              | qint8         | 0.0498528 | -6.3811598   | 6.3313069     | -0.0030652   | 1.1584494        | torch.Size([12, 192, 16, 44])    |
| 96      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv1.0               | input               | qint8         | 0.0498528 | -6.3811598   | 6.3313069     | -0.0030652   | 1.1584494        | torch.Size([12, 192, 16, 44])    |
| 96      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv1.0               | weight              | torch.float32 |           | -0.3382373   | 0.3421775     | 0.0001951    | 0.0059827        | torch.Size([384, 192, 1, 1])     |
| 96      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv1.0               | bias                | torch.float32 |           | -0.3560182   | 0.0539114     | -0.1509591   | 0.0047631        | torch.Size([384])                |
| 96      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv1.0               | output              | qint8         | 0.0788659 | -10.0948296  | 7.8865857     | -1.0132266   | 2.3530803        | torch.Size([12, 384, 16, 44])    |
| 97      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.3.pwconv1.1               | input               | qint8         | 0.0788659 | -10.0948296  | 7.8865857     | -1.0132266   | 2.3530803        | torch.Size([12, 384, 16, 44])    |
| 97      | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.3.pwconv1.1               | output              | qint8         | 0.0525434 | -0.1576301   | 6.6730070     | 0.1248774    | 0.2700762        | torch.Size([12, 384, 16, 44])    |
| 98      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv2                 | input               | qint8         | 0.0525434 | -0.1576301   | 6.6730070     | 0.1248774    | 0.2700762        | torch.Size([12, 384, 16, 44])    |
| 98      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv2                 | weight              | torch.float32 |           | -0.3477112   | 0.3268056     | 0.0000250    | 0.0058916        | torch.Size([192, 384, 1, 1])     |
| 98      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv2                 | bias                | torch.float32 |           | -0.3212580   | 0.2838761     | 0.0174479    | 0.0123456        | torch.Size([192])                |
| 98      | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.3.pwconv2                 | output              | torch.float32 |           | -8.9968758   | 10.2588215    | 0.0183393    | 1.1972562        | torch.Size([12, 192, 16, 44])    |
| 99      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.3.layer_scale             | input               | torch.float32 |           | -8.9968758   | 10.2588215    | 0.0183393    | 1.1972562        | torch.Size([12, 192, 16, 44])    |
| 99      | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.3.layer_scale             | output              | torch.float32 |           | -8.9968758   | 10.2588215    | 0.0183393    | 1.1972562        | torch.Size([12, 192, 16, 44])    |
| 100     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.3.add                     | input_0             | qint8         | 0.0928293 | -11.8821564  | 11.7893267    | 0.0050207    | 4.0303593        | torch.Size([12, 192, 16, 44])    |
| 100     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.3.add                     | input_1             | torch.float32 |           | -8.9968758   | 10.2588215    | 0.0183393    | 1.1972562        | torch.Size([12, 192, 16, 44])    |
| 100     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.3.add                     | output              | qint8         | 0.1046339 | -13.3931427  | 13.2885084    | 0.0233726    | 5.8373060        | torch.Size([12, 192, 16, 44])    |
| 101     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.3.extra_act               | input               | qint8         | 0.1046339 | -13.3931427  | 13.2885084    | 0.0233726    | 5.8373060        | torch.Size([12, 192, 16, 44])    |
| 101     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.3.extra_act               | output              | qint8         | 0.1046339 | -13.3931427  | 13.2885084    | 0.0233726    | 5.8373060        | torch.Size([12, 192, 16, 44])    |
| 102     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.dwconv.0.0              | input               | qint8         | 0.1046339 | -13.3931427  | 13.2885084    | 0.0233726    | 5.8373060        | torch.Size([12, 192, 16, 44])    |
| 102     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.dwconv.0.0              | weight              | torch.float32 |           | -0.9060833   | 0.8625374     | 0.0063610    | 0.0929200        | torch.Size([192, 1, 1, 5])       |
| 102     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.dwconv.0.0              | bias                | torch.float32 |           | -1.3799524   | 1.0802820     | 0.0005331    | 0.1186790        | torch.Size([192])                |
| 102     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.dwconv.0.0              | output              | qint8         | 0.0557816 | -7.1400456   | 7.0842638     | 0.0280672    | 1.0752466        | torch.Size([12, 192, 16, 44])    |
| 103     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.4.dwconv.0.1              | input               | qint8         | 0.0557816 | -7.1400456   | 7.0842638     | 0.0280672    | 1.0752466        | torch.Size([12, 192, 16, 44])    |
| 103     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.4.dwconv.0.1              | output              | qint8         | 0.0557816 | -7.1400456   | 7.0842638     | 0.0280672    | 1.0752466        | torch.Size([12, 192, 16, 44])    |
| 104     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv1.0               | input               | qint8         | 0.0557816 | -7.1400456   | 7.0842638     | 0.0280672    | 1.0752466        | torch.Size([12, 192, 16, 44])    |
| 104     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv1.0               | weight              | torch.float32 |           | -0.3992376   | 0.3128951     | -0.0014283   | 0.0053068        | torch.Size([384, 192, 1, 1])     |
| 104     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv1.0               | bias                | torch.float32 |           | -0.4347230   | 0.0640009     | -0.1505269   | 0.0066143        | torch.Size([384])                |
| 104     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv1.0               | output              | qint8         | 0.0730804 | -9.3542948   | 8.9158125     | -0.8543031   | 1.6489099        | torch.Size([12, 384, 16, 44])    |
| 105     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.4.pwconv1.1               | input               | qint8         | 0.0730804 | -9.3542948   | 8.9158125     | -0.8543031   | 1.6489099        | torch.Size([12, 384, 16, 44])    |
| 105     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.4.pwconv1.1               | output              | qint8         | 0.0530567 | -0.1591702   | 6.7382050     | 0.0771281    | 0.1936388        | torch.Size([12, 384, 16, 44])    |
| 106     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv2                 | input               | qint8         | 0.0530567 | -0.1591702   | 6.7382050     | 0.0771281    | 0.1936388        | torch.Size([12, 384, 16, 44])    |
| 106     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv2                 | weight              | torch.float32 |           | -0.3879046   | 0.3055161     | 0.0006160    | 0.0044630        | torch.Size([192, 384, 1, 1])     |
| 106     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv2                 | bias                | torch.float32 |           | -0.2859674   | 0.2270272     | 0.0141211    | 0.0085002        | torch.Size([192])                |
| 106     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.4.pwconv2                 | output              | torch.float32 |           | -9.2299747   | 10.1114788    | 0.0326795    | 0.6589628        | torch.Size([12, 192, 16, 44])    |
| 107     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.4.layer_scale             | input               | torch.float32 |           | -9.2299747   | 10.1114788    | 0.0326795    | 0.6589628        | torch.Size([12, 192, 16, 44])    |
| 107     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.4.layer_scale             | output              | torch.float32 |           | -9.2299747   | 10.1114788    | 0.0326795    | 0.6589628        | torch.Size([12, 192, 16, 44])    |
| 108     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.4.add                     | input_0             | qint8         | 0.1046339 | -13.3931427  | 13.2885084    | 0.0233726    | 5.8373060        | torch.Size([12, 192, 16, 44])    |
| 108     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.4.add                     | input_1             | torch.float32 |           | -9.2299747   | 10.1114788    | 0.0326795    | 0.6589628        | torch.Size([12, 192, 16, 44])    |
| 108     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.4.add                     | output              | qint8         | 0.1156204 | -14.7994127  | 14.6837921    | 0.0561357    | 6.7737050        | torch.Size([12, 192, 16, 44])    |
| 109     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.4.extra_act               | input               | qint8         | 0.1156204 | -14.7994127  | 14.6837921    | 0.0561357    | 6.7737050        | torch.Size([12, 192, 16, 44])    |
| 109     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.4.extra_act               | output              | qint8         | 0.1156204 | -14.7994127  | 14.6837921    | 0.0561357    | 6.7737050        | torch.Size([12, 192, 16, 44])    |
| 110     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.dwconv.0.0              | input               | qint8         | 0.1156204 | -14.7994127  | 14.6837921    | 0.0561357    | 6.7737050        | torch.Size([12, 192, 16, 44])    |
| 110     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.dwconv.0.0              | weight              | torch.float32 |           | -0.6616382   | 0.6243868     | 0.0129125    | 0.0588053        | torch.Size([192, 1, 5, 1])       |
| 110     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.dwconv.0.0              | bias                | torch.float32 |           | -1.7488381   | 1.4660774     | 0.0158886    | 0.1826286        | torch.Size([192])                |
| 110     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.dwconv.0.0              | output              | qint8         | 0.0552978 | -7.0781150   | 7.0228171     | -0.0177886   | 1.2274517        | torch.Size([12, 192, 16, 44])    |
| 111     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.5.dwconv.0.1              | input               | qint8         | 0.0552978 | -7.0781150   | 7.0228171     | -0.0177886   | 1.2274517        | torch.Size([12, 192, 16, 44])    |
| 111     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.5.dwconv.0.1              | output              | qint8         | 0.0552978 | -7.0781150   | 7.0228171     | -0.0177886   | 1.2274517        | torch.Size([12, 192, 16, 44])    |
| 112     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv1.0               | input               | qint8         | 0.0552978 | -7.0781150   | 7.0228171     | -0.0177886   | 1.2274517        | torch.Size([12, 192, 16, 44])    |
| 112     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv1.0               | weight              | torch.float32 |           | -0.3696525   | 0.3289170     | 0.0014847    | 0.0064055        | torch.Size([384, 192, 1, 1])     |
| 112     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv1.0               | bias                | torch.float32 |           | -0.4440894   | 0.0760342     | -0.1446273   | 0.0064442        | torch.Size([384])                |
| 112     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv1.0               | output              | qint8         | 0.0892405 | -11.4227791  | 10.6196146    | -1.0651765   | 2.5147996        | torch.Size([12, 384, 16, 44])    |
| 113     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.5.pwconv1.1               | input               | qint8         | 0.0892405 | -11.4227791  | 10.6196146    | -1.0651765   | 2.5147996        | torch.Size([12, 384, 16, 44])    |
| 113     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.5.pwconv1.1               | output              | qint8         | 0.0528542 | -0.1585626   | 6.7124853     | 0.1315110    | 0.2941858        | torch.Size([12, 384, 16, 44])    |
| 114     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv2                 | input               | qint8         | 0.0528542 | -0.1585626   | 6.7124853     | 0.1315110    | 0.2941858        | torch.Size([12, 384, 16, 44])    |
| 114     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv2                 | weight              | torch.float32 |           | -0.3631534   | 0.4298059     | 0.0016655    | 0.0074091        | torch.Size([192, 384, 1, 1])     |
| 114     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv2                 | bias                | torch.float32 |           | -0.2528028   | 0.4737013     | 0.0192933    | 0.0113406        | torch.Size([192])                |
| 114     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.5.pwconv2                 | output              | torch.float32 |           | -13.7396240  | 12.6181145    | 0.1244856    | 1.7505455        | torch.Size([12, 192, 16, 44])    |
| 115     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.5.layer_scale             | input               | torch.float32 |           | -13.7396240  | 12.6181145    | 0.1244856    | 1.7505455        | torch.Size([12, 192, 16, 44])    |
| 115     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.5.layer_scale             | output              | torch.float32 |           | -13.7396240  | 12.6181145    | 0.1244856    | 1.7505455        | torch.Size([12, 192, 16, 44])    |
| 116     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.5.add                     | input_0             | qint8         | 0.1156204 | -14.7994127  | 14.6837921    | 0.0561357    | 6.7737050        | torch.Size([12, 192, 16, 44])    |
| 116     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.5.add                     | input_1             | torch.float32 |           | -13.7396240  | 12.6181145    | 0.1244856    | 1.7505455        | torch.Size([12, 192, 16, 44])    |
| 116     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.5.add                     | output              | qint8         | 0.1268216 | -16.2331676  | 16.1063461    | 0.1805643    | 9.4076042        | torch.Size([12, 192, 16, 44])    |
| 117     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.5.extra_act               | input               | qint8         | 0.1268216 | -16.2331676  | 16.1063461    | 0.1805643    | 9.4076042        | torch.Size([12, 192, 16, 44])    |
| 117     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.5.extra_act               | output              | qint8         | 0.1268216 | -16.2331676  | 16.1063461    | 0.1805643    | 9.4076042        | torch.Size([12, 192, 16, 44])    |
| 118     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.dwconv.0.0              | input               | qint8         | 0.1268216 | -16.2331676  | 16.1063461    | 0.1805643    | 9.4076042        | torch.Size([12, 192, 16, 44])    |
| 118     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.dwconv.0.0              | weight              | torch.float32 |           | -0.6474196   | 0.6681350     | -0.0037086   | 0.0525745        | torch.Size([192, 1, 1, 5])       |
| 118     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.dwconv.0.0              | bias                | torch.float32 |           | -1.3961746   | 1.5103958     | -0.0748284   | 0.1962233        | torch.Size([192])                |
| 118     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.dwconv.0.0              | output              | qint8         | 0.0578552 | -7.4054675   | 7.3476124     | 0.0255796    | 1.0900429        | torch.Size([12, 192, 16, 44])    |
| 119     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.6.dwconv.0.1              | input               | qint8         | 0.0578552 | -7.4054675   | 7.3476124     | 0.0255796    | 1.0900429        | torch.Size([12, 192, 16, 44])    |
| 119     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.6.dwconv.0.1              | output              | qint8         | 0.0578552 | -7.4054675   | 7.3476124     | 0.0255796    | 1.0900429        | torch.Size([12, 192, 16, 44])    |
| 120     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv1.0               | input               | qint8         | 0.0578552 | -7.4054675   | 7.3476124     | 0.0255796    | 1.0900429        | torch.Size([12, 192, 16, 44])    |
| 120     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv1.0               | weight              | torch.float32 |           | -0.3172777   | 0.4361245     | 0.0001557    | 0.0056055        | torch.Size([384, 192, 1, 1])     |
| 120     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv1.0               | bias                | torch.float32 |           | -0.4558428   | 0.1022655     | -0.1479584   | 0.0081169        | torch.Size([384])                |
| 120     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv1.0               | output              | qint8         | 0.0842903 | -10.7891617  | 10.3677101    | -0.7748241   | 1.9231483        | torch.Size([12, 384, 16, 44])    |
| 121     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.6.pwconv1.1               | input               | qint8         | 0.0842903 | -10.7891617  | 10.3677101    | -0.7748241   | 1.9231483        | torch.Size([12, 384, 16, 44])    |
| 121     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.6.pwconv1.1               | output              | qint8         | 0.0544433 | -0.1633298   | 6.9142957     | 0.1279866    | 0.2731014        | torch.Size([12, 384, 16, 44])    |
| 122     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv2                 | input               | qint8         | 0.0544433 | -0.1633298   | 6.9142957     | 0.1279866    | 0.2731014        | torch.Size([12, 384, 16, 44])    |
| 122     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv2                 | weight              | torch.float32 |           | -0.3467066   | 0.3264248     | 0.0004352    | 0.0052596        | torch.Size([192, 384, 1, 1])     |
| 122     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv2                 | bias                | torch.float32 |           | -0.3885508   | 0.3460018     | 0.0132673    | 0.0095256        | torch.Size([192])                |
| 122     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.6.pwconv2                 | output              | torch.float32 |           | -12.5316553  | 14.4050045    | 0.0218073    | 1.3758197        | torch.Size([12, 192, 16, 44])    |
| 123     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.6.layer_scale             | input               | torch.float32 |           | -12.5316553  | 14.4050045    | 0.0218073    | 1.3758197        | torch.Size([12, 192, 16, 44])    |
| 123     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.6.layer_scale             | output              | torch.float32 |           | -12.5316553  | 14.4050045    | 0.0218073    | 1.3758197        | torch.Size([12, 192, 16, 44])    |
| 124     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.6.add                     | input_0             | qint8         | 0.1268216 | -16.2331676  | 16.1063461    | 0.1805643    | 9.4076042        | torch.Size([12, 192, 16, 44])    |
| 124     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.6.add                     | input_1             | torch.float32 |           | -12.5316553  | 14.4050045    | 0.0218073    | 1.3758197        | torch.Size([12, 192, 16, 44])    |
| 124     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.6.add                     | output              | qint8         | 0.1676833 | -20.7927265  | 21.2957764    | 0.2023959    | 11.0869160       | torch.Size([12, 192, 16, 44])    |
| 125     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.6.extra_act               | input               | qint8         | 0.1676833 | -20.7927265  | 21.2957764    | 0.2023959    | 11.0869160       | torch.Size([12, 192, 16, 44])    |
| 125     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.6.extra_act               | output              | qint8         | 0.1676833 | -20.7927265  | 21.2957764    | 0.2023959    | 11.0869160       | torch.Size([12, 192, 16, 44])    |
| 126     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.dwconv.0.0              | input               | qint8         | 0.1676833 | -20.7927265  | 21.2957764    | 0.2023959    | 11.0869160       | torch.Size([12, 192, 16, 44])    |
| 126     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.dwconv.0.0              | weight              | torch.float32 |           | -0.5165650   | 0.5456899     | 0.0030580    | 0.0371696        | torch.Size([192, 1, 5, 1])       |
| 126     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.dwconv.0.0              | bias                | torch.float32 |           | -1.6416326   | 1.3166218     | -0.0237879   | 0.2125032        | torch.Size([192])                |
| 126     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.dwconv.0.0              | output              | qint8         | 0.0606644 | -7.7650452   | 7.7043810     | 0.0074102    | 1.1587801        | torch.Size([12, 192, 16, 44])    |
| 127     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.7.dwconv.0.1              | input               | qint8         | 0.0606644 | -7.7650452   | 7.7043810     | 0.0074102    | 1.1587801        | torch.Size([12, 192, 16, 44])    |
| 127     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.7.dwconv.0.1              | output              | qint8         | 0.0606644 | -7.7650452   | 7.7043810     | 0.0074102    | 1.1587801        | torch.Size([12, 192, 16, 44])    |
| 128     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv1.0               | input               | qint8         | 0.0606644 | -7.7650452   | 7.7043810     | 0.0074102    | 1.1587801        | torch.Size([12, 192, 16, 44])    |
| 128     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv1.0               | weight              | torch.float32 |           | -0.3543261   | 0.3535915     | 0.0000997    | 0.0064399        | torch.Size([384, 192, 1, 1])     |
| 128     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv1.0               | bias                | torch.float32 |           | -0.3648984   | 0.0541026     | -0.1423695   | 0.0057704        | torch.Size([384])                |
| 128     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv1.0               | output              | qint8         | 0.0851890 | -10.9041929  | 10.8190041    | -1.0041436   | 2.3748567        | torch.Size([12, 384, 16, 44])    |
| 129     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.7.pwconv1.1               | input               | qint8         | 0.0851890 | -10.9041929  | 10.8190041    | -1.0041436   | 2.3748567        | torch.Size([12, 384, 16, 44])    |
| 129     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.2.block.7.pwconv1.1               | output              | qint8         | 0.0776078 | -0.1552155   | 9.8561850     | 0.1291442    | 0.3238890        | torch.Size([12, 384, 16, 44])    |
| 130     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv2                 | input               | qint8         | 0.0776078 | -0.1552155   | 9.8561850     | 0.1291442    | 0.3238890        | torch.Size([12, 384, 16, 44])    |
| 130     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv2                 | weight              | torch.float32 |           | -0.4259546   | 0.3993227     | -0.0002211   | 0.0079722        | torch.Size([192, 384, 1, 1])     |
| 130     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv2                 | bias                | torch.float32 |           | -0.1842889   | 0.1827211     | 0.0032989    | 0.0048203        | torch.Size([192])                |
| 130     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.2.block.7.pwconv2                 | output              | torch.float32 |           | -22.9059868  | 22.0217476    | -0.0327473   | 4.7102175        | torch.Size([12, 192, 16, 44])    |
| 131     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.7.layer_scale             | input               | torch.float32 |           | -22.9059868  | 22.0217476    | -0.0327473   | 4.7102175        | torch.Size([12, 192, 16, 44])    |
| 131     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.2.block.7.layer_scale             | output              | torch.float32 |           | -22.9059868  | 22.0217476    | -0.0327473   | 4.7102175        | torch.Size([12, 192, 16, 44])    |
| 132     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.7.add                     | input_0             | qint8         | 0.1676833 | -20.7927265  | 21.2957764    | 0.2023959    | 11.0869160       | torch.Size([12, 192, 16, 44])    |
| 132     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.7.add                     | input_1             | torch.float32 |           | -22.9059868  | 22.0217476    | -0.0327473   | 4.7102175        | torch.Size([12, 192, 16, 44])    |
| 132     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.2.block.7.add                     | output              | qint8         | 0.2330543 | -29.8309441  | 29.5978889    | 0.1696626    | 18.2211380       | torch.Size([12, 192, 16, 44])    |
| 133     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.7.extra_act               | input               | qint8         | 0.2330543 | -29.8309441  | 29.5978889    | 0.1696626    | 18.2211380       | torch.Size([12, 192, 16, 44])    |
| 133     | torch.nn.modules.linear.Identity                                            | backbone.stages.2.block.7.extra_act               | output              | qint8         | 0.2330543 | -29.8309441  | 29.5978889    | 0.1696626    | 18.2211380       | torch.Size([12, 192, 16, 44])    |
| 134     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.2                             | input               | qint8         | 0.2330543 | -29.8309441  | 29.5978889    | 0.1696626    | 18.2211380       | torch.Size([12, 192, 16, 44])    |
| 134     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.2                             | weight              | torch.float32 |           | 0.4536798    | 0.8310655     | 0.6572363    | 0.0057898        | torch.Size([192])                |
| 134     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.2                             | bias                | torch.float32 |           | -0.1366851   | 0.1372305     | 0.0004346    | 0.0031207        | torch.Size([192])                |
| 134     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.2                             | running_mean        | torch.float32 |           | -5.9906597   | 4.8830447     | 0.2049369    | 3.5092514        | torch.Size([192])                |
| 134     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.2                             | running_var         | torch.float32 |           | 6.6480818    | 34.8125572    | 19.6136646   | 31.4906311       | torch.Size([192])                |
| 134     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.2                             | num_batches_tracked | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | nan              | torch.Size([])                   |
| 134     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.2                             | output              | qint8         | 0.0446985 | -5.7214088   | 5.6767101     | -0.0054187   | 0.3409360        | torch.Size([12, 192, 16, 44])    |
| 135     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.2.proj.0                | input               | qint8         | 0.2330543 | -29.8309441  | 29.5978889    | 0.1696626    | 18.2211380       | torch.Size([12, 192, 16, 44])    |
| 135     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.2.proj.0                | weight              | torch.float32 |           | -0.0324320   | 0.0279991     | 0.0000044    | 0.0000225        | torch.Size([384, 192, 2, 2])     |
| 135     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.2.proj.0                | bias                | torch.float32 |           | -2.8980277   | 2.4975598     | -0.0468593   | 0.6053444        | torch.Size([384])                |
| 135     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.downsample_block.2.proj.0                | output              | qint8         | 0.0557805 | -7.1398997   | 7.0841193     | -0.0081402   | 1.2429804        | torch.Size([12, 384, 8, 22])     |
| 136     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.downsample_block.2.proj.1                | input               | qint8         | 0.0557805 | -7.1398997   | 7.0841193     | -0.0081402   | 1.2429804        | torch.Size([12, 384, 8, 22])     |
| 136     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.downsample_block.2.proj.1                | output              | qint8         | 0.0557805 | -7.1398997   | 7.0841193     | -0.0081402   | 1.2429804        | torch.Size([12, 384, 8, 22])     |
| 137     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.dwconv.0                | input               | qint8         | 0.0557805 | -7.1398997   | 7.0841193     | -0.0081402   | 1.2429804        | torch.Size([12, 384, 8, 22])     |
| 137     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.dwconv.0                | weight              | torch.float32 |           | -2.2572098   | 2.9652271     | -0.0022723   | 0.1532137        | torch.Size([384, 1, 3, 3])       |
| 137     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.dwconv.0                | bias                | torch.float32 |           | -0.8557070   | 1.0467781     | -0.0016701   | 0.0800254        | torch.Size([384])                |
| 137     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.dwconv.0                | output              | qint8         | 0.0442714 | -5.6667433   | 5.6224718     | 0.0039471    | 0.8448867        | torch.Size([12, 384, 8, 22])     |
| 138     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.0.dwconv.1                | input               | qint8         | 0.0442714 | -5.6667433   | 5.6224718     | 0.0039471    | 0.8448867        | torch.Size([12, 384, 8, 22])     |
| 138     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.0.dwconv.1                | output              | qint8         | 0.0442714 | -5.6667433   | 5.6224718     | 0.0039471    | 0.8448867        | torch.Size([12, 384, 8, 22])     |
| 139     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv1                 | input               | qint8         | 0.0442714 | -5.6667433   | 5.6224718     | 0.0039471    | 0.8448867        | torch.Size([12, 384, 8, 22])     |
| 139     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv1                 | weight              | torch.float32 |           | -0.3454532   | 0.4094583     | -0.0002953   | 0.0041955        | torch.Size([1152, 384, 1, 1])    |
| 139     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv1                 | bias                | torch.float32 |           | -0.3558742   | 0.1381556     | -0.1628864   | 0.0041367        | torch.Size([1152])               |
| 139     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv1                 | output              | qint8         | 0.0966536 | -12.3716612  | 12.2750072    | -1.8694398   | 4.0503216        | torch.Size([12, 1152, 8, 22])    |
| 140     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.0.act                     | input               | qint8         | 0.0966536 | -12.3716612  | 12.2750072    | -1.8694398   | 4.0503216        | torch.Size([12, 1152, 8, 22])    |
| 140     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.0.act                     | output              | qint8         | 0.0757557 | -0.1515114   | 9.6209717     | 0.1136040    | 0.3117645        | torch.Size([12, 1152, 8, 22])    |
| 141     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv2                 | input               | qint8         | 0.0757557 | -0.1515114   | 9.6209717     | 0.1136040    | 0.3117645        | torch.Size([12, 1152, 8, 22])    |
| 141     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv2                 | weight              | torch.float32 |           | -0.2840501   | 0.2853577     | 0.0001908    | 0.0028844        | torch.Size([384, 1152, 1, 1])    |
| 141     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv2                 | bias                | torch.float32 |           | -0.3010883   | 0.2499636     | -0.0046456   | 0.0063172        | torch.Size([384])                |
| 141     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.0.pwconv2                 | output              | torch.float32 |           | -20.9845676  | 18.8979645    | 0.0276539    | 4.0081978        | torch.Size([12, 384, 8, 22])     |
| 142     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.0.layer_scale             | input               | torch.float32 |           | -20.9845676  | 18.8979645    | 0.0276539    | 4.0081978        | torch.Size([12, 384, 8, 22])     |
| 142     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.0.layer_scale             | output              | torch.float32 |           | -20.9845676  | 18.8979645    | 0.0276539    | 4.0081978        | torch.Size([12, 384, 8, 22])     |
| 143     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.0.add                     | input_0             | qint8         | 0.0557805 | -7.1398997   | 7.0841193     | -0.0081402   | 1.2429804        | torch.Size([12, 384, 8, 22])     |
| 143     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.0.add                     | input_1             | torch.float32 |           | -20.9845676  | 18.8979645    | 0.0276539    | 4.0081978        | torch.Size([12, 384, 8, 22])     |
| 143     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.0.add                     | output              | qint8         | 0.1447716 | -18.5307674  | 18.3859959    | 0.0195229    | 5.9488029        | torch.Size([12, 384, 8, 22])     |
| 144     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.0.extra_act               | input               | qint8         | 0.1447716 | -18.5307674  | 18.3859959    | 0.0195229    | 5.9488029        | torch.Size([12, 384, 8, 22])     |
| 144     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.0.extra_act               | output              | qint8         | 0.1447716 | -18.5307674  | 18.3859959    | 0.0195229    | 5.9488029        | torch.Size([12, 384, 8, 22])     |
| 145     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.dwconv.0                | input               | qint8         | 0.1447716 | -18.5307674  | 18.3859959    | 0.0195229    | 5.9488029        | torch.Size([12, 384, 8, 22])     |
| 145     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.dwconv.0                | weight              | torch.float32 |           | -0.5497411   | 0.5855303     | 0.0011440    | 0.0334270        | torch.Size([384, 1, 3, 3])       |
| 145     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.dwconv.0                | bias                | torch.float32 |           | -0.6564792   | 0.8316791     | 0.0053440    | 0.0519585        | torch.Size([384])                |
| 145     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.dwconv.0                | output              | qint8         | 0.0512266 | -6.5570102   | 6.5057836     | -0.0206101   | 0.9755064        | torch.Size([12, 384, 8, 22])     |
| 146     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.1.dwconv.1                | input               | qint8         | 0.0512266 | -6.5570102   | 6.5057836     | -0.0206101   | 0.9755064        | torch.Size([12, 384, 8, 22])     |
| 146     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.1.dwconv.1                | output              | qint8         | 0.0512266 | -6.5570102   | 6.5057836     | -0.0206101   | 0.9755064        | torch.Size([12, 384, 8, 22])     |
| 147     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv1                 | input               | qint8         | 0.0512266 | -6.5570102   | 6.5057836     | -0.0206101   | 0.9755064        | torch.Size([12, 384, 8, 22])     |
| 147     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv1                 | weight              | torch.float32 |           | -0.3329572   | 0.3142635     | 0.0011703    | 0.0044448        | torch.Size([1152, 384, 1, 1])    |
| 147     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv1                 | bias                | torch.float32 |           | -0.3428729   | 0.0749362     | -0.1486899   | 0.0037087        | torch.Size([1152])               |
| 147     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv1                 | output              | qint8         | 0.1275831 | -16.3306351  | 11.0997286    | -2.0415404   | 4.9878559        | torch.Size([12, 1152, 8, 22])    |
| 148     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.1.act                     | input               | qint8         | 0.1275831 | -16.3306351  | 11.0997286    | -2.0415404   | 4.9878559        | torch.Size([12, 1152, 8, 22])    |
| 148     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.1.act                     | output              | qint8         | 0.0751976 | -0.1503951   | 9.5500898     | 0.0944096    | 0.2489907        | torch.Size([12, 1152, 8, 22])    |
| 149     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv2                 | input               | qint8         | 0.0751976 | -0.1503951   | 9.5500898     | 0.0944096    | 0.2489907        | torch.Size([12, 1152, 8, 22])    |
| 149     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv2                 | weight              | torch.float32 |           | -0.3141920   | 0.3154377     | 0.0000507    | 0.0037171        | torch.Size([384, 1152, 1, 1])    |
| 149     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv2                 | bias                | torch.float32 |           | -0.2160069   | 0.2532634     | -0.0027464   | 0.0057291        | torch.Size([384])                |
| 149     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.1.pwconv2                 | output              | torch.float32 |           | -19.0820713  | 20.6871281    | -0.0351512   | 3.9772739        | torch.Size([12, 384, 8, 22])     |
| 150     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.1.layer_scale             | input               | torch.float32 |           | -19.0820713  | 20.6871281    | -0.0351512   | 3.9772739        | torch.Size([12, 384, 8, 22])     |
| 150     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.1.layer_scale             | output              | torch.float32 |           | -19.0820713  | 20.6871281    | -0.0351512   | 3.9772739        | torch.Size([12, 384, 8, 22])     |
| 151     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.1.add                     | input_0             | qint8         | 0.1447716 | -18.5307674  | 18.3859959    | 0.0195229    | 5.9488029        | torch.Size([12, 384, 8, 22])     |
| 151     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.1.add                     | input_1             | torch.float32 |           | -19.0820713  | 20.6871281    | -0.0351512   | 3.9772739        | torch.Size([12, 384, 8, 22])     |
| 151     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.1.add                     | output              | qint8         | 0.1919433 | -24.3767967  | 24.3767967    | -0.0157998   | 12.1950016       | torch.Size([12, 384, 8, 22])     |
| 152     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.1.extra_act               | input               | qint8         | 0.1919433 | -24.3767967  | 24.3767967    | -0.0157998   | 12.1950016       | torch.Size([12, 384, 8, 22])     |
| 152     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.1.extra_act               | output              | qint8         | 0.1919433 | -24.3767967  | 24.3767967    | -0.0157998   | 12.1950016       | torch.Size([12, 384, 8, 22])     |
| 153     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.dwconv.0                | input               | qint8         | 0.1919433 | -24.3767967  | 24.3767967    | -0.0157998   | 12.1950016       | torch.Size([12, 384, 8, 22])     |
| 153     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.dwconv.0                | weight              | torch.float32 |           | -0.4889810   | 0.5235844     | -0.0002876   | 0.0208882        | torch.Size([384, 1, 3, 3])       |
| 153     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.dwconv.0                | bias                | torch.float32 |           | -1.2699754   | 1.2889090     | -0.0348300   | 0.0956152        | torch.Size([384])                |
| 153     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.dwconv.0                | output              | qint8         | 0.0526084 | -6.7338734   | 6.6812649     | 0.0027858    | 0.9892997        | torch.Size([12, 384, 8, 22])     |
| 154     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.2.dwconv.1                | input               | qint8         | 0.0526084 | -6.7338734   | 6.6812649     | 0.0027858    | 0.9892997        | torch.Size([12, 384, 8, 22])     |
| 154     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.2.dwconv.1                | output              | qint8         | 0.0526084 | -6.7338734   | 6.6812649     | 0.0027858    | 0.9892997        | torch.Size([12, 384, 8, 22])     |
| 155     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv1                 | input               | qint8         | 0.0526084 | -6.7338734   | 6.6812649     | 0.0027858    | 0.9892997        | torch.Size([12, 384, 8, 22])     |
| 155     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv1                 | weight              | torch.float32 |           | -0.3282626   | 0.4871326     | -0.0007906   | 0.0044548        | torch.Size([1152, 384, 1, 1])    |
| 155     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv1                 | bias                | torch.float32 |           | -0.3262930   | 0.0915803     | -0.1388855   | 0.0039567        | torch.Size([1152])               |
| 155     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv1                 | output              | qint8         | 0.1097593 | -14.0491953  | 13.9394360    | -1.7091929   | 4.1380987        | torch.Size([12, 1152, 8, 22])    |
| 156     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.2.act                     | input               | qint8         | 0.1097593 | -14.0491953  | 13.9394360    | -1.7091929   | 4.1380987        | torch.Size([12, 1152, 8, 22])    |
| 156     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.2.act                     | output              | qint8         | 0.0574498 | -0.1723494   | 7.2961240     | 0.1141629    | 0.3074441        | torch.Size([12, 1152, 8, 22])    |
| 157     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv2                 | input               | qint8         | 0.0574498 | -0.1723494   | 7.2961240     | 0.1141629    | 0.3074441        | torch.Size([12, 1152, 8, 22])    |
| 157     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv2                 | weight              | torch.float32 |           | -0.3461267   | 0.3738944     | -0.0003572   | 0.0039895        | torch.Size([384, 1152, 1, 1])    |
| 157     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv2                 | bias                | torch.float32 |           | -0.2189030   | 0.2009952     | -0.0055359   | 0.0066916        | torch.Size([384])                |
| 157     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.2.pwconv2                 | output              | torch.float32 |           | -35.6974220  | 42.3680382    | -0.0746995   | 6.3007522        | torch.Size([12, 384, 8, 22])     |
| 158     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.2.layer_scale             | input               | torch.float32 |           | -35.6974220  | 42.3680382    | -0.0746995   | 6.3007522        | torch.Size([12, 384, 8, 22])     |
| 158     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.2.layer_scale             | output              | torch.float32 |           | -35.6974220  | 42.3680382    | -0.0746995   | 6.3007522        | torch.Size([12, 384, 8, 22])     |
| 159     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.2.add                     | input_0             | qint8         | 0.1919433 | -24.3767967  | 24.3767967    | -0.0157998   | 12.1950016       | torch.Size([12, 384, 8, 22])     |
| 159     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.2.add                     | input_1             | torch.float32 |           | -35.6974220  | 42.3680382    | -0.0746995   | 6.3007522        | torch.Size([12, 384, 8, 22])     |
| 159     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.2.add                     | output              | qint8         | 0.3113140 | -39.8481979  | 39.5368843    | -0.0905911   | 23.1870041       | torch.Size([12, 384, 8, 22])     |
| 160     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.2.extra_act               | input               | qint8         | 0.3113140 | -39.8481979  | 39.5368843    | -0.0905911   | 23.1870041       | torch.Size([12, 384, 8, 22])     |
| 160     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.2.extra_act               | output              | qint8         | 0.3113140 | -39.8481979  | 39.5368843    | -0.0905911   | 23.1870041       | torch.Size([12, 384, 8, 22])     |
| 161     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.dwconv.0                | input               | qint8         | 0.3113140 | -39.8481979  | 39.5368843    | -0.0905911   | 23.1870041       | torch.Size([12, 384, 8, 22])     |
| 161     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.dwconv.0                | weight              | torch.float32 |           | -0.4032759   | 0.4019983     | -0.0000116   | 0.0121566        | torch.Size([384, 1, 3, 3])       |
| 161     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.dwconv.0                | bias                | torch.float32 |           | -1.4726284   | 1.7076901     | 0.0130897    | 0.1375311        | torch.Size([384])                |
| 161     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.dwconv.0                | output              | qint8         | 0.0593706 | -7.5994401   | 7.5400696     | 0.0017156    | 1.0580425        | torch.Size([12, 384, 8, 22])     |
| 162     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.3.dwconv.1                | input               | qint8         | 0.0593706 | -7.5994401   | 7.5400696     | 0.0017156    | 1.0580425        | torch.Size([12, 384, 8, 22])     |
| 162     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.3.dwconv.1                | output              | qint8         | 0.0593706 | -7.5994401   | 7.5400696     | 0.0017156    | 1.0580425        | torch.Size([12, 384, 8, 22])     |
| 163     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv1                 | input               | qint8         | 0.0593706 | -7.5994401   | 7.5400696     | 0.0017156    | 1.0580425        | torch.Size([12, 384, 8, 22])     |
| 163     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv1                 | weight              | torch.float32 |           | -0.3110211   | 0.4192705     | -0.0005754   | 0.0044924        | torch.Size([1152, 384, 1, 1])    |
| 163     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv1                 | bias                | torch.float32 |           | -0.3628277   | 0.1167385     | -0.1340542   | 0.0047501        | torch.Size([1152])               |
| 163     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv1                 | output              | qint8         | 0.1135963 | -14.5403214  | 14.4267254    | -1.6905724   | 4.8723574        | torch.Size([12, 1152, 8, 22])    |
| 164     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.3.act                     | input               | qint8         | 0.1135963 | -14.5403214  | 14.4267254    | -1.6905724   | 4.8723574        | torch.Size([12, 1152, 8, 22])    |
| 164     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.3.act                     | output              | qint8         | 0.0764504 | -0.1529007   | 9.7091951     | 0.1611411    | 0.4189578        | torch.Size([12, 1152, 8, 22])    |
| 165     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv2                 | input               | qint8         | 0.0764504 | -0.1529007   | 9.7091951     | 0.1611411    | 0.4189578        | torch.Size([12, 1152, 8, 22])    |
| 165     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv2                 | weight              | torch.float32 |           | -0.7040561   | 1.2002802     | -0.0005701   | 0.0041489        | torch.Size([384, 1152, 1, 1])    |
| 165     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv2                 | bias                | torch.float32 |           | -0.2491754   | 0.1973000     | -0.0066969   | 0.0068541        | torch.Size([384])                |
| 165     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.3.pwconv2                 | output              | torch.float32 |           | -68.7717819  | 88.5857544    | -0.0981152   | 12.1757803       | torch.Size([12, 384, 8, 22])     |
| 166     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.3.layer_scale             | input               | torch.float32 |           | -68.7717819  | 88.5857544    | -0.0981152   | 12.1757803       | torch.Size([12, 384, 8, 22])     |
| 166     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.3.layer_scale             | output              | torch.float32 |           | -68.7717819  | 88.5857544    | -0.0981152   | 12.1757803       | torch.Size([12, 384, 8, 22])     |
| 167     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.3.add                     | input_0             | qint8         | 0.3113140 | -39.8481979  | 39.5368843    | -0.0905911   | 23.1870041       | torch.Size([12, 384, 8, 22])     |
| 167     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.3.add                     | input_1             | torch.float32 |           | -68.7717819  | 88.5857544    | -0.0981152   | 12.1757803       | torch.Size([12, 384, 8, 22])     |
| 167     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.3.add                     | output              | qint8         | 0.5651718 | -72.3419952  | 71.7768250    | -0.1889161   | 44.8195305       | torch.Size([12, 384, 8, 22])     |
| 168     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.3.extra_act               | input               | qint8         | 0.5651718 | -72.3419952  | 71.7768250    | -0.1889161   | 44.8195305       | torch.Size([12, 384, 8, 22])     |
| 168     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.3.extra_act               | output              | qint8         | 0.5651718 | -72.3419952  | 71.7768250    | -0.1889161   | 44.8195305       | torch.Size([12, 384, 8, 22])     |
| 169     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.dwconv.0                | input               | qint8         | 0.5651718 | -72.3419952  | 71.7768250    | -0.1889161   | 44.8195305       | torch.Size([12, 384, 8, 22])     |
| 169     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.dwconv.0                | weight              | torch.float32 |           | -0.3250363   | 0.3248968     | -0.0000286   | 0.0078953        | torch.Size([384, 1, 3, 3])       |
| 169     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.dwconv.0                | bias                | torch.float32 |           | -1.8321942   | 1.8211768     | -0.0060723   | 0.1871269        | torch.Size([384])                |
| 169     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.dwconv.0                | output              | qint8         | 0.0631082 | -8.0778484   | 8.0147400     | -0.0156991   | 1.1107764        | torch.Size([12, 384, 8, 22])     |
| 170     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.4.dwconv.1                | input               | qint8         | 0.0631082 | -8.0778484   | 8.0147400     | -0.0156991   | 1.1107764        | torch.Size([12, 384, 8, 22])     |
| 170     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.4.dwconv.1                | output              | qint8         | 0.0631082 | -8.0778484   | 8.0147400     | -0.0156991   | 1.1107764        | torch.Size([12, 384, 8, 22])     |
| 171     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv1                 | input               | qint8         | 0.0631082 | -8.0778484   | 8.0147400     | -0.0156991   | 1.1107764        | torch.Size([12, 384, 8, 22])     |
| 171     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv1                 | weight              | torch.float32 |           | -0.3335321   | 0.4012497     | 0.0012445    | 0.0046102        | torch.Size([1152, 384, 1, 1])    |
| 171     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv1                 | bias                | torch.float32 |           | -0.3734207   | 0.1077710     | -0.1267115   | 0.0052377        | torch.Size([1152])               |
| 171     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv1                 | output              | qint8         | 0.1207512 | -15.4561539  | 15.3354025    | -1.6568954   | 5.4276032        | torch.Size([12, 1152, 8, 22])    |
| 172     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.4.act                     | input               | qint8         | 0.1207512 | -15.4561539  | 15.3354025    | -1.6568954   | 5.4276032        | torch.Size([12, 1152, 8, 22])    |
| 172     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.4.act                     | output              | qint8         | 0.0816583 | -0.1633166   | 10.3706055    | 0.2089320    | 0.5773143        | torch.Size([12, 1152, 8, 22])    |
| 173     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv2                 | input               | qint8         | 0.0816583 | -0.1633166   | 10.3706055    | 0.2089320    | 0.5773143        | torch.Size([12, 1152, 8, 22])    |
| 173     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv2                 | weight              | torch.float32 |           | -0.5479969   | 0.6943429     | 0.0004558    | 0.0046549        | torch.Size([384, 1152, 1, 1])    |
| 173     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv2                 | bias                | torch.float32 |           | -0.2239063   | 0.2391806     | -0.0033371   | 0.0073833        | torch.Size([384])                |
| 173     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.4.pwconv2                 | output              | torch.float32 |           | -117.0619507 | 113.1241531   | 0.0511690    | 23.7247677       | torch.Size([12, 384, 8, 22])     |
| 174     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.4.layer_scale             | input               | torch.float32 |           | -117.0619507 | 113.1241531   | 0.0511690    | 23.7247677       | torch.Size([12, 384, 8, 22])     |
| 174     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.4.layer_scale             | output              | torch.float32 |           | -117.0619507 | 113.1241531   | 0.0511690    | 23.7247677       | torch.Size([12, 384, 8, 22])     |
| 175     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.4.add                     | input_0             | qint8         | 0.5651718 | -72.3419952  | 71.7768250    | -0.1889161   | 44.8195305       | torch.Size([12, 384, 8, 22])     |
| 175     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.4.add                     | input_1             | torch.float32 |           | -117.0619507 | 113.1241531   | 0.0511690    | 23.7247677       | torch.Size([12, 384, 8, 22])     |
| 175     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.4.add                     | output              | qint8         | 0.8078725 | -103.4076843 | 102.5998154   | -0.1382425   | 86.4505005       | torch.Size([12, 384, 8, 22])     |
| 176     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.4.extra_act               | input               | qint8         | 0.8078725 | -103.4076843 | 102.5998154   | -0.1382425   | 86.4505005       | torch.Size([12, 384, 8, 22])     |
| 176     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.4.extra_act               | output              | qint8         | 0.8078725 | -103.4076843 | 102.5998154   | -0.1382425   | 86.4505005       | torch.Size([12, 384, 8, 22])     |
| 177     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.dwconv.0                | input               | qint8         | 0.8078725 | -103.4076843 | 102.5998154   | -0.1382425   | 86.4505005       | torch.Size([12, 384, 8, 22])     |
| 177     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.dwconv.0                | weight              | torch.float32 |           | -0.2206255   | 0.2807655     | -0.0000136   | 0.0038105        | torch.Size([384, 1, 3, 3])       |
| 177     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.dwconv.0                | bias                | torch.float32 |           | -2.9995344   | 2.9854457     | 0.0241023    | 0.2505129        | torch.Size([384])                |
| 177     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.dwconv.0                | output              | qint8         | 0.0660417 | -8.4533348   | 8.3872929     | -0.0166472   | 1.0422288        | torch.Size([12, 384, 8, 22])     |
| 178     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.5.dwconv.1                | input               | qint8         | 0.0660417 | -8.4533348   | 8.3872929     | -0.0166472   | 1.0422288        | torch.Size([12, 384, 8, 22])     |
| 178     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.5.dwconv.1                | output              | qint8         | 0.0660417 | -8.4533348   | 8.3872929     | -0.0166472   | 1.0422288        | torch.Size([12, 384, 8, 22])     |
| 179     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv1                 | input               | qint8         | 0.0660417 | -8.4533348   | 8.3872929     | -0.0166472   | 1.0422288        | torch.Size([12, 384, 8, 22])     |
| 179     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv1                 | weight              | torch.float32 |           | -0.3455547   | 0.4153213     | 0.0004850    | 0.0046058        | torch.Size([1152, 384, 1, 1])    |
| 179     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv1                 | bias                | torch.float32 |           | -0.3698098   | 0.1860601     | -0.1037247   | 0.0058884        | torch.Size([1152])               |
| 179     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv1                 | output              | qint8         | 0.1913075 | -24.2960587  | 24.2960587    | -1.2222917   | 5.7940683        | torch.Size([12, 1152, 8, 22])    |
| 180     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.5.act                     | input               | qint8         | 0.1913075 | -24.2960587  | 24.2960587    | -1.2222917   | 5.7940683        | torch.Size([12, 1152, 8, 22])    |
| 180     | horizon_plugin_pytorch.nn.gelu.GELU                                         | backbone.stages.3.block.5.act                     | output              | qint8         | 0.1897770 | -0.1897770   | 24.1016731    | 0.3289461    | 1.0412021        | torch.Size([12, 1152, 8, 22])    |
| 181     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv2                 | input               | qint8         | 0.1897770 | -0.1897770   | 24.1016731    | 0.3289461    | 1.0412021        | torch.Size([12, 1152, 8, 22])    |
| 181     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv2                 | weight              | torch.float32 |           | -0.4918636   | 0.4244632     | 0.0020804    | 0.0046009        | torch.Size([384, 1152, 1, 1])    |
| 181     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv2                 | bias                | torch.float32 |           | -0.1067470   | 0.1143304     | -0.0003810   | 0.0015117        | torch.Size([384])                |
| 181     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | backbone.stages.3.block.5.pwconv2                 | output              | torch.float32 |           | -626.6348267 | 670.6146851   | 0.4270325    | 218.3554840      | torch.Size([12, 384, 8, 22])     |
| 182     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.5.layer_scale             | input               | torch.float32 |           | -626.6348267 | 670.6146851   | 0.4270325    | 218.3554840      | torch.Size([12, 384, 8, 22])     |
| 182     | horizon_plugin_pytorch.nn.linear.Identity                                   | backbone.stages.3.block.5.layer_scale             | output              | torch.float32 |           | -626.6348267 | 670.6146851   | 0.4270325    | 218.3554840      | torch.Size([12, 384, 8, 22])     |
| 183     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.5.add                     | input_0             | qint8         | 0.8078725 | -103.4076843 | 102.5998154   | -0.1382425   | 86.4505005       | torch.Size([12, 384, 8, 22])     |
| 183     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.5.add                     | input_1             | torch.float32 |           | -626.6348267 | 670.6146851   | 0.4270325    | 218.3554840      | torch.Size([12, 384, 8, 22])     |
| 183     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | backbone.stages.3.block.5.add                     | output              | qint8         | 4.9393206 | -632.2330322 | 627.2937012   | 0.2874461    | 350.2877197      | torch.Size([12, 384, 8, 22])     |
| 184     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.5.extra_act               | input               | qint8         | 4.9393206 | -632.2330322 | 627.2937012   | 0.2874461    | 350.2877197      | torch.Size([12, 384, 8, 22])     |
| 184     | torch.nn.modules.linear.Identity                                            | backbone.stages.3.block.5.extra_act               | output              | qint8         | 4.9393206 | -632.2330322 | 627.2937012   | 0.2874461    | 350.2877197      | torch.Size([12, 384, 8, 22])     |
| 185     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.3                             | input               | qint8         | 4.9393206 | -632.2330322 | 627.2937012   | 0.2874461    | 350.2877197      | torch.Size([12, 384, 8, 22])     |
| 185     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.3                             | weight              | torch.float32 |           | 0.4716969    | 0.9356748     | 0.7450352    | 0.0049230        | torch.Size([384])                |
| 185     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.3                             | bias                | torch.float32 |           | -0.1242242   | 0.1170985     | 0.0012044    | 0.0011783        | torch.Size([384])                |
| 185     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.3                             | running_mean        | torch.float32 |           | -21.9474144  | 22.9861240    | 0.6532449    | 65.5134888       | torch.Size([384])                |
| 185     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.3                             | running_var         | torch.float32 |           | 114.5144501  | 21094.2441406 | 4989.1074219 | 13216904.0000000 | torch.Size([384])                |
| 185     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.3                             | num_batches_tracked | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | nan              | torch.Size([])                   |
| 185     | horizon_plugin_pytorch.nn.qat.batchnorm.BatchNorm2d                         | backbone.stage_norm.3                             | output              | qint8         | 0.0448037 | -5.6900764   | 5.6900764     | -0.0028371   | 0.0981072        | torch.Size([12, 384, 8, 22])     |
| 186     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.0.0                             | input               | qint8         | 0.0065952 | -0.8441840   | 0.8375888     | -0.0022974   | 0.0058313        | torch.Size([12, 64, 64, 176])    |
| 186     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.0.0                             | weight              | torch.float32 |           | -0.3206275   | 0.3504827     | 0.0011598    | 0.0060194        | torch.Size([256, 64, 1, 1])      |
| 186     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.0.0                             | bias                | torch.float32 |           | -0.2086400   | 0.2225119     | 0.0024037    | 0.0032313        | torch.Size([256])                |
| 186     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.0.0                             | output              | torch.float32 |           | -0.6050257   | 0.7330970     | 0.0023453    | 0.0049007        | torch.Size([12, 256, 64, 176])   |
| 187     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.1.0                             | input               | qint8         | 0.0211009 | -2.7009161   | 2.6798151     | -0.0034986   | 0.1091889        | torch.Size([12, 128, 32, 88])    |
| 187     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.1.0                             | weight              | torch.float32 |           | -0.3428875   | 0.3670728     | 0.0007555    | 0.0042203        | torch.Size([256, 128, 1, 1])     |
| 187     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.1.0                             | bias                | torch.float32 |           | -0.2329265   | 0.2361577     | 0.0047584    | 0.0071602        | torch.Size([256])                |
| 187     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.1.0                             | output              | torch.float32 |           | -4.0133100   | 3.9301786     | -0.0010978   | 0.0687131        | torch.Size([12, 256, 32, 88])    |
| 188     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.2.0                             | input               | qint8         | 0.0446985 | -5.7214088   | 5.6767101     | -0.0054187   | 0.3409360        | torch.Size([12, 192, 16, 44])    |
| 188     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.2.0                             | weight              | torch.float32 |           | -0.1827236   | 0.1774697     | -0.0000924   | 0.0025376        | torch.Size([256, 192, 1, 1])     |
| 188     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.2.0                             | bias                | torch.float32 |           | -0.1729663   | 0.2027678     | -0.0009974   | 0.0027339        | torch.Size([256])                |
| 188     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.2.0                             | output              | torch.float32 |           | -5.4629278   | 6.3695331     | 0.0060430    | 0.1464723        | torch.Size([12, 256, 16, 44])    |
| 189     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.3.0                             | input               | qint8         | 0.0448037 | -5.6900764   | 5.6900764     | -0.0028371   | 0.0981072        | torch.Size([12, 384, 8, 22])     |
| 189     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.3.0                             | weight              | torch.float32 |           | -0.1964730   | 0.1978286     | 0.0000471    | 0.0020445        | torch.Size([256, 384, 1, 1])     |
| 189     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.3.0                             | bias                | torch.float32 |           | -0.1620243   | 0.1673113     | 0.0016446    | 0.0019964        | torch.Size([256])                |
| 189     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.conv_extract.3.0                             | output              | qint8         | 0.0466380 | -5.9696579   | 5.9230199     | 0.0041783    | 0.1823987        | torch.Size([12, 256, 8, 22])     |
| 190     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | neck.upscale.2                                    | input               | qint8         | 0.0466380 | -5.9696579   | 5.9230199     | 0.0041783    | 0.1823987        | torch.Size([12, 256, 8, 22])     |
| 190     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | neck.upscale.2                                    | output              | qint8         | 0.0466380 | -5.9696579   | 5.8297439     | 0.0041725    | 0.1417305        | torch.Size([12, 256, 16, 44])    |
| 191     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.0                                   | input_0             | torch.float32 |           | -5.4629278   | 6.3695331     | 0.0060430    | 0.1464723        | torch.Size([12, 256, 16, 44])    |
| 191     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.0                                   | input_1             | qint8         | 0.0466380 | -5.9696579   | 5.8297439     | 0.0041725    | 0.1417305        | torch.Size([12, 256, 16, 44])    |
| 191     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.0                                   | output              | qint8         | 0.0518213 | -6.6331220   | 6.5813007     | 0.0102286    | 0.3166934        | torch.Size([12, 256, 16, 44])    |
| 192     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | neck.upscale.1                                    | input               | qint8         | 0.0518213 | -6.6331220   | 6.5813007     | 0.0102286    | 0.3166934        | torch.Size([12, 256, 16, 44])    |
| 192     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | neck.upscale.1                                    | output              | qint8         | 0.0518213 | -6.6331220   | 6.5294795     | 0.0102464    | 0.2711185        | torch.Size([12, 256, 32, 88])    |
| 193     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.1                                   | input_0             | torch.float32 |           | -4.0133100   | 3.9301786     | -0.0010978   | 0.0687131        | torch.Size([12, 256, 32, 88])    |
| 193     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.1                                   | input_1             | qint8         | 0.0518213 | -6.6331220   | 6.5294795     | 0.0102464    | 0.2711185        | torch.Size([12, 256, 32, 88])    |
| 193     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.1                                   | output              | qint8         | 0.0504908 | -6.4628239   | 6.4123330     | 0.0091654    | 0.2702592        | torch.Size([12, 256, 32, 88])    |
| 194     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | neck.upscale.0                                    | input               | qint8         | 0.0504908 | -6.4628239   | 6.4123330     | 0.0091654    | 0.2702592        | torch.Size([12, 256, 32, 88])    |
| 194     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer          | neck.upscale.0                                    | output              | qint8         | 0.0504908 | -6.4628239   | 6.4123330     | 0.0091678    | 0.2531801        | torch.Size([12, 256, 64, 176])   |
| 195     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.2                                   | input_0             | torch.float32 |           | -0.6050257   | 0.7330970     | 0.0023453    | 0.0049007        | torch.Size([12, 256, 64, 176])   |
| 195     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.2                                   | input_1             | qint8         | 0.0504908 | -6.4628239   | 6.4123330     | 0.0091678    | 0.2531801        | torch.Size([12, 256, 64, 176])   |
| 195     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | neck.conv_add.2                                   | output              | qint8         | 0.0506302 | -6.4806690   | 6.4300389     | 0.0115083    | 0.2582085        | torch.Size([12, 256, 64, 176])   |
| 196     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.0.0                                 | input               | qint8         | 0.0506302 | -6.4806690   | 6.4300389     | 0.0115083    | 0.2582085        | torch.Size([12, 256, 64, 176])   |
| 196     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.0.0                                 | weight              | torch.float32 |           | -0.2571002   | 0.2533301     | -0.0000001   | 0.0002334        | torch.Size([256, 256, 3, 3])     |
| 196     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.0.0                                 | bias                | torch.float32 |           | -0.1612954   | 0.1691896     | 0.0023394    | 0.0014786        | torch.Size([256])                |
| 196     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.0.0                                 | output              | qint8         | 0.1624091 | -20.7883625  | 20.6259537    | -0.0151935   | 0.8771955        | torch.Size([12, 256, 64, 176])   |
| 197     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.1.0                                 | input               | qint8         | 0.0504908 | -6.4628239   | 6.4123330     | 0.0091654    | 0.2702592        | torch.Size([12, 256, 32, 88])    |
| 197     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.1.0                                 | weight              | torch.float32 |           | -0.2879935   | 0.3221029     | 0.0000086    | 0.0002552        | torch.Size([256, 256, 3, 3])     |
| 197     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.1.0                                 | bias                | torch.float32 |           | -0.2607886   | 0.2473673     | -0.0079984   | 0.0032435        | torch.Size([256])                |
| 197     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.1.0                                 | output              | qint8         | 0.1464215 | -18.7419529  | 18.5955315    | -0.0143764   | 0.6619356        | torch.Size([12, 256, 32, 88])    |
| 198     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.2.0                                 | input               | qint8         | 0.0518213 | -6.6331220   | 6.5813007     | 0.0102286    | 0.3166934        | torch.Size([12, 256, 16, 44])    |
| 198     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.2.0                                 | weight              | torch.float32 |           | -0.2914507   | 0.2987113     | -0.0000730   | 0.0020421        | torch.Size([256, 256, 3, 3])     |
| 198     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.2.0                                 | bias                | torch.float32 |           | -0.2858557   | 0.3223354     | -0.0022723   | 0.0132826        | torch.Size([256])                |
| 198     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.2.0                                 | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 199     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.3.0                                 | input               | qint8         | 0.0466380 | -5.9696579   | 5.9230199     | 0.0041783    | 0.1823987        | torch.Size([12, 256, 8, 22])     |
| 199     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.3.0                                 | weight              | torch.float32 |           | -0.0208302   | 0.0208302     | -0.0000112   | 0.0001450        | torch.Size([256, 256, 3, 3])     |
| 199     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.3.0                                 | bias                | torch.float32 |           | -0.0207248   | 0.0207744     | 0.0003362    | 0.0001384        | torch.Size([256])                |
| 199     | horizon_plugin_pytorch.nn.qat.conv2d.Conv2d                                 | neck.fpn_conv.3.0                                 | output              | qint8         | 0.0134903 | -1.7267525   | 1.7132623     | -0.0007563   | 0.0551021        | torch.Size([12, 256, 8, 22])     |
| 200     | torch.Tensor.float                                                          | head                                              | input               | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 200     | torch.Tensor.float                                                          | head                                              | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 201     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub              | input               | torch.float32 |           | -52.9582825  | 52.8438606    | 0.6379549    | 103.1539612      | torch.Size([2, 384, 11])         |
| 201     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub              | output              | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.6379561    | 103.1538239      | torch.Size([2, 384, 11])         |
| 202     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub    | input               | torch.float32 |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 256])        |
| 202     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub    | output              | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 256])        |
| 203     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub(1)           | input               | torch.float32 |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 11])         |
| 203     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub(1)           | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 11])         |
| 204     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub(1) | input               | torch.float32 |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 256])        |
| 204     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub(1) | output              | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 256])        |
| 205     | torch.clamp                                                                 | head                                              | input               | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 11])         |
| 205     | torch.clamp                                                                 | head                                              | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 11])         |
| 206     | torch.clamp                                                                 | head                                              | input               | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.6379561    | 103.1538239      | torch.Size([2, 384, 11])         |
| 206     | torch.clamp                                                                 | head                                              | output              | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.6379561    | 103.1538239      | torch.Size([2, 384, 11])         |
| 207     | torch.Tensor.__getitem__                                                    | head                                              | input_0             | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 11])         |
| 207     | torch.Tensor.__getitem__                                                    | head                                              | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 128, 11])         |
| 208     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.instance_bank.anchor_cat                     | input_0             | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 128, 11])         |
| 208     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.instance_bank.anchor_cat                     | input_1             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.6379561    | 103.1538239      | torch.Size([2, 384, 11])         |
| 208     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.instance_bank.anchor_cat                     | output              | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.4784671    | 77.4393997       | torch.Size([2, 512, 11])         |
| 209     | torch.Tensor.__getitem__                                                    | head                                              | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 256])        |
| 209     | torch.Tensor.__getitem__                                                    | head                                              | output              | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 128, 256])        |
| 210     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.instance_bank.feature_cat                    | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 128, 256])        |
| 210     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.instance_bank.feature_cat                    | input_1             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 256])        |
| 210     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.instance_bank.feature_cat                    | output              | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 512, 256])        |
| 211     | torch.Tensor.__getitem__                                                    | head                                              | input_0             | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 11])         |
| 211     | torch.Tensor.__getitem__                                                    | head                                              | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 11])         |
| 212     | torch.Tensor.__getitem__                                                    | head                                              | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 384, 256])        |
| 212     | torch.Tensor.__getitem__                                                    | head                                              | output              | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 213     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.4784671    | 77.4393997       | torch.Size([2, 512, 11])         |
| 213     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 1.0650992    | 283.1613770      | torch.Size([2, 512, 3])          |
| 214     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0                      | input               | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 1.0650992    | 283.1613770      | torch.Size([2, 512, 3])          |
| 214     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0                      | weight              | torch.float32 |           | -0.9216561   | 0.9167990     | -0.0046354   | 0.1373587        | torch.Size([128, 3])             |
| 214     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0                      | bias                | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3650480        | torch.Size([128])                |
| 214     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0                      | output              | torch.float32 |           | -32.8179817  | 33.7280197    | -0.1003323   | 69.4566879       | torch.Size([2, 512, 128])        |
| 215     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1                      | input               | torch.float32 |           | -32.8179817  | 33.7280197    | -0.1003323   | 69.4566879       | torch.Size([2, 512, 128])        |
| 215     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1                      | output              | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8383400    | 25.7053127       | torch.Size([2, 512, 128])        |
| 216     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean      | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8383400    | 25.7053127       | torch.Size([2, 512, 128])        |
| 216     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean      | output              | qint16        | 0.0002498 | 0.2510299    | 7.3333216     | 2.8383470    | 4.2452178        | torch.Size([2, 512, 1])          |
| 217     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub                  | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8383400    | 25.7053127       | torch.Size([2, 512, 128])        |
| 217     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub                  | input_1             | qint16        | 0.0002498 | 0.2510299    | 7.3333216     | 2.8383470    | 4.2452178        | torch.Size([2, 512, 1])          |
| 217     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub                  | output              | qint16        | 0.0008924 | -7.3330698   | 27.4635372    | 0.0000307    | 21.4641590       | torch.Size([2, 512, 128])        |
| 218     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul                  | input_0             | qint16        | 0.0008924 | -7.3330698   | 27.4635372    | 0.0000307    | 21.4641590       | torch.Size([2, 512, 128])        |
| 218     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul                  | input_1             | qint16        | 0.0008924 | -7.3330698   | 27.4635372    | 0.0000307    | 21.4641590       | torch.Size([2, 512, 128])        |
| 218     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul                  | output              | qint16        | 0.0261809 | 0.0000000    | 754.2441406   | 21.4634323   | 2608.8012695     | torch.Size([2, 512, 128])        |
| 219     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean        | input_0             | qint16        | 0.0261809 | 0.0000000    | 754.2441406   | 21.4634323   | 2608.8012695     | torch.Size([2, 512, 128])        |
| 219     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean        | output              | qint16        | 0.0029473 | 0.1090503    | 79.9103241    | 21.4634895   | 485.2437439      | torch.Size([2, 512, 1])          |
| 220     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt                | input               | qint16        | 0.0029473 | 0.1090503    | 79.9103241    | 21.4634895   | 485.2437439      | torch.Size([2, 512, 1])          |
| 220     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt                | output              | qint16        | 0.0000538 | 0.1118589    | 1.7621539     | 0.6430420    | 0.4457080        | torch.Size([2, 512, 1])          |
| 221     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul              | input_0             | qint16        | 0.0008924 | -7.3330698   | 27.4635372    | 0.0000307    | 21.4641590       | torch.Size([2, 512, 128])        |
| 221     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul              | input_1             | qint16        | 0.0000538 | 0.1118589    | 1.7621539     | 0.6430420    | 0.4457080        | torch.Size([2, 512, 1])          |
| 221     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul              | output              | qint16        | 0.0001192 | -0.8844452   | 3.7011032     | 0.0000504    | 0.8370724        | torch.Size([2, 512, 128])        |
| 222     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant         | input               | torch.float32 |           | 0.7278287    | 1.3287159     | 0.9627235    | 0.0086877        | torch.Size([128])                |
| 222     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant         | output              | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 223     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul           | input_0             | qint16        | 0.0001192 | -0.8844452   | 3.7011032     | 0.0000504    | 0.8370724        | torch.Size([2, 512, 128])        |
| 223     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul           | input_1             | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 223     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul           | output              | qint16        | 0.0001208 | -1.0886813   | 3.6094120     | -0.0024154   | 0.7704377        | torch.Size([2, 512, 128])        |
| 224     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant           | input               | torch.float32 |           | -0.0562531   | 0.0804052     | 0.0088204    | 0.0005294        | torch.Size([128])                |
| 224     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant           | output              | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 225     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add             | input_0             | qint16        | 0.0001208 | -1.0886813   | 3.6094120     | -0.0024154   | 0.7704377        | torch.Size([2, 512, 128])        |
| 225     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add             | input_1             | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 225     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add             | output              | qint8         | 0.0271288 | -1.0851527   | 3.4453597     | 0.0063993    | 0.7655274        | torch.Size([2, 512, 128])        |
| 226     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3                      | input               | qint8         | 0.0271288 | -1.0851527   | 3.4453597     | 0.0063993    | 0.7655274        | torch.Size([2, 512, 128])        |
| 226     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3                      | weight              | torch.float32 |           | -0.3750711   | 0.3968706     | 0.0019093    | 0.0048458        | torch.Size([128, 128])           |
| 226     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3                      | bias                | torch.float32 |           | -0.1863807   | 0.1385574     | -0.0156467   | 0.0047256        | torch.Size([128])                |
| 226     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3                      | output              | torch.float32 |           | -5.2730846   | 5.5271597     | -0.1029110   | 1.9107423        | torch.Size([2, 512, 128])        |
| 227     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4                      | input               | torch.float32 |           | -5.2730846   | 5.5271597     | -0.1029110   | 1.9107423        | torch.Size([2, 512, 128])        |
| 227     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4                      | output              | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5087832    | 0.6791008        | torch.Size([2, 512, 128])        |
| 228     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean      | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5087832    | 0.6791008        | torch.Size([2, 512, 128])        |
| 228     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean      | output              | qint16        | 0.0000298 | 0.2857055    | 0.8838744     | 0.5087842    | 0.0294066        | torch.Size([2, 512, 1])          |
| 229     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub                  | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5087832    | 0.6791008        | torch.Size([2, 512, 128])        |
| 229     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub                  | input_1             | qint16        | 0.0000298 | 0.2857055    | 0.8838744     | 0.5087842    | 0.0294066        | torch.Size([2, 512, 1])          |
| 229     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub                  | output              | qint16        | 0.0001641 | -0.8839493   | 5.0655580     | -0.0000093   | 0.6497362        | torch.Size([2, 512, 128])        |
| 230     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul                  | input_0             | qint16        | 0.0001641 | -0.8839493   | 5.0655580     | -0.0000093   | 0.6497362        | torch.Size([2, 512, 128])        |
| 230     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul                  | input_1             | qint16        | 0.0001641 | -0.8839493   | 5.0655580     | -0.0000093   | 0.6497362        | torch.Size([2, 512, 128])        |
| 230     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul                  | output              | qint16        | 0.0008856 | 0.0000000    | 25.6595287    | 0.6497782    | 2.7809327        | torch.Size([2, 512, 128])        |
| 231     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean        | input_0             | qint16        | 0.0008856 | 0.0000000    | 25.6595287    | 0.6497782    | 2.7809327        | torch.Size([2, 512, 128])        |
| 231     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean        | output              | qint16        | 0.0000499 | 0.3037120    | 1.4214820     | 0.6497810    | 0.0810527        | torch.Size([2, 512, 1])          |
| 232     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt                | input               | qint16        | 0.0000499 | 0.3037120    | 1.4214820     | 0.6497810    | 0.0810527        | torch.Size([2, 512, 1])          |
| 232     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt                | output              | qint16        | 0.0000553 | 0.8387314    | 1.8121266     | 1.3245037    | 0.0696104        | torch.Size([2, 512, 1])          |
| 233     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul              | input_0             | qint16        | 0.0001641 | -0.8839493   | 5.0655580     | -0.0000093   | 0.6497362        | torch.Size([2, 512, 128])        |
| 233     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul              | input_1             | qint16        | 0.0000553 | 0.8387314    | 1.8121266     | 1.3245037    | 0.0696104        | torch.Size([2, 512, 1])          |
| 233     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul              | output              | qint16        | 0.0002164 | -0.7627654   | 6.9068799     | -0.0000185   | 0.9999592        | torch.Size([2, 512, 128])        |
| 234     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant         | input               | torch.float32 |           | 0.5925044    | 1.4726304     | 0.9182085    | 0.0175060        | torch.Size([128])                |
| 234     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant         | output              | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 235     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul           | input_0             | qint16        | 0.0002164 | -0.7627654   | 6.9068799     | -0.0000185   | 0.9999592        | torch.Size([2, 512, 128])        |
| 235     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul           | input_1             | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 235     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul           | output              | qint16        | 0.0002127 | -0.8656419   | 6.7866750     | 0.0389972    | 0.9737670        | torch.Size([2, 512, 128])        |
| 236     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant           | input               | torch.float32 |           | -0.0644210   | 0.2426097     | 0.0318023    | 0.0030999        | torch.Size([128])                |
| 236     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant           | output              | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 237     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add             | input_0             | qint16        | 0.0002127 | -0.8656419   | 6.7866750     | 0.0389972    | 0.9737670        | torch.Size([2, 512, 128])        |
| 237     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add             | input_1             | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 237     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add             | output              | qint8         | 0.0521229 | -0.8860894   | 6.6196094     | 0.0707241    | 0.9479374        | torch.Size([2, 512, 128])        |
| 238     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6                      | input               | qint8         | 0.0521229 | -0.8860894   | 6.6196094     | 0.0707241    | 0.9479374        | torch.Size([2, 512, 128])        |
| 238     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6                      | weight              | torch.float32 |           | -0.7504157   | 0.4182976     | -0.0024651   | 0.0052447        | torch.Size([128, 128])           |
| 238     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6                      | bias                | torch.float32 |           | -0.1397866   | 0.1210779     | 0.0064616    | 0.0040949        | torch.Size([128])                |
| 238     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6                      | output              | torch.float32 |           | -10.2708101  | 7.2193308     | -0.0240497   | 5.5674162        | torch.Size([2, 512, 128])        |
| 239     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7                      | input               | torch.float32 |           | -10.2708101  | 7.2193308     | -0.0240497   | 5.5674162        | torch.Size([2, 512, 128])        |
| 239     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7                      | output              | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8798186    | 1.7385682        | torch.Size([2, 512, 128])        |
| 240     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean      | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8798186    | 1.7385682        | torch.Size([2, 512, 128])        |
| 240     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean      | output              | qint16        | 0.0000319 | 0.5535182    | 1.0447656     | 0.7736207    | 0.0261463        | torch.Size([2, 512, 1])          |
| 241     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub                  | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8798186    | 1.7385682        | torch.Size([2, 512, 128])        |
| 241     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub                  | input_1             | qint16        | 0.0000319 | 0.5535182    | 1.0447656     | 0.7736207    | 0.0261463        | torch.Size([2, 512, 1])          |
| 241     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub                  | output              | qint16        | 0.0001844 | -1.0447190   | 5.6159182     | 0.1061974    | 1.6548518        | torch.Size([2, 512, 128])        |
| 242     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul                  | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6159182     | 0.1061974    | 1.6548518        | torch.Size([2, 512, 128])        |
| 242     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul                  | input_1             | qint16        | 0.0001844 | -1.0447190   | 5.6159182     | 0.1061974    | 1.6548518        | torch.Size([2, 512, 128])        |
| 242     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul                  | output              | qint16        | 0.0011151 | 0.0000000    | 31.5383568    | 1.6661549    | 12.7924690       | torch.Size([2, 512, 128])        |
| 243     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean        | input_0             | qint16        | 0.0011151 | 0.0000000    | 31.5383568    | 1.6661549    | 12.7924690       | torch.Size([2, 512, 128])        |
| 243     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean        | output              | qint16        | 0.0000656 | 0.8206643    | 2.1495371     | 1.3717221    | 0.2193482        | torch.Size([2, 512, 1])          |
| 244     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt                | input               | qint16        | 0.0000656 | 0.8206643    | 2.1495371     | 1.3717221    | 0.2193482        | torch.Size([2, 512, 1])          |
| 244     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt                | output              | qint16        | 0.0000338 | 0.6820595    | 1.1038622     | 0.8862832    | 0.0167899        | torch.Size([2, 512, 1])          |
| 245     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul              | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6159182     | 0.1061974    | 1.6548518        | torch.Size([2, 512, 128])        |
| 245     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul              | input_1             | qint16        | 0.0000338 | 0.6820595    | 1.1038622     | 0.8862832    | 0.0167899        | torch.Size([2, 512, 1])          |
| 245     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul              | output              | qint16        | 0.0001537 | -0.7503377   | 4.9320641     | 0.0724347    | 1.1316954        | torch.Size([2, 512, 128])        |
| 246     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant         | input               | torch.float32 |           | 0.7673740    | 1.1249810     | 0.9671495    | 0.0053221        | torch.Size([128])                |
| 246     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant         | output              | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 247     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul           | input_0             | qint16        | 0.0001537 | -0.7503377   | 4.9320641     | 0.0724347    | 1.1316954        | torch.Size([2, 512, 128])        |
| 247     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul           | input_1             | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 247     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul           | output              | qint16        | 0.0001601 | -0.8440742   | 5.1408010     | 0.0873688    | 1.1295246        | torch.Size([2, 512, 128])        |
| 248     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant           | input               | torch.float32 |           | -0.0537279   | 0.1594015     | 0.0216380    | 0.0014148        | torch.Size([128])                |
| 248     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant           | output              | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 249     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add             | input_0             | qint16        | 0.0001601 | -0.8440742   | 5.1408010     | 0.0873688    | 1.1295246        | torch.Size([2, 512, 128])        |
| 249     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add             | input_1             | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 249     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add             | output              | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.1087239    | 1.1112350        | torch.Size([2, 512, 128])        |
| 250     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9                      | input               | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.1087239    | 1.1112350        | torch.Size([2, 512, 128])        |
| 250     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9                      | weight              | torch.float32 |           | -0.4264432   | 0.3183554     | 0.0005866    | 0.0053991        | torch.Size([128, 128])           |
| 250     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9                      | bias                | torch.float32 |           | -0.1690418   | 0.1536980     | -0.0166056   | 0.0039884        | torch.Size([128])                |
| 250     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9                      | output              | torch.float32 |           | -11.7431946  | 10.8552198    | -0.4376920   | 5.1771088        | torch.Size([2, 512, 128])        |
| 251     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10                     | input               | torch.float32 |           | -11.7431946  | 10.8552198    | -0.4376920   | 5.1771088        | torch.Size([2, 512, 128])        |
| 251     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10                     | output              | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6634364    | 1.7038004        | torch.Size([2, 512, 128])        |
| 252     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean     | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6634364    | 1.7038004        | torch.Size([2, 512, 128])        |
| 252     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean     | output              | qint16        | 0.0000231 | 0.5261117    | 0.7555045     | 0.6478350    | 0.0056437        | torch.Size([2, 512, 1])          |
| 253     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub                 | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6634364    | 1.7038004        | torch.Size([2, 512, 128])        |
| 253     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub                 | input_1             | qint16        | 0.0000231 | 0.5261117    | 0.7555045     | 0.6478350    | 0.0056437        | torch.Size([2, 512, 1])          |
| 253     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub                 | output              | qint16        | 0.0003154 | -0.7554005   | 9.9362755     | 0.0156169    | 1.6947740        | torch.Size([2, 512, 128])        |
| 254     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul                 | input_0             | qint16        | 0.0003154 | -0.7554005   | 9.9362755     | 0.0156169    | 1.6947740        | torch.Size([2, 512, 128])        |
| 254     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul                 | input_1             | qint16        | 0.0003154 | -0.7554005   | 9.9362755     | 0.0156169    | 1.6947740        | torch.Size([2, 512, 128])        |
| 254     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul                 | output              | qint16        | 0.0032599 | 0.0000000    | 98.7311325    | 1.6949805    | 29.1854324       | torch.Size([2, 512, 128])        |
| 255     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean       | input_0             | qint16        | 0.0032599 | 0.0000000    | 98.7311325    | 1.6949805    | 29.1854324       | torch.Size([2, 512, 128])        |
| 255     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean       | output              | qint16        | 0.0000598 | 1.0998379    | 1.9495633     | 1.6949873    | 0.0219989        | torch.Size([2, 512, 1])          |
| 256     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt               | input               | qint16        | 0.0000598 | 1.0998379    | 1.9495633     | 1.6949873    | 0.0219989        | torch.Size([2, 512, 1])          |
| 256     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt               | output              | qint16        | 0.0000315 | 0.7161860    | 0.9535299     | 0.7705160    | 0.0013500        | torch.Size([2, 512, 1])          |
| 257     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul             | input_0             | qint16        | 0.0003154 | -0.7554005   | 9.9362755     | 0.0156169    | 1.6947740        | torch.Size([2, 512, 128])        |
| 257     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul             | input_1             | qint16        | 0.0000315 | 0.7161860    | 0.9535299     | 0.7705160    | 0.0013500        | torch.Size([2, 512, 1])          |
| 257     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul             | output              | qint16        | 0.0002431 | -0.6424454   | 7.8350148     | 0.0116620    | 0.9998609        | torch.Size([2, 512, 128])        |
| 258     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant        | input               | torch.float32 |           | 0.7088336    | 1.4002132     | 0.9292046    | 0.0145085        | torch.Size([128])                |
| 258     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant        | output              | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 259     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul          | input_0             | qint16        | 0.0002431 | -0.6424454   | 7.8350148     | 0.0116620    | 0.9998609        | torch.Size([2, 512, 128])        |
| 259     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul          | input_1             | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 259     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul          | output              | qint16        | 0.0002455 | -0.8559412   | 7.9143877     | 0.0214617    | 0.9066528        | torch.Size([2, 512, 128])        |
| 260     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant          | input               | torch.float32 |           | -0.0965041   | 0.2669707     | 0.0619903    | 0.0064956        | torch.Size([128])                |
| 260     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant          | output              | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 261     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add            | input_0             | qint16        | 0.0002455 | -0.8559412   | 7.9143877     | 0.0214617    | 0.9066528        | torch.Size([2, 512, 128])        |
| 261     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add            | input_1             | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 261     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add            | output              | qint8         | 0.0587279 | -0.8221908   | 7.4584455     | 0.0836274    | 0.8717202        | torch.Size([2, 512, 128])        |
| 262     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.4784671    | 77.4393997       | torch.Size([2, 512, 11])         |
| 262     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | 0.0000000    | 1.1315918     | 0.4410315    | 0.1082892        | torch.Size([2, 512, 3])          |
| 263     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0                     | input               | qint16        | 0.0018311 | 0.0000000    | 1.1315918     | 0.4410315    | 0.1082892        | torch.Size([2, 512, 3])          |
| 263     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0                     | weight              | torch.float32 |           | -0.8288664   | 0.6362330     | 0.0683853    | 0.1118651        | torch.Size([32, 3])              |
| 263     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0                     | bias                | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1068659        | torch.Size([32])                 |
| 263     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0                     | output              | torch.float32 |           | -1.0348635   | 0.9777132     | 0.1557990    | 0.1821702        | torch.Size([2, 512, 32])         |
| 264     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1                     | input               | torch.float32 |           | -1.0348635   | 0.9777132     | 0.1557990    | 0.1821702        | torch.Size([2, 512, 32])         |
| 264     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1                     | output              | qint8         | 0.0194126 | 0.0000000    | 0.9706300     | 0.2735667    | 0.0739865        | torch.Size([2, 512, 32])         |
| 265     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean     | input_0             | qint8         | 0.0194126 | 0.0000000    | 0.9706300     | 0.2735667    | 0.0739865        | torch.Size([2, 512, 32])         |
| 265     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean     | output              | qint16        | 0.0000252 | 0.1783445    | 0.3239369     | 0.2735648    | 0.0030478        | torch.Size([2, 512, 1])          |
| 266     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub                 | input_0             | qint8         | 0.0194126 | 0.0000000    | 0.9706300     | 0.2735667    | 0.0739865        | torch.Size([2, 512, 32])         |
| 266     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub                 | input_1             | qint16        | 0.0000252 | 0.1783445    | 0.3239369     | 0.2735648    | 0.0030478        | torch.Size([2, 512, 1])          |
| 266     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub                 | output              | qint16        | 0.0000639 | -0.3239265   | 0.6569929     | 0.0000046    | 0.0709415        | torch.Size([2, 512, 32])         |
| 267     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul                 | input_0             | qint16        | 0.0000639 | -0.3239265   | 0.6569929     | 0.0000046    | 0.0709415        | torch.Size([2, 512, 32])         |
| 267     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul                 | input_1             | qint16        | 0.0000639 | -0.3239265   | 0.6569929     | 0.0000046    | 0.0709415        | torch.Size([2, 512, 32])         |
| 267     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul                 | output              | qint16        | 0.0001394 | 0.0000000    | 0.4316191     | 0.0709317    | 0.0067434        | torch.Size([2, 512, 32])         |
| 268     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean       | input_0             | qint16        | 0.0001394 | 0.0000000    | 0.4316191     | 0.0709317    | 0.0067434        | torch.Size([2, 512, 32])         |
| 268     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean       | output              | qint16        | 0.0000212 | 0.0318008    | 0.1031669     | 0.0709297    | 0.0005324        | torch.Size([2, 512, 1])          |
| 269     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt               | input               | qint16        | 0.0000212 | 0.0318008    | 0.1031669     | 0.0709297    | 0.0005324        | torch.Size([2, 512, 1])          |
| 269     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt               | output              | qint16        | 0.0001649 | 3.1132267    | 5.4031301     | 3.9425519    | 0.7201110        | torch.Size([2, 512, 1])          |
| 270     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul             | input_0             | qint16        | 0.0000639 | -0.3239265   | 0.6569929     | 0.0000046    | 0.0709415        | torch.Size([2, 512, 32])         |
| 270     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul             | input_1             | qint16        | 0.0001649 | 3.1132267    | 5.4031301     | 3.9425519    | 0.7201110        | torch.Size([2, 512, 1])          |
| 270     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul             | output              | qint16        | 0.0000919 | -1.0804747   | 2.2302778     | 0.0000256    | 0.9822741        | torch.Size([2, 512, 32])         |
| 271     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant        | input               | torch.float32 |           | 0.8401937    | 1.1936733     | 0.9969203    | 0.0071658        | torch.Size([32])                 |
| 271     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant        | output              | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 272     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul          | input_0             | qint16        | 0.0000919 | -1.0804747   | 2.2302778     | 0.0000256    | 0.9822741        | torch.Size([2, 512, 32])         |
| 272     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul          | input_1             | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 272     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul          | output              | qint16        | 0.0001022 | -1.1886001   | 2.1215289     | -0.0015520   | 0.9230314        | torch.Size([2, 512, 32])         |
| 273     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant          | input               | torch.float32 |           | -0.1003950   | 0.1085345     | 0.0035262    | 0.0030721        | torch.Size([32])                 |
| 273     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant          | output              | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 274     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add            | input_0             | qint16        | 0.0001022 | -1.1886001   | 2.1215289     | -0.0015520   | 0.9230314        | torch.Size([2, 512, 32])         |
| 274     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add            | input_1             | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 274     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add            | output              | qint8         | 0.0232598 | -1.1629900   | 2.0933819     | 0.0014112    | 0.8443223        | torch.Size([2, 512, 32])         |
| 275     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3                     | input               | qint8         | 0.0232598 | -1.1629900   | 2.0933819     | 0.0014112    | 0.8443223        | torch.Size([2, 512, 32])         |
| 275     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3                     | weight              | torch.float32 |           | -0.5793310   | 0.5422795     | -0.0032135   | 0.0176575        | torch.Size([32, 32])             |
| 275     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3                     | bias                | torch.float32 |           | -0.1716317   | 0.2230143     | 0.0007250    | 0.0126328        | torch.Size([32])                 |
| 275     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3                     | output              | torch.float32 |           | -3.0396373   | 1.9772003     | -0.1120086   | 1.0638269        | torch.Size([2, 512, 32])         |
| 276     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4                     | input               | torch.float32 |           | -3.0396373   | 1.9772003     | -0.1120086   | 1.0638269        | torch.Size([2, 512, 32])         |
| 276     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4                     | output              | qint8         | 0.0172935 | 0.0000000    | 1.9714624     | 0.3549618    | 0.2167131        | torch.Size([2, 512, 32])         |
| 277     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean     | input_0             | qint8         | 0.0172935 | 0.0000000    | 1.9714624     | 0.3549618    | 0.2167131        | torch.Size([2, 512, 32])         |
| 277     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean     | output              | qint16        | 0.0000141 | 0.3280324    | 0.4053228     | 0.3549629    | 0.0008661        | torch.Size([2, 512, 1])          |
| 278     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub                 | input_0             | qint8         | 0.0172935 | 0.0000000    | 1.9714624     | 0.3549618    | 0.2167131        | torch.Size([2, 512, 32])         |
| 278     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub                 | input_1             | qint16        | 0.0000141 | 0.3280324    | 0.4053228     | 0.3549629    | 0.0008661        | torch.Size([2, 512, 1])          |
| 278     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub                 | output              | qint16        | 0.0000617 | -0.4053017   | 1.6185528     | 0.0000036    | 0.2158459        | torch.Size([2, 512, 32])         |
| 279     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul                 | input_0             | qint16        | 0.0000617 | -0.4053017   | 1.6185528     | 0.0000036    | 0.2158459        | torch.Size([2, 512, 32])         |
| 279     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul                 | input_1             | qint16        | 0.0000617 | -0.4053017   | 1.6185528     | 0.0000036    | 0.2158459        | torch.Size([2, 512, 32])         |
| 279     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul                 | output              | qint16        | 0.0001252 | 0.0000000    | 2.6197095     | 0.2158403    | 0.1265321        | torch.Size([2, 512, 32])         |
| 280     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean       | input_0             | qint16        | 0.0001252 | 0.0000000    | 2.6197095     | 0.2158403    | 0.1265321        | torch.Size([2, 512, 32])         |
| 280     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean       | output              | qint16        | 0.0000132 | 0.1621720    | 0.3288153     | 0.2158388    | 0.0043707        | torch.Size([2, 512, 1])          |
| 281     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt               | input               | qint16        | 0.0000132 | 0.1621720    | 0.3288153     | 0.2158388    | 0.0043707        | torch.Size([2, 512, 1])          |
| 281     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt               | output              | qint16        | 0.0000777 | 1.7439101    | 2.4831645     | 2.2155476    | 0.0784256        | torch.Size([2, 512, 1])          |
| 282     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul             | input_0             | qint16        | 0.0000617 | -0.4053017   | 1.6185528     | 0.0000036    | 0.2158459        | torch.Size([2, 512, 32])         |
| 282     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul             | input_1             | qint16        | 0.0000777 | 1.7439101    | 2.4831645     | 2.2155476    | 0.0784256        | torch.Size([2, 512, 1])          |
| 282     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul             | output              | qint16        | 0.0001125 | -0.8486254   | 3.4004619     | 0.0000069    | 1.0000013        | torch.Size([2, 512, 32])         |
| 283     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant        | input               | torch.float32 |           | 0.8191299    | 1.0923718     | 0.9808199    | 0.0031231        | torch.Size([32])                 |
| 283     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant        | output              | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 284     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul          | input_0             | qint16        | 0.0001125 | -0.8486254   | 3.4004619     | 0.0000069    | 1.0000013        | torch.Size([2, 512, 32])         |
| 284     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul          | input_1             | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 284     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul          | output              | qint16        | 0.0001113 | -0.8691043   | 3.3771038     | 0.0164652    | 1.0014945        | torch.Size([2, 512, 32])         |
| 285     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant          | input               | torch.float32 |           | -0.0704119   | 0.0788569     | 0.0097621    | 0.0015200        | torch.Size([32])                 |
| 285     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant          | output              | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 286     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add            | input_0             | qint16        | 0.0001113 | -0.8691043   | 3.3771038     | 0.0164652    | 1.0014945        | torch.Size([2, 512, 32])         |
| 286     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add            | input_1             | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 286     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add            | output              | qint8         | 0.0262611 | -0.8666149   | 3.3351545     | 0.0261745    | 0.9477942        | torch.Size([2, 512, 32])         |
| 287     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6                     | input               | qint8         | 0.0262611 | -0.8666149   | 3.3351545     | 0.0261745    | 0.9477942        | torch.Size([2, 512, 32])         |
| 287     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6                     | weight              | torch.float32 |           | -0.5712157   | 0.5219681     | -0.0062917   | 0.0166056        | torch.Size([32, 32])             |
| 287     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6                     | bias                | torch.float32 |           | -0.1649730   | 0.2318604     | 0.0253026    | 0.0136139        | torch.Size([32])                 |
| 287     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6                     | output              | torch.float32 |           | -4.2514515   | 2.0772579     | -0.2959630   | 1.5653009        | torch.Size([2, 512, 32])         |
| 288     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7                     | input               | torch.float32 |           | -4.2514515   | 2.0772579     | -0.2959630   | 1.5653009        | torch.Size([2, 512, 32])         |
| 288     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7                     | output              | qint8         | 0.0188970 | 0.0000000    | 2.0786693     | 0.3520096    | 0.2318343        | torch.Size([2, 512, 32])         |
| 289     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean     | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.0786693     | 0.3520096    | 0.2318343        | torch.Size([2, 512, 32])         |
| 289     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean     | output              | qint16        | 0.0000154 | 0.2722324    | 0.4783297     | 0.3520098    | 0.0054428        | torch.Size([2, 512, 1])          |
| 290     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub                 | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.0786693     | 0.3520096    | 0.2318343        | torch.Size([2, 512, 32])         |
| 290     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub                 | input_1             | qint16        | 0.0000154 | 0.2722324    | 0.4783297     | 0.3520098    | 0.0054428        | torch.Size([2, 512, 1])          |
| 290     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub                 | output              | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | -0.0000001   | 0.2263970        | torch.Size([2, 512, 32])         |
| 291     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul                 | input_0             | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | -0.0000001   | 0.2263970        | torch.Size([2, 512, 32])         |
| 291     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul                 | input_1             | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | -0.0000001   | 0.2263970        | torch.Size([2, 512, 32])         |
| 291     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul                 | output              | qint16        | 0.0001333 | 0.0000000    | 2.5611575     | 0.2263932    | 0.1389555        | torch.Size([2, 512, 32])         |
| 292     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean       | input_0             | qint16        | 0.0001333 | 0.0000000    | 2.5611575     | 0.2263932    | 0.1389555        | torch.Size([2, 512, 32])         |
| 292     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean       | output              | qint16        | 0.0000116 | 0.1610320    | 0.3522446     | 0.2263933    | 0.0053815        | torch.Size([2, 512, 1])          |
| 293     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt               | input               | qint16        | 0.0000116 | 0.1610320    | 0.3522446     | 0.2263933    | 0.0053815        | torch.Size([2, 512, 1])          |
| 293     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt               | output              | qint16        | 0.0000821 | 1.6848582    | 2.4919276     | 2.1699595    | 0.0822805        | torch.Size([2, 512, 1])          |
| 294     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul             | input_0             | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | -0.0000001   | 0.2263970        | torch.Size([2, 512, 32])         |
| 294     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul             | input_1             | qint16        | 0.0000821 | 1.6848582    | 2.4919276     | 2.1699595    | 0.0822805        | torch.Size([2, 512, 1])          |
| 294     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul             | output              | qint16        | 0.0001195 | -0.8111226   | 3.2493918     | 0.0000046    | 0.9999517        | torch.Size([2, 512, 32])         |
| 295     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant        | input               | torch.float32 |           | 0.8903234    | 1.1315480     | 0.9912031    | 0.0026835        | torch.Size([32])                 |
| 295     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant        | output              | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 296     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul          | input_0             | qint16        | 0.0001195 | -0.8111226   | 3.2493918     | 0.0000046    | 0.9999517        | torch.Size([2, 512, 32])         |
| 296     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul          | input_1             | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 296     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul          | output              | qint16        | 0.0001226 | -0.9177723   | 3.2098744     | 0.0056740    | 1.0134569        | torch.Size([2, 512, 32])         |
| 297     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant          | input               | torch.float32 |           | -0.0586081   | 0.0779655     | 0.0041962    | 0.0015323        | torch.Size([32])                 |
| 297     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant          | output              | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 298     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add            | input_0             | qint16        | 0.0001226 | -0.9177723   | 3.2098744     | 0.0056740    | 1.0134569        | torch.Size([2, 512, 32])         |
| 298     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add            | input_1             | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 298     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add            | output              | qint8         | 0.0302522 | -0.8773150   | 3.1764855     | 0.0101112    | 0.9667466        | torch.Size([2, 512, 32])         |
| 299     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9                     | input               | qint8         | 0.0302522 | -0.8773150   | 3.1764855     | 0.0101112    | 0.9667466        | torch.Size([2, 512, 32])         |
| 299     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9                     | weight              | torch.float32 |           | -0.3204980   | 0.3365203     | -0.0020388   | 0.0145364        | torch.Size([32, 32])             |
| 299     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9                     | bias                | torch.float32 |           | -0.1559148   | 0.2119379     | 0.0091616    | 0.0105488        | torch.Size([32])                 |
| 299     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9                     | output              | torch.float32 |           | -2.3972883   | 2.2078454     | -0.0830330   | 0.8563258        | torch.Size([2, 512, 32])         |
| 300     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10                    | input               | torch.float32 |           | -2.3972883   | 2.2078454     | -0.0830330   | 0.8563258        | torch.Size([2, 512, 32])         |
| 300     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10                    | output              | qint8         | 0.0200096 | 0.0000000    | 2.2010570     | 0.3453929    | 0.2567852        | torch.Size([2, 512, 32])         |
| 301     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean    | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.2010570     | 0.3453929    | 0.2567852        | torch.Size([2, 512, 32])         |
| 301     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean    | output              | qint16        | 0.0000157 | 0.2945151    | 0.4151993     | 0.3453945    | 0.0004551        | torch.Size([2, 512, 1])          |
| 302     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub                | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.2010570     | 0.3453929    | 0.2567852        | torch.Size([2, 512, 32])         |
| 302     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub                | input_1             | qint16        | 0.0000157 | 0.2945151    | 0.4151993     | 0.3453945    | 0.0004551        | torch.Size([2, 512, 1])          |
| 302     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub                | output              | qint16        | 0.0000689 | -0.4152296   | 1.8465068     | -0.0000032   | 0.2563325        | torch.Size([2, 512, 32])         |
| 303     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul                | input_0             | qint16        | 0.0000689 | -0.4152296   | 1.8465068     | -0.0000032   | 0.2563325        | torch.Size([2, 512, 32])         |
| 303     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul                | input_1             | qint16        | 0.0000689 | -0.4152296   | 1.8465068     | -0.0000032   | 0.2563325        | torch.Size([2, 512, 32])         |
| 303     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul                | output              | qint16        | 0.0001557 | 0.0000000    | 3.4096045     | 0.2563224    | 0.2380480        | torch.Size([2, 512, 32])         |
| 304     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean      | input_0             | qint16        | 0.0001557 | 0.0000000    | 3.4096045     | 0.2563224    | 0.2380480        | torch.Size([2, 512, 32])         |
| 304     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean      | output              | qint16        | 0.0000123 | 0.1915690    | 0.3029633     | 0.2563227    | 0.0010558        | torch.Size([2, 512, 1])          |
| 305     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt              | input               | qint16        | 0.0000123 | 0.1915690    | 0.3029633     | 0.2563227    | 0.0010558        | torch.Size([2, 512, 1])          |
| 305     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt              | output              | qint16        | 0.0000803 | 1.8167633    | 2.2846675     | 1.9870219    | 0.0157460        | torch.Size([2, 512, 1])          |
| 306     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul            | input_0             | qint16        | 0.0000689 | -0.4152296   | 1.8465068     | -0.0000032   | 0.2563325        | torch.Size([2, 512, 32])         |
| 306     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul            | input_1             | qint16        | 0.0000803 | 1.8167633    | 2.2846675     | 1.9870219    | 0.0157460        | torch.Size([2, 512, 1])          |
| 306     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul            | output              | qint16        | 0.0001207 | -0.8624286   | 3.5597064     | -0.0000039   | 0.9999896        | torch.Size([2, 512, 32])         |
| 307     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant       | input               | torch.float32 |           | 0.8289159    | 1.6609058     | 1.2561316    | 0.0353652        | torch.Size([32])                 |
| 307     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant       | output              | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 308     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul         | input_0             | qint16        | 0.0001207 | -0.8624286   | 3.5597064     | -0.0000039   | 0.9999896        | torch.Size([2, 512, 32])         |
| 308     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul         | input_1             | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 308     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul         | output              | qint16        | 0.0001642 | -1.3925869   | 4.6393833     | -0.0614013   | 1.3995370        | torch.Size([2, 512, 32])         |
| 309     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant         | input               | torch.float32 |           | -0.1194881   | 0.2576658     | 0.0445686    | 0.0113612        | torch.Size([32])                 |
| 309     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant         | output              | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 310     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add           | input_0             | qint16        | 0.0001642 | -1.3925869   | 4.6393833     | -0.0614013   | 1.3995370        | torch.Size([2, 512, 32])         |
| 310     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add           | input_1             | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 310     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add           | output              | qint8         | 0.0385920 | -1.2349430   | 4.5924444     | -0.0164247   | 1.2513903        | torch.Size([2, 512, 32])         |
| 311     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.4784671    | 77.4393997       | torch.Size([2, 512, 11])         |
| 311     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | -0.2435303   | 1.2469482     | 0.3691757    | 0.2347869        | torch.Size([2, 512, 2])          |
| 312     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0                      | input               | qint16        | 0.0018311 | -0.2435303   | 1.2469482     | 0.3691757    | 0.2347869        | torch.Size([2, 512, 2])          |
| 312     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0                      | weight              | torch.float32 |           | -0.7023237   | 0.7394427     | 0.0490668    | 0.1972211        | torch.Size([32, 2])              |
| 312     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0                      | bias                | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1641774        | torch.Size([32])                 |
| 312     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0                      | output              | torch.float32 |           | -1.5332637   | 1.3870511     | -0.0755315   | 0.3073026        | torch.Size([2, 512, 32])         |
| 313     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1                      | input               | torch.float32 |           | -1.5332637   | 1.3870511     | -0.0755315   | 0.3073026        | torch.Size([2, 512, 32])         |
| 313     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1                      | output              | qint8         | 0.0115854 | 0.0000000    | 1.3902526     | 0.1909363    | 0.0771496        | torch.Size([2, 512, 32])         |
| 314     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean      | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.3902526     | 0.1909363    | 0.0771496        | torch.Size([2, 512, 32])         |
| 314     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean      | output              | qint16        | 0.0000105 | 0.1227362    | 0.2758766     | 0.1909369    | 0.0017661        | torch.Size([2, 512, 1])          |
| 315     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub                  | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.3902526     | 0.1909363    | 0.0771496        | torch.Size([2, 512, 32])         |
| 315     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub                  | input_1             | qint16        | 0.0000105 | 0.1227362    | 0.2758766     | 0.1909369    | 0.0017661        | torch.Size([2, 512, 1])          |
| 315     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub                  | output              | qint16        | 0.0000395 | -0.2758917   | 1.1143634     | 0.0000008    | 0.0753843        | torch.Size([2, 512, 32])         |
| 316     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul                  | input_0             | qint16        | 0.0000395 | -0.2758917   | 1.1143634     | 0.0000008    | 0.0753843        | torch.Size([2, 512, 32])         |
| 316     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul                  | input_1             | qint16        | 0.0000395 | -0.2758917   | 1.1143634     | 0.0000008    | 0.0753843        | torch.Size([2, 512, 32])         |
| 316     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul                  | output              | qint16        | 0.0000524 | 0.0000000    | 1.2417800     | 0.0753853    | 0.0191725        | torch.Size([2, 512, 32])         |
| 317     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean        | input_0             | qint16        | 0.0000524 | 0.0000000    | 1.2417800     | 0.0753853    | 0.0191725        | torch.Size([2, 512, 32])         |
| 317     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean        | output              | qint16        | 0.0000071 | 0.0450738    | 0.1349300     | 0.0753855    | 0.0003927        | torch.Size([2, 512, 1])          |
| 318     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt                | input               | qint16        | 0.0000071 | 0.0450738    | 0.1349300     | 0.0753855    | 0.0003927        | torch.Size([2, 512, 1])          |
| 318     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt                | output              | qint16        | 0.0001514 | 2.7222314    | 4.7096615     | 3.7569573    | 0.3359282        | torch.Size([2, 512, 1])          |
| 319     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul              | input_0             | qint16        | 0.0000395 | -0.2758917   | 1.1143634     | 0.0000008    | 0.0753843        | torch.Size([2, 512, 32])         |
| 319     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul              | input_1             | qint16        | 0.0001514 | 2.7222314    | 4.7096615     | 3.7569573    | 0.3359282        | torch.Size([2, 512, 1])          |
| 319     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul              | output              | qint16        | 0.0001206 | -0.7590849   | 3.2368164     | 0.0000134    | 0.9997991        | torch.Size([2, 512, 32])         |
| 320     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant         | input               | torch.float32 |           | 0.8947600    | 1.1748335     | 0.9865216    | 0.0041537        | torch.Size([32])                 |
| 320     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant         | output              | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 321     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul           | input_0             | qint16        | 0.0001206 | -0.7590849   | 3.2368164     | 0.0000134    | 0.9997991        | torch.Size([2, 512, 32])         |
| 321     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul           | input_1             | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 321     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul           | output              | qint16        | 0.0001306 | -0.8635008   | 3.1486237     | -0.0016294   | 0.9736264        | torch.Size([2, 512, 32])         |
| 322     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant           | input               | torch.float32 |           | -0.0879948   | 0.1319895     | 0.0285039    | 0.0034159        | torch.Size([32])                 |
| 322     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant           | output              | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 323     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add             | input_0             | qint16        | 0.0001306 | -0.8635008   | 3.1486237     | -0.0016294   | 0.9736264        | torch.Size([2, 512, 32])         |
| 323     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add             | input_1             | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 323     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add             | output              | qint8         | 0.0302674 | -0.8172185   | 3.0570025     | 0.0265375    | 0.8890995        | torch.Size([2, 512, 32])         |
| 324     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3                      | input               | qint8         | 0.0302674 | -0.8172185   | 3.0570025     | 0.0265375    | 0.8890995        | torch.Size([2, 512, 32])         |
| 324     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3                      | weight              | torch.float32 |           | -1.0547366   | 0.5812716     | 0.0070099    | 0.0187704        | torch.Size([32, 32])             |
| 324     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3                      | bias                | torch.float32 |           | -0.2183180   | 0.1396109     | -0.0140744   | 0.0103446        | torch.Size([32])                 |
| 324     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3                      | output              | torch.float32 |           | -3.4460325   | 1.3586657     | -0.4478617   | 1.2954116        | torch.Size([2, 512, 32])         |
| 325     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4                      | input               | torch.float32 |           | -3.4460325   | 1.3586657     | -0.4478617   | 1.2954116        | torch.Size([2, 512, 32])         |
| 325     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4                      | output              | qint8         | 0.0142143 | 0.0000000    | 1.3645725     | 0.2481967    | 0.1088334        | torch.Size([2, 512, 32])         |
| 326     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean      | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.3645725     | 0.2481967    | 0.1088334        | torch.Size([2, 512, 32])         |
| 326     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean      | output              | qint16        | 0.0000116 | 0.2145448    | 0.3020520     | 0.2481955    | 0.0002823        | torch.Size([2, 512, 1])          |
| 327     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub                  | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.3645725     | 0.2481967    | 0.1088334        | torch.Size([2, 512, 32])         |
| 327     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub                  | input_1             | qint16        | 0.0000116 | 0.2145448    | 0.3020520     | 0.2481955    | 0.0002823        | torch.Size([2, 512, 1])          |
| 327     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub                  | output              | qint16        | 0.0000516 | -0.3020736   | 1.0976235     | 0.0000016    | 0.1085516        | torch.Size([2, 512, 32])         |
| 328     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul                  | input_0             | qint16        | 0.0000516 | -0.3020736   | 1.0976235     | 0.0000016    | 0.1085516        | torch.Size([2, 512, 32])         |
| 328     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul                  | input_1             | qint16        | 0.0000516 | -0.3020736   | 1.0976235     | 0.0000016    | 0.1085516        | torch.Size([2, 512, 32])         |
| 328     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul                  | output              | qint16        | 0.0000889 | 0.0000000    | 1.2048175     | 0.1085491    | 0.0321284        | torch.Size([2, 512, 32])         |
| 329     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean        | input_0             | qint16        | 0.0000889 | 0.0000000    | 1.2048175     | 0.1085491    | 0.0321284        | torch.Size([2, 512, 32])         |
| 329     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean        | output              | qint16        | 0.0000089 | 0.0906531    | 0.1489064     | 0.1085499    | 0.0001101        | torch.Size([2, 512, 1])          |
| 330     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt                | input               | qint16        | 0.0000089 | 0.0906531    | 0.1489064     | 0.1085499    | 0.0001101        | torch.Size([2, 512, 1])          |
| 330     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt                | output              | qint16        | 0.0001114 | 2.5913279    | 3.3211522     | 3.0454154    | 0.0206904        | torch.Size([2, 512, 1])          |
| 331     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul              | input_0             | qint16        | 0.0000516 | -0.3020736   | 1.0976235     | 0.0000016    | 0.1085516        | torch.Size([2, 512, 32])         |
| 331     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul              | input_1             | qint16        | 0.0001114 | 2.5913279    | 3.3211522     | 3.0454154    | 0.0206904        | torch.Size([2, 512, 1])          |
| 331     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul              | output              | qint16        | 0.0001083 | -0.8479192   | 3.1356540     | -0.0000013   | 0.9999298        | torch.Size([2, 512, 32])         |
| 332     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant         | input               | torch.float32 |           | 0.8550419    | 1.1198171     | 0.9805899    | 0.0036729        | torch.Size([32])                 |
| 332     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant         | output              | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 333     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul           | input_0             | qint16        | 0.0001083 | -0.8479192   | 3.1356540     | -0.0000013   | 0.9999298        | torch.Size([2, 512, 32])         |
| 333     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul           | input_1             | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 333     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul           | output              | qint16        | 0.0001106 | -0.9221573   | 3.1250520     | 0.0041233    | 0.9787082        | torch.Size([2, 512, 32])         |
| 334     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant           | input               | torch.float32 |           | -0.0792132   | 0.1045145     | 0.0242442    | 0.0021608        | torch.Size([32])                 |
| 334     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant           | output              | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 335     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add             | input_0             | qint16        | 0.0001106 | -0.9221573   | 3.1250520     | 0.0041233    | 0.9787082        | torch.Size([2, 512, 32])         |
| 335     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add             | input_1             | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 335     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add             | output              | qint8         | 0.0268612 | -0.8595570   | 3.0890329     | 0.0290023    | 0.9203348        | torch.Size([2, 512, 32])         |
| 336     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6                      | input               | qint8         | 0.0268612 | -0.8595570   | 3.0890329     | 0.0290023    | 0.9203348        | torch.Size([2, 512, 32])         |
| 336     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6                      | weight              | torch.float32 |           | -0.4480607   | 0.3678726     | 0.0004879    | 0.0160908        | torch.Size([32, 32])             |
| 336     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6                      | bias                | torch.float32 |           | -0.1861591   | 0.1739754     | 0.0155446    | 0.0137690        | torch.Size([32])                 |
| 336     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6                      | output              | torch.float32 |           | -3.6023743   | 2.3263395     | -0.0869410   | 1.2434390        | torch.Size([2, 512, 32])         |
| 337     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7                      | input               | torch.float32 |           | -3.6023743   | 2.3263395     | -0.0869410   | 1.2434390        | torch.Size([2, 512, 32])         |
| 337     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7                      | output              | qint8         | 0.0183966 | 0.0000000    | 2.3179710     | 0.4114835    | 0.2601100        | torch.Size([2, 512, 32])         |
| 338     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean      | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3179710     | 0.4114835    | 0.2601100        | torch.Size([2, 512, 32])         |
| 338     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean      | output              | qint16        | 0.0000156 | 0.3368896    | 0.5115557     | 0.4114666    | 0.0022155        | torch.Size([2, 512, 1])          |
| 339     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub                  | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3179710     | 0.4114835    | 0.2601100        | torch.Size([2, 512, 32])         |
| 339     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub                  | input_1             | qint16        | 0.0000156 | 0.3368896    | 0.5115557     | 0.4114666    | 0.0022155        | torch.Size([2, 512, 1])          |
| 339     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub                  | output              | qint16        | 0.0000645 | -0.5115817   | 1.8064101     | 0.0000159    | 0.2578942        | torch.Size([2, 512, 32])         |
| 340     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul                  | input_0             | qint16        | 0.0000645 | -0.5115817   | 1.8064101     | 0.0000159    | 0.2578942        | torch.Size([2, 512, 32])         |
| 340     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul                  | input_1             | qint16        | 0.0000645 | -0.5115817   | 1.8064101     | 0.0000159    | 0.2578942        | torch.Size([2, 512, 32])         |
| 340     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul                  | output              | qint16        | 0.0001365 | 0.0000000    | 3.2631259     | 0.2578946    | 0.1030146        | torch.Size([2, 512, 32])         |
| 341     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean        | input_0             | qint16        | 0.0001365 | 0.0000000    | 3.2631259     | 0.2578946    | 0.1030146        | torch.Size([2, 512, 32])         |
| 341     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean        | output              | qint16        | 0.0000123 | 0.1837343    | 0.4040551     | 0.2578778    | 0.0024771        | torch.Size([2, 512, 1])          |
| 342     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt                | input               | qint16        | 0.0000123 | 0.1837343    | 0.4040551     | 0.2578778    | 0.0024771        | torch.Size([2, 512, 1])          |
| 342     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt                | output              | qint16        | 0.0000749 | 1.5731732    | 2.3328609     | 1.9990361    | 0.0428807        | torch.Size([2, 512, 1])          |
| 343     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul              | input_0             | qint16        | 0.0000645 | -0.5115817   | 1.8064101     | 0.0000159    | 0.2578942        | torch.Size([2, 512, 32])         |
| 343     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul              | input_1             | qint16        | 0.0000749 | 1.5731732    | 2.3328609     | 1.9990361    | 0.0428807        | torch.Size([2, 512, 1])          |
| 343     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul              | output              | qint16        | 0.0001267 | -0.8687357   | 2.8536935     | 0.0000296    | 0.9999735        | torch.Size([2, 512, 32])         |
| 344     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant         | input               | torch.float32 |           | 0.8469434    | 1.1090456     | 0.9866461    | 0.0031007        | torch.Size([32])                 |
| 344     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant         | output              | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 345     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul           | input_0             | qint16        | 0.0001267 | -0.8687357   | 2.8536935     | 0.0000296    | 0.9999735        | torch.Size([2, 512, 32])         |
| 345     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul           | input_1             | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 345     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul           | output              | qint16        | 0.0001376 | -0.9479693   | 2.8352411     | 0.0060412    | 0.9878312        | torch.Size([2, 512, 32])         |
| 346     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant           | input               | torch.float32 |           | -0.0626723   | 0.0887763     | 0.0071697    | 0.0011301        | torch.Size([32])                 |
| 346     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant           | output              | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 347     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add             | input_0             | qint16        | 0.0001376 | -0.9479693   | 2.8352411     | 0.0060412    | 0.9878312        | torch.Size([2, 512, 32])         |
| 347     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add             | input_1             | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 347     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add             | output              | qint8         | 0.0326290 | -0.9136118   | 2.8060935     | 0.0129747    | 0.9487683        | torch.Size([2, 512, 32])         |
| 348     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9                      | input               | qint8         | 0.0326290 | -0.9136118   | 2.8060935     | 0.0129747    | 0.9487683        | torch.Size([2, 512, 32])         |
| 348     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9                      | weight              | torch.float32 |           | -0.5597425   | 0.7001730     | 0.0015679    | 0.0160348        | torch.Size([32, 32])             |
| 348     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9                      | bias                | torch.float32 |           | -0.1810580   | 0.1736723     | -0.0279047   | 0.0091159        | torch.Size([32])                 |
| 348     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9                      | output              | torch.float32 |           | -3.9244084   | 3.4994090     | -0.1459901   | 1.2510624        | torch.Size([2, 512, 32])         |
| 349     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10                     | input               | torch.float32 |           | -3.9244084   | 3.4994090     | -0.1459901   | 1.2510624        | torch.Size([2, 512, 32])         |
| 349     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10                     | output              | qint8         | 0.0271917 | 0.0000000    | 3.4533420     | 0.3052839    | 0.3994084        | torch.Size([2, 512, 32])         |
| 350     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean     | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.4533420     | 0.3052839    | 0.3994084        | torch.Size([2, 512, 32])         |
| 350     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean     | output              | qint16        | 0.0000121 | 0.2812601    | 0.3687864     | 0.3052831    | 0.0005791        | torch.Size([2, 512, 1])          |
| 351     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub                 | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.4533420     | 0.3052839    | 0.3994084        | torch.Size([2, 512, 32])         |
| 351     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub                 | input_1             | qint16        | 0.0000121 | 0.2812601    | 0.3687864     | 0.3052831    | 0.0005791        | torch.Size([2, 512, 1])          |
| 351     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub                 | output              | qint16        | 0.0000976 | -0.3687446   | 3.1695635     | -0.0000042   | 0.3988320        | torch.Size([2, 512, 32])         |
| 352     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul                 | input_0             | qint16        | 0.0000976 | -0.3687446   | 3.1695635     | -0.0000042   | 0.3988320        | torch.Size([2, 512, 32])         |
| 352     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul                 | input_1             | qint16        | 0.0000976 | -0.3687446   | 3.1695635     | -0.0000042   | 0.3988320        | torch.Size([2, 512, 32])         |
| 352     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul                 | output              | qint16        | 0.0003122 | 0.0000000    | 10.0461435    | 0.3987944    | 1.8649199        | torch.Size([2, 512, 32])         |
| 353     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean       | input_0             | qint16        | 0.0003122 | 0.0000000    | 10.0461435    | 0.3987944    | 1.8649199        | torch.Size([2, 512, 32])         |
| 353     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean       | output              | qint16        | 0.0000136 | 0.2788074    | 0.4466016     | 0.3987779    | 0.0012478        | torch.Size([2, 512, 1])          |
| 354     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt               | input               | qint16        | 0.0000136 | 0.2788074    | 0.4466016     | 0.3987779    | 0.0012478        | torch.Size([2, 512, 1])          |
| 354     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt               | output              | qint16        | 0.0000802 | 1.4963876    | 1.8938580     | 1.5887880    | 0.0061255        | torch.Size([2, 512, 1])          |
| 355     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul             | input_0             | qint16        | 0.0000976 | -0.3687446   | 3.1695635     | -0.0000042   | 0.3988320        | torch.Size([2, 512, 32])         |
| 355     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul             | input_1             | qint16        | 0.0000802 | 1.4963876    | 1.8938580     | 1.5887880    | 0.0061255        | torch.Size([2, 512, 1])          |
| 355     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul             | output              | qint16        | 0.0001482 | -0.6766537   | 4.8548083     | -0.0000084   | 1.0001154        | torch.Size([2, 512, 32])         |
| 356     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant        | input               | torch.float32 |           | 0.8363900    | 1.4688344     | 1.0570920    | 0.0396277        | torch.Size([32])                 |
| 356     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant        | output              | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 357     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul          | input_0             | qint16        | 0.0001482 | -0.6766537   | 4.8548083     | -0.0000084   | 1.0001154        | torch.Size([2, 512, 32])         |
| 357     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul          | input_1             | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 357     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul          | output              | qint16        | 0.0001637 | -0.9939561   | 4.8851361     | -0.0322369   | 0.9901759        | torch.Size([2, 512, 32])         |
| 358     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant          | input               | torch.float32 |           | -0.1492936   | 0.2842544     | 0.0803791    | 0.0109446        | torch.Size([32])                 |
| 358     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant          | output              | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 359     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add            | input_0             | qint16        | 0.0001637 | -0.9939561   | 4.8851361     | -0.0322369   | 0.9901759        | torch.Size([2, 512, 32])         |
| 359     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add            | input_1             | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 359     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add            | output              | qint8         | 0.0373904 | -0.8225893   | 4.7485838     | 0.0484085    | 0.8976397        | torch.Size([2, 512, 32])         |
| 360     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.4784671    | 77.4393997       | torch.Size([2, 512, 11])         |
| 360     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | -0.2874756   | 0.2874756     | 0.0021315    | 0.0049384        | torch.Size([2, 512, 3])          |
| 361     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0                      | input               | qint16        | 0.0018311 | -0.2874756   | 0.2874756     | 0.0021315    | 0.0049384        | torch.Size([2, 512, 3])          |
| 361     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0                      | weight              | torch.float32 |           | -1.0475703   | 0.9848034     | -0.0054673   | 0.2080412        | torch.Size([64, 3])              |
| 361     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0                      | bias                | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1294928        | torch.Size([64])                 |
| 361     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0                      | output              | torch.float32 |           | -0.9037883   | 0.7009729     | -0.0510568   | 0.1304925        | torch.Size([2, 512, 64])         |
| 362     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1                      | input               | torch.float32 |           | -0.9037883   | 0.7009729     | -0.0510568   | 0.1304925        | torch.Size([2, 512, 64])         |
| 362     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1                      | output              | qint8         | 0.0729980 | 0.0000000    | 0.7299801     | 0.1287312    | 0.0289556        | torch.Size([2, 512, 64])         |
| 363     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean      | input_0             | qint8         | 0.0729980 | 0.0000000    | 0.7299801     | 0.1287312    | 0.0289556        | torch.Size([2, 512, 64])         |
| 363     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean      | output              | qint16        | 0.0000685 | 0.1197747    | 0.1448534     | 0.1287262    | 0.0000138        | torch.Size([2, 512, 1])          |
| 364     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub                  | input_0             | qint8         | 0.0729980 | 0.0000000    | 0.7299801     | 0.1287312    | 0.0289556        | torch.Size([2, 512, 64])         |
| 364     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub                  | input_1             | qint16        | 0.0000685 | 0.1197747    | 0.1448534     | 0.1287262    | 0.0000138        | torch.Size([2, 512, 1])          |
| 364     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub                  | output              | qint16        | 0.0002902 | -0.1448141   | 0.5850605     | 0.0000158    | 0.0289417        | torch.Size([2, 512, 64])         |
| 365     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul                  | input_0             | qint16        | 0.0002902 | -0.1448141   | 0.5850605     | 0.0000158    | 0.0289417        | torch.Size([2, 512, 64])         |
| 365     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul                  | input_1             | qint16        | 0.0002902 | -0.1448141   | 0.5850605     | 0.0000158    | 0.0289417        | torch.Size([2, 512, 64])         |
| 365     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul                  | output              | qint16        | 0.0029551 | 0.0000000    | 0.3427911     | 0.0291831    | 0.0016132        | torch.Size([2, 512, 64])         |
| 366     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean        | input_0             | qint16        | 0.0029551 | 0.0000000    | 0.3427911     | 0.0291831    | 0.0016132        | torch.Size([2, 512, 64])         |
| 366     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean        | output              | qint16        | 0.0003723 | 0.0245721    | 0.0413258     | 0.0292187    | 0.0000057        | torch.Size([2, 512, 1])          |
| 367     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt                | input               | qint16        | 0.0003723 | 0.0245721    | 0.0413258     | 0.0292187    | 0.0000057        | torch.Size([2, 512, 1])          |
| 367     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt                | output              | qint16        | 0.0001859 | 4.9185348    | 6.0927577     | 5.8535910    | 0.0436259        | torch.Size([2, 512, 1])          |
| 368     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul              | input_0             | qint16        | 0.0002902 | -0.1448141   | 0.5850605     | 0.0000158    | 0.0289417        | torch.Size([2, 512, 64])         |
| 368     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul              | input_1             | qint16        | 0.0001859 | 4.9185348    | 6.0927577     | 5.8535910    | 0.0436259        | torch.Size([2, 512, 1])          |
| 368     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul              | output              | qint16        | 0.0001160 | -0.8251068   | 3.0470743     | 0.0000893    | 0.9872203        | torch.Size([2, 512, 64])         |
| 369     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant         | input               | torch.float32 |           | 0.8691067    | 1.1281288     | 0.9794419    | 0.0036082        | torch.Size([64])                 |
| 369     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant         | output              | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 370     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul           | input_0             | qint16        | 0.0001160 | -0.8251068   | 3.0470743     | 0.0000893    | 0.9872203        | torch.Size([2, 512, 64])         |
| 370     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul           | input_1             | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 370     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul           | output              | qint16        | 0.0001189 | -0.9213524   | 3.0417347     | 0.0103698    | 0.9411226        | torch.Size([2, 512, 64])         |
| 371     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant           | input               | torch.float32 |           | -0.1133662   | 0.1493634     | 0.0304540    | 0.0046508        | torch.Size([64])                 |
| 371     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant           | output              | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 372     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add             | input_0             | qint16        | 0.0001189 | -0.9213524   | 3.0417347     | 0.0103698    | 0.9411226        | torch.Size([2, 512, 64])         |
| 372     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add             | input_1             | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 372     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add             | output              | qint8         | 0.0267452 | -0.9093367   | 2.9954622     | 0.0409520    | 0.8374090        | torch.Size([2, 512, 64])         |
| 373     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3                      | input               | qint8         | 0.0267452 | -0.9093367   | 2.9954622     | 0.0409520    | 0.8374090        | torch.Size([2, 512, 64])         |
| 373     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3                      | weight              | torch.float32 |           | -0.4523612   | 0.4813256     | -0.0014562   | 0.0096743        | torch.Size([64, 64])             |
| 373     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3                      | bias                | torch.float32 |           | -0.1183558   | 0.2243176     | 0.0150283    | 0.0049289        | torch.Size([64])                 |
| 373     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3                      | output              | torch.float32 |           | -5.4968929   | 2.6026821     | -0.4481775   | 2.8092313        | torch.Size([2, 512, 64])         |
| 374     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4                      | input               | torch.float32 |           | -5.4968929   | 2.6026821     | -0.4481775   | 2.8092313        | torch.Size([2, 512, 64])         |
| 374     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4                      | output              | qint8         | 0.0337689 | 0.0000000    | 2.6002049     | 0.3940804    | 0.2837168        | torch.Size([2, 512, 64])         |
| 375     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean      | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.6002049     | 0.3940804    | 0.2837168        | torch.Size([2, 512, 64])         |
| 375     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean      | output              | qint16        | 0.0000195 | 0.3281929    | 0.4849016     | 0.3940820    | 0.0009381        | torch.Size([2, 512, 1])          |
| 376     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub                  | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.6002049     | 0.3940804    | 0.2837168        | torch.Size([2, 512, 64])         |
| 376     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub                  | input_1             | qint16        | 0.0000195 | 0.3281929    | 0.4849016     | 0.3940820    | 0.0009381        | torch.Size([2, 512, 1])          |
| 376     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub                  | output              | qint16        | 0.0001376 | -0.4848540   | 2.1290638     | -0.0000052   | 0.2827820        | torch.Size([2, 512, 64])         |
| 377     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul                  | input_0             | qint16        | 0.0001376 | -0.4848540   | 2.1290638     | -0.0000052   | 0.2827820        | torch.Size([2, 512, 64])         |
| 377     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul                  | input_1             | qint16        | 0.0001376 | -0.4848540   | 2.1290638     | -0.0000052   | 0.2827820        | torch.Size([2, 512, 64])         |
| 377     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul                  | output              | qint16        | 0.0006236 | 0.0000000    | 4.5332198     | 0.2828189    | 0.2831316        | torch.Size([2, 512, 64])         |
| 378     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean        | input_0             | qint16        | 0.0006236 | 0.0000000    | 4.5332198     | 0.2828189    | 0.2831316        | torch.Size([2, 512, 64])         |
| 378     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean        | output              | qint16        | 0.0000322 | 0.2177763    | 0.4658734     | 0.2828158    | 0.0026705        | torch.Size([2, 512, 1])          |
| 379     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt                | input               | qint16        | 0.0000322 | 0.2177763    | 0.4658734     | 0.2828158    | 0.0026705        | torch.Size([2, 512, 1])          |
| 379     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt                | output              | qint16        | 0.0001060 | 1.4650909    | 2.1427789     | 1.9024675    | 0.0267805        | torch.Size([2, 512, 1])          |
| 380     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul              | input_0             | qint16        | 0.0001376 | -0.4848540   | 2.1290638     | -0.0000052   | 0.2827820        | torch.Size([2, 512, 64])         |
| 380     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul              | input_1             | qint16        | 0.0001060 | 1.4650909    | 2.1427789     | 1.9024675    | 0.0267805        | torch.Size([2, 512, 1])          |
| 380     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul              | output              | qint16        | 0.0001466 | -0.8497055   | 3.8086455     | -0.0000066   | 0.9997991        | torch.Size([2, 512, 64])         |
| 381     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant         | input               | torch.float32 |           | 0.8333027    | 1.1388558     | 0.9778216    | 0.0042186        | torch.Size([64])                 |
| 381     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant         | output              | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 382     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul           | input_0             | qint16        | 0.0001466 | -0.8497055   | 3.8086455     | -0.0000066   | 0.9997991        | torch.Size([2, 512, 64])         |
| 382     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul           | input_1             | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 382     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul           | output              | qint16        | 0.0001474 | -0.9089167   | 4.1559534     | 0.0080721    | 1.0155169        | torch.Size([2, 512, 64])         |
| 383     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant           | input               | torch.float32 |           | -0.0757831   | 0.1161729     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 383     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant           | output              | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 384     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add             | input_0             | qint16        | 0.0001474 | -0.9089167   | 4.1559534     | 0.0080721    | 1.0155169        | torch.Size([2, 512, 64])         |
| 384     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add             | input_1             | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 384     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add             | output              | qint8         | 0.0350382 | -0.9109923   | 4.1345034     | 0.0247420    | 0.9656427        | torch.Size([2, 512, 64])         |
| 385     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6                      | input               | qint8         | 0.0350382 | -0.9109923   | 4.1345034     | 0.0247420    | 0.9656427        | torch.Size([2, 512, 64])         |
| 385     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6                      | weight              | torch.float32 |           | -0.5707353   | 0.3620123     | -0.0010372   | 0.0088292        | torch.Size([64, 64])             |
| 385     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6                      | bias                | torch.float32 |           | -0.1720246   | 0.1340137     | -0.0235144   | 0.0050507        | torch.Size([64])                 |
| 385     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6                      | output              | torch.float32 |           | -5.4402614   | 3.7159994     | -0.3619820   | 2.4976146        | torch.Size([2, 512, 64])         |
| 386     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7                      | input               | torch.float32 |           | -5.4402614   | 3.7159994     | -0.3619820   | 2.4976146        | torch.Size([2, 512, 64])         |
| 386     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7                      | output              | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4756055    | 0.6250108        | torch.Size([2, 512, 64])         |
| 387     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean      | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4756055    | 0.6250108        | torch.Size([2, 512, 64])         |
| 387     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean      | output              | qint16        | 0.0000166 | 0.3907624    | 0.5126176     | 0.4756073    | 0.0005862        | torch.Size([2, 512, 1])          |
| 388     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub                  | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4756055    | 0.6250108        | torch.Size([2, 512, 64])         |
| 388     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub                  | input_1             | qint16        | 0.0000166 | 0.3907624    | 0.5126176     | 0.4756073    | 0.0005862        | torch.Size([2, 512, 1])          |
| 388     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub                  | output              | qint16        | 0.0000988 | -0.5126588   | 3.1724162     | 0.0000017    | 0.6244214        | torch.Size([2, 512, 64])         |
| 389     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul                  | input_0             | qint16        | 0.0000988 | -0.5126588   | 3.1724162     | 0.0000017    | 0.6244214        | torch.Size([2, 512, 64])         |
| 389     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul                  | input_1             | qint16        | 0.0000988 | -0.5126588   | 3.1724162     | 0.0000017    | 0.6244214        | torch.Size([2, 512, 64])         |
| 389     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul                  | output              | qint16        | 0.0003201 | 0.0000000    | 10.0641823    | 0.6244155    | 1.6490788        | torch.Size([2, 512, 64])         |
| 390     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean        | input_0             | qint16        | 0.0003201 | 0.0000000    | 10.0641823    | 0.6244155    | 1.6490788        | torch.Size([2, 512, 64])         |
| 390     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean        | output              | qint16        | 0.0000230 | 0.4029179    | 0.7531324     | 0.6243941    | 0.0020645        | torch.Size([2, 512, 1])          |
| 391     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt                | input               | qint16        | 0.0000230 | 0.4029179    | 0.7531324     | 0.6243941    | 0.0020645        | torch.Size([2, 512, 1])          |
| 391     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt                | output              | qint16        | 0.0000608 | 1.1523145    | 1.5754030     | 1.2681923    | 0.0024162        | torch.Size([2, 512, 1])          |
| 392     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul              | input_0             | qint16        | 0.0000988 | -0.5126588   | 3.1724162     | 0.0000017    | 0.6244214        | torch.Size([2, 512, 64])         |
| 392     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul              | input_1             | qint16        | 0.0000608 | 1.1523145    | 1.5754030     | 1.2681923    | 0.0024162        | torch.Size([2, 512, 1])          |
| 392     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul              | output              | qint16        | 0.0001598 | -0.6779179   | 4.0384150     | 0.0000117    | 1.0000191        | torch.Size([2, 512, 64])         |
| 393     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant         | input               | torch.float32 |           | 0.8006503    | 1.1495361     | 0.9818506    | 0.0032003        | torch.Size([64])                 |
| 393     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant         | output              | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 394     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul           | input_0             | qint16        | 0.0001598 | -0.6779179   | 4.0384150     | 0.0000117    | 1.0000191        | torch.Size([2, 512, 64])         |
| 394     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul           | input_1             | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 394     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul           | output              | qint16        | 0.0001633 | -0.7793021   | 4.1755443     | 0.0093293    | 1.0097296        | torch.Size([2, 512, 64])         |
| 395     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant           | input               | torch.float32 |           | -0.0461140   | 0.1411197     | 0.0132828    | 0.0015701        | torch.Size([64])                 |
| 395     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant           | output              | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 396     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add             | input_0             | qint16        | 0.0001633 | -0.7793021   | 4.1755443     | 0.0093293    | 1.0097296        | torch.Size([2, 512, 64])         |
| 396     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add             | input_1             | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 396     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add             | output              | qint8         | 0.0387038 | -0.7740757   | 4.1800089     | 0.0224547    | 0.9923947        | torch.Size([2, 512, 64])         |
| 397     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9                      | input               | qint8         | 0.0387038 | -0.7740757   | 4.1800089     | 0.0224547    | 0.9923947        | torch.Size([2, 512, 64])         |
| 397     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9                      | weight              | torch.float32 |           | -0.5701389   | 0.3477888     | 0.0006721    | 0.0085883        | torch.Size([64, 64])             |
| 397     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9                      | bias                | torch.float32 |           | -0.1677032   | 0.1709885     | -0.0237130   | 0.0070098        | torch.Size([64])                 |
| 397     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9                      | output              | torch.float32 |           | -4.5578809   | 7.1779356     | -0.5412493   | 1.9536922        | torch.Size([2, 512, 64])         |
| 398     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10                     | input               | torch.float32 |           | -4.5578809   | 7.1779356     | -0.5412493   | 1.9536922        | torch.Size([2, 512, 64])         |
| 398     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10                     | output              | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2630418    | 0.7091212        | torch.Size([2, 512, 64])         |
| 399     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean     | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2630418    | 0.7091212        | torch.Size([2, 512, 64])         |
| 399     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean     | output              | qint16        | 0.0000138 | 0.2082230    | 0.3452767     | 0.2630456    | 0.0015422        | torch.Size([2, 512, 1])          |
| 400     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub                 | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2630418    | 0.7091212        | torch.Size([2, 512, 64])         |
| 400     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub                 | input_1             | qint16        | 0.0000138 | 0.2082230    | 0.3452767     | 0.2630456    | 0.0015422        | torch.Size([2, 512, 1])          |
| 400     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub                 | output              | qint16        | 0.0002137 | -0.3452823   | 6.9300041     | -0.0000040   | 0.7075820        | torch.Size([2, 512, 64])         |
| 401     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul                 | input_0             | qint16        | 0.0002137 | -0.3452823   | 6.9300041     | -0.0000040   | 0.7075820        | torch.Size([2, 512, 64])         |
| 401     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul                 | input_1             | qint16        | 0.0002137 | -0.3452823   | 6.9300041     | -0.0000040   | 0.7075820        | torch.Size([2, 512, 64])         |
| 401     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul                 | output              | qint16        | 0.0014959 | 0.0000000    | 48.0252304    | 0.7076897    | 21.4392757       | torch.Size([2, 512, 64])         |
| 402     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean       | input_0             | qint16        | 0.0014959 | 0.0000000    | 48.0252304    | 0.7076897    | 21.4392757       | torch.Size([2, 512, 64])         |
| 402     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean       | output              | qint16        | 0.0000253 | 0.3563285    | 0.8246819     | 0.7076918    | 0.0126876        | torch.Size([2, 512, 1])          |
| 403     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt               | input               | qint16        | 0.0000253 | 0.3563285    | 0.8246819     | 0.7076918    | 0.0126876        | torch.Size([2, 512, 1])          |
| 403     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt               | output              | qint16        | 0.0000680 | 1.1011648    | 1.6751828     | 1.2018880    | 0.0120247        | torch.Size([2, 512, 1])          |
| 404     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul             | input_0             | qint16        | 0.0002137 | -0.3452823   | 6.9300041     | -0.0000040   | 0.7075820        | torch.Size([2, 512, 64])         |
| 404     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul             | input_1             | qint16        | 0.0000680 | 1.1011648    | 1.6751828     | 1.2018880    | 0.0120247        | torch.Size([2, 512, 1])          |
| 404     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul             | output              | qint16        | 0.0002366 | -0.5490822   | 7.7041845     | 0.0000183    | 0.9998628        | torch.Size([2, 512, 64])         |
| 405     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant        | input               | torch.float32 |           | 0.7297163    | 1.2824999     | 1.0134131    | 0.0161719        | torch.Size([64])                 |
| 405     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant        | output              | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 406     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul          | input_0             | qint16        | 0.0002366 | -0.5490822   | 7.7041845     | 0.0000183    | 0.9998628        | torch.Size([2, 512, 64])         |
| 406     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul          | input_1             | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 406     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul          | output              | qint16        | 0.0001954 | -0.6829096   | 5.6219382     | -0.0364687   | 0.6569331        | torch.Size([2, 512, 64])         |
| 407     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant          | input               | torch.float32 |           | -0.2385408   | 0.3192695     | 0.0900053    | 0.0129013        | torch.Size([64])                 |
| 407     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant          | output              | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 408     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add            | input_0             | qint16        | 0.0001954 | -0.6829096   | 5.6219382     | -0.0364687   | 0.6569331        | torch.Size([2, 512, 64])         |
| 408     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add            | input_1             | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 408     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add            | output              | qint8         | 0.0462055 | -0.6930832   | 5.4060483     | 0.0534576    | 0.5756887        | torch.Size([2, 512, 64])         |
| 409     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_0             | qint8         | 0.0587279 | -0.8221908   | 7.4584455     | 0.0836274    | 0.8717202        | torch.Size([2, 512, 128])        |
| 409     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_1             | qint8         | 0.0385920 | -1.2349430   | 4.5924444     | -0.0164247   | 1.2513903        | torch.Size([2, 512, 32])         |
| 409     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_2             | qint8         | 0.0373904 | -0.8225893   | 4.7485838     | 0.0484085    | 0.8976397        | torch.Size([2, 512, 32])         |
| 409     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_3             | qint8         | 0.0462055 | -0.6930832   | 5.4060483     | 0.0534576    | 0.5756887        | torch.Size([2, 512, 64])         |
| 409     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | output              | qint8         | 0.0569265 | -1.2523835   | 7.2296681     | 0.0620206    | 0.8451077        | torch.Size([2, 512, 256])        |
| 410     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 11])         |
| 410     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 3])          |
| 411     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(1)                   | input               | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 3])          |
| 411     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(1)                   | weight              | torch.float32 |           | -0.9216561   | 0.9167990     | -0.0046354   | 0.1373587        | torch.Size([128, 3])             |
| 411     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(1)                   | bias                | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3650480        | torch.Size([128])                |
| 411     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(1)                   | output              | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3622016        | torch.Size([2, 256, 128])        |
| 412     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(1)                   | input               | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3622016        | torch.Size([2, 256, 128])        |
| 412     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(1)                   | output              | qint8         | 0.2590872 | 0.0000000    | 1.0363487     | 0.2509907    | 0.1121627        | torch.Size([2, 256, 128])        |
| 413     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(1)   | input_0             | qint8         | 0.2590872 | 0.0000000    | 1.0363487     | 0.2509907    | 0.1121627        | torch.Size([2, 256, 128])        |
| 413     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(1)   | output              | qint16        | 0.0002498 | 0.2510299    | 0.2510299     | 0.2510299    | 0.0000000        | torch.Size([2, 256, 1])          |
| 414     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(1)               | input_0             | qint8         | 0.2590872 | 0.0000000    | 1.0363487     | 0.2509907    | 0.1121627        | torch.Size([2, 256, 128])        |
| 414     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(1)               | input_1             | qint16        | 0.0002498 | 0.2510299    | 0.2510299     | 0.2510299    | 0.0000000        | torch.Size([2, 256, 1])          |
| 414     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(1)               | output              | qint16        | 0.0008924 | -0.2507719   | 0.7853354     | 0.0001184    | 0.1121189        | torch.Size([2, 256, 128])        |
| 415     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(1)               | input_0             | qint16        | 0.0008924 | -0.2507719   | 0.7853354     | 0.0001184    | 0.1121189        | torch.Size([2, 256, 128])        |
| 415     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(1)               | input_1             | qint16        | 0.0008924 | -0.2507719   | 0.7853354     | 0.0001184    | 0.1121189        | torch.Size([2, 256, 128])        |
| 415     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(1)               | output              | qint16        | 0.0261809 | 0.0000000    | 0.6283404     | 0.1090187    | 0.0188046        | torch.Size([2, 256, 128])        |
| 416     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(1)     | input_0             | qint16        | 0.0261809 | 0.0000000    | 0.6283404     | 0.1090187    | 0.0188046        | torch.Size([2, 256, 128])        |
| 416     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(1)     | output              | qint16        | 0.0029473 | 0.1090503    | 0.1090503     | 0.1090503    | 0.0000000        | torch.Size([2, 256, 1])          |
| 417     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(1)             | input               | qint16        | 0.0029473 | 0.1090503    | 0.1090503     | 0.1090503    | 0.0000000        | torch.Size([2, 256, 1])          |
| 417     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(1)             | output              | qint16        | 0.0000538 | 1.7621539    | 1.7621539     | 1.7621539    | 0.0000000        | torch.Size([2, 256, 1])          |
| 418     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(1)           | input_0             | qint16        | 0.0008924 | -0.2507719   | 0.7853354     | 0.0001184    | 0.1121189        | torch.Size([2, 256, 128])        |
| 418     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(1)           | input_1             | qint16        | 0.0000538 | 1.7621539    | 1.7621539     | 1.7621539    | 0.0000000        | torch.Size([2, 256, 1])          |
| 418     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(1)           | output              | qint16        | 0.0001192 | -0.4419246   | 1.3838307     | 0.0001937    | 0.3481574        | torch.Size([2, 256, 128])        |
| 419     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(1)      | input               | torch.float32 |           | 0.7278287    | 1.3287159     | 0.9627235    | 0.0086877        | torch.Size([128])                |
| 419     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(1)      | output              | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 420     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(1)        | input_0             | qint16        | 0.0001192 | -0.4419246   | 1.3838307     | 0.0001937    | 0.3481574        | torch.Size([2, 256, 128])        |
| 420     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(1)        | input_1             | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 420     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(1)        | output              | qint16        | 0.0001208 | -0.5006195   | 1.8387047     | 0.0171041    | 0.3609191        | torch.Size([2, 256, 128])        |
| 421     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(1)        | input               | torch.float32 |           | -0.0562531   | 0.0804052     | 0.0088204    | 0.0005294        | torch.Size([128])                |
| 421     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(1)        | output              | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 422     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(1)          | input_0             | qint16        | 0.0001208 | -0.5006195   | 1.8387047     | 0.0171041    | 0.3609191        | torch.Size([2, 256, 128])        |
| 422     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(1)          | input_1             | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 422     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(1)          | output              | qint8         | 0.0271288 | -0.4883187   | 1.8447596     | 0.0260691    | 0.3521270        | torch.Size([2, 256, 128])        |
| 423     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(1)                   | input               | qint8         | 0.0271288 | -0.4883187   | 1.8447596     | 0.0260691    | 0.3521270        | torch.Size([2, 256, 128])        |
| 423     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(1)                   | weight              | torch.float32 |           | -0.3750711   | 0.3968706     | 0.0019093    | 0.0048458        | torch.Size([128, 128])           |
| 423     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(1)                   | bias                | torch.float32 |           | -0.1863807   | 0.1385574     | -0.0156467   | 0.0047256        | torch.Size([128])                |
| 423     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(1)                   | output              | torch.float32 |           | -3.9203987   | 4.9219027     | -0.0012380   | 3.3247855        | torch.Size([2, 256, 128])        |
| 424     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(1)                   | input               | torch.float32 |           | -3.9203987   | 4.9219027     | -0.0012380   | 3.3247855        | torch.Size([2, 256, 128])        |
| 424     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(1)                   | output              | qint8         | 0.0433301 | 0.0000000    | 4.9396281     | 0.7768946    | 1.1029332        | torch.Size([2, 256, 128])        |
| 425     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(1)   | input_0             | qint8         | 0.0433301 | 0.0000000    | 4.9396281     | 0.7768946    | 1.1029332        | torch.Size([2, 256, 128])        |
| 425     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(1)   | output              | qint16        | 0.0000298 | 0.7769023    | 0.7769023     | 0.7769023    | 0.0000000        | torch.Size([2, 256, 1])          |
| 426     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(1)               | input_0             | qint8         | 0.0433301 | 0.0000000    | 4.9396281     | 0.7768946    | 1.1029332        | torch.Size([2, 256, 128])        |
| 426     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(1)               | input_1             | qint16        | 0.0000298 | 0.7769023    | 0.7769023     | 0.7769023    | 0.0000000        | torch.Size([2, 256, 1])          |
| 426     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(1)               | output              | qint16        | 0.0001641 | -0.7769432   | 4.1627350     | -0.0000269   | 1.1029693        | torch.Size([2, 256, 128])        |
| 427     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(1)               | input_0             | qint16        | 0.0001641 | -0.7769432   | 4.1627350     | -0.0000269   | 1.1029693        | torch.Size([2, 256, 128])        |
| 427     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(1)               | input_1             | qint16        | 0.0001641 | -0.7769432   | 4.1627350     | -0.0000269   | 1.1029693        | torch.Size([2, 256, 128])        |
| 427     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(1)               | output              | qint16        | 0.0008856 | 0.0000000    | 17.3283520    | 1.1031358    | 4.0146265        | torch.Size([2, 256, 128])        |
| 428     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(1)     | input_0             | qint16        | 0.0008856 | 0.0000000    | 17.3283520    | 1.1031358    | 4.0146265        | torch.Size([2, 256, 128])        |
| 428     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(1)     | output              | qint16        | 0.0000499 | 1.1031458    | 1.1031458     | 1.1031458    | 0.0000000        | torch.Size([2, 256, 1])          |
| 429     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(1)             | input               | qint16        | 0.0000499 | 1.1031458    | 1.1031458     | 1.1031458    | 0.0000000        | torch.Size([2, 256, 1])          |
| 429     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(1)             | output              | qint16        | 0.0000553 | 0.9521034    | 0.9521034     | 0.9521034    | 0.0000000        | torch.Size([2, 256, 1])          |
| 430     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(1)           | input_0             | qint16        | 0.0001641 | -0.7769432   | 4.1627350     | -0.0000269   | 1.1029693        | torch.Size([2, 256, 128])        |
| 430     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(1)           | input_1             | qint16        | 0.0000553 | 0.9521034    | 0.9521034     | 0.9521034    | 0.0000000        | torch.Size([2, 256, 1])          |
| 430     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(1)           | output              | qint16        | 0.0002164 | -0.7398219   | 3.9633932     | -0.0000592   | 0.9999142        | torch.Size([2, 256, 128])        |
| 431     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(1)      | input               | torch.float32 |           | 0.5925044    | 1.4726304     | 0.9182085    | 0.0175060        | torch.Size([128])                |
| 431     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(1)      | output              | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 432     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(1)        | input_0             | qint16        | 0.0002164 | -0.7398219   | 3.9633932     | -0.0000592   | 0.9999142        | torch.Size([2, 256, 128])        |
| 432     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(1)        | input_1             | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 432     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(1)        | output              | qint16        | 0.0002127 | -0.8656419   | 5.8365955     | 0.0584877    | 1.1560211        | torch.Size([2, 256, 128])        |
| 433     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(1)        | input               | torch.float32 |           | -0.0644210   | 0.2426097     | 0.0318023    | 0.0030999        | torch.Size([128])                |
| 433     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(1)        | output              | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 434     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(1)          | input_0             | qint16        | 0.0002127 | -0.8656419   | 5.8365955     | 0.0584877    | 1.1560211        | torch.Size([2, 256, 128])        |
| 434     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(1)          | input_1             | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 434     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(1)          | output              | qint8         | 0.0521229 | -0.8860894   | 5.7856431     | 0.0891790    | 1.1213394        | torch.Size([2, 256, 128])        |
| 435     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(1)                   | input               | qint8         | 0.0521229 | -0.8860894   | 5.7856431     | 0.0891790    | 1.1213394        | torch.Size([2, 256, 128])        |
| 435     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(1)                   | weight              | torch.float32 |           | -0.7504157   | 0.4182976     | -0.0024651   | 0.0052447        | torch.Size([128, 128])           |
| 435     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(1)                   | bias                | torch.float32 |           | -0.1397866   | 0.1210779     | 0.0064616    | 0.0040949        | torch.Size([128])                |
| 435     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(1)                   | output              | torch.float32 |           | -10.2708120  | 7.2193279     | -0.0062689   | 13.4972267       | torch.Size([2, 256, 128])        |
| 436     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(1)                   | input               | torch.float32 |           | -10.2708120  | 7.2193279     | -0.0062689   | 13.4972267       | torch.Size([2, 256, 128])        |
| 436     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(1)                   | output              | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 1.4695568    | 3.1467936        | torch.Size([2, 256, 128])        |
| 437     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(1)   | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 1.4695568    | 3.1467936        | torch.Size([2, 256, 128])        |
| 437     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(1)   | output              | qint16        | 0.0000319 | 1.0447656    | 1.0447656     | 1.0447656    | 0.0000000        | torch.Size([2, 256, 1])          |
| 438     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(1)               | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 1.4695568    | 3.1467936        | torch.Size([2, 256, 128])        |
| 438     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(1)               | input_1             | qint16        | 0.0000319 | 1.0447656    | 1.0447656     | 1.0447656    | 0.0000000        | torch.Size([2, 256, 1])          |
| 438     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(1)               | output              | qint16        | 0.0001844 | -1.0447190   | 5.2918410     | 0.4248075    | 3.1467285        | torch.Size([2, 256, 128])        |
| 439     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(1)               | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.2918410     | 0.4248075    | 3.1467285        | torch.Size([2, 256, 128])        |
| 439     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(1)               | input_1             | qint16        | 0.0001844 | -1.0447190   | 5.2918410     | 0.4248075    | 3.1467285        | torch.Size([2, 256, 128])        |
| 439     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(1)               | output              | qint16        | 0.0011151 | 0.0022303    | 28.0033684    | 3.3272693    | 30.1048164       | torch.Size([2, 256, 128])        |
| 440     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(1)     | input_0             | qint16        | 0.0011151 | 0.0022303    | 28.0033684    | 3.3272693    | 30.1048164       | torch.Size([2, 256, 128])        |
| 440     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(1)     | output              | qint16        | 0.0000656 | 2.1495371    | 2.1495371     | 2.1495371    | 0.0000000        | torch.Size([2, 256, 1])          |
| 441     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(1)             | input               | qint16        | 0.0000656 | 2.1495371    | 2.1495371     | 2.1495371    | 0.0000000        | torch.Size([2, 256, 1])          |
| 441     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(1)             | output              | qint16        | 0.0000338 | 0.6820595    | 0.6820595     | 0.6820595    | 0.0000000        | torch.Size([2, 256, 1])          |
| 442     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(1)           | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.2918410     | 0.4248075    | 3.1467285        | torch.Size([2, 256, 128])        |
| 442     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(1)           | input_1             | qint16        | 0.0000338 | 0.6820595    | 0.6820595     | 0.6820595    | 0.0000000        | torch.Size([2, 256, 1])          |
| 442     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(1)           | output              | qint16        | 0.0001537 | -0.7125288   | 3.6093671     | 0.2897590    | 1.4638472        | torch.Size([2, 256, 128])        |
| 443     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(1)      | input               | torch.float32 |           | 0.7673740    | 1.1249810     | 0.9671495    | 0.0053221        | torch.Size([128])                |
| 443     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(1)      | output              | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 444     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(1)        | input_0             | qint16        | 0.0001537 | -0.7125288   | 3.6093671     | 0.2897590    | 1.4638472        | torch.Size([2, 256, 128])        |
| 444     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(1)        | input_1             | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 444     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(1)        | output              | qint16        | 0.0001601 | -0.8014944   | 3.9901836     | 0.3030052    | 1.4985127        | torch.Size([2, 256, 128])        |
| 445     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(1)        | input               | torch.float32 |           | -0.0537279   | 0.1594015     | 0.0216380    | 0.0014148        | torch.Size([128])                |
| 445     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(1)        | output              | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 446     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(1)          | input_0             | qint16        | 0.0001601 | -0.8014944   | 3.9901836     | 0.3030052    | 1.4985127        | torch.Size([2, 256, 128])        |
| 446     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(1)          | input_1             | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 446     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(1)          | output              | qint8         | 0.0392422 | -0.7848449   | 3.9634669     | 0.3237485    | 1.4595617        | torch.Size([2, 256, 128])        |
| 447     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(1)                   | input               | qint8         | 0.0392422 | -0.7848449   | 3.9634669     | 0.3237485    | 1.4595617        | torch.Size([2, 256, 128])        |
| 447     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(1)                   | weight              | torch.float32 |           | -0.4264432   | 0.3183554     | 0.0005866    | 0.0053991        | torch.Size([128, 128])           |
| 447     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(1)                   | bias                | torch.float32 |           | -0.1690418   | 0.1536980     | -0.0166056   | 0.0039884        | torch.Size([128])                |
| 447     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(1)                   | output              | torch.float32 |           | -10.7857132  | 6.7616420     | -0.4231685   | 8.0094662        | torch.Size([2, 256, 128])        |
| 448     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(1)                  | input               | torch.float32 |           | -10.7857132  | 6.7616420     | -0.4231685   | 8.0094662        | torch.Size([2, 256, 128])        |
| 448     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(1)                  | output              | qint8         | 0.0826298 | 0.0000000    | 6.7756424     | 0.8179058    | 1.7934502        | torch.Size([2, 256, 128])        |
| 449     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(1)  | input_0             | qint8         | 0.0826298 | 0.0000000    | 6.7756424     | 0.8179058    | 1.7934502        | torch.Size([2, 256, 128])        |
| 449     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(1)  | output              | qint16        | 0.0000231 | 0.7555045    | 0.7555045     | 0.7555045    | 0.0000000        | torch.Size([2, 256, 1])          |
| 450     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(1)              | input_0             | qint8         | 0.0826298 | 0.0000000    | 6.7756424     | 0.8179058    | 1.7934502        | torch.Size([2, 256, 128])        |
| 450     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(1)              | input_1             | qint16        | 0.0000231 | 0.7555045    | 0.7555045     | 0.7555045    | 0.0000000        | torch.Size([2, 256, 1])          |
| 450     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(1)              | output              | qint16        | 0.0003154 | -0.7554005   | 6.0201788     | 0.0624581    | 1.7933403        | torch.Size([2, 256, 128])        |
| 451     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(1)              | input_0             | qint16        | 0.0003154 | -0.7554005   | 6.0201788     | 0.0624581    | 1.7933403        | torch.Size([2, 256, 128])        |
| 451     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(1)              | input_1             | qint16        | 0.0003154 | -0.7554005   | 6.0201788     | 0.0624581    | 1.7933403        | torch.Size([2, 256, 128])        |
| 451     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(1)              | output              | qint16        | 0.0032599 | 0.0065197    | 36.2430344    | 1.7970189    | 20.8238087       | torch.Size([2, 256, 128])        |
| 452     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(1)    | input_0             | qint16        | 0.0032599 | 0.0065197    | 36.2430344    | 1.7970189    | 20.8238087       | torch.Size([2, 256, 128])        |
| 452     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(1)    | output              | qint16        | 0.0000598 | 1.7970455    | 1.7970455     | 1.7970455    | 0.0000000        | torch.Size([2, 256, 1])          |
| 453     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(1)            | input               | qint16        | 0.0000598 | 1.7970455    | 1.7970455     | 1.7970455    | 0.0000000        | torch.Size([2, 256, 1])          |
| 453     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(1)            | output              | qint16        | 0.0000315 | 0.7459602    | 0.7459602     | 0.7459602    | 0.0000000        | torch.Size([2, 256, 1])          |
| 454     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(1)          | input_0             | qint16        | 0.0003154 | -0.7554005   | 6.0201788     | 0.0624581    | 1.7933403        | torch.Size([2, 256, 128])        |
| 454     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(1)          | input_1             | qint16        | 0.0000315 | 0.7459602    | 0.7459602     | 0.7459602    | 0.0000000        | torch.Size([2, 256, 1])          |
| 454     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(1)          | output              | qint16        | 0.0002431 | -0.5634463   | 4.4907985     | 0.0466287    | 0.9978821        | torch.Size([2, 256, 128])        |
| 455     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(1)     | input               | torch.float32 |           | 0.7088336    | 1.4002132     | 0.9292046    | 0.0145085        | torch.Size([128])                |
| 455     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(1)     | output              | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 456     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(1)       | input_0             | qint16        | 0.0002431 | -0.5634463   | 4.4907985     | 0.0466287    | 0.9978821        | torch.Size([2, 256, 128])        |
| 456     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(1)       | input_1             | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 456     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(1)       | output              | qint16        | 0.0002455 | -0.7889097   | 4.5360465     | 0.0557733    | 0.9503219        | torch.Size([2, 256, 128])        |
| 457     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(1)       | input               | torch.float32 |           | -0.0965041   | 0.2669707     | 0.0619903    | 0.0064956        | torch.Size([128])                |
| 457     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(1)       | output              | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 458     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(1)         | input_0             | qint16        | 0.0002455 | -0.7889097   | 4.5360465     | 0.0557733    | 0.9503219        | torch.Size([2, 256, 128])        |
| 458     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(1)         | input_1             | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 458     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(1)         | output              | qint8         | 0.0587279 | -0.8221908   | 4.5807776     | 0.1183735    | 0.9602281        | torch.Size([2, 256, 128])        |
| 459     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 11])         |
| 459     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 3])          |
| 460     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(1)                  | input               | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 3])          |
| 460     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(1)                  | weight              | torch.float32 |           | -0.8288664   | 0.6362330     | 0.0683853    | 0.1118651        | torch.Size([32, 3])              |
| 460     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(1)                  | bias                | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1068659        | torch.Size([32])                 |
| 460     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(1)                  | output              | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1035326        | torch.Size([2, 256, 32])         |
| 461     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(1)                  | input               | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1035326        | torch.Size([2, 256, 32])         |
| 461     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(1)                  | output              | qint8         | 0.0194126 | 0.0000000    | 0.5435528     | 0.1783533    | 0.0318324        | torch.Size([2, 256, 32])         |
| 462     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(1)  | input_0             | qint8         | 0.0194126 | 0.0000000    | 0.5435528     | 0.1783533    | 0.0318324        | torch.Size([2, 256, 32])         |
| 462     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(1)  | output              | qint16        | 0.0000252 | 0.1783445    | 0.1783445     | 0.1783445    | 0.0000000        | torch.Size([2, 256, 1])          |
| 463     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(1)              | input_0             | qint8         | 0.0194126 | 0.0000000    | 0.5435528     | 0.1783533    | 0.0318324        | torch.Size([2, 256, 32])         |
| 463     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(1)              | input_1             | qint16        | 0.0000252 | 0.1783445    | 0.1783445     | 0.1783445    | 0.0000000        | torch.Size([2, 256, 1])          |
| 463     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(1)              | output              | qint16        | 0.0000639 | -0.1783257   | 0.3652162     | 0.0000200    | 0.0318303        | torch.Size([2, 256, 32])         |
| 464     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(1)              | input_0             | qint16        | 0.0000639 | -0.1783257   | 0.3652162     | 0.0000200    | 0.0318303        | torch.Size([2, 256, 32])         |
| 464     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(1)              | input_1             | qint16        | 0.0000639 | -0.1783257   | 0.3652162     | 0.0000200    | 0.0318303        | torch.Size([2, 256, 32])         |
| 464     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(1)              | output              | qint16        | 0.0001394 | 0.0005575    | 0.1333741     | 0.0318105    | 0.0011105        | torch.Size([2, 256, 32])         |
| 465     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(1)    | input_0             | qint16        | 0.0001394 | 0.0005575    | 0.1333741     | 0.0318105    | 0.0011105        | torch.Size([2, 256, 32])         |
| 465     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(1)    | output              | qint16        | 0.0000212 | 0.0318008    | 0.0318008     | 0.0318008    | 0.0000000        | torch.Size([2, 256, 1])          |
| 466     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(1)            | input               | qint16        | 0.0000212 | 0.0318008    | 0.0318008     | 0.0318008    | 0.0000000        | torch.Size([2, 256, 1])          |
| 466     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(1)            | output              | qint16        | 0.0001649 | 5.4031301    | 5.4031301     | 5.4031301    | 0.0000000        | torch.Size([2, 256, 1])          |
| 467     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(1)          | input_0             | qint16        | 0.0000639 | -0.1783257   | 0.3652162     | 0.0000200    | 0.0318303        | torch.Size([2, 256, 32])         |
| 467     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(1)          | input_1             | qint16        | 0.0001649 | 5.4031301    | 5.4031301     | 5.4031301    | 0.0000000        | torch.Size([2, 256, 1])          |
| 467     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(1)          | output              | qint16        | 0.0000919 | -0.9635175   | 1.9732846     | 0.0001091    | 0.9292532        | torch.Size([2, 256, 32])         |
| 468     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(1)     | input               | torch.float32 |           | 0.8401937    | 1.1936733     | 0.9969203    | 0.0071658        | torch.Size([32])                 |
| 468     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(1)     | output              | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 469     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(1)       | input_0             | qint16        | 0.0000919 | -0.9635175   | 1.9732846     | 0.0001091    | 0.9292532        | torch.Size([2, 256, 32])         |
| 469     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(1)       | input_1             | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 469     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(1)       | output              | qint16        | 0.0001022 | -1.0938351   | 2.0334086     | 0.0214007    | 0.9483401        | torch.Size([2, 256, 32])         |
| 470     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(1)       | input               | torch.float32 |           | -0.1003950   | 0.1085345     | 0.0035262    | 0.0030721        | torch.Size([32])                 |
| 470     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(1)       | output              | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 471     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(1)         | input_0             | qint16        | 0.0001022 | -1.0938351   | 2.0334086     | 0.0214007    | 0.9483401        | torch.Size([2, 256, 32])         |
| 471     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(1)         | input_1             | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 471     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(1)         | output              | qint8         | 0.0232598 | -1.0932106   | 2.0236025     | 0.0232598    | 0.8801214        | torch.Size([2, 256, 32])         |
| 472     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(1)                  | input               | qint8         | 0.0232598 | -1.0932106   | 2.0236025     | 0.0232598    | 0.8801214        | torch.Size([2, 256, 32])         |
| 472     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(1)                  | weight              | torch.float32 |           | -0.5793310   | 0.5422795     | -0.0032135   | 0.0176575        | torch.Size([32, 32])             |
| 472     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(1)                  | bias                | torch.float32 |           | -0.1716317   | 0.2230143     | 0.0007250    | 0.0126328        | torch.Size([32])                 |
| 472     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(1)                  | output              | torch.float32 |           | -3.0189047   | 1.9772004     | -0.2343476   | 1.6920401        | torch.Size([2, 256, 32])         |
| 473     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(1)                  | input               | torch.float32 |           | -3.0189047   | 1.9772004     | -0.2343476   | 1.6920401        | torch.Size([2, 256, 32])         |
| 473     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(1)                  | output              | qint8         | 0.0172935 | 0.0000000    | 1.9714624     | 0.4053171    | 0.3288234        | torch.Size([2, 256, 32])         |
| 474     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(1)  | input_0             | qint8         | 0.0172935 | 0.0000000    | 1.9714624     | 0.4053171    | 0.3288234        | torch.Size([2, 256, 32])         |
| 474     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(1)  | output              | qint16        | 0.0000141 | 0.4053228    | 0.4053228     | 0.4053228    | 0.0000000        | torch.Size([2, 256, 1])          |
| 475     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(1)              | input_0             | qint8         | 0.0172935 | 0.0000000    | 1.9714624     | 0.4053171    | 0.3288234        | torch.Size([2, 256, 32])         |
| 475     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(1)              | input_1             | qint16        | 0.0000141 | 0.4053228    | 0.4053228     | 0.4053228    | 0.0000000        | torch.Size([2, 256, 1])          |
| 475     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(1)              | output              | qint16        | 0.0000617 | -0.4053017   | 1.5661463     | 0.0000096    | 0.3288180        | torch.Size([2, 256, 32])         |
| 476     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(1)              | input_0             | qint16        | 0.0000617 | -0.4053017   | 1.5661463     | 0.0000096    | 0.3288180        | torch.Size([2, 256, 32])         |
| 476     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(1)              | input_1             | qint16        | 0.0000617 | -0.4053017   | 1.5661463     | 0.0000096    | 0.3288180        | torch.Size([2, 256, 32])         |
| 476     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(1)              | output              | qint16        | 0.0001252 | 0.0035068    | 2.4527605     | 0.3288218    | 0.2872516        | torch.Size([2, 256, 32])         |
| 477     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(1)    | input_0             | qint16        | 0.0001252 | 0.0035068    | 2.4527605     | 0.3288218    | 0.2872516        | torch.Size([2, 256, 32])         |
| 477     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(1)    | output              | qint16        | 0.0000132 | 0.3288153    | 0.3288153     | 0.3288153    | 0.0000000        | torch.Size([2, 256, 1])          |
| 478     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(1)            | input               | qint16        | 0.0000132 | 0.3288153    | 0.3288153     | 0.3288153    | 0.0000000        | torch.Size([2, 256, 1])          |
| 478     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(1)            | output              | qint16        | 0.0000777 | 1.7439101    | 1.7439101     | 1.7439101    | 0.0000000        | torch.Size([2, 256, 1])          |
| 479     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(1)          | input_0             | qint16        | 0.0000617 | -0.4053017   | 1.5661463     | 0.0000096    | 0.3288180        | torch.Size([2, 256, 32])         |
| 479     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(1)          | input_1             | qint16        | 0.0000777 | 1.7439101    | 1.7439101     | 1.7439101    | 0.0000000        | torch.Size([2, 256, 1])          |
| 479     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(1)          | output              | qint16        | 0.0001125 | -0.7068129   | 2.7312107     | 0.0000106    | 1.0000162        | torch.Size([2, 256, 32])         |
| 480     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(1)     | input               | torch.float32 |           | 0.8191299    | 1.0923718     | 0.9808199    | 0.0031231        | torch.Size([32])                 |
| 480     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(1)     | output              | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 481     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(1)       | input_0             | qint16        | 0.0001125 | -0.7068129   | 2.7312107     | 0.0000106    | 1.0000162        | torch.Size([2, 256, 32])         |
| 481     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(1)       | input_1             | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 481     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(1)       | output              | qint16        | 0.0001113 | -0.7721289   | 2.8338857     | 0.0107719    | 1.0173039        | torch.Size([2, 256, 32])         |
| 482     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(1)       | input               | torch.float32 |           | -0.0704119   | 0.0788569     | 0.0097621    | 0.0015200        | torch.Size([32])                 |
| 482     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(1)       | output              | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 483     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(1)         | input_0             | qint16        | 0.0001113 | -0.7721289   | 2.8338857     | 0.0107719    | 1.0173039        | torch.Size([2, 256, 32])         |
| 483     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(1)         | input_1             | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 483     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(1)         | output              | qint8         | 0.0262611 | -0.7878318   | 2.8099334     | 0.0205164    | 0.9826177        | torch.Size([2, 256, 32])         |
| 484     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(1)                  | input               | qint8         | 0.0262611 | -0.7878318   | 2.8099334     | 0.0205164    | 0.9826177        | torch.Size([2, 256, 32])         |
| 484     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(1)                  | weight              | torch.float32 |           | -0.5712157   | 0.5219681     | -0.0062917   | 0.0166056        | torch.Size([32, 32])             |
| 484     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(1)                  | bias                | torch.float32 |           | -0.1649730   | 0.2318604     | 0.0253026    | 0.0136139        | torch.Size([32])                 |
| 484     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(1)                  | output              | torch.float32 |           | -4.0791140   | 2.0772579     | -0.0815531   | 1.8828346        | torch.Size([2, 256, 32])         |
| 485     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(1)                  | input               | torch.float32 |           | -4.0791140   | 2.0772579     | -0.0815531   | 1.8828346        | torch.Size([2, 256, 32])         |
| 485     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(1)                  | output              | qint8         | 0.0188970 | 0.0000000    | 2.0786693     | 0.4783301    | 0.3522622        | torch.Size([2, 256, 32])         |
| 486     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(1)  | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.0786693     | 0.4783301    | 0.3522622        | torch.Size([2, 256, 32])         |
| 486     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(1)  | output              | qint16        | 0.0000154 | 0.4783297    | 0.4783297     | 0.4783297    | 0.0000000        | torch.Size([2, 256, 1])          |
| 487     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(1)              | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.0786693     | 0.4783301    | 0.3522622        | torch.Size([2, 256, 32])         |
| 487     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(1)              | input_1             | qint16        | 0.0000154 | 0.4783297    | 0.4783297     | 0.4783297    | 0.0000000        | torch.Size([2, 256, 1])          |
| 487     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(1)              | output              | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | 0.0000020    | 0.3522623        | torch.Size([2, 256, 32])         |
| 488     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(1)              | input_0             | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | 0.0000020    | 0.3522623        | torch.Size([2, 256, 32])         |
| 488     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(1)              | input_1             | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | 0.0000020    | 0.3522623        | torch.Size([2, 256, 32])         |
| 488     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(1)              | output              | qint16        | 0.0001333 | 0.0000000    | 2.5611575     | 0.3522442    | 0.3271217        | torch.Size([2, 256, 32])         |
| 489     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(1)    | input_0             | qint16        | 0.0001333 | 0.0000000    | 2.5611575     | 0.3522442    | 0.3271217        | torch.Size([2, 256, 32])         |
| 489     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(1)    | output              | qint16        | 0.0000116 | 0.3522446    | 0.3522446     | 0.3522446    | 0.0000000        | torch.Size([2, 256, 1])          |
| 490     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(1)            | input               | qint16        | 0.0000116 | 0.3522446    | 0.3522446     | 0.3522446    | 0.0000000        | torch.Size([2, 256, 1])          |
| 490     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(1)            | output              | qint16        | 0.0000821 | 1.6848582    | 1.6848582     | 1.6848582    | 0.0000000        | torch.Size([2, 256, 1])          |
| 491     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(1)          | input_0             | qint16        | 0.0000636 | -0.4783245   | 1.6003541     | 0.0000020    | 0.3522623        | torch.Size([2, 256, 32])         |
| 491     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(1)          | input_1             | qint16        | 0.0000821 | 1.6848582    | 1.6848582     | 1.6848582    | 0.0000000        | torch.Size([2, 256, 1])          |
| 491     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(1)          | output              | qint16        | 0.0001195 | -0.8058625   | 2.6963701     | 0.0000262    | 0.9999490        | torch.Size([2, 256, 32])         |
| 492     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(1)     | input               | torch.float32 |           | 0.8903234    | 1.1315480     | 0.9912031    | 0.0026835        | torch.Size([32])                 |
| 492     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(1)     | output              | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 493     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(1)       | input_0             | qint16        | 0.0001195 | -0.8058625   | 2.6963701     | 0.0000262    | 0.9999490        | torch.Size([2, 256, 32])         |
| 493     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(1)       | input_1             | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 493     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(1)       | output              | qint16        | 0.0001226 | -0.9118891   | 2.8908358     | 0.0115595    | 1.0272626        | torch.Size([2, 256, 32])         |
| 494     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(1)       | input               | torch.float32 |           | -0.0586081   | 0.0779655     | 0.0041962    | 0.0015323        | torch.Size([32])                 |
| 494     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(1)       | output              | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 495     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(1)         | input_0             | qint16        | 0.0001226 | -0.9118891   | 2.8908358     | 0.0115595    | 1.0272626        | torch.Size([2, 256, 32])         |
| 495     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(1)         | input_1             | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 495     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(1)         | output              | qint8         | 0.0302522 | -0.8773150   | 2.9042153     | 0.0170169    | 0.9967653        | torch.Size([2, 256, 32])         |
| 496     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(1)                  | input               | qint8         | 0.0302522 | -0.8773150   | 2.9042153     | 0.0170169    | 0.9967653        | torch.Size([2, 256, 32])         |
| 496     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(1)                  | weight              | torch.float32 |           | -0.3204980   | 0.3365203     | -0.0020388   | 0.0145364        | torch.Size([32, 32])             |
| 496     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(1)                  | bias                | torch.float32 |           | -0.1559148   | 0.2119379     | 0.0091616    | 0.0105488        | torch.Size([32])                 |
| 496     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(1)                  | output              | torch.float32 |           | -1.5322225   | 2.2078454     | -0.0462873   | 0.8435488        | torch.Size([2, 256, 32])         |
| 497     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(1)                 | input               | torch.float32 |           | -1.5322225   | 2.2078454     | -0.0462873   | 0.8435488        | torch.Size([2, 256, 32])         |
| 497     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(1)                 | output              | qint8         | 0.0200096 | 0.0000000    | 2.2010570     | 0.3545452    | 0.3029903        | torch.Size([2, 256, 32])         |
| 498     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(1) | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.2010570     | 0.3545452    | 0.3029903        | torch.Size([2, 256, 32])         |
| 498     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(1) | output              | qint16        | 0.0000157 | 0.3545519    | 0.3545519     | 0.3545519    | 0.0000000        | torch.Size([2, 256, 1])          |
| 499     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(1)             | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.2010570     | 0.3545452    | 0.3029903        | torch.Size([2, 256, 32])         |
| 499     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(1)             | input_1             | qint16        | 0.0000157 | 0.3545519    | 0.3545519     | 0.3545519    | 0.0000000        | torch.Size([2, 256, 1])          |
| 499     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(1)             | output              | qint16        | 0.0000689 | -0.3545618   | 1.8465068     | -0.0000086   | 0.3029955        | torch.Size([2, 256, 32])         |
| 500     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(1)             | input_0             | qint16        | 0.0000689 | -0.3545618   | 1.8465068     | -0.0000086   | 0.3029955        | torch.Size([2, 256, 32])         |
| 500     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(1)             | input_1             | qint16        | 0.0000689 | -0.3545618   | 1.8465068     | -0.0000086   | 0.3029955        | torch.Size([2, 256, 32])         |
| 500     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(1)             | output              | qint16        | 0.0001557 | 0.0000000    | 3.4096045     | 0.3029620    | 0.4237115        | torch.Size([2, 256, 32])         |
| 501     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(1)   | input_0             | qint16        | 0.0001557 | 0.0000000    | 3.4096045     | 0.3029620    | 0.4237115        | torch.Size([2, 256, 32])         |
| 501     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(1)   | output              | qint16        | 0.0000123 | 0.3029633    | 0.3029633     | 0.3029633    | 0.0000000        | torch.Size([2, 256, 1])          |
| 502     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(1)           | input               | qint16        | 0.0000123 | 0.3029633    | 0.3029633     | 0.3029633    | 0.0000000        | torch.Size([2, 256, 1])          |
| 502     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(1)           | output              | qint16        | 0.0000803 | 1.8167633    | 1.8167633     | 1.8167633    | 0.0000000        | torch.Size([2, 256, 1])          |
| 503     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(1)         | input_0             | qint16        | 0.0000689 | -0.3545618   | 1.8465068     | -0.0000086   | 0.3029955        | torch.Size([2, 256, 32])         |
| 503     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(1)         | input_1             | qint16        | 0.0000803 | 1.8167633    | 1.8167633     | 1.8167633    | 0.0000000        | torch.Size([2, 256, 1])          |
| 503     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(1)         | output              | qint16        | 0.0001207 | -0.6441351   | 3.3546939     | -0.0000075   | 1.0000561        | torch.Size([2, 256, 32])         |
| 504     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(1)    | input               | torch.float32 |           | 0.8289159    | 1.6609058     | 1.2561316    | 0.0353652        | torch.Size([32])                 |
| 504     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(1)    | output              | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 505     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(1)      | input_0             | qint16        | 0.0001207 | -0.6441351   | 3.3546939     | -0.0000075   | 1.0000561        | torch.Size([2, 256, 32])         |
| 505     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(1)      | input_1             | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 505     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(1)      | output              | qint16        | 0.0001642 | -1.0698943   | 4.0631351     | -0.0515908   | 1.4223224        | torch.Size([2, 256, 32])         |
| 506     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(1)      | input               | torch.float32 |           | -0.1194881   | 0.2576658     | 0.0445686    | 0.0113612        | torch.Size([32])                 |
| 506     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(1)      | output              | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 507     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(1)        | input_0             | qint16        | 0.0001642 | -1.0698943   | 4.0631351     | -0.0515908   | 1.4223224        | torch.Size([2, 256, 32])         |
| 507     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(1)        | input_1             | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 507     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(1)        | output              | qint8         | 0.0385920 | -1.0033913   | 4.0521569     | -0.0048240   | 1.2958747        | torch.Size([2, 256, 32])         |
| 508     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 11])         |
| 508     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 2])          |
| 509     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(1)                   | input               | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 2])          |
| 509     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(1)                   | weight              | torch.float32 |           | -0.7023237   | 0.7394427     | 0.0490668    | 0.1972211        | torch.Size([32, 2])              |
| 509     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(1)                   | bias                | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1641774        | torch.Size([32])                 |
| 509     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(1)                   | output              | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1590565        | torch.Size([2, 256, 32])         |
| 510     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(1)                   | input               | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1590565        | torch.Size([2, 256, 32])         |
| 510     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(1)                   | output              | qint8         | 0.0115854 | 0.0000000    | 0.6719555     | 0.1227333    | 0.0450667        | torch.Size([2, 256, 32])         |
| 511     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(1)   | input_0             | qint8         | 0.0115854 | 0.0000000    | 0.6719555     | 0.1227333    | 0.0450667        | torch.Size([2, 256, 32])         |
| 511     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(1)   | output              | qint16        | 0.0000105 | 0.1227362    | 0.1227362     | 0.1227362    | 0.0000000        | torch.Size([2, 256, 1])          |
| 512     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(1)               | input_0             | qint8         | 0.0115854 | 0.0000000    | 0.6719555     | 0.1227333    | 0.0450667        | torch.Size([2, 256, 32])         |
| 512     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(1)               | input_1             | qint16        | 0.0000105 | 0.1227362    | 0.1227362     | 0.1227362    | 0.0000000        | torch.Size([2, 256, 1])          |
| 512     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(1)               | output              | qint16        | 0.0000395 | -0.1227196   | 0.5492126     | 0.0000049    | 0.0450629        | torch.Size([2, 256, 32])         |
| 513     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(1)               | input_0             | qint16        | 0.0000395 | -0.1227196   | 0.5492126     | 0.0000049    | 0.0450629        | torch.Size([2, 256, 32])         |
| 513     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(1)               | input_1             | qint16        | 0.0000395 | -0.1227196   | 0.5492126     | 0.0000049    | 0.0450629        | torch.Size([2, 256, 32])         |
| 513     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(1)               | output              | qint16        | 0.0000524 | 0.0000524    | 0.3016200     | 0.0450725    | 0.0065062        | torch.Size([2, 256, 32])         |
| 514     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(1)     | input_0             | qint16        | 0.0000524 | 0.0000524    | 0.3016200     | 0.0450725    | 0.0065062        | torch.Size([2, 256, 32])         |
| 514     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(1)     | output              | qint16        | 0.0000071 | 0.0450738    | 0.0450738     | 0.0450738    | 0.0000000        | torch.Size([2, 256, 1])          |
| 515     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(1)             | input               | qint16        | 0.0000071 | 0.0450738    | 0.0450738     | 0.0450738    | 0.0000000        | torch.Size([2, 256, 1])          |
| 515     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(1)             | output              | qint16        | 0.0001514 | 4.7096615    | 4.7096615     | 4.7096615    | 0.0000000        | torch.Size([2, 256, 1])          |
| 516     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(1)           | input_0             | qint16        | 0.0000395 | -0.1227196   | 0.5492126     | 0.0000049    | 0.0450629        | torch.Size([2, 256, 32])         |
| 516     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(1)           | input_1             | qint16        | 0.0001514 | 4.7096615    | 4.7096615     | 4.7096615    | 0.0000000        | torch.Size([2, 256, 1])          |
| 516     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(1)           | output              | qint16        | 0.0001206 | -0.5779082   | 2.5866547     | 0.0000604    | 0.9994971        | torch.Size([2, 256, 32])         |
| 517     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(1)      | input               | torch.float32 |           | 0.8947600    | 1.1748335     | 0.9865216    | 0.0041537        | torch.Size([32])                 |
| 517     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(1)      | output              | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 518     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(1)        | input_0             | qint16        | 0.0001206 | -0.5779082   | 2.5866547     | 0.0000604    | 0.9994971        | torch.Size([2, 256, 32])         |
| 518     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(1)        | input_1             | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 518     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(1)        | output              | qint16        | 0.0001306 | -0.6789408   | 2.8009245     | 0.0037348    | 1.0164539        | torch.Size([2, 256, 32])         |
| 519     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(1)        | input               | torch.float32 |           | -0.0879948   | 0.1319895     | 0.0285039    | 0.0034159        | torch.Size([32])                 |
| 519     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(1)        | output              | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 520     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(1)          | input_0             | qint16        | 0.0001306 | -0.6789408   | 2.8009245     | 0.0037348    | 1.0164539        | torch.Size([2, 256, 32])         |
| 520     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(1)          | input_1             | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 520     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(1)          | output              | qint8         | 0.0302674 | -0.6356144   | 2.7240617     | 0.0312132    | 0.9324011        | torch.Size([2, 256, 32])         |
| 521     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(1)                   | input               | qint8         | 0.0302674 | -0.6356144   | 2.7240617     | 0.0312132    | 0.9324011        | torch.Size([2, 256, 32])         |
| 521     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(1)                   | weight              | torch.float32 |           | -1.0547366   | 0.5812716     | 0.0070099    | 0.0187704        | torch.Size([32, 32])             |
| 521     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(1)                   | bias                | torch.float32 |           | -0.2183180   | 0.1396109     | -0.0140744   | 0.0103446        | torch.Size([32])                 |
| 521     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(1)                   | output              | torch.float32 |           | -3.4460330   | 1.2976041     | -0.5753502   | 1.5168110        | torch.Size([2, 256, 32])         |
| 522     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(1)                   | input               | torch.float32 |           | -3.4460330   | 1.2976041     | -0.5753502   | 1.5168110        | torch.Size([2, 256, 32])         |
| 522     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(1)                   | output              | qint8         | 0.0142143 | 0.0000000    | 1.2935010     | 0.2305381    | 0.1204867        | torch.Size([2, 256, 32])         |
| 523     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(1)   | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.2935010     | 0.2305381    | 0.1204867        | torch.Size([2, 256, 32])         |
| 523     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(1)   | output              | qint16        | 0.0000116 | 0.2305332    | 0.2305332     | 0.2305332    | 0.0000000        | torch.Size([2, 256, 1])          |
| 524     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(1)               | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.2935010     | 0.2305381    | 0.1204867        | torch.Size([2, 256, 32])         |
| 524     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(1)               | input_1             | qint16        | 0.0000116 | 0.2305332    | 0.2305332     | 0.2305332    | 0.0000000        | torch.Size([2, 256, 1])          |
| 524     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(1)               | output              | qint16        | 0.0000516 | -0.2305298   | 1.0629872     | 0.0000065    | 0.1204871        | torch.Size([2, 256, 32])         |
| 525     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(1)               | input_0             | qint16        | 0.0000516 | -0.2305298   | 1.0629872     | 0.0000065    | 0.1204871        | torch.Size([2, 256, 32])         |
| 525     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(1)               | input_1             | qint16        | 0.0000516 | -0.2305298   | 1.0629872     | 0.0000065    | 0.1204871        | torch.Size([2, 256, 32])         |
| 525     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(1)               | output              | qint16        | 0.0000889 | 0.0000889    | 1.1299052     | 0.1204773    | 0.0456625        | torch.Size([2, 256, 32])         |
| 526     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(1)     | input_0             | qint16        | 0.0000889 | 0.0000889    | 1.1299052     | 0.1204773    | 0.0456625        | torch.Size([2, 256, 32])         |
| 526     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(1)     | output              | qint16        | 0.0000089 | 0.1204808    | 0.1204808     | 0.1204808    | 0.0000000        | torch.Size([2, 256, 1])          |
| 527     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(1)             | input               | qint16        | 0.0000089 | 0.1204808    | 0.1204808     | 0.1204808    | 0.0000000        | torch.Size([2, 256, 1])          |
| 527     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(1)             | output              | qint16        | 0.0001114 | 2.8808506    | 2.8808506     | 2.8808506    | 0.0000000        | torch.Size([2, 256, 1])          |
| 528     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(1)           | input_0             | qint16        | 0.0000516 | -0.2305298   | 1.0629872     | 0.0000065    | 0.1204871        | torch.Size([2, 256, 32])         |
| 528     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(1)           | input_1             | qint16        | 0.0001114 | 2.8808506    | 2.8808506     | 2.8808506    | 0.0000000        | torch.Size([2, 256, 1])          |
| 528     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(1)           | output              | qint16        | 0.0001083 | -0.6641636   | 3.0623035     | -0.0000068   | 0.9999942        | torch.Size([2, 256, 32])         |
| 529     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(1)      | input               | torch.float32 |           | 0.8550419    | 1.1198171     | 0.9805899    | 0.0036729        | torch.Size([32])                 |
| 529     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(1)      | output              | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 530     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(1)        | input_0             | qint16        | 0.0001083 | -0.6641636   | 3.0623035     | -0.0000068   | 0.9999942        | torch.Size([2, 256, 32])         |
| 530     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(1)        | input_1             | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 530     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(1)        | output              | qint16        | 0.0001106 | -0.7436967   | 3.1250520     | -0.0013925   | 0.9850101        | torch.Size([2, 256, 32])         |
| 531     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(1)        | input               | torch.float32 |           | -0.0792132   | 0.1045145     | 0.0242442    | 0.0021608        | torch.Size([32])                 |
| 531     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(1)        | output              | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 532     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(1)          | input_0             | qint16        | 0.0001106 | -0.7436967   | 3.1250520     | -0.0013925   | 0.9850101        | torch.Size([2, 256, 32])         |
| 532     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(1)          | input_1             | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 532     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(1)          | output              | qint8         | 0.0268612 | -0.7252512   | 3.0890329     | 0.0251823    | 0.9192719        | torch.Size([2, 256, 32])         |
| 533     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(1)                   | input               | qint8         | 0.0268612 | -0.7252512   | 3.0890329     | 0.0251823    | 0.9192719        | torch.Size([2, 256, 32])         |
| 533     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(1)                   | weight              | torch.float32 |           | -0.4480607   | 0.3678726     | 0.0004879    | 0.0160908        | torch.Size([32, 32])             |
| 533     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(1)                   | bias                | torch.float32 |           | -0.1861591   | 0.1739754     | 0.0155446    | 0.0137690        | torch.Size([32])                 |
| 533     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(1)                   | output              | torch.float32 |           | -3.5853832   | 1.3076521     | -0.3626769   | 1.7682022        | torch.Size([2, 256, 32])         |
| 534     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(1)                   | input               | torch.float32 |           | -3.5853832   | 1.3076521     | -0.3626769   | 1.7682022        | torch.Size([2, 256, 32])         |
| 534     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(1)                   | output              | qint8         | 0.0183966 | 0.0000000    | 1.3061583     | 0.3368877    | 0.1837059        | torch.Size([2, 256, 32])         |
| 535     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(1)   | input_0             | qint8         | 0.0183966 | 0.0000000    | 1.3061583     | 0.3368877    | 0.1837059        | torch.Size([2, 256, 32])         |
| 535     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(1)   | output              | qint16        | 0.0000156 | 0.3368896    | 0.3368896     | 0.3368896    | 0.0000000        | torch.Size([2, 256, 1])          |
| 536     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(1)               | input_0             | qint8         | 0.0183966 | 0.0000000    | 1.3061583     | 0.3368877    | 0.1837059        | torch.Size([2, 256, 32])         |
| 536     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(1)               | input_1             | qint16        | 0.0000156 | 0.3368896    | 0.3368896     | 0.3368896    | 0.0000000        | torch.Size([2, 256, 1])          |
| 536     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(1)               | output              | qint16        | 0.0000645 | -0.3369032   | 0.9692822     | -0.0000080   | 0.1837115        | torch.Size([2, 256, 32])         |
| 537     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(1)               | input_0             | qint16        | 0.0000645 | -0.3369032   | 0.9692822     | -0.0000080   | 0.1837115        | torch.Size([2, 256, 32])         |
| 537     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(1)               | input_1             | qint16        | 0.0000645 | -0.3369032   | 0.9692822     | -0.0000080   | 0.1837115        | torch.Size([2, 256, 32])         |
| 537     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(1)               | output              | qint16        | 0.0001365 | 0.0001365    | 0.9395564     | 0.1837342    | 0.0549921        | torch.Size([2, 256, 32])         |
| 538     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(1)     | input_0             | qint16        | 0.0001365 | 0.0001365    | 0.9395564     | 0.1837342    | 0.0549921        | torch.Size([2, 256, 32])         |
| 538     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(1)     | output              | qint16        | 0.0000123 | 0.1837343    | 0.1837343     | 0.1837343    | 0.0000000        | torch.Size([2, 256, 1])          |
| 539     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(1)             | input               | qint16        | 0.0000123 | 0.1837343    | 0.1837343     | 0.1837343    | 0.0000000        | torch.Size([2, 256, 1])          |
| 539     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(1)             | output              | qint16        | 0.0000749 | 2.3328609    | 2.3328609     | 2.3328609    | 0.0000000        | torch.Size([2, 256, 1])          |
| 540     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(1)           | input_0             | qint16        | 0.0000645 | -0.3369032   | 0.9692822     | -0.0000080   | 0.1837115        | torch.Size([2, 256, 32])         |
| 540     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(1)           | input_1             | qint16        | 0.0000749 | 2.3328609    | 2.3328609     | 2.3328609    | 0.0000000        | torch.Size([2, 256, 1])          |
| 540     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(1)           | output              | qint16        | 0.0001267 | -0.7859024   | 2.2611952     | 0.0000079    | 0.9997666        | torch.Size([2, 256, 32])         |
| 541     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(1)      | input               | torch.float32 |           | 0.8469434    | 1.1090456     | 0.9866461    | 0.0031007        | torch.Size([32])                 |
| 541     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(1)      | output              | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 542     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(1)        | input_0             | qint16        | 0.0001267 | -0.7859024   | 2.2611952     | 0.0000079    | 0.9997666        | torch.Size([2, 256, 32])         |
| 542     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(1)        | input_1             | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 542     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(1)        | output              | qint16        | 0.0001376 | -0.8716200   | 2.2830501     | -0.0062206   | 0.9910924        | torch.Size([2, 256, 32])         |
| 543     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(1)        | input               | torch.float32 |           | -0.0626723   | 0.0887763     | 0.0071697    | 0.0011301        | torch.Size([32])                 |
| 543     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(1)        | output              | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 544     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(1)          | input_0             | qint16        | 0.0001376 | -0.8716200   | 2.2830501     | -0.0062206   | 0.9910924        | torch.Size([2, 256, 32])         |
| 544     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(1)          | input_1             | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 544     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(1)          | output              | qint8         | 0.0326290 | -0.8809829   | 2.2840297     | 0.0000000    | 0.9607734        | torch.Size([2, 256, 32])         |
| 545     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(1)                   | input               | qint8         | 0.0326290 | -0.8809829   | 2.2840297     | 0.0000000    | 0.9607734        | torch.Size([2, 256, 32])         |
| 545     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(1)                   | weight              | torch.float32 |           | -0.5597425   | 0.7001730     | 0.0015679    | 0.0160348        | torch.Size([32, 32])             |
| 545     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(1)                   | bias                | torch.float32 |           | -0.1810580   | 0.1736723     | -0.0279047   | 0.0091159        | torch.Size([32])                 |
| 545     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(1)                   | output              | torch.float32 |           | -3.5841384   | 2.8331716     | -0.2500882   | 1.3441530        | torch.Size([2, 256, 32])         |
| 546     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(1)                  | input               | torch.float32 |           | -3.5841384   | 2.8331716     | -0.2500882   | 1.3441530        | torch.Size([2, 256, 32])         |
| 546     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(1)                  | output              | qint8         | 0.0271917 | 0.0000000    | 2.8279335     | 0.2812638    | 0.4033421        | torch.Size([2, 256, 32])         |
| 547     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(1)  | input_0             | qint8         | 0.0271917 | 0.0000000    | 2.8279335     | 0.2812638    | 0.4033421        | torch.Size([2, 256, 32])         |
| 547     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(1)  | output              | qint16        | 0.0000121 | 0.2812601    | 0.2812601     | 0.2812601    | 0.0000000        | torch.Size([2, 256, 1])          |
| 548     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(1)              | input_0             | qint8         | 0.0271917 | 0.0000000    | 2.8279335     | 0.2812638    | 0.4033421        | torch.Size([2, 256, 32])         |
| 548     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(1)              | input_1             | qint16        | 0.0000121 | 0.2812601    | 0.2812601     | 0.2812601    | 0.0000000        | torch.Size([2, 256, 1])          |
| 548     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(1)              | output              | qint16        | 0.0000976 | -0.2812922   | 2.5466604     | -0.0000183   | 0.4033519        | torch.Size([2, 256, 32])         |
| 549     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(1)              | input_0             | qint16        | 0.0000976 | -0.2812922   | 2.5466604     | -0.0000183   | 0.4033519        | torch.Size([2, 256, 32])         |
| 549     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(1)              | input_1             | qint16        | 0.0000976 | -0.2812922   | 2.5466604     | -0.0000183   | 0.4033519        | torch.Size([2, 256, 32])         |
| 549     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(1)              | output              | qint16        | 0.0003122 | 0.0040580    | 6.4853702     | 0.4032383    | 1.6193718        | torch.Size([2, 256, 32])         |
| 550     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(1)    | input_0             | qint16        | 0.0003122 | 0.0040580    | 6.4853702     | 0.4032383    | 1.6193718        | torch.Size([2, 256, 32])         |
| 550     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(1)    | output              | qint16        | 0.0000136 | 0.4032322    | 0.4032322     | 0.4032322    | 0.0000000        | torch.Size([2, 256, 1])          |
| 551     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(1)            | input               | qint16        | 0.0000136 | 0.4032322    | 0.4032322     | 0.4032322    | 0.0000000        | torch.Size([2, 256, 1])          |
| 551     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(1)            | output              | qint16        | 0.0000802 | 1.5748072    | 1.5748072     | 1.5748072    | 0.0000000        | torch.Size([2, 256, 1])          |
| 552     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(1)          | input_0             | qint16        | 0.0000976 | -0.2812922   | 2.5466604     | -0.0000183   | 0.4033519        | torch.Size([2, 256, 32])         |
| 552     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(1)          | input_1             | qint16        | 0.0000802 | 1.5748072    | 1.5748072     | 1.5748072    | 0.0000000        | torch.Size([2, 256, 1])          |
| 552     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(1)          | output              | qint16        | 0.0001482 | -0.4430029   | 4.0104361     | -0.0000370   | 1.0003204        | torch.Size([2, 256, 32])         |
| 553     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(1)     | input               | torch.float32 |           | 0.8363900    | 1.4688344     | 1.0570920    | 0.0396277        | torch.Size([32])                 |
| 553     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(1)     | output              | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 554     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(1)       | input_0             | qint16        | 0.0001482 | -0.4430029   | 4.0104361     | -0.0000370   | 1.0003204        | torch.Size([2, 256, 32])         |
| 554     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(1)       | input_1             | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 554     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(1)       | output              | qint16        | 0.0001637 | -0.6506311   | 3.3543358     | -0.0721246   | 0.8077202        | torch.Size([2, 256, 32])         |
| 555     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(1)       | input               | torch.float32 |           | -0.1492936   | 0.2842544     | 0.0803791    | 0.0109446        | torch.Size([32])                 |
| 555     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(1)       | output              | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 556     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(1)         | input_0             | qint16        | 0.0001637 | -0.6506311   | 3.3543358     | -0.0721246   | 0.8077202        | torch.Size([2, 256, 32])         |
| 556     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(1)         | input_1             | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 556     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(1)         | output              | qint8         | 0.0373904 | -0.5234659   | 3.2155764     | 0.0093476    | 0.6984528        | torch.Size([2, 256, 32])         |
| 557     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 11])         |
| 557     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 3])          |
| 558     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(1)                   | input               | qint16        | 0.0018311 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 3])          |
| 558     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(1)                   | weight              | torch.float32 |           | -1.0475703   | 0.9848034     | -0.0054673   | 0.2080412        | torch.Size([64, 3])              |
| 558     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(1)                   | bias                | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1294928        | torch.Size([64])                 |
| 558     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(1)                   | output              | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1274733        | torch.Size([2, 256, 64])         |
| 559     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(1)                   | input               | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1274733        | torch.Size([2, 256, 64])         |
| 559     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(1)                   | output              | qint8         | 0.0729980 | 0.0000000    | 0.5109861     | 0.1277465    | 0.0273105        | torch.Size([2, 256, 64])         |
| 560     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(1)   | input_0             | qint8         | 0.0729980 | 0.0000000    | 0.5109861     | 0.1277465    | 0.0273105        | torch.Size([2, 256, 64])         |
| 560     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(1)   | output              | qint16        | 0.0000685 | 0.1277232    | 0.1277232     | 0.1277232    | 0.0000000        | torch.Size([2, 256, 1])          |
| 561     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(1)               | input_0             | qint8         | 0.0729980 | 0.0000000    | 0.5109861     | 0.1277465    | 0.0273105        | torch.Size([2, 256, 64])         |
| 561     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(1)               | input_1             | qint16        | 0.0000685 | 0.1277232    | 0.1277232     | 0.1277232    | 0.0000000        | torch.Size([2, 256, 1])          |
| 561     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(1)               | output              | qint16        | 0.0002902 | -0.1276918   | 0.3833656     | 0.0000454    | 0.0273148        | torch.Size([2, 256, 64])         |
| 562     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(1)               | input_0             | qint16        | 0.0002902 | -0.1276918   | 0.3833656     | 0.0000454    | 0.0273148        | torch.Size([2, 256, 64])         |
| 562     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(1)               | input_1             | qint16        | 0.0002902 | -0.1276918   | 0.3833656     | 0.0000454    | 0.0273148        | torch.Size([2, 256, 64])         |
| 562     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(1)               | output              | qint16        | 0.0029551 | 0.0000000    | 0.1477548     | 0.0281658    | 0.0013932        | torch.Size([2, 256, 64])         |
| 563     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(1)     | input_0             | qint16        | 0.0029551 | 0.0000000    | 0.1477548     | 0.0281658    | 0.0013932        | torch.Size([2, 256, 64])         |
| 563     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(1)     | output              | qint16        | 0.0003723 | 0.0282952    | 0.0282952     | 0.0282952    | 0.0000000        | torch.Size([2, 256, 1])          |
| 564     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(1)             | input               | qint16        | 0.0003723 | 0.0282952    | 0.0282952     | 0.0282952    | 0.0000000        | torch.Size([2, 256, 1])          |
| 564     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(1)             | output              | qint16        | 0.0001859 | 5.9438181    | 5.9438181     | 5.9438181    | 0.0000000        | torch.Size([2, 256, 1])          |
| 565     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(1)           | input_0             | qint16        | 0.0002902 | -0.1276918   | 0.3833656     | 0.0000454    | 0.0273148        | torch.Size([2, 256, 64])         |
| 565     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(1)           | input_1             | qint16        | 0.0001859 | 5.9438181    | 5.9438181     | 5.9438181    | 0.0000000        | torch.Size([2, 256, 1])          |
| 565     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(1)           | output              | qint16        | 0.0001160 | -0.7590148   | 2.2786677     | 0.0002427    | 0.9650360        | torch.Size([2, 256, 64])         |
| 566     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(1)      | input               | torch.float32 |           | 0.8691067    | 1.1281288     | 0.9794419    | 0.0036082        | torch.Size([64])                 |
| 566     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(1)      | output              | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 567     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(1)        | input_0             | qint16        | 0.0001160 | -0.7590148   | 2.2786677     | 0.0002427    | 0.9650360        | torch.Size([2, 256, 64])         |
| 567     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(1)        | input_1             | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 567     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(1)        | output              | qint16        | 0.0001189 | -0.7952325   | 2.2576759     | 0.0100240    | 0.9133486        | torch.Size([2, 256, 64])         |
| 568     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(1)        | input               | torch.float32 |           | -0.1133662   | 0.1493634     | 0.0304540    | 0.0046508        | torch.Size([64])                 |
| 568     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(1)        | output              | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 569     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(1)          | input_0             | qint16        | 0.0001189 | -0.7952325   | 2.2576759     | 0.0100240    | 0.9133486        | torch.Size([2, 256, 64])         |
| 569     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(1)          | input_1             | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 569     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(1)          | output              | qint8         | 0.0267452 | -0.7488655   | 2.1663611     | 0.0409536    | 0.8074698        | torch.Size([2, 256, 64])         |
| 570     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(1)                   | input               | qint8         | 0.0267452 | -0.7488655   | 2.1663611     | 0.0409536    | 0.8074698        | torch.Size([2, 256, 64])         |
| 570     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(1)                   | weight              | torch.float32 |           | -0.4523612   | 0.4813256     | -0.0014562   | 0.0096743        | torch.Size([64, 64])             |
| 570     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(1)                   | bias                | torch.float32 |           | -0.1183558   | 0.2243176     | 0.0150283    | 0.0049289        | torch.Size([64])                 |
| 570     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(1)                   | output              | torch.float32 |           | -4.9182577   | 2.0230374     | -0.4451180   | 2.7184920        | torch.Size([2, 256, 64])         |
| 571     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(1)                   | input               | torch.float32 |           | -4.9182577   | 2.0230374     | -0.4451180   | 2.7184920        | torch.Size([2, 256, 64])         |
| 571     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(1)                   | output              | qint8         | 0.0337689 | 0.0000000    | 2.0261338     | 0.3725131    | 0.2275746        | torch.Size([2, 256, 64])         |
| 572     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(1)   | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.0261338     | 0.3725131    | 0.2275746        | torch.Size([2, 256, 64])         |
| 572     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(1)   | output              | qint16        | 0.0000195 | 0.3725190    | 0.3725190     | 0.3725190    | 0.0000000        | torch.Size([2, 256, 1])          |
| 573     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(1)               | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.0261338     | 0.3725131    | 0.2275746        | torch.Size([2, 256, 64])         |
| 573     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(1)               | input_1             | qint16        | 0.0000195 | 0.3725190    | 0.3725190     | 0.3725190    | 0.0000000        | torch.Size([2, 256, 1])          |
| 573     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(1)               | output              | qint16        | 0.0001376 | -0.3725518   | 1.6535684     | -0.0000172   | 0.2275812        | torch.Size([2, 256, 64])         |
| 574     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(1)               | input_0             | qint16        | 0.0001376 | -0.3725518   | 1.6535684     | -0.0000172   | 0.2275812        | torch.Size([2, 256, 64])         |
| 574     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(1)               | input_1             | qint16        | 0.0001376 | -0.3725518   | 1.6535684     | -0.0000172   | 0.2275812        | torch.Size([2, 256, 64])         |
| 574     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(1)               | output              | qint16        | 0.0006236 | 0.0000000    | 2.7340262     | 0.2277251    | 0.1978282        | torch.Size([2, 256, 64])         |
| 575     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(1)     | input_0             | qint16        | 0.0006236 | 0.0000000    | 2.7340262     | 0.2277251    | 0.1978282        | torch.Size([2, 256, 64])         |
| 575     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(1)     | output              | qint16        | 0.0000322 | 0.2277117    | 0.2277117     | 0.2277117    | 0.0000000        | torch.Size([2, 256, 1])          |
| 576     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(1)             | input               | qint16        | 0.0000322 | 0.2277117    | 0.2277117     | 0.2277117    | 0.0000000        | torch.Size([2, 256, 1])          |
| 576     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(1)             | output              | qint16        | 0.0001060 | 2.0955007    | 2.0955007     | 2.0955007    | 0.0000000        | torch.Size([2, 256, 1])          |
| 577     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(1)           | input_0             | qint16        | 0.0001376 | -0.3725518   | 1.6535684     | -0.0000172   | 0.2275812        | torch.Size([2, 256, 64])         |
| 577     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(1)           | input_1             | qint16        | 0.0001060 | 2.0955007    | 2.0955007     | 2.0955007    | 0.0000000        | torch.Size([2, 256, 1])          |
| 577     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(1)           | output              | qint16        | 0.0001466 | -0.7806441   | 3.4650977     | -0.0000206   | 0.9993145        | torch.Size([2, 256, 64])         |
| 578     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(1)      | input               | torch.float32 |           | 0.8333027    | 1.1388558     | 0.9778216    | 0.0042186        | torch.Size([64])                 |
| 578     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(1)      | output              | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 579     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(1)        | input_0             | qint16        | 0.0001466 | -0.7806441   | 3.4650977     | -0.0000206   | 0.9993145        | torch.Size([2, 256, 64])         |
| 579     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(1)        | input_1             | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 579     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(1)        | output              | qint16        | 0.0001474 | -0.8350534   | 3.7810345     | 0.0086432    | 1.0149955        | torch.Size([2, 256, 64])         |
| 580     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(1)        | input               | torch.float32 |           | -0.0757831   | 0.1161729     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 580     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(1)        | output              | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 581     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(1)          | input_0             | qint16        | 0.0001474 | -0.8350534   | 3.7810345     | 0.0086432    | 1.0149955        | torch.Size([2, 256, 64])         |
| 581     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(1)          | input_1             | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 581     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(1)          | output              | qint8         | 0.0350382 | -0.8058778   | 3.7490838     | 0.0257311    | 0.9583524        | torch.Size([2, 256, 64])         |
| 582     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(1)                   | input               | qint8         | 0.0350382 | -0.8058778   | 3.7490838     | 0.0257311    | 0.9583524        | torch.Size([2, 256, 64])         |
| 582     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(1)                   | weight              | torch.float32 |           | -0.5707353   | 0.3620123     | -0.0010372   | 0.0088292        | torch.Size([64, 64])             |
| 582     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(1)                   | bias                | torch.float32 |           | -0.1720246   | 0.1340137     | -0.0235144   | 0.0050507        | torch.Size([64])                 |
| 582     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(1)                   | output              | torch.float32 |           | -5.0947351   | 3.6254582     | -0.3992728   | 2.7432122        | torch.Size([2, 256, 64])         |
| 583     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(1)                   | input               | torch.float32 |           | -5.0947351   | 3.6254582     | -0.3992728   | 2.7432122        | torch.Size([2, 256, 64])         |
| 583     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(1)                   | output              | qint8         | 0.0287789 | 0.0000000    | 3.6261353     | 0.4941868    | 0.6318119        | torch.Size([2, 256, 64])         |
| 584     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(1)   | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6261353     | 0.4941868    | 0.6318119        | torch.Size([2, 256, 64])         |
| 584     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(1)   | output              | qint16        | 0.0000166 | 0.4941945    | 0.4941945     | 0.4941945    | 0.0000000        | torch.Size([2, 256, 1])          |
| 585     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(1)               | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6261353     | 0.4941868    | 0.6318119        | torch.Size([2, 256, 64])         |
| 585     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(1)               | input_1             | qint16        | 0.0000166 | 0.4941945    | 0.4941945     | 0.4941945    | 0.0000000        | torch.Size([2, 256, 1])          |
| 585     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(1)               | output              | qint16        | 0.0000988 | -0.4941766   | 3.1318936     | 0.0000031    | 0.6318020        | torch.Size([2, 256, 64])         |
| 586     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(1)               | input_0             | qint16        | 0.0000988 | -0.4941766   | 3.1318936     | 0.0000031    | 0.6318020        | torch.Size([2, 256, 64])         |
| 586     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(1)               | input_1             | qint16        | 0.0000988 | -0.4941766   | 3.1318936     | 0.0000031    | 0.6318020        | torch.Size([2, 256, 64])         |
| 586     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(1)               | output              | qint16        | 0.0003201 | 0.0006402    | 9.8087368     | 0.6318073    | 1.8655338        | torch.Size([2, 256, 64])         |
| 587     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(1)     | input_0             | qint16        | 0.0003201 | 0.0006402    | 9.8087368     | 0.6318073    | 1.8655338        | torch.Size([2, 256, 64])         |
| 587     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(1)     | output              | qint16        | 0.0000230 | 0.6317974    | 0.6317974     | 0.6317974    | 0.0000000        | torch.Size([2, 256, 1])          |
| 588     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(1)             | input               | qint16        | 0.0000230 | 0.6317974    | 0.6317974     | 0.6317974    | 0.0000000        | torch.Size([2, 256, 1])          |
| 588     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(1)             | output              | qint16        | 0.0000608 | 1.2580867    | 1.2580867     | 1.2580867    | 0.0000000        | torch.Size([2, 256, 1])          |
| 589     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(1)           | input_0             | qint16        | 0.0000988 | -0.4941766   | 3.1318936     | 0.0000031    | 0.6318020        | torch.Size([2, 256, 64])         |
| 589     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(1)           | input_1             | qint16        | 0.0000608 | 1.2580867    | 1.2580867     | 1.2580867    | 0.0000000        | torch.Size([2, 256, 1])          |
| 589     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(1)           | output              | qint16        | 0.0001598 | -0.6216512   | 3.9402680     | 0.0000500    | 0.9999705        | torch.Size([2, 256, 64])         |
| 590     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(1)      | input               | torch.float32 |           | 0.8006503    | 1.1495361     | 0.9818506    | 0.0032003        | torch.Size([64])                 |
| 590     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(1)      | output              | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 591     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(1)        | input_0             | qint16        | 0.0001598 | -0.6216512   | 3.9402680     | 0.0000500    | 0.9999705        | torch.Size([2, 256, 64])         |
| 591     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(1)        | input_1             | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 591     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(1)        | output              | qint16        | 0.0001633 | -0.7146460   | 4.0741515     | 0.0120873    | 1.0183605        | torch.Size([2, 256, 64])         |
| 592     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(1)        | input               | torch.float32 |           | -0.0461140   | 0.1411197     | 0.0132828    | 0.0015701        | torch.Size([64])                 |
| 592     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(1)        | output              | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 593     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(1)          | input_0             | qint16        | 0.0001633 | -0.7146460   | 4.0741515     | 0.0120873    | 1.0183605        | torch.Size([2, 256, 64])         |
| 593     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(1)          | input_1             | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 593     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(1)          | output              | qint8         | 0.0387038 | -0.6966682   | 4.0638976     | 0.0247946    | 0.9995770        | torch.Size([2, 256, 64])         |
| 594     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(1)                   | input               | qint8         | 0.0387038 | -0.6966682   | 4.0638976     | 0.0247946    | 0.9995770        | torch.Size([2, 256, 64])         |
| 594     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(1)                   | weight              | torch.float32 |           | -0.5701389   | 0.3477888     | 0.0006721    | 0.0085883        | torch.Size([64, 64])             |
| 594     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(1)                   | bias                | torch.float32 |           | -0.1677032   | 0.1709885     | -0.0237130   | 0.0070098        | torch.Size([64])                 |
| 594     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(1)                   | output              | torch.float32 |           | -4.1676068   | 7.1161237     | -0.6260737   | 2.1186965        | torch.Size([2, 256, 64])         |
| 595     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(1)                  | input               | torch.float32 |           | -4.1676068   | 7.1161237     | -0.6260737   | 2.1186965        | torch.Size([2, 256, 64])         |
| 595     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(1)                  | output              | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2275446    | 0.8177109        | torch.Size([2, 256, 64])         |
| 596     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(1)  | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2275446    | 0.8177109        | torch.Size([2, 256, 64])         |
| 596     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(1)  | output              | qint16        | 0.0000138 | 0.2275511    | 0.2275511     | 0.2275511    | 0.0000000        | torch.Size([2, 256, 1])          |
| 597     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(1)              | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2275446    | 0.8177109        | torch.Size([2, 256, 64])         |
| 597     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(1)              | input_1             | qint16        | 0.0000138 | 0.2275511    | 0.2275511     | 0.2275511    | 0.0000000        | torch.Size([2, 256, 1])          |
| 597     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(1)              | output              | qint16        | 0.0002137 | -0.2275530   | 6.9133382     | -0.0000066   | 0.8177190        | torch.Size([2, 256, 64])         |
| 598     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(1)              | input_0             | qint16        | 0.0002137 | -0.2275530   | 6.9133382     | -0.0000066   | 0.8177190        | torch.Size([2, 256, 64])         |
| 598     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(1)              | input_1             | qint16        | 0.0002137 | -0.2275530   | 6.9133382     | -0.0000066   | 0.8177190        | torch.Size([2, 256, 64])         |
| 598     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(1)              | output              | qint16        | 0.0014959 | 0.0000000    | 47.7948570    | 0.8181317    | 35.0363617       | torch.Size([2, 256, 64])         |
| 599     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(1)    | input_0             | qint16        | 0.0014959 | 0.0000000    | 47.7948570    | 0.8181317    | 35.0363617       | torch.Size([2, 256, 64])         |
| 599     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(1)    | output              | qint16        | 0.0000253 | 0.8181413    | 0.8181413     | 0.8181413    | 0.0000000        | torch.Size([2, 256, 1])          |
| 600     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(1)            | input               | qint16        | 0.0000253 | 0.8181413    | 0.8181413     | 0.8181413    | 0.0000000        | torch.Size([2, 256, 1])          |
| 600     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(1)            | output              | qint16        | 0.0000680 | 1.1055866    | 1.1055866     | 1.1055866    | 0.0000000        | torch.Size([2, 256, 1])          |
| 601     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(1)          | input_0             | qint16        | 0.0002137 | -0.2275530   | 6.9133382     | -0.0000066   | 0.8177190        | torch.Size([2, 256, 64])         |
| 601     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(1)          | input_1             | qint16        | 0.0000680 | 1.1055866    | 1.1055866     | 1.1055866    | 0.0000000        | torch.Size([2, 256, 1])          |
| 601     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(1)          | output              | qint16        | 0.0002366 | -0.2514754   | 7.6433854     | 0.0000703    | 0.9994981        | torch.Size([2, 256, 64])         |
| 602     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(1)     | input               | torch.float32 |           | 0.7297163    | 1.2824999     | 1.0134131    | 0.0161719        | torch.Size([64])                 |
| 602     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(1)     | output              | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 603     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(1)       | input_0             | qint16        | 0.0002366 | -0.2514754   | 7.6433854     | 0.0000703    | 0.9994981        | torch.Size([2, 256, 64])         |
| 603     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(1)       | input_1             | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 603     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(1)       | output              | qint16        | 0.0001954 | -0.3225991   | 5.5775833     | -0.0422483   | 0.5729139        | torch.Size([2, 256, 64])         |
| 604     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(1)       | input               | torch.float32 |           | -0.2385408   | 0.3192695     | 0.0900053    | 0.0129013        | torch.Size([64])                 |
| 604     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(1)       | output              | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 605     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(1)         | input_0             | qint16        | 0.0001954 | -0.3225991   | 5.5775833     | -0.0422483   | 0.5729139        | torch.Size([2, 256, 64])         |
| 605     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(1)         | input_1             | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 605     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(1)         | output              | qint8         | 0.0462055 | -0.3696443   | 5.3598428     | 0.0476495    | 0.4951884        | torch.Size([2, 256, 64])         |
| 606     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_0             | qint8         | 0.0587279 | -0.8221908   | 4.5807776     | 0.1183735    | 0.9602281        | torch.Size([2, 256, 128])        |
| 606     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_1             | qint8         | 0.0385920 | -1.0033913   | 4.0521569     | -0.0048240   | 1.2958747        | torch.Size([2, 256, 32])         |
| 606     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_2             | qint8         | 0.0373904 | -0.5234659   | 3.2155764     | 0.0093476    | 0.6984528        | torch.Size([2, 256, 32])         |
| 606     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_3             | qint8         | 0.0462055 | -0.3696443   | 5.3598428     | 0.0476495    | 0.4951884        | torch.Size([2, 256, 64])         |
| 606     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | output              | qint8         | 0.0569265 | -1.0246774   | 5.3510933     | 0.0736042    | 0.8488365        | torch.Size([2, 256, 256])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | input_0             | torch.float32 |           | -0.8671875   | 0.8359375     | -0.1171943   | 0.0536020        | torch.Size([12, 3, 256, 704])    |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_0            | torch.float32 |           | -0.7109375   | 0.8359375     | -0.0736803   | 0.0375602        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_1            | torch.float32 |           | -0.7578125   | 0.8125000     | -0.1215375   | 0.0386390        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_2            | torch.float32 |           | -0.7656250   | 0.6796875     | -0.0698674   | 0.0240641        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_3            | torch.float32 |           | -0.6093750   | 0.8281250     | -0.0708556   | 0.0246479        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_4            | torch.float32 |           | -0.8437500   | 0.8281250     | -0.0984946   | 0.0571401        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_5            | torch.float32 |           | -0.7812500   | 0.8281250     | -0.0661624   | 0.0312031        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_6            | torch.float32 |           | -0.8671875   | 0.8203125     | -0.1705534   | 0.0695115        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_7            | torch.float32 |           | -0.8359375   | 0.8359375     | -0.1157308   | 0.0423470        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_8            | torch.float32 |           | -0.8437500   | 0.8359375     | -0.1084403   | 0.0604877        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_9            | torch.float32 |           | -0.8671875   | 0.8203125     | -0.1853776   | 0.0802727        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_10           | torch.float32 |           | -0.8593750   | 0.8359375     | -0.1273327   | 0.0596137        | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                         | head                                              | output_11           | torch.float32 |           | -0.8593750   | 0.8359375     | -0.1982997   | 0.0942286        | torch.Size([3, 256, 704])        |
| 608     | torch.Tensor.double                                                         | head                                              | input               | torch.float64 |           | -646.5387754 | 667.1202547   | -52.6064262  | 47748.4609375    | torch.Size([12, 4, 4])           |
| 608     | torch.Tensor.double                                                         | head                                              | output              | torch.float64 |           | -646.5387754 | 667.1202547   | -52.6064262  | 47748.4609375    | torch.Size([12, 4, 4])           |
| 609     | torch.matmul                                                                | head                                              | input_0             | torch.float64 |           | -1.0000000   | 1.0000000     | 0.0006658    | 0.2513128        | torch.Size([12, 4, 4])           |
| 609     | torch.matmul                                                                | head                                              | input_1             | torch.float64 |           | -646.5387754 | 667.1202547   | -52.6064262  | 47748.4609375    | torch.Size([12, 4, 4])           |
| 609     | torch.matmul                                                                | head                                              | output              | torch.float64 |           | -4.3784015   | 1.5829016     | -0.2683935   | 1.3634968        | torch.Size([12, 4, 4])           |
| 610     | torch.Tensor.view                                                           | head                                              | input_0             | torch.float64 |           | -4.3784015   | 1.5829016     | -0.2683935   | 1.3634968        | torch.Size([12, 4, 4])           |
| 610     | torch.Tensor.view                                                           | head                                              | output              | torch.float64 |           | -4.3784015   | 1.5829016     | -0.2683935   | 1.3634968        | torch.Size([2, 6, 4, 4])         |
| 611     | torch.Tensor.float                                                          | head                                              | input               | torch.float64 |           | -4.3784015   | 1.5829016     | -0.2683935   | 1.3634968        | torch.Size([2, 6, 4, 4])         |
| 611     | torch.Tensor.float                                                          | head                                              | output              | torch.float32 |           | -4.3784013   | 1.5829016     | -0.2683935   | 1.3634968        | torch.Size([2, 6, 4, 4])         |
| 612     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.mat_quant_stub                               | input               | torch.float32 |           | -4.3784013   | 1.5829016     | -0.2683935   | 1.3634968        | torch.Size([2, 6, 4, 4])         |
| 612     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.mat_quant_stub                               | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 613     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before                                    | input               | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 613     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before                                    | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 613     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before                                    | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 614     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.0.query_cat                           | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 512, 256])        |
| 614     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.0.query_cat                           | input_1             | qint8         | 0.0569265 | -1.2523835   | 7.2296681     | 0.0620206    | 0.8451077        | torch.Size([2, 512, 256])        |
| 614     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.0.query_cat                           | output              | qint8         | 0.0565044 | -1.2430965   | 7.1760573     | 0.0311522    | 0.4206262        | torch.Size([2, 512, 512])        |
| 615     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.0.key_cat                             | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 615     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.0.key_cat                             | input_1             | qint8         | 0.0569265 | -1.0246774   | 5.3510933     | 0.0736042    | 0.8488365        | torch.Size([2, 256, 256])        |
| 615     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.0.key_cat                             | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 616     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0565044 | -1.2430965   | 7.1760573     | 0.0311522    | 0.4206262        | torch.Size([2, 512, 512])        |
| 616     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0565044 | -1.2430965   | 7.1760573     | 0.0311522    | 0.4206262        | torch.Size([512, 2, 512])        |
| 617     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 617     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 618     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 618     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 619     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | input_0             | qint8         | 0.0565044 | -1.2430965   | 7.1760573     | 0.0311522    | 0.4206262        | torch.Size([512, 2, 512])        |
| 619     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | output              | qint8         | 0.0565044 | -1.2430965   | 7.1760573     | 0.0311522    | 0.4206262        | torch.Size([512, 2, 512])        |
| 620     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 620     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 621     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 621     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 622     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.q_proj                         | input               | qint8         | 0.0565044 | -1.2430965   | 7.1760573     | 0.0311522    | 0.4206262        | torch.Size([512, 2, 512])        |
| 622     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.q_proj                         | weight              | torch.float32 |           | -0.2786695   | 0.2698635     | 0.0002171    | 0.0036005        | torch.Size([512, 512])           |
| 622     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.q_proj                         | bias                | torch.float32 |           | -0.1025436   | 0.1140026     | -0.0003242   | 0.0019732        | torch.Size([512])                |
| 622     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.q_proj                         | output              | qint8         | 0.0918717 | -9.3709116   | 9.0952969     | -0.0650862   | 5.3204603        | torch.Size([512, 2, 512])        |
| 623     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.k_proj                         | input               | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 623     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.k_proj                         | weight              | torch.float32 |           | -0.2842779   | 0.2792765     | -0.0001027   | 0.0036413        | torch.Size([512, 512])           |
| 623     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.k_proj                         | bias                | torch.float32 |           | -0.0096402   | 0.0094814     | 0.0000140    | 0.0000141        | torch.Size([512])                |
| 623     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.k_proj                         | output              | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([256, 2, 512])        |
| 624     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.v_proj                         | input               | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 624     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.v_proj                         | weight              | torch.float32 |           | -0.1630211   | 0.1449102     | 0.0001645    | 0.0010630        | torch.Size([512, 512])           |
| 624     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.v_proj                         | bias                | torch.float32 |           | -0.0888495   | 0.0985312     | -0.0008267   | 0.0008712        | torch.Size([512])                |
| 624     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.v_proj                         | output              | qint8         | 0.0065373 | -0.0915225   | 0.0980598     | -0.0008172   | 0.0008789        | torch.Size([256, 2, 512])        |
| 625     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | input_0             | qint8         | 0.0918717 | -9.3709116   | 9.0952969     | -0.0650862   | 5.3204603        | torch.Size([512, 2, 512])        |
| 625     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | output              | qint8         | 0.0918717 | -9.3709116   | 9.0952969     | -0.0650862   | 5.3204603        | torch.Size([512, 16, 64])        |
| 626     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0918717 | -9.3709116   | 9.0952969     | -0.0650862   | 5.3204603        | torch.Size([512, 16, 64])        |
| 626     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0918717 | -9.3709116   | 9.0952969     | -0.0650862   | 5.3204603        | torch.Size([16, 512, 64])        |
| 627     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | input_0             | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([256, 2, 512])        |
| 627     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | output              | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([256, 16, 64])        |
| 628     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([256, 16, 64])        |
| 628     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([16, 256, 64])        |
| 629     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | input_0             | qint8         | 0.0065373 | -0.0915225   | 0.0980598     | -0.0008172   | 0.0008789        | torch.Size([256, 2, 512])        |
| 629     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | output              | qint8         | 0.0065373 | -0.0915225   | 0.0980598     | -0.0008172   | 0.0008789        | torch.Size([256, 16, 64])        |
| 630     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0065373 | -0.0915225   | 0.0980598     | -0.0008172   | 0.0008789        | torch.Size([256, 16, 64])        |
| 630     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0065373 | -0.0915225   | 0.0980598     | -0.0008172   | 0.0008789        | torch.Size([16, 256, 64])        |
| 631     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.0.attn.q_scale_mul                    | input_0             | qint8         | 0.0918717 | -9.3709116   | 9.0952969     | -0.0650862   | 5.3204603        | torch.Size([16, 512, 64])        |
| 631     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.0.attn.q_scale_mul                    | output              | qint8         | 0.0114840 | -1.1713639   | 1.1369121     | -0.0081358   | 0.0831322        | torch.Size([16, 512, 64])        |
| 632     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([16, 256, 64])        |
| 632     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([16, 64, 256])        |
| 633     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.0.attn.matmul                         | input_0             | qint8         | 0.0114840 | -1.1713639   | 1.1369121     | -0.0081358   | 0.0831322        | torch.Size([16, 512, 64])        |
| 633     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.0.attn.matmul                         | input_1             | qint8         | 0.0869502 | -6.8690662   | 7.1299167     | 0.1356899    | 6.5593877        | torch.Size([16, 64, 256])        |
| 633     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.0.attn.matmul                         | output              | qint8         | 1.8235589 | -78.4130325  | 82.0601501    | 1.2109571    | 693.5892944      | torch.Size([16, 512, 256])       |
| 634     | torch.Tensor.max                                                            | head.layers.0.attn.softmax                        | input               | qint8         | 1.8235589 | -78.4130325  | 82.0601501    | 1.2109571    | 693.5892944      | torch.Size([16, 512, 256])       |
| 634     | torch.Tensor.max                                                            | head.layers.0.attn.softmax                        | output_0            | qint8         | 1.8235589 | -78.4130325  | 82.0601501    | 1.2109571    | 693.6735840      | torch.Size([16, 512, 1])         |
| 634     | torch.Tensor.max                                                            | head.layers.0.attn.softmax                        | output_1            | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 1])         |
| 635     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.0.attn.softmax.sub                    | input_0             | qint8         | 1.8235589 | -78.4130325  | 82.0601501    | 1.2109571    | 693.5892944      | torch.Size([16, 512, 256])       |
| 635     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.0.attn.softmax.sub                    | input_1             | qint8         | 1.8235589 | -78.4130325  | 82.0601501    | 1.2109571    | 693.6735840      | torch.Size([16, 512, 1])         |
| 635     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.0.attn.softmax.sub                    | output              | qint16        | 0.0167598 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 636     | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.0.attn.softmax.exp                    | input               | qint16        | 0.0167598 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 636     | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.0.attn.softmax.exp                    | output              | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 637     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.0.attn.softmax.sum                    | input               | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 637     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.0.attn.softmax.sum                    | output              | qint16        | 0.0037493 | 122.8518143  | 122.8518143   | 122.8518143  | 0.0000000        | torch.Size([16, 512, 1])         |
| 638     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.0.attn.softmax.reciprocal             | input               | qint16        | 0.0037493 | 122.8518143  | 122.8518143   | 122.8518143  | 0.0000000        | torch.Size([16, 512, 1])         |
| 638     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.0.attn.softmax.reciprocal             | output              | qint16        | 0.0000305 | 0.0081483    | 0.0081483     | 0.0081483    | 0.0000000        | torch.Size([16, 512, 1])         |
| 639     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.0.attn.softmax.mul                    | input_0             | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 639     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.0.attn.softmax.mul                    | input_1             | qint16        | 0.0000305 | 0.0081483    | 0.0081483     | 0.0081483    | 0.0000000        | torch.Size([16, 512, 1])         |
| 639     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.0.attn.softmax.mul                    | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 640     | torch.nn.modules.dropout.Dropout                                            | head.layers.0.attn.attention_drop                 | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 640     | torch.nn.modules.dropout.Dropout                                            | head.layers.0.attn.attention_drop                 | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 641     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.0.attn.attn_matmul                    | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 641     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.0.attn.attn_matmul                    | input_1             | qint8         | 0.0065373 | -0.0915225   | 0.0980598     | -0.0008172   | 0.0008789        | torch.Size([16, 256, 64])        |
| 641     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.0.attn.attn_matmul                    | output              | qint8         | 0.0063305 | -0.1835840   | 0.1962449     | -0.0015703   | 0.0035078        | torch.Size([16, 512, 64])        |
| 642     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0063305 | -0.1835840   | 0.1962449     | -0.0015703   | 0.0035078        | torch.Size([16, 512, 64])        |
| 642     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0063305 | -0.1835840   | 0.1962449     | -0.0015703   | 0.0035078        | torch.Size([512, 16, 64])        |
| 643     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | input_0             | qint8         | 0.0063305 | -0.1835840   | 0.1962449     | -0.0015703   | 0.0035078        | torch.Size([512, 16, 64])        |
| 643     | torch.Tensor.reshape                                                        | head.layers.0.attn                                | output              | qint8         | 0.0063305 | -0.1835840   | 0.1962449     | -0.0015703   | 0.0035078        | torch.Size([512, 2, 512])        |
| 644     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.out_proj                       | input               | qint8         | 0.0063305 | -0.1835840   | 0.1962449     | -0.0015703   | 0.0035078        | torch.Size([512, 2, 512])        |
| 644     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.out_proj                       | weight              | torch.float32 |           | -0.1874478   | 0.1759859     | -0.0001105   | 0.0022686        | torch.Size([512, 512])           |
| 644     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.out_proj                       | bias                | torch.float32 |           | -0.3150745   | 0.2518794     | 0.0131974    | 0.0093190        | torch.Size([512])                |
| 644     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.0.attn.out_proj                       | output              | qint8         | 0.0105292 | -0.9055097   | 0.6633385     | 0.0316287    | 0.0398218        | torch.Size([512, 2, 512])        |
| 645     | torch.Tensor.view                                                           | head.layers.0.attn                                | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 645     | torch.Tensor.view                                                           | head.layers.0.attn                                | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 646     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.0.attn.attn_weights_mean              | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 646     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.0.attn.attn_weights_mean              | output              | qint8         | 0.0028862 | 0.0086585    | 0.0086585     | 0.0086585    | 0.0000000        | torch.Size([2, 512, 256])        |
| 647     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | input_0             | qint8         | 0.0105292 | -0.9055097   | 0.6633385     | 0.0316287    | 0.0398218        | torch.Size([512, 2, 512])        |
| 647     | torch.Tensor.transpose                                                      | head.layers.0.attn                                | output              | qint8         | 0.0105292 | -0.9055097   | 0.6633385     | 0.0316287    | 0.0398218        | torch.Size([2, 512, 512])        |
| 648     | torch.nn.modules.dropout.Dropout                                            | head.layers.0.dropout                             | input               | qint8         | 0.0105292 | -0.9055097   | 0.6633385     | 0.0316287    | 0.0398218        | torch.Size([2, 512, 512])        |
| 648     | torch.nn.modules.dropout.Dropout                                            | head.layers.0.dropout                             | output              | qint8         | 0.0105292 | -0.9055097   | 0.6633385     | 0.0316287    | 0.0398218        | torch.Size([2, 512, 512])        |
| 649     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.0.add                                 | input_0             | qint8         | 0.0565044 | -1.2430965   | 7.1760573     | 0.0311522    | 0.4206262        | torch.Size([2, 512, 512])        |
| 649     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.0.add                                 | input_1             | qint8         | 0.0105292 | -0.9055097   | 0.6633385     | 0.0316287    | 0.0398218        | torch.Size([2, 512, 512])        |
| 649     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.0.add                                 | output              | qint8         | 0.0531215 | -1.4342798   | 6.7464271     | 0.0623893    | 0.3721612        | torch.Size([2, 512, 512])        |
| 650     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after                                     | input               | qint8         | 0.0531215 | -1.4342798   | 6.7464271     | 0.0623893    | 0.3721612        | torch.Size([2, 512, 512])        |
| 650     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after                                     | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 650     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after                                     | output              | qint16        | 0.0015259 | -6.1813354   | 4.5745850     | -0.0220921   | 0.6049474        | torch.Size([2, 512, 256])        |
| 651     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(1)                                 | input               | qint16        | 0.0015259 | -6.1813354   | 4.5745850     | -0.0220921   | 0.6049474        | torch.Size([2, 512, 256])        |
| 651     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(1)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 651     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(1)                                 | output              | qint16        | 0.0001526 | -4.3124390   | 3.8990784     | 0.0009192    | 0.0348120        | torch.Size([2, 512, 512])        |
| 652     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.1.query_cat                           | input_0             | qint16        | 0.0015259 | -6.1813354   | 4.5745850     | -0.0220921   | 0.6049474        | torch.Size([2, 512, 256])        |
| 652     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.1.query_cat                           | input_1             | qint8         | 0.0569265 | -1.2523835   | 7.2296681     | 0.0620206    | 0.8451077        | torch.Size([2, 512, 256])        |
| 652     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.1.query_cat                           | output              | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([2, 512, 512])        |
| 653     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.1.key_cat                             | input_0             | qint16        | 0.0015259 | -6.1813354   | 4.5745850     | -0.0220921   | 0.6049474        | torch.Size([2, 512, 256])        |
| 653     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.1.key_cat                             | input_1             | qint8         | 0.0569265 | -1.2523835   | 7.2296681     | 0.0620206    | 0.8451077        | torch.Size([2, 512, 256])        |
| 653     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.1.key_cat                             | output              | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([2, 512, 512])        |
| 654     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([2, 512, 512])        |
| 654     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 655     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([2, 512, 512])        |
| 655     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 656     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint16        | 0.0001526 | -4.3124390   | 3.8990784     | 0.0009192    | 0.0348120        | torch.Size([2, 512, 512])        |
| 656     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint16        | 0.0001526 | -4.3124390   | 3.8990784     | 0.0009192    | 0.0348120        | torch.Size([512, 2, 512])        |
| 657     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | input_0             | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 657     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | output              | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 658     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | input_0             | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 658     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | output              | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 659     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | input_0             | qint16        | 0.0001526 | -4.3124390   | 3.8990784     | 0.0009192    | 0.0348120        | torch.Size([512, 2, 512])        |
| 659     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | output              | qint16        | 0.0001526 | -4.3124390   | 3.8990784     | 0.0009192    | 0.0348120        | torch.Size([512, 2, 512])        |
| 660     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.q_proj                         | input               | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 660     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.q_proj                         | weight              | torch.float32 |           | -0.6016091   | 0.5586885     | -0.0000813   | 0.0032236        | torch.Size([512, 512])           |
| 660     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.q_proj                         | bias                | torch.float32 |           | -0.1268833   | 0.1088683     | -0.0012884   | 0.0012951        | torch.Size([512])                |
| 660     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.q_proj                         | output              | qint8         | 0.1122312 | -13.6922007  | 12.9065819    | -0.0037902   | 12.9875240       | torch.Size([512, 2, 512])        |
| 661     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.k_proj                         | input               | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([512, 2, 512])        |
| 661     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.k_proj                         | weight              | torch.float32 |           | -0.3812873   | 0.4850378     | -0.0000840   | 0.0033379        | torch.Size([512, 512])           |
| 661     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.k_proj                         | bias                | torch.float32 |           | -0.0197120   | 0.0165953     | -0.0001635   | 0.0000225        | torch.Size([512])                |
| 661     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.k_proj                         | output              | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([512, 2, 512])        |
| 662     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.v_proj                         | input               | qint16        | 0.0001526 | -4.3124390   | 3.8990784     | 0.0009192    | 0.0348120        | torch.Size([512, 2, 512])        |
| 662     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.v_proj                         | weight              | torch.float32 |           | -0.1545264   | 0.1564725     | -0.0000621   | 0.0008573        | torch.Size([512, 512])           |
| 662     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.v_proj                         | bias                | torch.float32 |           | -0.1773102   | 0.2198186     | 0.0024783    | 0.0030017        | torch.Size([512])                |
| 662     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.v_proj                         | output              | qint8         | 0.0120048 | -1.5366176   | 1.0444198     | 0.0027862    | 0.0283507        | torch.Size([512, 2, 512])        |
| 663     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | input_0             | qint8         | 0.1122312 | -13.6922007  | 12.9065819    | -0.0037902   | 12.9875240       | torch.Size([512, 2, 512])        |
| 663     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | output              | qint8         | 0.1122312 | -13.6922007  | 12.9065819    | -0.0037902   | 12.9875240       | torch.Size([512, 16, 64])        |
| 664     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.1122312 | -13.6922007  | 12.9065819    | -0.0037902   | 12.9875240       | torch.Size([512, 16, 64])        |
| 664     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.1122312 | -13.6922007  | 12.9065819    | -0.0037902   | 12.9875240       | torch.Size([16, 512, 64])        |
| 665     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | input_0             | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([512, 2, 512])        |
| 665     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | output              | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([512, 16, 64])        |
| 666     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([512, 16, 64])        |
| 666     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([16, 512, 64])        |
| 667     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | input_0             | qint8         | 0.0120048 | -1.5366176   | 1.0444198     | 0.0027862    | 0.0283507        | torch.Size([512, 2, 512])        |
| 667     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | output              | qint8         | 0.0120048 | -1.5366176   | 1.0444198     | 0.0027862    | 0.0283507        | torch.Size([512, 16, 64])        |
| 668     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.0120048 | -1.5366176   | 1.0444198     | 0.0027862    | 0.0283507        | torch.Size([512, 16, 64])        |
| 668     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.0120048 | -1.5366176   | 1.0444198     | 0.0027862    | 0.0283507        | torch.Size([16, 512, 64])        |
| 669     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.1.attn.q_scale_mul                    | input_0             | qint8         | 0.1122312 | -13.6922007  | 12.9065819    | -0.0037902   | 12.9875240       | torch.Size([16, 512, 64])        |
| 669     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.1.attn.q_scale_mul                    | output              | qint8         | 0.0140289 | -1.7115251   | 1.6133227     | -0.0004738   | 0.2029300        | torch.Size([16, 512, 64])        |
| 670     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([16, 512, 64])        |
| 670     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([16, 64, 512])        |
| 671     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.1.attn.matmul                         | input_0             | qint8         | 0.0140289 | -1.7115251   | 1.6133227     | -0.0004738   | 0.2029300        | torch.Size([16, 512, 64])        |
| 671     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.1.attn.matmul                         | input_1             | qint8         | 0.1161386 | -14.8657398  | 14.6334629    | 0.0138541    | 7.0583878        | torch.Size([16, 64, 512])        |
| 671     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.1.attn.matmul                         | output              | qint8         | 2.7551215 | -352.6555481 | 264.4916687   | -9.7481766   | 1082.3946533     | torch.Size([16, 512, 512])       |
| 672     | torch.Tensor.max                                                            | head.layers.1.attn.softmax                        | input               | qint8         | 2.7551215 | -352.6555481 | 264.4916687   | -9.7481766   | 1082.3946533     | torch.Size([16, 512, 512])       |
| 672     | torch.Tensor.max                                                            | head.layers.1.attn.softmax                        | output_0            | qint8         | 2.7551215 | -11.0204859  | 264.4916687   | 61.0028000   | 2322.0754395     | torch.Size([16, 512, 1])         |
| 672     | torch.Tensor.max                                                            | head.layers.1.attn.softmax                        | output_1            | torch.int64   |           | 0.0000000    | 503.0000000   | 250.6577148  | 14339.4687500    | torch.Size([16, 512, 1])         |
| 673     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.1.attn.softmax.sub                    | input_0             | qint8         | 2.7551215 | -352.6555481 | 264.4916687   | -9.7481766   | 1082.3946533     | torch.Size([16, 512, 512])       |
| 673     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.1.attn.softmax.sub                    | input_1             | qint8         | 2.7551215 | -11.0204859  | 264.4916687   | 61.0028000   | 2322.0754395     | torch.Size([16, 512, 1])         |
| 673     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.1.attn.softmax.sub                    | output              | qint16        | 0.0349974 | -551.0348511 | 0.0000000     | -70.7510223  | 2778.9167480     | torch.Size([16, 512, 512])       |
| 674     | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.1.attn.softmax.exp                    | input               | qint16        | 0.0349974 | -551.0348511 | 0.0000000     | -70.7510223  | 2778.9167480     | torch.Size([16, 512, 512])       |
| 674     | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.1.attn.softmax.exp                    | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0264590    | 0.0255871        | torch.Size([16, 512, 512])       |
| 675     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.1.attn.softmax.sum                    | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0264590    | 0.0255871        | torch.Size([16, 512, 512])       |
| 675     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.1.attn.softmax.sum                    | output              | qint16        | 0.0009752 | 0.9995574    | 31.9536572    | 4.3203645    | 82.0057297       | torch.Size([16, 512, 1])         |
| 676     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.1.attn.softmax.reciprocal             | input               | qint16        | 0.0009752 | 0.9995574    | 31.9536572    | 4.3203645    | 82.0057297       | torch.Size([16, 512, 1])         |
| 676     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.1.attn.softmax.reciprocal             | output              | qint16        | 0.0000305 | 0.0312810    | 0.9999847     | 0.7893655    | 0.1112509        | torch.Size([16, 512, 1])         |
| 677     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.1.attn.softmax.mul                    | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0264590    | 0.0255871        | torch.Size([16, 512, 512])       |
| 677     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.1.attn.softmax.mul                    | input_1             | qint16        | 0.0000305 | 0.0312810    | 0.9999847     | 0.7893655    | 0.1112509        | torch.Size([16, 512, 1])         |
| 677     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.1.attn.softmax.mul                    | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0025047    | 0.0015019        | torch.Size([16, 512, 512])       |
| 678     | torch.nn.modules.dropout.Dropout                                            | head.layers.1.attn.attention_drop                 | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0025047    | 0.0015019        | torch.Size([16, 512, 512])       |
| 678     | torch.nn.modules.dropout.Dropout                                            | head.layers.1.attn.attention_drop                 | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0025047    | 0.0015019        | torch.Size([16, 512, 512])       |
| 679     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.1.attn.attn_matmul                    | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0025047    | 0.0015019        | torch.Size([16, 512, 512])       |
| 679     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.1.attn.attn_matmul                    | input_1             | qint8         | 0.0120048 | -1.5366176   | 1.0444198     | 0.0027862    | 0.0283507        | torch.Size([16, 512, 64])        |
| 679     | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.1.attn.attn_matmul                    | output              | qint8         | 0.0163199 | -2.0889513   | 2.0726314     | -0.0248879   | 0.1399430        | torch.Size([16, 512, 64])        |
| 680     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.0163199 | -2.0889513   | 2.0726314     | -0.0248879   | 0.1399430        | torch.Size([16, 512, 64])        |
| 680     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.0163199 | -2.0889513   | 2.0726314     | -0.0248879   | 0.1399430        | torch.Size([512, 16, 64])        |
| 681     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | input_0             | qint8         | 0.0163199 | -2.0889513   | 2.0726314     | -0.0248879   | 0.1399430        | torch.Size([512, 16, 64])        |
| 681     | torch.Tensor.reshape                                                        | head.layers.1.attn                                | output              | qint8         | 0.0163199 | -2.0889513   | 2.0726314     | -0.0248879   | 0.1399430        | torch.Size([512, 2, 512])        |
| 682     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.out_proj                       | input               | qint8         | 0.0163199 | -2.0889513   | 2.0726314     | -0.0248879   | 0.1399430        | torch.Size([512, 2, 512])        |
| 682     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.out_proj                       | weight              | torch.float32 |           | -0.1796134   | 0.1793741     | 0.0000376    | 0.0020221        | torch.Size([512, 512])           |
| 682     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.out_proj                       | bias                | torch.float32 |           | -0.3707179   | 0.3755981     | -0.0065476   | 0.0208958        | torch.Size([512])                |
| 682     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.1.attn.out_proj                       | output              | qint8         | 0.0158367 | -2.0271015   | 2.0112648     | 0.0235501    | 0.3729019        | torch.Size([512, 2, 512])        |
| 683     | torch.Tensor.view                                                           | head.layers.1.attn                                | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0025047    | 0.0015019        | torch.Size([16, 512, 512])       |
| 683     | torch.Tensor.view                                                           | head.layers.1.attn                                | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0025047    | 0.0015019        | torch.Size([2, 8, 512, 512])     |
| 684     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.1.attn.attn_weights_mean              | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0025047    | 0.0015019        | torch.Size([2, 8, 512, 512])     |
| 684     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.1.attn.attn_weights_mean              | output              | qint8         | 0.0039515 | 0.0000000    | 0.4899918     | 0.0025077    | 0.0002432        | torch.Size([2, 512, 512])        |
| 685     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | input_0             | qint8         | 0.0158367 | -2.0271015   | 2.0112648     | 0.0235501    | 0.3729019        | torch.Size([512, 2, 512])        |
| 685     | torch.Tensor.transpose                                                      | head.layers.1.attn                                | output              | qint8         | 0.0158367 | -2.0271015   | 2.0112648     | 0.0235501    | 0.3729019        | torch.Size([2, 512, 512])        |
| 686     | torch.nn.modules.dropout.Dropout                                            | head.layers.1.dropout                             | input               | qint8         | 0.0158367 | -2.0271015   | 2.0112648     | 0.0235501    | 0.3729019        | torch.Size([2, 512, 512])        |
| 686     | torch.nn.modules.dropout.Dropout                                            | head.layers.1.dropout                             | output              | qint8         | 0.0158367 | -2.0271015   | 2.0112648     | 0.0235501    | 0.3729019        | torch.Size([2, 512, 512])        |
| 687     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.1.add                                 | input_0             | qint8         | 0.0790192 | -6.1634998   | 7.1907496     | 0.0210576    | 0.7271906        | torch.Size([2, 512, 512])        |
| 687     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.1.add                                 | input_1             | qint8         | 0.0158367 | -2.0271015   | 2.0112648     | 0.0235501    | 0.3729019        | torch.Size([2, 512, 512])        |
| 687     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.1.add                                 | output              | qint8         | 0.0813882 | -6.9180012   | 7.2435541     | 0.0448842    | 1.1766211        | torch.Size([2, 512, 512])        |
| 688     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(1)                                  | input               | qint8         | 0.0813882 | -6.9180012   | 7.2435541     | 0.0448842    | 1.1766211        | torch.Size([2, 512, 512])        |
| 688     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(1)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 688     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(1)                                  | output              | qint16        | 0.0015259 | -37.2360229  | 30.0781250    | 0.0272843    | 13.6765394       | torch.Size([2, 512, 256])        |
| 689     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.2.input_mean.mean                     | input_0             | qint16        | 0.0015259 | -37.2360229  | 30.0781250    | 0.0272843    | 13.6765394       | torch.Size([2, 512, 256])        |
| 689     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.2.input_mean.mean                     | output              | qint16        | 0.0000043 | -0.0758782   | 0.1035689     | 0.0272843    | 0.0023472        | torch.Size([2, 512, 1])          |
| 690     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.2.sub                                 | input_0             | qint16        | 0.0015259 | -37.2360229  | 30.0781250    | 0.0272843    | 13.6765394       | torch.Size([2, 512, 256])        |
| 690     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.2.sub                                 | input_1             | qint16        | 0.0000043 | -0.0758782   | 0.1035689     | 0.0272843    | 0.0023472        | torch.Size([2, 512, 1])          |
| 690     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.2.sub                                 | output              | qint16        | 0.0015280 | -37.2708893  | 30.0434647    | -0.0000046   | 13.6741972       | torch.Size([2, 512, 256])        |
| 691     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.mul                                 | input_0             | qint16        | 0.0015280 | -37.2708893  | 30.0434647    | -0.0000046   | 13.6741972       | torch.Size([2, 512, 256])        |
| 691     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.mul                                 | input_1             | qint16        | 0.0015280 | -37.2708893  | 30.0434647    | -0.0000046   | 13.6741972       | torch.Size([2, 512, 256])        |
| 691     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.mul                                 | output              | qint16        | 0.0765131 | 0.0000000    | 1389.0948486  | 13.6734800   | 4990.1689453     | torch.Size([2, 512, 256])        |
| 692     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.2.var_mean.mean                       | input_0             | qint16        | 0.0765131 | 0.0000000    | 1389.0948486  | 13.6734800   | 4990.1689453     | torch.Size([2, 512, 256])        |
| 692     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.2.var_mean.mean                       | output              | qint16        | 0.0011154 | 6.5807672    | 25.9951458    | 13.6735573   | 27.4442635       | torch.Size([2, 512, 1])          |
| 693     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.2.rsqrt                               | input               | qint16        | 0.0011154 | 6.5807672    | 25.9951458    | 13.6735573   | 27.4442635       | torch.Size([2, 512, 1])          |
| 693     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.2.rsqrt                               | output              | qint16        | 0.0000134 | 0.1961361    | 0.3898230     | 0.2887049    | 0.0041073        | torch.Size([2, 512, 1])          |
| 694     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.out_mul                             | input_0             | qint16        | 0.0015280 | -37.2708893  | 30.0434647    | -0.0000046   | 13.6741972       | torch.Size([2, 512, 256])        |
| 694     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.out_mul                             | input_1             | qint16        | 0.0000134 | 0.1961361    | 0.3898230     | 0.2887049    | 0.0041073        | torch.Size([2, 512, 1])          |
| 694     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.out_mul                             | output              | qint16        | 0.0002608 | -7.4162765   | 6.1311607     | -0.0000010   | 1.0000535        | torch.Size([2, 512, 256])        |
| 695     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.2.weight_quant                        | input               | torch.float32 |           | 0.7212925    | 1.0280097     | 0.8725660    | 0.0030677        | torch.Size([256])                |
| 695     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.2.weight_quant                        | output              | qint16        | 0.0000314 | 0.7212931    | 1.0279940     | 0.8725657    | 0.0030677        | torch.Size([256])                |
| 696     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.weight_mul                          | input_0             | qint16        | 0.0002608 | -7.4162765   | 6.1311607     | -0.0000010   | 1.0000535        | torch.Size([2, 512, 256])        |
| 696     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.weight_mul                          | input_1             | qint16        | 0.0000314 | 0.7212931    | 1.0279940     | 0.8725657    | 0.0030677        | torch.Size([256])                |
| 696     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.2.weight_mul                          | output              | qint16        | 0.0002362 | -6.7175660   | 5.2792854     | -0.0019681   | 0.8024848        | torch.Size([2, 512, 256])        |
| 697     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.2.bias_quant                          | input               | torch.float32 |           | -0.1147615   | 0.1351990     | 0.0041992    | 0.0017473        | torch.Size([256])                |
| 697     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.2.bias_quant                          | output              | qint16        | 0.0000041 | -0.1147608   | 0.1351969     | 0.0041991    | 0.0017473        | torch.Size([256])                |
| 698     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.2.bias_add                            | input_0             | qint16        | 0.0002362 | -6.7175660   | 5.2792854     | -0.0019681   | 0.8024848        | torch.Size([2, 512, 256])        |
| 698     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.2.bias_add                            | input_1             | qint16        | 0.0000041 | -0.1147608   | 0.1351969     | 0.0041991    | 0.0017473        | torch.Size([256])                |
| 698     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.2.bias_add                            | output              | qint8         | 0.0574845 | -6.7256856   | 5.1736045     | 0.0018863    | 0.7937144        | torch.Size([2, 512, 256])        |
| 699     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.kps_generator.offset                | input               | qint8         | 0.0574845 | -6.7256856   | 5.1736045     | 0.0018863    | 0.7937144        | torch.Size([2, 512, 256])        |
| 699     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.kps_generator.offset                | weight              | torch.float32 |           | -0.3113222   | 0.3088498     | -0.0000128   | 0.0058743        | torch.Size([24, 256])            |
| 699     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.kps_generator.offset                | bias                | torch.float32 |           | -0.1541595   | 0.0698048     | -0.0043113   | 0.0048043        | torch.Size([24])                 |
| 699     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.kps_generator.offset                | output              | qint16        | 0.0005328 | -17.4600067  | 9.9363470     | -1.0342131   | 15.7592010       | torch.Size([2, 512, 24])         |
| 700     | torch.Tensor.view                                                           | head.layers.3.kps_generator                       | input_0             | qint16        | 0.0005328 | -17.4600067  | 9.9363470     | -1.0342131   | 15.7592010       | torch.Size([2, 512, 24])         |
| 700     | torch.Tensor.view                                                           | head.layers.3.kps_generator                       | output              | qint16        | 0.0005328 | -17.4600067  | 9.9363470     | -1.0342131   | 15.7592010       | torch.Size([2, 512, 8, 3])       |
| 701     | torch.Tensor.__getitem__                                                    | head.layers.3.kps_generator                       | input_0             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.4784671    | 77.4393997       | torch.Size([2, 512, 11])         |
| 701     | torch.Tensor.__getitem__                                                    | head.layers.3.kps_generator                       | output              | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 1.0650992    | 283.1613770      | torch.Size([2, 512, 1, 3])       |
| 702     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.kps_generator.keypoints_add         | input_0             | qint16        | 0.0005328 | -17.4600067  | 9.9363470     | -1.0342131   | 15.7592010       | torch.Size([2, 512, 8, 3])       |
| 702     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.kps_generator.keypoints_add         | input_1             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 1.0650992    | 283.1613770      | torch.Size([2, 512, 1, 3])       |
| 702     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.kps_generator.keypoints_add         | output              | qint16        | 0.0018743 | -60.9810638  | 56.7114906    | 0.0309091    | 289.8973389      | torch.Size([2, 512, 8, 3])       |
| 703     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.weight_add                          | input_0             | qint8         | 0.0574845 | -6.7256856   | 5.1736045     | 0.0018863    | 0.7937144        | torch.Size([2, 512, 256])        |
| 703     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.weight_add                          | input_1             | qint8         | 0.0569265 | -1.2523835   | 7.2296681     | 0.0620206    | 0.8451077        | torch.Size([2, 512, 256])        |
| 703     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.weight_add                          | output              | qint8         | 0.0606616 | -7.0974116   | 7.7040277     | 0.0639319    | 1.5352830        | torch.Size([2, 512, 256])        |
| 704     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 704     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 705     | torch.Tensor.reshape                                                        | head.layers.3                                     | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 705     | torch.Tensor.reshape                                                        | head.layers.3                                     | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 706     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.0                    | input               | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 706     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.0                    | weight              | torch.float32 |           | -0.6545363   | 0.5989806     | -0.0019711   | 0.0136002        | torch.Size([256, 12])            |
| 706     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.0                    | bias                | torch.float32 |           | -0.3380467   | 0.3536568     | 0.0151805    | 0.0322619        | torch.Size([256])                |
| 706     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.0                    | output              | torch.float32 |           | -1.2586790   | 1.5891705     | 0.0168529    | 0.2732103        | torch.Size([2, 6, 256])          |
| 707     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.3.camera_encoder.1                    | input               | torch.float32 |           | -1.2586790   | 1.5891705     | 0.0168529    | 0.2732103        | torch.Size([2, 6, 256])          |
| 707     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.3.camera_encoder.1                    | output              | qint8         | 0.0124301 | 0.0000000    | 1.5786196     | 0.2262824    | 0.1162190        | torch.Size([2, 6, 256])          |
| 708     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.input_mean.mean    | input_0             | qint8         | 0.0124301 | 0.0000000    | 1.5786196     | 0.2262824    | 0.1162190        | torch.Size([2, 6, 256])          |
| 708     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.input_mean.mean    | output              | qint16        | 0.0000077 | 0.1656200    | 0.2484529     | 0.2262821    | 0.0008464        | torch.Size([2, 6, 1])            |
| 709     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.2.sub                | input_0             | qint8         | 0.0124301 | 0.0000000    | 1.5786196     | 0.2262824    | 0.1162190        | torch.Size([2, 6, 256])          |
| 709     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.2.sub                | input_1             | qint16        | 0.0000077 | 0.1656200    | 0.2484529     | 0.2262821    | 0.0008464        | torch.Size([2, 6, 1])            |
| 709     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.2.sub                | output              | qint16        | 0.0000416 | -0.2484573   | 1.3417528     | 0.0000016    | 0.1154424        | torch.Size([2, 6, 256])          |
| 710     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.mul                | input_0             | qint16        | 0.0000416 | -0.2484573   | 1.3417528     | 0.0000016    | 0.1154424        | torch.Size([2, 6, 256])          |
| 710     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.mul                | input_1             | qint16        | 0.0000416 | -0.2484573   | 1.3417528     | 0.0000016    | 0.1154424        | torch.Size([2, 6, 256])          |
| 710     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.mul                | output              | qint16        | 0.0000567 | 0.0000000    | 1.8002928     | 0.1154035    | 0.0440197        | torch.Size([2, 6, 256])          |
| 711     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.var_mean.mean      | input_0             | qint16        | 0.0000567 | 0.0000000    | 1.8002928     | 0.1154035    | 0.0440197        | torch.Size([2, 6, 256])          |
| 711     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.var_mean.mean      | output              | qint16        | 0.0000043 | 0.0605714    | 0.1415748     | 0.1154031    | 0.0007441        | torch.Size([2, 6, 1])            |
| 712     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.3.camera_encoder.2.rsqrt              | input               | qint16        | 0.0000043 | 0.0605714    | 0.1415748     | 0.1154031    | 0.0007441        | torch.Size([2, 6, 1])            |
| 712     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.3.camera_encoder.2.rsqrt              | output              | qint16        | 0.0001240 | 2.6575804    | 4.0627480     | 3.0244949    | 0.2348712        | torch.Size([2, 6, 1])            |
| 713     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.out_mul            | input_0             | qint16        | 0.0000416 | -0.2484573   | 1.3417528     | 0.0000016    | 0.1154424        | torch.Size([2, 6, 256])          |
| 713     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.out_mul            | input_1             | qint16        | 0.0001240 | 2.6575804    | 4.0627480     | 3.0244949    | 0.2348712        | torch.Size([2, 6, 1])            |
| 713     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.out_mul            | output              | qint16        | 0.0001187 | -0.6952933   | 3.8898199     | 0.0000082    | 1.0002146        | torch.Size([2, 6, 256])          |
| 714     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.2.weight_quant       | input               | torch.float32 |           | 0.8028511    | 1.1667448     | 0.9937689    | 0.0040703        | torch.Size([256])                |
| 714     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.2.weight_quant       | output              | qint16        | 0.0000356 | 0.8028614    | 1.1667269     | 0.9937678    | 0.0040704        | torch.Size([256])                |
| 715     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.weight_mul         | input_0             | qint16        | 0.0001187 | -0.6952933   | 3.8898199     | 0.0000082    | 1.0002146        | torch.Size([2, 6, 256])          |
| 715     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.weight_mul         | input_1             | qint16        | 0.0000356 | 0.8028614    | 1.1667269     | 0.9937678    | 0.0040704        | torch.Size([256])                |
| 715     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.weight_mul         | output              | qint16        | 0.0001247 | -0.7849978   | 4.0874019     | -0.0056641   | 1.0214350        | torch.Size([2, 6, 256])          |
| 716     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.2.bias_quant         | input               | torch.float32 |           | -0.1349080   | 0.1125814     | -0.0114335   | 0.0026644        | torch.Size([256])                |
| 716     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.2.bias_quant         | output              | qint16        | 0.0000041 | -0.1349100   | 0.1125828     | -0.0114335   | 0.0026644        | torch.Size([256])                |
| 717     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.2.bias_add           | input_0             | qint16        | 0.0001247 | -0.7849978   | 4.0874019     | -0.0056641   | 1.0214350        | torch.Size([2, 6, 256])          |
| 717     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.2.bias_add           | input_1             | qint16        | 0.0000041 | -0.1349100   | 0.1125828     | -0.0114335   | 0.0026644        | torch.Size([256])                |
| 717     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.2.bias_add           | output              | qint8         | 0.0323979 | -0.9071407   | 4.1145310     | -0.0170743   | 1.0788050        | torch.Size([2, 6, 256])          |
| 718     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.3                    | input               | qint8         | 0.0323979 | -0.9071407   | 4.1145310     | -0.0170743   | 1.0788050        | torch.Size([2, 6, 256])          |
| 718     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.3                    | weight              | torch.float32 |           | -0.4090023   | 0.4386477     | 0.0001596    | 0.0048304        | torch.Size([256, 256])           |
| 718     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.3                    | bias                | torch.float32 |           | -0.0807881   | 0.3063670     | -0.0007200   | 0.0023478        | torch.Size([256])                |
| 718     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.camera_encoder.3                    | output              | torch.float32 |           | -7.4587708   | 59.1969376    | -0.0802887   | 39.0520020       | torch.Size([2, 6, 256])          |
| 719     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.3.camera_encoder.4                    | input               | torch.float32 |           | -7.4587708   | 59.1969376    | -0.0802887   | 39.0520020       | torch.Size([2, 6, 256])          |
| 719     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.3.camera_encoder.4                    | output              | qint8         | 0.4642041 | 0.0000000    | 58.9539185    | 1.3394221    | 34.2752800       | torch.Size([2, 6, 256])          |
| 720     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.input_mean.mean    | input_0             | qint8         | 0.4642041 | 0.0000000    | 58.9539185    | 1.3394221    | 34.2752800       | torch.Size([2, 6, 256])          |
| 720     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.input_mean.mean    | output              | qint16        | 0.0000424 | 1.3092043    | 1.3903321     | 1.3392384    | 0.0007211        | torch.Size([2, 6, 1])            |
| 721     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.5.sub                | input_0             | qint8         | 0.4642041 | 0.0000000    | 58.9539185    | 1.3394221    | 34.2752800       | torch.Size([2, 6, 256])          |
| 721     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.5.sub                | input_1             | qint16        | 0.0000424 | 1.3092043    | 1.3903321     | 1.3392384    | 0.0007211        | torch.Size([2, 6, 1])            |
| 721     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.5.sub                | output              | qint16        | 0.0017638 | -1.3898827   | 57.6289825    | -0.0000081   | 34.2751503       | torch.Size([2, 6, 256])          |
| 722     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.mul                | input_0             | qint16        | 0.0017638 | -1.3898827   | 57.6289825    | -0.0000081   | 34.2751503       | torch.Size([2, 6, 256])          |
| 722     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.mul                | input_1             | qint16        | 0.0017638 | -1.3898827   | 57.6289825    | -0.0000081   | 34.2751503       | torch.Size([2, 6, 256])          |
| 722     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.mul                | output              | qint16        | 0.1019406 | 0.0000000    | 3321.1235352  | 34.2555351   | 76627.6328125    | torch.Size([2, 6, 256])          |
| 723     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.var_mean.mean      | input_0             | qint16        | 0.1019406 | 0.0000000    | 3321.1235352  | 34.2555351   | 76627.6328125    | torch.Size([2, 6, 256])          |
| 723     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.var_mean.mean      | output              | qint16        | 0.0010894 | 33.2550125   | 35.5208893    | 34.2555962   | 0.6568741        | torch.Size([2, 6, 1])            |
| 724     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.3.camera_encoder.5.rsqrt              | input               | qint16        | 0.0010894 | 33.2550125   | 35.5208893    | 34.2555962   | 0.6568741        | torch.Size([2, 6, 1])            |
| 724     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.3.camera_encoder.5.rsqrt              | output              | qint16        | 0.0000053 | 0.1677864    | 0.1734108     | 0.1708912    | 0.0000041        | torch.Size([2, 6, 1])            |
| 725     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.out_mul            | input_0             | qint16        | 0.0017638 | -1.3898827   | 57.6289825    | -0.0000081   | 34.2751503       | torch.Size([2, 6, 256])          |
| 725     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.out_mul            | input_1             | qint16        | 0.0000053 | 0.1677864    | 0.1734108     | 0.1708912    | 0.0000041        | torch.Size([2, 6, 1])            |
| 725     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.out_mul            | output              | qint16        | 0.0002998 | -0.2332709   | 9.7874842     | 0.0000140    | 1.0005777        | torch.Size([2, 6, 256])          |
| 726     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.5.weight_quant       | input               | torch.float32 |           | 0.5028567    | 1.4622601     | 0.8814595    | 0.0321079        | torch.Size([256])                |
| 726     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.5.weight_quant       | output              | qint16        | 0.0000446 | 0.5028381    | 1.4622378     | 0.8814595    | 0.0321078        | torch.Size([256])                |
| 727     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.weight_mul         | input_0             | qint16        | 0.0002998 | -0.2332709   | 9.7874842     | 0.0000140    | 1.0005777        | torch.Size([2, 6, 256])          |
| 727     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.weight_mul         | input_1             | qint16        | 0.0000446 | 0.5028381    | 1.4622378     | 0.8814595    | 0.0321078        | torch.Size([256])                |
| 727     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.weight_mul         | output              | qint16        | 0.0002287 | -0.3411613   | 7.4925146     | -0.0258643   | 0.5570999        | torch.Size([2, 6, 256])          |
| 728     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.5.bias_quant         | input               | torch.float32 |           | -0.5241177   | 0.5032777     | 0.0442741    | 0.0375308        | torch.Size([256])                |
| 728     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.camera_encoder.5.bias_quant         | output              | qint16        | 0.0000160 | -0.5241257   | 0.5032842     | 0.0442740    | 0.0375309        | torch.Size([256])                |
| 729     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.5.bias_add           | input_0             | qint16        | 0.0002287 | -0.3411613   | 7.4925146     | -0.0258643   | 0.5570999        | torch.Size([2, 6, 256])          |
| 729     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.5.bias_add           | input_1             | qint16        | 0.0000160 | -0.5241257   | 0.5032842     | 0.0442740    | 0.0375309        | torch.Size([256])                |
| 729     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.5.bias_add           | output              | qint8         | 0.0577063 | -0.8655946   | 7.3287005     | 0.0168686    | 0.5412211        | torch.Size([2, 6, 256])          |
| 730     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint8         | 0.0606616 | -7.0974116   | 7.7040277     | 0.0639319    | 1.5352830        | torch.Size([2, 512, 256])        |
| 730     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint8         | 0.0606616 | -7.0974116   | 7.7040277     | 0.0639319    | 1.5352830        | torch.Size([2, 512, 1, 256])     |
| 731     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint8         | 0.0577063 | -0.8655946   | 7.3287005     | 0.0168686    | 0.5412211        | torch.Size([2, 6, 256])          |
| 731     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint8         | 0.0577063 | -0.8655946   | 7.3287005     | 0.0168686    | 0.5412211        | torch.Size([2, 1, 6, 256])       |
| 732     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.cam_add                             | input_0             | qint8         | 0.0606616 | -7.0974116   | 7.7040277     | 0.0639319    | 1.5352830        | torch.Size([2, 512, 1, 256])     |
| 732     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.cam_add                             | input_1             | qint8         | 0.0577063 | -0.8655946   | 7.3287005     | 0.0168686    | 0.5412211        | torch.Size([2, 1, 6, 256])       |
| 732     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.3.cam_add                             | output              | qint8         | 0.0539613 | -5.1802807   | 6.8530798     | 0.0804883    | 1.1386442        | torch.Size([2, 512, 6, 256])     |
| 733     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.weights_fc                          | input               | qint8         | 0.0539613 | -5.1802807   | 6.8530798     | 0.0804883    | 1.1386442        | torch.Size([2, 512, 6, 256])     |
| 733     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.weights_fc                          | weight              | torch.float32 |           | -0.4302702   | 0.3039190     | -0.0007312   | 0.0026000        | torch.Size([64, 256])            |
| 733     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.weights_fc                          | bias                | torch.float32 |           | -0.0972505   | 0.0706504     | 0.0092854    | 0.0013785        | torch.Size([64])                 |
| 733     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.weights_fc                          | output              | qint8         | 0.0679543 | -6.7954345   | 6.6595259     | 0.3356152    | 5.5511985        | torch.Size([2, 512, 6, 64])      |
| 734     | torch.Tensor.reshape                                                        | head.layers.3                                     | input_0             | qint8         | 0.0679543 | -6.7954345   | 6.6595259     | 0.3356152    | 5.5511985        | torch.Size([2, 512, 6, 64])      |
| 734     | torch.Tensor.reshape                                                        | head.layers.3                                     | output              | qint8         | 0.0679543 | -6.7954345   | 6.6595259     | 0.3356152    | 5.5511985        | torch.Size([2, 512, 48, 8])      |
| 735     | torch.Tensor.max                                                            | head.layers.3.weight_softmax                      | input               | qint8         | 0.0679543 | -6.7954345   | 6.6595259     | 0.3356152    | 5.5511985        | torch.Size([2, 512, 48, 8])      |
| 735     | torch.Tensor.max                                                            | head.layers.3.weight_softmax                      | output_0            | qint8         | 0.0679543 | 1.6309043    | 6.6595259     | 3.7311597    | 0.9822362        | torch.Size([2, 512, 1, 8])       |
| 735     | torch.Tensor.max                                                            | head.layers.3.weight_softmax                      | output_1            | torch.int64   |           | 0.0000000    | 46.0000000    | 18.8739014   | 124.3568268      | torch.Size([2, 512, 1, 8])       |
| 736     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.weight_softmax.sub                  | input_0             | qint8         | 0.0679543 | -6.7954345   | 6.6595259     | 0.3356152    | 5.5511985        | torch.Size([2, 512, 48, 8])      |
| 736     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.weight_softmax.sub                  | input_1             | qint8         | 0.0679543 | 1.6309043    | 6.6595259     | 3.7311597    | 0.9822362        | torch.Size([2, 512, 1, 8])       |
| 736     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.3.weight_softmax.sub                  | output              | qint16        | 0.0004480 | -11.3481741  | 0.0000000     | -3.3955493   | 6.2307186        | torch.Size([2, 512, 48, 8])      |
| 737     | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.3.weight_softmax.exp                  | input               | qint16        | 0.0004480 | -11.3481741  | 0.0000000     | -3.3955493   | 6.2307186        | torch.Size([2, 512, 48, 8])      |
| 737     | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.3.weight_softmax.exp                  | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2209620    | 0.1022010        | torch.Size([2, 512, 48, 8])      |
| 738     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.3.weight_softmax.sum                  | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2209620    | 0.1022010        | torch.Size([2, 512, 48, 8])      |
| 738     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.3.weight_softmax.sum                  | output              | qint16        | 0.0009988 | 5.3646703    | 30.0495434    | 10.6061935   | 11.7634668       | torch.Size([2, 512, 1, 8])       |
| 739     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.3.weight_softmax.reciprocal           | input               | qint16        | 0.0009988 | 5.3646703    | 30.0495434    | 10.6061935   | 11.7634668       | torch.Size([2, 512, 1, 8])       |
| 739     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.3.weight_softmax.reciprocal           | output              | qint16        | 0.0000057 | 0.0332767    | 0.1855648     | 0.1034420    | 0.0009157        | torch.Size([2, 512, 1, 8])       |
| 740     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.weight_softmax.mul                  | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2209620    | 0.1022010        | torch.Size([2, 512, 48, 8])      |
| 740     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.weight_softmax.mul                  | input_1             | qint16        | 0.0000057 | 0.0332767    | 0.1855648     | 0.1034420    | 0.0009157        | torch.Size([2, 512, 1, 8])       |
| 740     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.weight_softmax.mul                  | output              | qint8         | 0.0013256 | 0.0000000    | 0.1683467     | 0.0207759    | 0.0010833        | torch.Size([2, 512, 48, 8])      |
| 741     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint16        | 0.0018743 | -60.9810638  | 56.7114906    | 0.0309091    | 289.8973389      | torch.Size([2, 512, 8, 3])       |
| 741     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint16        | 0.0018743 | -60.9810638  | 53.3134499    | -0.1089886   | 323.2515564      | torch.Size([2, 512, 8, 1])       |
| 742     | torch.ones_like                                                             | head.layers.3                                     | input               | qint16        | 0.0018743 | -60.9810638  | 53.3134499    | -0.1089886   | 323.2515564      | torch.Size([2, 512, 8, 1])       |
| 742     | torch.ones_like                                                             | head.layers.3                                     | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 743     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.point_quant_stub                    | input               | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 743     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.3.point_quant_stub                    | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 744     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.point_cat                           | input_0             | qint16        | 0.0018743 | -60.9810638  | 56.7114906    | 0.0309091    | 289.8973389      | torch.Size([2, 512, 8, 3])       |
| 744     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.point_cat                           | input_1             | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 744     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.point_cat                           | output              | qint16        | 0.0018311 | -60.0000000  | 56.7114258    | 0.2732204    | 217.5830231      | torch.Size([2, 512, 8, 4])       |
| 745     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 745     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 746     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint16        | 0.0018311 | -60.0000000  | 56.7114258    | 0.2732204    | 217.5830231      | torch.Size([2, 512, 8, 4])       |
| 746     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint16        | 0.0018311 | -60.0000000  | 56.7114258    | 0.2732204    | 217.5830231      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 747     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.point_matmul                        | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 747     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.point_matmul                        | input_1             | qint16        | 0.0018311 | -60.0000000  | 56.7114258    | 0.2732204    | 217.5830231      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 747     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.point_matmul                        | output              | qint16        | 0.0028207 | -92.4286957  | 84.6379242    | 0.2323625    | 97.8605652       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 748     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.3.point_sum                           | input               | qint16        | 0.0028207 | -92.4286957  | 84.6379242    | 0.2323625    | 97.8605652       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 748     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.3.point_sum                           | output              | qint16        | 0.0030133 | -96.1935577  | 91.6404648    | 0.9291637    | 386.0966492      | torch.Size([2, 6, 512, 8, 4])    |
| 749     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint16        | 0.0030133 | -96.1935577  | 91.6404648    | 0.9291637    | 386.0966492      | torch.Size([2, 6, 512, 8, 4])    |
| 749     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint16        | 0.0030133 | -64.8883896  | 63.3425713    | -0.5851461   | 428.6154175      | torch.Size([2, 6, 512, 8, 1])    |
| 750     | torch.clamp                                                                 | head.layers.3                                     | input               | qint16        | 0.0030133 | -64.8883896  | 63.3425713    | -0.5851461   | 428.6154175      | torch.Size([2, 6, 512, 8, 1])    |
| 750     | torch.clamp                                                                 | head.layers.3                                     | output              | qint16        | 0.0030133 | 0.0000000    | 63.3425713    | 7.4536333    | 150.0910797      | torch.Size([2, 6, 512, 8, 1])    |
| 751     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.3.reciprocal_op                       | input               | qint16        | 0.0030133 | 0.0000000    | 63.3425713    | 7.4536333    | 150.0910797      | torch.Size([2, 6, 512, 8, 1])    |
| 751     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.3.reciprocal_op                       | output              | qint16        | 0.0003357 | 0.0157776    | 10.9996643    | 6.0594511    | 28.7409420       | torch.Size([2, 6, 512, 8, 1])    |
| 752     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint16        | 0.0030133 | -96.1935577  | 91.6404648    | 0.9291637    | 386.0966492      | torch.Size([2, 6, 512, 8, 4])    |
| 752     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint16        | 0.0030133 | -96.1935577  | 91.6404648    | 1.6521995    | 556.2200317      | torch.Size([2, 6, 512, 8, 2])    |
| 753     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.point_mul                           | input_0             | qint16        | 0.0030133 | -96.1935577  | 91.6404648    | 1.6521995    | 556.2200317      | torch.Size([2, 6, 512, 8, 2])    |
| 753     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.point_mul                           | input_1             | qint16        | 0.0003357 | 0.0157776    | 10.9996643    | 6.0594511    | 28.7409420       | torch.Size([2, 6, 512, 8, 1])    |
| 753     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.point_mul                           | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2664815    | 0.8641997        | torch.Size([2, 6, 512, 8, 2])    |
| 754     | torch.Tensor.flatten                                                        | head.layers.3                                     | input               | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2664815    | 0.8641997        | torch.Size([2, 6, 512, 8, 2])    |
| 754     | torch.Tensor.flatten                                                        | head.layers.3                                     | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2664815    | 0.8641997        | torch.Size([12, 512, 8, 2])      |
| 755     | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.3                                     | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 755     | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.3                                     | input_1             | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2664815    | 0.8641997        | torch.Size([12, 512, 8, 2])      |
| 755     | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.3                                     | output              | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([12, 256, 512, 8])    |
| 756     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.feat_cat                            | input               | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([12, 256, 512, 8])    |
| 756     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.feat_cat                            | output              | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([12, 256, 512, 8])    |
| 757     | torch.Tensor.view                                                           | head.layers.3                                     | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([12, 256, 512, 8])    |
| 757     | torch.Tensor.view                                                           | head.layers.3                                     | output              | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([2, 6, 256, 512, 8])  |
| 758     | torch.Tensor.permute                                                        | head.layers.3                                     | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([2, 6, 256, 512, 8])  |
| 758     | torch.Tensor.permute                                                        | head.layers.3                                     | output              | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([2, 512, 6, 8, 256])  |
| 759     | torch.Tensor.contiguous                                                     | head.layers.3                                     | input               | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528500        | torch.Size([2, 512, 6, 8, 256])  |
| 759     | torch.Tensor.contiguous                                                     | head.layers.3                                     | output              | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528498        | torch.Size([2, 512, 6, 8, 256])  |
| 760     | torch.Tensor.view                                                           | head.layers.3                                     | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528498        | torch.Size([2, 512, 6, 8, 256])  |
| 760     | torch.Tensor.view                                                           | head.layers.3                                     | output              | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528498        | torch.Size([2, 512, 48, 256])    |
| 761     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | input_0             | qint8         | 0.0013256 | 0.0000000    | 0.1683467     | 0.0207759    | 0.0010833        | torch.Size([2, 512, 48, 8])      |
| 761     | torch.Tensor.__getitem__                                                    | head.layers.3                                     | output              | qint8         | 0.0013256 | 0.0000000    | 0.1683467     | 0.0207759    | 0.0010833        | torch.Size([2, 512, 48, 8, 1])   |
| 762     | torch.Tensor.reshape                                                        | head.layers.3                                     | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528498        | torch.Size([2, 512, 48, 256])    |
| 762     | torch.Tensor.reshape                                                        | head.layers.3                                     | output              | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528498        | torch.Size([2, 512, 48, 8, 32])  |
| 763     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.feat_mul                            | input_0             | qint8         | 0.0013256 | 0.0000000    | 0.1683467     | 0.0207759    | 0.0010833        | torch.Size([2, 512, 48, 8, 1])   |
| 763     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.feat_mul                            | input_1             | qint8         | 0.2235520 | -28.6146584  | 27.7204494    | 0.0342169    | 3.0528498        | torch.Size([2, 512, 48, 8, 32])  |
| 763     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.3.feat_mul                            | output              | qint8         | 0.0198471 | -2.5404303   | 2.5205832     | 0.0005424    | 0.0040218        | torch.Size([2, 512, 48, 8, 32])  |
| 764     | torch.Tensor.view                                                           | head.layers.3                                     | input_0             | qint8         | 0.0198471 | -2.5404303   | 2.5205832     | 0.0005424    | 0.0040218        | torch.Size([2, 512, 48, 8, 32])  |
| 764     | torch.Tensor.view                                                           | head.layers.3                                     | output              | qint8         | 0.0198471 | -2.5404303   | 2.5205832     | 0.0005424    | 0.0040218        | torch.Size([2, 512, 48, 256])    |
| 765     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.3.feat_sum                            | input               | qint8         | 0.0198471 | -2.5404303   | 2.5205832     | 0.0005424    | 0.0040218        | torch.Size([2, 512, 48, 256])    |
| 765     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.3.feat_sum                            | output              | qint8         | 0.0307460 | -3.9354827   | 3.9047368     | 0.0259988    | 0.3555349        | torch.Size([2, 512, 256])        |
| 766     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.output_proj                         | input               | qint8         | 0.0307460 | -3.9354827   | 3.9047368     | 0.0259988    | 0.3555349        | torch.Size([2, 512, 256])        |
| 766     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.output_proj                         | weight              | torch.float32 |           | -0.2840032   | 0.2785434     | -0.0005137   | 0.0057385        | torch.Size([256, 256])           |
| 766     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.output_proj                         | bias                | torch.float32 |           | -0.0963255   | 0.0840218     | -0.0024079   | 0.0011414        | torch.Size([256])                |
| 766     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.3.output_proj                         | output              | qint8         | 0.0432269 | -5.5330472   | 5.4898205     | 0.0302487    | 0.8101959        | torch.Size([2, 512, 256])        |
| 767     | torch.nn.modules.dropout.Dropout                                            | head.layers.3.proj_drop                           | input               | qint8         | 0.0432269 | -5.5330472   | 5.4898205     | 0.0302487    | 0.8101959        | torch.Size([2, 512, 256])        |
| 767     | torch.nn.modules.dropout.Dropout                                            | head.layers.3.proj_drop                           | output              | qint8         | 0.0432269 | -5.5330472   | 5.4898205     | 0.0302487    | 0.8101959        | torch.Size([2, 512, 256])        |
| 768     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.residual_op                         | input_0             | qint8         | 0.0432269 | -5.5330472   | 5.4898205     | 0.0302487    | 0.8101959        | torch.Size([2, 512, 256])        |
| 768     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.residual_op                         | input_1             | qint8         | 0.0574845 | -6.7256856   | 5.1736045     | 0.0018863    | 0.7937144        | torch.Size([2, 512, 256])        |
| 768     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.3.residual_op                         | output              | qint8         | 0.0568129 | -6.7039280   | 5.5108562     | 0.0158490    | 0.7996764        | torch.Size([2, 512, 512])        |
| 769     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.input_mean.mean            | input_0             | qint8         | 0.0568129 | -6.7039280   | 5.5108562     | 0.0158490    | 0.7996764        | torch.Size([2, 512, 512])        |
| 769     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.input_mean.mean            | output              | qint16        | 0.0000029 | -0.0691286   | 0.0941141     | 0.0158413    | 0.0005651        | torch.Size([2, 512, 1])          |
| 770     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.4.pre_norm.sub                        | input_0             | qint8         | 0.0568129 | -6.7039280   | 5.5108562     | 0.0158490    | 0.7996764        | torch.Size([2, 512, 512])        |
| 770     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.4.pre_norm.sub                        | input_1             | qint16        | 0.0000029 | -0.0691286   | 0.0941141     | 0.0158413    | 0.0005651        | torch.Size([2, 512, 1])          |
| 770     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.4.pre_norm.sub                        | output              | qint16        | 0.0002461 | -6.7600865   | 5.5382299     | 0.0000076    | 0.7991097        | torch.Size([2, 512, 512])        |
| 771     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.mul                        | input_0             | qint16        | 0.0002461 | -6.7600865   | 5.5382299     | 0.0000076    | 0.7991097        | torch.Size([2, 512, 512])        |
| 771     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.mul                        | input_1             | qint16        | 0.0002461 | -6.7600865   | 5.5382299     | 0.0000076    | 0.7991097        | torch.Size([2, 512, 512])        |
| 771     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.mul                        | output              | qint16        | 0.0019990 | 0.0000000    | 45.6985130    | 0.7990861    | 7.7654023        | torch.Size([2, 512, 512])        |
| 772     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.var_mean.mean              | input_0             | qint16        | 0.0019990 | 0.0000000    | 45.6985130    | 0.7990861    | 7.7654023        | torch.Size([2, 512, 512])        |
| 772     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.var_mean.mean              | output              | qint16        | 0.0000733 | 0.4206141    | 1.9686525     | 0.7990820    | 0.0768246        | torch.Size([2, 512, 1])          |
| 773     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.4.pre_norm.rsqrt                      | input               | qint16        | 0.0000733 | 0.4206141    | 1.9686525     | 0.7990820    | 0.0768246        | torch.Size([2, 512, 1])          |
| 773     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.4.pre_norm.rsqrt                      | output              | qint16        | 0.0000465 | 0.7127200    | 1.5252888     | 1.1741781    | 0.0491413        | torch.Size([2, 512, 1])          |
| 774     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.out_mul                    | input_0             | qint16        | 0.0002461 | -6.7600865   | 5.5382299     | 0.0000076    | 0.7991097        | torch.Size([2, 512, 512])        |
| 774     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.out_mul                    | input_1             | qint16        | 0.0000465 | 0.7127200    | 1.5252888     | 1.1741781    | 0.0491413        | torch.Size([2, 512, 1])          |
| 774     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.out_mul                    | output              | qint16        | 0.0003360 | -9.6383810   | 7.5469961     | 0.0000072    | 0.9973633        | torch.Size([2, 512, 512])        |
| 775     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.4.pre_norm.weight_quant               | input               | torch.float32 |           | 0.7392406    | 1.6099653     | 1.0357572    | 0.0463764        | torch.Size([512])                |
| 775     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.4.pre_norm.weight_quant               | output              | qint16        | 0.0000491 | 0.7392550    | 1.6099408     | 1.0357568    | 0.0463762        | torch.Size([512])                |
| 776     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.weight_mul                 | input_0             | qint16        | 0.0003360 | -9.6383810   | 7.5469961     | 0.0000072    | 0.9973633        | torch.Size([2, 512, 512])        |
| 776     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.weight_mul                 | input_1             | qint16        | 0.0000491 | 0.7392550    | 1.6099408     | 1.0357568    | 0.0463762        | torch.Size([512])                |
| 776     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.weight_mul                 | output              | qint16        | 0.0002515 | -7.2137475   | 6.1204810     | 0.0022044    | 0.8199776        | torch.Size([2, 512, 512])        |
| 777     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.4.pre_norm.bias_quant                 | input               | torch.float32 |           | -0.2265132   | 0.2360181     | -0.0012928   | 0.0045628        | torch.Size([512])                |
| 777     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.4.pre_norm.bias_quant                 | output              | qint16        | 0.0000072 | -0.2265140   | 0.2360145     | -0.0012928   | 0.0045628        | torch.Size([512])                |
| 778     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.4.pre_norm.bias_add                   | input_0             | qint16        | 0.0002515 | -7.2137475   | 6.1204810     | 0.0022044    | 0.8199776        | torch.Size([2, 512, 512])        |
| 778     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.4.pre_norm.bias_add                   | input_1             | qint16        | 0.0000072 | -0.2265140   | 0.2360145     | -0.0012928   | 0.0045628        | torch.Size([512])                |
| 778     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.4.pre_norm.bias_add                   | output              | qint8         | 0.0527374 | -6.7503867   | 5.9593258     | 0.0008785    | 0.8084084        | torch.Size([2, 512, 512])        |
| 779     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.0.0                          | input               | qint8         | 0.0527374 | -6.7503867   | 5.9593258     | 0.0008785    | 0.8084084        | torch.Size([2, 512, 512])        |
| 779     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.0.0                          | weight              | torch.float32 |           | -0.5703671   | 0.6200907     | -0.0004717   | 0.0053330        | torch.Size([1024, 512])          |
| 779     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.0.0                          | bias                | torch.float32 |           | -0.2541566   | 0.0612331     | -0.0505678   | 0.0011895        | torch.Size([1024])               |
| 779     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.0.0                          | output              | torch.float32 |           | -20.3686676  | 9.4154835     | -3.5231502   | 9.4678516        | torch.Size([2, 512, 1024])       |
| 780     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.4.activate                            | input               | torch.float32 |           | -20.3686676  | 9.4154835     | -3.5231502   | 9.4678516        | torch.Size([2, 512, 1024])       |
| 780     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.4.activate                            | output              | qint8         | 0.1015108 | 0.0000000    | 9.4405069     | 0.2003226    | 0.5463499        | torch.Size([2, 512, 1024])       |
| 781     | torch.nn.modules.dropout.Dropout                                            | head.layers.4.layers.0.2                          | input               | qint8         | 0.1015108 | 0.0000000    | 9.4405069     | 0.2003226    | 0.5463499        | torch.Size([2, 512, 1024])       |
| 781     | torch.nn.modules.dropout.Dropout                                            | head.layers.4.layers.0.2                          | output              | qint8         | 0.1015108 | 0.0000000    | 9.4405069     | 0.2003226    | 0.5463499        | torch.Size([2, 512, 1024])       |
| 782     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.1                            | input               | qint8         | 0.1015108 | 0.0000000    | 9.4405069     | 0.2003226    | 0.5463499        | torch.Size([2, 512, 1024])       |
| 782     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.1                            | weight              | torch.float32 |           | -0.5260783   | 0.6165652     | 0.0003563    | 0.0055565        | torch.Size([256, 1024])          |
| 782     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.1                            | bias                | torch.float32 |           | -0.1731907   | 0.1124924     | 0.0009047    | 0.0009486        | torch.Size([256])                |
| 782     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.layers.1                            | output              | qint8         | 0.2691682 | -16.1500931  | 14.8042526    | 0.1120490    | 10.3314514       | torch.Size([2, 512, 256])        |
| 783     | torch.nn.modules.dropout.Dropout                                            | head.layers.4.layers.2                            | input               | qint8         | 0.2691682 | -16.1500931  | 14.8042526    | 0.1120490    | 10.3314514       | torch.Size([2, 512, 256])        |
| 783     | torch.nn.modules.dropout.Dropout                                            | head.layers.4.layers.2                            | output              | qint8         | 0.2691682 | -16.1500931  | 14.8042526    | 0.1120490    | 10.3314514       | torch.Size([2, 512, 256])        |
| 784     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.identity_fc                         | input               | qint8         | 0.0527374 | -6.7503867   | 5.9593258     | 0.0008785    | 0.8084084        | torch.Size([2, 512, 512])        |
| 784     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.identity_fc                         | weight              | torch.float32 |           | -0.4295534   | 0.5292953     | 0.0001577    | 0.0064885        | torch.Size([256, 512])           |
| 784     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.identity_fc                         | bias                | torch.float32 |           | -0.2421585   | 0.1580013     | 0.0014464    | 0.0019690        | torch.Size([256])                |
| 784     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.4.identity_fc                         | output              | torch.float32 |           | -34.1556091  | 19.0467148    | 0.2444032    | 14.2809381       | torch.Size([2, 512, 256])        |
| 785     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.4.short_add                           | input_0             | torch.float32 |           | -34.1556091  | 19.0467148    | 0.2444032    | 14.2809381       | torch.Size([2, 512, 256])        |
| 785     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.4.short_add                           | input_1             | qint8         | 0.2691682 | -16.1500931  | 14.8042526    | 0.1120490    | 10.3314514       | torch.Size([2, 512, 256])        |
| 785     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.4.short_add                           | output              | qint8         | 0.4019631 | -39.7943420  | 26.5295620    | 0.3552904    | 32.7493782       | torch.Size([2, 512, 256])        |
| 786     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.5.input_mean.mean                     | input_0             | qint8         | 0.4019631 | -39.7943420  | 26.5295620    | 0.3552904    | 32.7493782       | torch.Size([2, 512, 256])        |
| 786     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.5.input_mean.mean                     | output              | qint16        | 0.0000359 | 0.1209141    | 0.5982499     | 0.3552883    | 0.0046631        | torch.Size([2, 512, 1])          |
| 787     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.5.sub                                 | input_0             | qint8         | 0.4019631 | -39.7943420  | 26.5295620    | 0.3552904    | 32.7493782       | torch.Size([2, 512, 256])        |
| 787     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.5.sub                                 | input_1             | qint16        | 0.0000359 | 0.1209141    | 0.5982499     | 0.3552883    | 0.0046631        | torch.Size([2, 512, 1])          |
| 787     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.5.sub                                 | output              | qint16        | 0.0022205 | -40.2176285  | 26.1174774    | 0.0000103    | 32.7448692       | torch.Size([2, 512, 256])        |
| 788     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.mul                                 | input_0             | qint16        | 0.0022205 | -40.2176285  | 26.1174774    | 0.0000103    | 32.7448692       | torch.Size([2, 512, 256])        |
| 788     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.mul                                 | input_1             | qint16        | 0.0022205 | -40.2176285  | 26.1174774    | 0.0000103    | 32.7448692       | torch.Size([2, 512, 256])        |
| 788     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.mul                                 | output              | qint16        | 0.1623887 | 0.0000000    | 1617.3917236  | 32.7432060   | 5411.9487305     | torch.Size([2, 512, 256])        |
| 789     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.5.var_mean.mean                       | input_0             | qint16        | 0.1623887 | 0.0000000    | 1617.3917236  | 32.7432060   | 5411.9487305     | torch.Size([2, 512, 256])        |
| 789     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.5.var_mean.mean                       | output              | qint16        | 0.0153172 | 10.7373590   | 61.7589607    | 32.7424355   | 273.0711670      | torch.Size([2, 512, 1])          |
| 790     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.5.rsqrt                               | input               | qint16        | 0.0153172 | 10.7373590   | 61.7589607    | 32.7424355   | 273.0711670      | torch.Size([2, 512, 1])          |
| 790     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.5.rsqrt                               | output              | qint16        | 0.0000095 | 0.1272462    | 0.3051810     | 0.1906578    | 0.0018752        | torch.Size([2, 512, 1])          |
| 791     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.out_mul                             | input_0             | qint16        | 0.0022205 | -40.2176285  | 26.1174774    | 0.0000103    | 32.7448692       | torch.Size([2, 512, 256])        |
| 791     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.out_mul                             | input_1             | qint16        | 0.0000095 | 0.1272462    | 0.3051810     | 0.1906578    | 0.0018752        | torch.Size([2, 512, 1])          |
| 791     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.out_mul                             | output              | qint16        | 0.0002562 | -7.9822760   | 4.7770190     | 0.0000011    | 1.0000564        | torch.Size([2, 512, 256])        |
| 792     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.5.weight_quant                        | input               | torch.float32 |           | 0.5714198    | 1.0232420     | 0.8086407    | 0.0070534        | torch.Size([256])                |
| 792     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.5.weight_quant                        | output              | qint16        | 0.0000312 | 0.5714292    | 1.0232264     | 0.8086408    | 0.0070532        | torch.Size([256])                |
| 793     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.weight_mul                          | input_0             | qint16        | 0.0002562 | -7.9822760   | 4.7770190     | 0.0000011    | 1.0000564        | torch.Size([2, 512, 256])        |
| 793     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.weight_mul                          | input_1             | qint16        | 0.0000312 | 0.5714292    | 1.0232264     | 0.8086408    | 0.0070532        | torch.Size([256])                |
| 793     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.5.weight_mul                          | output              | qint16        | 0.0001769 | -5.5121946   | 3.3677773     | 0.0071082    | 0.6458451        | torch.Size([2, 512, 256])        |
| 794     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.5.bias_quant                          | input               | torch.float32 |           | -0.2882900   | 0.3227517     | -0.0009565   | 0.0035641        | torch.Size([256])                |
| 794     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.5.bias_quant                          | output              | qint16        | 0.0000098 | -0.2882924   | 0.3227468     | -0.0009565   | 0.0035641        | torch.Size([256])                |
| 795     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.5.bias_add                            | input_0             | qint16        | 0.0001769 | -5.5121946   | 3.3677773     | 0.0071082    | 0.6458451        | torch.Size([2, 512, 256])        |
| 795     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.5.bias_add                            | input_1             | qint16        | 0.0000098 | -0.2882924   | 0.3227468     | -0.0009565   | 0.0035641        | torch.Size([256])                |
| 795     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.5.bias_add                            | output              | qint8         | 0.0377982 | -4.8381691   | 3.2506449     | 0.0063716    | 0.6154287        | torch.Size([2, 512, 256])        |
| 796     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.add1                                | input_0             | qint8         | 0.0377982 | -4.8381691   | 3.2506449     | 0.0063716    | 0.6154287        | torch.Size([2, 512, 256])        |
| 796     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.add1                                | input_1             | qint8         | 0.0569265 | -1.2523835   | 7.2296681     | 0.0620206    | 0.8451077        | torch.Size([2, 512, 256])        |
| 796     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.add1                                | output              | qint8         | 0.0554749 | -4.7153625   | 7.0453062     | 0.0681794    | 1.1691675        | torch.Size([2, 512, 256])        |
| 797     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.0                            | input               | qint8         | 0.0554749 | -4.7153625   | 7.0453062     | 0.0681794    | 1.1691675        | torch.Size([2, 512, 256])        |
| 797     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.0                            | weight              | torch.float32 |           | -0.8907645   | 0.6765569     | -0.0007754   | 0.0049254        | torch.Size([256, 256])           |
| 797     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.0                            | bias                | torch.float32 |           | -0.1592708   | 0.1005408     | -0.0223481   | 0.0024216        | torch.Size([256])                |
| 797     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.0                            | output              | torch.float32 |           | -10.1956425  | 9.4898376     | -0.7681980   | 5.0212331        | torch.Size([2, 512, 256])        |
| 798     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.1                            | input               | torch.float32 |           | -10.1956425  | 9.4898376     | -0.7681980   | 5.0212331        | torch.Size([2, 512, 256])        |
| 798     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.1                            | output              | qint8         | 0.0747598 | 0.0000000    | 9.4944887     | 0.5530868    | 1.1751910        | torch.Size([2, 512, 256])        |
| 799     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.2                            | input               | qint8         | 0.0747598 | 0.0000000    | 9.4944887     | 0.5530868    | 1.1751910        | torch.Size([2, 512, 256])        |
| 799     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.2                            | weight              | torch.float32 |           | -1.1173502   | 0.6858456     | -0.0046823   | 0.0057485        | torch.Size([256, 256])           |
| 799     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.2                            | bias                | torch.float32 |           | -0.1407867   | 0.1756395     | -0.0032350   | 0.0043332        | torch.Size([256])                |
| 799     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.2                            | output              | torch.float32 |           | -13.7618780  | 10.6948233    | -0.4937999   | 7.4081106        | torch.Size([2, 512, 256])        |
| 800     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.3                            | input               | torch.float32 |           | -13.7618780  | 10.6948233    | -0.4937999   | 7.4081106        | torch.Size([2, 512, 256])        |
| 800     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.3                            | output              | qint8         | 0.0833771 | 0.0000000    | 10.5888977    | 0.8301367    | 1.7763274        | torch.Size([2, 512, 256])        |
| 801     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.input_mean.mean            | input_0             | qint8         | 0.0833771 | 0.0000000    | 10.5888977    | 0.8301367    | 1.7763274        | torch.Size([2, 512, 256])        |
| 801     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.input_mean.mean            | output              | qint16        | 0.0000467 | 0.4341242    | 1.3819201     | 0.8301390    | 0.0326494        | torch.Size([2, 512, 1])          |
| 802     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.6.layers.4.sub                        | input_0             | qint8         | 0.0833771 | 0.0000000    | 10.5888977    | 0.8301367    | 1.7763274        | torch.Size([2, 512, 256])        |
| 802     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.6.layers.4.sub                        | input_1             | qint16        | 0.0000467 | 0.4341242    | 1.3819201     | 0.8301390    | 0.0326494        | torch.Size([2, 512, 1])          |
| 802     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.6.layers.4.sub                        | output              | qint16        | 0.0003764 | -1.3818501   | 9.4576368     | 0.0000129    | 1.7437184        | torch.Size([2, 512, 256])        |
| 803     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.mul                        | input_0             | qint16        | 0.0003764 | -1.3818501   | 9.4576368     | 0.0000129    | 1.7437184        | torch.Size([2, 512, 256])        |
| 803     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.mul                        | input_1             | qint16        | 0.0003764 | -1.3818501   | 9.4576368     | 0.0000129    | 1.7437184        | torch.Size([2, 512, 256])        |
| 803     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.mul                        | output              | qint16        | 0.0046432 | 0.0000000    | 89.4465942    | 1.7437912    | 16.0412941       | torch.Size([2, 512, 256])        |
| 804     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.var_mean.mean              | input_0             | qint16        | 0.0046432 | 0.0000000    | 89.4465942    | 1.7437912    | 16.0412941       | torch.Size([2, 512, 256])        |
| 804     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.var_mean.mean              | output              | qint16        | 0.0001595 | 0.4598035    | 4.9088249     | 1.7438002    | 0.3435951        | torch.Size([2, 512, 1])          |
| 805     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.6.layers.4.rsqrt                      | input               | qint16        | 0.0001595 | 0.4598035    | 4.9088249     | 1.7438002    | 0.3435951        | torch.Size([2, 512, 1])          |
| 805     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.6.layers.4.rsqrt                      | output              | qint16        | 0.0000464 | 0.4513643    | 1.4747230     | 0.7864341    | 0.0153892        | torch.Size([2, 512, 1])          |
| 806     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.out_mul                    | input_0             | qint16        | 0.0003764 | -1.3818501   | 9.4576368     | 0.0000129    | 1.7437184        | torch.Size([2, 512, 256])        |
| 806     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.out_mul                    | input_1             | qint16        | 0.0000464 | 0.4513643    | 1.4747230     | 0.7864341    | 0.0153892        | torch.Size([2, 512, 1])          |
| 806     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.out_mul                    | output              | qint16        | 0.0001986 | -0.7752832   | 5.8239574     | 0.0000053    | 0.9999323        | torch.Size([2, 512, 256])        |
| 807     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.4.weight_quant               | input               | torch.float32 |           | 0.7643027    | 1.2954148     | 0.9712850    | 0.0065330        | torch.Size([256])                |
| 807     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.4.weight_quant               | output              | qint16        | 0.0000395 | 0.7643017    | 1.2953950     | 0.9712849    | 0.0065329        | torch.Size([256])                |
| 808     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.weight_mul                 | input_0             | qint16        | 0.0001986 | -0.7752832   | 5.8239574     | 0.0000053    | 0.9999323        | torch.Size([2, 512, 256])        |
| 808     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.weight_mul                 | input_1             | qint16        | 0.0000395 | 0.7643017    | 1.2953950     | 0.9712849    | 0.0065329        | torch.Size([256])                |
| 808     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.weight_mul                 | output              | qint16        | 0.0002229 | -0.9623665   | 6.9594383     | 0.0109425    | 0.9817408        | torch.Size([2, 512, 256])        |
| 809     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.4.bias_quant                 | input               | torch.float32 |           | -0.0766388   | 0.2512619     | 0.0415314    | 0.0046091        | torch.Size([256])                |
| 809     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.4.bias_quant                 | output              | qint16        | 0.0000077 | -0.0766419   | 0.2512580     | 0.0415315    | 0.0046091        | torch.Size([256])                |
| 810     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.layers.4.bias_add                   | input_0             | qint16        | 0.0002229 | -0.9623665   | 6.9594383     | 0.0109425    | 0.9817408        | torch.Size([2, 512, 256])        |
| 810     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.layers.4.bias_add                   | input_1             | qint16        | 0.0000077 | -0.0766419   | 0.2512580     | 0.0415315    | 0.0046091        | torch.Size([256])                |
| 810     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.layers.4.bias_add                   | output              | qint8         | 0.0436988 | -0.9613727   | 5.5497427     | 0.0525751    | 0.9363214        | torch.Size([2, 512, 256])        |
| 811     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.5                            | input               | qint8         | 0.0436988 | -0.9613727   | 5.5497427     | 0.0525751    | 0.9363214        | torch.Size([2, 512, 256])        |
| 811     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.5                            | weight              | torch.float32 |           | -0.9964333   | 0.5091414     | 0.0013438    | 0.0046180        | torch.Size([256, 256])           |
| 811     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.5                            | bias                | torch.float32 |           | -0.1558311   | 0.1135808     | -0.0241591   | 0.0024907        | torch.Size([256])                |
| 811     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.5                            | output              | torch.float32 |           | -11.1097584  | 12.3183079    | -0.7789288   | 7.4834547        | torch.Size([2, 512, 256])        |
| 812     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.6                            | input               | torch.float32 |           | -11.1097584  | 12.3183079    | -0.7789288   | 7.4834547        | torch.Size([2, 512, 256])        |
| 812     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.6                            | output              | qint8         | 0.0785729 | 0.0000000    | 9.9787579     | 0.7725000    | 2.2612998        | torch.Size([2, 512, 256])        |
| 813     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.7                            | input               | qint8         | 0.0785729 | 0.0000000    | 9.9787579     | 0.7725000    | 2.2612998        | torch.Size([2, 512, 256])        |
| 813     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.7                            | weight              | torch.float32 |           | -1.0164918   | 0.5062547     | -0.0056709   | 0.0047400        | torch.Size([256, 256])           |
| 813     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.7                            | bias                | torch.float32 |           | -0.0927861   | 0.2361103     | -0.0030607   | 0.0021607        | torch.Size([256])                |
| 813     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.7                            | output              | torch.float32 |           | -31.9134102  | 50.4511452    | -1.5040553   | 29.1848431       | torch.Size([2, 512, 256])        |
| 814     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.8                            | input               | torch.float32 |           | -31.9134102  | 50.4511452    | -1.5040553   | 29.1848431       | torch.Size([2, 512, 256])        |
| 814     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.6.layers.8                            | output              | qint8         | 0.3750069 | 0.0000000    | 47.6258774    | 1.2400317    | 12.6481886       | torch.Size([2, 512, 256])        |
| 815     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.input_mean.mean            | input_0             | qint8         | 0.3750069 | 0.0000000    | 47.6258774    | 1.2400317    | 12.6481886       | torch.Size([2, 512, 256])        |
| 815     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.input_mean.mean            | output              | qint16        | 0.0000617 | 0.5917841    | 2.0127201     | 1.2400255    | 0.1839464        | torch.Size([2, 512, 1])          |
| 816     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.6.layers.9.sub                        | input_0             | qint8         | 0.3750069 | 0.0000000    | 47.6258774    | 1.2400317    | 12.6481886       | torch.Size([2, 512, 256])        |
| 816     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.6.layers.9.sub                        | input_1             | qint16        | 0.0000617 | 0.5917841    | 2.0127201     | 1.2400255    | 0.1839464        | torch.Size([2, 512, 1])          |
| 816     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.6.layers.9.sub                        | output              | qint16        | 0.0015031 | -2.0126157   | 46.0451584    | -0.0000801   | 12.4645500       | torch.Size([2, 512, 256])        |
| 817     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.mul                        | input_0             | qint16        | 0.0015031 | -2.0126157   | 46.0451584    | -0.0000801   | 12.4645500       | torch.Size([2, 512, 256])        |
| 817     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.mul                        | input_1             | qint16        | 0.0015031 | -2.0126157   | 46.0451584    | -0.0000801   | 12.4645500       | torch.Size([2, 512, 256])        |
| 817     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.mul                        | output              | qint16        | 0.0740407 | 0.0000000    | 2120.1552734  | 12.4689693   | 8426.7099609     | torch.Size([2, 512, 256])        |
| 818     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.var_mean.mean              | input_0             | qint16        | 0.0740407 | 0.0000000    | 2120.1552734  | 12.4689693   | 8426.7099609     | torch.Size([2, 512, 256])        |
| 818     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.var_mean.mean              | output              | qint16        | 0.0008905 | 2.4540939    | 27.4669743    | 12.4689503   | 30.0031376       | torch.Size([2, 512, 1])          |
| 819     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.6.layers.9.rsqrt                      | input               | qint16        | 0.0008905 | 2.4540939    | 27.4669743    | 12.4689503   | 30.0031376       | torch.Size([2, 512, 1])          |
| 819     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.6.layers.9.rsqrt                      | output              | qint16        | 0.0000450 | 0.1907982    | 0.6383485     | 0.3047782    | 0.0047675        | torch.Size([2, 512, 1])          |
| 820     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.out_mul                    | input_0             | qint16        | 0.0015031 | -2.0126157   | 46.0451584    | -0.0000801   | 12.4645500       | torch.Size([2, 512, 256])        |
| 820     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.out_mul                    | input_1             | qint16        | 0.0000450 | 0.1907982    | 0.6383485     | 0.3047782    | 0.0047675        | torch.Size([2, 512, 1])          |
| 820     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.out_mul                    | output              | qint16        | 0.0003373 | -0.6360810   | 10.6309166    | 0.0000021    | 0.9994317        | torch.Size([2, 512, 256])        |
| 821     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.9.weight_quant               | input               | torch.float32 |           | 0.7671473    | 1.2264483     | 0.9391562    | 0.0043200        | torch.Size([256])                |
| 821     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.9.weight_quant               | output              | qint16        | 0.0000374 | 0.7671407    | 1.2264296     | 0.9391561    | 0.0043201        | torch.Size([256])                |
| 822     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.weight_mul                 | input_0             | qint16        | 0.0003373 | -0.6360810   | 10.6309166    | 0.0000021    | 0.9994317        | torch.Size([2, 512, 256])        |
| 822     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.weight_mul                 | input_1             | qint16        | 0.0000374 | 0.7671407    | 1.2264296     | 0.9391561    | 0.0043201        | torch.Size([256])                |
| 822     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.weight_mul                 | output              | qint16        | 0.0003042 | -0.7799868   | 8.3897257     | -0.0045292   | 0.7473795        | torch.Size([2, 512, 256])        |
| 823     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.9.bias_quant                 | input               | torch.float32 |           | -0.1997112   | 0.1607553     | 0.0453104    | 0.0026038        | torch.Size([256])                |
| 823     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.9.bias_quant                 | output              | qint16        | 0.0000061 | -0.1997081   | 0.1607563     | 0.0453103    | 0.0026038        | torch.Size([256])                |
| 824     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.layers.9.bias_add                   | input_0             | qint16        | 0.0003042 | -0.7799868   | 8.3897257     | -0.0045292   | 0.7473795        | torch.Size([2, 512, 256])        |
| 824     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.layers.9.bias_add                   | input_1             | qint16        | 0.0000061 | -0.1997081   | 0.1607563     | 0.0453103    | 0.0026038        | torch.Size([256])                |
| 824     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.layers.9.bias_add                   | output              | qint8         | 0.0646966 | -0.7116623   | 8.2164650     | 0.0407174    | 0.7011866        | torch.Size([2, 512, 256])        |
| 825     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.10                           | input               | qint8         | 0.0646966 | -0.7116623   | 8.2164650     | 0.0407174    | 0.7011866        | torch.Size([2, 512, 256])        |
| 825     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.10                           | weight              | torch.float32 |           | -0.4182900   | 0.4529850     | 0.0011075    | 0.0032468        | torch.Size([11, 256])            |
| 825     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.10                           | bias                | torch.float32 |           | -0.0536531   | 0.0304303     | -0.0171225   | 0.0007017        | torch.Size([11])                 |
| 825     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.6.layers.10                           | output              | qint16        | 0.0005289 | -14.3190298  | 14.9282827    | -0.2837311   | 4.4266238        | torch.Size([2, 512, 11])         |
| 826     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.11.scale_quant_stub          | input               | torch.float32 |           | 0.1975845    | 1.0542313     | 0.5982738    | 0.1064052        | torch.Size([11])                 |
| 826     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.6.layers.11.scale_quant_stub          | output              | qint16        | 0.0000322 | 0.1975749    | 1.0542152     | 0.5982730    | 0.1064060        | torch.Size([11])                 |
| 827     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.11.mul                       | input_0             | qint16        | 0.0005289 | -14.3190298  | 14.9282827    | -0.2837311   | 4.4266238        | torch.Size([2, 512, 11])         |
| 827     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.11.mul                       | input_1             | qint16        | 0.0000322 | 0.1975749    | 1.0542152     | 0.5982730    | 0.1064060        | torch.Size([11])                 |
| 827     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.6.layers.11.mul                       | output              | qint16        | 0.0005575 | -15.0955667  | 15.0793982    | -0.2625251   | 4.3155446        | torch.Size([2, 512, 11])         |
| 828     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.add2                                | input_0             | qint16        | 0.0005575 | -15.0955667  | 15.0793982    | -0.2625251   | 4.3155446        | torch.Size([2, 512, 11])         |
| 828     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.add2                                | input_1             | qint16        | 0.0018311 | -52.9577637  | 52.8442383    | 0.4784671    | 77.4393997       | torch.Size([2, 512, 11])         |
| 828     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.6.add2                                | output              | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 829     | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant                                      | input               | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 829     | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant                                      | output              | torch.float32 |           | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 830     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 830     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.7071505    | 273.0369873      | torch.Size([2, 512, 3])          |
| 831     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(2)                   | input               | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.7071505    | 273.0369873      | torch.Size([2, 512, 3])          |
| 831     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(2)                   | weight              | torch.float32 |           | -0.9216561   | 0.9167990     | -0.0046354   | 0.1373587        | torch.Size([128, 3])             |
| 831     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(2)                   | bias                | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3650480        | torch.Size([128])                |
| 831     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(2)                   | output              | torch.float32 |           | -32.6966476  | 34.4734688    | -0.1038913   | 66.7401810       | torch.Size([2, 512, 128])        |
| 832     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(2)                   | input               | torch.float32 |           | -32.6966476  | 34.4734688    | -0.1038913   | 66.7401810       | torch.Size([2, 512, 128])        |
| 832     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(2)                   | output              | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.7792649    | 24.7203522       | torch.Size([2, 512, 128])        |
| 833     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(2)   | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.7792649    | 24.7203522       | torch.Size([2, 512, 128])        |
| 833     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(2)   | output              | qint16        | 0.0002498 | 0.2268012    | 7.3070946     | 2.7792747    | 3.9993007        | torch.Size([2, 512, 1])          |
| 834     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(2)               | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.7792649    | 24.7203522       | torch.Size([2, 512, 128])        |
| 834     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(2)               | input_1             | qint16        | 0.0002498 | 0.2268012    | 7.3070946     | 2.7792747    | 3.9993007        | torch.Size([2, 512, 1])          |
| 834     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(2)               | output              | qint16        | 0.0008924 | -7.3071895   | 27.5545654    | 0.0000068    | 20.7248383       | torch.Size([2, 512, 128])        |
| 835     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(2)               | input_0             | qint16        | 0.0008924 | -7.3071895   | 27.5545654    | 0.0000068    | 20.7248383       | torch.Size([2, 512, 128])        |
| 835     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(2)               | input_1             | qint16        | 0.0008924 | -7.3071895   | 27.5545654    | 0.0000068    | 20.7248383       | torch.Size([2, 512, 128])        |
| 835     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(2)               | output              | qint16        | 0.0261809 | 0.0000000    | 759.2446899   | 20.7244644   | 2424.7392578     | torch.Size([2, 512, 128])        |
| 836     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(2)     | input_0             | qint16        | 0.0261809 | 0.0000000    | 759.2446899   | 20.7244644   | 2424.7392578     | torch.Size([2, 512, 128])        |
| 836     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(2)     | output              | qint16        | 0.0029473 | 0.1002084    | 73.2641449    | 20.7248001   | 446.1040344      | torch.Size([2, 512, 1])          |
| 837     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(2)             | input               | qint16        | 0.0029473 | 0.1002084    | 73.2641449    | 20.7248001   | 446.1040344      | torch.Size([2, 512, 1])          |
| 837     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(2)             | output              | qint16        | 0.0000538 | 0.1168065    | 1.7621539     | 0.6494051    | 0.4503534        | torch.Size([2, 512, 1])          |
| 838     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(2)           | input_0             | qint16        | 0.0008924 | -7.3071895   | 27.5545654    | 0.0000068    | 20.7248383       | torch.Size([2, 512, 128])        |
| 838     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(2)           | input_1             | qint16        | 0.0000538 | 0.1168065    | 1.7621539     | 0.6494051    | 0.4503534        | torch.Size([2, 512, 1])          |
| 838     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(2)           | output              | qint16        | 0.0001192 | -0.8855181   | 3.9062698     | 0.0000068    | 0.8272152        | torch.Size([2, 512, 128])        |
| 839     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(2)      | input               | torch.float32 |           | 0.7278287    | 1.3287159     | 0.9627235    | 0.0086877        | torch.Size([128])                |
| 839     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(2)      | output              | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 840     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(2)        | input_0             | qint16        | 0.0001192 | -0.8855181   | 3.9062698     | 0.0000068    | 0.8272152        | torch.Size([2, 512, 128])        |
| 840     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(2)        | input_1             | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 840     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(2)        | output              | qint16        | 0.0001208 | -1.1598188   | 3.8095391     | -0.0014816   | 0.7639081        | torch.Size([2, 512, 128])        |
| 841     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(2)        | input               | torch.float32 |           | -0.0562531   | 0.0804052     | 0.0088204    | 0.0005294        | torch.Size([128])                |
| 841     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(2)        | output              | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 842     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(2)          | input_0             | qint16        | 0.0001208 | -1.1598188   | 3.8095391     | -0.0014816   | 0.7639081        | torch.Size([2, 512, 128])        |
| 842     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(2)          | input_1             | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 842     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(2)          | output              | qint8         | 0.0271288 | -1.1665392   | 3.4453597     | 0.0074954    | 0.7595036        | torch.Size([2, 512, 128])        |
| 843     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(2)                   | input               | qint8         | 0.0271288 | -1.1665392   | 3.4453597     | 0.0074954    | 0.7595036        | torch.Size([2, 512, 128])        |
| 843     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(2)                   | weight              | torch.float32 |           | -0.3750711   | 0.3968706     | 0.0019093    | 0.0048458        | torch.Size([128, 128])           |
| 843     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(2)                   | bias                | torch.float32 |           | -0.1863807   | 0.1385574     | -0.0156467   | 0.0047256        | torch.Size([128])                |
| 843     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(2)                   | output              | torch.float32 |           | -5.5228529   | 6.4116607     | -0.1124930   | 1.8409945        | torch.Size([2, 512, 128])        |
| 844     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(2)                   | input               | torch.float32 |           | -5.5228529   | 6.4116607     | -0.1124930   | 1.8409945        | torch.Size([2, 512, 128])        |
| 844     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(2)                   | output              | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.4958915    | 0.6350891        | torch.Size([2, 512, 128])        |
| 845     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(2)   | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.4958915    | 0.6350891        | torch.Size([2, 512, 128])        |
| 845     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(2)   | output              | qint16        | 0.0000298 | 0.2860329    | 0.8676232     | 0.4958945    | 0.0261442        | torch.Size([2, 512, 1])          |
| 846     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(2)               | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.4958915    | 0.6350891        | torch.Size([2, 512, 128])        |
| 846     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(2)               | input_1             | qint16        | 0.0000298 | 0.2860329    | 0.8676232     | 0.4958945    | 0.0261442        | torch.Size([2, 512, 1])          |
| 846     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(2)               | output              | qint16        | 0.0001641 | -0.8677015   | 5.1037979     | -0.0000046   | 0.6089740        | torch.Size([2, 512, 128])        |
| 847     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(2)               | input_0             | qint16        | 0.0001641 | -0.8677015   | 5.1037979     | -0.0000046   | 0.6089740        | torch.Size([2, 512, 128])        |
| 847     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(2)               | input_1             | qint16        | 0.0001641 | -0.8677015   | 5.1037979     | -0.0000046   | 0.6089740        | torch.Size([2, 512, 128])        |
| 847     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(2)               | output              | qint16        | 0.0008856 | 0.0000000    | 26.0483227    | 0.6089758    | 2.5570939        | torch.Size([2, 512, 128])        |
| 848     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(2)     | input_0             | qint16        | 0.0008856 | 0.0000000    | 26.0483227    | 0.6089758    | 2.5570939        | torch.Size([2, 512, 128])        |
| 848     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(2)     | output              | qint16        | 0.0000499 | 0.3048100    | 1.3232061     | 0.6089780    | 0.0482515        | torch.Size([2, 512, 1])          |
| 849     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(2)             | input               | qint16        | 0.0000499 | 0.3048100    | 1.3232061     | 0.6089780    | 0.0482515        | torch.Size([2, 512, 1])          |
| 849     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(2)             | output              | qint16        | 0.0000553 | 0.8693142    | 1.8112417     | 1.3439130    | 0.0559886        | torch.Size([2, 512, 1])          |
| 850     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(2)           | input_0             | qint16        | 0.0001641 | -0.8677015   | 5.1037979     | -0.0000046   | 0.6089740        | torch.Size([2, 512, 128])        |
| 850     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(2)           | input_1             | qint16        | 0.0000553 | 0.8693142    | 1.8112417     | 1.3439130    | 0.0559886        | torch.Size([2, 512, 1])          |
| 850     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(2)           | output              | qint16        | 0.0002164 | -0.7787826   | 7.0923762     | -0.0000098   | 0.9999832        | torch.Size([2, 512, 128])        |
| 851     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(2)      | input               | torch.float32 |           | 0.5925044    | 1.4726304     | 0.9182085    | 0.0175060        | torch.Size([128])                |
| 851     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(2)      | output              | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 852     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(2)        | input_0             | qint16        | 0.0002164 | -0.7787826   | 7.0923762     | -0.0000098   | 0.9999832        | torch.Size([2, 512, 128])        |
| 852     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(2)        | input_1             | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 852     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(2)        | output              | qint16        | 0.0002127 | -0.9111572   | 6.9689488     | 0.0380091    | 0.9670178        | torch.Size([2, 512, 128])        |
| 853     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(2)        | input               | torch.float32 |           | -0.0644210   | 0.2426097     | 0.0318023    | 0.0030999        | torch.Size([128])                |
| 853     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(2)        | output              | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 854     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(2)          | input_0             | qint16        | 0.0002127 | -0.9111572   | 6.9689488     | 0.0380091    | 0.9670178        | torch.Size([2, 512, 128])        |
| 854     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(2)          | input_1             | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 854     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(2)          | output              | qint8         | 0.0521229 | -0.8860894   | 6.6196094     | 0.0699821    | 0.9405245        | torch.Size([2, 512, 128])        |
| 855     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(2)                   | input               | qint8         | 0.0521229 | -0.8860894   | 6.6196094     | 0.0699821    | 0.9405245        | torch.Size([2, 512, 128])        |
| 855     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(2)                   | weight              | torch.float32 |           | -0.7504157   | 0.4182976     | -0.0024651   | 0.0052447        | torch.Size([128, 128])           |
| 855     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(2)                   | bias                | torch.float32 |           | -0.1397866   | 0.1210779     | 0.0064616    | 0.0040949        | torch.Size([128])                |
| 855     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(2)                   | output              | torch.float32 |           | -10.1855278  | 7.0454173     | -0.0317620   | 5.3797741        | torch.Size([2, 512, 128])        |
| 856     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(2)                   | input               | torch.float32 |           | -10.1855278  | 7.0454173     | -0.0317620   | 5.3797741        | torch.Size([2, 512, 128])        |
| 856     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(2)                   | output              | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8624906    | 1.6690315        | torch.Size([2, 512, 128])        |
| 857     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(2)   | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8624906    | 1.6690315        | torch.Size([2, 512, 128])        |
| 857     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(2)   | output              | qint16        | 0.0000319 | 0.5515732    | 1.0447656     | 0.7712023    | 0.0277385        | torch.Size([2, 512, 1])          |
| 858     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(2)               | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8624906    | 1.6690315        | torch.Size([2, 512, 128])        |
| 858     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(2)               | input_1             | qint16        | 0.0000319 | 0.5515732    | 1.0447656     | 0.7712023    | 0.0277385        | torch.Size([2, 512, 1])          |
| 858     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(2)               | output              | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0912907    | 1.5913665        | torch.Size([2, 512, 128])        |
| 859     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(2)               | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0912907    | 1.5913665        | torch.Size([2, 512, 128])        |
| 859     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(2)               | input_1             | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0912907    | 1.5913665        | torch.Size([2, 512, 128])        |
| 859     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(2)               | output              | qint16        | 0.0011151 | 0.0000000    | 31.5550842    | 1.5997137    | 11.8485804       | torch.Size([2, 512, 128])        |
| 860     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(2)     | input_0             | qint16        | 0.0011151 | 0.0000000    | 31.5550842    | 1.5997137    | 11.8485804       | torch.Size([2, 512, 128])        |
| 860     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(2)     | output              | qint16        | 0.0000656 | 0.8137763    | 2.1495371     | 1.3607347    | 0.2307115        | torch.Size([2, 512, 1])          |
| 861     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(2)             | input               | qint16        | 0.0000656 | 0.8137763    | 2.1495371     | 1.3607347    | 0.2307115        | torch.Size([2, 512, 1])          |
| 861     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(2)             | output              | qint16        | 0.0000338 | 0.6820595    | 1.1069363     | 0.8924045    | 0.0183959        | torch.Size([2, 512, 1])          |
| 862     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(2)           | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0912907    | 1.5913665        | torch.Size([2, 512, 128])        |
| 862     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(2)           | input_1             | qint16        | 0.0000338 | 0.6820595    | 1.1069363     | 0.8924045    | 0.0183959        | torch.Size([2, 512, 1])          |
| 862     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(2)           | output              | qint16        | 0.0001537 | -0.7484934   | 4.9783263     | 0.0622674    | 1.1072776        | torch.Size([2, 512, 128])        |
| 863     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(2)      | input               | torch.float32 |           | 0.7673740    | 1.1249810     | 0.9671495    | 0.0053221        | torch.Size([128])                |
| 863     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(2)      | output              | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 864     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(2)        | input_0             | qint16        | 0.0001537 | -0.7484934   | 4.9783263     | 0.0622674    | 1.1072776        | torch.Size([2, 512, 128])        |
| 864     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(2)        | input_1             | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 864     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(2)        | output              | qint16        | 0.0001601 | -0.8419933   | 5.1889834     | 0.0769084    | 1.1044856        | torch.Size([2, 512, 128])        |
| 865     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(2)        | input               | torch.float32 |           | -0.0537279   | 0.1594015     | 0.0216380    | 0.0014148        | torch.Size([128])                |
| 865     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(2)        | output              | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 866     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(2)          | input_0             | qint16        | 0.0001601 | -0.8419933   | 5.1889834     | 0.0769084    | 1.1044856        | torch.Size([2, 512, 128])        |
| 866     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(2)          | input_1             | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 866     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(2)          | output              | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0983867    | 1.0873291        | torch.Size([2, 512, 128])        |
| 867     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(2)                   | input               | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0983867    | 1.0873291        | torch.Size([2, 512, 128])        |
| 867     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(2)                   | weight              | torch.float32 |           | -0.4264432   | 0.3183554     | 0.0005866    | 0.0053991        | torch.Size([128, 128])           |
| 867     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(2)                   | bias                | torch.float32 |           | -0.1690418   | 0.1536980     | -0.0166056   | 0.0039884        | torch.Size([128])                |
| 867     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(2)                   | output              | torch.float32 |           | -11.9209080  | 10.6502199    | -0.4322613   | 5.0346475        | torch.Size([2, 512, 128])        |
| 868     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(2)                  | input               | torch.float32 |           | -11.9209080  | 10.6502199    | -0.4322613   | 5.0346475        | torch.Size([2, 512, 128])        |
| 868     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(2)                  | output              | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6584511    | 1.6575776        | torch.Size([2, 512, 128])        |
| 869     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(2)  | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6584511    | 1.6575776        | torch.Size([2, 512, 128])        |
| 869     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(2)  | output              | qint16        | 0.0000231 | 0.5170735    | 0.7555045     | 0.6478528    | 0.0055418        | torch.Size([2, 512, 1])          |
| 870     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(2)              | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6584511    | 1.6575776        | torch.Size([2, 512, 128])        |
| 870     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(2)              | input_1             | qint16        | 0.0000231 | 0.5170735    | 0.7555045     | 0.6478528    | 0.0055418        | torch.Size([2, 512, 1])          |
| 870     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(2)              | output              | qint16        | 0.0003154 | -0.7554005   | 9.9375372     | 0.0106137    | 1.6497328        | torch.Size([2, 512, 128])        |
| 871     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(2)              | input_0             | qint16        | 0.0003154 | -0.7554005   | 9.9375372     | 0.0106137    | 1.6497328        | torch.Size([2, 512, 128])        |
| 871     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(2)              | input_1             | qint16        | 0.0003154 | -0.7554005   | 9.9375372     | 0.0106137    | 1.6497328        | torch.Size([2, 512, 128])        |
| 871     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(2)              | output              | qint16        | 0.0032599 | 0.0000000    | 98.7539520    | 1.6498363    | 26.6991100       | torch.Size([2, 512, 128])        |
| 872     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(2)    | input_0             | qint16        | 0.0032599 | 0.0000000    | 98.7539520    | 1.6498363    | 26.6991100       | torch.Size([2, 512, 128])        |
| 872     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(2)    | output              | qint16        | 0.0000598 | 1.0410302    | 1.9535078     | 1.6498374    | 0.0255362        | torch.Size([2, 512, 1])          |
| 873     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(2)            | input               | qint16        | 0.0000598 | 1.0410302    | 1.9535078     | 1.6498374    | 0.0255362        | torch.Size([2, 512, 1])          |
| 873     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(2)            | output              | qint16        | 0.0000315 | 0.7154621    | 0.9800938     | 0.7815371    | 0.0016965        | torch.Size([2, 512, 1])          |
| 874     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(2)          | input_0             | qint16        | 0.0003154 | -0.7554005   | 9.9375372     | 0.0106137    | 1.6497328        | torch.Size([2, 512, 128])        |
| 874     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(2)          | input_1             | qint16        | 0.0000315 | 0.7154621    | 0.9800938     | 0.7815371    | 0.0016965        | torch.Size([2, 512, 1])          |
| 874     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(2)          | output              | qint16        | 0.0002431 | -0.6098735   | 7.7443480     | 0.0079997    | 0.9999420        | torch.Size([2, 512, 128])        |
| 875     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(2)     | input               | torch.float32 |           | 0.7088336    | 1.4002132     | 0.9292046    | 0.0145085        | torch.Size([128])                |
| 875     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(2)     | output              | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 876     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(2)       | input_0             | qint16        | 0.0002431 | -0.6098735   | 7.7443480     | 0.0079997    | 0.9999420        | torch.Size([2, 512, 128])        |
| 876     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(2)       | input_1             | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 876     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(2)       | output              | qint16        | 0.0002455 | -0.8409635   | 7.8228021     | 0.0173331    | 0.9028803        | torch.Size([2, 512, 128])        |
| 877     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(2)       | input               | torch.float32 |           | -0.0965041   | 0.2669707     | 0.0619903    | 0.0064956        | torch.Size([128])                |
| 877     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(2)       | output              | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 878     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(2)         | input_0             | qint16        | 0.0002455 | -0.8409635   | 7.8228021     | 0.0173331    | 0.9028803        | torch.Size([2, 512, 128])        |
| 878     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(2)         | input_1             | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 878     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(2)         | output              | qint8         | 0.0587279 | -0.8221908   | 7.4584455     | 0.0792172    | 0.8686116        | torch.Size([2, 512, 128])        |
| 879     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 879     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017927 | -0.7511498   | 2.3394997     | 0.3438037    | 0.3591243        | torch.Size([2, 512, 3])          |
| 880     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(2)                  | input               | qint16        | 0.0017927 | -0.7511498   | 2.3394997     | 0.3438037    | 0.3591243        | torch.Size([2, 512, 3])          |
| 880     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(2)                  | weight              | torch.float32 |           | -0.8288664   | 0.6362330     | 0.0683853    | 0.1118651        | torch.Size([32, 3])              |
| 880     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(2)                  | bias                | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1068659        | torch.Size([32])                 |
| 880     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(2)                  | output              | torch.float32 |           | -1.8500274   | 2.2684441     | 0.1346004    | 0.2381817        | torch.Size([2, 512, 32])         |
| 881     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(2)                  | input               | torch.float32 |           | -1.8500274   | 2.2684441     | 0.1346004    | 0.2381817        | torch.Size([2, 512, 32])         |
| 881     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(2)                  | output              | qint8         | 0.0194126 | 0.0000000    | 2.2712741     | 0.2700465    | 0.1032708        | torch.Size([2, 512, 32])         |
| 882     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(2)  | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.2712741     | 0.2700465    | 0.1032708        | torch.Size([2, 512, 32])         |
| 882     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(2)  | output              | qint16        | 0.0000252 | 0.1662328    | 0.6375789     | 0.2700439    | 0.0130366        | torch.Size([2, 512, 1])          |
| 883     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(2)              | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.2712741     | 0.2700465    | 0.1032708        | torch.Size([2, 512, 32])         |
| 883     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(2)              | input_1             | qint16        | 0.0000252 | 0.1662328    | 0.6375789     | 0.2700439    | 0.0130366        | torch.Size([2, 512, 1])          |
| 883     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(2)              | output              | qint16        | 0.0000639 | -0.6375625   | 1.6336938     | 0.0000027    | 0.0902468        | torch.Size([2, 512, 32])         |
| 884     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(2)              | input_0             | qint16        | 0.0000639 | -0.6375625   | 1.6336938     | 0.0000027    | 0.0902468        | torch.Size([2, 512, 32])         |
| 884     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(2)              | input_1             | qint16        | 0.0000639 | -0.6375625   | 1.6336938     | 0.0000027    | 0.0902468        | torch.Size([2, 512, 32])         |
| 884     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(2)              | output              | qint16        | 0.0001394 | 0.0000000    | 2.6690142     | 0.0902407    | 0.0271152        | torch.Size([2, 512, 32])         |
| 885     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(2)    | input_0             | qint16        | 0.0001394 | 0.0000000    | 2.6690142     | 0.0902407    | 0.0271152        | torch.Size([2, 512, 32])         |
| 885     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(2)    | output              | qint16        | 0.0000212 | 0.0319917    | 0.3730387     | 0.0902412    | 0.0046671        | torch.Size([2, 512, 1])          |
| 886     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(2)            | input               | qint16        | 0.0000212 | 0.0319917    | 0.3730387     | 0.0902412    | 0.0046671        | torch.Size([2, 512, 1])          |
| 886     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(2)            | output              | qint16        | 0.0001649 | 1.6372472    | 5.4031301     | 3.9466178    | 1.4241437        | torch.Size([2, 512, 1])          |
| 887     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(2)          | input_0             | qint16        | 0.0000639 | -0.6375625   | 1.6336938     | 0.0000027    | 0.0902468        | torch.Size([2, 512, 32])         |
| 887     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(2)          | input_1             | qint16        | 0.0001649 | 1.6372472    | 5.4031301     | 3.9466178    | 1.4241437        | torch.Size([2, 512, 1])          |
| 887     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(2)          | output              | qint16        | 0.0000919 | -1.1036454   | 3.0128427     | 0.0000032    | 0.9834701        | torch.Size([2, 512, 32])         |
| 888     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(2)     | input               | torch.float32 |           | 0.8401937    | 1.1936733     | 0.9969203    | 0.0071658        | torch.Size([32])                 |
| 888     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(2)     | output              | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 889     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(2)       | input_0             | qint16        | 0.0000919 | -1.1036454   | 3.0128427     | 0.0000032    | 0.9834701        | torch.Size([2, 512, 32])         |
| 889     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(2)       | input_1             | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 889     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(2)       | output              | qint16        | 0.0001022 | -1.3174068   | 3.2300847     | 0.0052106    | 0.9634316        | torch.Size([2, 512, 32])         |
| 890     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(2)       | input               | torch.float32 |           | -0.1003950   | 0.1085345     | 0.0035262    | 0.0030721        | torch.Size([32])                 |
| 890     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(2)       | output              | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 891     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(2)         | input_0             | qint16        | 0.0001022 | -1.3174068   | 3.2300847     | 0.0052106    | 0.9634316        | torch.Size([2, 512, 32])         |
| 891     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(2)         | input_1             | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 891     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(2)         | output              | qint8         | 0.0232598 | -1.3025488   | 2.9539945     | 0.0083469    | 0.9020107        | torch.Size([2, 512, 32])         |
| 892     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(2)                  | input               | qint8         | 0.0232598 | -1.3025488   | 2.9539945     | 0.0083469    | 0.9020107        | torch.Size([2, 512, 32])         |
| 892     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(2)                  | weight              | torch.float32 |           | -0.5793310   | 0.5422795     | -0.0032135   | 0.0176575        | torch.Size([32, 32])             |
| 892     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(2)                  | bias                | torch.float32 |           | -0.1716317   | 0.2230143     | 0.0007250    | 0.0126328        | torch.Size([32])                 |
| 892     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(2)                  | output              | torch.float32 |           | -3.9605961   | 2.1621277     | -0.1773218   | 1.3504920        | torch.Size([2, 512, 32])         |
| 893     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(2)                  | input               | torch.float32 |           | -3.9605961   | 2.1621277     | -0.1773218   | 1.3504920        | torch.Size([2, 512, 32])         |
| 893     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(2)                  | output              | qint8         | 0.0172935 | 0.0000000    | 2.1616912     | 0.3694075    | 0.2546529        | torch.Size([2, 512, 32])         |
| 894     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(2)  | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1616912     | 0.3694075    | 0.2546529        | torch.Size([2, 512, 32])         |
| 894     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(2)  | output              | qint16        | 0.0000141 | 0.3026406    | 0.4258592     | 0.3694061    | 0.0007263        | torch.Size([2, 512, 1])          |
| 895     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(2)              | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1616912     | 0.3694075    | 0.2546529        | torch.Size([2, 512, 32])         |
| 895     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(2)              | input_1             | qint16        | 0.0000141 | 0.3026406    | 0.4258592     | 0.3694061    | 0.0007263        | torch.Size([2, 512, 1])          |
| 895     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(2)              | output              | qint16        | 0.0000617 | -0.4258570   | 1.8131174     | 0.0000028    | 0.2539262        | torch.Size([2, 512, 32])         |
| 896     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(2)              | input_0             | qint16        | 0.0000617 | -0.4258570   | 1.8131174     | 0.0000028    | 0.2539262        | torch.Size([2, 512, 32])         |
| 896     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(2)              | input_1             | qint16        | 0.0000617 | -0.4258570   | 1.8131174     | 0.0000028    | 0.2539262        | torch.Size([2, 512, 32])         |
| 896     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(2)              | output              | qint16        | 0.0001252 | 0.0000000    | 3.2873805     | 0.2539182    | 0.1829432        | torch.Size([2, 512, 32])         |
| 897     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(2)    | input_0             | qint16        | 0.0001252 | 0.0000000    | 3.2873805     | 0.2539182    | 0.1829432        | torch.Size([2, 512, 32])         |
| 897     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(2)    | output              | qint16        | 0.0000132 | 0.1525149    | 0.3777487     | 0.2539183    | 0.0041020        | torch.Size([2, 512, 1])          |
| 898     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(2)            | input               | qint16        | 0.0000132 | 0.1525149    | 0.3777487     | 0.2539183    | 0.0041020        | torch.Size([2, 512, 1])          |
| 898     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(2)            | output              | qint16        | 0.0000777 | 1.6269813    | 2.5457854     | 2.0391088    | 0.0825211        | torch.Size([2, 512, 1])          |
| 899     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(2)          | input_0             | qint16        | 0.0000617 | -0.4258570   | 1.8131174     | 0.0000028    | 0.2539262        | torch.Size([2, 512, 32])         |
| 899     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(2)          | input_1             | qint16        | 0.0000777 | 1.6269813    | 2.5457854     | 2.0391088    | 0.0825211        | torch.Size([2, 512, 1])          |
| 899     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(2)          | output              | qint16        | 0.0001125 | -0.9134025   | 3.5233810     | 0.0000099    | 0.9999595        | torch.Size([2, 512, 32])         |
| 900     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(2)     | input               | torch.float32 |           | 0.8191299    | 1.0923718     | 0.9808199    | 0.0031231        | torch.Size([32])                 |
| 900     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(2)     | output              | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 901     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(2)       | input_0             | qint16        | 0.0001125 | -0.9134025   | 3.5233810     | 0.0000099    | 0.9999595        | torch.Size([2, 512, 32])         |
| 901     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(2)       | input_1             | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 901     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(2)       | output              | qint16        | 0.0001113 | -0.9215445   | 3.4992416     | 0.0121320    | 0.9991432        | torch.Size([2, 512, 32])         |
| 902     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(2)       | input               | torch.float32 |           | -0.0704119   | 0.0788569     | 0.0097621    | 0.0015200        | torch.Size([32])                 |
| 902     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(2)       | output              | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 903     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(2)         | input_0             | qint16        | 0.0001113 | -0.9215445   | 3.4992416     | 0.0121320    | 0.9991432        | torch.Size([2, 512, 32])         |
| 903     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(2)         | input_1             | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 903     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(2)         | output              | qint8         | 0.0262611 | -0.8928760   | 3.3351545     | 0.0220896    | 0.9664961        | torch.Size([2, 512, 32])         |
| 904     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(2)                  | input               | qint8         | 0.0262611 | -0.8928760   | 3.3351545     | 0.0220896    | 0.9664961        | torch.Size([2, 512, 32])         |
| 904     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(2)                  | weight              | torch.float32 |           | -0.5712157   | 0.5219681     | -0.0062917   | 0.0166056        | torch.Size([32, 32])             |
| 904     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(2)                  | bias                | torch.float32 |           | -0.1649730   | 0.2318604     | 0.0253026    | 0.0136139        | torch.Size([32])                 |
| 904     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(2)                  | output              | torch.float32 |           | -4.4555264   | 2.5130222     | -0.2132598   | 1.3889818        | torch.Size([2, 512, 32])         |
| 905     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(2)                  | input               | torch.float32 |           | -4.4555264   | 2.5130222     | -0.2132598   | 1.3889818        | torch.Size([2, 512, 32])         |
| 905     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(2)                  | output              | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3597914    | 0.2659967        | torch.Size([2, 512, 32])         |
| 906     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(2)  | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3597914    | 0.2659967        | torch.Size([2, 512, 32])         |
| 906     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(2)  | output              | qint16        | 0.0000154 | 0.1930979    | 0.4800988     | 0.3597915    | 0.0105325        | torch.Size([2, 512, 1])          |
| 907     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(2)              | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3597914    | 0.2659967        | torch.Size([2, 512, 32])         |
| 907     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(2)              | input_1             | qint16        | 0.0000154 | 0.1930979    | 0.4800988     | 0.3597915    | 0.0105325        | torch.Size([2, 512, 1])          |
| 907     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(2)              | output              | qint16        | 0.0000636 | -0.4801062   | 2.0113554     | -0.0000015   | 0.2554747        | torch.Size([2, 512, 32])         |
| 908     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(2)              | input_0             | qint16        | 0.0000636 | -0.4801062   | 2.0113554     | -0.0000015   | 0.2554747        | torch.Size([2, 512, 32])         |
| 908     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(2)              | input_1             | qint16        | 0.0000636 | -0.4801062   | 2.0113554     | -0.0000015   | 0.2554747        | torch.Size([2, 512, 32])         |
| 908     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(2)              | output              | qint16        | 0.0001333 | 0.0000000    | 4.0455327     | 0.2554706    | 0.2398561        | torch.Size([2, 512, 32])         |
| 909     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(2)    | input_0             | qint16        | 0.0001333 | 0.0000000    | 4.0455327     | 0.2554706    | 0.2398561        | torch.Size([2, 512, 32])         |
| 909     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(2)    | output              | qint16        | 0.0000116 | 0.1364302    | 0.3559060     | 0.2554700    | 0.0066333        | torch.Size([2, 512, 1])          |
| 910     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(2)            | input               | qint16        | 0.0000116 | 0.1364302    | 0.3559060     | 0.2554700    | 0.0066333        | torch.Size([2, 512, 1])          |
| 910     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(2)            | output              | qint16        | 0.0000821 | 1.6762339    | 2.6913540     | 2.0696149    | 0.1425816        | torch.Size([2, 512, 1])          |
| 911     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(2)          | input_0             | qint16        | 0.0000636 | -0.4801062   | 2.0113554     | -0.0000015   | 0.2554747        | torch.Size([2, 512, 32])         |
| 911     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(2)          | input_1             | qint16        | 0.0000821 | 1.6762339    | 2.6913540     | 2.0696149    | 0.1425816        | torch.Size([2, 512, 1])          |
| 911     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(2)          | output              | qint16        | 0.0001195 | -0.9489596   | 3.7952409     | -0.0000020   | 0.9999480        | torch.Size([2, 512, 32])         |
| 912     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(2)     | input               | torch.float32 |           | 0.8903234    | 1.1315480     | 0.9912031    | 0.0026835        | torch.Size([32])                 |
| 912     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(2)     | output              | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 913     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(2)       | input_0             | qint16        | 0.0001195 | -0.9489596   | 3.7952409     | -0.0000020   | 0.9999480        | torch.Size([2, 512, 32])         |
| 913     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(2)       | input_1             | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 913     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(2)       | output              | qint16        | 0.0001226 | -1.0737985   | 3.9105828     | 0.0050142    | 1.0200830        | torch.Size([2, 512, 32])         |
| 914     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(2)       | input               | torch.float32 |           | -0.0586081   | 0.0779655     | 0.0041962    | 0.0015323        | torch.Size([32])                 |
| 914     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(2)       | output              | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 915     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(2)         | input_0             | qint16        | 0.0001226 | -1.0737985   | 3.9105828     | 0.0050142    | 1.0200830        | torch.Size([2, 512, 32])         |
| 915     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(2)         | input_1             | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 915     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(2)         | output              | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0096302    | 0.9941162        | torch.Size([2, 512, 32])         |
| 916     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(2)                  | input               | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0096302    | 0.9941162        | torch.Size([2, 512, 32])         |
| 916     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(2)                  | weight              | torch.float32 |           | -0.3204980   | 0.3365203     | -0.0020388   | 0.0145364        | torch.Size([32, 32])             |
| 916     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(2)                  | bias                | torch.float32 |           | -0.1559148   | 0.2119379     | 0.0091616    | 0.0105488        | torch.Size([32])                 |
| 916     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(2)                  | output              | torch.float32 |           | -2.3448529   | 2.6775167     | 0.0209759    | 0.8297089        | torch.Size([2, 512, 32])         |
| 917     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(2)                 | input               | torch.float32 |           | -2.3448529   | 2.6775167     | 0.0209759    | 0.8297089        | torch.Size([2, 512, 32])         |
| 917     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(2)                 | output              | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3709765    | 0.3017158        | torch.Size([2, 512, 32])         |
| 918     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(2) | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3709765    | 0.3017158        | torch.Size([2, 512, 32])         |
| 918     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(2) | output              | qint16        | 0.0000157 | 0.2951415    | 0.5130996     | 0.3703727    | 0.0012373        | torch.Size([2, 512, 1])          |
| 919     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(2)             | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3709765    | 0.3017158        | torch.Size([2, 512, 32])         |
| 919     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(2)             | input_1             | qint16        | 0.0000157 | 0.2951415    | 0.5130996     | 0.3703727    | 0.0012373        | torch.Size([2, 512, 1])          |
| 919     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(2)             | output              | qint16        | 0.0000689 | -0.5131254   | 2.1947951     | 0.0006039    | 0.3003073        | torch.Size([2, 512, 32])         |
| 920     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(2)             | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1947951     | 0.0006039    | 0.3003073        | torch.Size([2, 512, 32])         |
| 920     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(2)             | input_1             | qint16        | 0.0000689 | -0.5131254   | 2.1947951     | 0.0006039    | 0.3003073        | torch.Size([2, 512, 32])         |
| 920     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(2)             | output              | qint16        | 0.0001557 | 0.0000000    | 4.8171782     | 0.3002929    | 0.4217512        | torch.Size([2, 512, 32])         |
| 921     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(2)   | input_0             | qint16        | 0.0001557 | 0.0000000    | 4.8171782     | 0.3002929    | 0.4217512        | torch.Size([2, 512, 32])         |
| 921     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(2)   | output              | qint16        | 0.0000123 | 0.1648482    | 0.3962158     | 0.3002927    | 0.0012787        | torch.Size([2, 512, 1])          |
| 922     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(2)           | input               | qint16        | 0.0000123 | 0.1648482    | 0.3962158     | 0.3002927    | 0.0012787        | torch.Size([2, 512, 1])          |
| 922     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(2)           | output              | qint16        | 0.0000803 | 1.5886739    | 2.4628823     | 1.8351134    | 0.0133932        | torch.Size([2, 512, 1])          |
| 923     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(2)         | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1947951     | 0.0006039    | 0.3003073        | torch.Size([2, 512, 32])         |
| 923     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(2)         | input_1             | qint16        | 0.0000803 | 1.5886739    | 2.4628823     | 1.8351134    | 0.0133932        | torch.Size([2, 512, 1])          |
| 923     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(2)         | output              | qint16        | 0.0001207 | -1.1734487   | 3.9522243     | 0.0011139    | 1.0000166        | torch.Size([2, 512, 32])         |
| 924     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(2)    | input               | torch.float32 |           | 0.8289159    | 1.6609058     | 1.2561316    | 0.0353652        | torch.Size([32])                 |
| 924     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(2)    | output              | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 925     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(2)      | input_0             | qint16        | 0.0001207 | -1.1734487   | 3.9522243     | 0.0011139    | 1.0000166        | torch.Size([2, 512, 32])         |
| 925     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(2)      | input_1             | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 925     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(2)      | output              | qint16        | 0.0001642 | -1.7785043   | 4.3598809     | -0.0388864   | 1.4050015        | torch.Size([2, 512, 32])         |
| 926     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(2)      | input               | torch.float32 |           | -0.1194881   | 0.2576658     | 0.0445686    | 0.0113612        | torch.Size([32])                 |
| 926     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(2)      | output              | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 927     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(2)        | input_0             | qint16        | 0.0001642 | -1.7785043   | 4.3598809     | -0.0388864   | 1.4050015        | torch.Size([2, 512, 32])         |
| 927     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(2)        | input_1             | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 927     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(2)        | output              | qint8         | 0.0385920 | -1.6980467   | 4.3223004     | 0.0060182    | 1.3111168        | torch.Size([2, 512, 32])         |
| 928     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 928     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017927 | -0.9931670   | 0.5127180     | -0.0233631   | 0.0265193        | torch.Size([2, 512, 2])          |
| 929     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(2)                   | input               | qint16        | 0.0017927 | -0.9931670   | 0.5127180     | -0.0233631   | 0.0265193        | torch.Size([2, 512, 2])          |
| 929     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(2)                   | weight              | torch.float32 |           | -0.7023237   | 0.7394427     | 0.0490668    | 0.1972211        | torch.Size([32, 2])              |
| 929     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(2)                   | bias                | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1641774        | torch.Size([32])                 |
| 929     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(2)                   | output              | torch.float32 |           | -1.5287185   | 1.0101972     | -0.1198189   | 0.1696421        | torch.Size([2, 512, 32])         |
| 930     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(2)                   | input               | torch.float32 |           | -1.5287185   | 1.0101972     | -0.1198189   | 0.1696421        | torch.Size([2, 512, 32])         |
| 930     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(2)                   | output              | qint8         | 0.0115854 | 0.0000000    | 1.0079331     | 0.1254503    | 0.0480557        | torch.Size([2, 512, 32])         |
| 931     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(2)   | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.0079331     | 0.1254503    | 0.0480557        | torch.Size([2, 512, 32])         |
| 931     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(2)   | output              | qint16        | 0.0000105 | 0.1129570    | 0.1665384     | 0.1254501    | 0.0000554        | torch.Size([2, 512, 1])          |
| 932     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(2)               | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.0079331     | 0.1254503    | 0.0480557        | torch.Size([2, 512, 32])         |
| 932     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(2)               | input_1             | qint16        | 0.0000105 | 0.1129570    | 0.1665384     | 0.1254501    | 0.0000554        | torch.Size([2, 512, 1])          |
| 932     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(2)               | output              | qint16        | 0.0000395 | -0.1665395   | 0.8428221     | -0.0000012   | 0.0480007        | torch.Size([2, 512, 32])         |
| 933     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(2)               | input_0             | qint16        | 0.0000395 | -0.1665395   | 0.8428221     | -0.0000012   | 0.0480007        | torch.Size([2, 512, 32])         |
| 933     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(2)               | input_1             | qint16        | 0.0000395 | -0.1665395   | 0.8428221     | -0.0000012   | 0.0480007        | torch.Size([2, 512, 32])         |
| 933     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(2)               | output              | qint16        | 0.0000524 | 0.0000000    | 0.7103443     | 0.0479982    | 0.0075688        | torch.Size([2, 512, 32])         |
| 934     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(2)     | input_0             | qint16        | 0.0000524 | 0.0000000    | 0.7103443     | 0.0479982    | 0.0075688        | torch.Size([2, 512, 32])         |
| 934     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(2)     | output              | qint16        | 0.0000071 | 0.0408663    | 0.0864803     | 0.0479982    | 0.0000512        | torch.Size([2, 512, 1])          |
| 935     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(2)             | input               | qint16        | 0.0000071 | 0.0408663    | 0.0864803     | 0.0479982    | 0.0000512        | torch.Size([2, 512, 1])          |
| 935     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(2)             | output              | qint16        | 0.0001514 | 3.4002531    | 4.9461665     | 4.5934973    | 0.0748574        | torch.Size([2, 512, 1])          |
| 936     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(2)           | input_0             | qint16        | 0.0000395 | -0.1665395   | 0.8428221     | -0.0000012   | 0.0480007        | torch.Size([2, 512, 32])         |
| 936     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(2)           | input_1             | qint16        | 0.0001514 | 3.4002531    | 4.9461665     | 4.5934973    | 0.0748574        | torch.Size([2, 512, 1])          |
| 936     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(2)           | output              | qint16        | 0.0001206 | -0.6455780   | 3.3939891     | -0.0000041   | 0.9998390        | torch.Size([2, 512, 32])         |
| 937     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(2)      | input               | torch.float32 |           | 0.8947600    | 1.1748335     | 0.9865216    | 0.0041537        | torch.Size([32])                 |
| 937     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(2)      | output              | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 938     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(2)        | input_0             | qint16        | 0.0001206 | -0.6455780   | 3.3939891     | -0.0000041   | 0.9998390        | torch.Size([2, 512, 32])         |
| 938     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(2)        | input_1             | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 938     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(2)        | output              | qint16        | 0.0001306 | -0.7584857   | 3.6751359     | 0.0037543    | 1.0134977        | torch.Size([2, 512, 32])         |
| 939     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(2)        | input               | torch.float32 |           | -0.0879948   | 0.1319895     | 0.0285039    | 0.0034159        | torch.Size([32])                 |
| 939     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(2)        | output              | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 940     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(2)          | input_0             | qint16        | 0.0001306 | -0.7584857   | 3.6751359     | 0.0037543    | 1.0134977        | torch.Size([2, 512, 32])         |
| 940     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(2)          | input_1             | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 940     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(2)          | output              | qint8         | 0.0302674 | -0.7264165   | 3.6018150     | 0.0318736    | 0.9298907        | torch.Size([2, 512, 32])         |
| 941     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(2)                   | input               | qint8         | 0.0302674 | -0.7264165   | 3.6018150     | 0.0318736    | 0.9298907        | torch.Size([2, 512, 32])         |
| 941     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(2)                   | weight              | torch.float32 |           | -1.0547366   | 0.5812716     | 0.0070099    | 0.0187704        | torch.Size([32, 32])             |
| 941     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(2)                   | bias                | torch.float32 |           | -0.2183180   | 0.1396109     | -0.0140744   | 0.0103446        | torch.Size([32])                 |
| 941     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(2)                   | output              | torch.float32 |           | -4.4460702   | 1.6359755     | -0.5643228   | 1.4874957        | torch.Size([2, 512, 32])         |
| 942     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(2)                   | input               | torch.float32 |           | -4.4460702   | 1.6359755     | -0.5643228   | 1.4874957        | torch.Size([2, 512, 32])         |
| 942     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(2)                   | output              | qint8         | 0.0142143 | 0.0000000    | 1.6346442     | 0.2248993    | 0.1214531        | torch.Size([2, 512, 32])         |
| 943     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(2)   | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6346442     | 0.2248993    | 0.1214531        | torch.Size([2, 512, 32])         |
| 943     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(2)   | output              | qint16        | 0.0000116 | 0.1914545    | 0.2452007     | 0.2248991    | 0.0000850        | torch.Size([2, 512, 1])          |
| 944     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(2)               | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6346442     | 0.2248993    | 0.1214531        | torch.Size([2, 512, 32])         |
| 944     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(2)               | input_1             | qint16        | 0.0000116 | 0.1914545    | 0.2452007     | 0.2248991    | 0.0000850        | torch.Size([2, 512, 1])          |
| 944     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(2)               | output              | qint16        | 0.0000516 | -0.2451896   | 1.3894249     | -0.0000002   | 0.1213690        | torch.Size([2, 512, 32])         |
| 945     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(2)               | input_0             | qint16        | 0.0000516 | -0.2451896   | 1.3894249     | -0.0000002   | 0.1213690        | torch.Size([2, 512, 32])         |
| 945     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(2)               | input_1             | qint16        | 0.0000516 | -0.2451896   | 1.3894249     | -0.0000002   | 0.1213690        | torch.Size([2, 512, 32])         |
| 945     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(2)               | output              | qint16        | 0.0000889 | 0.0000000    | 1.9304806     | 0.1213661    | 0.0494537        | torch.Size([2, 512, 32])         |
| 946     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(2)     | input_0             | qint16        | 0.0000889 | 0.0000000    | 1.9304806     | 0.1213661    | 0.0494537        | torch.Size([2, 512, 32])         |
| 946     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(2)     | output              | qint16        | 0.0000089 | 0.0952523    | 0.1540861     | 0.1213668    | 0.0000655        | torch.Size([2, 512, 1])          |
| 947     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(2)             | input               | qint16        | 0.0000089 | 0.0952523    | 0.1540861     | 0.1213668    | 0.0000655        | torch.Size([2, 512, 1])          |
| 947     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(2)             | output              | qint16        | 0.0001114 | 2.5474203    | 3.2399120     | 2.8750963    | 0.0091807        | torch.Size([2, 512, 1])          |
| 948     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(2)           | input_0             | qint16        | 0.0000516 | -0.2451896   | 1.3894249     | -0.0000002   | 0.1213690        | torch.Size([2, 512, 32])         |
| 948     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(2)           | input_1             | qint16        | 0.0001114 | 2.5474203    | 3.2399120     | 2.8750963    | 0.0091807        | torch.Size([2, 512, 1])          |
| 948     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(2)           | output              | qint16        | 0.0001083 | -0.6740232   | 3.5501876     | 0.0000007    | 0.9999206        | torch.Size([2, 512, 32])         |
| 949     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(2)      | input               | torch.float32 |           | 0.8550419    | 1.1198171     | 0.9805899    | 0.0036729        | torch.Size([32])                 |
| 949     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(2)      | output              | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 950     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(2)        | input_0             | qint16        | 0.0001083 | -0.6740232   | 3.5501876     | 0.0000007    | 0.9999206        | torch.Size([2, 512, 32])         |
| 950     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(2)        | input_1             | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 950     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(2)        | output              | qint16        | 0.0001106 | -0.7547538   | 3.6229506     | -0.0015882   | 0.9825902        | torch.Size([2, 512, 32])         |
| 951     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(2)        | input               | torch.float32 |           | -0.0792132   | 0.1045145     | 0.0242442    | 0.0021608        | torch.Size([32])                 |
| 951     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(2)        | output              | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 952     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(2)          | input_0             | qint16        | 0.0001106 | -0.7547538   | 3.6229506     | -0.0015882   | 0.9825902        | torch.Size([2, 512, 32])         |
| 952     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(2)          | input_1             | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 952     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(2)          | output              | qint8         | 0.0268612 | -0.7521123   | 3.4113667     | 0.0226625    | 0.9188876        | torch.Size([2, 512, 32])         |
| 953     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(2)                   | input               | qint8         | 0.0268612 | -0.7521123   | 3.4113667     | 0.0226625    | 0.9188876        | torch.Size([2, 512, 32])         |
| 953     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(2)                   | weight              | torch.float32 |           | -0.4480607   | 0.3678726     | 0.0004879    | 0.0160908        | torch.Size([32, 32])             |
| 953     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(2)                   | bias                | torch.float32 |           | -0.1861591   | 0.1739754     | 0.0155446    | 0.0137690        | torch.Size([32])                 |
| 953     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(2)                   | output              | torch.float32 |           | -3.6473067   | 1.8488679     | -0.3454899   | 1.6936289        | torch.Size([2, 512, 32])         |
| 954     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(2)                   | input               | torch.float32 |           | -3.6473067   | 1.8488679     | -0.3454899   | 1.6936289        | torch.Size([2, 512, 32])         |
| 954     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(2)                   | output              | qint8         | 0.0183966 | 0.0000000    | 1.8580562     | 0.3371942    | 0.1872847        | torch.Size([2, 512, 32])         |
| 955     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(2)   | input_0             | qint8         | 0.0183966 | 0.0000000    | 1.8580562     | 0.3371942    | 0.1872847        | torch.Size([2, 512, 32])         |
| 955     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(2)   | output              | qint16        | 0.0000156 | 0.2828411    | 0.3587307     | 0.3371950    | 0.0000745        | torch.Size([2, 512, 1])          |
| 956     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(2)               | input_0             | qint8         | 0.0183966 | 0.0000000    | 1.8580562     | 0.3371942    | 0.1872847        | torch.Size([2, 512, 32])         |
| 956     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(2)               | input_1             | qint16        | 0.0000156 | 0.2828411    | 0.3587307     | 0.3371950    | 0.0000745        | torch.Size([2, 512, 1])          |
| 956     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(2)               | output              | qint16        | 0.0000645 | -0.3587138   | 1.5481671     | 0.0000003    | 0.1872095        | torch.Size([2, 512, 32])         |
| 957     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(2)               | input_0             | qint16        | 0.0000645 | -0.3587138   | 1.5481671     | 0.0000003    | 0.1872095        | torch.Size([2, 512, 32])         |
| 957     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(2)               | input_1             | qint16        | 0.0000645 | -0.3587138   | 1.5481671     | 0.0000003    | 0.1872095        | torch.Size([2, 512, 32])         |
| 957     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(2)               | output              | qint16        | 0.0001365 | 0.0000000    | 2.3968720     | 0.1872026    | 0.0659086        | torch.Size([2, 512, 32])         |
| 958     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(2)     | input_0             | qint16        | 0.0001365 | 0.0000000    | 2.3968720     | 0.1872026    | 0.0659086        | torch.Size([2, 512, 32])         |
| 958     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(2)     | output              | qint16        | 0.0000123 | 0.1584801    | 0.2166585     | 0.1872033    | 0.0000485        | torch.Size([2, 512, 1])          |
| 959     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(2)             | input               | qint16        | 0.0000123 | 0.1584801    | 0.2166585     | 0.1872033    | 0.0000485        | torch.Size([2, 512, 1])          |
| 959     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(2)             | output              | qint16        | 0.0000749 | 2.1483150    | 2.4551423     | 2.3121595    | 0.0016650        | torch.Size([2, 512, 1])          |
| 960     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(2)           | input_0             | qint16        | 0.0000645 | -0.3587138   | 1.5481671     | 0.0000003    | 0.1872095        | torch.Size([2, 512, 32])         |
| 960     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(2)           | input_1             | qint16        | 0.0000749 | 2.1483150    | 2.4551423     | 2.3121595    | 0.0016650        | torch.Size([2, 512, 1])          |
| 960     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(2)           | output              | qint16        | 0.0001267 | -0.8559434   | 3.3372674     | 0.0000039    | 0.9998510        | torch.Size([2, 512, 32])         |
| 961     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(2)      | input               | torch.float32 |           | 0.8469434    | 1.1090456     | 0.9866461    | 0.0031007        | torch.Size([32])                 |
| 961     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(2)      | output              | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 962     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(2)        | input_0             | qint16        | 0.0001267 | -0.8559434   | 3.3372674     | 0.0000039    | 0.9998510        | torch.Size([2, 512, 32])         |
| 962     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(2)        | input_1             | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 962     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(2)        | output              | qint16        | 0.0001376 | -0.9492074   | 3.3870194     | -0.0059859   | 0.9918117        | torch.Size([2, 512, 32])         |
| 963     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(2)        | input               | torch.float32 |           | -0.0626723   | 0.0887763     | 0.0071697    | 0.0011301        | torch.Size([32])                 |
| 963     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(2)        | output              | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 964     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(2)          | input_0             | qint16        | 0.0001376 | -0.9492074   | 3.3870194     | -0.0059859   | 0.9918117        | torch.Size([2, 512, 32])         |
| 964     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(2)          | input_1             | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 964     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(2)          | output              | qint8         | 0.0326290 | -0.9462408   | 3.3607864     | 0.0014717    | 0.9627404        | torch.Size([2, 512, 32])         |
| 965     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(2)                   | input               | qint8         | 0.0326290 | -0.9462408   | 3.3607864     | 0.0014717    | 0.9627404        | torch.Size([2, 512, 32])         |
| 965     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(2)                   | weight              | torch.float32 |           | -0.5597425   | 0.7001730     | 0.0015679    | 0.0160348        | torch.Size([32, 32])             |
| 965     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(2)                   | bias                | torch.float32 |           | -0.1810580   | 0.1736723     | -0.0279047   | 0.0091159        | torch.Size([32])                 |
| 965     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(2)                   | output              | torch.float32 |           | -4.2996197   | 3.0667350     | -0.2493125   | 1.3232698        | torch.Size([2, 512, 32])         |
| 966     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(2)                  | input               | torch.float32 |           | -4.2996197   | 3.0667350     | -0.2493125   | 1.3232698        | torch.Size([2, 512, 32])         |
| 966     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(2)                  | output              | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2859282    | 0.3855304        | torch.Size([2, 512, 32])         |
| 967     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(2)  | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2859282    | 0.3855304        | torch.Size([2, 512, 32])         |
| 967     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(2)  | output              | qint16        | 0.0000121 | 0.2200791    | 0.3730376     | 0.2859278    | 0.0008452        | torch.Size([2, 512, 1])          |
| 968     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(2)              | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2859282    | 0.3855304        | torch.Size([2, 512, 32])         |
| 968     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(2)              | input_1             | qint16        | 0.0000121 | 0.2200791    | 0.3730376     | 0.2859278    | 0.0008452        | torch.Size([2, 512, 1])          |
| 968     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(2)              | output              | qint16        | 0.0000976 | -0.3730391   | 2.8057969     | -0.0000012   | 0.3846866        | torch.Size([2, 512, 32])         |
| 969     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(2)              | input_0             | qint16        | 0.0000976 | -0.3730391   | 2.8057969     | -0.0000012   | 0.3846866        | torch.Size([2, 512, 32])         |
| 969     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(2)              | input_1             | qint16        | 0.0000976 | -0.3730391   | 2.8057969     | -0.0000012   | 0.3846866        | torch.Size([2, 512, 32])         |
| 969     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(2)              | output              | qint16        | 0.0003122 | 0.0000000    | 7.8725953     | 0.3846771    | 1.4337027        | torch.Size([2, 512, 32])         |
| 970     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(2)    | input_0             | qint16        | 0.0003122 | 0.0000000    | 7.8725953     | 0.3846771    | 1.4337027        | torch.Size([2, 512, 32])         |
| 970     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(2)    | output              | qint16        | 0.0000136 | 0.2315263    | 0.4199012     | 0.3846767    | 0.0011523        | torch.Size([2, 512, 1])          |
| 971     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(2)            | input               | qint16        | 0.0000136 | 0.2315263    | 0.4199012     | 0.3846767    | 0.0011523        | torch.Size([2, 512, 1])          |
| 971     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(2)            | output              | qint16        | 0.0000802 | 1.5432148    | 2.0782003     | 1.6177766    | 0.0067121        | torch.Size([2, 512, 1])          |
| 972     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(2)          | input_0             | qint16        | 0.0000976 | -0.3730391   | 2.8057969     | -0.0000012   | 0.3846866        | torch.Size([2, 512, 32])         |
| 972     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(2)          | input_1             | qint16        | 0.0000802 | 1.5432148    | 2.0782003     | 1.6177766    | 0.0067121        | torch.Size([2, 512, 1])          |
| 972     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(2)          | output              | qint16        | 0.0001482 | -0.6975445   | 4.7900620     | 0.0000013    | 0.9999964        | torch.Size([2, 512, 32])         |
| 973     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(2)     | input               | torch.float32 |           | 0.8363900    | 1.4688344     | 1.0570920    | 0.0396277        | torch.Size([32])                 |
| 973     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(2)     | output              | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 974     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(2)       | input_0             | qint16        | 0.0001482 | -0.6975445   | 4.7900620     | 0.0000013    | 0.9999964        | torch.Size([2, 512, 32])         |
| 974     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(2)       | input_1             | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 974     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(2)       | output              | qint16        | 0.0001637 | -1.0245721   | 4.0201931     | -0.0716184   | 0.8201889        | torch.Size([2, 512, 32])         |
| 975     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(2)       | input               | torch.float32 |           | -0.1492936   | 0.2842544     | 0.0803791    | 0.0109446        | torch.Size([32])                 |
| 975     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(2)       | output              | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 976     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(2)         | input_0             | qint16        | 0.0001637 | -1.0245721   | 4.0201931     | -0.0716184   | 0.8201889        | torch.Size([2, 512, 32])         |
| 976     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(2)         | input_1             | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 976     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(2)         | output              | qint8         | 0.0373904 | -0.8599797   | 3.9633846     | 0.0089870    | 0.7156194        | torch.Size([2, 512, 32])         |
| 977     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 977     | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017927 | -2.4183795   | 0.4732781     | -0.2434236   | 0.4384635        | torch.Size([2, 512, 3])          |
| 978     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(2)                   | input               | qint16        | 0.0017927 | -2.4183795   | 0.4732781     | -0.2434236   | 0.4384635        | torch.Size([2, 512, 3])          |
| 978     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(2)                   | weight              | torch.float32 |           | -1.0475703   | 0.9848034     | -0.0054673   | 0.2080412        | torch.Size([64, 3])              |
| 978     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(2)                   | bias                | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1294928        | torch.Size([64])                 |
| 978     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(2)                   | output              | torch.float32 |           | -2.1006489   | 1.5779978     | -0.0844958   | 0.3068176        | torch.Size([2, 512, 64])         |
| 979     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(2)                   | input               | torch.float32 |           | -2.1006489   | 1.5779978     | -0.0844958   | 0.3068176        | torch.Size([2, 512, 64])         |
| 979     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(2)                   | output              | qint8         | 0.0729980 | 0.0000000    | 1.6059562     | 0.1729905    | 0.0678592        | torch.Size([2, 512, 64])         |
| 980     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(2)   | input_0             | qint8         | 0.0729980 | 0.0000000    | 1.6059562     | 0.1729905    | 0.0678592        | torch.Size([2, 512, 64])         |
| 980     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(2)   | output              | qint16        | 0.0000685 | 0.1208711    | 0.3056722     | 0.1729923    | 0.0054210        | torch.Size([2, 512, 1])          |
| 981     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(2)               | input_0             | qint8         | 0.0729980 | 0.0000000    | 1.6059562     | 0.1729905    | 0.0678592        | torch.Size([2, 512, 64])         |
| 981     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(2)               | input_1             | qint16        | 0.0000685 | 0.1208711    | 0.3056722     | 0.1729923    | 0.0054210        | torch.Size([2, 512, 1])          |
| 981     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(2)               | output              | qint16        | 0.0002902 | -0.3055897   | 1.3004248     | 0.0000106    | 0.0624384        | torch.Size([2, 512, 64])         |
| 982     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(2)               | input_0             | qint16        | 0.0002902 | -0.3055897   | 1.3004248     | 0.0000106    | 0.0624384        | torch.Size([2, 512, 64])         |
| 982     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(2)               | input_1             | qint16        | 0.0002902 | -0.3055897   | 1.3004248     | 0.0000106    | 0.0624384        | torch.Size([2, 512, 64])         |
| 982     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(2)               | output              | qint16        | 0.0029551 | 0.0000000    | 1.6903145     | 0.0625295    | 0.0258134        | torch.Size([2, 512, 64])         |
| 983     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(2)     | input_0             | qint16        | 0.0029551 | 0.0000000    | 1.6903145     | 0.0625295    | 0.0258134        | torch.Size([2, 512, 64])         |
| 983     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(2)     | output              | qint16        | 0.0003723 | 0.0245721    | 0.1619526     | 0.0625298    | 0.0029637        | torch.Size([2, 512, 1])          |
| 984     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(2)             | input               | qint16        | 0.0003723 | 0.0245721    | 0.1619526     | 0.0625298    | 0.0029637        | torch.Size([2, 512, 1])          |
| 984     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(2)             | output              | qint16        | 0.0001859 | 2.4847414    | 6.0927577     | 4.9020109    | 1.9835507        | torch.Size([2, 512, 1])          |
| 985     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(2)           | input_0             | qint16        | 0.0002902 | -0.3055897   | 1.3004248     | 0.0000106    | 0.0624384        | torch.Size([2, 512, 64])         |
| 985     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(2)           | input_1             | qint16        | 0.0001859 | 2.4847414    | 6.0927577     | 4.9020109    | 1.9835507        | torch.Size([2, 512, 1])          |
| 985     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(2)           | output              | qint16        | 0.0001160 | -0.8195412   | 3.2545106     | 0.0000417    | 0.9961491        | torch.Size([2, 512, 64])         |
| 986     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(2)      | input               | torch.float32 |           | 0.8691067    | 1.1281288     | 0.9794419    | 0.0036082        | torch.Size([64])                 |
| 986     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(2)      | output              | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 987     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(2)        | input_0             | qint16        | 0.0001160 | -0.8195412   | 3.2545106     | 0.0000417    | 0.9961491        | torch.Size([2, 512, 64])         |
| 987     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(2)        | input_1             | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 987     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(2)        | output              | qint16        | 0.0001189 | -0.8961522   | 3.1661904     | 0.0121723    | 0.9416845        | torch.Size([2, 512, 64])         |
| 988     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(2)        | input               | torch.float32 |           | -0.1133662   | 0.1493634     | 0.0304540    | 0.0046508        | torch.Size([64])                 |
| 988     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(2)        | output              | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 989     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(2)          | input_0             | qint16        | 0.0001189 | -0.8961522   | 3.1661904     | 0.0121723    | 0.9416845        | torch.Size([2, 512, 64])         |
| 989     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(2)          | input_1             | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 989     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(2)          | output              | qint8         | 0.0267452 | -0.8825915   | 3.1291883     | 0.0429117    | 0.8471662        | torch.Size([2, 512, 64])         |
| 990     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(2)                   | input               | qint8         | 0.0267452 | -0.8825915   | 3.1291883     | 0.0429117    | 0.8471662        | torch.Size([2, 512, 64])         |
| 990     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(2)                   | weight              | torch.float32 |           | -0.4523612   | 0.4813256     | -0.0014562   | 0.0096743        | torch.Size([64, 64])             |
| 990     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(2)                   | bias                | torch.float32 |           | -0.1183558   | 0.2243176     | 0.0150283    | 0.0049289        | torch.Size([64])                 |
| 990     | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(2)                   | output              | torch.float32 |           | -5.5199113   | 2.7083700     | -0.4363731   | 2.1863048        | torch.Size([2, 512, 64])         |
| 991     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(2)                   | input               | torch.float32 |           | -5.5199113   | 2.7083700     | -0.4363731   | 2.1863048        | torch.Size([2, 512, 64])         |
| 991     | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(2)                   | output              | qint8         | 0.0337689 | 0.0000000    | 2.7015116     | 0.3179272    | 0.2132572        | torch.Size([2, 512, 64])         |
| 992     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(2)   | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.7015116     | 0.3179272    | 0.2132572        | torch.Size([2, 512, 64])         |
| 992     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(2)   | output              | qint16        | 0.0000195 | 0.2047336    | 0.4490241     | 0.3179276    | 0.0053370        | torch.Size([2, 512, 1])          |
| 993     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(2)               | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.7015116     | 0.3179272    | 0.2132572        | torch.Size([2, 512, 64])         |
| 993     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(2)               | input_1             | qint16        | 0.0000195 | 0.2047336    | 0.4490241     | 0.3179276    | 0.0053370        | torch.Size([2, 512, 1])          |
| 993     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(2)               | output              | qint16        | 0.0001376 | -0.4490715   | 2.3232534     | -0.0000029   | 0.2079252        | torch.Size([2, 512, 64])         |
| 994     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(2)               | input_0             | qint16        | 0.0001376 | -0.4490715   | 2.3232534     | -0.0000029   | 0.2079252        | torch.Size([2, 512, 64])         |
| 994     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(2)               | input_1             | qint16        | 0.0001376 | -0.4490715   | 2.3232534     | -0.0000029   | 0.2079252        | torch.Size([2, 512, 64])         |
| 994     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(2)               | output              | qint16        | 0.0006236 | 0.0000000    | 5.3975811     | 0.2079105    | 0.2115387        | torch.Size([2, 512, 64])         |
| 995     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(2)     | input_0             | qint16        | 0.0006236 | 0.0000000    | 5.3975811     | 0.2079105    | 0.2115387        | torch.Size([2, 512, 64])         |
| 995     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(2)     | output              | qint16        | 0.0000322 | 0.0881651    | 0.3898301     | 0.2079110    | 0.0055059        | torch.Size([2, 512, 1])          |
| 996     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(2)             | input               | qint16        | 0.0000322 | 0.0881651    | 0.3898301     | 0.2079110    | 0.0055059        | torch.Size([2, 512, 1])          |
| 996     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(2)             | output              | qint16        | 0.0001060 | 1.6016248    | 3.3676631     | 2.3552818    | 0.3461614        | torch.Size([2, 512, 1])          |
| 997     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(2)           | input_0             | qint16        | 0.0001376 | -0.4490715   | 2.3232534     | -0.0000029   | 0.2079252        | torch.Size([2, 512, 64])         |
| 997     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(2)           | input_1             | qint16        | 0.0001060 | 1.6016248    | 3.3676631     | 2.3552818    | 0.3461614        | torch.Size([2, 512, 1])          |
| 997     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(2)           | output              | qint16        | 0.0001466 | -0.8656879   | 4.4375300     | -0.0000093   | 1.0000505        | torch.Size([2, 512, 64])         |
| 998     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(2)      | input               | torch.float32 |           | 0.8333027    | 1.1388558     | 0.9778216    | 0.0042186        | torch.Size([64])                 |
| 998     | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(2)      | output              | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 999     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(2)        | input_0             | qint16        | 0.0001466 | -0.8656879   | 4.4375300     | -0.0000093   | 1.0000505        | torch.Size([2, 512, 64])         |
| 999     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(2)        | input_1             | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 999     | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(2)        | output              | qint16        | 0.0001474 | -0.9260188   | 4.3128209     | 0.0040674    | 0.9897189        | torch.Size([2, 512, 64])         |
| 1000    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(2)        | input               | torch.float32 |           | -0.0757831   | 0.1161729     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1000    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(2)        | output              | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1001    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(2)          | input_0             | qint16        | 0.0001474 | -0.9260188   | 4.3128209     | 0.0040674    | 0.9897189        | torch.Size([2, 512, 64])         |
| 1001    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(2)          | input_1             | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1001    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(2)          | output              | qint8         | 0.0350382 | -0.9109923   | 4.2746563     | 0.0206146    | 0.9432058        | torch.Size([2, 512, 64])         |
| 1002    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(2)                   | input               | qint8         | 0.0350382 | -0.9109923   | 4.2746563     | 0.0206146    | 0.9432058        | torch.Size([2, 512, 64])         |
| 1002    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(2)                   | weight              | torch.float32 |           | -0.5707353   | 0.3620123     | -0.0010372   | 0.0088292        | torch.Size([64, 64])             |
| 1002    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(2)                   | bias                | torch.float32 |           | -0.1720246   | 0.1340137     | -0.0235144   | 0.0050507        | torch.Size([64])                 |
| 1002    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(2)                   | output              | torch.float32 |           | -5.4545336   | 3.7079690     | -0.3543807   | 2.2105508        | torch.Size([2, 512, 64])         |
| 1003    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(2)                   | input               | torch.float32 |           | -5.4545336   | 3.7079690     | -0.3543807   | 2.2105508        | torch.Size([2, 512, 64])         |
| 1003    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(2)                   | output              | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4475815    | 0.5206293        | torch.Size([2, 512, 64])         |
| 1004    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(2)   | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4475815    | 0.5206293        | torch.Size([2, 512, 64])         |
| 1004    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(2)   | output              | qint16        | 0.0000166 | 0.3489401    | 0.5202731     | 0.4475804    | 0.0035738        | torch.Size([2, 512, 1])          |
| 1005    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(2)               | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4475815    | 0.5206293        | torch.Size([2, 512, 64])         |
| 1005    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(2)               | input_1             | qint16        | 0.0000166 | 0.3489401    | 0.5202731     | 0.4475804    | 0.0035738        | torch.Size([2, 512, 1])          |
| 1005    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(2)               | output              | qint16        | 0.0000988 | -0.5202692   | 3.1881309     | -0.0000030   | 0.5170615        | torch.Size([2, 512, 64])         |
| 1006    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(2)               | input_0             | qint16        | 0.0000988 | -0.5202692   | 3.1881309     | -0.0000030   | 0.5170615        | torch.Size([2, 512, 64])         |
| 1006    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(2)               | input_1             | qint16        | 0.0000988 | -0.5202692   | 3.1881309     | -0.0000030   | 0.5170615        | torch.Size([2, 512, 64])         |
| 1006    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(2)               | output              | qint16        | 0.0003201 | 0.0000000    | 10.1640558    | 0.5170309    | 1.1643975        | torch.Size([2, 512, 64])         |
| 1007    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(2)     | input_0             | qint16        | 0.0003201 | 0.0000000    | 10.1640558    | 0.5170309    | 1.1643975        | torch.Size([2, 512, 64])         |
| 1007    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(2)     | output              | qint16        | 0.0000230 | 0.3257590    | 0.7222183     | 0.5170317    | 0.0137273        | torch.Size([2, 512, 1])          |
| 1008    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(2)             | input               | qint16        | 0.0000230 | 0.3257590    | 0.7222183     | 0.5170317    | 0.0137273        | torch.Size([2, 512, 1])          |
| 1008    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(2)             | output              | qint16        | 0.0000608 | 1.1766768    | 1.7520746     | 1.4241457    | 0.0373303        | torch.Size([2, 512, 1])          |
| 1009    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(2)           | input_0             | qint16        | 0.0000988 | -0.5202692   | 3.1881309     | -0.0000030   | 0.5170615        | torch.Size([2, 512, 64])         |
| 1009    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(2)           | input_1             | qint16        | 0.0000608 | 1.1766768    | 1.7520746     | 1.4241457    | 0.0373303        | torch.Size([2, 512, 1])          |
| 1009    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(2)           | output              | qint16        | 0.0001598 | -0.6831929   | 4.1627769     | -0.0000033   | 1.0000603        | torch.Size([2, 512, 64])         |
| 1010    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(2)      | input               | torch.float32 |           | 0.8006503    | 1.1495361     | 0.9818506    | 0.0032003        | torch.Size([64])                 |
| 1010    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(2)      | output              | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 1011    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(2)        | input_0             | qint16        | 0.0001598 | -0.6831929   | 4.1627769     | -0.0000033   | 1.0000603        | torch.Size([2, 512, 64])         |
| 1011    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(2)        | input_1             | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 1011    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(2)        | output              | qint16        | 0.0001633 | -0.7853432   | 4.3042030     | 0.0063147    | 1.0022354        | torch.Size([2, 512, 64])         |
| 1012    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(2)        | input               | torch.float32 |           | -0.0461140   | 0.1411197     | 0.0132828    | 0.0015701        | torch.Size([64])                 |
| 1012    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(2)        | output              | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 1013    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(2)          | input_0             | qint16        | 0.0001633 | -0.7853432   | 4.3042030     | 0.0063147    | 1.0022354        | torch.Size([2, 512, 64])         |
| 1013    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(2)          | input_1             | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 1013    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(2)          | output              | qint8         | 0.0387038 | -0.7740757   | 4.2961206     | 0.0198757    | 0.9823666        | torch.Size([2, 512, 64])         |
| 1014    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(2)                   | input               | qint8         | 0.0387038 | -0.7740757   | 4.2961206     | 0.0198757    | 0.9823666        | torch.Size([2, 512, 64])         |
| 1014    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(2)                   | weight              | torch.float32 |           | -0.5701389   | 0.3477888     | 0.0006721    | 0.0085883        | torch.Size([64, 64])             |
| 1014    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(2)                   | bias                | torch.float32 |           | -0.1677032   | 0.1709885     | -0.0237130   | 0.0070098        | torch.Size([64])                 |
| 1014    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(2)                   | output              | torch.float32 |           | -4.7525859   | 7.2159615     | -0.5075759   | 1.8125674        | torch.Size([2, 512, 64])         |
| 1015    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(2)                  | input               | torch.float32 |           | -4.7525859   | 7.2159615     | -0.5075759   | 1.8125674        | torch.Size([2, 512, 64])         |
| 1015    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(2)                  | output              | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2590360    | 0.6800066        | torch.Size([2, 512, 64])         |
| 1016    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(2)  | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2590360    | 0.6800066        | torch.Size([2, 512, 64])         |
| 1016    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(2)  | output              | qint16        | 0.0000138 | 0.2020663    | 0.3408770     | 0.2590380    | 0.0018238        | torch.Size([2, 512, 1])          |
| 1017    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(2)              | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2590360    | 0.6800066        | torch.Size([2, 512, 64])         |
| 1017    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(2)              | input_1             | qint16        | 0.0000138 | 0.2020663    | 0.3408770     | 0.2590380    | 0.0018238        | torch.Size([2, 512, 1])          |
| 1017    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(2)              | output              | qint16        | 0.0002137 | -0.3407953   | 6.9387641     | 0.0000107    | 0.6781748        | torch.Size([2, 512, 64])         |
| 1018    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(2)              | input_0             | qint16        | 0.0002137 | -0.3407953   | 6.9387641     | 0.0000107    | 0.6781748        | torch.Size([2, 512, 64])         |
| 1018    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(2)              | input_1             | qint16        | 0.0002137 | -0.3407953   | 6.9387641     | 0.0000107    | 0.6781748        | torch.Size([2, 512, 64])         |
| 1018    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(2)              | output              | qint16        | 0.0014959 | 0.0000000    | 48.1464005    | 0.6782380    | 19.7174072       | torch.Size([2, 512, 64])         |
| 1019    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(2)    | input_0             | qint16        | 0.0014959 | 0.0000000    | 48.1464005    | 0.6782380    | 19.7174072       | torch.Size([2, 512, 64])         |
| 1019    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(2)    | output              | qint16        | 0.0000253 | 0.4570146    | 0.8274851     | 0.6782387    | 0.0128832        | torch.Size([2, 512, 1])          |
| 1020    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(2)            | input               | qint16        | 0.0000253 | 0.4570146    | 0.8274851     | 0.6782387    | 0.0128832        | torch.Size([2, 512, 1])          |
| 1020    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(2)            | output              | qint16        | 0.0000680 | 1.0993282    | 1.4791950     | 1.2283041    | 0.0123800        | torch.Size([2, 512, 1])          |
| 1021    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(2)          | input_0             | qint16        | 0.0002137 | -0.3407953   | 6.9387641     | 0.0000107    | 0.6781748        | torch.Size([2, 512, 64])         |
| 1021    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(2)          | input_1             | qint16        | 0.0000680 | 1.0993282    | 1.4791950     | 1.2283041    | 0.0123800        | torch.Size([2, 512, 1])          |
| 1021    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(2)          | output              | qint16        | 0.0002366 | -0.4989291   | 7.7517352     | 0.0000049    | 0.9998443        | torch.Size([2, 512, 64])         |
| 1022    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(2)     | input               | torch.float32 |           | 0.7297163    | 1.2824999     | 1.0134131    | 0.0161719        | torch.Size([64])                 |
| 1022    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(2)     | output              | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 1023    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(2)       | input_0             | qint16        | 0.0002366 | -0.4989291   | 7.7517352     | 0.0000049    | 0.9998443        | torch.Size([2, 512, 64])         |
| 1023    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(2)       | input_1             | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 1023    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(2)       | output              | qint16        | 0.0001954 | -0.6399224   | 5.8507471     | -0.0316711   | 0.7125379        | torch.Size([2, 512, 64])         |
| 1024    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(2)       | input               | torch.float32 |           | -0.2385408   | 0.3192695     | 0.0900053    | 0.0129013        | torch.Size([64])                 |
| 1024    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(2)       | output              | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 1025    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(2)         | input_0             | qint16        | 0.0001954 | -0.6399224   | 5.8507471     | -0.0316711   | 0.7125379        | torch.Size([2, 512, 64])         |
| 1025    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(2)         | input_1             | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 1025    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(2)         | output              | qint8         | 0.0462055 | -0.6468776   | 5.7756929     | 0.0582462    | 0.6260098        | torch.Size([2, 512, 64])         |
| 1026    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_0             | qint8         | 0.0587279 | -0.8221908   | 7.4584455     | 0.0792172    | 0.8686116        | torch.Size([2, 512, 128])        |
| 1026    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_1             | qint8         | 0.0385920 | -1.6980467   | 4.3223004     | 0.0060182    | 1.3111168        | torch.Size([2, 512, 32])         |
| 1026    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_2             | qint8         | 0.0373904 | -0.8599797   | 3.9633846     | 0.0089870    | 0.7156194        | torch.Size([2, 512, 32])         |
| 1026    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_3             | qint8         | 0.0462055 | -0.6468776   | 5.7756929     | 0.0582462    | 0.6260098        | torch.Size([2, 512, 64])         |
| 1026    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | output              | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0593189    | 0.8416965        | torch.Size([2, 512, 256])        |
| 1027    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(2)                                 | input               | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 1027    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(2)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 1027    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(2)                                 | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 1028    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.7.query_cat                           | input_0             | qint8         | 0.0377982 | -4.8381691   | 3.2506449     | 0.0063716    | 0.6154287        | torch.Size([2, 512, 256])        |
| 1028    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.7.query_cat                           | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0593189    | 0.8416965        | torch.Size([2, 512, 256])        |
| 1028    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.7.query_cat                           | output              | qint8         | 0.0531841 | -4.8397570   | 6.7543864     | 0.0350001    | 0.7287509        | torch.Size([2, 512, 512])        |
| 1029    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.7.key_cat                             | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 1029    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.7.key_cat                             | input_1             | qint8         | 0.0569265 | -1.0246774   | 5.3510933     | 0.0736042    | 0.8488365        | torch.Size([2, 256, 256])        |
| 1029    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.7.key_cat                             | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 1030    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0531841 | -4.8397570   | 6.7543864     | 0.0350001    | 0.7287509        | torch.Size([2, 512, 512])        |
| 1030    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0531841 | -4.8397570   | 6.7543864     | 0.0350001    | 0.7287509        | torch.Size([512, 2, 512])        |
| 1031    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 1031    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1032    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 1032    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1033    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | input_0             | qint8         | 0.0531841 | -4.8397570   | 6.7543864     | 0.0350001    | 0.7287509        | torch.Size([512, 2, 512])        |
| 1033    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | output              | qint8         | 0.0531841 | -4.8397570   | 6.7543864     | 0.0350001    | 0.7287509        | torch.Size([512, 2, 512])        |
| 1034    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1034    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1035    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1035    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1036    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.q_proj                         | input               | qint8         | 0.0531841 | -4.8397570   | 6.7543864     | 0.0350001    | 0.7287509        | torch.Size([512, 2, 512])        |
| 1036    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.q_proj                         | weight              | torch.float32 |           | -0.2652678   | 0.2628567     | -0.0000400   | 0.0033250        | torch.Size([512, 512])           |
| 1036    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.q_proj                         | bias                | torch.float32 |           | -0.1143946   | 0.1122871     | 0.0018811    | 0.0010206        | torch.Size([512])                |
| 1036    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.q_proj                         | output              | qint8         | 0.0979568 | -12.5384760  | 12.4405193    | 0.0904596    | 10.9056482       | torch.Size([512, 2, 512])        |
| 1037    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.k_proj                         | input               | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1037    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.k_proj                         | weight              | torch.float32 |           | -0.2754143   | 0.2652588     | 0.0001362    | 0.0034943        | torch.Size([512, 512])           |
| 1037    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.k_proj                         | bias                | torch.float32 |           | -0.0046830   | 0.0034708     | 0.0000401    | 0.0000013        | torch.Size([512])                |
| 1037    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.k_proj                         | output              | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([256, 2, 512])        |
| 1038    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.v_proj                         | input               | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1038    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.v_proj                         | weight              | torch.float32 |           | -0.1505703   | 0.1412487     | 0.0000714    | 0.0010024        | torch.Size([512, 512])           |
| 1038    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.v_proj                         | bias                | torch.float32 |           | -0.0650689   | 0.0530252     | 0.0005504    | 0.0003445        | torch.Size([512])                |
| 1038    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.v_proj                         | output              | qint8         | 0.0063235 | -0.0632350   | 0.0505880     | 0.0004570    | 0.0003441        | torch.Size([256, 2, 512])        |
| 1039    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | input_0             | qint8         | 0.0979568 | -12.5384760  | 12.4405193    | 0.0904596    | 10.9056482       | torch.Size([512, 2, 512])        |
| 1039    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | output              | qint8         | 0.0979568 | -12.5384760  | 12.4405193    | 0.0904596    | 10.9056482       | torch.Size([512, 16, 64])        |
| 1040    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0979568 | -12.5384760  | 12.4405193    | 0.0904596    | 10.9056482       | torch.Size([512, 16, 64])        |
| 1040    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0979568 | -12.5384760  | 12.4405193    | 0.0904596    | 10.9056482       | torch.Size([16, 512, 64])        |
| 1041    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | input_0             | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([256, 2, 512])        |
| 1041    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | output              | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([256, 16, 64])        |
| 1042    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([256, 16, 64])        |
| 1042    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([16, 256, 64])        |
| 1043    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | input_0             | qint8         | 0.0063235 | -0.0632350   | 0.0505880     | 0.0004570    | 0.0003441        | torch.Size([256, 2, 512])        |
| 1043    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | output              | qint8         | 0.0063235 | -0.0632350   | 0.0505880     | 0.0004570    | 0.0003441        | torch.Size([256, 16, 64])        |
| 1044    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0063235 | -0.0632350   | 0.0505880     | 0.0004570    | 0.0003441        | torch.Size([256, 16, 64])        |
| 1044    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0063235 | -0.0632350   | 0.0505880     | 0.0004570    | 0.0003441        | torch.Size([16, 256, 64])        |
| 1045    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.7.attn.q_scale_mul                    | input_0             | qint8         | 0.0979568 | -12.5384760  | 12.4405193    | 0.0904596    | 10.9056482       | torch.Size([16, 512, 64])        |
| 1045    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.7.attn.q_scale_mul                    | output              | qint8         | 0.0122446 | -1.5673095   | 1.5550649     | 0.0113075    | 0.1704008        | torch.Size([16, 512, 64])        |
| 1046    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([16, 256, 64])        |
| 1046    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([16, 64, 256])        |
| 1047    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.7.attn.matmul                         | input_0             | qint8         | 0.0122446 | -1.5673095   | 1.5550649     | 0.0113075    | 0.1704008        | torch.Size([16, 512, 64])        |
| 1047    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.7.attn.matmul                         | input_1             | qint8         | 0.0786716 | -3.7762377   | 4.7989688     | 0.0534721    | 2.5751073        | torch.Size([16, 64, 256])        |
| 1047    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.7.attn.matmul                         | output              | qint8         | 1.8383843 | -77.2121429  | 33.0909157    | -10.4351645  | 446.6509705      | torch.Size([16, 512, 256])       |
| 1048    | torch.Tensor.max                                                            | head.layers.7.attn.softmax                        | input               | qint8         | 1.8383843 | -77.2121429  | 33.0909157    | -10.4351645  | 446.6509705      | torch.Size([16, 512, 256])       |
| 1048    | torch.Tensor.max                                                            | head.layers.7.attn.softmax                        | output_0            | qint8         | 1.8383843 | -77.2121429  | 33.0909157    | -10.4351645  | 446.7052917      | torch.Size([16, 512, 1])         |
| 1048    | torch.Tensor.max                                                            | head.layers.7.attn.softmax                        | output_1            | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1049    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.7.attn.softmax.sub                    | input_0             | qint8         | 1.8383843 | -77.2121429  | 33.0909157    | -10.4351645  | 446.6509705      | torch.Size([16, 512, 256])       |
| 1049    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.7.attn.softmax.sub                    | input_1             | qint8         | 1.8383843 | -77.2121429  | 33.0909157    | -10.4351645  | 446.7052917      | torch.Size([16, 512, 1])         |
| 1049    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.7.attn.softmax.sub                    | output              | qint16        | 0.0149309 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1050    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.7.attn.softmax.exp                    | input               | qint16        | 0.0149309 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1050    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.7.attn.softmax.exp                    | output              | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1051    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.7.attn.softmax.sum                    | input               | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1051    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.7.attn.softmax.sum                    | output              | qint16        | 0.0037285 | 122.1723404  | 122.1723404   | 122.1723404  | 0.0000000        | torch.Size([16, 512, 1])         |
| 1052    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.7.attn.softmax.reciprocal             | input               | qint16        | 0.0037285 | 122.1723404  | 122.1723404   | 122.1723404  | 0.0000000        | torch.Size([16, 512, 1])         |
| 1052    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.7.attn.softmax.reciprocal             | output              | qint16        | 0.0000305 | 0.0081788    | 0.0081788     | 0.0081788    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1053    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.7.attn.softmax.mul                    | input_0             | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1053    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.7.attn.softmax.mul                    | input_1             | qint16        | 0.0000305 | 0.0081788    | 0.0081788     | 0.0081788    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1053    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.7.attn.softmax.mul                    | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1054    | torch.nn.modules.dropout.Dropout                                            | head.layers.7.attn.attention_drop                 | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1054    | torch.nn.modules.dropout.Dropout                                            | head.layers.7.attn.attention_drop                 | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1055    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.7.attn.attn_matmul                    | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1055    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.7.attn.attn_matmul                    | input_1             | qint8         | 0.0063235 | -0.0632350   | 0.0505880     | 0.0004570    | 0.0003441        | torch.Size([16, 256, 64])        |
| 1055    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.7.attn.attn_matmul                    | output              | qint8         | 0.0063311 | -0.1266230   | 0.1012984     | 0.0009150    | 0.0013799        | torch.Size([16, 512, 64])        |
| 1056    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0063311 | -0.1266230   | 0.1012984     | 0.0009150    | 0.0013799        | torch.Size([16, 512, 64])        |
| 1056    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0063311 | -0.1266230   | 0.1012984     | 0.0009150    | 0.0013799        | torch.Size([512, 16, 64])        |
| 1057    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | input_0             | qint8         | 0.0063311 | -0.1266230   | 0.1012984     | 0.0009150    | 0.0013799        | torch.Size([512, 16, 64])        |
| 1057    | torch.Tensor.reshape                                                        | head.layers.7.attn                                | output              | qint8         | 0.0063311 | -0.1266230   | 0.1012984     | 0.0009150    | 0.0013799        | torch.Size([512, 2, 512])        |
| 1058    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.out_proj                       | input               | qint8         | 0.0063311 | -0.1266230   | 0.1012984     | 0.0009150    | 0.0013799        | torch.Size([512, 2, 512])        |
| 1058    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.out_proj                       | weight              | torch.float32 |           | -0.1888028   | 0.1700685     | 0.0000971    | 0.0020714        | torch.Size([512, 512])           |
| 1058    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.out_proj                       | bias                | torch.float32 |           | -0.2538213   | 0.2903754     | 0.0073539    | 0.0048732        | torch.Size([512])                |
| 1058    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.7.attn.out_proj                       | output              | qint8         | 0.0085256 | -0.7076241   | 0.7246752     | 0.0135877    | 0.0263982        | torch.Size([512, 2, 512])        |
| 1059    | torch.Tensor.view                                                           | head.layers.7.attn                                | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1059    | torch.Tensor.view                                                           | head.layers.7.attn                                | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 1060    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.7.attn.attn_weights_mean              | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 1060    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.7.attn.attn_weights_mean              | output              | qint8         | 0.0028934 | 0.0086801    | 0.0086801     | 0.0086801    | 0.0000000        | torch.Size([2, 512, 256])        |
| 1061    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | input_0             | qint8         | 0.0085256 | -0.7076241   | 0.7246752     | 0.0135877    | 0.0263982        | torch.Size([512, 2, 512])        |
| 1061    | torch.Tensor.transpose                                                      | head.layers.7.attn                                | output              | qint8         | 0.0085256 | -0.7076241   | 0.7246752     | 0.0135877    | 0.0263982        | torch.Size([2, 512, 512])        |
| 1062    | torch.nn.modules.dropout.Dropout                                            | head.layers.7.dropout                             | input               | qint8         | 0.0085256 | -0.7076241   | 0.7246752     | 0.0135877    | 0.0263982        | torch.Size([2, 512, 512])        |
| 1062    | torch.nn.modules.dropout.Dropout                                            | head.layers.7.dropout                             | output              | qint8         | 0.0085256 | -0.7076241   | 0.7246752     | 0.0135877    | 0.0263982        | torch.Size([2, 512, 512])        |
| 1063    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.7.add                                 | input_0             | qint8         | 0.0531841 | -4.8397570   | 6.7543864     | 0.0350001    | 0.7287509        | torch.Size([2, 512, 512])        |
| 1063    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.7.add                                 | input_1             | qint8         | 0.0085256 | -0.7076241   | 0.7246752     | 0.0135877    | 0.0263982        | torch.Size([2, 512, 512])        |
| 1063    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.7.add                                 | output              | qint8         | 0.0501450 | -4.1620336   | 6.3684130     | 0.0489036    | 0.6503991        | torch.Size([2, 512, 512])        |
| 1064    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(2)                                  | input               | qint8         | 0.0501450 | -4.1620336   | 6.3684130     | 0.0489036    | 0.6503991        | torch.Size([2, 512, 512])        |
| 1064    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(2)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 1064    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(2)                                  | output              | qint16        | 0.0015259 | -6.0150146   | 7.5180054     | -0.0134100   | 0.7968006        | torch.Size([2, 512, 256])        |
| 1065    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(3)                                 | input               | qint16        | 0.0015259 | -6.0150146   | 7.5180054     | -0.0134100   | 0.7968006        | torch.Size([2, 512, 256])        |
| 1065    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(3)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 1065    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(3)                                 | output              | qint16        | 0.0001526 | -3.5119629   | 3.5377502     | 0.0058412    | 0.0484033        | torch.Size([2, 512, 512])        |
| 1066    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.8.query_cat                           | input_0             | qint16        | 0.0015259 | -6.0150146   | 7.5180054     | -0.0134100   | 0.7968006        | torch.Size([2, 512, 256])        |
| 1066    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.8.query_cat                           | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0593189    | 0.8416965        | torch.Size([2, 512, 256])        |
| 1066    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.8.query_cat                           | output              | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([2, 512, 512])        |
| 1067    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.8.key_cat                             | input_0             | qint16        | 0.0015259 | -6.0150146   | 7.5180054     | -0.0134100   | 0.7968006        | torch.Size([2, 512, 256])        |
| 1067    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.8.key_cat                             | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0593189    | 0.8416965        | torch.Size([2, 512, 256])        |
| 1067    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.8.key_cat                             | output              | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([2, 512, 512])        |
| 1068    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([2, 512, 512])        |
| 1068    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1069    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([2, 512, 512])        |
| 1069    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1070    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint16        | 0.0001526 | -3.5119629   | 3.5377502     | 0.0058412    | 0.0484033        | torch.Size([2, 512, 512])        |
| 1070    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint16        | 0.0001526 | -3.5119629   | 3.5377502     | 0.0058412    | 0.0484033        | torch.Size([512, 2, 512])        |
| 1071    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | input_0             | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1071    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | output              | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1072    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | input_0             | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1072    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | output              | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1073    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | input_0             | qint16        | 0.0001526 | -3.5119629   | 3.5377502     | 0.0058412    | 0.0484033        | torch.Size([512, 2, 512])        |
| 1073    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | output              | qint16        | 0.0001526 | -3.5119629   | 3.5377502     | 0.0058412    | 0.0484033        | torch.Size([512, 2, 512])        |
| 1074    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.q_proj                         | input               | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1074    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.q_proj                         | weight              | torch.float32 |           | -0.4437911   | 0.3668911     | -0.0000340   | 0.0026953        | torch.Size([512, 512])           |
| 1074    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.q_proj                         | bias                | torch.float32 |           | -0.1242760   | 0.1437089     | -0.0000070   | 0.0009090        | torch.Size([512])                |
| 1074    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.q_proj                         | output              | qint8         | 0.0870049 | -10.1795692  | 11.0496178    | 0.0260189    | 5.4213982        | torch.Size([512, 2, 512])        |
| 1075    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.k_proj                         | input               | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([512, 2, 512])        |
| 1075    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.k_proj                         | weight              | torch.float32 |           | -0.5519633   | 0.4679662     | -0.0001220   | 0.0030018        | torch.Size([512, 512])           |
| 1075    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.k_proj                         | bias                | torch.float32 |           | -0.1264462   | 0.1836499     | 0.0014424    | 0.0003835        | torch.Size([512])                |
| 1075    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.k_proj                         | output              | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([512, 2, 512])        |
| 1076    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.v_proj                         | input               | qint16        | 0.0001526 | -3.5119629   | 3.5377502     | 0.0058412    | 0.0484033        | torch.Size([512, 2, 512])        |
| 1076    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.v_proj                         | weight              | torch.float32 |           | -0.3248511   | 0.2856031     | -0.0000271   | 0.0013692        | torch.Size([512, 512])           |
| 1076    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.v_proj                         | bias                | torch.float32 |           | -0.2827679   | 0.3053629     | -0.0033159   | 0.0075418        | torch.Size([512])                |
| 1076    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.v_proj                         | output              | qint8         | 0.0209362 | -2.3867321   | 2.4495409     | -0.0055700   | 0.1134455        | torch.Size([512, 2, 512])        |
| 1077    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | input_0             | qint8         | 0.0870049 | -10.1795692  | 11.0496178    | 0.0260189    | 5.4213982        | torch.Size([512, 2, 512])        |
| 1077    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | output              | qint8         | 0.0870049 | -10.1795692  | 11.0496178    | 0.0260189    | 5.4213982        | torch.Size([512, 16, 64])        |
| 1078    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0870049 | -10.1795692  | 11.0496178    | 0.0260189    | 5.4213982        | torch.Size([512, 16, 64])        |
| 1078    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0870049 | -10.1795692  | 11.0496178    | 0.0260189    | 5.4213982        | torch.Size([16, 512, 64])        |
| 1079    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | input_0             | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([512, 2, 512])        |
| 1079    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | output              | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([512, 16, 64])        |
| 1080    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([512, 16, 64])        |
| 1080    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([16, 512, 64])        |
| 1081    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | input_0             | qint8         | 0.0209362 | -2.3867321   | 2.4495409     | -0.0055700   | 0.1134455        | torch.Size([512, 2, 512])        |
| 1081    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | output              | qint8         | 0.0209362 | -2.3867321   | 2.4495409     | -0.0055700   | 0.1134455        | torch.Size([512, 16, 64])        |
| 1082    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0209362 | -2.3867321   | 2.4495409     | -0.0055700   | 0.1134455        | torch.Size([512, 16, 64])        |
| 1082    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0209362 | -2.3867321   | 2.4495409     | -0.0055700   | 0.1134455        | torch.Size([16, 512, 64])        |
| 1083    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.8.attn.q_scale_mul                    | input_0             | qint8         | 0.0870049 | -10.1795692  | 11.0496178    | 0.0260189    | 5.4213982        | torch.Size([16, 512, 64])        |
| 1083    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.8.attn.q_scale_mul                    | output              | qint8         | 0.0108756 | -1.2724462   | 1.3812022     | 0.0032524    | 0.0847093        | torch.Size([16, 512, 64])        |
| 1084    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([16, 512, 64])        |
| 1084    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([16, 64, 512])        |
| 1085    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.8.attn.matmul                         | input_0             | qint8         | 0.0108756 | -1.2724462   | 1.3812022     | 0.0032524    | 0.0847093        | torch.Size([16, 512, 64])        |
| 1085    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.8.attn.matmul                         | input_1             | qint8         | 0.0885442 | -11.3336525  | 11.0680199    | -0.0214980   | 5.1589975        | torch.Size([16, 64, 512])        |
| 1085    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.8.attn.matmul                         | output              | qint8         | 1.4383408 | -120.8206253 | 152.4641266   | -0.6072932   | 321.2579651      | torch.Size([16, 512, 512])       |
| 1086    | torch.Tensor.max                                                            | head.layers.8.attn.softmax                        | input               | qint8         | 1.4383408 | -120.8206253 | 152.4641266   | -0.6072932   | 321.2579651      | torch.Size([16, 512, 512])       |
| 1086    | torch.Tensor.max                                                            | head.layers.8.attn.softmax                        | output_0            | qint8         | 1.4383408 | 5.7533631    | 152.4641266   | 35.1322479   | 617.4669800      | torch.Size([16, 512, 1])         |
| 1086    | torch.Tensor.max                                                            | head.layers.8.attn.softmax                        | output_1            | torch.int64   |           | 128.0000000  | 511.0000000   | 269.7901611  | 11850.1279297    | torch.Size([16, 512, 1])         |
| 1087    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.8.attn.softmax.sub                    | input_0             | qint8         | 1.4383408 | -120.8206253 | 152.4641266   | -0.6072932   | 321.2579651      | torch.Size([16, 512, 512])       |
| 1087    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.8.attn.softmax.sub                    | input_1             | qint8         | 1.4383408 | 5.7533631    | 152.4641266   | 35.1322479   | 617.4669800      | torch.Size([16, 512, 1])         |
| 1087    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.8.attn.softmax.sub                    | output              | qint16        | 0.0114520 | -260.3389282 | 0.0000000     | -35.7399063  | 993.0215454      | torch.Size([16, 512, 512])       |
| 1088    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.8.attn.softmax.exp                    | input               | qint16        | 0.0114520 | -260.3389282 | 0.0000000     | -35.7399063  | 993.0215454      | torch.Size([16, 512, 512])       |
| 1088    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.8.attn.softmax.exp                    | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0096897    | 0.0065407        | torch.Size([16, 512, 512])       |
| 1089    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.8.attn.softmax.sum                    | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0096897    | 0.0065407        | torch.Size([16, 512, 512])       |
| 1089    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.8.attn.softmax.sum                    | output              | qint16        | 0.0018703 | 1.0006306    | 43.1617813    | 4.9611130    | 20.8364735       | torch.Size([16, 512, 1])         |
| 1090    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.8.attn.softmax.reciprocal             | input               | qint16        | 0.0018703 | 1.0006306    | 43.1617813    | 4.9611130    | 20.8364735       | torch.Size([16, 512, 1])         |
| 1090    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.8.attn.softmax.reciprocal             | output              | qint16        | 0.0000305 | 0.0231632    | 0.9993744     | 0.3550736    | 0.0593815        | torch.Size([16, 512, 1])         |
| 1091    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.8.attn.softmax.mul                    | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0096897    | 0.0065407        | torch.Size([16, 512, 512])       |
| 1091    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.8.attn.softmax.mul                    | input_1             | qint16        | 0.0000305 | 0.0231632    | 0.9993744     | 0.3550736    | 0.0593815        | torch.Size([16, 512, 1])         |
| 1091    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.8.attn.softmax.mul                    | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019163    | 0.0005006        | torch.Size([16, 512, 512])       |
| 1092    | torch.nn.modules.dropout.Dropout                                            | head.layers.8.attn.attention_drop                 | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019163    | 0.0005006        | torch.Size([16, 512, 512])       |
| 1092    | torch.nn.modules.dropout.Dropout                                            | head.layers.8.attn.attention_drop                 | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019163    | 0.0005006        | torch.Size([16, 512, 512])       |
| 1093    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.8.attn.attn_matmul                    | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019163    | 0.0005006        | torch.Size([16, 512, 512])       |
| 1093    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.8.attn.attn_matmul                    | input_1             | qint8         | 0.0209362 | -2.3867321   | 2.4495409     | -0.0055700   | 0.1134455        | torch.Size([16, 512, 64])        |
| 1093    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.8.attn.attn_matmul                    | output              | qint8         | 0.0177738 | -1.8662447   | 2.0262084     | -0.0091976   | 0.0702977        | torch.Size([16, 512, 64])        |
| 1094    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0177738 | -1.8662447   | 2.0262084     | -0.0091976   | 0.0702977        | torch.Size([16, 512, 64])        |
| 1094    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0177738 | -1.8662447   | 2.0262084     | -0.0091976   | 0.0702977        | torch.Size([512, 16, 64])        |
| 1095    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | input_0             | qint8         | 0.0177738 | -1.8662447   | 2.0262084     | -0.0091976   | 0.0702977        | torch.Size([512, 16, 64])        |
| 1095    | torch.Tensor.reshape                                                        | head.layers.8.attn                                | output              | qint8         | 0.0177738 | -1.8662447   | 2.0262084     | -0.0091976   | 0.0702977        | torch.Size([512, 2, 512])        |
| 1096    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.out_proj                       | input               | qint8         | 0.0177738 | -1.8662447   | 2.0262084     | -0.0091976   | 0.0702977        | torch.Size([512, 2, 512])        |
| 1096    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.out_proj                       | weight              | torch.float32 |           | -0.2233234   | 0.2726021     | -0.0000586   | 0.0024737        | torch.Size([512, 512])           |
| 1096    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.out_proj                       | bias                | torch.float32 |           | -0.3740546   | 0.4565917     | -0.0073158   | 0.0213863        | torch.Size([512])                |
| 1096    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.8.attn.out_proj                       | output              | qint8         | 0.0232625 | -2.4425588   | 2.4425588     | 0.0083925    | 0.3581129        | torch.Size([512, 2, 512])        |
| 1097    | torch.Tensor.view                                                           | head.layers.8.attn                                | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019163    | 0.0005006        | torch.Size([16, 512, 512])       |
| 1097    | torch.Tensor.view                                                           | head.layers.8.attn                                | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019163    | 0.0005006        | torch.Size([2, 8, 512, 512])     |
| 1098    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.8.attn.attn_weights_mean              | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019163    | 0.0005006        | torch.Size([2, 8, 512, 512])     |
| 1098    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.8.attn.attn_weights_mean              | output              | qint8         | 0.0025220 | 0.0000000    | 0.2446360     | 0.0018638    | 0.0000733        | torch.Size([2, 512, 512])        |
| 1099    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | input_0             | qint8         | 0.0232625 | -2.4425588   | 2.4425588     | 0.0083925    | 0.3581129        | torch.Size([512, 2, 512])        |
| 1099    | torch.Tensor.transpose                                                      | head.layers.8.attn                                | output              | qint8         | 0.0232625 | -2.4425588   | 2.4425588     | 0.0083925    | 0.3581129        | torch.Size([2, 512, 512])        |
| 1100    | torch.nn.modules.dropout.Dropout                                            | head.layers.8.dropout                             | input               | qint8         | 0.0232625 | -2.4425588   | 2.4425588     | 0.0083925    | 0.3581129        | torch.Size([2, 512, 512])        |
| 1100    | torch.nn.modules.dropout.Dropout                                            | head.layers.8.dropout                             | output              | qint8         | 0.0232625 | -2.4425588   | 2.4425588     | 0.0083925    | 0.3581129        | torch.Size([2, 512, 512])        |
| 1101    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.8.add                                 | input_0             | qint8         | 0.0531841 | -6.0098081   | 6.7543864     | 0.0246416    | 0.8147126        | torch.Size([2, 512, 512])        |
| 1101    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.8.add                                 | input_1             | qint8         | 0.0232625 | -2.4425588   | 2.4425588     | 0.0083925    | 0.3581129        | torch.Size([2, 512, 512])        |
| 1101    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.8.add                                 | output              | qint8         | 0.0558511 | -7.0372343   | 7.0930853     | 0.0330117    | 1.0771980        | torch.Size([2, 512, 512])        |
| 1102    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(3)                                  | input               | qint8         | 0.0558511 | -7.0372343   | 7.0930853     | 0.0330117    | 1.0771980        | torch.Size([2, 512, 512])        |
| 1102    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(3)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 1102    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(3)                                  | output              | qint16        | 0.0015259 | -50.0000000  | 38.6550903    | 0.0216251    | 18.8238354       | torch.Size([2, 512, 256])        |
| 1103    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.9.input_mean.mean                     | input_0             | qint16        | 0.0015259 | -50.0000000  | 38.6550903    | 0.0216251    | 18.8238354       | torch.Size([2, 512, 256])        |
| 1103    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.9.input_mean.mean                     | output              | qint16        | 0.0000063 | -0.0490784   | 0.0805805     | 0.0216249    | 0.0006332        | torch.Size([2, 512, 1])          |
| 1104    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.9.sub                                 | input_0             | qint16        | 0.0015259 | -50.0000000  | 38.6550903    | 0.0216251    | 18.8238354       | torch.Size([2, 512, 256])        |
| 1104    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.9.sub                                 | input_1             | qint16        | 0.0000063 | -0.0490784   | 0.0805805     | 0.0216249    | 0.0006332        | torch.Size([2, 512, 1])          |
| 1104    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.9.sub                                 | output              | qint16        | 0.0015373 | -50.0794601  | 38.6280251    | 0.0000001    | 18.8231506       | torch.Size([2, 512, 256])        |
| 1105    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.mul                                 | input_0             | qint16        | 0.0015373 | -50.0794601  | 38.6280251    | 0.0000001    | 18.8231506       | torch.Size([2, 512, 256])        |
| 1105    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.mul                                 | input_1             | qint16        | 0.0015373 | -50.0794601  | 38.6280251    | 0.0000001    | 18.8231506       | torch.Size([2, 512, 256])        |
| 1105    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.mul                                 | output              | qint16        | 0.0774452 | 0.0000000    | 2507.9858398  | 18.8222733   | 13072.7382812    | torch.Size([2, 512, 256])        |
| 1106    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.9.var_mean.mean                       | input_0             | qint16        | 0.0774452 | 0.0000000    | 2507.9858398  | 18.8222733   | 13072.7382812    | torch.Size([2, 512, 256])        |
| 1106    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.9.var_mean.mean                       | output              | qint16        | 0.0010811 | 6.8477654    | 35.4248085    | 18.8007107   | 52.2066650       | torch.Size([2, 512, 1])          |
| 1107    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.9.rsqrt                               | input               | qint16        | 0.0010811 | 6.8477654    | 35.4248085    | 18.8007107   | 52.2066650       | torch.Size([2, 512, 1])          |
| 1107    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.9.rsqrt                               | output              | qint16        | 0.0000132 | 0.1680161    | 0.3821365     | 0.2448551    | 0.0024965        | torch.Size([2, 512, 1])          |
| 1108    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.out_mul                             | input_0             | qint16        | 0.0015373 | -50.0794601  | 38.6280251    | 0.0000001    | 18.8231506       | torch.Size([2, 512, 256])        |
| 1108    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.out_mul                             | input_1             | qint16        | 0.0000132 | 0.1680161    | 0.3821365     | 0.2448551    | 0.0024965        | torch.Size([2, 512, 1])          |
| 1108    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.out_mul                             | output              | qint16        | 0.0002637 | -8.4445620   | 6.4901371     | -0.0000002   | 1.0006704        | torch.Size([2, 512, 256])        |
| 1109    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.9.weight_quant                        | input               | torch.float32 |           | 0.7484364    | 1.0673635     | 0.8810046    | 0.0025054        | torch.Size([256])                |
| 1109    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.9.weight_quant                        | output              | qint16        | 0.0000326 | 0.7484493    | 1.0673473     | 0.8810048    | 0.0025054        | torch.Size([256])                |
| 1110    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.weight_mul                          | input_0             | qint16        | 0.0002637 | -8.4445620   | 6.4901371     | -0.0000002   | 1.0006704        | torch.Size([2, 512, 256])        |
| 1110    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.weight_mul                          | input_1             | qint16        | 0.0000326 | 0.7484493    | 1.0673473     | 0.8810048    | 0.0025054        | torch.Size([256])                |
| 1110    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.9.weight_mul                          | output              | qint16        | 0.0002415 | -7.7328448   | 5.8013844     | -0.0002508   | 0.8017287        | torch.Size([2, 512, 256])        |
| 1111    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.9.bias_quant                          | input               | torch.float32 |           | -0.0912300   | 0.1098549     | -0.0018977   | 0.0007133        | torch.Size([256])                |
| 1111    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.9.bias_quant                          | output              | qint16        | 0.0000034 | -0.0912297   | 0.1098532     | -0.0018977   | 0.0007133        | torch.Size([256])                |
| 1112    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.9.bias_add                            | input_0             | qint16        | 0.0002415 | -7.7328448   | 5.8013844     | -0.0002508   | 0.8017287        | torch.Size([2, 512, 256])        |
| 1112    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.9.bias_add                            | input_1             | qint16        | 0.0000034 | -0.0912297   | 0.1098532     | -0.0018977   | 0.0007133        | torch.Size([256])                |
| 1112    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.9.bias_add                            | output              | qint8         | 0.0580720 | -7.4332142   | 5.7491264     | -0.0022952   | 0.7825898        | torch.Size([2, 512, 256])        |
| 1113    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.kps_generator.offset               | input               | qint8         | 0.0580720 | -7.4332142   | 5.7491264     | -0.0022952   | 0.7825898        | torch.Size([2, 512, 256])        |
| 1113    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.kps_generator.offset               | weight              | torch.float32 |           | -0.3201400   | 0.3177086     | 0.0014321    | 0.0068747        | torch.Size([24, 256])            |
| 1113    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.kps_generator.offset               | bias                | torch.float32 |           | -0.1534995   | 0.1723033     | 0.0028447    | 0.0042549        | torch.Size([24])                 |
| 1113    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.kps_generator.offset               | output              | qint16        | 0.0005682 | -13.6583385  | 15.9903193    | 0.1943678    | 16.2370319       | torch.Size([2, 512, 24])         |
| 1114    | torch.Tensor.view                                                           | head.layers.10.kps_generator                      | input_0             | qint16        | 0.0005682 | -13.6583385  | 15.9903193    | 0.1943678    | 16.2370319       | torch.Size([2, 512, 24])         |
| 1114    | torch.Tensor.view                                                           | head.layers.10.kps_generator                      | output              | qint16        | 0.0005682 | -13.6583385  | 15.9903193    | 0.1943678    | 16.2370319       | torch.Size([2, 512, 8, 3])       |
| 1115    | torch.Tensor.__getitem__                                                    | head.layers.10.kps_generator                      | input_0             | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 1115    | torch.Tensor.__getitem__                                                    | head.layers.10.kps_generator                      | output              | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.7071505    | 273.0369873      | torch.Size([2, 512, 1, 3])       |
| 1116    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.kps_generator.keypoints_add        | input_0             | qint16        | 0.0005682 | -13.6583385  | 15.9903193    | 0.1943678    | 16.2370319       | torch.Size([2, 512, 8, 3])       |
| 1116    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.kps_generator.keypoints_add        | input_1             | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.7071505    | 273.0369873      | torch.Size([2, 512, 1, 3])       |
| 1116    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.kps_generator.keypoints_add        | output              | qint16        | 0.0020822 | -63.5249443  | 65.7196274    | 0.9015428    | 288.5512390      | torch.Size([2, 512, 8, 3])       |
| 1117    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.weight_add                         | input_0             | qint8         | 0.0580720 | -7.4332142   | 5.7491264     | -0.0022952   | 0.7825898        | torch.Size([2, 512, 256])        |
| 1117    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.weight_add                         | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0593189    | 0.8416965        | torch.Size([2, 512, 256])        |
| 1117    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.weight_add                         | output              | qint8         | 0.0613813 | -7.8568125   | 7.7954311     | 0.0550607    | 1.4967724        | torch.Size([2, 512, 256])        |
| 1118    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 1118    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 1119    | torch.Tensor.reshape                                                        | head.layers.10                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 1119    | torch.Tensor.reshape                                                        | head.layers.10                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 1120    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.0                   | input               | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 1120    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.0                   | weight              | torch.float32 |           | -1.0164793   | 0.8352295     | 0.0021029    | 0.0230761        | torch.Size([256, 12])            |
| 1120    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.0                   | bias                | torch.float32 |           | -0.3216627   | 0.3002117     | 0.0078120    | 0.0275127        | torch.Size([256])                |
| 1120    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.0                   | output              | torch.float32 |           | -1.1073740   | 1.2420985     | 0.0094809    | 0.2001274        | torch.Size([2, 6, 256])          |
| 1121    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.10.camera_encoder.1                   | input               | torch.float32 |           | -1.1073740   | 1.2420985     | 0.0094809    | 0.2001274        | torch.Size([2, 6, 256])          |
| 1121    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.10.camera_encoder.1                   | output              | qint8         | 0.0096764 | 0.0000000    | 1.2289008     | 0.1932347    | 0.0688233        | torch.Size([2, 6, 256])          |
| 1122    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.input_mean.mean   | input_0             | qint8         | 0.0096764 | 0.0000000    | 1.2289008     | 0.1932347    | 0.0688233        | torch.Size([2, 6, 256])          |
| 1122    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.input_mean.mean   | output              | qint16        | 0.0000068 | 0.1360762    | 0.2157524     | 0.1932358    | 0.0007558        | torch.Size([2, 6, 1])            |
| 1123    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.2.sub               | input_0             | qint8         | 0.0096764 | 0.0000000    | 1.2289008     | 0.1932347    | 0.0688233        | torch.Size([2, 6, 256])          |
| 1123    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.2.sub               | input_1             | qint16        | 0.0000068 | 0.1360762    | 0.2157524     | 0.1932358    | 0.0007558        | torch.Size([2, 6, 1])            |
| 1123    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.2.sub               | output              | qint16        | 0.0000319 | -0.2157428   | 1.0208527     | -0.0000007   | 0.0681300        | torch.Size([2, 6, 256])          |
| 1124    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.mul               | input_0             | qint16        | 0.0000319 | -0.2157428   | 1.0208527     | -0.0000007   | 0.0681300        | torch.Size([2, 6, 256])          |
| 1124    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.mul               | input_1             | qint16        | 0.0000319 | -0.2157428   | 1.0208527     | -0.0000007   | 0.0681300        | torch.Size([2, 6, 256])          |
| 1124    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.mul               | output              | qint16        | 0.0000334 | 0.0000000    | 1.0421393     | 0.0681073    | 0.0136362        | torch.Size([2, 6, 256])          |
| 1125    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.var_mean.mean     | input_0             | qint16        | 0.0000334 | 0.0000000    | 1.0421393     | 0.0681073    | 0.0136362        | torch.Size([2, 6, 256])          |
| 1125    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.var_mean.mean     | output              | qint16        | 0.0000028 | 0.0256056    | 0.0878842     | 0.0681075    | 0.0004458        | torch.Size([2, 6, 1])            |
| 1126    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.10.camera_encoder.2.rsqrt             | input               | qint16        | 0.0000028 | 0.0256056    | 0.0878842     | 0.0681075    | 0.0004458        | torch.Size([2, 6, 1])            |
| 1126    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.10.camera_encoder.2.rsqrt             | output              | qint16        | 0.0001884 | 3.3730276    | 6.1748700     | 4.0500898    | 0.9901486        | torch.Size([2, 6, 1])            |
| 1127    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.out_mul           | input_0             | qint16        | 0.0000319 | -0.2157428   | 1.0208527     | -0.0000007   | 0.0681300        | torch.Size([2, 6, 256])          |
| 1127    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.out_mul           | input_1             | qint16        | 0.0001884 | 3.3730276    | 6.1748700     | 4.0500898    | 0.9901486        | torch.Size([2, 6, 1])            |
| 1127    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.out_mul           | output              | qint16        | 0.0001238 | -0.8403264   | 3.9651277     | -0.0000040   | 0.9982076        | torch.Size([2, 6, 256])          |
| 1128    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.2.weight_quant      | input               | torch.float32 |           | 0.7735876    | 1.1663378     | 0.9820545    | 0.0040344        | torch.Size([256])                |
| 1128    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.2.weight_quant      | output              | qint16        | 0.0000356 | 0.7735720    | 1.1663200     | 0.9820542    | 0.0040345        | torch.Size([256])                |
| 1129    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.weight_mul        | input_0             | qint16        | 0.0001238 | -0.8403264   | 3.9651277     | -0.0000040   | 0.9982076        | torch.Size([2, 6, 256])          |
| 1129    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.weight_mul        | input_1             | qint16        | 0.0000356 | 0.7735720    | 1.1663200     | 0.9820542    | 0.0040345        | torch.Size([256])                |
| 1129    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.weight_mul        | output              | qint16        | 0.0001231 | -0.9305338   | 3.9430463     | 0.0007862    | 0.9860449        | torch.Size([2, 6, 256])          |
| 1130    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.2.bias_quant        | input               | torch.float32 |           | -0.0987514   | 0.1280675     | 0.0000397    | 0.0013846        | torch.Size([256])                |
| 1130    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.2.bias_quant        | output              | qint16        | 0.0000039 | -0.0987528   | 0.1280655     | 0.0000397    | 0.0013846        | torch.Size([256])                |
| 1131    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.2.bias_add          | input_0             | qint16        | 0.0001231 | -0.9305338   | 3.9430463     | 0.0007862    | 0.9860449        | torch.Size([2, 6, 256])          |
| 1131    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.2.bias_add          | input_1             | qint16        | 0.0000039 | -0.0987528   | 0.1280655     | 0.0000397    | 0.0013846        | torch.Size([256])                |
| 1131    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.2.bias_add          | output              | qint8         | 0.0310984 | -0.9951485   | 3.9494958     | 0.0008301    | 0.9982727        | torch.Size([2, 6, 256])          |
| 1132    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.3                   | input               | qint8         | 0.0310984 | -0.9951485   | 3.9494958     | 0.0008301    | 0.9982727        | torch.Size([2, 6, 256])          |
| 1132    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.3                   | weight              | torch.float32 |           | -0.3692743   | 0.3998400     | -0.0000485   | 0.0051414        | torch.Size([256, 256])           |
| 1132    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.3                   | bias                | torch.float32 |           | -0.0814586   | 0.2724895     | -0.0004629   | 0.0023738        | torch.Size([256])                |
| 1132    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.camera_encoder.3                   | output              | torch.float32 |           | -7.5101581   | 47.0946350    | 0.0124796    | 33.5583801       | torch.Size([2, 6, 256])          |
| 1133    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.10.camera_encoder.4                   | input               | torch.float32 |           | -7.5101581   | 47.0946350    | 0.0124796    | 33.5583801       | torch.Size([2, 6, 256])          |
| 1133    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.10.camera_encoder.4                   | output              | qint8         | 0.3681145 | 0.0000000    | 46.7505379    | 1.5270998    | 27.2959976       | torch.Size([2, 6, 256])          |
| 1134    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.input_mean.mean   | input_0             | qint8         | 0.3681145 | 0.0000000    | 46.7505379    | 1.5270998    | 27.2959976       | torch.Size([2, 6, 256])          |
| 1134    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.input_mean.mean   | output              | qint16        | 0.0000519 | 1.4019762    | 1.6953360     | 1.5271006    | 0.0103422        | torch.Size([2, 6, 1])            |
| 1135    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.5.sub               | input_0             | qint8         | 0.3681145 | 0.0000000    | 46.7505379    | 1.5270998    | 27.2959976       | torch.Size([2, 6, 256])          |
| 1135    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.5.sub               | input_1             | qint16        | 0.0000519 | 1.4019762    | 1.6953360     | 1.5271006    | 0.0103422        | torch.Size([2, 6, 1])            |
| 1135    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.5.sub               | output              | qint16        | 0.0013896 | -1.6952597   | 45.3481979    | 0.0000683    | 27.2861652       | torch.Size([2, 6, 256])          |
| 1136    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.mul               | input_0             | qint16        | 0.0013896 | -1.6952597   | 45.3481979    | 0.0000683    | 27.2861652       | torch.Size([2, 6, 256])          |
| 1136    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.mul               | input_1             | qint16        | 0.0013896 | -1.6952597   | 45.3481979    | 0.0000683    | 27.2861652       | torch.Size([2, 6, 256])          |
| 1136    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.mul               | output              | qint16        | 0.0632698 | 0.0000000    | 2056.4570312  | 27.2767448   | 33405.7500000    | torch.Size([2, 6, 256])          |
| 1137    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.var_mean.mean     | input_0             | qint16        | 0.0632698 | 0.0000000    | 2056.4570312  | 27.2767448   | 33405.7500000    | torch.Size([2, 6, 256])          |
| 1137    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.var_mean.mean     | output              | qint16        | 0.0008635 | 24.4474583   | 28.1690178    | 27.2767639   | 1.2194730        | torch.Size([2, 6, 1])            |
| 1138    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.10.camera_encoder.5.rsqrt             | input               | qint16        | 0.0008635 | 24.4474583   | 28.1690178    | 27.2767639   | 1.2194730        | torch.Size([2, 6, 1])            |
| 1138    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.10.camera_encoder.5.rsqrt             | output              | qint16        | 0.0000062 | 0.1884134    | 0.2015916     | 0.1915300    | 0.0000153        | torch.Size([2, 6, 1])            |
| 1139    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.out_mul           | input_0             | qint16        | 0.0013896 | -1.6952597   | 45.3481979    | 0.0000683    | 27.2861652       | torch.Size([2, 6, 256])          |
| 1139    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.out_mul           | input_1             | qint16        | 0.0000062 | 0.1884134    | 0.2015916     | 0.1915300    | 0.0000153        | torch.Size([2, 6, 1])            |
| 1139    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.out_mul           | output              | qint16        | 0.0002666 | -0.3199672   | 8.6647110     | 0.0000173    | 0.9997947        | torch.Size([2, 6, 256])          |
| 1140    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.5.weight_quant      | input               | torch.float32 |           | 0.5887775    | 1.2592373     | 0.8845733    | 0.0137082        | torch.Size([256])                |
| 1140    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.5.weight_quant      | output              | qint16        | 0.0000384 | 0.5887777    | 1.2592181     | 0.8845729    | 0.0137082        | torch.Size([256])                |
| 1141    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.weight_mul        | input_0             | qint16        | 0.0002666 | -0.3199672   | 8.6647110     | 0.0000173    | 0.9997947        | torch.Size([2, 6, 256])          |
| 1141    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.weight_mul        | input_1             | qint16        | 0.0000384 | 0.5887777    | 1.2592181     | 0.8845729    | 0.0137082        | torch.Size([256])                |
| 1141    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.weight_mul        | output              | qint16        | 0.0002556 | -0.4028186   | 8.3058329     | -0.0184238   | 0.6755667        | torch.Size([2, 6, 256])          |
| 1142    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.5.bias_quant        | input               | torch.float32 |           | -0.3856634   | 0.3310284     | 0.0403769    | 0.0131642        | torch.Size([256])                |
| 1142    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.camera_encoder.5.bias_quant        | output              | qint16        | 0.0000118 | -0.3856693   | 0.3310226     | 0.0403769    | 0.0131642        | torch.Size([256])                |
| 1143    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.5.bias_add          | input_0             | qint16        | 0.0002556 | -0.4028186   | 8.3058329     | -0.0184238   | 0.6755667        | torch.Size([2, 6, 256])          |
| 1143    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.5.bias_add          | input_1             | qint16        | 0.0000118 | -0.3856693   | 0.3310226     | 0.0403769    | 0.0131642        | torch.Size([256])                |
| 1143    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.5.bias_add          | output              | qint8         | 0.0650902 | -0.7159922   | 8.2664547     | 0.0224595    | 0.6448171        | torch.Size([2, 6, 256])          |
| 1144    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint8         | 0.0613813 | -7.8568125   | 7.7954311     | 0.0550607    | 1.4967724        | torch.Size([2, 512, 256])        |
| 1144    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint8         | 0.0613813 | -7.8568125   | 7.7954311     | 0.0550607    | 1.4967724        | torch.Size([2, 512, 1, 256])     |
| 1145    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint8         | 0.0650902 | -0.7159922   | 8.2664547     | 0.0224595    | 0.6448171        | torch.Size([2, 6, 256])          |
| 1145    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint8         | 0.0650902 | -0.7159922   | 8.2664547     | 0.0224595    | 0.6448171        | torch.Size([2, 1, 6, 256])       |
| 1146    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.cam_add                            | input_0             | qint8         | 0.0613813 | -7.8568125   | 7.7954311     | 0.0550607    | 1.4967724        | torch.Size([2, 512, 1, 256])     |
| 1146    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.cam_add                            | input_1             | qint8         | 0.0650902 | -0.7159922   | 8.2664547     | 0.0224595    | 0.6448171        | torch.Size([2, 1, 6, 256])       |
| 1146    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.10.cam_add                            | output              | qint8         | 0.0561475 | -5.7831955   | 7.1307359     | 0.0773054    | 1.0905113        | torch.Size([2, 512, 6, 256])     |
| 1147    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.weights_fc                         | input               | qint8         | 0.0561475 | -5.7831955   | 7.1307359     | 0.0773054    | 1.0905113        | torch.Size([2, 512, 6, 256])     |
| 1147    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.weights_fc                         | weight              | torch.float32 |           | -0.3316146   | 0.2786153     | 0.0008751    | 0.0028934        | torch.Size([64, 256])            |
| 1147    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.weights_fc                         | bias                | torch.float32 |           | -0.0985109   | 0.1124940     | -0.0119324   | 0.0019689        | torch.Size([64])                 |
| 1147    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.weights_fc                         | output              | qint8         | 0.0667361 | -8.5422192   | 5.4056230     | -0.5466758   | 5.0024781        | torch.Size([2, 512, 6, 64])      |
| 1148    | torch.Tensor.reshape                                                        | head.layers.10                                    | input_0             | qint8         | 0.0667361 | -8.5422192   | 5.4056230     | -0.5466758   | 5.0024781        | torch.Size([2, 512, 6, 64])      |
| 1148    | torch.Tensor.reshape                                                        | head.layers.10                                    | output              | qint8         | 0.0667361 | -8.5422192   | 5.4056230     | -0.5466758   | 5.0024781        | torch.Size([2, 512, 48, 8])      |
| 1149    | torch.Tensor.max                                                            | head.layers.10.weight_softmax                     | input               | qint8         | 0.0667361 | -8.5422192   | 5.4056230     | -0.5466758   | 5.0024781        | torch.Size([2, 512, 48, 8])      |
| 1149    | torch.Tensor.max                                                            | head.layers.10.weight_softmax                     | output_0            | qint8         | 0.0667361 | 1.0677774    | 5.4056230     | 2.8307605    | 0.7800635        | torch.Size([2, 512, 1, 8])       |
| 1149    | torch.Tensor.max                                                            | head.layers.10.weight_softmax                     | output_1            | torch.int64   |           | 0.0000000    | 47.0000000    | 26.8393555   | 187.1803741      | torch.Size([2, 512, 1, 8])       |
| 1150    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.weight_softmax.sub                 | input_0             | qint8         | 0.0667361 | -8.5422192   | 5.4056230     | -0.5466758   | 5.0024781        | torch.Size([2, 512, 48, 8])      |
| 1150    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.weight_softmax.sub                 | input_1             | qint8         | 0.0667361 | 1.0677774    | 5.4056230     | 2.8307605    | 0.7800635        | torch.Size([2, 512, 1, 8])       |
| 1150    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.10.weight_softmax.sub                 | output              | qint16        | 0.0004461 | -12.8802814  | 0.0000000     | -3.3774402   | 5.1189151        | torch.Size([2, 512, 48, 8])      |
| 1151    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.10.weight_softmax.exp                 | input               | qint16        | 0.0004461 | -12.8802814  | 0.0000000     | -3.3774402   | 5.1189151        | torch.Size([2, 512, 48, 8])      |
| 1151    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.10.weight_softmax.exp                 | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1864338    | 0.0813548        | torch.Size([2, 512, 48, 8])      |
| 1152    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.10.weight_softmax.sum                 | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1864338    | 0.0813548        | torch.Size([2, 512, 48, 8])      |
| 1152    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.10.weight_softmax.sum                 | output              | qint16        | 0.0006678 | 2.0927339    | 20.9099770    | 8.9488373    | 8.9620953        | torch.Size([2, 512, 1, 8])       |
| 1153    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.10.weight_softmax.reciprocal          | input               | qint16        | 0.0006678 | 2.0927339    | 20.9099770    | 8.9488373    | 8.9620953        | torch.Size([2, 512, 1, 8])       |
| 1153    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.10.weight_softmax.reciprocal          | output              | qint16        | 0.0000228 | 0.0478188    | 0.4778460     | 0.1294213    | 0.0039741        | torch.Size([2, 512, 1, 8])       |
| 1154    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.weight_softmax.mul                 | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1864338    | 0.0813548        | torch.Size([2, 512, 48, 8])      |
| 1154    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.weight_softmax.mul                 | input_1             | qint16        | 0.0000228 | 0.0478188    | 0.4778460     | 0.1294213    | 0.0039741        | torch.Size([2, 512, 1, 8])       |
| 1154    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.weight_softmax.mul                 | output              | qint8         | 0.0048180 | 0.0000000    | 0.4769850     | 0.0206258    | 0.0011986        | torch.Size([2, 512, 48, 8])      |
| 1155    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint16        | 0.0020822 | -63.5249443  | 65.7196274    | 0.9015428    | 288.5512390      | torch.Size([2, 512, 8, 3])       |
| 1155    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint16        | 0.0020822 | -62.1444206  | 59.1022682    | 0.8007543    | 310.2439575      | torch.Size([2, 512, 8, 1])       |
| 1156    | torch.ones_like                                                             | head.layers.10                                    | input               | qint16        | 0.0020822 | -62.1444206  | 59.1022682    | 0.8007543    | 310.2439575      | torch.Size([2, 512, 8, 1])       |
| 1156    | torch.ones_like                                                             | head.layers.10                                    | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1157    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.point_quant_stub                   | input               | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1157    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.10.point_quant_stub                   | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1158    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.point_cat                          | input_0             | qint16        | 0.0020822 | -63.5249443  | 65.7196274    | 0.9015428    | 288.5512390      | torch.Size([2, 512, 8, 3])       |
| 1158    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.point_cat                          | input_1             | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1158    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.point_cat                          | output              | qint16        | 0.0018311 | -60.0000000  | 59.9981689    | 0.9251903    | 216.1541443      | torch.Size([2, 512, 8, 4])       |
| 1159    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 1159    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1160    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint16        | 0.0018311 | -60.0000000  | 59.9981689    | 0.9251903    | 216.1541443      | torch.Size([2, 512, 8, 4])       |
| 1160    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint16        | 0.0018311 | -60.0000000  | 59.9981689    | 0.9251903    | 216.1541443      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1161    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.point_matmul                       | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1161    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.point_matmul                       | input_1             | qint16        | 0.0018311 | -60.0000000  | 59.9981689    | 0.9251903    | 216.1541443      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1161    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.point_matmul                       | output              | qint16        | 0.0031495 | -94.9753494  | 93.4730530    | 0.0734765    | 97.6376038       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1162    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.10.point_sum                          | input               | qint16        | 0.0031495 | -94.9753494  | 93.4730530    | 0.0734765    | 97.6376038       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1162    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.10.point_sum                          | output              | qint16        | 0.0032739 | -103.0780945 | 102.2759933   | 0.2939344    | 385.9329224      | torch.Size([2, 6, 512, 8, 4])    |
| 1163    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint16        | 0.0032739 | -103.0780945 | 102.2759933   | 0.2939344    | 385.9329224      | torch.Size([2, 6, 512, 8, 4])    |
| 1163    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint16        | 0.0032739 | -63.8635674  | 63.7457047    | -0.5139163   | 427.2063293      | torch.Size([2, 6, 512, 8, 1])    |
| 1164    | torch.clamp                                                                 | head.layers.10                                    | input               | qint16        | 0.0032739 | -63.8635674  | 63.7457047    | -0.5139163   | 427.2063293      | torch.Size([2, 6, 512, 8, 1])    |
| 1164    | torch.clamp                                                                 | head.layers.10                                    | output              | qint16        | 0.0032739 | 0.0000000    | 63.7457047    | 7.4262495    | 151.6109314      | torch.Size([2, 6, 512, 8, 1])    |
| 1165    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.10.reciprocal_op                      | input               | qint16        | 0.0032739 | 0.0000000    | 63.7457047    | 7.4262495    | 151.6109314      | torch.Size([2, 6, 512, 8, 1])    |
| 1165    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.10.reciprocal_op                      | output              | qint16        | 0.0003357 | 0.0157776    | 10.9996643    | 6.3122478    | 28.5627308       | torch.Size([2, 6, 512, 8, 1])    |
| 1166    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint16        | 0.0032739 | -103.0780945 | 102.2759933   | 0.2939344    | 385.9329224      | torch.Size([2, 6, 512, 8, 4])    |
| 1166    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint16        | 0.0032739 | -103.0780945 | 102.2759933   | 0.3455602    | 557.6915283      | torch.Size([2, 6, 512, 8, 2])    |
| 1167    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.point_mul                          | input_0             | qint16        | 0.0032739 | -103.0780945 | 102.2759933   | 0.3455602    | 557.6915283      | torch.Size([2, 6, 512, 8, 2])    |
| 1167    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.point_mul                          | input_1             | qint16        | 0.0003357 | 0.0157776    | 10.9996643    | 6.3122478    | 28.5627308       | torch.Size([2, 6, 512, 8, 1])    |
| 1167    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.point_mul                          | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.0988333    | 0.9518753        | torch.Size([2, 6, 512, 8, 2])    |
| 1168    | torch.Tensor.flatten                                                        | head.layers.10                                    | input               | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.0988333    | 0.9518753        | torch.Size([2, 6, 512, 8, 2])    |
| 1168    | torch.Tensor.flatten                                                        | head.layers.10                                    | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.0988333    | 0.9518753        | torch.Size([12, 512, 8, 2])      |
| 1169    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.10                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 1169    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.10                                    | input_1             | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.0988333    | 0.9518753        | torch.Size([12, 512, 8, 2])      |
| 1169    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.10                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([12, 256, 512, 8])    |
| 1170    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.feat_cat                           | input               | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([12, 256, 512, 8])    |
| 1170    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.feat_cat                           | output              | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([12, 256, 512, 8])    |
| 1171    | torch.Tensor.view                                                           | head.layers.10                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([12, 256, 512, 8])    |
| 1171    | torch.Tensor.view                                                           | head.layers.10                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([2, 6, 256, 512, 8])  |
| 1172    | torch.Tensor.permute                                                        | head.layers.10                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([2, 6, 256, 512, 8])  |
| 1172    | torch.Tensor.permute                                                        | head.layers.10                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([2, 512, 6, 8, 256])  |
| 1173    | torch.Tensor.contiguous                                                     | head.layers.10                                    | input               | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676581        | torch.Size([2, 512, 6, 8, 256])  |
| 1173    | torch.Tensor.contiguous                                                     | head.layers.10                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676583        | torch.Size([2, 512, 6, 8, 256])  |
| 1174    | torch.Tensor.view                                                           | head.layers.10                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676583        | torch.Size([2, 512, 6, 8, 256])  |
| 1174    | torch.Tensor.view                                                           | head.layers.10                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676583        | torch.Size([2, 512, 48, 256])    |
| 1175    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | input_0             | qint8         | 0.0048180 | 0.0000000    | 0.4769850     | 0.0206258    | 0.0011986        | torch.Size([2, 512, 48, 8])      |
| 1175    | torch.Tensor.__getitem__                                                    | head.layers.10                                    | output              | qint8         | 0.0048180 | 0.0000000    | 0.4769850     | 0.0206258    | 0.0011986        | torch.Size([2, 512, 48, 8, 1])   |
| 1176    | torch.Tensor.reshape                                                        | head.layers.10                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676583        | torch.Size([2, 512, 48, 256])    |
| 1176    | torch.Tensor.reshape                                                        | head.layers.10                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676583        | torch.Size([2, 512, 48, 8, 32])  |
| 1177    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.feat_mul                           | input_0             | qint8         | 0.0048180 | 0.0000000    | 0.4769850     | 0.0206258    | 0.0011986        | torch.Size([2, 512, 48, 8, 1])   |
| 1177    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.feat_mul                           | input_1             | qint8         | 0.2235520 | -28.6146584  | 28.1675549    | 0.0209296    | 2.6676583        | torch.Size([2, 512, 48, 8, 32])  |
| 1177    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.10.feat_mul                           | output              | qint8         | 0.0186602 | -2.3885057   | 2.3698454     | 0.0002569    | 0.0037161        | torch.Size([2, 512, 48, 8, 32])  |
| 1178    | torch.Tensor.view                                                           | head.layers.10                                    | input_0             | qint8         | 0.0186602 | -2.3885057   | 2.3698454     | 0.0002569    | 0.0037161        | torch.Size([2, 512, 48, 8, 32])  |
| 1178    | torch.Tensor.view                                                           | head.layers.10                                    | output              | qint8         | 0.0186602 | -2.3885057   | 2.3698454     | 0.0002569    | 0.0037161        | torch.Size([2, 512, 48, 256])    |
| 1179    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.10.feat_sum                           | input               | qint8         | 0.0186602 | -2.3885057   | 2.3698454     | 0.0002569    | 0.0037161        | torch.Size([2, 512, 48, 256])    |
| 1179    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.10.feat_sum                           | output              | qint8         | 0.0331845 | -4.2476125   | 4.2144279     | 0.0123258    | 0.3066379        | torch.Size([2, 512, 256])        |
| 1180    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.output_proj                        | input               | qint8         | 0.0331845 | -4.2476125   | 4.2144279     | 0.0123258    | 0.3066379        | torch.Size([2, 512, 256])        |
| 1180    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.output_proj                        | weight              | torch.float32 |           | -0.2663807   | 0.2879749     | 0.0001328    | 0.0059484        | torch.Size([256, 256])           |
| 1180    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.output_proj                        | bias                | torch.float32 |           | -0.0821608   | 0.1140266     | 0.0010564    | 0.0009855        | torch.Size([256])                |
| 1180    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.10.output_proj                        | output              | qint8         | 0.0357418 | -4.5749559   | 4.5392141     | 0.0296078    | 0.6975840        | torch.Size([2, 512, 256])        |
| 1181    | torch.nn.modules.dropout.Dropout                                            | head.layers.10.proj_drop                          | input               | qint8         | 0.0357418 | -4.5749559   | 4.5392141     | 0.0296078    | 0.6975840        | torch.Size([2, 512, 256])        |
| 1181    | torch.nn.modules.dropout.Dropout                                            | head.layers.10.proj_drop                          | output              | qint8         | 0.0357418 | -4.5749559   | 4.5392141     | 0.0296078    | 0.6975840        | torch.Size([2, 512, 256])        |
| 1182    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.residual_op                        | input_0             | qint8         | 0.0357418 | -4.5749559   | 4.5392141     | 0.0296078    | 0.6975840        | torch.Size([2, 512, 256])        |
| 1182    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.residual_op                        | input_1             | qint8         | 0.0580720 | -7.4332142   | 5.7491264     | -0.0022952   | 0.7825898        | torch.Size([2, 512, 256])        |
| 1182    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.10.residual_op                        | output              | qint8         | 0.0574541 | -7.3541203   | 5.7454066     | 0.0133034    | 0.7395365        | torch.Size([2, 512, 512])        |
| 1183    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.input_mean.mean           | input_0             | qint8         | 0.0574541 | -7.3541203   | 5.7454066     | 0.0133034    | 0.7395365        | torch.Size([2, 512, 512])        |
| 1183    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.input_mean.mean           | output              | qint16        | 0.0000031 | -0.0292889   | 0.0923524     | 0.0133033    | 0.0001893        | torch.Size([2, 512, 1])          |
| 1184    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.11.pre_norm.sub                       | input_0             | qint8         | 0.0574541 | -7.3541203   | 5.7454066     | 0.0133034    | 0.7395365        | torch.Size([2, 512, 512])        |
| 1184    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.11.pre_norm.sub                       | input_1             | qint16        | 0.0000031 | -0.0292889   | 0.0923524     | 0.0133033    | 0.0001893        | torch.Size([2, 512, 1])          |
| 1184    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.11.pre_norm.sub                       | output              | qint16        | 0.0002416 | -7.4464808   | 5.7403831     | 0.0000013    | 0.7393465        | torch.Size([2, 512, 512])        |
| 1185    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.mul                       | input_0             | qint16        | 0.0002416 | -7.4464808   | 5.7403831     | 0.0000013    | 0.7393465        | torch.Size([2, 512, 512])        |
| 1185    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.mul                       | input_1             | qint16        | 0.0002416 | -7.4464808   | 5.7403831     | 0.0000013    | 0.7393465        | torch.Size([2, 512, 512])        |
| 1185    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.mul                       | output              | qint16        | 0.0019125 | 0.0000000    | 55.4507675    | 0.7393736    | 9.6482916        | torch.Size([2, 512, 512])        |
| 1186    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.var_mean.mean             | input_0             | qint16        | 0.0019125 | 0.0000000    | 55.4507675    | 0.7393736    | 9.6482916        | torch.Size([2, 512, 512])        |
| 1186    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.var_mean.mean             | output              | qint16        | 0.0000611 | 0.4298313    | 2.0023150     | 0.7392018    | 0.0476000        | torch.Size([2, 512, 1])          |
| 1187    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.11.pre_norm.rsqrt                     | input               | qint16        | 0.0000611 | 0.4298313    | 2.0023150     | 0.7392018    | 0.0476000        | torch.Size([2, 512, 1])          |
| 1187    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.11.pre_norm.rsqrt                     | output              | qint16        | 0.0000472 | 0.7066966    | 1.5252727     | 1.1961827    | 0.0247469        | torch.Size([2, 512, 1])          |
| 1188    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.out_mul                   | input_0             | qint16        | 0.0002416 | -7.4464808   | 5.7403831     | 0.0000013    | 0.7393465        | torch.Size([2, 512, 512])        |
| 1188    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.out_mul                   | input_1             | qint16        | 0.0000472 | 0.7066966    | 1.5252727     | 1.1961827    | 0.0247469        | torch.Size([2, 512, 1])          |
| 1188    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.out_mul                   | output              | qint16        | 0.0003249 | -10.6106844  | 8.0549965     | 0.0000014    | 1.0000223        | torch.Size([2, 512, 512])        |
| 1189    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.11.pre_norm.weight_quant              | input               | torch.float32 |           | 0.7318589    | 1.5822344     | 1.0533838    | 0.0550146        | torch.Size([512])                |
| 1189    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.11.pre_norm.weight_quant              | output              | qint16        | 0.0000483 | 0.7318815    | 1.5822103     | 1.0533831    | 0.0550143        | torch.Size([512])                |
| 1190    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.weight_mul                | input_0             | qint16        | 0.0003249 | -10.6106844  | 8.0549965     | 0.0000014    | 1.0000223        | torch.Size([2, 512, 512])        |
| 1190    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.weight_mul                | input_1             | qint16        | 0.0000483 | 0.7318815    | 1.5822103     | 1.0533831    | 0.0550143        | torch.Size([512])                |
| 1190    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.weight_mul                | output              | qint16        | 0.0002579 | -8.4241552   | 5.8952842     | 0.0016128    | 0.7524478        | torch.Size([2, 512, 512])        |
| 1191    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.11.pre_norm.bias_quant                | input               | torch.float32 |           | -0.1939566   | 0.1783928     | -0.0027595   | 0.0020715        | torch.Size([512])                |
| 1191    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.11.pre_norm.bias_quant                | output              | qint16        | 0.0000059 | -0.1939595   | 0.1783921     | -0.0027596   | 0.0020715        | torch.Size([512])                |
| 1192    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.11.pre_norm.bias_add                  | input_0             | qint16        | 0.0002579 | -8.4241552   | 5.8952842     | 0.0016128    | 0.7524478        | torch.Size([2, 512, 512])        |
| 1192    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.11.pre_norm.bias_add                  | input_1             | qint16        | 0.0000059 | -0.1939595   | 0.1783921     | -0.0027596   | 0.0020715        | torch.Size([512])                |
| 1192    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.11.pre_norm.bias_add                  | output              | qint8         | 0.0564713 | -7.2283211   | 5.7035971     | -0.0012603   | 0.7283816        | torch.Size([2, 512, 512])        |
| 1193    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.0.0                         | input               | qint8         | 0.0564713 | -7.2283211   | 5.7035971     | -0.0012603   | 0.7283816        | torch.Size([2, 512, 512])        |
| 1193    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.0.0                         | weight              | torch.float32 |           | -0.5279155   | 0.4437539     | -0.0006416   | 0.0056500        | torch.Size([1024, 512])          |
| 1193    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.0.0                         | bias                | torch.float32 |           | -0.1276487   | 0.0716278     | -0.0487325   | 0.0010013        | torch.Size([1024])               |
| 1193    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.0.0                         | output              | torch.float32 |           | -18.8202457  | 8.9770298     | -2.9461215   | 6.6667404        | torch.Size([2, 512, 1024])       |
| 1194    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.11.activate                           | input               | torch.float32 |           | -18.8202457  | 8.9770298     | -2.9461215   | 6.6667404        | torch.Size([2, 512, 1024])       |
| 1194    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.11.activate                           | output              | qint8         | 0.0805294 | 0.0000000    | 8.9387636     | 0.1807297    | 0.4802148        | torch.Size([2, 512, 1024])       |
| 1195    | torch.nn.modules.dropout.Dropout                                            | head.layers.11.layers.0.2                         | input               | qint8         | 0.0805294 | 0.0000000    | 8.9387636     | 0.1807297    | 0.4802148        | torch.Size([2, 512, 1024])       |
| 1195    | torch.nn.modules.dropout.Dropout                                            | head.layers.11.layers.0.2                         | output              | qint8         | 0.0805294 | 0.0000000    | 8.9387636     | 0.1807297    | 0.4802148        | torch.Size([2, 512, 1024])       |
| 1196    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.1                           | input               | qint8         | 0.0805294 | 0.0000000    | 8.9387636     | 0.1807297    | 0.4802148        | torch.Size([2, 512, 1024])       |
| 1196    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.1                           | weight              | torch.float32 |           | -0.5053306   | 0.4998906     | 0.0001121    | 0.0056677        | torch.Size([256, 1024])          |
| 1196    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.1                           | bias                | torch.float32 |           | -0.0872618   | 0.0770759     | -0.0007722   | 0.0009508        | torch.Size([256])                |
| 1196    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.layers.1                           | output              | qint8         | 0.1432478 | -18.3357143  | 16.1869984    | 0.0321294    | 10.1522856       | torch.Size([2, 512, 256])        |
| 1197    | torch.nn.modules.dropout.Dropout                                            | head.layers.11.layers.2                           | input               | qint8         | 0.1432478 | -18.3357143  | 16.1869984    | 0.0321294    | 10.1522856       | torch.Size([2, 512, 256])        |
| 1197    | torch.nn.modules.dropout.Dropout                                            | head.layers.11.layers.2                           | output              | qint8         | 0.1432478 | -18.3357143  | 16.1869984    | 0.0321294    | 10.1522856       | torch.Size([2, 512, 256])        |
| 1198    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.identity_fc                        | input               | qint8         | 0.0564713 | -7.2283211   | 5.7035971     | -0.0012603   | 0.7283816        | torch.Size([2, 512, 512])        |
| 1198    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.identity_fc                        | weight              | torch.float32 |           | -0.4656178   | 0.4816367     | -0.0002583   | 0.0071310        | torch.Size([256, 512])           |
| 1198    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.identity_fc                        | bias                | torch.float32 |           | -0.1430661   | 0.0827197     | -0.0009835   | 0.0011322        | torch.Size([256])                |
| 1198    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.11.identity_fc                        | output              | torch.float32 |           | -19.1649742  | 10.4623175    | -0.0068982   | 8.2774839        | torch.Size([2, 512, 256])        |
| 1199    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.11.short_add                          | input_0             | torch.float32 |           | -19.1649742  | 10.4623175    | -0.0068982   | 8.2774839        | torch.Size([2, 512, 256])        |
| 1199    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.11.short_add                          | input_1             | qint8         | 0.1432478 | -18.3357143  | 16.1869984    | 0.0321294    | 10.1522856       | torch.Size([2, 512, 256])        |
| 1199    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.11.short_add                          | output              | qint8         | 0.1949557 | -24.9543285  | 20.8602581    | 0.0256702    | 21.9712601       | torch.Size([2, 512, 256])        |
| 1200    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.12.input_mean.mean                    | input_0             | qint8         | 0.1949557 | -24.9543285  | 20.8602581    | 0.0256702    | 21.9712601       | torch.Size([2, 512, 256])        |
| 1200    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.12.input_mean.mean                    | output              | qint16        | 0.0000108 | -0.2018073   | 0.2543561     | 0.0256696    | 0.0166417        | torch.Size([2, 512, 1])          |
| 1201    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.12.sub                                | input_0             | qint8         | 0.1949557 | -24.9543285  | 20.8602581    | 0.0256702    | 21.9712601       | torch.Size([2, 512, 256])        |
| 1201    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.12.sub                                | input_1             | qint16        | 0.0000108 | -0.2018073   | 0.2543561     | 0.0256696    | 0.0166417        | torch.Size([2, 512, 1])          |
| 1201    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.12.sub                                | output              | qint16        | 0.0009432 | -25.1709461  | 20.6428547    | 0.0000023    | 21.9546509       | torch.Size([2, 512, 256])        |
| 1202    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.mul                                | input_0             | qint16        | 0.0009432 | -25.1709461  | 20.6428547    | 0.0000023    | 21.9546509       | torch.Size([2, 512, 256])        |
| 1202    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.mul                                | input_1             | qint16        | 0.0009432 | -25.1709461  | 20.6428547    | 0.0000023    | 21.9546509       | torch.Size([2, 512, 256])        |
| 1202    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.mul                                | output              | qint16        | 0.0291805 | 0.0000000    | 633.5662231   | 21.9543457   | 2109.3110352     | torch.Size([2, 512, 256])        |
| 1203    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.12.var_mean.mean                      | input_0             | qint16        | 0.0291805 | 0.0000000    | 633.5662231   | 21.9543457   | 2109.3110352     | torch.Size([2, 512, 256])        |
| 1203    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.12.var_mean.mean                      | output              | qint16        | 0.0035570 | 5.7659345    | 54.1058769    | 21.9540138   | 333.1574707      | torch.Size([2, 512, 1])          |
| 1204    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.12.rsqrt                              | input               | qint16        | 0.0035570 | 5.7659345    | 54.1058769    | 21.9540138   | 333.1574707      | torch.Size([2, 512, 1])          |
| 1204    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.12.rsqrt                              | output              | qint16        | 0.0000123 | 0.1359515    | 0.4014710     | 0.2616949    | 0.0065101        | torch.Size([2, 512, 1])          |
| 1205    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.out_mul                            | input_0             | qint16        | 0.0009432 | -25.1709461  | 20.6428547    | 0.0000023    | 21.9546509       | torch.Size([2, 512, 256])        |
| 1205    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.out_mul                            | input_1             | qint16        | 0.0000123 | 0.1359515    | 0.4014710     | 0.2616949    | 0.0065101        | torch.Size([2, 512, 1])          |
| 1205    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.out_mul                            | output              | qint16        | 0.0001786 | -5.8538380   | 4.2722940     | 0.0000025    | 0.9997516        | torch.Size([2, 512, 256])        |
| 1206    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.12.weight_quant                       | input               | torch.float32 |           | 0.6993152    | 1.0544560     | 0.9030904    | 0.0036567        | torch.Size([256])                |
| 1206    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.12.weight_quant                       | output              | qint16        | 0.0000322 | 0.6993021    | 1.0544399     | 0.9030892    | 0.0036566        | torch.Size([256])                |
| 1207    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.weight_mul                         | input_0             | qint16        | 0.0001786 | -5.8538380   | 4.2722940     | 0.0000025    | 0.9997516        | torch.Size([2, 512, 256])        |
| 1207    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.weight_mul                         | input_1             | qint16        | 0.0000322 | 0.6993021    | 1.0544399     | 0.9030892    | 0.0036566        | torch.Size([256])                |
| 1207    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.12.weight_mul                         | output              | qint16        | 0.0001719 | -5.6318250   | 3.8928478     | -0.0000985   | 0.8268961        | torch.Size([2, 512, 256])        |
| 1208    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.12.bias_quant                         | input               | torch.float32 |           | -0.1003586   | 0.1476445     | 0.0017286    | 0.0014609        | torch.Size([256])                |
| 1208    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.12.bias_quant                         | output              | qint16        | 0.0000045 | -0.1003581   | 0.1476422     | 0.0017285    | 0.0014609        | torch.Size([256])                |
| 1209    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.12.bias_add                           | input_0             | qint16        | 0.0001719 | -5.6318250   | 3.8928478     | -0.0000985   | 0.8268961        | torch.Size([2, 512, 256])        |
| 1209    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.12.bias_add                           | input_1             | qint16        | 0.0000045 | -0.1003581   | 0.1476422     | 0.0017285    | 0.0014609        | torch.Size([256])                |
| 1209    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.12.bias_add                           | output              | qint8         | 0.0339342 | -4.3435817   | 3.9024367     | 0.0016676    | 0.8098754        | torch.Size([2, 512, 256])        |
| 1210    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.add1                               | input_0             | qint8         | 0.0339342 | -4.3435817   | 3.9024367     | 0.0016676    | 0.8098754        | torch.Size([2, 512, 256])        |
| 1210    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.add1                               | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0593189    | 0.8416965        | torch.Size([2, 512, 256])        |
| 1210    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.add1                               | output              | qint8         | 0.0603513 | -4.0435357   | 7.6646123     | 0.0611046    | 1.2924720        | torch.Size([2, 512, 256])        |
| 1211    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.0                           | input               | qint8         | 0.0603513 | -4.0435357   | 7.6646123     | 0.0611046    | 1.2924720        | torch.Size([2, 512, 256])        |
| 1211    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.0                           | weight              | torch.float32 |           | -0.6005406   | 0.4653489     | -0.0001235   | 0.0049280        | torch.Size([256, 256])           |
| 1211    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.0                           | bias                | torch.float32 |           | -0.2076813   | 0.0865848     | -0.0322298   | 0.0026380        | torch.Size([256])                |
| 1211    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.0                           | output              | torch.float32 |           | -10.7668085  | 11.1668148    | -0.7273505   | 5.2607045        | torch.Size([2, 512, 256])        |
| 1212    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.1                           | input               | torch.float32 |           | -10.7668085  | 11.1668148    | -0.7273505   | 5.2607045        | torch.Size([2, 512, 256])        |
| 1212    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.1                           | output              | qint8         | 0.0770955 | 0.0000000    | 9.7911234     | 0.5870714    | 1.1896847        | torch.Size([2, 512, 256])        |
| 1213    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.2                           | input               | qint8         | 0.0770955 | 0.0000000    | 9.7911234     | 0.5870714    | 1.1896847        | torch.Size([2, 512, 256])        |
| 1213    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.2                           | weight              | torch.float32 |           | -0.6167275   | 0.5256047     | -0.0056006   | 0.0049711        | torch.Size([256, 256])           |
| 1213    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.2                           | bias                | torch.float32 |           | -0.1263612   | 0.1803766     | -0.0060339   | 0.0029060        | torch.Size([256])                |
| 1213    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.2                           | output              | torch.float32 |           | -12.8403177  | 8.9602365     | -0.9046052   | 6.4299250        | torch.Size([2, 512, 256])        |
| 1214    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.3                           | input               | torch.float32 |           | -12.8403177  | 8.9602365     | -0.9046052   | 6.4299250        | torch.Size([2, 512, 256])        |
| 1214    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.3                           | output              | qint8         | 0.0832207 | 0.0000000    | 8.9878387     | 0.5758368    | 1.1891524        | torch.Size([2, 512, 256])        |
| 1215    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.input_mean.mean           | input_0             | qint8         | 0.0832207 | 0.0000000    | 8.9878387     | 0.5758368    | 1.1891524        | torch.Size([2, 512, 256])        |
| 1215    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.input_mean.mean           | output              | qint16        | 0.0000323 | 0.2568282    | 0.8416391     | 0.5758359    | 0.0183586        | torch.Size([2, 512, 1])          |
| 1216    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.13.layers.4.sub                       | input_0             | qint8         | 0.0832207 | 0.0000000    | 8.9878387     | 0.5758368    | 1.1891524        | torch.Size([2, 512, 256])        |
| 1216    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.13.layers.4.sub                       | input_1             | qint16        | 0.0000323 | 0.2568282    | 0.8416391     | 0.5758359    | 0.0183586        | torch.Size([2, 512, 1])          |
| 1216    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.13.layers.4.sub                       | output              | qint16        | 0.0004650 | -0.8417121   | 8.2129707     | 0.0000067    | 1.1708111        | torch.Size([2, 512, 256])        |
| 1217    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.mul                       | input_0             | qint16        | 0.0004650 | -0.8417121   | 8.2129707     | 0.0000067    | 1.1708111        | torch.Size([2, 512, 256])        |
| 1217    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.mul                       | input_1             | qint16        | 0.0004650 | -0.8417121   | 8.2129707     | 0.0000067    | 1.1708111        | torch.Size([2, 512, 256])        |
| 1217    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.mul                       | output              | qint16        | 0.0071376 | 0.0000000    | 67.4506607    | 1.1709062    | 11.3214808       | torch.Size([2, 512, 256])        |
| 1218    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.var_mean.mean             | input_0             | qint16        | 0.0071376 | 0.0000000    | 67.4506607    | 1.1709062    | 11.3214808       | torch.Size([2, 512, 256])        |
| 1218    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.var_mean.mean             | output              | qint16        | 0.0001333 | 0.1731409    | 2.4078448     | 1.1708949    | 0.5055616        | torch.Size([2, 512, 1])          |
| 1219    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.13.layers.4.rsqrt                     | input               | qint16        | 0.0001333 | 0.1731409    | 2.4078448     | 1.1708949    | 0.5055616        | torch.Size([2, 512, 1])          |
| 1219    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.13.layers.4.rsqrt                     | output              | qint16        | 0.0000608 | 0.6444448    | 1.9928768     | 1.0468550    | 0.0818212        | torch.Size([2, 512, 1])          |
| 1220    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.out_mul                   | input_0             | qint16        | 0.0004650 | -0.8417121   | 8.2129707     | 0.0000067    | 1.1708111        | torch.Size([2, 512, 256])        |
| 1220    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.out_mul                   | input_1             | qint16        | 0.0000608 | 0.6444448    | 1.9928768     | 1.0468550    | 0.0818212        | torch.Size([2, 512, 1])          |
| 1220    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.out_mul                   | output              | qint16        | 0.0002518 | -0.7271848   | 6.6370726     | 0.0000098    | 0.9997427        | torch.Size([2, 512, 256])        |
| 1221    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.4.weight_quant              | input               | torch.float32 |           | 0.6633201    | 1.2187128     | 0.9636809    | 0.0072749        | torch.Size([256])                |
| 1221    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.4.weight_quant              | output              | qint16        | 0.0000372 | 0.6633323    | 1.2186942     | 0.9636803    | 0.0072748        | torch.Size([256])                |
| 1222    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.weight_mul                | input_0             | qint16        | 0.0002518 | -0.7271848   | 6.6370726     | 0.0000098    | 0.9997427        | torch.Size([2, 512, 256])        |
| 1222    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.weight_mul                | input_1             | qint16        | 0.0000372 | 0.6633323    | 1.2186942     | 0.9636803    | 0.0072748        | torch.Size([256])                |
| 1222    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.weight_mul                | output              | qint16        | 0.0002599 | -0.8861288   | 6.9889836     | 0.0151181    | 0.9779812        | torch.Size([2, 512, 256])        |
| 1223    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.4.bias_quant                | input               | torch.float32 |           | -0.0931333   | 0.3241574     | 0.0448928    | 0.0063926        | torch.Size([256])                |
| 1223    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.4.bias_quant                | output              | qint16        | 0.0000099 | -0.0931294   | 0.3241524     | 0.0448926    | 0.0063926        | torch.Size([256])                |
| 1224    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.layers.4.bias_add                  | input_0             | qint16        | 0.0002599 | -0.8861288   | 6.9889836     | 0.0151181    | 0.9779812        | torch.Size([2, 512, 256])        |
| 1224    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.layers.4.bias_add                  | input_1             | qint16        | 0.0000099 | -0.0931294   | 0.3241524     | 0.0448926    | 0.0063926        | torch.Size([256])                |
| 1224    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.layers.4.bias_add                  | output              | qint8         | 0.0566182 | -0.8492731   | 6.9640393     | 0.0600450    | 0.9258547        | torch.Size([2, 512, 256])        |
| 1225    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.5                           | input               | qint8         | 0.0566182 | -0.8492731   | 6.9640393     | 0.0600450    | 0.9258547        | torch.Size([2, 512, 256])        |
| 1225    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.5                           | weight              | torch.float32 |           | -0.4115984   | 0.4671635     | 0.0042406    | 0.0040801        | torch.Size([256, 256])           |
| 1225    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.5                           | bias                | torch.float32 |           | -0.1536481   | 0.0778537     | -0.0241879   | 0.0025930        | torch.Size([256])                |
| 1225    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.5                           | output              | torch.float32 |           | -8.3282604   | 10.9784498    | -0.9817600   | 5.0233130        | torch.Size([2, 512, 256])        |
| 1226    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.6                           | input               | torch.float32 |           | -8.3282604   | 10.9784498    | -0.9817600   | 5.0233130        | torch.Size([2, 512, 256])        |
| 1226    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.6                           | output              | qint8         | 0.0781968 | 0.0000000    | 9.9309902     | 0.5273277    | 1.2988844        | torch.Size([2, 512, 256])        |
| 1227    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.7                           | input               | qint8         | 0.0781968 | 0.0000000    | 9.9309902     | 0.5273277    | 1.2988844        | torch.Size([2, 512, 256])        |
| 1227    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.7                           | weight              | torch.float32 |           | -0.6832550   | 0.4791626     | -0.0062377   | 0.0030764        | torch.Size([256, 256])           |
| 1227    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.7                           | bias                | torch.float32 |           | -0.1049601   | 0.1796888     | -0.0124101   | 0.0017829        | torch.Size([256])                |
| 1227    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.7                           | output              | torch.float32 |           | -13.9785843  | 29.0113621    | -1.7417307   | 8.0835085        | torch.Size([2, 512, 256])        |
| 1228    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.8                           | input               | torch.float32 |           | -13.9785843  | 29.0113621    | -1.7417307   | 8.0835085        | torch.Size([2, 512, 256])        |
| 1228    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.13.layers.8                           | output              | qint8         | 0.2279995 | 0.0000000    | 28.9559383    | 0.4330498    | 3.2582827        | torch.Size([2, 512, 256])        |
| 1229    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.input_mean.mean           | input_0             | qint8         | 0.2279995 | 0.0000000    | 28.9559383    | 0.4330498    | 3.2582827        | torch.Size([2, 512, 256])        |
| 1229    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.input_mean.mean           | output              | qint16        | 0.0000269 | 0.2564942    | 0.7107223     | 0.4330477    | 0.0139825        | torch.Size([2, 512, 1])          |
| 1230    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.13.layers.9.sub                       | input_0             | qint8         | 0.2279995 | 0.0000000    | 28.9559383    | 0.4330498    | 3.2582827        | torch.Size([2, 512, 256])        |
| 1230    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.13.layers.9.sub                       | input_1             | qint16        | 0.0000269 | 0.2564942    | 0.7107223     | 0.4330477    | 0.0139825        | torch.Size([2, 512, 1])          |
| 1230    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.13.layers.9.sub                       | output              | qint16        | 0.0009236 | -0.7111719   | 28.5817242    | -0.0000683   | 3.2443459        | torch.Size([2, 512, 256])        |
| 1231    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.mul                       | input_0             | qint16        | 0.0009236 | -0.7111719   | 28.5817242    | -0.0000683   | 3.2443459        | torch.Size([2, 512, 256])        |
| 1231    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.mul                       | input_1             | qint16        | 0.0009236 | -0.7111719   | 28.5817242    | -0.0000683   | 3.2443459        | torch.Size([2, 512, 256])        |
| 1231    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.mul                       | output              | qint16        | 0.0279645 | 0.0000000    | 816.9268799   | 3.2417574    | 887.3370361      | torch.Size([2, 512, 256])        |
| 1232    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.var_mean.mean             | input_0             | qint16        | 0.0279645 | 0.0000000    | 816.9268799   | 3.2417574    | 887.3370361      | torch.Size([2, 512, 256])        |
| 1232    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.var_mean.mean             | output              | qint16        | 0.0002213 | 0.8502802    | 5.7985926     | 3.2417765    | 0.7275240        | torch.Size([2, 512, 1])          |
| 1233    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.13.layers.9.rsqrt                     | input               | qint16        | 0.0002213 | 0.8502802    | 5.7985926     | 3.2417765    | 0.7275240        | torch.Size([2, 512, 1])          |
| 1233    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.13.layers.9.rsqrt                     | output              | qint16        | 0.0000437 | 0.4152634    | 1.0844638     | 0.5716932    | 0.0074352        | torch.Size([2, 512, 1])          |
| 1234    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.out_mul                   | input_0             | qint16        | 0.0009236 | -0.7111719   | 28.5817242    | -0.0000683   | 3.2443459        | torch.Size([2, 512, 256])        |
| 1234    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.out_mul                   | input_1             | qint16        | 0.0000437 | 0.4152634    | 1.0844638     | 0.5716932    | 0.0074352        | torch.Size([2, 512, 1])          |
| 1234    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.out_mul                   | output              | qint16        | 0.0003842 | -0.4733163   | 12.5885992    | -0.0000136   | 1.0008072        | torch.Size([2, 512, 256])        |
| 1235    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.9.weight_quant              | input               | torch.float32 |           | 0.8125745    | 1.0292015     | 0.9108959    | 0.0012828        | torch.Size([256])                |
| 1235    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.9.weight_quant              | output              | qint16        | 0.0000314 | 0.8125879    | 1.0291859     | 0.9108953    | 0.0012828        | torch.Size([256])                |
| 1236    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.weight_mul                | input_0             | qint16        | 0.0003842 | -0.4733163   | 12.5885992    | -0.0000136   | 1.0008072        | torch.Size([2, 512, 256])        |
| 1236    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.weight_mul                | input_1             | qint16        | 0.0000314 | 0.8125879    | 1.0291859     | 0.9108953    | 0.0012828        | torch.Size([256])                |
| 1236    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.weight_mul                | output              | qint16        | 0.0003199 | -0.4859902   | 10.4835024    | -0.0008445   | 0.7611462        | torch.Size([2, 512, 256])        |
| 1237    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.9.bias_quant                | input               | torch.float32 |           | -0.1482258   | 0.1146019     | 0.0601919    | 0.0022211        | torch.Size([256])                |
| 1237    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.9.bias_quant                | output              | qint16        | 0.0000045 | -0.1482236   | 0.1145999     | 0.0601918    | 0.0022211        | torch.Size([256])                |
| 1238    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.layers.9.bias_add                  | input_0             | qint16        | 0.0003199 | -0.4859902   | 10.4835024    | -0.0008445   | 0.7611462        | torch.Size([2, 512, 256])        |
| 1238    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.layers.9.bias_add                  | input_1             | qint16        | 0.0000045 | -0.1482236   | 0.1145999     | 0.0601918    | 0.0022211        | torch.Size([256])                |
| 1238    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.layers.9.bias_add                  | output              | qint8         | 0.0753963 | -0.5277742   | 9.5753317     | 0.0587015    | 0.7173378        | torch.Size([2, 512, 256])        |
| 1239    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.10                          | input               | qint8         | 0.0753963 | -0.5277742   | 9.5753317     | 0.0587015    | 0.7173378        | torch.Size([2, 512, 256])        |
| 1239    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.10                          | weight              | torch.float32 |           | -0.3740715   | 0.2434908     | -0.0008235   | 0.0021038        | torch.Size([11, 256])            |
| 1239    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.10                          | bias                | torch.float32 |           | -0.0558710   | 0.0500459     | -0.0099527   | 0.0010864        | torch.Size([11])                 |
| 1239    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.13.layers.10                          | output              | qint16        | 0.0002766 | -6.7434373   | 6.6184053     | -0.0192920   | 0.8220431        | torch.Size([2, 512, 11])         |
| 1240    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.11.scale_quant_stub         | input               | torch.float32 |           | 0.1286822    | 0.7985592     | 0.4143039    | 0.0426970        | torch.Size([11])                 |
| 1240    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.13.layers.11.scale_quant_stub         | output              | qint16        | 0.0000244 | 0.1286761    | 0.7985470     | 0.4143001    | 0.0426961        | torch.Size([11])                 |
| 1241    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.11.mul                      | input_0             | qint16        | 0.0002766 | -6.7434373   | 6.6184053     | -0.0192920   | 0.8220431        | torch.Size([2, 512, 11])         |
| 1241    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.11.mul                      | input_1             | qint16        | 0.0000244 | 0.1286761    | 0.7985470     | 0.4143001    | 0.0426961        | torch.Size([11])                 |
| 1241    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.13.layers.11.mul                      | output              | qint16        | 0.0001893 | -5.3850121   | 5.2850618     | -0.0073840   | 0.3351027        | torch.Size([2, 512, 11])         |
| 1242    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.add2                               | input_0             | qint16        | 0.0001893 | -5.3850121   | 5.2850618     | -0.0073840   | 0.3351027        | torch.Size([2, 512, 11])         |
| 1242    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.add2                               | input_1             | qint16        | 0.0017927 | -53.6453590  | 53.2904015    | 0.2159878    | 74.8075333       | torch.Size([2, 512, 11])         |
| 1242    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.13.add2                               | output              | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1243    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(1)                                   | input               | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1243    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(1)                                   | output              | torch.float32 |           | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1244    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1244    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.7208947    | 272.6247253      | torch.Size([2, 512, 3])          |
| 1245    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(3)                   | input               | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.7208947    | 272.6247253      | torch.Size([2, 512, 3])          |
| 1245    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(3)                   | weight              | torch.float32 |           | -0.9216561   | 0.9167990     | -0.0046354   | 0.1373587        | torch.Size([128, 3])             |
| 1245    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(3)                   | bias                | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3650480        | torch.Size([128])                |
| 1245    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(3)                   | output              | torch.float32 |           | -32.9889145  | 34.3895912    | -0.1116314   | 66.5919418       | torch.Size([2, 512, 128])        |
| 1246    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(3)                   | input               | torch.float32 |           | -32.9889145  | 34.3895912    | -0.1116314   | 66.5919418       | torch.Size([2, 512, 128])        |
| 1246    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(3)                   | output              | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.7801385    | 24.5497551       | torch.Size([2, 512, 128])        |
| 1247    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(3)   | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.7801385    | 24.5497551       | torch.Size([2, 512, 128])        |
| 1247    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(3)   | output              | qint16        | 0.0002498 | 0.2245532    | 7.2646317     | 2.7801309    | 3.9430490        | torch.Size([2, 512, 1])          |
| 1248    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(3)               | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.7801385    | 24.5497551       | torch.Size([2, 512, 128])        |
| 1248    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(3)               | input_1             | qint16        | 0.0002498 | 0.2245532    | 7.2646317     | 2.7801309    | 3.9430490        | torch.Size([2, 512, 1])          |
| 1248    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(3)               | output              | qint16        | 0.0008924 | -7.2643528   | 27.5135136    | -0.0000366   | 20.6106052       | torch.Size([2, 512, 128])        |
| 1249    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(3)               | input_0             | qint16        | 0.0008924 | -7.2643528   | 27.5135136    | -0.0000366   | 20.6106052       | torch.Size([2, 512, 128])        |
| 1249    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(3)               | input_1             | qint16        | 0.0008924 | -7.2643528   | 27.5135136    | -0.0000366   | 20.6106052       | torch.Size([2, 512, 128])        |
| 1249    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(3)               | output              | qint16        | 0.0261809 | 0.0000000    | 756.9931030   | 20.6101646   | 2373.3837891     | torch.Size([2, 512, 128])        |
| 1250    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(3)     | input_0             | qint16        | 0.0261809 | 0.0000000    | 756.9931030   | 20.6101646   | 2373.3837891     | torch.Size([2, 512, 128])        |
| 1250    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(3)     | output              | qint16        | 0.0029473 | 0.1061030    | 73.5765610    | 20.6100597   | 435.6549683      | torch.Size([2, 512, 1])          |
| 1251    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(3)             | input               | qint16        | 0.0029473 | 0.1061030    | 73.5765610    | 20.6100597   | 435.6549683      | torch.Size([2, 512, 1])          |
| 1251    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(3)             | output              | qint16        | 0.0000538 | 0.1165914    | 1.7621539     | 0.6470534    | 0.4508688        | torch.Size([2, 512, 1])          |
| 1252    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(3)           | input_0             | qint16        | 0.0008924 | -7.2643528   | 27.5135136    | -0.0000366   | 20.6106052       | torch.Size([2, 512, 128])        |
| 1252    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(3)           | input_1             | qint16        | 0.0000538 | 0.1165914    | 1.7621539     | 0.6470534    | 0.4508688        | torch.Size([2, 512, 1])          |
| 1252    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(3)           | output              | qint16        | 0.0001192 | -0.8861142   | 3.8429675     | -0.0000627   | 0.8340458        | torch.Size([2, 512, 128])        |
| 1253    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(3)      | input               | torch.float32 |           | 0.7278287    | 1.3287159     | 0.9627235    | 0.0086877        | torch.Size([128])                |
| 1253    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(3)      | output              | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 1254    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(3)        | input_0             | qint16        | 0.0001192 | -0.8861142   | 3.8429675     | -0.0000627   | 0.8340458        | torch.Size([2, 512, 128])        |
| 1254    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(3)        | input_1             | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 1254    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(3)        | output              | qint16        | 0.0001208 | -1.0519651   | 3.7477014     | -0.0015810   | 0.7731021        | torch.Size([2, 512, 128])        |
| 1255    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(3)        | input               | torch.float32 |           | -0.0562531   | 0.0804052     | 0.0088204    | 0.0005294        | torch.Size([128])                |
| 1255    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(3)        | output              | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 1256    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(3)          | input_0             | qint16        | 0.0001208 | -1.0519651   | 3.7477014     | -0.0015810   | 0.7731021        | torch.Size([2, 512, 128])        |
| 1256    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(3)          | input_1             | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 1256    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(3)          | output              | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0072195    | 0.7681515        | torch.Size([2, 512, 128])        |
| 1257    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(3)                   | input               | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0072195    | 0.7681515        | torch.Size([2, 512, 128])        |
| 1257    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(3)                   | weight              | torch.float32 |           | -0.3750711   | 0.3968706     | 0.0019093    | 0.0048458        | torch.Size([128, 128])           |
| 1257    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(3)                   | bias                | torch.float32 |           | -0.1863807   | 0.1385574     | -0.0156467   | 0.0047256        | torch.Size([128])                |
| 1257    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(3)                   | output              | torch.float32 |           | -5.5598001   | 6.0832496     | -0.1083819   | 1.8630155        | torch.Size([2, 512, 128])        |
| 1258    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(3)                   | input               | torch.float32 |           | -5.5598001   | 6.0832496     | -0.1083819   | 1.8630155        | torch.Size([2, 512, 128])        |
| 1258    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(3)                   | output              | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.4982578    | 0.6260052        | torch.Size([2, 512, 128])        |
| 1259    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(3)   | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.4982578    | 0.6260052        | torch.Size([2, 512, 128])        |
| 1259    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(3)   | output              | qint16        | 0.0000298 | 0.2860329    | 0.8808087     | 0.4982585    | 0.0284061        | torch.Size([2, 512, 1])          |
| 1260    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(3)               | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.4982578    | 0.6260052        | torch.Size([2, 512, 128])        |
| 1260    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(3)               | input_1             | qint16        | 0.0000298 | 0.2860329    | 0.8808087     | 0.4982585    | 0.0284061        | torch.Size([2, 512, 1])          |
| 1260    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(3)               | output              | qint16        | 0.0001641 | -0.8808311   | 5.1031418     | -0.0000071   | 0.5976393        | torch.Size([2, 512, 128])        |
| 1261    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(3)               | input_0             | qint16        | 0.0001641 | -0.8808311   | 5.1031418     | -0.0000071   | 0.5976393        | torch.Size([2, 512, 128])        |
| 1261    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(3)               | input_1             | qint16        | 0.0001641 | -0.8808311   | 5.1031418     | -0.0000071   | 0.5976393        | torch.Size([2, 512, 128])        |
| 1261    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(3)               | output              | qint16        | 0.0008856 | 0.0000000    | 26.0421238    | 0.5976102    | 2.3989873        | torch.Size([2, 512, 128])        |
| 1262    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(3)     | input_0             | qint16        | 0.0008856 | 0.0000000    | 26.0421238    | 0.5976102    | 2.3989873        | torch.Size([2, 512, 128])        |
| 1262    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(3)     | output              | qint16        | 0.0000499 | 0.3062076    | 1.3796561     | 0.5976137    | 0.0436291        | torch.Size([2, 512, 1])          |
| 1263    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(3)             | input               | qint16        | 0.0000499 | 0.3062076    | 1.3796561     | 0.5976137    | 0.0436291        | torch.Size([2, 512, 1])          |
| 1263    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(3)             | output              | qint16        | 0.0000553 | 0.8513406    | 1.8070940     | 1.3530324    | 0.0539699        | torch.Size([2, 512, 1])          |
| 1264    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(3)           | input_0             | qint16        | 0.0001641 | -0.8808311   | 5.1031418     | -0.0000071   | 0.5976393        | torch.Size([2, 512, 128])        |
| 1264    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(3)           | input_1             | qint16        | 0.0000553 | 0.8513406    | 1.8070940     | 1.3530324    | 0.0539699        | torch.Size([2, 512, 1])          |
| 1264    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(3)           | output              | qint16        | 0.0002164 | -0.8049729   | 7.0521169     | -0.0000087   | 1.0000083        | torch.Size([2, 512, 128])        |
| 1265    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(3)      | input               | torch.float32 |           | 0.5925044    | 1.4726304     | 0.9182085    | 0.0175060        | torch.Size([128])                |
| 1265    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(3)      | output              | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 1266    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(3)        | input_0             | qint16        | 0.0002164 | -0.8049729   | 7.0521169     | -0.0000087   | 1.0000083        | torch.Size([2, 512, 128])        |
| 1266    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(3)        | input_1             | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 1266    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(3)        | output              | qint16        | 0.0002127 | -0.9417843   | 6.9293885     | 0.0361066    | 0.9538843        | torch.Size([2, 512, 128])        |
| 1267    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(3)        | input               | torch.float32 |           | -0.0644210   | 0.2426097     | 0.0318023    | 0.0030999        | torch.Size([128])                |
| 1267    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(3)        | output              | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 1268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(3)          | input_0             | qint16        | 0.0002127 | -0.9417843   | 6.9293885     | 0.0361066    | 0.9538843        | torch.Size([2, 512, 128])        |
| 1268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(3)          | input_1             | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 1268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(3)          | output              | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0677261    | 0.9285302        | torch.Size([2, 512, 128])        |
| 1269    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(3)                   | input               | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0677261    | 0.9285302        | torch.Size([2, 512, 128])        |
| 1269    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(3)                   | weight              | torch.float32 |           | -0.7504157   | 0.4182976     | -0.0024651   | 0.0052447        | torch.Size([128, 128])           |
| 1269    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(3)                   | bias                | torch.float32 |           | -0.1397866   | 0.1210779     | 0.0064616    | 0.0040949        | torch.Size([128])                |
| 1269    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(3)                   | output              | torch.float32 |           | -9.3688898   | 6.9567556     | -0.0398981   | 4.9529819        | torch.Size([2, 512, 128])        |
| 1270    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(3)                   | input               | torch.float32 |           | -9.3688898   | 6.9567556     | -0.0398981   | 4.9529819        | torch.Size([2, 512, 128])        |
| 1270    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(3)                   | output              | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8339832    | 1.5563823        | torch.Size([2, 512, 128])        |
| 1271    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(3)   | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8339832    | 1.5563823        | torch.Size([2, 512, 128])        |
| 1271    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(3)   | output              | qint16        | 0.0000319 | 0.5480659    | 1.0447656     | 0.7690068    | 0.0279580        | torch.Size([2, 512, 1])          |
| 1272    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(3)               | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8339832    | 1.5563823        | torch.Size([2, 512, 128])        |
| 1272    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(3)               | input_1             | qint16        | 0.0000319 | 0.5480659    | 1.0447656     | 0.7690068    | 0.0279580        | torch.Size([2, 512, 1])          |
| 1272    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(3)               | output              | qint16        | 0.0001844 | -1.0447190   | 5.6260624     | 0.0649785    | 1.4926023        | torch.Size([2, 512, 128])        |
| 1273    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(3)               | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6260624     | 0.0649785    | 1.4926023        | torch.Size([2, 512, 128])        |
| 1273    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(3)               | input_1             | qint16        | 0.0001844 | -1.0447190   | 5.6260624     | 0.0649785    | 1.4926023        | torch.Size([2, 512, 128])        |
| 1273    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(3)               | output              | qint16        | 0.0011151 | 0.0000000    | 31.6521015    | 1.4968327    | 10.6262493       | torch.Size([2, 512, 128])        |
| 1274    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(3)     | input_0             | qint16        | 0.0011151 | 0.0000000    | 31.6521015    | 1.4968327    | 10.6262493       | torch.Size([2, 512, 128])        |
| 1274    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(3)     | output              | qint16        | 0.0000656 | 0.8236820    | 2.1495371     | 1.3581433    | 0.2304609        | torch.Size([2, 512, 1])          |
| 1275    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(3)             | input               | qint16        | 0.0000656 | 0.8236820    | 2.1495371     | 1.3581433    | 0.2304609        | torch.Size([2, 512, 1])          |
| 1275    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(3)             | output              | qint16        | 0.0000338 | 0.6820595    | 1.1018353     | 0.8932279    | 0.0183575        | torch.Size([2, 512, 1])          |
| 1276    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(3)           | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6260624     | 0.0649785    | 1.4926023        | torch.Size([2, 512, 128])        |
| 1276    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(3)           | input_1             | qint16        | 0.0000338 | 0.6820595    | 1.1018353     | 0.8932279    | 0.0183575        | torch.Size([2, 512, 1])          |
| 1276    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(3)           | output              | qint16        | 0.0001537 | -0.7515672   | 5.0199776     | 0.0443225    | 1.0625377        | torch.Size([2, 512, 128])        |
| 1277    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(3)      | input               | torch.float32 |           | 0.7673740    | 1.1249810     | 0.9671495    | 0.0053221        | torch.Size([128])                |
| 1277    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(3)      | output              | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 1278    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(3)        | input_0             | qint16        | 0.0001537 | -0.7515672   | 5.0199776     | 0.0443225    | 1.0625377        | torch.Size([2, 512, 128])        |
| 1278    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(3)        | input_1             | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 1278    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(3)        | output              | qint16        | 0.0001601 | -0.8455149   | 5.2325239     | 0.0591466    | 1.0595031        | torch.Size([2, 512, 128])        |
| 1279    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(3)        | input               | torch.float32 |           | -0.0537279   | 0.1594015     | 0.0216380    | 0.0014148        | torch.Size([128])                |
| 1279    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(3)        | output              | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 1280    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(3)          | input_0             | qint16        | 0.0001601 | -0.8455149   | 5.2325239     | 0.0591466    | 1.0595031        | torch.Size([2, 512, 128])        |
| 1280    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(3)          | input_1             | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 1280    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(3)          | output              | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0806297    | 1.0439591        | torch.Size([2, 512, 128])        |
| 1281    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(3)                   | input               | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0806297    | 1.0439591        | torch.Size([2, 512, 128])        |
| 1281    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(3)                   | weight              | torch.float32 |           | -0.4264432   | 0.3183554     | 0.0005866    | 0.0053991        | torch.Size([128, 128])           |
| 1281    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(3)                   | bias                | torch.float32 |           | -0.1690418   | 0.1536980     | -0.0166056   | 0.0039884        | torch.Size([128])                |
| 1281    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(3)                   | output              | torch.float32 |           | -11.7958250  | 10.7459002    | -0.4283889   | 4.7849817        | torch.Size([2, 512, 128])        |
| 1282    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(3)                  | input               | torch.float32 |           | -11.7958250  | 10.7459002    | -0.4283889   | 4.7849817        | torch.Size([2, 512, 128])        |
| 1282    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(3)                  | output              | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6501983    | 1.6184459        | torch.Size([2, 512, 128])        |
| 1283    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(3)  | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6501983    | 1.6184459        | torch.Size([2, 512, 128])        |
| 1283    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(3)  | output              | qint16        | 0.0000231 | 0.5241749    | 0.7555045     | 0.6470237    | 0.0054732        | torch.Size([2, 512, 1])          |
| 1284    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(3)              | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.4939823    | 0.6501983    | 1.6184459        | torch.Size([2, 512, 128])        |
| 1284    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(3)              | input_1             | qint16        | 0.0000231 | 0.5241749    | 0.7555045     | 0.6470237    | 0.0054732        | torch.Size([2, 512, 1])          |
| 1284    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(3)              | output              | qint16        | 0.0003154 | -0.7554005   | 9.9331217     | 0.0031861    | 1.6122662        | torch.Size([2, 512, 128])        |
| 1285    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(3)              | input_0             | qint16        | 0.0003154 | -0.7554005   | 9.9331217     | 0.0031861    | 1.6122662        | torch.Size([2, 512, 128])        |
| 1285    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(3)              | input_1             | qint16        | 0.0003154 | -0.7554005   | 9.9331217     | 0.0031861    | 1.6122662        | torch.Size([2, 512, 128])        |
| 1285    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(3)              | output              | qint16        | 0.0032599 | 0.0000000    | 98.6659317    | 1.6122932    | 26.5271263       | torch.Size([2, 512, 128])        |
| 1286    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(3)    | input_0             | qint16        | 0.0032599 | 0.0000000    | 98.6659317    | 1.6122932    | 26.5271263       | torch.Size([2, 512, 128])        |
| 1286    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(3)    | output              | qint16        | 0.0000598 | 1.0465882    | 1.9544042     | 1.6122880    | 0.0216715        | torch.Size([2, 512, 1])          |
| 1287    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(3)            | input               | qint16        | 0.0000598 | 1.0465882    | 1.9544042     | 1.6122880    | 0.0216715        | torch.Size([2, 512, 1])          |
| 1287    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(3)            | output              | qint16        | 0.0000315 | 0.7153048    | 0.9774814     | 0.7901160    | 0.0014153        | torch.Size([2, 512, 1])          |
| 1288    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(3)          | input_0             | qint16        | 0.0003154 | -0.7554005   | 9.9331217     | 0.0031861    | 1.6122662        | torch.Size([2, 512, 128])        |
| 1288    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(3)          | input_1             | qint16        | 0.0000315 | 0.7153048    | 0.9774814     | 0.7901160    | 0.0014153        | torch.Size([2, 512, 1])          |
| 1288    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(3)          | output              | qint16        | 0.0002431 | -0.6125473   | 7.6663213     | 0.0025251    | 0.9999872        | torch.Size([2, 512, 128])        |
| 1289    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(3)     | input               | torch.float32 |           | 0.7088336    | 1.4002132     | 0.9292046    | 0.0145085        | torch.Size([128])                |
| 1289    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(3)     | output              | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 1290    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(3)       | input_0             | qint16        | 0.0002431 | -0.6125473   | 7.6663213     | 0.0025251    | 0.9999872        | torch.Size([2, 512, 128])        |
| 1290    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(3)       | input_1             | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 1290    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(3)       | output              | qint16        | 0.0002455 | -0.8498029   | 7.7439852     | 0.0110079    | 0.9015369        | torch.Size([2, 512, 128])        |
| 1291    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(3)       | input               | torch.float32 |           | -0.0965041   | 0.2669707     | 0.0619903    | 0.0064956        | torch.Size([128])                |
| 1291    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(3)       | output              | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 1292    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(3)         | input_0             | qint16        | 0.0002455 | -0.8498029   | 7.7439852     | 0.0110079    | 0.9015369        | torch.Size([2, 512, 128])        |
| 1292    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(3)         | input_1             | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 1292    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(3)         | output              | qint8         | 0.0587279 | -0.8809187   | 7.4584455     | 0.0732871    | 0.8685381        | torch.Size([2, 512, 128])        |
| 1293    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1293    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017920 | -0.9641003   | 2.7095160     | 0.2900836    | 0.3665981        | torch.Size([2, 512, 3])          |
| 1294    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(3)                  | input               | qint16        | 0.0017920 | -0.9641003   | 2.7095160     | 0.2900836    | 0.3665981        | torch.Size([2, 512, 3])          |
| 1294    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(3)                  | weight              | torch.float32 |           | -0.8288664   | 0.6362330     | 0.0683853    | 0.1118651        | torch.Size([32, 3])              |
| 1294    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(3)                  | bias                | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1068659        | torch.Size([32])                 |
| 1294    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(3)                  | output              | torch.float32 |           | -1.9688500   | 2.3345158     | 0.1231696    | 0.2333593        | torch.Size([2, 512, 32])         |
| 1295    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(3)                  | input               | torch.float32 |           | -1.9688500   | 2.3345158     | 0.1231696    | 0.2333593        | torch.Size([2, 512, 32])         |
| 1295    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(3)                  | output              | qint8         | 0.0194126 | 0.0000000    | 2.3295119     | 0.2609302    | 0.0991587        | torch.Size([2, 512, 32])         |
| 1296    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(3)  | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.3295119     | 0.2609302    | 0.0991587        | torch.Size([2, 512, 32])         |
| 1296    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(3)  | output              | qint16        | 0.0000252 | 0.1631796    | 0.6697254     | 0.2609308    | 0.0128065        | torch.Size([2, 512, 1])          |
| 1297    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(3)              | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.3295119     | 0.2609302    | 0.0991587        | torch.Size([2, 512, 32])         |
| 1297    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(3)              | input_1             | qint16        | 0.0000252 | 0.1631796    | 0.6697254     | 0.2609308    | 0.0128065        | torch.Size([2, 512, 1])          |
| 1297    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(3)              | output              | qint16        | 0.0000639 | -0.6697122   | 1.6597716     | -0.0000022   | 0.0863645        | torch.Size([2, 512, 32])         |
| 1298    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(3)              | input_0             | qint16        | 0.0000639 | -0.6697122   | 1.6597716     | -0.0000022   | 0.0863645        | torch.Size([2, 512, 32])         |
| 1298    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(3)              | input_1             | qint16        | 0.0000639 | -0.6697122   | 1.6597716     | -0.0000022   | 0.0863645        | torch.Size([2, 512, 32])         |
| 1298    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(3)              | output              | qint16        | 0.0001394 | 0.0000000    | 2.7548642     | 0.0863639    | 0.0266801        | torch.Size([2, 512, 32])         |
| 1299    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(3)    | input_0             | qint16        | 0.0001394 | 0.0000000    | 2.7548642     | 0.0863639    | 0.0266801        | torch.Size([2, 512, 32])         |
| 1299    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(3)    | output              | qint16        | 0.0000212 | 0.0316099    | 0.4322489     | 0.0863640    | 0.0048504        | torch.Size([2, 512, 1])          |
| 1300    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(3)            | input               | qint16        | 0.0000212 | 0.0316099    | 0.4322489     | 0.0863640    | 0.0048504        | torch.Size([2, 512, 1])          |
| 1300    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(3)            | output              | qint16        | 0.0001649 | 1.5209959    | 5.4031301     | 4.0196862    | 1.3495189        | torch.Size([2, 512, 1])          |
| 1301    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(3)          | input_0             | qint16        | 0.0000639 | -0.6697122   | 1.6597716     | -0.0000022   | 0.0863645        | torch.Size([2, 512, 32])         |
| 1301    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(3)          | input_1             | qint16        | 0.0001649 | 1.5209959    | 5.4031301     | 4.0196862    | 1.3495189        | torch.Size([2, 512, 1])          |
| 1301    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(3)          | output              | qint16        | 0.0000919 | -1.1062200   | 3.0128427     | -0.0000181   | 0.9823112        | torch.Size([2, 512, 32])         |
| 1302    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(3)     | input               | torch.float32 |           | 0.8401937    | 1.1936733     | 0.9969203    | 0.0071658        | torch.Size([32])                 |
| 1302    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(3)     | output              | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 1303    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(3)       | input_0             | qint16        | 0.0000919 | -1.1062200   | 3.0128427     | -0.0000181   | 0.9823112        | torch.Size([2, 512, 32])         |
| 1303    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(3)       | input_1             | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 1303    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(3)       | output              | qint16        | 0.0001022 | -1.3204736   | 3.2300847     | 0.0066526    | 0.9668367        | torch.Size([2, 512, 32])         |
| 1304    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(3)       | input               | torch.float32 |           | -0.1003950   | 0.1085345     | 0.0035262    | 0.0030721        | torch.Size([32])                 |
| 1304    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(3)       | output              | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 1305    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(3)         | input_0             | qint16        | 0.0001022 | -1.3204736   | 3.2300847     | 0.0066526    | 0.9668367        | torch.Size([2, 512, 32])         |
| 1305    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(3)         | input_1             | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 1305    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(3)         | output              | qint8         | 0.0232598 | -1.3025488   | 2.9539945     | 0.0099221    | 0.9072723        | torch.Size([2, 512, 32])         |
| 1306    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(3)                  | input               | qint8         | 0.0232598 | -1.3025488   | 2.9539945     | 0.0099221    | 0.9072723        | torch.Size([2, 512, 32])         |
| 1306    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(3)                  | weight              | torch.float32 |           | -0.5793310   | 0.5422795     | -0.0032135   | 0.0176575        | torch.Size([32, 32])             |
| 1306    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(3)                  | bias                | torch.float32 |           | -0.1716317   | 0.2230143     | 0.0007250    | 0.0126328        | torch.Size([32])                 |
| 1306    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(3)                  | output              | torch.float32 |           | -4.2815003   | 2.1510110     | -0.1940392   | 1.3913602        | torch.Size([2, 512, 32])         |
| 1307    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(3)                  | input               | torch.float32 |           | -4.2815003   | 2.1510110     | -0.1940392   | 1.3913602        | torch.Size([2, 512, 32])         |
| 1307    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(3)                  | output              | qint8         | 0.0172935 | 0.0000000    | 2.1443977     | 0.3678469    | 0.2586789        | torch.Size([2, 512, 32])         |
| 1308    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(3)  | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1443977     | 0.3678469    | 0.2586789        | torch.Size([2, 512, 32])         |
| 1308    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(3)  | output              | qint16        | 0.0000141 | 0.2680463    | 0.4274683     | 0.3678460    | 0.0008650        | torch.Size([2, 512, 1])          |
| 1309    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(3)              | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1443977     | 0.3678469    | 0.2586789        | torch.Size([2, 512, 32])         |
| 1309    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(3)              | input_1             | qint16        | 0.0000141 | 0.2680463    | 0.4274683     | 0.3678460    | 0.0008650        | torch.Size([2, 512, 1])          |
| 1309    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(3)              | output              | qint16        | 0.0000617 | -0.4274619   | 1.8179939     | 0.0000003    | 0.2578149        | torch.Size([2, 512, 32])         |
| 1310    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(3)              | input_0             | qint16        | 0.0000617 | -0.4274619   | 1.8179939     | 0.0000003    | 0.2578149        | torch.Size([2, 512, 32])         |
| 1310    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(3)              | input_1             | qint16        | 0.0000617 | -0.4274619   | 1.8179939     | 0.0000003    | 0.2578149        | torch.Size([2, 512, 32])         |
| 1310    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(3)              | output              | qint16        | 0.0001252 | 0.0000000    | 3.3050396     | 0.2578084    | 0.1950251        | torch.Size([2, 512, 32])         |
| 1311    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(3)    | input_0             | qint16        | 0.0001252 | 0.0000000    | 3.3050396     | 0.2578084    | 0.1950251        | torch.Size([2, 512, 32])         |
| 1311    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(3)    | output              | qint16        | 0.0000132 | 0.1529647    | 0.3676286     | 0.2578084    | 0.0038274        | torch.Size([2, 512, 1])          |
| 1312    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(3)            | input               | qint16        | 0.0000132 | 0.1529647    | 0.3676286     | 0.2578084    | 0.0038274        | torch.Size([2, 512, 1])          |
| 1312    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(3)            | output              | qint16        | 0.0000777 | 1.6492792    | 2.5457854     | 2.0195827    | 0.0766979        | torch.Size([2, 512, 1])          |
| 1313    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(3)          | input_0             | qint16        | 0.0000617 | -0.4274619   | 1.8179939     | 0.0000003    | 0.2578149        | torch.Size([2, 512, 32])         |
| 1313    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(3)          | input_1             | qint16        | 0.0000777 | 1.6492792    | 2.5457854     | 2.0195827    | 0.0766979        | torch.Size([2, 512, 1])          |
| 1313    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(3)          | output              | qint16        | 0.0001125 | -0.9136274   | 3.6849864     | -0.0000193   | 0.9998267        | torch.Size([2, 512, 32])         |
| 1314    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(3)     | input               | torch.float32 |           | 0.8191299    | 1.0923718     | 0.9808199    | 0.0031231        | torch.Size([32])                 |
| 1314    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(3)     | output              | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 1315    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(3)       | input_0             | qint16        | 0.0001125 | -0.9136274   | 3.6849864     | -0.0000193   | 0.9998267        | torch.Size([2, 512, 32])         |
| 1315    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(3)       | input_1             | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 1315    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(3)       | output              | qint16        | 0.0001113 | -0.9217672   | 3.5213978     | 0.0110937    | 0.9993867        | torch.Size([2, 512, 32])         |
| 1316    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(3)       | input               | torch.float32 |           | -0.0704119   | 0.0788569     | 0.0097621    | 0.0015200        | torch.Size([32])                 |
| 1316    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(3)       | output              | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 1317    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(3)         | input_0             | qint16        | 0.0001113 | -0.9217672   | 3.5213978     | 0.0110937    | 0.9993867        | torch.Size([2, 512, 32])         |
| 1317    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(3)         | input_1             | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 1317    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(3)         | output              | qint8         | 0.0262611 | -0.9453982   | 3.3351545     | 0.0206535    | 0.9676637        | torch.Size([2, 512, 32])         |
| 1318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(3)                  | input               | qint8         | 0.0262611 | -0.9453982   | 3.3351545     | 0.0206535    | 0.9676637        | torch.Size([2, 512, 32])         |
| 1318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(3)                  | weight              | torch.float32 |           | -0.5712157   | 0.5219681     | -0.0062917   | 0.0166056        | torch.Size([32, 32])             |
| 1318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(3)                  | bias                | torch.float32 |           | -0.1649730   | 0.2318604     | 0.0253026    | 0.0136139        | torch.Size([32])                 |
| 1318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(3)                  | output              | torch.float32 |           | -4.4142337   | 2.5961981     | -0.1923283   | 1.4032271        | torch.Size([2, 512, 32])         |
| 1319    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(3)                  | input               | torch.float32 |           | -4.4142337   | 2.5961981     | -0.1923283   | 1.4032271        | torch.Size([2, 512, 32])         |
| 1319    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(3)                  | output              | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3703275    | 0.2738559        | torch.Size([2, 512, 32])         |
| 1320    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(3)  | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3703275    | 0.2738559        | torch.Size([2, 512, 32])         |
| 1320    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(3)  | output              | qint16        | 0.0000154 | 0.1854214    | 0.4830526     | 0.3703272    | 0.0097013        | torch.Size([2, 512, 1])          |
| 1321    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(3)              | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3703275    | 0.2738559        | torch.Size([2, 512, 32])         |
| 1321    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(3)              | input_1             | qint16        | 0.0000154 | 0.1854214    | 0.4830526     | 0.3703272    | 0.0097013        | torch.Size([2, 512, 1])          |
| 1321    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(3)              | output              | qint16        | 0.0000636 | -0.4830333   | 2.0025105     | 0.0000032    | 0.2641615        | torch.Size([2, 512, 32])         |
| 1322    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(3)              | input_0             | qint16        | 0.0000636 | -0.4830333   | 2.0025105     | 0.0000032    | 0.2641615        | torch.Size([2, 512, 32])         |
| 1322    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(3)              | input_1             | qint16        | 0.0000636 | -0.4830333   | 2.0025105     | 0.0000032    | 0.2641615        | torch.Size([2, 512, 32])         |
| 1322    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(3)              | output              | qint16        | 0.0001333 | 0.0000000    | 4.0100670     | 0.2641544    | 0.2594754        | torch.Size([2, 512, 32])         |
| 1323    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(3)    | input_0             | qint16        | 0.0001333 | 0.0000000    | 4.0100670     | 0.2641544    | 0.2594754        | torch.Size([2, 512, 32])         |
| 1323    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(3)    | output              | qint16        | 0.0000116 | 0.1314983    | 0.3720877     | 0.2641553    | 0.0060850        | torch.Size([2, 512, 1])          |
| 1324    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(3)            | input               | qint16        | 0.0000116 | 0.1314983    | 0.3720877     | 0.2641553    | 0.0060850        | torch.Size([2, 512, 1])          |
| 1324    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(3)            | output              | qint16        | 0.0000821 | 1.6393547    | 2.6913540     | 2.0253727    | 0.1266209        | torch.Size([2, 512, 1])          |
| 1325    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(3)          | input_0             | qint16        | 0.0000636 | -0.4830333   | 2.0025105     | 0.0000032    | 0.2641615        | torch.Size([2, 512, 32])         |
| 1325    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(3)          | input_1             | qint16        | 0.0000821 | 1.6393547    | 2.6913540     | 2.0253727    | 0.1266209        | torch.Size([2, 512, 1])          |
| 1325    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(3)          | output              | qint16        | 0.0001195 | -0.9503942   | 3.8351693     | 0.0000006    | 0.9999279        | torch.Size([2, 512, 32])         |
| 1326    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(3)     | input               | torch.float32 |           | 0.8903234    | 1.1315480     | 0.9912031    | 0.0026835        | torch.Size([32])                 |
| 1326    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(3)     | output              | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 1327    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(3)       | input_0             | qint16        | 0.0001195 | -0.9503942   | 3.8351693     | 0.0000006    | 0.9999279        | torch.Size([2, 512, 32])         |
| 1327    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(3)       | input_1             | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 1327    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(3)       | output              | qint16        | 0.0001226 | -1.0753919   | 3.9516423     | 0.0052331    | 1.0214638        | torch.Size([2, 512, 32])         |
| 1328    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(3)       | input               | torch.float32 |           | -0.0586081   | 0.0779655     | 0.0041962    | 0.0015323        | torch.Size([32])                 |
| 1328    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(3)       | output              | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 1329    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(3)         | input_0             | qint16        | 0.0001226 | -1.0753919   | 3.9516423     | 0.0052331    | 1.0214638        | torch.Size([2, 512, 32])         |
| 1329    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(3)         | input_1             | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 1329    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(3)         | output              | qint8         | 0.0302522 | -1.0588285   | 3.8420348     | 0.0099247    | 0.9965265        | torch.Size([2, 512, 32])         |
| 1330    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(3)                  | input               | qint8         | 0.0302522 | -1.0588285   | 3.8420348     | 0.0099247    | 0.9965265        | torch.Size([2, 512, 32])         |
| 1330    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(3)                  | weight              | torch.float32 |           | -0.3204980   | 0.3365203     | -0.0020388   | 0.0145364        | torch.Size([32, 32])             |
| 1330    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(3)                  | bias                | torch.float32 |           | -0.1559148   | 0.2119379     | 0.0091616    | 0.0105488        | torch.Size([32])                 |
| 1330    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(3)                  | output              | torch.float32 |           | -2.3280940   | 2.6596460     | 0.0160351    | 0.8227754        | torch.Size([2, 512, 32])         |
| 1331    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(3)                 | input               | torch.float32 |           | -2.3280940   | 2.6596460     | 0.0160351    | 0.8227754        | torch.Size([2, 512, 32])         |
| 1331    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(3)                 | output              | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3665597    | 0.2951833        | torch.Size([2, 512, 32])         |
| 1332    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(3) | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3665597    | 0.2951833        | torch.Size([2, 512, 32])         |
| 1332    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(3) | output              | qint16        | 0.0000157 | 0.2701340    | 0.5130996     | 0.3654224    | 0.0015519        | torch.Size([2, 512, 1])          |
| 1333    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(3)             | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3665597    | 0.2951833        | torch.Size([2, 512, 32])         |
| 1333    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(3)             | input_1             | qint16        | 0.0000157 | 0.2701340    | 0.5130996     | 0.3654224    | 0.0015519        | torch.Size([2, 512, 1])          |
| 1333    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(3)             | output              | qint16        | 0.0000689 | -0.5131254   | 2.1898313     | 0.0011390    | 0.2932965        | torch.Size([2, 512, 32])         |
| 1334    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(3)             | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1898313     | 0.0011390    | 0.2932965        | torch.Size([2, 512, 32])         |
| 1334    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(3)             | input_1             | qint16        | 0.0000689 | -0.5131254   | 2.1898313     | 0.0011390    | 0.2932965        | torch.Size([2, 512, 32])         |
| 1334    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(3)             | output              | qint16        | 0.0001557 | 0.0000000    | 4.7953744     | 0.2932932    | 0.3965263        | torch.Size([2, 512, 32])         |
| 1335    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(3)   | input_0             | qint16        | 0.0001557 | 0.0000000    | 4.7953744     | 0.2932932    | 0.3965263        | torch.Size([2, 512, 32])         |
| 1335    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(3)   | output              | qint16        | 0.0000123 | 0.1640616    | 0.4027424     | 0.2932910    | 0.0013767        | torch.Size([2, 512, 1])          |
| 1336    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(3)           | input               | qint16        | 0.0000123 | 0.1640616    | 0.4027424     | 0.2932910    | 0.0013767        | torch.Size([2, 512, 1])          |
| 1336    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(3)           | output              | qint16        | 0.0000803 | 1.5757436    | 2.4687452     | 1.8582816    | 0.0156716        | torch.Size([2, 512, 1])          |
| 1337    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(3)         | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1898313     | 0.0011390    | 0.2932965        | torch.Size([2, 512, 32])         |
| 1337    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(3)         | input_1             | qint16        | 0.0000803 | 1.5757436    | 2.4687452     | 1.8582816    | 0.0156716        | torch.Size([2, 512, 1])          |
| 1337    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(3)         | output              | qint16        | 0.0001207 | -1.2429935   | 3.9562087     | 0.0021222    | 0.9999562        | torch.Size([2, 512, 32])         |
| 1338    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(3)    | input               | torch.float32 |           | 0.8289159    | 1.6609058     | 1.2561316    | 0.0353652        | torch.Size([32])                 |
| 1338    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(3)    | output              | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 1339    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(3)      | input_0             | qint16        | 0.0001207 | -1.2429935   | 3.9562087     | 0.0021222    | 0.9999562        | torch.Size([2, 512, 32])         |
| 1339    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(3)      | input_1             | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 1339    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(3)      | output              | qint16        | 0.0001642 | -1.8839335   | 4.9847383     | -0.0355620   | 1.4206244        | torch.Size([2, 512, 32])         |
| 1340    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(3)      | input               | torch.float32 |           | -0.1194881   | 0.2576658     | 0.0445686    | 0.0113612        | torch.Size([32])                 |
| 1340    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(3)      | output              | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 1341    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(3)        | input_0             | qint16        | 0.0001642 | -1.8839335   | 4.9847383     | -0.0355620   | 1.4206244        | torch.Size([2, 512, 32])         |
| 1341    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(3)        | input_1             | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 1341    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(3)        | output              | qint8         | 0.0385920 | -1.7366387   | 4.9011803     | 0.0088365    | 1.3299049        | torch.Size([2, 512, 32])         |
| 1342    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1342    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017920 | -1.1361331   | 1.1002929     | -0.0341803   | 0.1063831        | torch.Size([2, 512, 2])          |
| 1343    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(3)                   | input               | qint16        | 0.0017920 | -1.1361331   | 1.1002929     | -0.0341803   | 0.1063831        | torch.Size([2, 512, 2])          |
| 1343    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(3)                   | weight              | torch.float32 |           | -0.7023237   | 0.7394427     | 0.0490668    | 0.1972211        | torch.Size([32, 2])              |
| 1343    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(3)                   | bias                | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1641774        | torch.Size([32])                 |
| 1343    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(3)                   | output              | torch.float32 |           | -1.5288026   | 1.2249651     | -0.1212026   | 0.2013421        | torch.Size([2, 512, 32])         |
| 1344    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(3)                   | input               | torch.float32 |           | -1.5288026   | 1.2249651     | -0.1212026   | 0.2013421        | torch.Size([2, 512, 32])         |
| 1344    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(3)                   | output              | qint8         | 0.0115854 | 0.0000000    | 1.2280566     | 0.1347147    | 0.0562026        | torch.Size([2, 512, 32])         |
| 1345    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(3)   | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.2280566     | 0.1347147    | 0.0562026        | torch.Size([2, 512, 32])         |
| 1345    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(3)   | output              | qint16        | 0.0000105 | 0.1082505    | 0.2436525     | 0.1347139    | 0.0006619        | torch.Size([2, 512, 1])          |
| 1346    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(3)               | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.2280566     | 0.1347147    | 0.0562026        | torch.Size([2, 512, 32])         |
| 1346    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(3)               | input_1             | qint16        | 0.0000105 | 0.1082505    | 0.2436525     | 0.1347139    | 0.0006619        | torch.Size([2, 512, 1])          |
| 1346    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(3)               | output              | qint16        | 0.0000395 | -0.2436595   | 0.9927117     | 0.0000029    | 0.0555406        | torch.Size([2, 512, 32])         |
| 1347    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(3)               | input_0             | qint16        | 0.0000395 | -0.2436595   | 0.9927117     | 0.0000029    | 0.0555406        | torch.Size([2, 512, 32])         |
| 1347    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(3)               | input_1             | qint16        | 0.0000395 | -0.2436595   | 0.9927117     | 0.0000029    | 0.0555406        | torch.Size([2, 512, 32])         |
| 1347    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(3)               | output              | qint16        | 0.0000524 | 0.0000000    | 0.9854632     | 0.0555381    | 0.0113339        | torch.Size([2, 512, 32])         |
| 1348    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(3)     | input_0             | qint16        | 0.0000524 | 0.0000000    | 0.9854632     | 0.0555381    | 0.0113339        | torch.Size([2, 512, 32])         |
| 1348    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(3)     | output              | qint16        | 0.0000071 | 0.0406389    | 0.1282919     | 0.0555382    | 0.0003265        | torch.Size([2, 512, 1])          |
| 1349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(3)             | input               | qint16        | 0.0000071 | 0.0406389    | 0.1282919     | 0.0555382    | 0.0003265        | torch.Size([2, 512, 1])          |
| 1349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(3)             | output              | qint16        | 0.0001514 | 2.7917292    | 4.9599452     | 4.3605566    | 0.2579457        | torch.Size([2, 512, 1])          |
| 1350    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(3)           | input_0             | qint16        | 0.0000395 | -0.2436595   | 0.9927117     | 0.0000029    | 0.0555406        | torch.Size([2, 512, 32])         |
| 1350    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(3)           | input_1             | qint16        | 0.0001514 | 2.7917292    | 4.9599452     | 4.3605566    | 0.2579457        | torch.Size([2, 512, 1])          |
| 1350    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(3)           | output              | qint16        | 0.0001206 | -0.6870726   | 3.9524767     | -0.0000082   | 0.9997166        | torch.Size([2, 512, 32])         |
| 1351    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(3)      | input               | torch.float32 |           | 0.8947600    | 1.1748335     | 0.9865216    | 0.0041537        | torch.Size([32])                 |
| 1351    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(3)      | output              | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 1352    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(3)        | input_0             | qint16        | 0.0001206 | -0.6870726   | 3.9524767     | -0.0000082   | 0.9997166        | torch.Size([2, 512, 32])         |
| 1352    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(3)        | input_1             | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 1352    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(3)        | output              | qint16        | 0.0001306 | -0.7962337   | 4.2798867     | 0.0037797    | 1.0093712        | torch.Size([2, 512, 32])         |
| 1353    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(3)        | input               | torch.float32 |           | -0.0879948   | 0.1319895     | 0.0285039    | 0.0034159        | torch.Size([32])                 |
| 1353    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(3)        | output              | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 1354    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(3)          | input_0             | qint16        | 0.0001306 | -0.7962337   | 4.2798867     | 0.0037797    | 1.0093712        | torch.Size([2, 512, 32])         |
| 1354    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(3)          | input_1             | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 1354    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(3)          | output              | qint8         | 0.0302674 | -0.7566838   | 3.8439538     | 0.0321951    | 0.9258808        | torch.Size([2, 512, 32])         |
| 1355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(3)                   | input               | qint8         | 0.0302674 | -0.7566838   | 3.8439538     | 0.0321951    | 0.9258808        | torch.Size([2, 512, 32])         |
| 1355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(3)                   | weight              | torch.float32 |           | -1.0547366   | 0.5812716     | 0.0070099    | 0.0187704        | torch.Size([32, 32])             |
| 1355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(3)                   | bias                | torch.float32 |           | -0.2183180   | 0.1396109     | -0.0140744   | 0.0103446        | torch.Size([32])                 |
| 1355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(3)                   | output              | torch.float32 |           | -4.9495955   | 1.6858671     | -0.5306085   | 1.4605427        | torch.Size([2, 512, 32])         |
| 1356    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(3)                   | input               | torch.float32 |           | -4.9495955   | 1.6858671     | -0.5306085   | 1.4605427        | torch.Size([2, 512, 32])         |
| 1356    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(3)                   | output              | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2270947    | 0.1233916        | torch.Size([2, 512, 32])         |
| 1357    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(3)   | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2270947    | 0.1233916        | torch.Size([2, 512, 32])         |
| 1357    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(3)   | output              | qint16        | 0.0000116 | 0.1714575    | 0.3796301     | 0.2270935    | 0.0006959        | torch.Size([2, 512, 1])          |
| 1358    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(3)               | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2270947    | 0.1233916        | torch.Size([2, 512, 32])         |
| 1358    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(3)               | input_1             | qint16        | 0.0000116 | 0.1714575    | 0.3796301     | 0.2270935    | 0.0006959        | torch.Size([2, 512, 1])          |
| 1358    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(3)               | output              | qint16        | 0.0000516 | -0.3796051   | 1.4303070     | 0.0000013    | 0.1226956        | torch.Size([2, 512, 32])         |
| 1359    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(3)               | input_0             | qint16        | 0.0000516 | -0.3796051   | 1.4303070     | 0.0000013    | 0.1226956        | torch.Size([2, 512, 32])         |
| 1359    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(3)               | input_1             | qint16        | 0.0000516 | -0.3796051   | 1.4303070     | 0.0000013    | 0.1226956        | torch.Size([2, 512, 32])         |
| 1359    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(3)               | output              | qint16        | 0.0000889 | 0.0000000    | 2.0457370     | 0.1226923    | 0.0488052        | torch.Size([2, 512, 32])         |
| 1360    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(3)     | input_0             | qint16        | 0.0000889 | 0.0000000    | 2.0457370     | 0.1226923    | 0.0488052        | torch.Size([2, 512, 32])         |
| 1360    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(3)     | output              | qint16        | 0.0000089 | 0.0790971    | 0.2626715     | 0.1226930    | 0.0004713        | torch.Size([2, 512, 1])          |
| 1361    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(3)             | input               | qint16        | 0.0000089 | 0.0790971    | 0.2626715     | 0.1226930    | 0.0004713        | torch.Size([2, 512, 1])          |
| 1361    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(3)             | output              | qint16        | 0.0001114 | 1.9511017    | 3.5554004     | 2.8843567    | 0.0534949        | torch.Size([2, 512, 1])          |
| 1362    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(3)           | input_0             | qint16        | 0.0000516 | -0.3796051   | 1.4303070     | 0.0000013    | 0.1226956        | torch.Size([2, 512, 32])         |
| 1362    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(3)           | input_1             | qint16        | 0.0001114 | 1.9511017    | 3.5554004     | 2.8843567    | 0.0534949        | torch.Size([2, 512, 1])          |
| 1362    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(3)           | output              | qint16        | 0.0001083 | -0.7527910   | 3.5501876     | -0.0000030   | 0.9998970        | torch.Size([2, 512, 32])         |
| 1363    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(3)      | input               | torch.float32 |           | 0.8550419    | 1.1198171     | 0.9805899    | 0.0036729        | torch.Size([32])                 |
| 1363    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(3)      | output              | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 1364    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(3)        | input_0             | qint16        | 0.0001083 | -0.7527910   | 3.5501876     | -0.0000030   | 0.9998970        | torch.Size([2, 512, 32])         |
| 1364    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(3)        | input_1             | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 1364    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(3)        | output              | qint16        | 0.0001106 | -0.8429890   | 3.6229506     | -0.0020733   | 0.9776294        | torch.Size([2, 512, 32])         |
| 1365    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(3)        | input               | torch.float32 |           | -0.0792132   | 0.1045145     | 0.0242442    | 0.0021608        | torch.Size([32])                 |
| 1365    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(3)        | output              | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 1366    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(3)          | input_0             | qint16        | 0.0001106 | -0.8429890   | 3.6229506     | -0.0020733   | 0.9776294        | torch.Size([2, 512, 32])         |
| 1366    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(3)          | input_1             | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 1366    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(3)          | output              | qint8         | 0.0268612 | -0.8326958   | 3.4113667     | 0.0216616    | 0.9195663        | torch.Size([2, 512, 32])         |
| 1367    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(3)                   | input               | qint8         | 0.0268612 | -0.8326958   | 3.4113667     | 0.0216616    | 0.9195663        | torch.Size([2, 512, 32])         |
| 1367    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(3)                   | weight              | torch.float32 |           | -0.4480607   | 0.3678726     | 0.0004879    | 0.0160908        | torch.Size([32, 32])             |
| 1367    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(3)                   | bias                | torch.float32 |           | -0.1861591   | 0.1739754     | 0.0155446    | 0.0137690        | torch.Size([32])                 |
| 1367    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(3)                   | output              | torch.float32 |           | -3.6406043   | 2.4660628     | -0.3070565   | 1.5406735        | torch.Size([2, 512, 32])         |
| 1368    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(3)                   | input               | torch.float32 |           | -3.6406043   | 2.4660628     | -0.3070565   | 1.5406735        | torch.Size([2, 512, 32])         |
| 1368    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(3)                   | output              | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3334995    | 0.1950628        | torch.Size([2, 512, 32])         |
| 1369    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(3)   | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3334995    | 0.1950628        | torch.Size([2, 512, 32])         |
| 1369    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(3)   | output              | qint16        | 0.0000156 | 0.2471991    | 0.3937950     | 0.3335008    | 0.0004070        | torch.Size([2, 512, 1])          |
| 1370    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(3)               | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3334995    | 0.1950628        | torch.Size([2, 512, 32])         |
| 1370    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(3)               | input_1             | qint16        | 0.0000156 | 0.2471991    | 0.3937950     | 0.3335008    | 0.0004070        | torch.Size([2, 512, 1])          |
| 1370    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(3)               | output              | qint16        | 0.0000645 | -0.3938173   | 2.0856893     | -0.0000003   | 0.1946559        | torch.Size([2, 512, 32])         |
| 1371    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(3)               | input_0             | qint16        | 0.0000645 | -0.3938173   | 2.0856893     | -0.0000003   | 0.1946559        | torch.Size([2, 512, 32])         |
| 1371    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(3)               | input_1             | qint16        | 0.0000645 | -0.3938173   | 2.0856893     | -0.0000003   | 0.1946559        | torch.Size([2, 512, 32])         |
| 1371    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(3)               | output              | qint16        | 0.0001365 | 0.0000000    | 4.3501067     | 0.1946437    | 0.0997626        | torch.Size([2, 512, 32])         |
| 1372    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(3)     | input_0             | qint16        | 0.0001365 | 0.0000000    | 4.3501067     | 0.1946437    | 0.0997626        | torch.Size([2, 512, 32])         |
| 1372    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(3)     | output              | qint16        | 0.0000123 | 0.1588870    | 0.2761193     | 0.1946449    | 0.0003259        | torch.Size([2, 512, 1])          |
| 1373    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(3)             | input               | qint16        | 0.0000123 | 0.1588870    | 0.2761193     | 0.1946449    | 0.0003259        | torch.Size([2, 512, 1])          |
| 1373    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(3)             | output              | qint16        | 0.0000749 | 1.9030031    | 2.4551423     | 2.2726703    | 0.0092139        | torch.Size([2, 512, 1])          |
| 1374    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(3)           | input_0             | qint16        | 0.0000645 | -0.3938173   | 2.0856893     | -0.0000003   | 0.1946559        | torch.Size([2, 512, 32])         |
| 1374    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(3)           | input_1             | qint16        | 0.0000749 | 1.9030031    | 2.4551423     | 2.2726703    | 0.0092139        | torch.Size([2, 512, 1])          |
| 1374    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(3)           | output              | qint16        | 0.0001267 | -0.8568300   | 4.1501474     | -0.0001197   | 0.9985269        | torch.Size([2, 512, 32])         |
| 1375    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(3)      | input               | torch.float32 |           | 0.8469434    | 1.1090456     | 0.9866461    | 0.0031007        | torch.Size([32])                 |
| 1375    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(3)      | output              | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 1376    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(3)        | input_0             | qint16        | 0.0001267 | -0.8568300   | 4.1501474     | -0.0001197   | 0.9985269        | torch.Size([2, 512, 32])         |
| 1376    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(3)        | input_1             | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 1376    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(3)        | output              | qint16        | 0.0001376 | -0.9503079   | 4.4246821     | -0.0036207   | 0.9934150        | torch.Size([2, 512, 32])         |
| 1377    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(3)        | input               | torch.float32 |           | -0.0626723   | 0.0887763     | 0.0071697    | 0.0011301        | torch.Size([32])                 |
| 1377    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(3)        | output              | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 1378    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(3)          | input_0             | qint16        | 0.0001376 | -0.9503079   | 4.4246821     | -0.0036207   | 0.9934150        | torch.Size([2, 512, 32])         |
| 1378    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(3)          | input_1             | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 1378    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(3)          | output              | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0029285    | 0.9653264        | torch.Size([2, 512, 32])         |
| 1379    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(3)                   | input               | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0029285    | 0.9653264        | torch.Size([2, 512, 32])         |
| 1379    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(3)                   | weight              | torch.float32 |           | -0.5597425   | 0.7001730     | 0.0015679    | 0.0160348        | torch.Size([32, 32])             |
| 1379    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(3)                   | bias                | torch.float32 |           | -0.1810580   | 0.1736723     | -0.0279047   | 0.0091159        | torch.Size([32])                 |
| 1379    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(3)                   | output              | torch.float32 |           | -4.3419070   | 3.0801165     | -0.2515956   | 1.2394245        | torch.Size([2, 512, 32])         |
| 1380    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(3)                  | input               | torch.float32 |           | -4.3419070   | 3.0801165     | -0.2515956   | 1.2394245        | torch.Size([2, 512, 32])         |
| 1380    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(3)                  | output              | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2808663    | 0.3371496        | torch.Size([2, 512, 32])         |
| 1381    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(3)  | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2808663    | 0.3371496        | torch.Size([2, 512, 32])         |
| 1381    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(3)  | output              | qint16        | 0.0000121 | 0.2141395    | 0.3959820     | 0.2808675    | 0.0013755        | torch.Size([2, 512, 1])          |
| 1382    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(3)              | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2808663    | 0.3371496        | torch.Size([2, 512, 32])         |
| 1382    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(3)              | input_1             | qint16        | 0.0000121 | 0.2141395    | 0.3959820     | 0.2808675    | 0.0013755        | torch.Size([2, 512, 1])          |
| 1382    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(3)              | output              | qint16        | 0.0000976 | -0.3959758   | 2.8015997     | -0.0000009   | 0.3357761        | torch.Size([2, 512, 32])         |
| 1383    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(3)              | input_0             | qint16        | 0.0000976 | -0.3959758   | 2.8015997     | -0.0000009   | 0.3357761        | torch.Size([2, 512, 32])         |
| 1383    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(3)              | input_1             | qint16        | 0.0000976 | -0.3959758   | 2.8015997     | -0.0000009   | 0.3357761        | torch.Size([2, 512, 32])         |
| 1383    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(3)              | output              | qint16        | 0.0003122 | 0.0000000    | 7.8488712     | 0.3357609    | 1.1444874        | torch.Size([2, 512, 32])         |
| 1384    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(3)    | input_0             | qint16        | 0.0003122 | 0.0000000    | 7.8488712     | 0.3357609    | 1.1444874        | torch.Size([2, 512, 32])         |
| 1384    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(3)    | output              | qint16        | 0.0000136 | 0.1380271    | 0.4197376     | 0.3357617    | 0.0056394        | torch.Size([2, 512, 1])          |
| 1385    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(3)            | input               | qint16        | 0.0000136 | 0.1380271    | 0.4197376     | 0.3357617    | 0.0056394        | torch.Size([2, 512, 1])          |
| 1385    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(3)            | output              | qint16        | 0.0000802 | 1.5435356    | 2.6273782     | 1.7697227    | 0.0666281        | torch.Size([2, 512, 1])          |
| 1386    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(3)          | input_0             | qint16        | 0.0000976 | -0.3959758   | 2.8015997     | -0.0000009   | 0.3357761        | torch.Size([2, 512, 32])         |
| 1386    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(3)          | input_1             | qint16        | 0.0000802 | 1.5435356    | 2.6273782     | 1.7697227    | 0.0666281        | torch.Size([2, 512, 1])          |
| 1386    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(3)          | output              | qint16        | 0.0001482 | -0.7612539   | 4.7965808     | -0.0000034   | 0.9999116        | torch.Size([2, 512, 32])         |
| 1387    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(3)     | input               | torch.float32 |           | 0.8363900    | 1.4688344     | 1.0570920    | 0.0396277        | torch.Size([32])                 |
| 1387    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(3)     | output              | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 1388    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(3)       | input_0             | qint16        | 0.0001482 | -0.7612539   | 4.7965808     | -0.0000034   | 0.9999116        | torch.Size([2, 512, 32])         |
| 1388    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(3)       | input_1             | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 1388    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(3)       | output              | qint16        | 0.0001637 | -1.1180574   | 4.1696711     | -0.0628617   | 0.8660907        | torch.Size([2, 512, 32])         |
| 1389    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(3)       | input               | torch.float32 |           | -0.1492936   | 0.2842544     | 0.0803791    | 0.0109446        | torch.Size([32])                 |
| 1389    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(3)       | output              | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 1390    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(3)         | input_0             | qint16        | 0.0001637 | -1.1180574   | 4.1696711     | -0.0628617   | 0.8660907        | torch.Size([2, 512, 32])         |
| 1390    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(3)         | input_1             | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 1390    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(3)         | output              | qint8         | 0.0373904 | -0.9347606   | 4.0755558     | 0.0184328    | 0.7784528        | torch.Size([2, 512, 32])         |
| 1391    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1391    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017920 | -2.3959146   | 0.7365153     | -0.2233196   | 0.4516332        | torch.Size([2, 512, 3])          |
| 1392    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(3)                   | input               | qint16        | 0.0017920 | -2.3959146   | 0.7365153     | -0.2233196   | 0.4516332        | torch.Size([2, 512, 3])          |
| 1392    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(3)                   | weight              | torch.float32 |           | -1.0475703   | 0.9848034     | -0.0054673   | 0.2080412        | torch.Size([64, 3])              |
| 1392    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(3)                   | bias                | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1294928        | torch.Size([64])                 |
| 1392    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(3)                   | output              | torch.float32 |           | -2.1004155   | 1.5458724     | -0.0823762   | 0.3082252        | torch.Size([2, 512, 64])         |
| 1393    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(3)                   | input               | torch.float32 |           | -2.1004155   | 1.5458724     | -0.0823762   | 0.3082252        | torch.Size([2, 512, 64])         |
| 1393    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(3)                   | output              | qint8         | 0.0729980 | 0.0000000    | 1.5329581     | 0.1748740    | 0.0685169        | torch.Size([2, 512, 64])         |
| 1394    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(3)   | input_0             | qint8         | 0.0729980 | 0.0000000    | 1.5329581     | 0.1748740    | 0.0685169        | torch.Size([2, 512, 64])         |
| 1394    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(3)   | output              | qint16        | 0.0000685 | 0.1220359    | 0.3022462     | 0.1748732    | 0.0052171        | torch.Size([2, 512, 1])          |
| 1395    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(3)               | input_0             | qint8         | 0.0729980 | 0.0000000    | 1.5329581     | 0.1748740    | 0.0685169        | torch.Size([2, 512, 64])         |
| 1395    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(3)               | input_1             | qint16        | 0.0000685 | 0.1220359    | 0.3022462     | 0.1748732    | 0.0052171        | torch.Size([2, 512, 1])          |
| 1395    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(3)               | output              | qint16        | 0.0002902 | -0.3021072   | 1.2362887     | 0.0000064    | 0.0632998        | torch.Size([2, 512, 64])         |
| 1396    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(3)               | input_0             | qint16        | 0.0002902 | -0.3021072   | 1.2362887     | 0.0000064    | 0.0632998        | torch.Size([2, 512, 64])         |
| 1396    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(3)               | input_1             | qint16        | 0.0002902 | -0.3021072   | 1.2362887     | 0.0000064    | 0.0632998        | torch.Size([2, 512, 64])         |
| 1396    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(3)               | output              | qint16        | 0.0029551 | 0.0000000    | 1.5277843     | 0.0634106    | 0.0240388        | torch.Size([2, 512, 64])         |
| 1397    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(3)     | input_0             | qint16        | 0.0029551 | 0.0000000    | 1.5277843     | 0.0634106    | 0.0240388        | torch.Size([2, 512, 64])         |
| 1397    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(3)     | output              | qint16        | 0.0003723 | 0.0249444    | 0.1559958     | 0.0634049    | 0.0027790        | torch.Size([2, 512, 1])          |
| 1398    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(3)             | input               | qint16        | 0.0003723 | 0.0249444    | 0.1559958     | 0.0634049    | 0.0027790        | torch.Size([2, 512, 1])          |
| 1398    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(3)             | output              | qint16        | 0.0001859 | 2.5317848    | 6.0927577     | 4.7909031    | 1.7976053        | torch.Size([2, 512, 1])          |
| 1399    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(3)           | input_0             | qint16        | 0.0002902 | -0.3021072   | 1.2362887     | 0.0000064    | 0.0632998        | torch.Size([2, 512, 64])         |
| 1399    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(3)           | input_1             | qint16        | 0.0001859 | 2.5317848    | 6.0927577     | 4.7909031    | 1.7976053        | torch.Size([2, 512, 1])          |
| 1399    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(3)           | output              | qint16        | 0.0001160 | -0.8237154   | 3.4634542     | 0.0000191    | 0.9981225        | torch.Size([2, 512, 64])         |
| 1400    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(3)      | input               | torch.float32 |           | 0.8691067    | 1.1281288     | 0.9794419    | 0.0036082        | torch.Size([64])                 |
| 1400    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(3)      | output              | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 1401    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(3)        | input_0             | qint16        | 0.0001160 | -0.8237154   | 3.4634542     | 0.0000191    | 0.9981225        | torch.Size([2, 512, 64])         |
| 1401    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(3)        | input_1             | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 1401    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(3)        | output              | qint16        | 0.0001189 | -0.9154089   | 3.3694561     | 0.0113200    | 0.9443713        | torch.Size([2, 512, 64])         |
| 1402    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(3)        | input               | torch.float32 |           | -0.1133662   | 0.1493634     | 0.0304540    | 0.0046508        | torch.Size([64])                 |
| 1402    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(3)        | output              | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 1403    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(3)          | input_0             | qint16        | 0.0001189 | -0.9154089   | 3.3694561     | 0.0113200    | 0.9443713        | torch.Size([2, 512, 64])         |
| 1403    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(3)          | input_1             | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 1403    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(3)          | output              | qint8         | 0.0267452 | -0.9093367   | 3.3164046     | 0.0420661    | 0.8520527        | torch.Size([2, 512, 64])         |
| 1404    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(3)                   | input               | qint8         | 0.0267452 | -0.9093367   | 3.3164046     | 0.0420661    | 0.8520527        | torch.Size([2, 512, 64])         |
| 1404    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(3)                   | weight              | torch.float32 |           | -0.4523612   | 0.4813256     | -0.0014562   | 0.0096743        | torch.Size([64, 64])             |
| 1404    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(3)                   | bias                | torch.float32 |           | -0.1183558   | 0.2243176     | 0.0150283    | 0.0049289        | torch.Size([64])                 |
| 1404    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(3)                   | output              | torch.float32 |           | -5.3933024   | 2.7576499     | -0.4187410   | 2.1728466        | torch.Size([2, 512, 64])         |
| 1405    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(3)                   | input               | torch.float32 |           | -5.3933024   | 2.7576499     | -0.4187410   | 2.1728466        | torch.Size([2, 512, 64])         |
| 1405    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(3)                   | output              | qint8         | 0.0337689 | 0.0000000    | 2.7690494     | 0.3283615    | 0.2192053        | torch.Size([2, 512, 64])         |
| 1406    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(3)   | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.7690494     | 0.3283615    | 0.2192053        | torch.Size([2, 512, 64])         |
| 1406    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(3)   | output              | qint16        | 0.0000195 | 0.2063105    | 0.5408301     | 0.3283615    | 0.0074510        | torch.Size([2, 512, 1])          |
| 1407    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(3)               | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.7690494     | 0.3283615    | 0.2192053        | torch.Size([2, 512, 64])         |
| 1407    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(3)               | input_1             | qint16        | 0.0000195 | 0.2063105    | 0.5408301     | 0.3283615    | 0.0074510        | torch.Size([2, 512, 1])          |
| 1407    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(3)               | output              | qint16        | 0.0001376 | -0.5408676   | 2.3489892     | -0.0000023   | 0.2117630        | torch.Size([2, 512, 64])         |
| 1408    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(3)               | input_0             | qint16        | 0.0001376 | -0.5408676   | 2.3489892     | -0.0000023   | 0.2117630        | torch.Size([2, 512, 64])         |
| 1408    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(3)               | input_1             | qint16        | 0.0001376 | -0.5408676   | 2.3489892     | -0.0000023   | 0.2117630        | torch.Size([2, 512, 64])         |
| 1408    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(3)               | output              | qint16        | 0.0006236 | 0.0000000    | 5.5179434     | 0.2117396    | 0.2184841        | torch.Size([2, 512, 64])         |
| 1409    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(3)     | input_0             | qint16        | 0.0006236 | 0.0000000    | 5.5179434     | 0.2117396    | 0.2184841        | torch.Size([2, 512, 64])         |
| 1409    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(3)     | output              | qint16        | 0.0000322 | 0.0818952    | 0.4866446     | 0.2117393    | 0.0076286        | torch.Size([2, 512, 1])          |
| 1410    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(3)             | input               | qint16        | 0.0000322 | 0.0818952    | 0.4866446     | 0.2117393    | 0.0076286        | torch.Size([2, 512, 1])          |
| 1410    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(3)             | output              | qint16        | 0.0001060 | 1.4335015    | 3.4734557     | 2.3759561    | 0.4320140        | torch.Size([2, 512, 1])          |
| 1411    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(3)           | input_0             | qint16        | 0.0001376 | -0.5408676   | 2.3489892     | -0.0000023   | 0.2117630        | torch.Size([2, 512, 64])         |
| 1411    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(3)           | input_1             | qint16        | 0.0001060 | 1.4335015    | 3.4734557     | 2.3759561    | 0.4320140        | torch.Size([2, 512, 1])          |
| 1411    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(3)           | output              | qint16        | 0.0001466 | -0.8892949   | 4.5939817     | -0.0000081   | 0.9979808        | torch.Size([2, 512, 64])         |
| 1412    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(3)      | input               | torch.float32 |           | 0.8333027    | 1.1388558     | 0.9778216    | 0.0042186        | torch.Size([64])                 |
| 1412    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(3)      | output              | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 1413    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(3)        | input_0             | qint16        | 0.0001466 | -0.8892949   | 4.5939817     | -0.0000081   | 0.9979808        | torch.Size([2, 512, 64])         |
| 1413    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(3)        | input_1             | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 1413    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(3)        | output              | qint16        | 0.0001474 | -0.9512296   | 4.4649701     | 0.0036632    | 0.9826770        | torch.Size([2, 512, 64])         |
| 1414    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(3)        | input               | torch.float32 |           | -0.0757831   | 0.1161729     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1414    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(3)        | output              | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1415    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(3)          | input_0             | qint16        | 0.0001474 | -0.9512296   | 4.4649701     | 0.0036632    | 0.9826770        | torch.Size([2, 512, 64])         |
| 1415    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(3)          | input_1             | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1415    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(3)          | output              | qint8         | 0.0350382 | -0.9109923   | 4.4148088     | 0.0198367    | 0.9372709        | torch.Size([2, 512, 64])         |
| 1416    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(3)                   | input               | qint8         | 0.0350382 | -0.9109923   | 4.4148088     | 0.0198367    | 0.9372709        | torch.Size([2, 512, 64])         |
| 1416    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(3)                   | weight              | torch.float32 |           | -0.5707353   | 0.3620123     | -0.0010372   | 0.0088292        | torch.Size([64, 64])             |
| 1416    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(3)                   | bias                | torch.float32 |           | -0.1720246   | 0.1340137     | -0.0235144   | 0.0050507        | torch.Size([64])                 |
| 1416    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(3)                   | output              | torch.float32 |           | -5.4419847   | 3.7189701     | -0.3469681   | 2.1238880        | torch.Size([2, 512, 64])         |
| 1417    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(3)                   | input               | torch.float32 |           | -5.4419847   | 3.7189701     | -0.3469681   | 2.1238880        | torch.Size([2, 512, 64])         |
| 1417    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(3)                   | output              | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4437246    | 0.5009913        | torch.Size([2, 512, 64])         |
| 1418    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(3)   | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4437246    | 0.5009913        | torch.Size([2, 512, 64])         |
| 1418    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(3)   | output              | qint16        | 0.0000166 | 0.3565956    | 0.5175771     | 0.4437255    | 0.0029879        | torch.Size([2, 512, 1])          |
| 1419    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(3)               | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4437246    | 0.5009913        | torch.Size([2, 512, 64])         |
| 1419    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(3)               | input_1             | qint16        | 0.0000166 | 0.3565956    | 0.5175771     | 0.4437255    | 0.0029879        | torch.Size([2, 512, 1])          |
| 1419    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(3)               | output              | qint16        | 0.0000988 | -0.5176006   | 3.1814101     | -0.0000012   | 0.4980066        | torch.Size([2, 512, 64])         |
| 1420    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(3)               | input_0             | qint16        | 0.0000988 | -0.5176006   | 3.1814101     | -0.0000012   | 0.4980066        | torch.Size([2, 512, 64])         |
| 1420    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(3)               | input_1             | qint16        | 0.0000988 | -0.5176006   | 3.1814101     | -0.0000012   | 0.4980066        | torch.Size([2, 512, 64])         |
| 1420    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(3)               | output              | qint16        | 0.0003201 | 0.0000000    | 10.1214819    | 0.4979823    | 1.1006726        | torch.Size([2, 512, 64])         |
| 1421    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(3)     | input_0             | qint16        | 0.0003201 | 0.0000000    | 10.1214819    | 0.4979823    | 1.1006726        | torch.Size([2, 512, 64])         |
| 1421    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(3)     | output              | qint16        | 0.0000230 | 0.3014414    | 0.7437088     | 0.4979822    | 0.0159557        | torch.Size([2, 512, 1])          |
| 1422    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(3)             | input               | qint16        | 0.0000230 | 0.3014414    | 0.7437088     | 0.4979822    | 0.0159557        | torch.Size([2, 512, 1])          |
| 1422    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(3)             | output              | qint16        | 0.0000608 | 1.1595442    | 1.8213338     | 1.4580003    | 0.0453845        | torch.Size([2, 512, 1])          |
| 1423    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(3)           | input_0             | qint16        | 0.0000988 | -0.5176006   | 3.1814101     | -0.0000012   | 0.4980066        | torch.Size([2, 512, 64])         |
| 1423    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(3)           | input_1             | qint16        | 0.0000608 | 1.1595442    | 1.8213338     | 1.4580003    | 0.0453845        | torch.Size([2, 512, 1])          |
| 1423    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(3)           | output              | qint16        | 0.0001598 | -0.6825535   | 4.2017803     | -0.0000025   | 1.0000424        | torch.Size([2, 512, 64])         |
| 1424    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(3)      | input               | torch.float32 |           | 0.8006503    | 1.1495361     | 0.9818506    | 0.0032003        | torch.Size([64])                 |
| 1424    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(3)      | output              | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 1425    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(3)        | input_0             | qint16        | 0.0001598 | -0.6825535   | 4.2017803     | -0.0000025   | 1.0000424        | torch.Size([2, 512, 64])         |
| 1425    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(3)        | input_1             | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 1425    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(3)        | output              | qint16        | 0.0001633 | -0.7845268   | 4.3445315     | 0.0061804    | 1.0023574        | torch.Size([2, 512, 64])         |
| 1426    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(3)        | input               | torch.float32 |           | -0.0461140   | 0.1411197     | 0.0132828    | 0.0015701        | torch.Size([64])                 |
| 1426    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(3)        | output              | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 1427    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(3)          | input_0             | qint16        | 0.0001633 | -0.7845268   | 4.3445315     | 0.0061804    | 1.0023574        | torch.Size([2, 512, 64])         |
| 1427    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(3)          | input_1             | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 1427    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(3)          | output              | qint8         | 0.0387038 | -0.7740757   | 4.3348241     | 0.0195025    | 0.9828053        | torch.Size([2, 512, 64])         |
| 1428    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(3)                   | input               | qint8         | 0.0387038 | -0.7740757   | 4.3348241     | 0.0195025    | 0.9828053        | torch.Size([2, 512, 64])         |
| 1428    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(3)                   | weight              | torch.float32 |           | -0.5701389   | 0.3477888     | 0.0006721    | 0.0085883        | torch.Size([64, 64])             |
| 1428    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(3)                   | bias                | torch.float32 |           | -0.1677032   | 0.1709885     | -0.0237130   | 0.0070098        | torch.Size([64])                 |
| 1428    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(3)                   | output              | torch.float32 |           | -4.8043346   | 7.2159615     | -0.4925384   | 1.7858505        | torch.Size([2, 512, 64])         |
| 1429    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(3)                  | input               | torch.float32 |           | -4.8043346   | 7.2159615     | -0.4925384   | 1.7858505        | torch.Size([2, 512, 64])         |
| 1429    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(3)                  | output              | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2578537    | 0.6712397        | torch.Size([2, 512, 64])         |
| 1430    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(3)  | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2578537    | 0.6712397        | torch.Size([2, 512, 64])         |
| 1430    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(3)  | output              | qint16        | 0.0000138 | 0.2020663    | 0.3356057     | 0.2578568    | 0.0012599        | torch.Size([2, 512, 1])          |
| 1431    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(3)              | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2578537    | 0.6712397        | torch.Size([2, 512, 64])         |
| 1431    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(3)              | input_1             | qint16        | 0.0000138 | 0.2020663    | 0.3356057     | 0.2578568    | 0.0012599        | torch.Size([2, 512, 1])          |
| 1431    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(3)              | output              | qint16        | 0.0002137 | -0.3356674   | 6.9387641     | -0.0000018   | 0.6699783        | torch.Size([2, 512, 64])         |
| 1432    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(3)              | input_0             | qint16        | 0.0002137 | -0.3356674   | 6.9387641     | -0.0000018   | 0.6699783        | torch.Size([2, 512, 64])         |
| 1432    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(3)              | input_1             | qint16        | 0.0002137 | -0.3356674   | 6.9387641     | -0.0000018   | 0.6699783        | torch.Size([2, 512, 64])         |
| 1432    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(3)              | output              | qint16        | 0.0014959 | 0.0000000    | 48.1464005    | 0.6700077    | 18.7764091       | torch.Size([2, 512, 64])         |
| 1433    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(3)    | input_0             | qint16        | 0.0014959 | 0.0000000    | 48.1464005    | 0.6700077    | 18.7764091       | torch.Size([2, 512, 64])         |
| 1433    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(3)    | output              | qint16        | 0.0000253 | 0.3985273    | 0.8249092     | 0.6700070    | 0.0120440        | torch.Size([2, 512, 1])          |
| 1434    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(3)            | input               | qint16        | 0.0000253 | 0.3985273    | 0.8249092     | 0.6700070    | 0.0120440        | torch.Size([2, 512, 1])          |
| 1434    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(3)            | output              | qint16        | 0.0000680 | 1.1010288    | 1.5840257     | 1.2348510    | 0.0114335        | torch.Size([2, 512, 1])          |
| 1435    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(3)          | input_0             | qint16        | 0.0002137 | -0.3356674   | 6.9387641     | -0.0000018   | 0.6699783        | torch.Size([2, 512, 64])         |
| 1435    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(3)          | input_1             | qint16        | 0.0000680 | 1.1010288    | 1.5840257     | 1.2348510    | 0.0114335        | torch.Size([2, 512, 1])          |
| 1435    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(3)          | output              | qint16        | 0.0002366 | -0.4729062   | 7.7517352     | -0.0000026   | 0.9999056        | torch.Size([2, 512, 64])         |
| 1436    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(3)     | input               | torch.float32 |           | 0.7297163    | 1.2824999     | 1.0134131    | 0.0161719        | torch.Size([64])                 |
| 1436    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(3)     | output              | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 1437    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(3)       | input_0             | qint16        | 0.0002366 | -0.4729062   | 7.7517352     | -0.0000026   | 0.9999056        | torch.Size([2, 512, 64])         |
| 1437    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(3)       | input_1             | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 1437    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(3)       | output              | qint16        | 0.0001954 | -0.6065097   | 6.0903029     | -0.0329220   | 0.7111865        | torch.Size([2, 512, 64])         |
| 1438    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(3)       | input               | torch.float32 |           | -0.2385408   | 0.3192695     | 0.0900053    | 0.0129013        | torch.Size([64])                 |
| 1438    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(3)       | output              | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 1439    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(3)         | input_0             | qint16        | 0.0001954 | -0.6065097   | 6.0903029     | -0.0329220   | 0.7111865        | torch.Size([2, 512, 64])         |
| 1439    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(3)         | input_1             | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 1439    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(3)         | output              | qint8         | 0.0462055 | -0.6006721   | 5.8681040     | 0.0568827    | 0.6215052        | torch.Size([2, 512, 64])         |
| 1440    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_0             | qint8         | 0.0587279 | -0.8809187   | 7.4584455     | 0.0732871    | 0.8685381        | torch.Size([2, 512, 128])        |
| 1440    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_1             | qint8         | 0.0385920 | -1.7366387   | 4.9011803     | 0.0088365    | 1.3299049        | torch.Size([2, 512, 32])         |
| 1440    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_2             | qint8         | 0.0373904 | -0.9347606   | 4.0755558     | 0.0184328    | 0.7784528        | torch.Size([2, 512, 32])         |
| 1440    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_3             | qint8         | 0.0462055 | -0.6006721   | 5.8681040     | 0.0568827    | 0.6215052        | torch.Size([2, 512, 64])         |
| 1440    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | output              | qint8         | 0.0569265 | -1.7647222   | 7.2296681     | 0.0575908    | 0.8505152        | torch.Size([2, 512, 256])        |
| 1441    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(4)                                 | input               | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 1441    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(4)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 1441    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(4)                                 | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 1442    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.14.query_cat                          | input_0             | qint8         | 0.0339342 | -4.3435817   | 3.9024367     | 0.0016676    | 0.8098754        | torch.Size([2, 512, 256])        |
| 1442    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.14.query_cat                          | input_1             | qint8         | 0.0569265 | -1.7647222   | 7.2296681     | 0.0575908    | 0.8505152        | torch.Size([2, 512, 256])        |
| 1442    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.14.query_cat                          | output              | qint8         | 0.0539313 | -4.3684354   | 6.8492751     | 0.0328364    | 0.8286310        | torch.Size([2, 512, 512])        |
| 1443    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.14.key_cat                            | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 1443    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.14.key_cat                            | input_1             | qint8         | 0.0569265 | -1.0246774   | 5.3510933     | 0.0736042    | 0.8488365        | torch.Size([2, 256, 256])        |
| 1443    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.14.key_cat                            | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 1444    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.0539313 | -4.3684354   | 6.8492751     | 0.0328364    | 0.8286310        | torch.Size([2, 512, 512])        |
| 1444    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.0539313 | -4.3684354   | 6.8492751     | 0.0328364    | 0.8286310        | torch.Size([512, 2, 512])        |
| 1445    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 1445    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1446    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 1446    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1447    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | input_0             | qint8         | 0.0539313 | -4.3684354   | 6.8492751     | 0.0328364    | 0.8286310        | torch.Size([512, 2, 512])        |
| 1447    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | output              | qint8         | 0.0539313 | -4.3684354   | 6.8492751     | 0.0328364    | 0.8286310        | torch.Size([512, 2, 512])        |
| 1448    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1448    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1449    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1449    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1450    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.q_proj                        | input               | qint8         | 0.0539313 | -4.3684354   | 6.8492751     | 0.0328364    | 0.8286310        | torch.Size([512, 2, 512])        |
| 1450    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.q_proj                        | weight              | torch.float32 |           | -0.2777553   | 0.2990031     | 0.0002842    | 0.0034354        | torch.Size([512, 512])           |
| 1450    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.q_proj                        | bias                | torch.float32 |           | -0.1035601   | 0.1086727     | -0.0026900   | 0.0010697        | torch.Size([512])                |
| 1450    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.q_proj                        | output              | qint8         | 0.1012007 | -12.9536915  | 12.8524904    | -0.0636249   | 12.5968666       | torch.Size([512, 2, 512])        |
| 1451    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.k_proj                        | input               | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1451    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.k_proj                        | weight              | torch.float32 |           | -0.3452844   | 0.4038241     | 0.0001369    | 0.0035582        | torch.Size([512, 512])           |
| 1451    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.k_proj                        | bias                | torch.float32 |           | -0.0042569   | 0.0036242     | -0.0000186   | 0.0000007        | torch.Size([512])                |
| 1451    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.k_proj                        | output              | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([256, 2, 512])        |
| 1452    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.v_proj                        | input               | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1452    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.v_proj                        | weight              | torch.float32 |           | -0.2388043   | 0.2738543     | 0.0000625    | 0.0012634        | torch.Size([512, 512])           |
| 1452    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.v_proj                        | bias                | torch.float32 |           | -0.0574798   | 0.0562508     | -0.0010481   | 0.0004109        | torch.Size([512])                |
| 1452    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.v_proj                        | output              | qint8         | 0.0099147 | -0.0594883   | 0.0594883     | -0.0011425   | 0.0004243        | torch.Size([256, 2, 512])        |
| 1453    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | input_0             | qint8         | 0.1012007 | -12.9536915  | 12.8524904    | -0.0636249   | 12.5968666       | torch.Size([512, 2, 512])        |
| 1453    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | output              | qint8         | 0.1012007 | -12.9536915  | 12.8524904    | -0.0636249   | 12.5968666       | torch.Size([512, 16, 64])        |
| 1454    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.1012007 | -12.9536915  | 12.8524904    | -0.0636249   | 12.5968666       | torch.Size([512, 16, 64])        |
| 1454    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.1012007 | -12.9536915  | 12.8524904    | -0.0636249   | 12.5968666       | torch.Size([16, 512, 64])        |
| 1455    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | input_0             | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([256, 2, 512])        |
| 1455    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | output              | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([256, 16, 64])        |
| 1456    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([256, 16, 64])        |
| 1456    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([16, 256, 64])        |
| 1457    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | input_0             | qint8         | 0.0099147 | -0.0594883   | 0.0594883     | -0.0011425   | 0.0004243        | torch.Size([256, 2, 512])        |
| 1457    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | output              | qint8         | 0.0099147 | -0.0594883   | 0.0594883     | -0.0011425   | 0.0004243        | torch.Size([256, 16, 64])        |
| 1458    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.0099147 | -0.0594883   | 0.0594883     | -0.0011425   | 0.0004243        | torch.Size([256, 16, 64])        |
| 1458    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.0099147 | -0.0594883   | 0.0594883     | -0.0011425   | 0.0004243        | torch.Size([16, 256, 64])        |
| 1459    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.14.attn.q_scale_mul                   | input_0             | qint8         | 0.1012007 | -12.9536915  | 12.8524904    | -0.0636249   | 12.5968666       | torch.Size([16, 512, 64])        |
| 1459    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.14.attn.q_scale_mul                   | output              | qint8         | 0.0126501 | -1.6192114   | 1.6065613     | -0.0079531   | 0.1968261        | torch.Size([16, 512, 64])        |
| 1460    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([16, 256, 64])        |
| 1460    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([16, 64, 256])        |
| 1461    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.14.attn.matmul                        | input_0             | qint8         | 0.0126501 | -1.6192114   | 1.6065613     | -0.0079531   | 0.1968261        | torch.Size([16, 512, 64])        |
| 1461    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.14.attn.matmul                        | input_1             | qint8         | 0.0797359 | -5.2625675   | 6.6978135     | 0.1554227    | 5.0819793        | torch.Size([16, 64, 256])        |
| 1461    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.14.attn.matmul                        | output              | qint8         | 1.7741020 | -134.8317566 | 108.2202225   | -10.0993004  | 1308.2890625     | torch.Size([16, 512, 256])       |
| 1462    | torch.Tensor.max                                                            | head.layers.14.attn.softmax                       | input               | qint8         | 1.7741020 | -134.8317566 | 108.2202225   | -10.0993004  | 1308.2890625     | torch.Size([16, 512, 256])       |
| 1462    | torch.Tensor.max                                                            | head.layers.14.attn.softmax                       | output_0            | qint8         | 1.7741020 | -134.8317566 | 108.2202225   | -10.0993004  | 1308.4478760     | torch.Size([16, 512, 1])         |
| 1462    | torch.Tensor.max                                                            | head.layers.14.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1463    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.14.attn.softmax.sub                   | input_0             | qint8         | 1.7741020 | -134.8317566 | 108.2202225   | -10.0993004  | 1308.2890625     | torch.Size([16, 512, 256])       |
| 1463    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.14.attn.softmax.sub                   | input_1             | qint8         | 1.7741020 | -134.8317566 | 108.2202225   | -10.0993004  | 1308.4478760     | torch.Size([16, 512, 1])         |
| 1463    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.14.attn.softmax.sub                   | output              | qint16        | 0.0134391 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1464    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.14.attn.softmax.exp                   | input               | qint16        | 0.0134391 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1464    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.14.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1465    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.14.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1465    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.14.attn.softmax.sum                   | output              | qint16        | 0.0037719 | 123.5929108  | 123.5929108   | 123.5929108  | 0.0000000        | torch.Size([16, 512, 1])         |
| 1466    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.14.attn.softmax.reciprocal            | input               | qint16        | 0.0037719 | 123.5929108  | 123.5929108   | 123.5929108  | 0.0000000        | torch.Size([16, 512, 1])         |
| 1466    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.14.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0080873    | 0.0080873     | 0.0080873    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1467    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.14.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1467    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.14.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0080873    | 0.0080873     | 0.0080873    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1467    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.14.attn.softmax.mul                   | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1468    | torch.nn.modules.dropout.Dropout                                            | head.layers.14.attn.attention_drop                | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1468    | torch.nn.modules.dropout.Dropout                                            | head.layers.14.attn.attention_drop                | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1469    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.14.attn.attn_matmul                   | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1469    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.14.attn.attn_matmul                   | input_1             | qint8         | 0.0099147 | -0.0594883   | 0.0594883     | -0.0011425   | 0.0004243        | torch.Size([16, 256, 64])        |
| 1469    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.14.attn.attn_matmul                   | output              | qint8         | 0.0098606 | -0.1183272   | 0.1183272     | -0.0022726   | 0.0016789        | torch.Size([16, 512, 64])        |
| 1470    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.0098606 | -0.1183272   | 0.1183272     | -0.0022726   | 0.0016789        | torch.Size([16, 512, 64])        |
| 1470    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.0098606 | -0.1183272   | 0.1183272     | -0.0022726   | 0.0016789        | torch.Size([512, 16, 64])        |
| 1471    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | input_0             | qint8         | 0.0098606 | -0.1183272   | 0.1183272     | -0.0022726   | 0.0016789        | torch.Size([512, 16, 64])        |
| 1471    | torch.Tensor.reshape                                                        | head.layers.14.attn                               | output              | qint8         | 0.0098606 | -0.1183272   | 0.1183272     | -0.0022726   | 0.0016789        | torch.Size([512, 2, 512])        |
| 1472    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.out_proj                      | input               | qint8         | 0.0098606 | -0.1183272   | 0.1183272     | -0.0022726   | 0.0016789        | torch.Size([512, 2, 512])        |
| 1472    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.out_proj                      | weight              | torch.float32 |           | -0.1960477   | 0.2013985     | -0.0001637   | 0.0022644        | torch.Size([512, 512])           |
| 1472    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.out_proj                      | bias                | torch.float32 |           | -0.2318651   | 0.2497024     | 0.0100625    | 0.0055016        | torch.Size([512])                |
| 1472    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.14.attn.out_proj                      | output              | qint8         | 0.0103107 | -0.6805070   | 0.5258463     | 0.0185271    | 0.0256352        | torch.Size([512, 2, 512])        |
| 1473    | torch.Tensor.view                                                           | head.layers.14.attn                               | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1473    | torch.Tensor.view                                                           | head.layers.14.attn                               | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 1474    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.14.attn.attn_weights_mean             | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 1474    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.14.attn.attn_weights_mean             | output              | qint8         | 0.0029645 | 0.0088936    | 0.0088936     | 0.0088936    | 0.0000000        | torch.Size([2, 512, 256])        |
| 1475    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | input_0             | qint8         | 0.0103107 | -0.6805070   | 0.5258463     | 0.0185271    | 0.0256352        | torch.Size([512, 2, 512])        |
| 1475    | torch.Tensor.transpose                                                      | head.layers.14.attn                               | output              | qint8         | 0.0103107 | -0.6805070   | 0.5258463     | 0.0185271    | 0.0256352        | torch.Size([2, 512, 512])        |
| 1476    | torch.nn.modules.dropout.Dropout                                            | head.layers.14.dropout                            | input               | qint8         | 0.0103107 | -0.6805070   | 0.5258463     | 0.0185271    | 0.0256352        | torch.Size([2, 512, 512])        |
| 1476    | torch.nn.modules.dropout.Dropout                                            | head.layers.14.dropout                            | output              | qint8         | 0.0103107 | -0.6805070   | 0.5258463     | 0.0185271    | 0.0256352        | torch.Size([2, 512, 512])        |
| 1477    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.14.add                                | input_0             | qint8         | 0.0539313 | -4.3684354   | 6.8492751     | 0.0328364    | 0.8286310        | torch.Size([2, 512, 512])        |
| 1477    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.14.add                                | input_1             | qint8         | 0.0103107 | -0.6805070   | 0.5258463     | 0.0185271    | 0.0256352        | torch.Size([2, 512, 512])        |
| 1477    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.14.add                                | output              | qint8         | 0.0539698 | -3.8858292   | 6.8002009     | 0.0518817    | 0.7730749        | torch.Size([2, 512, 512])        |
| 1478    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(4)                                  | input               | qint8         | 0.0539698 | -3.8858292   | 6.8002009     | 0.0518817    | 0.7730749        | torch.Size([2, 512, 512])        |
| 1478    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(4)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 1478    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(4)                                  | output              | qint16        | 0.0015259 | -7.2723389   | 9.0347290     | -0.0100933   | 1.0092525        | torch.Size([2, 512, 256])        |
| 1479    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(5)                                 | input               | qint16        | 0.0015259 | -7.2723389   | 9.0347290     | -0.0100933   | 1.0092525        | torch.Size([2, 512, 256])        |
| 1479    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(5)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 1479    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(5)                                 | output              | qint16        | 0.0001526 | -4.1218567   | 4.3327332     | 0.0003103    | 0.0515647        | torch.Size([2, 512, 512])        |
| 1480    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.15.query_cat                          | input_0             | qint16        | 0.0015259 | -7.2723389   | 9.0347290     | -0.0100933   | 1.0092525        | torch.Size([2, 512, 256])        |
| 1480    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.15.query_cat                          | input_1             | qint8         | 0.0569265 | -1.7647222   | 7.2296681     | 0.0575908    | 0.8505152        | torch.Size([2, 512, 256])        |
| 1480    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.15.query_cat                          | output              | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([2, 512, 512])        |
| 1481    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.15.key_cat                            | input_0             | qint16        | 0.0015259 | -7.2723389   | 9.0347290     | -0.0100933   | 1.0092525        | torch.Size([2, 512, 256])        |
| 1481    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.15.key_cat                            | input_1             | qint8         | 0.0569265 | -1.7647222   | 7.2296681     | 0.0575908    | 0.8505152        | torch.Size([2, 512, 256])        |
| 1481    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.15.key_cat                            | output              | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([2, 512, 512])        |
| 1482    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([2, 512, 512])        |
| 1482    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1483    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([2, 512, 512])        |
| 1483    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1484    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint16        | 0.0001526 | -4.1218567   | 4.3327332     | 0.0003103    | 0.0515647        | torch.Size([2, 512, 512])        |
| 1484    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint16        | 0.0001526 | -4.1218567   | 4.3327332     | 0.0003103    | 0.0515647        | torch.Size([512, 2, 512])        |
| 1485    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | input_0             | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1485    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | output              | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1486    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | input_0             | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1486    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | output              | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1487    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | input_0             | qint16        | 0.0001526 | -4.1218567   | 4.3327332     | 0.0003103    | 0.0515647        | torch.Size([512, 2, 512])        |
| 1487    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | output              | qint16        | 0.0001526 | -4.1218567   | 4.3327332     | 0.0003103    | 0.0515647        | torch.Size([512, 2, 512])        |
| 1488    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.q_proj                        | input               | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1488    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.q_proj                        | weight              | torch.float32 |           | -0.3136347   | 0.3103172     | -0.0000785   | 0.0029793        | torch.Size([512, 512])           |
| 1488    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.q_proj                        | bias                | torch.float32 |           | -0.0943940   | 0.0701011     | -0.0003392   | 0.0006187        | torch.Size([512])                |
| 1488    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.q_proj                        | output              | qint8         | 0.0872486 | -11.1678171  | 11.0805683    | -0.0003967   | 7.5958982        | torch.Size([512, 2, 512])        |
| 1489    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.k_proj                        | input               | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([512, 2, 512])        |
| 1489    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.k_proj                        | weight              | torch.float32 |           | -0.3332908   | 0.3325517     | -0.0000534   | 0.0031501        | torch.Size([512, 512])           |
| 1489    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.k_proj                        | bias                | torch.float32 |           | -0.1813514   | 0.2414232     | -0.0016250   | 0.0011009        | torch.Size([512])                |
| 1489    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.k_proj                        | output              | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([512, 2, 512])        |
| 1490    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.v_proj                        | input               | qint16        | 0.0001526 | -4.1218567   | 4.3327332     | 0.0003103    | 0.0515647        | torch.Size([512, 2, 512])        |
| 1490    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.v_proj                        | weight              | torch.float32 |           | -0.3830613   | 0.3038961     | 0.0000100    | 0.0012182        | torch.Size([512, 512])           |
| 1490    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.v_proj                        | bias                | torch.float32 |           | -0.2282076   | 0.3300797     | 0.0050480    | 0.0049596        | torch.Size([512])                |
| 1490    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.v_proj                        | output              | qint8         | 0.0238383 | -2.7414000   | 2.9559443     | -0.0000050   | 0.0739058        | torch.Size([512, 2, 512])        |
| 1491    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | input_0             | qint8         | 0.0872486 | -11.1678171  | 11.0805683    | -0.0003967   | 7.5958982        | torch.Size([512, 2, 512])        |
| 1491    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | output              | qint8         | 0.0872486 | -11.1678171  | 11.0805683    | -0.0003967   | 7.5958982        | torch.Size([512, 16, 64])        |
| 1492    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0872486 | -11.1678171  | 11.0805683    | -0.0003967   | 7.5958982        | torch.Size([512, 16, 64])        |
| 1492    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0872486 | -11.1678171  | 11.0805683    | -0.0003967   | 7.5958982        | torch.Size([16, 512, 64])        |
| 1493    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | input_0             | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([512, 2, 512])        |
| 1493    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | output              | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([512, 16, 64])        |
| 1494    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([512, 16, 64])        |
| 1494    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([16, 512, 64])        |
| 1495    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | input_0             | qint8         | 0.0238383 | -2.7414000   | 2.9559443     | -0.0000050   | 0.0739058        | torch.Size([512, 2, 512])        |
| 1495    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | output              | qint8         | 0.0238383 | -2.7414000   | 2.9559443     | -0.0000050   | 0.0739058        | torch.Size([512, 16, 64])        |
| 1496    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0238383 | -2.7414000   | 2.9559443     | -0.0000050   | 0.0739058        | torch.Size([512, 16, 64])        |
| 1496    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0238383 | -2.7414000   | 2.9559443     | -0.0000050   | 0.0739058        | torch.Size([16, 512, 64])        |
| 1497    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.15.attn.q_scale_mul                   | input_0             | qint8         | 0.0872486 | -11.1678171  | 11.0805683    | -0.0003967   | 7.5958982        | torch.Size([16, 512, 64])        |
| 1497    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.15.attn.q_scale_mul                   | output              | qint8         | 0.0109061 | -1.3959771   | 1.3850710     | -0.0000496   | 0.1186859        | torch.Size([16, 512, 64])        |
| 1498    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([16, 512, 64])        |
| 1498    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([16, 64, 512])        |
| 1499    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.15.attn.matmul                        | input_0             | qint8         | 0.0109061 | -1.3959771   | 1.3850710     | -0.0000496   | 0.1186859        | torch.Size([16, 512, 64])        |
| 1499    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.15.attn.matmul                        | input_1             | qint8         | 0.0834008 | -10.6753054  | 10.5919046    | 0.0105377    | 6.8614974        | torch.Size([16, 64, 512])        |
| 1499    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.15.attn.matmul                        | output              | qint8         | 1.0607780 | -117.7463608 | 134.7188110   | -2.7746854   | 611.4307251      | torch.Size([16, 512, 512])       |
| 1500    | torch.Tensor.max                                                            | head.layers.15.attn.softmax                       | input               | qint8         | 1.0607780 | -117.7463608 | 134.7188110   | -2.7746854   | 611.4307251      | torch.Size([16, 512, 512])       |
| 1500    | torch.Tensor.max                                                            | head.layers.15.attn.softmax                       | output_0            | qint8         | 1.0607780 | 5.3038902    | 134.7188110   | 47.0219116   | 763.8713989      | torch.Size([16, 512, 1])         |
| 1500    | torch.Tensor.max                                                            | head.layers.15.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 509.0000000   | 284.6033936  | 11225.1279297    | torch.Size([16, 512, 1])         |
| 1501    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.15.attn.softmax.sub                   | input_0             | qint8         | 1.0607780 | -117.7463608 | 134.7188110   | -2.7746854   | 611.4307251      | torch.Size([16, 512, 512])       |
| 1501    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.15.attn.softmax.sub                   | input_1             | qint8         | 1.0607780 | 5.3038902    | 134.7188110   | 47.0219116   | 763.8713989      | torch.Size([16, 512, 1])         |
| 1501    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.15.attn.softmax.sub                   | output              | qint16        | 0.0100352 | -229.1236877 | 0.0000000     | -49.7966652  | 1284.5112305     | torch.Size([16, 512, 512])       |
| 1502    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.15.attn.softmax.exp                   | input               | qint16        | 0.0100352 | -229.1236877 | 0.0000000     | -49.7966652  | 1284.5112305     | torch.Size([16, 512, 512])       |
| 1502    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.15.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0083160    | 0.0059244        | torch.Size([16, 512, 512])       |
| 1503    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.15.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0083160    | 0.0059244        | torch.Size([16, 512, 512])       |
| 1503    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.15.attn.softmax.sum                   | output              | qint16        | 0.0016268 | 1.0004582    | 53.3040886    | 3.7180023    | 34.1939163       | torch.Size([16, 512, 1])         |
| 1504    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.15.attn.softmax.reciprocal            | input               | qint16        | 0.0016268 | 1.0004582    | 53.3040886    | 3.7180023    | 34.1939163       | torch.Size([16, 512, 1])         |
| 1504    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.15.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0187686    | 0.9995270     | 0.5516784    | 0.0954583        | torch.Size([16, 512, 1])         |
| 1505    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.15.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0083160    | 0.0059244        | torch.Size([16, 512, 512])       |
| 1505    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.15.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0187686    | 0.9995270     | 0.5516784    | 0.0954583        | torch.Size([16, 512, 1])         |
| 1505    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.15.attn.softmax.mul                   | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019388    | 0.0008846        | torch.Size([16, 512, 512])       |
| 1506    | torch.nn.modules.dropout.Dropout                                            | head.layers.15.attn.attention_drop                | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019388    | 0.0008846        | torch.Size([16, 512, 512])       |
| 1506    | torch.nn.modules.dropout.Dropout                                            | head.layers.15.attn.attention_drop                | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019388    | 0.0008846        | torch.Size([16, 512, 512])       |
| 1507    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.15.attn.attn_matmul                   | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019388    | 0.0008846        | torch.Size([16, 512, 512])       |
| 1507    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.15.attn.attn_matmul                   | input_1             | qint8         | 0.0238383 | -2.7414000   | 2.9559443     | -0.0000050   | 0.0739058        | torch.Size([16, 512, 64])        |
| 1507    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.15.attn.attn_matmul                   | output              | qint8         | 0.0215847 | -1.8994493   | 2.4606504     | -0.0017910   | 0.0564851        | torch.Size([16, 512, 64])        |
| 1508    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0215847 | -1.8994493   | 2.4606504     | -0.0017910   | 0.0564851        | torch.Size([16, 512, 64])        |
| 1508    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0215847 | -1.8994493   | 2.4606504     | -0.0017910   | 0.0564851        | torch.Size([512, 16, 64])        |
| 1509    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | input_0             | qint8         | 0.0215847 | -1.8994493   | 2.4606504     | -0.0017910   | 0.0564851        | torch.Size([512, 16, 64])        |
| 1509    | torch.Tensor.reshape                                                        | head.layers.15.attn                               | output              | qint8         | 0.0215847 | -1.8994493   | 2.4606504     | -0.0017910   | 0.0564851        | torch.Size([512, 2, 512])        |
| 1510    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.out_proj                      | input               | qint8         | 0.0215847 | -1.8994493   | 2.4606504     | -0.0017910   | 0.0564851        | torch.Size([512, 2, 512])        |
| 1510    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.out_proj                      | weight              | torch.float32 |           | -0.2006125   | 0.2132747     | 0.0000258    | 0.0022547        | torch.Size([512, 512])           |
| 1510    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.out_proj                      | bias                | torch.float32 |           | -0.4402698   | 0.3843731     | -0.0079231   | 0.0224835        | torch.Size([512])                |
| 1510    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.15.attn.out_proj                      | output              | qint8         | 0.0170725 | -2.1682026   | 1.9291881     | -0.0028686   | 0.1881092        | torch.Size([512, 2, 512])        |
| 1511    | torch.Tensor.view                                                           | head.layers.15.attn                               | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019388    | 0.0008846        | torch.Size([16, 512, 512])       |
| 1511    | torch.Tensor.view                                                           | head.layers.15.attn                               | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019388    | 0.0008846        | torch.Size([2, 8, 512, 512])     |
| 1512    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.15.attn.attn_weights_mean             | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0019388    | 0.0008846        | torch.Size([2, 8, 512, 512])     |
| 1512    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.15.attn.attn_weights_mean             | output              | qint8         | 0.0020399 | 0.0000000    | 0.2060253     | 0.0018660    | 0.0001141        | torch.Size([2, 512, 512])        |
| 1513    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | input_0             | qint8         | 0.0170725 | -2.1682026   | 1.9291881     | -0.0028686   | 0.1881092        | torch.Size([512, 2, 512])        |
| 1513    | torch.Tensor.transpose                                                      | head.layers.15.attn                               | output              | qint8         | 0.0170725 | -2.1682026   | 1.9291881     | -0.0028686   | 0.1881092        | torch.Size([2, 512, 512])        |
| 1514    | torch.nn.modules.dropout.Dropout                                            | head.layers.15.dropout                            | input               | qint8         | 0.0170725 | -2.1682026   | 1.9291881     | -0.0028686   | 0.1881092        | torch.Size([2, 512, 512])        |
| 1514    | torch.nn.modules.dropout.Dropout                                            | head.layers.15.dropout                            | output              | qint8         | 0.0170725 | -2.1682026   | 1.9291881     | -0.0028686   | 0.1881092        | torch.Size([2, 512, 512])        |
| 1515    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.15.add                                | input_0             | qint8         | 0.0598410 | -7.3005981   | 7.5998030     | 0.0192102    | 0.9235319        | torch.Size([2, 512, 512])        |
| 1515    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.15.add                                | input_1             | qint8         | 0.0170725 | -2.1682026   | 1.9291881     | -0.0028686   | 0.1881092        | torch.Size([2, 512, 512])        |
| 1515    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.15.add                                | output              | qint8         | 0.0646338 | -7.3682542   | 7.9499583     | 0.0164208    | 1.1053233        | torch.Size([2, 512, 512])        |
| 1516    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(5)                                  | input               | qint8         | 0.0646338 | -7.3682542   | 7.9499583     | 0.0164208    | 1.1053233        | torch.Size([2, 512, 512])        |
| 1516    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(5)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 1516    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(5)                                  | output              | qint16        | 0.0015259 | -50.0000000  | 37.2055054    | 0.0434950    | 15.6119661       | torch.Size([2, 512, 256])        |
| 1517    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.16.input_mean.mean                    | input_0             | qint16        | 0.0015259 | -50.0000000  | 37.2055054    | 0.0434950    | 15.6119661       | torch.Size([2, 512, 256])        |
| 1517    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.16.input_mean.mean                    | output              | qint16        | 0.0000048 | -0.0711978   | 0.1523922     | 0.0434947    | 0.0018670        | torch.Size([2, 512, 1])          |
| 1518    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.16.sub                                | input_0             | qint16        | 0.0015259 | -50.0000000  | 37.2055054    | 0.0434950    | 15.6119661       | torch.Size([2, 512, 256])        |
| 1518    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.16.sub                                | input_1             | qint16        | 0.0000048 | -0.0711978   | 0.1523922     | 0.0434947    | 0.0018670        | torch.Size([2, 512, 1])          |
| 1518    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.16.sub                                | output              | qint16        | 0.0015692 | -50.0853348  | 37.1297150    | -0.0000116   | 15.6101112       | torch.Size([2, 512, 256])        |
| 1519    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.mul                                | input_0             | qint16        | 0.0015692 | -50.0853348  | 37.1297150    | -0.0000116   | 15.6101112       | torch.Size([2, 512, 256])        |
| 1519    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.mul                                | input_1             | qint16        | 0.0015692 | -50.0853348  | 37.1297150    | -0.0000116   | 15.6101112       | torch.Size([2, 512, 256])        |
| 1519    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.mul                                | output              | qint16        | 0.0806943 | 0.0000000    | 2508.5444336  | 15.6092224   | 9935.1562500     | torch.Size([2, 512, 256])        |
| 1520    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.16.var_mean.mean                      | input_0             | qint16        | 0.0806943 | 0.0000000    | 2508.5444336  | 15.6092224   | 9935.1562500     | torch.Size([2, 512, 256])        |
| 1520    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.16.var_mean.mean                      | output              | qint16        | 0.0012447 | 5.0794349    | 38.7834320    | 15.6091452   | 71.2869797       | torch.Size([2, 512, 1])          |
| 1521    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.16.rsqrt                              | input               | qint16        | 0.0012447 | 5.0794349    | 38.7834320    | 15.6091452   | 71.2869797       | torch.Size([2, 512, 1])          |
| 1521    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.16.rsqrt                              | output              | qint16        | 0.0000140 | 0.1605717    | 0.4437034     | 0.2884148    | 0.0081517        | torch.Size([2, 512, 1])          |
| 1522    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.out_mul                            | input_0             | qint16        | 0.0015692 | -50.0853348  | 37.1297150    | -0.0000116   | 15.6101112       | torch.Size([2, 512, 256])        |
| 1522    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.out_mul                            | input_1             | qint16        | 0.0000140 | 0.1605717    | 0.4437034     | 0.2884148    | 0.0081517        | torch.Size([2, 512, 1])          |
| 1522    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.out_mul                            | output              | qint16        | 0.0002603 | -8.5210152   | 6.2461581     | -0.0000040   | 1.0000886        | torch.Size([2, 512, 256])        |
| 1523    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.16.weight_quant                       | input               | torch.float32 |           | 0.7322687    | 0.9884943     | 0.8490973    | 0.0018859        | torch.Size([256])                |
| 1523    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.16.weight_quant                       | output              | qint16        | 0.0000302 | 0.7322716    | 0.9884792     | 0.8490971    | 0.0018858        | torch.Size([256])                |
| 1524    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.weight_mul                         | input_0             | qint16        | 0.0002603 | -8.5210152   | 6.2461581     | -0.0000040   | 1.0000886        | torch.Size([2, 512, 256])        |
| 1524    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.weight_mul                         | input_1             | qint16        | 0.0000302 | 0.7322716    | 0.9884792     | 0.8490971    | 0.0018858        | torch.Size([256])                |
| 1524    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.16.weight_mul                         | output              | qint16        | 0.0002319 | -7.5921192   | 4.8221345     | -0.0025642   | 0.7272558        | torch.Size([2, 512, 256])        |
| 1525    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.16.bias_quant                         | input               | torch.float32 |           | -0.1939087   | 0.1560507     | -0.0045885   | 0.0017009        | torch.Size([256])                |
| 1525    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.16.bias_quant                         | output              | qint16        | 0.0000059 | -0.1939116   | 0.1560501     | -0.0045886   | 0.0017009        | torch.Size([256])                |
| 1526    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.16.bias_add                           | input_0             | qint16        | 0.0002319 | -7.5921192   | 4.8221345     | -0.0025642   | 0.7272558        | torch.Size([2, 512, 256])        |
| 1526    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.16.bias_add                           | input_1             | qint16        | 0.0000059 | -0.1939116   | 0.1560501     | -0.0045886   | 0.0017009        | torch.Size([256])                |
| 1526    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.16.bias_add                           | output              | qint8         | 0.0556755 | -7.1264648   | 4.7880936     | -0.0070023   | 0.6926007        | torch.Size([2, 512, 256])        |
| 1527    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.kps_generator.offset               | input               | qint8         | 0.0556755 | -7.1264648   | 4.7880936     | -0.0070023   | 0.6926007        | torch.Size([2, 512, 256])        |
| 1527    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.kps_generator.offset               | weight              | torch.float32 |           | -0.1968990   | 0.1851189     | 0.0002006    | 0.0033782        | torch.Size([24, 256])            |
| 1527    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.kps_generator.offset               | bias                | torch.float32 |           | -0.0576364   | 0.0380543     | -0.0028053   | 0.0006696        | torch.Size([24])                 |
| 1527    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.kps_generator.offset               | output              | qint16        | 0.0001834 | -4.9444036   | 5.3722067     | -0.1392038   | 1.3516673        | torch.Size([2, 512, 24])         |
| 1528    | torch.Tensor.view                                                           | head.layers.17.kps_generator                      | input_0             | qint16        | 0.0001834 | -4.9444036   | 5.3722067     | -0.1392038   | 1.3516673        | torch.Size([2, 512, 24])         |
| 1528    | torch.Tensor.view                                                           | head.layers.17.kps_generator                      | output              | qint16        | 0.0001834 | -4.9444036   | 5.3722067     | -0.1392038   | 1.3516673        | torch.Size([2, 512, 8, 3])       |
| 1529    | torch.Tensor.__getitem__                                                    | head.layers.17.kps_generator                      | input_0             | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1529    | torch.Tensor.__getitem__                                                    | head.layers.17.kps_generator                      | output              | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.7208947    | 272.6247253      | torch.Size([2, 512, 1, 3])       |
| 1530    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.kps_generator.keypoints_add        | input_0             | qint16        | 0.0001834 | -4.9444036   | 5.3722067     | -0.1392038   | 1.3516673        | torch.Size([2, 512, 8, 3])       |
| 1530    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.kps_generator.keypoints_add        | input_1             | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.7208947    | 272.6247253      | torch.Size([2, 512, 1, 3])       |
| 1530    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.kps_generator.keypoints_add        | output              | qint16        | 0.0018546 | -55.9007034  | 56.3606529    | 0.5816829    | 275.4451294      | torch.Size([2, 512, 8, 3])       |
| 1531    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.weight_add                         | input_0             | qint8         | 0.0556755 | -7.1264648   | 4.7880936     | -0.0070023   | 0.6926007        | torch.Size([2, 512, 256])        |
| 1531    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.weight_add                         | input_1             | qint8         | 0.0569265 | -1.7647222   | 7.2296681     | 0.0575908    | 0.8505152        | torch.Size([2, 512, 256])        |
| 1531    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.weight_add                         | output              | qint8         | 0.0597181 | -7.5244751   | 7.5841932     | 0.0498919    | 1.4371779        | torch.Size([2, 512, 256])        |
| 1532    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 1532    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 1533    | torch.Tensor.reshape                                                        | head.layers.17                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 1533    | torch.Tensor.reshape                                                        | head.layers.17                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 1534    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.0                   | input               | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 1534    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.0                   | weight              | torch.float32 |           | -0.4340022   | 0.4555438     | -0.0011310   | 0.0120533        | torch.Size([256, 12])            |
| 1534    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.0                   | bias                | torch.float32 |           | -0.3300059   | 0.3633537     | 0.0122508    | 0.0318757        | torch.Size([256])                |
| 1534    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.0                   | output              | torch.float32 |           | -1.1883903   | 1.5345422     | 0.0200240    | 0.2620578        | torch.Size([2, 6, 256])          |
| 1535    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.17.camera_encoder.1                   | input               | torch.float32 |           | -1.1883903   | 1.5345422     | 0.0200240    | 0.2620578        | torch.Size([2, 6, 256])          |
| 1535    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.17.camera_encoder.1                   | output              | qint8         | 0.0119473 | 0.0000000    | 1.5173025     | 0.2254424    | 0.1043183        | torch.Size([2, 6, 256])          |
| 1536    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.input_mean.mean   | input_0             | qint8         | 0.0119473 | 0.0000000    | 1.5173025     | 0.2254424    | 0.1043183        | torch.Size([2, 6, 256])          |
| 1536    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.input_mean.mean   | output              | qint16        | 0.0000079 | 0.1725336    | 0.2585912     | 0.2254423    | 0.0008029        | torch.Size([2, 6, 1])            |
| 1537    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.2.sub               | input_0             | qint8         | 0.0119473 | 0.0000000    | 1.5173025     | 0.2254424    | 0.1043183        | torch.Size([2, 6, 256])          |
| 1537    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.2.sub               | input_1             | qint16        | 0.0000079 | 0.1725336    | 0.2585912     | 0.2254423    | 0.0008029        | torch.Size([2, 6, 1])            |
| 1537    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.2.sub               | output              | qint16        | 0.0000390 | -0.2585975   | 1.2694432     | 0.0000008    | 0.1035818        | torch.Size([2, 6, 256])          |
| 1538    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.mul               | input_0             | qint16        | 0.0000390 | -0.2585975   | 1.2694432     | 0.0000008    | 0.1035818        | torch.Size([2, 6, 256])          |
| 1538    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.mul               | input_1             | qint16        | 0.0000390 | -0.2585975   | 1.2694432     | 0.0000008    | 0.1035818        | torch.Size([2, 6, 256])          |
| 1538    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.mul               | output              | qint16        | 0.0000500 | 0.0000000    | 1.6114615     | 0.1035514    | 0.0377505        | torch.Size([2, 6, 256])          |
| 1539    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.var_mean.mean     | input_0             | qint16        | 0.0000500 | 0.0000000    | 1.6114615     | 0.1035514    | 0.0377505        | torch.Size([2, 6, 256])          |
| 1539    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.var_mean.mean     | output              | qint16        | 0.0000041 | 0.0562293    | 0.1330580     | 0.1035519    | 0.0007278        | torch.Size([2, 6, 1])            |
| 1540    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.17.camera_encoder.2.rsqrt             | input               | qint16        | 0.0000041 | 0.0562293    | 0.1330580     | 0.1035519    | 0.0007278        | torch.Size([2, 6, 1])            |
| 1540    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.17.camera_encoder.2.rsqrt             | output              | qint16        | 0.0001278 | 2.7413006    | 4.1870227     | 3.1970978    | 0.2553621        | torch.Size([2, 6, 1])            |
| 1541    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.out_mul           | input_0             | qint16        | 0.0000390 | -0.2585975   | 1.2694432     | 0.0000008    | 0.1035818        | torch.Size([2, 6, 256])          |
| 1541    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.out_mul           | input_1             | qint16        | 0.0001278 | 2.7413006    | 4.1870227     | 3.1970978    | 0.2553621        | torch.Size([2, 6, 1])            |
| 1541    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.out_mul           | output              | qint16        | 0.0001266 | -0.7316954   | 4.0799174     | 0.0000056    | 0.9990046        | torch.Size([2, 6, 256])          |
| 1542    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.2.weight_quant      | input               | torch.float32 |           | 0.8256041    | 1.2137457     | 0.9921471    | 0.0037993        | torch.Size([256])                |
| 1542    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.2.weight_quant      | output              | qint16        | 0.0000370 | 0.8256100    | 1.2137271     | 0.9921462    | 0.0037992        | torch.Size([256])                |
| 1543    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.weight_mul        | input_0             | qint16        | 0.0001266 | -0.7316954   | 4.0799174     | 0.0000056    | 0.9990046        | torch.Size([2, 6, 256])          |
| 1543    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.weight_mul        | input_1             | qint16        | 0.0000370 | 0.8256100    | 1.2137271     | 0.9921462    | 0.0037992        | torch.Size([256])                |
| 1543    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.weight_mul        | output              | qint16        | 0.0001328 | -0.8422537   | 4.2802167     | -0.0043719   | 1.0048696        | torch.Size([2, 6, 256])          |
| 1544    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.2.bias_quant        | input               | torch.float32 |           | -0.1173504   | 0.1054403     | -0.0015248   | 0.0022785        | torch.Size([256])                |
| 1544    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.2.bias_quant        | output              | qint16        | 0.0000036 | -0.1173522   | 0.1054408     | -0.0015247   | 0.0022785        | torch.Size([256])                |
| 1545    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.2.bias_add          | input_0             | qint16        | 0.0001328 | -0.8422537   | 4.2802167     | -0.0043719   | 1.0048696        | torch.Size([2, 6, 256])          |
| 1545    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.2.bias_add          | input_1             | qint16        | 0.0000036 | -0.1173522   | 0.1054408     | -0.0015247   | 0.0022785        | torch.Size([256])                |
| 1545    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.2.bias_add          | output              | qint8         | 0.0345118 | -0.9318199   | 4.3830042     | -0.0058531   | 1.0501083        | torch.Size([2, 6, 256])          |
| 1546    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.3                   | input               | qint8         | 0.0345118 | -0.9318199   | 4.3830042     | -0.0058531   | 1.0501083        | torch.Size([2, 6, 256])          |
| 1546    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.3                   | weight              | torch.float32 |           | -0.4107684   | 0.3999822     | 0.0008692    | 0.0045543        | torch.Size([256, 256])           |
| 1546    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.3                   | bias                | torch.float32 |           | -0.0767870   | 0.2690172     | -0.0036183   | 0.0019012        | torch.Size([256])                |
| 1546    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.camera_encoder.3                   | output              | torch.float32 |           | -5.6336164   | 52.1698990    | -0.6558539   | 23.1133480       | torch.Size([2, 6, 256])          |
| 1547    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.17.camera_encoder.4                   | input               | torch.float32 |           | -5.6336164   | 52.1698990    | -0.6558539   | 23.1133480       | torch.Size([2, 6, 256])          |
| 1547    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.17.camera_encoder.4                   | output              | qint8         | 0.4094311 | 0.0000000    | 51.9977531    | 0.8401868    | 19.7995529       | torch.Size([2, 6, 256])          |
| 1548    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.input_mean.mean   | input_0             | qint8         | 0.4094311 | 0.0000000    | 51.9977531    | 0.8401868    | 19.7995529       | torch.Size([2, 6, 256])          |
| 1548    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.input_mean.mean   | output              | qint16        | 0.0000269 | 0.8236557    | 0.8668363     | 0.8401841    | 0.0002811        | torch.Size([2, 6, 1])            |
| 1549    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.5.sub               | input_0             | qint8         | 0.4094311 | 0.0000000    | 51.9977531    | 0.8401868    | 19.7995529       | torch.Size([2, 6, 256])          |
| 1549    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.5.sub               | input_1             | qint16        | 0.0000269 | 0.8236557    | 0.8668363     | 0.8401841    | 0.0002811        | torch.Size([2, 6, 1])            |
| 1549    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.5.sub               | output              | qint16        | 0.0015678 | -0.8669683   | 51.1683731    | 0.0001189    | 19.7992229       | torch.Size([2, 6, 256])          |
| 1550    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.mul               | input_0             | qint16        | 0.0015678 | -0.8669683   | 51.1683731    | 0.0001189    | 19.7992229       | torch.Size([2, 6, 256])          |
| 1550    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.mul               | input_1             | qint16        | 0.0015678 | -0.8669683   | 51.1683731    | 0.0001189    | 19.7992229       | torch.Size([2, 6, 256])          |
| 1550    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.mul               | output              | qint16        | 0.0805377 | 0.0000000    | 2618.2014160  | 19.7913666   | 31735.1933594    | torch.Size([2, 6, 256])          |
| 1551    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.var_mean.mean     | input_0             | qint16        | 0.0805377 | 0.0000000    | 2618.2014160  | 19.7913666   | 31735.1933594    | torch.Size([2, 6, 256])          |
| 1551    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.var_mean.mean     | output              | qint16        | 0.0006404 | 18.3856449   | 20.9837132    | 19.7877274   | 0.7238078        | torch.Size([2, 6, 1])            |
| 1552    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.17.camera_encoder.5.rsqrt             | input               | qint16        | 0.0006404 | 18.3856449   | 20.9837132    | 19.7877274   | 0.7238078        | torch.Size([2, 6, 1])            |
| 1552    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.17.camera_encoder.5.rsqrt             | output              | qint16        | 0.0000071 | 0.2183007    | 0.2332204     | 0.2249489    | 0.0000241        | torch.Size([2, 6, 1])            |
| 1553    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.out_mul           | input_0             | qint16        | 0.0015678 | -0.8669683   | 51.1683731    | 0.0001189    | 19.7992229       | torch.Size([2, 6, 256])          |
| 1553    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.out_mul           | input_1             | qint16        | 0.0000071 | 0.2183007    | 0.2332204     | 0.2249489    | 0.0000241        | torch.Size([2, 6, 1])            |
| 1553    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.out_mul           | output              | qint16        | 0.0003567 | -0.1947476   | 11.6395578    | 0.0000620    | 1.0005368        | torch.Size([2, 6, 256])          |
| 1554    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.5.weight_quant      | input               | torch.float32 |           | 0.3230061    | 1.5668622     | 0.8971218    | 0.0266640        | torch.Size([256])                |
| 1554    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.5.weight_quant      | output              | qint16        | 0.0000478 | 0.3230077    | 1.5668384     | 0.8971219    | 0.0266639        | torch.Size([256])                |
| 1555    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.weight_mul        | input_0             | qint16        | 0.0003567 | -0.1947476   | 11.6395578    | 0.0000620    | 1.0005368        | torch.Size([2, 6, 256])          |
| 1555    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.weight_mul        | input_1             | qint16        | 0.0000478 | 0.3230077    | 1.5668384     | 0.8971219    | 0.0266639        | torch.Size([256])                |
| 1555    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.weight_mul        | output              | qint16        | 0.0002976 | -0.3050213   | 9.7109861     | -0.0172940   | 0.6443669        | torch.Size([2, 6, 256])          |
| 1556    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.5.bias_quant        | input               | torch.float32 |           | -0.5803625   | 0.6603993     | 0.0418145    | 0.0299207        | torch.Size([256])                |
| 1556    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.camera_encoder.5.bias_quant        | output              | qint16        | 0.0000202 | -0.5803573   | 0.6603892     | 0.0418147    | 0.0299205        | torch.Size([256])                |
| 1557    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.5.bias_add          | input_0             | qint16        | 0.0002976 | -0.3050213   | 9.7109861     | -0.0172940   | 0.6443669        | torch.Size([2, 6, 256])          |
| 1557    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.5.bias_add          | input_1             | qint16        | 0.0000202 | -0.5803573   | 0.6603892     | 0.0418147    | 0.0299205        | torch.Size([256])                |
| 1557    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.5.bias_add          | output              | qint8         | 0.0743304 | -0.8919649   | 9.4399614     | 0.0235670    | 0.6277303        | torch.Size([2, 6, 256])          |
| 1558    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint8         | 0.0597181 | -7.5244751   | 7.5841932     | 0.0498919    | 1.4371779        | torch.Size([2, 512, 256])        |
| 1558    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint8         | 0.0597181 | -7.5244751   | 7.5841932     | 0.0498919    | 1.4371779        | torch.Size([2, 512, 1, 256])     |
| 1559    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint8         | 0.0743304 | -0.8919649   | 9.4399614     | 0.0235670    | 0.6277303        | torch.Size([2, 6, 256])          |
| 1559    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint8         | 0.0743304 | -0.8919649   | 9.4399614     | 0.0235670    | 0.6277303        | torch.Size([2, 1, 6, 256])       |
| 1560    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.cam_add                            | input_0             | qint8         | 0.0597181 | -7.5244751   | 7.5841932     | 0.0498919    | 1.4371779        | torch.Size([2, 512, 1, 256])     |
| 1560    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.cam_add                            | input_1             | qint8         | 0.0743304 | -0.8919649   | 9.4399614     | 0.0235670    | 0.6277303        | torch.Size([2, 1, 6, 256])       |
| 1560    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.17.cam_add                            | output              | qint8         | 0.0662633 | -7.0901752   | 8.4154415     | 0.0733572    | 1.5800140        | torch.Size([2, 512, 6, 256])     |
| 1561    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.weights_fc                         | input               | qint8         | 0.0662633 | -7.0901752   | 8.4154415     | 0.0733572    | 1.5800140        | torch.Size([2, 512, 6, 256])     |
| 1561    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.weights_fc                         | weight              | torch.float32 |           | -0.3149840   | 0.2312223     | -0.0015179   | 0.0027572        | torch.Size([64, 256])            |
| 1561    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.weights_fc                         | bias                | torch.float32 |           | -0.0682593   | 0.0964835     | 0.0102252    | 0.0008483        | torch.Size([64])                 |
| 1561    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.weights_fc                         | output              | qint8         | 0.0673038 | -8.6148863   | 6.5957723     | 0.5495967    | 5.0414100        | torch.Size([2, 512, 6, 64])      |
| 1562    | torch.Tensor.reshape                                                        | head.layers.17                                    | input_0             | qint8         | 0.0673038 | -8.6148863   | 6.5957723     | 0.5495967    | 5.0414100        | torch.Size([2, 512, 6, 64])      |
| 1562    | torch.Tensor.reshape                                                        | head.layers.17                                    | output              | qint8         | 0.0673038 | -8.6148863   | 6.5957723     | 0.5495967    | 5.0414100        | torch.Size([2, 512, 48, 8])      |
| 1563    | torch.Tensor.max                                                            | head.layers.17.weight_softmax                     | input               | qint8         | 0.0673038 | -8.6148863   | 6.5957723     | 0.5495967    | 5.0414100        | torch.Size([2, 512, 48, 8])      |
| 1563    | torch.Tensor.max                                                            | head.layers.17.weight_softmax                     | output_0            | qint8         | 0.0673038 | 2.0864177    | 6.5957723     | 3.7154374    | 0.5693637        | torch.Size([2, 512, 1, 8])       |
| 1563    | torch.Tensor.max                                                            | head.layers.17.weight_softmax                     | output_1            | torch.int64   |           | 0.0000000    | 47.0000000    | 24.5849609   | 245.5051575      | torch.Size([2, 512, 1, 8])       |
| 1564    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.weight_softmax.sub                 | input_0             | qint8         | 0.0673038 | -8.6148863   | 6.5957723     | 0.5495967    | 5.0414100        | torch.Size([2, 512, 48, 8])      |
| 1564    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.weight_softmax.sub                 | input_1             | qint8         | 0.0673038 | 2.0864177    | 6.5957723     | 3.7154374    | 0.5693637        | torch.Size([2, 512, 1, 8])       |
| 1564    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.17.weight_softmax.sub                 | output              | qint16        | 0.0004178 | -12.8552322  | 0.0000000     | -3.1658332   | 5.1619563        | torch.Size([2, 512, 48, 8])      |
| 1565    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.17.weight_softmax.exp                 | input               | qint16        | 0.0004178 | -12.8552322  | 0.0000000     | -3.1658332   | 5.1619563        | torch.Size([2, 512, 48, 8])      |
| 1565    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.17.weight_softmax.exp                 | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2163679    | 0.0957419        | torch.Size([2, 512, 48, 8])      |
| 1566    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.17.weight_softmax.sum                 | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2163679    | 0.0957419        | torch.Size([2, 512, 48, 8])      |
| 1566    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.17.weight_softmax.sum                 | output              | qint16        | 0.0008960 | 4.3097458    | 26.1864090    | 10.3856382   | 8.4560814        | torch.Size([2, 512, 1, 8])       |
| 1567    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.17.weight_softmax.reciprocal          | input               | qint16        | 0.0008960 | 4.3097458    | 26.1864090    | 10.3856382   | 8.4560814        | torch.Size([2, 512, 1, 8])       |
| 1567    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.17.weight_softmax.reciprocal          | output              | qint16        | 0.0000073 | 0.0381904    | 0.2320315     | 0.1045458    | 0.0010880        | torch.Size([2, 512, 1, 8])       |
| 1568    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.weight_softmax.mul                 | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2163679    | 0.0957419        | torch.Size([2, 512, 48, 8])      |
| 1568    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.weight_softmax.mul                 | input_1             | qint16        | 0.0000073 | 0.0381904    | 0.2320315     | 0.1045458    | 0.0010880        | torch.Size([2, 512, 1, 8])       |
| 1568    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.weight_softmax.mul                 | output              | qint8         | 0.0014944 | 0.0000000    | 0.1897901     | 0.0207656    | 0.0009753        | torch.Size([2, 512, 48, 8])      |
| 1569    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint16        | 0.0018546 | -55.9007034  | 56.3606529    | 0.5816829    | 275.4451294      | torch.Size([2, 512, 8, 3])       |
| 1569    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint16        | 0.0018546 | -51.1491165  | 51.2344284    | 0.9429424    | 284.5878296      | torch.Size([2, 512, 8, 1])       |
| 1570    | torch.ones_like                                                             | head.layers.17                                    | input               | qint16        | 0.0018546 | -51.1491165  | 51.2344284    | 0.9429424    | 284.5878296      | torch.Size([2, 512, 8, 1])       |
| 1570    | torch.ones_like                                                             | head.layers.17                                    | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1571    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.point_quant_stub                   | input               | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1571    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.17.point_quant_stub                   | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1572    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.point_cat                          | input_0             | qint16        | 0.0018546 | -55.9007034  | 56.3606529    | 0.5816829    | 275.4451294      | torch.Size([2, 512, 8, 3])       |
| 1572    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.point_cat                          | input_1             | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1572    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.point_cat                          | output              | qint16        | 0.0018311 | -55.9002686  | 56.3598633    | 0.6861891    | 206.6145020      | torch.Size([2, 512, 8, 4])       |
| 1573    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 1573    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1574    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint16        | 0.0018311 | -55.9002686  | 56.3598633    | 0.6861891    | 206.6145020      | torch.Size([2, 512, 8, 4])       |
| 1574    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint16        | 0.0018311 | -55.9002686  | 56.3598633    | 0.6861891    | 206.6145020      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1575    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.point_matmul                       | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1575    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.point_matmul                       | input_1             | qint16        | 0.0018311 | -55.9002686  | 56.3598633    | 0.6861891    | 206.6145020      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1575    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.point_matmul                       | output              | qint16        | 0.0027704 | -83.0137558  | 84.1136246    | 0.3055450    | 94.2311478       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1576    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.17.point_sum                          | input               | qint16        | 0.0027704 | -83.0137558  | 84.1136246    | 0.3055450    | 94.2311478       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1576    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.17.point_sum                          | output              | qint16        | 0.0030088 | -87.3646317  | 90.2230301    | 1.2218955    | 369.9359741      | torch.Size([2, 6, 512, 8, 4])    |
| 1577    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint16        | 0.0030088 | -87.3646317  | 90.2230301    | 1.2218955    | 369.9359741      | torch.Size([2, 6, 512, 8, 4])    |
| 1577    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint16        | 0.0030088 | -57.2612076  | 56.3916512    | -0.5328906   | 406.7762146      | torch.Size([2, 6, 512, 8, 1])    |
| 1578    | torch.clamp                                                                 | head.layers.17                                    | input               | qint16        | 0.0030088 | -57.2612076  | 56.3916512    | -0.5328906   | 406.7762146      | torch.Size([2, 6, 512, 8, 1])    |
| 1578    | torch.clamp                                                                 | head.layers.17                                    | output              | qint16        | 0.0030088 | 0.0000000    | 56.3916512    | 7.1192603    | 145.9317169      | torch.Size([2, 6, 512, 8, 1])    |
| 1579    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.17.reciprocal_op                      | input               | qint16        | 0.0030088 | 0.0000000    | 56.3916512    | 7.1192603    | 145.9317169      | torch.Size([2, 6, 512, 8, 1])    |
| 1579    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.17.reciprocal_op                      | output              | qint16        | 0.0003357 | 0.0177917    | 10.9996643    | 6.2951851    | 27.7497330       | torch.Size([2, 6, 512, 8, 1])    |
| 1580    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint16        | 0.0030088 | -87.3646317  | 90.2230301    | 1.2218955    | 369.9359741      | torch.Size([2, 6, 512, 8, 4])    |
| 1580    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint16        | 0.0030088 | -87.3646317  | 90.2230301    | 2.2107689    | 533.9472656      | torch.Size([2, 6, 512, 8, 2])    |
| 1581    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.point_mul                          | input_0             | qint16        | 0.0030088 | -87.3646317  | 90.2230301    | 2.2107689    | 533.9472656      | torch.Size([2, 6, 512, 8, 2])    |
| 1581    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.point_mul                          | input_1             | qint16        | 0.0003357 | 0.0177917    | 10.9996643    | 6.2951851    | 27.7497330       | torch.Size([2, 6, 512, 8, 1])    |
| 1581    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.point_mul                          | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1698431    | 0.9137216        | torch.Size([2, 6, 512, 8, 2])    |
| 1582    | torch.Tensor.flatten                                                        | head.layers.17                                    | input               | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1698431    | 0.9137216        | torch.Size([2, 6, 512, 8, 2])    |
| 1582    | torch.Tensor.flatten                                                        | head.layers.17                                    | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1698431    | 0.9137216        | torch.Size([12, 512, 8, 2])      |
| 1583    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.17                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 1583    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.17                                    | input_1             | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1698431    | 0.9137216        | torch.Size([12, 512, 8, 2])      |
| 1583    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.17                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([12, 256, 512, 8])    |
| 1584    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.feat_cat                           | input               | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([12, 256, 512, 8])    |
| 1584    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.feat_cat                           | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([12, 256, 512, 8])    |
| 1585    | torch.Tensor.view                                                           | head.layers.17                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([12, 256, 512, 8])    |
| 1585    | torch.Tensor.view                                                           | head.layers.17                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([2, 6, 256, 512, 8])  |
| 1586    | torch.Tensor.permute                                                        | head.layers.17                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([2, 6, 256, 512, 8])  |
| 1586    | torch.Tensor.permute                                                        | head.layers.17                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([2, 512, 6, 8, 256])  |
| 1587    | torch.Tensor.contiguous                                                     | head.layers.17                                    | input               | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333849        | torch.Size([2, 512, 6, 8, 256])  |
| 1587    | torch.Tensor.contiguous                                                     | head.layers.17                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333844        | torch.Size([2, 512, 6, 8, 256])  |
| 1588    | torch.Tensor.view                                                           | head.layers.17                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333844        | torch.Size([2, 512, 6, 8, 256])  |
| 1588    | torch.Tensor.view                                                           | head.layers.17                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333844        | torch.Size([2, 512, 48, 256])    |
| 1589    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | input_0             | qint8         | 0.0014944 | 0.0000000    | 0.1897901     | 0.0207656    | 0.0009753        | torch.Size([2, 512, 48, 8])      |
| 1589    | torch.Tensor.__getitem__                                                    | head.layers.17                                    | output              | qint8         | 0.0014944 | 0.0000000    | 0.1897901     | 0.0207656    | 0.0009753        | torch.Size([2, 512, 48, 8, 1])   |
| 1590    | torch.Tensor.reshape                                                        | head.layers.17                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333844        | torch.Size([2, 512, 48, 256])    |
| 1590    | torch.Tensor.reshape                                                        | head.layers.17                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333844        | torch.Size([2, 512, 48, 8, 32])  |
| 1591    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.feat_mul                           | input_0             | qint8         | 0.0014944 | 0.0000000    | 0.1897901     | 0.0207656    | 0.0009753        | torch.Size([2, 512, 48, 8, 1])   |
| 1591    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.feat_mul                           | input_1             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0287652    | 2.9333844        | torch.Size([2, 512, 48, 8, 32])  |
| 1591    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.17.feat_mul                           | output              | qint8         | 0.0177398 | -2.2706945   | 2.2529547     | 0.0005792    | 0.0039269        | torch.Size([2, 512, 48, 8, 32])  |
| 1592    | torch.Tensor.view                                                           | head.layers.17                                    | input_0             | qint8         | 0.0177398 | -2.2706945   | 2.2529547     | 0.0005792    | 0.0039269        | torch.Size([2, 512, 48, 8, 32])  |
| 1592    | torch.Tensor.view                                                           | head.layers.17                                    | output              | qint8         | 0.0177398 | -2.2706945   | 2.2529547     | 0.0005792    | 0.0039269        | torch.Size([2, 512, 48, 256])    |
| 1593    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.17.feat_sum                           | input               | qint8         | 0.0177398 | -2.2706945   | 2.2529547     | 0.0005792    | 0.0039269        | torch.Size([2, 512, 48, 256])    |
| 1593    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.17.feat_sum                           | output              | qint8         | 0.0349178 | -4.4694753   | 4.4345574     | 0.0278199    | 0.4452421        | torch.Size([2, 512, 256])        |
| 1594    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.output_proj                        | input               | qint8         | 0.0349178 | -4.4694753   | 4.4345574     | 0.0278199    | 0.4452421        | torch.Size([2, 512, 256])        |
| 1594    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.output_proj                        | weight              | torch.float32 |           | -0.2891404   | 0.3089988     | -0.0003690   | 0.0059508        | torch.Size([256, 256])           |
| 1594    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.output_proj                        | bias                | torch.float32 |           | -0.1011890   | 0.0951982     | -0.0002823   | 0.0014432        | torch.Size([256])                |
| 1594    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.17.output_proj                        | output              | qint8         | 0.0497975 | -6.3740845   | 6.3242869     | 0.0393897    | 0.7794623        | torch.Size([2, 512, 256])        |
| 1595    | torch.nn.modules.dropout.Dropout                                            | head.layers.17.proj_drop                          | input               | qint8         | 0.0497975 | -6.3740845   | 6.3242869     | 0.0393897    | 0.7794623        | torch.Size([2, 512, 256])        |
| 1595    | torch.nn.modules.dropout.Dropout                                            | head.layers.17.proj_drop                          | output              | qint8         | 0.0497975 | -6.3740845   | 6.3242869     | 0.0393897    | 0.7794623        | torch.Size([2, 512, 256])        |
| 1596    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.residual_op                        | input_0             | qint8         | 0.0497975 | -6.3740845   | 6.3242869     | 0.0393897    | 0.7794623        | torch.Size([2, 512, 256])        |
| 1596    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.residual_op                        | input_1             | qint8         | 0.0556755 | -7.1264648   | 4.7880936     | -0.0070023   | 0.6926007        | torch.Size([2, 512, 256])        |
| 1596    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.17.residual_op                        | output              | qint8         | 0.0552034 | -7.0660338   | 6.3483896     | 0.0160966    | 0.7358838        | torch.Size([2, 512, 512])        |
| 1597    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.input_mean.mean           | input_0             | qint8         | 0.0552034 | -7.0660338   | 6.3483896     | 0.0160966    | 0.7358838        | torch.Size([2, 512, 512])        |
| 1597    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.input_mean.mean           | output              | qint16        | 0.0000035 | -0.0396761   | 0.0905669     | 0.0160967    | 0.0003629        | torch.Size([2, 512, 1])          |
| 1598    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.18.pre_norm.sub                       | input_0             | qint8         | 0.0552034 | -7.0660338   | 6.3483896     | 0.0160966    | 0.7358838        | torch.Size([2, 512, 512])        |
| 1598    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.18.pre_norm.sub                       | input_1             | qint16        | 0.0000035 | -0.0396761   | 0.0905669     | 0.0160967    | 0.0003629        | torch.Size([2, 512, 1])          |
| 1598    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.18.pre_norm.sub                       | output              | qint16        | 0.0003024 | -7.1566625   | 6.3272772     | -0.0000017   | 0.7355218        | torch.Size([2, 512, 512])        |
| 1599    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.mul                       | input_0             | qint16        | 0.0003024 | -7.1566625   | 6.3272772     | -0.0000017   | 0.7355218        | torch.Size([2, 512, 512])        |
| 1599    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.mul                       | input_1             | qint16        | 0.0003024 | -7.1566625   | 6.3272772     | -0.0000017   | 0.7355218        | torch.Size([2, 512, 512])        |
| 1599    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.mul                       | output              | qint16        | 0.0030050 | 0.0000000    | 51.2174683    | 0.7355363    | 7.6053858        | torch.Size([2, 512, 512])        |
| 1600    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.var_mean.mean             | input_0             | qint16        | 0.0030050 | 0.0000000    | 51.2174683    | 0.7355363    | 7.6053858        | torch.Size([2, 512, 512])        |
| 1600    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.var_mean.mean             | output              | qint16        | 0.0000799 | 0.3612607    | 2.6177416     | 0.7352966    | 0.1153785        | torch.Size([2, 512, 1])          |
| 1601    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.18.pre_norm.rsqrt                     | input               | qint16        | 0.0000799 | 0.3612607    | 2.6177416     | 0.7352966    | 0.1153785        | torch.Size([2, 512, 1])          |
| 1601    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.18.pre_norm.rsqrt                     | output              | qint16        | 0.0000521 | 0.6180562    | 1.6637493     | 1.2441022    | 0.0626790        | torch.Size([2, 512, 1])          |
| 1602    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.out_mul                   | input_0             | qint16        | 0.0003024 | -7.1566625   | 6.3272772     | -0.0000017   | 0.7355218        | torch.Size([2, 512, 512])        |
| 1602    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.out_mul                   | input_1             | qint16        | 0.0000521 | 0.6180562    | 1.6637493     | 1.2441022    | 0.0626790        | torch.Size([2, 512, 1])          |
| 1602    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.out_mul                   | output              | qint16        | 0.0003408 | -10.3645105  | 7.3928509     | -0.0000045   | 1.0000253        | torch.Size([2, 512, 512])        |
| 1603    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.18.pre_norm.weight_quant              | input               | torch.float32 |           | 0.6495609    | 1.5811656     | 1.0579998    | 0.0720950        | torch.Size([512])                |
| 1603    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.18.pre_norm.weight_quant              | output              | qint16        | 0.0000483 | 0.6495482    | 1.5811414     | 1.0580001    | 0.0720956        | torch.Size([512])                |
| 1604    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.weight_mul                | input_0             | qint16        | 0.0003408 | -10.3645105  | 7.3928509     | -0.0000045   | 1.0000253        | torch.Size([2, 512, 512])        |
| 1604    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.weight_mul                | input_1             | qint16        | 0.0000483 | 0.6495482    | 1.5811414     | 1.0580001    | 0.0720956        | torch.Size([512])                |
| 1604    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.weight_mul                | output              | qint16        | 0.0002601 | -7.9086089   | 6.0442567     | 0.0025372    | 0.8619349        | torch.Size([2, 512, 512])        |
| 1605    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.18.pre_norm.bias_quant                | input               | torch.float32 |           | -0.2217483   | 0.2109743     | 0.0011747    | 0.0023772        | torch.Size([512])                |
| 1605    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.18.pre_norm.bias_quant                | output              | qint16        | 0.0000068 | -0.2217517   | 0.2109713     | 0.0011747    | 0.0023772        | torch.Size([512])                |
| 1606    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.18.pre_norm.bias_add                  | input_0             | qint16        | 0.0002601 | -7.9086089   | 6.0442567     | 0.0025372    | 0.8619349        | torch.Size([2, 512, 512])        |
| 1606    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.18.pre_norm.bias_add                  | input_1             | qint16        | 0.0000068 | -0.2217517   | 0.2109713     | 0.0011747    | 0.0023772        | torch.Size([512])                |
| 1606    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.18.pre_norm.bias_add                  | output              | qint8         | 0.0542331 | -6.9418325   | 6.0198703     | 0.0039640    | 0.8471439        | torch.Size([2, 512, 512])        |
| 1607    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.0.0                         | input               | qint8         | 0.0542331 | -6.9418325   | 6.0198703     | 0.0039640    | 0.8471439        | torch.Size([2, 512, 512])        |
| 1607    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.0.0                         | weight              | torch.float32 |           | -0.4454298   | 0.5020626     | -0.0008407   | 0.0058560        | torch.Size([1024, 512])          |
| 1607    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.0.0                         | bias                | torch.float32 |           | -0.1510170   | 0.0629522     | -0.0535287   | 0.0011214        | torch.Size([1024])               |
| 1607    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.0.0                         | output              | torch.float32 |           | -21.0813293  | 16.2964554    | -3.0005283   | 11.8455524       | torch.Size([2, 512, 1024])       |
| 1608    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.18.activate                           | input               | torch.float32 |           | -21.0813293  | 16.2964554    | -3.0005283   | 11.8455524       | torch.Size([2, 512, 1024])       |
| 1608    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.18.activate                           | output              | qint8         | 0.0838644 | 0.0000000    | 10.6507845    | 0.3702299    | 1.6636261        | torch.Size([2, 512, 1024])       |
| 1609    | torch.nn.modules.dropout.Dropout                                            | head.layers.18.layers.0.2                         | input               | qint8         | 0.0838644 | 0.0000000    | 10.6507845    | 0.3702299    | 1.6636261        | torch.Size([2, 512, 1024])       |
| 1609    | torch.nn.modules.dropout.Dropout                                            | head.layers.18.layers.0.2                         | output              | qint8         | 0.0838644 | 0.0000000    | 10.6507845    | 0.3702299    | 1.6636261        | torch.Size([2, 512, 1024])       |
| 1610    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.1                           | input               | qint8         | 0.0838644 | 0.0000000    | 10.6507845    | 0.3702299    | 1.6636261        | torch.Size([2, 512, 1024])       |
| 1610    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.1                           | weight              | torch.float32 |           | -0.3873430   | 0.3617197     | 0.0000918    | 0.0056267        | torch.Size([256, 1024])          |
| 1610    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.1                           | bias                | torch.float32 |           | -0.0861191   | 0.0774464     | -0.0007529   | 0.0010433        | torch.Size([256])                |
| 1610    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.layers.1                           | output              | qint8         | 0.2058181 | -26.3447132  | 26.1388950    | 0.0951692    | 39.9410553       | torch.Size([2, 512, 256])        |
| 1611    | torch.nn.modules.dropout.Dropout                                            | head.layers.18.layers.2                           | input               | qint8         | 0.2058181 | -26.3447132  | 26.1388950    | 0.0951692    | 39.9410553       | torch.Size([2, 512, 256])        |
| 1611    | torch.nn.modules.dropout.Dropout                                            | head.layers.18.layers.2                           | output              | qint8         | 0.2058181 | -26.3447132  | 26.1388950    | 0.0951692    | 39.9410553       | torch.Size([2, 512, 256])        |
| 1612    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.identity_fc                        | input               | qint8         | 0.0542331 | -6.9418325   | 6.0198703     | 0.0039640    | 0.8471439        | torch.Size([2, 512, 512])        |
| 1612    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.identity_fc                        | weight              | torch.float32 |           | -0.3842853   | 0.4044652     | -0.0002469   | 0.0070671        | torch.Size([256, 512])           |
| 1612    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.identity_fc                        | bias                | torch.float32 |           | -0.0906205   | 0.0750783     | -0.0010049   | 0.0010887        | torch.Size([256])                |
| 1612    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.18.identity_fc                        | output              | torch.float32 |           | -14.9567852  | 13.8220816    | 0.0027379    | 12.6454487       | torch.Size([2, 512, 256])        |
| 1613    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.18.short_add                          | input_0             | torch.float32 |           | -14.9567852  | 13.8220816    | 0.0027379    | 12.6454487       | torch.Size([2, 512, 256])        |
| 1613    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.18.short_add                          | input_1             | qint8         | 0.2058181 | -26.3447132  | 26.1388950    | 0.0951692    | 39.9410553       | torch.Size([2, 512, 256])        |
| 1613    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.18.short_add                          | output              | qint8         | 0.2513148 | -32.1682968  | 31.9169827    | 0.0997939    | 65.3701248       | torch.Size([2, 512, 256])        |
| 1614    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.19.input_mean.mean                    | input_0             | qint8         | 0.2513148 | -32.1682968  | 31.9169827    | 0.0997939    | 65.3701248       | torch.Size([2, 512, 256])        |
| 1614    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.19.input_mean.mean                    | output              | qint16        | 0.0000107 | -0.2287360   | 0.3506266     | 0.0688903    | 0.0279774        | torch.Size([2, 512, 1])          |
| 1615    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.19.sub                                | input_0             | qint8         | 0.2513148 | -32.1682968  | 31.9169827    | 0.0997939    | 65.3701248       | torch.Size([2, 512, 256])        |
| 1615    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.19.sub                                | input_1             | qint16        | 0.0000107 | -0.2287360   | 0.3506266     | 0.0688903    | 0.0279774        | torch.Size([2, 512, 1])          |
| 1615    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.19.sub                                | output              | qint16        | 0.0012761 | -32.5184021  | 31.5779305    | 0.0309046    | 65.3246460       | torch.Size([2, 512, 256])        |
| 1616    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.mul                                | input_0             | qint16        | 0.0012761 | -32.5184021  | 31.5779305    | 0.0309046    | 65.3246460       | torch.Size([2, 512, 256])        |
| 1616    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.mul                                | input_1             | qint16        | 0.0012761 | -32.5184021  | 31.5779305    | 0.0309046    | 65.3246460       | torch.Size([2, 512, 256])        |
| 1616    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.mul                                | output              | qint16        | 0.0541531 | 0.0000000    | 1057.4483643  | 65.3250732   | 25562.8593750    | torch.Size([2, 512, 256])        |
| 1617    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.19.var_mean.mean                      | input_0             | qint16        | 0.0541531 | 0.0000000    | 1057.4483643  | 65.3250732   | 25562.8593750    | torch.Size([2, 512, 256])        |
| 1617    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.19.var_mean.mean                      | output              | qint16        | 0.0070490 | 5.8295250    | 230.9746552   | 61.4042168   | 6995.4702148     | torch.Size([2, 512, 1])          |
| 1618    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.19.rsqrt                              | input               | qint16        | 0.0070490 | 5.8295250    | 230.9746552   | 61.4042168   | 6995.4702148     | torch.Size([2, 512, 1])          |
| 1618    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.19.rsqrt                              | output              | qint16        | 0.0000120 | 0.0657928    | 0.3934000     | 0.2320964    | 0.0106879        | torch.Size([2, 512, 1])          |
| 1619    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.out_mul                            | input_0             | qint16        | 0.0012761 | -32.5184021  | 31.5779305    | 0.0309046    | 65.3246460       | torch.Size([2, 512, 256])        |
| 1619    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.out_mul                            | input_1             | qint16        | 0.0000120 | 0.0657928    | 0.3934000     | 0.2320964    | 0.0106879        | torch.Size([2, 512, 1])          |
| 1619    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.out_mul                            | output              | qint16        | 0.0001588 | -4.7193580   | 4.0358648     | 0.0020325    | 1.0167639        | torch.Size([2, 512, 256])        |
| 1620    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.19.weight_quant                       | input               | torch.float32 |           | 0.6796300    | 1.0328771     | 0.8834044    | 0.0047104        | torch.Size([256])                |
| 1620    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.19.weight_quant                       | output              | qint16        | 0.0000315 | 0.6796327    | 1.0328614     | 0.8834054    | 0.0047102        | torch.Size([256])                |
| 1621    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.weight_mul                         | input_0             | qint16        | 0.0001588 | -4.7193580   | 4.0358648     | 0.0020325    | 1.0167639        | torch.Size([2, 512, 256])        |
| 1621    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.weight_mul                         | input_1             | qint16        | 0.0000315 | 0.6796327    | 1.0328614     | 0.8834054    | 0.0047102        | torch.Size([256])                |
| 1621    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.19.weight_mul                         | output              | qint16        | 0.0001374 | -4.1926208   | 3.6803908     | 0.0035206    | 0.7967686        | torch.Size([2, 512, 256])        |
| 1622    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.19.bias_quant                         | input               | torch.float32 |           | -0.0769484   | 0.1481542     | 0.0026473    | 0.0013678        | torch.Size([256])                |
| 1622    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.19.bias_quant                         | output              | qint16        | 0.0000045 | -0.0769493   | 0.1481520     | 0.0026474    | 0.0013678        | torch.Size([256])                |
| 1623    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.19.bias_add                           | input_0             | qint16        | 0.0001374 | -4.1926208   | 3.6803908     | 0.0035206    | 0.7967686        | torch.Size([2, 512, 256])        |
| 1623    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.19.bias_add                           | input_1             | qint16        | 0.0000045 | -0.0769493   | 0.1481520     | 0.0026474    | 0.0013678        | torch.Size([256])                |
| 1623    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.19.bias_add                           | output              | qint8         | 0.0278524 | -3.5651109   | 3.5372584     | 0.0062697    | 0.7816463        | torch.Size([2, 512, 256])        |
| 1624    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.add1                               | input_0             | qint8         | 0.0278524 | -3.5651109   | 3.5372584     | 0.0062697    | 0.7816463        | torch.Size([2, 512, 256])        |
| 1624    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.add1                               | input_1             | qint8         | 0.0569265 | -1.7647222   | 7.2296681     | 0.0575908    | 0.8505152        | torch.Size([2, 512, 256])        |
| 1624    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.add1                               | output              | qint8         | 0.0584720 | -4.1515126   | 7.4259453     | 0.0639848    | 1.3051810        | torch.Size([2, 512, 256])        |
| 1625    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.0                           | input               | qint8         | 0.0584720 | -4.1515126   | 7.4259453     | 0.0639848    | 1.3051810        | torch.Size([2, 512, 256])        |
| 1625    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.0                           | weight              | torch.float32 |           | -0.5312872   | 0.8384986     | 0.0000412    | 0.0048373        | torch.Size([256, 256])           |
| 1625    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.0                           | bias                | torch.float32 |           | -0.1474053   | 0.0710347     | -0.0397527   | 0.0019485        | torch.Size([256])                |
| 1625    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.0                           | output              | torch.float32 |           | -10.5273952  | 10.2535400    | -0.8430147   | 4.5146055        | torch.Size([2, 512, 256])        |
| 1626    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.1                           | input               | torch.float32 |           | -10.5273952  | 10.2535400    | -0.8430147   | 4.5146055        | torch.Size([2, 512, 256])        |
| 1626    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.1                           | output              | qint8         | 0.0620404 | 0.0000000    | 7.8791361     | 0.4929510    | 0.8862640        | torch.Size([2, 512, 256])        |
| 1627    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.2                           | input               | qint8         | 0.0620404 | 0.0000000    | 7.8791361     | 0.4929510    | 0.8862640        | torch.Size([2, 512, 256])        |
| 1627    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.2                           | weight              | torch.float32 |           | -0.5925879   | 0.3864230     | -0.0059677   | 0.0050629        | torch.Size([256, 256])           |
| 1627    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.2                           | bias                | torch.float32 |           | -0.1329685   | 0.1114794     | -0.0053145   | 0.0022305        | torch.Size([256])                |
| 1627    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.2                           | output              | torch.float32 |           | -10.7234297  | 6.5645642     | -0.7139856   | 3.7592494        | torch.Size([2, 512, 256])        |
| 1628    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.3                           | input               | torch.float32 |           | -10.7234297  | 6.5645642     | -0.7139856   | 3.7592494        | torch.Size([2, 512, 256])        |
| 1628    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.3                           | output              | qint8         | 0.0543666 | 0.0000000    | 6.5783629     | 0.4220607    | 0.5896839        | torch.Size([2, 512, 256])        |
| 1629    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.input_mean.mean           | input_0             | qint8         | 0.0543666 | 0.0000000    | 6.5783629     | 0.4220607    | 0.5896839        | torch.Size([2, 512, 256])        |
| 1629    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.input_mean.mean           | output              | qint16        | 0.0000228 | 0.1898563    | 0.7343662     | 0.4220616    | 0.0089290        | torch.Size([2, 512, 1])          |
| 1630    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.20.layers.4.sub                       | input_0             | qint8         | 0.0543666 | 0.0000000    | 6.5783629     | 0.4220607    | 0.5896839        | torch.Size([2, 512, 256])        |
| 1630    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.20.layers.4.sub                       | input_1             | qint16        | 0.0000228 | 0.1898563    | 0.7343662     | 0.4220616    | 0.0089290        | torch.Size([2, 512, 1])          |
| 1630    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.20.layers.4.sub                       | output              | qint16        | 0.0002578 | -0.7344683   | 6.0141845     | -0.0000096   | 0.5807683        | torch.Size([2, 512, 256])        |
| 1631    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.mul                       | input_0             | qint16        | 0.0002578 | -0.7344683   | 6.0141845     | -0.0000096   | 0.5807683        | torch.Size([2, 512, 256])        |
| 1631    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.mul                       | input_1             | qint16        | 0.0002578 | -0.7344683   | 6.0141845     | -0.0000096   | 0.5807683        | torch.Size([2, 512, 256])        |
| 1631    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.mul                       | output              | qint16        | 0.0021819 | 0.0000000    | 36.1696968    | 0.5808665    | 2.2938712        | torch.Size([2, 512, 256])        |
| 1632    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.var_mean.mean             | input_0             | qint16        | 0.0021819 | 0.0000000    | 36.1696968    | 0.5808665    | 2.2938712        | torch.Size([2, 512, 256])        |
| 1632    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.var_mean.mean             | output              | qint16        | 0.0000537 | 0.1218717    | 1.5812198     | 0.5808673    | 0.0518905        | torch.Size([2, 512, 1])          |
| 1633    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.20.layers.4.rsqrt                     | input               | qint16        | 0.0000537 | 0.1218717    | 1.5812198     | 0.5808673    | 0.0518905        | torch.Size([2, 512, 1])          |
| 1633    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.20.layers.4.rsqrt                     | output              | qint16        | 0.0001036 | 0.7951968    | 2.8644276     | 1.3866010    | 0.0729935        | torch.Size([2, 512, 1])          |
| 1634    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.out_mul                   | input_0             | qint16        | 0.0002578 | -0.7344683   | 6.0141845     | -0.0000096   | 0.5807683        | torch.Size([2, 512, 256])        |
| 1634    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.out_mul                   | input_1             | qint16        | 0.0001036 | 0.7951968    | 2.8644276     | 1.3866010    | 0.0729935        | torch.Size([2, 512, 1])          |
| 1634    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.out_mul                   | output              | qint16        | 0.0002449 | -0.7117801   | 6.2375636     | -0.0000084   | 0.9997827        | torch.Size([2, 512, 256])        |
| 1635    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.4.weight_quant              | input               | torch.float32 |           | 0.7434729    | 1.2185259     | 0.9715712    | 0.0058709        | torch.Size([256])                |
| 1635    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.4.weight_quant              | output              | qint16        | 0.0000372 | 0.7434802    | 1.2185073     | 0.9715716    | 0.0058707        | torch.Size([256])                |
| 1636    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.weight_mul                | input_0             | qint16        | 0.0002449 | -0.7117801   | 6.2375636     | -0.0000084   | 0.9997827        | torch.Size([2, 512, 256])        |
| 1636    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.weight_mul                | input_1             | qint16        | 0.0000372 | 0.7434802    | 1.2185073     | 0.9715716    | 0.0058707        | torch.Size([256])                |
| 1636    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.weight_mul                | output              | qint16        | 0.0002512 | -0.8673056   | 6.4382119     | 0.0061388    | 0.9671125        | torch.Size([2, 512, 256])        |
| 1637    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.4.bias_quant                | input               | torch.float32 |           | -0.0757226   | 0.2495108     | 0.0394512    | 0.0048348        | torch.Size([256])                |
| 1637    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.4.bias_quant                | output              | qint16        | 0.0000076 | -0.0757194   | 0.2495070     | 0.0394513    | 0.0048348        | torch.Size([256])                |
| 1638    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.layers.4.bias_add                  | input_0             | qint16        | 0.0002512 | -0.8673056   | 6.4382119     | 0.0061388    | 0.9671125        | torch.Size([2, 512, 256])        |
| 1638    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.layers.4.bias_add                  | input_1             | qint16        | 0.0000076 | -0.0757194   | 0.2495070     | 0.0394513    | 0.0048348        | torch.Size([256])                |
| 1638    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.layers.4.bias_add                  | output              | qint8         | 0.0523160 | -0.8893727   | 6.4871888     | 0.0458655    | 0.9265923        | torch.Size([2, 512, 256])        |
| 1639    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.5                           | input               | qint8         | 0.0523160 | -0.8893727   | 6.4871888     | 0.0458655    | 0.9265923        | torch.Size([2, 512, 256])        |
| 1639    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.5                           | weight              | torch.float32 |           | -0.3297310   | 0.4340349     | 0.0047341    | 0.0039670        | torch.Size([256, 256])           |
| 1639    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.5                           | bias                | torch.float32 |           | -0.1393721   | 0.0863483     | -0.0307643   | 0.0023375        | torch.Size([256])                |
| 1639    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.5                           | output              | torch.float32 |           | -7.5290451   | 10.6032581    | -1.0914938   | 4.5748696        | torch.Size([2, 512, 256])        |
| 1640    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.6                           | input               | torch.float32 |           | -7.5290451   | 10.6032581    | -1.0914938   | 4.5748696        | torch.Size([2, 512, 256])        |
| 1640    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.6                           | output              | qint8         | 0.0733139 | 0.0000000    | 9.3108625     | 0.4693004    | 1.0529664        | torch.Size([2, 512, 256])        |
| 1641    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.7                           | input               | qint8         | 0.0733139 | 0.0000000    | 9.3108625     | 0.4693004    | 1.0529664        | torch.Size([2, 512, 256])        |
| 1641    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.7                           | weight              | torch.float32 |           | -0.3382548   | 0.4402925     | -0.0069504   | 0.0026877        | torch.Size([256, 256])           |
| 1641    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.7                           | bias                | torch.float32 |           | -0.0995187   | 0.1937151     | -0.0185211   | 0.0016516        | torch.Size([256])                |
| 1641    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.7                           | output              | torch.float32 |           | -9.8733578   | 40.7629776    | -1.8833455   | 6.3785934        | torch.Size([2, 512, 256])        |
| 1642    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.8                           | input               | torch.float32 |           | -9.8733578   | 40.7629776    | -1.8833455   | 6.3785934        | torch.Size([2, 512, 256])        |
| 1642    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.20.layers.8                           | output              | qint8         | 0.3199500 | 0.0000000    | 40.6336479    | 0.2936931    | 2.5260415        | torch.Size([2, 512, 256])        |
| 1643    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.input_mean.mean           | input_0             | qint8         | 0.3199500 | 0.0000000    | 40.6336479    | 0.2936931    | 2.5260415        | torch.Size([2, 512, 256])        |
| 1643    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.input_mean.mean           | output              | qint16        | 0.0000200 | 0.1437301    | 0.6551127     | 0.2936581    | 0.0066215        | torch.Size([2, 512, 1])          |
| 1644    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.20.layers.9.sub                       | input_0             | qint8         | 0.3199500 | 0.0000000    | 40.6336479    | 0.2936931    | 2.5260415        | torch.Size([2, 512, 256])        |
| 1644    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.20.layers.9.sub                       | input_1             | qint16        | 0.0000200 | 0.1437301    | 0.6551127     | 0.2936581    | 0.0066215        | torch.Size([2, 512, 1])          |
| 1644    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.20.layers.9.sub                       | output              | qint16        | 0.0013419 | -0.6548592   | 40.3543587    | 0.0001432    | 2.5193505        | torch.Size([2, 512, 256])        |
| 1645    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.mul                       | input_0             | qint16        | 0.0013419 | -0.6548592   | 40.3543587    | 0.0001432    | 2.5193505        | torch.Size([2, 512, 256])        |
| 1645    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.mul                       | input_1             | qint16        | 0.0013419 | -0.6548592   | 40.3543587    | 0.0001432    | 2.5193505        | torch.Size([2, 512, 256])        |
| 1645    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.mul                       | output              | qint16        | 0.0590113 | 0.0000000    | 1628.4747314  | 2.5233958    | 1504.4593506     | torch.Size([2, 512, 256])        |
| 1646    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.var_mean.mean             | input_0             | qint16        | 0.0590113 | 0.0000000    | 1628.4747314  | 2.5233958    | 1504.4593506     | torch.Size([2, 512, 256])        |
| 1646    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.var_mean.mean             | output              | qint16        | 0.0002436 | 0.2743421    | 6.8302903     | 2.5234232    | 2.0890226        | torch.Size([2, 512, 1])          |
| 1647    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.20.layers.9.rsqrt                     | input               | qint16        | 0.0002436 | 0.2743421    | 6.8302903     | 2.5234232    | 2.0890226        | torch.Size([2, 512, 1])          |
| 1647    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.20.layers.9.rsqrt                     | output              | qint16        | 0.0000756 | 0.3826217    | 1.9091767     | 0.7127526    | 0.0439497        | torch.Size([2, 512, 1])          |
| 1648    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.out_mul                   | input_0             | qint16        | 0.0013419 | -0.6548592   | 40.3543587    | 0.0001432    | 2.5193505        | torch.Size([2, 512, 256])        |
| 1648    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.out_mul                   | input_1             | qint16        | 0.0000756 | 0.3826217    | 1.9091767     | 0.7127526    | 0.0439497        | torch.Size([2, 512, 1])          |
| 1648    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.out_mul                   | output              | qint16        | 0.0004807 | -0.5653123   | 15.7513494    | 0.0001469    | 0.9973997        | torch.Size([2, 512, 256])        |
| 1649    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.9.weight_quant              | input               | torch.float32 |           | 0.7900761    | 1.3101054     | 0.9095095    | 0.0016009        | torch.Size([256])                |
| 1649    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.9.weight_quant              | output              | qint16        | 0.0000400 | 0.7900814    | 1.3100855     | 0.9095093    | 0.0016009        | torch.Size([256])                |
| 1650    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.weight_mul                | input_0             | qint16        | 0.0004807 | -0.5653123   | 15.7513494    | 0.0001469    | 0.9973997        | torch.Size([2, 512, 256])        |
| 1650    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.weight_mul                | input_1             | qint16        | 0.0000400 | 0.7900814    | 1.3100855     | 0.9095093    | 0.0016009        | torch.Size([256])                |
| 1650    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.weight_mul                | output              | qint16        | 0.0004914 | -0.7405488   | 16.1019001    | -0.0016955   | 0.7091452        | torch.Size([2, 512, 256])        |
| 1651    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.9.bias_quant                | input               | torch.float32 |           | -0.1930256   | 0.0890824     | 0.0560105    | 0.0017839        | torch.Size([256])                |
| 1651    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.9.bias_quant                | output              | qint16        | 0.0000059 | -0.1930286   | 0.0890802     | 0.0560104    | 0.0017839        | torch.Size([256])                |
| 1652    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.layers.9.bias_add                  | input_0             | qint16        | 0.0004914 | -0.7405488   | 16.1019001    | -0.0016955   | 0.7091452        | torch.Size([2, 512, 256])        |
| 1652    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.layers.9.bias_add                  | input_1             | qint16        | 0.0000059 | -0.1930286   | 0.0890802     | 0.0560104    | 0.0017839        | torch.Size([256])                |
| 1652    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.layers.9.bias_add                  | output              | qint8         | 0.0954583 | -0.6682079   | 12.1232004    | 0.0522707    | 0.6709025        | torch.Size([2, 512, 256])        |
| 1653    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.10                          | input               | qint8         | 0.0954583 | -0.6682079   | 12.1232004    | 0.0522707    | 0.6709025        | torch.Size([2, 512, 256])        |
| 1653    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.10                          | weight              | torch.float32 |           | -0.4008031   | 0.6920518     | 0.0014364    | 0.0019408        | torch.Size([11, 256])            |
| 1653    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.10                          | bias                | torch.float32 |           | -0.0506025   | 0.0276851     | -0.0170499   | 0.0005790        | torch.Size([11])                 |
| 1653    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.20.layers.10                          | output              | qint16        | 0.0004572 | -5.7706809   | 12.0858345    | -0.0419431   | 0.8573016        | torch.Size([2, 512, 11])         |
| 1654    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.11.scale_quant_stub         | input               | torch.float32 |           | 0.0593412    | 0.6670731     | 0.3126911    | 0.0488246        | torch.Size([11])                 |
| 1654    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.20.layers.11.scale_quant_stub         | output              | qint16        | 0.0000204 | 0.0593429    | 0.6670629     | 0.3126916    | 0.0488231        | torch.Size([11])                 |
| 1655    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.11.mul                      | input_0             | qint16        | 0.0004572 | -5.7706809   | 12.0858345    | -0.0419431   | 0.8573016        | torch.Size([2, 512, 11])         |
| 1655    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.11.mul                      | input_1             | qint16        | 0.0000204 | 0.0593429    | 0.6670629     | 0.3126916    | 0.0488231        | torch.Size([11])                 |
| 1655    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.20.layers.11.mul                      | output              | qint16        | 0.0001302 | -3.7373874   | 4.2671514     | -0.0128279   | 0.2039602        | torch.Size([2, 512, 11])         |
| 1656    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.add2                               | input_0             | qint16        | 0.0001302 | -3.7373874   | 4.2671514     | -0.0128279   | 0.2039602        | torch.Size([2, 512, 11])         |
| 1656    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.add2                               | input_1             | qint16        | 0.0017920 | -53.6652603  | 53.3946686    | 0.2086014    | 74.7120285       | torch.Size([2, 512, 11])         |
| 1656    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.20.add2                               | output              | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1657    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(2)                                   | input               | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1657    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(2)                                   | output              | torch.float32 |           | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1658    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1658    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.7499245    | 275.7551575      | torch.Size([2, 512, 3])          |
| 1659    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(4)                   | input               | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.7499245    | 275.7551575      | torch.Size([2, 512, 3])          |
| 1659    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(4)                   | weight              | torch.float32 |           | -0.9216561   | 0.9167990     | -0.0046354   | 0.1373587        | torch.Size([128, 3])             |
| 1659    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(4)                   | bias                | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3650480        | torch.Size([128])                |
| 1659    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(4)                   | output              | torch.float32 |           | -32.9348793  | 34.4898796    | -0.1160083   | 67.3460159       | torch.Size([2, 512, 128])        |
| 1660    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(4)                   | input               | torch.float32 |           | -32.9348793  | 34.4898796    | -0.1160083   | 67.3460159       | torch.Size([2, 512, 128])        |
| 1660    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(4)                   | output              | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8030267    | 24.7248383       | torch.Size([2, 512, 128])        |
| 1661    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(4)   | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8030267    | 24.7248383       | torch.Size([2, 512, 128])        |
| 1661    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(4)   | output              | qint16        | 0.0002498 | 0.2427872    | 7.2868619     | 2.8030164    | 3.9071572        | torch.Size([2, 512, 1])          |
| 1662    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(4)               | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8030267    | 24.7248383       | torch.Size([2, 512, 128])        |
| 1662    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(4)               | input_1             | qint16        | 0.0002498 | 0.2427872    | 7.2868619     | 2.8030164    | 3.9071572        | torch.Size([2, 512, 1])          |
| 1662    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(4)               | output              | qint16        | 0.0008924 | -7.2866635   | 27.4715691    | 0.0000019    | 20.8215504       | torch.Size([2, 512, 128])        |
| 1663    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(4)               | input_0             | qint16        | 0.0008924 | -7.2866635   | 27.4715691    | 0.0000019    | 20.8215504       | torch.Size([2, 512, 128])        |
| 1663    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(4)               | input_1             | qint16        | 0.0008924 | -7.2866635   | 27.4715691    | 0.0000019    | 20.8215504       | torch.Size([2, 512, 128])        |
| 1663    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(4)               | output              | qint16        | 0.0261809 | 0.0000000    | 754.6892090   | 20.8212109   | 2407.2924805     | torch.Size([2, 512, 128])        |
| 1664    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(4)     | input_0             | qint16        | 0.0261809 | 0.0000000    | 754.6892090   | 20.8212109   | 2407.2924805     | torch.Size([2, 512, 128])        |
| 1664    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(4)     | output              | qint16        | 0.0029473 | 0.1208396    | 75.1003189    | 20.8209705   | 437.8292847      | torch.Size([2, 512, 1])          |
| 1665    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(4)             | input               | qint16        | 0.0029473 | 0.1208396    | 75.1003189    | 20.8209705   | 437.8292847      | torch.Size([2, 512, 1])          |
| 1665    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(4)             | output              | qint16        | 0.0000538 | 0.1154082    | 1.7621539     | 0.6443117    | 0.4513405        | torch.Size([2, 512, 1])          |
| 1666    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(4)           | input_0             | qint16        | 0.0008924 | -7.2866635   | 27.4715691    | 0.0000019    | 20.8215504       | torch.Size([2, 512, 128])        |
| 1666    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(4)           | input_1             | qint16        | 0.0000538 | 0.1154082    | 1.7621539     | 0.6443117    | 0.4513405        | torch.Size([2, 512, 1])          |
| 1666    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(4)           | output              | qint16        | 0.0001192 | -0.8840876   | 3.9062698     | 0.0000027    | 0.8606380        | torch.Size([2, 512, 128])        |
| 1667    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(4)      | input               | torch.float32 |           | 0.7278287    | 1.3287159     | 0.9627235    | 0.0086877        | torch.Size([128])                |
| 1667    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(4)      | output              | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 1668    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(4)        | input_0             | qint16        | 0.0001192 | -0.8840876   | 3.9062698     | 0.0000027    | 0.8606380        | torch.Size([2, 512, 128])        |
| 1668    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(4)        | input_1             | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 1668    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(4)        | output              | qint16        | 0.0001208 | -1.0489458   | 3.9574904     | -0.0022506   | 0.7978286        | torch.Size([2, 512, 128])        |
| 1669    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(4)        | input               | torch.float32 |           | -0.0562531   | 0.0804052     | 0.0088204    | 0.0005294        | torch.Size([128])                |
| 1669    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(4)        | output              | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 1670    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(4)          | input_0             | qint16        | 0.0001208 | -1.0489458   | 3.9574904     | -0.0022506   | 0.7978286        | torch.Size([2, 512, 128])        |
| 1670    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(4)          | input_1             | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 1670    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(4)          | output              | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0064813    | 0.7928224        | torch.Size([2, 512, 128])        |
| 1671    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(4)                   | input               | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0064813    | 0.7928224        | torch.Size([2, 512, 128])        |
| 1671    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(4)                   | weight              | torch.float32 |           | -0.3750711   | 0.3968706     | 0.0019093    | 0.0048458        | torch.Size([128, 128])           |
| 1671    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(4)                   | bias                | torch.float32 |           | -0.1863807   | 0.1385574     | -0.0156467   | 0.0047256        | torch.Size([128])                |
| 1671    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(4)                   | output              | torch.float32 |           | -5.8572435   | 6.2595530     | -0.1001247   | 1.9351658        | torch.Size([2, 512, 128])        |
| 1672    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(4)                   | input               | torch.float32 |           | -5.8572435   | 6.2595530     | -0.1001247   | 1.9351658        | torch.Size([2, 512, 128])        |
| 1672    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(4)                   | output              | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5072417    | 0.6601865        | torch.Size([2, 512, 128])        |
| 1673    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(4)   | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5072417    | 0.6601865        | torch.Size([2, 512, 128])        |
| 1673    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(4)   | output              | qint16        | 0.0000298 | 0.2863901    | 0.9749227     | 0.5072396    | 0.0339279        | torch.Size([2, 512, 1])          |
| 1674    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(4)               | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5072417    | 0.6601865        | torch.Size([2, 512, 128])        |
| 1674    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(4)               | input_1             | qint16        | 0.0000298 | 0.2863901    | 0.9749227     | 0.5072396    | 0.0339279        | torch.Size([2, 512, 1])          |
| 1674    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(4)               | output              | qint16        | 0.0001641 | -0.9748717   | 5.1057677     | -0.0000033   | 0.6262982        | torch.Size([2, 512, 128])        |
| 1675    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(4)               | input_0             | qint16        | 0.0001641 | -0.9748717   | 5.1057677     | -0.0000033   | 0.6262982        | torch.Size([2, 512, 128])        |
| 1675    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(4)               | input_1             | qint16        | 0.0001641 | -0.9748717   | 5.1057677     | -0.0000033   | 0.6262982        | torch.Size([2, 512, 128])        |
| 1675    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(4)               | output              | qint16        | 0.0008856 | 0.0000000    | 26.0686932    | 0.6263207    | 2.5565383        | torch.Size([2, 512, 128])        |
| 1676    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(4)     | input_0             | qint16        | 0.0008856 | 0.0000000    | 26.0686932    | 0.6263207    | 2.5565383        | torch.Size([2, 512, 128])        |
| 1676    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(4)     | output              | qint16        | 0.0000499 | 0.3040115    | 1.6354529     | 0.6263105    | 0.0653909        | torch.Size([2, 512, 1])          |
| 1677    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(4)             | input               | qint16        | 0.0000499 | 0.3040115    | 1.6354529     | 0.6263105    | 0.0653909        | torch.Size([2, 512, 1])          |
| 1677    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(4)             | output              | qint16        | 0.0000553 | 0.7819349    | 1.8121266     | 1.3387624    | 0.0644056        | torch.Size([2, 512, 1])          |
| 1678    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(4)           | input_0             | qint16        | 0.0001641 | -0.9748717   | 5.1057677     | -0.0000033   | 0.6262982        | torch.Size([2, 512, 128])        |
| 1678    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(4)           | input_1             | qint16        | 0.0000553 | 0.7819349    | 1.8121266     | 1.3387624    | 0.0644056        | torch.Size([2, 512, 1])          |
| 1678    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(4)           | output              | qint16        | 0.0002164 | -0.8008604   | 7.0161862     | 0.0000007    | 0.9999641        | torch.Size([2, 512, 128])        |
| 1679    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(4)      | input               | torch.float32 |           | 0.5925044    | 1.4726304     | 0.9182085    | 0.0175060        | torch.Size([128])                |
| 1679    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(4)      | output              | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 1680    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(4)        | input_0             | qint16        | 0.0002164 | -0.8008604   | 7.0161862     | 0.0000007    | 0.9999641        | torch.Size([2, 512, 128])        |
| 1680    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(4)        | input_1             | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 1680    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(4)        | output              | qint16        | 0.0002127 | -0.9368925   | 6.8940825     | 0.0344712    | 0.9453547        | torch.Size([2, 512, 128])        |
| 1681    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(4)        | input               | torch.float32 |           | -0.0644210   | 0.2426097     | 0.0318023    | 0.0030999        | torch.Size([128])                |
| 1681    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(4)        | output              | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 1682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(4)          | input_0             | qint16        | 0.0002127 | -0.9368925   | 6.8940825     | 0.0344712    | 0.9453547        | torch.Size([2, 512, 128])        |
| 1682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(4)          | input_1             | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 1682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(4)          | output              | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0663888    | 0.9200549        | torch.Size([2, 512, 128])        |
| 1683    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(4)                   | input               | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0663888    | 0.9200549        | torch.Size([2, 512, 128])        |
| 1683    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(4)                   | weight              | torch.float32 |           | -0.7504157   | 0.4182976     | -0.0024651   | 0.0052447        | torch.Size([128, 128])           |
| 1683    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(4)                   | bias                | torch.float32 |           | -0.1397866   | 0.1210779     | 0.0064616    | 0.0040949        | torch.Size([128])                |
| 1683    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(4)                   | output              | torch.float32 |           | -9.1940289   | 6.8846841     | -0.0429996   | 4.5777302        | torch.Size([2, 512, 128])        |
| 1684    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(4)                   | input               | torch.float32 |           | -9.1940289   | 6.8846841     | -0.0429996   | 4.5777302        | torch.Size([2, 512, 128])        |
| 1684    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(4)                   | output              | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8049779    | 1.4725941        | torch.Size([2, 512, 128])        |
| 1685    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(4)   | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8049779    | 1.4725941        | torch.Size([2, 512, 128])        |
| 1685    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(4)   | output              | qint16        | 0.0000319 | 0.5465035    | 1.0447656     | 0.7672883    | 0.0280745        | torch.Size([2, 512, 1])          |
| 1686    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(4)               | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.8049779    | 1.4725941        | torch.Size([2, 512, 128])        |
| 1686    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(4)               | input_1             | qint16        | 0.0000319 | 0.5465035    | 1.0447656     | 0.7672883    | 0.0280745        | torch.Size([2, 512, 1])          |
| 1686    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(4)               | output              | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0376920    | 1.4236239        | torch.Size([2, 512, 128])        |
| 1687    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(4)               | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0376920    | 1.4236239        | torch.Size([2, 512, 128])        |
| 1687    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(4)               | input_1             | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0376920    | 1.4236239        | torch.Size([2, 512, 128])        |
| 1687    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(4)               | output              | qint16        | 0.0011151 | 0.0000000    | 31.5160542    | 1.4250548    | 9.3300743        | torch.Size([2, 512, 128])        |
| 1688    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(4)     | input_0             | qint16        | 0.0011151 | 0.0000000    | 31.5160542    | 1.4250548    | 9.3300743        | torch.Size([2, 512, 128])        |
| 1688    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(4)     | output              | qint16        | 0.0000656 | 0.8154163    | 2.1495371     | 1.3547139    | 0.2306084        | torch.Size([2, 512, 1])          |
| 1689    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(4)             | input               | qint16        | 0.0000656 | 0.8154163    | 2.1495371     | 1.3547139    | 0.2306084        | torch.Size([2, 512, 1])          |
| 1689    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(4)             | output              | qint16        | 0.0000338 | 0.6820595    | 1.1069363     | 0.8944190    | 0.0183830        | torch.Size([2, 512, 1])          |
| 1690    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(4)           | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0376920    | 1.4236239        | torch.Size([2, 512, 128])        |
| 1690    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(4)           | input_1             | qint16        | 0.0000338 | 0.6820595    | 1.1069363     | 0.8944190    | 0.0183830        | torch.Size([2, 512, 1])          |
| 1690    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(4)           | output              | qint16        | 0.0001537 | -0.7466490   | 4.9878554     | 0.0257121    | 1.0320463        | torch.Size([2, 512, 128])        |
| 1691    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(4)      | input               | torch.float32 |           | 0.7673740    | 1.1249810     | 0.9671495    | 0.0053221        | torch.Size([128])                |
| 1691    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(4)      | output              | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 1692    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(4)        | input_0             | qint16        | 0.0001537 | -0.7466490   | 4.9878554     | 0.0257121    | 1.0320463        | torch.Size([2, 512, 128])        |
| 1692    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(4)        | input_1             | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 1692    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(4)        | output              | qint16        | 0.0001601 | -0.8399123   | 5.1989083     | 0.0406913    | 1.0269731        | torch.Size([2, 512, 128])        |
| 1693    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(4)        | input               | torch.float32 |           | -0.0537279   | 0.1594015     | 0.0216380    | 0.0014148        | torch.Size([128])                |
| 1693    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(4)        | output              | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 1694    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(4)          | input_0             | qint16        | 0.0001601 | -0.8399123   | 5.1989083     | 0.0406913    | 1.0269731        | torch.Size([2, 512, 128])        |
| 1694    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(4)          | input_1             | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 1694    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(4)          | output              | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0621106    | 1.0128145        | torch.Size([2, 512, 128])        |
| 1695    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(4)                   | input               | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0621106    | 1.0128145        | torch.Size([2, 512, 128])        |
| 1695    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(4)                   | weight              | torch.float32 |           | -0.4264432   | 0.3183554     | 0.0005866    | 0.0053991        | torch.Size([128, 128])           |
| 1695    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(4)                   | bias                | torch.float32 |           | -0.1690418   | 0.1536980     | -0.0166056   | 0.0039884        | torch.Size([128])                |
| 1695    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(4)                   | output              | torch.float32 |           | -11.8434677  | 10.2734270    | -0.4231248   | 4.5890913        | torch.Size([2, 512, 128])        |
| 1696    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(4)                  | input               | torch.float32 |           | -11.8434677  | 10.2734270    | -0.4231248   | 4.5890913        | torch.Size([2, 512, 128])        |
| 1696    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(4)                  | output              | qint8         | 0.0826298 | 0.0000000    | 10.2460938    | 0.6353451    | 1.5866843        | torch.Size([2, 512, 128])        |
| 1697    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(4)  | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.2460938    | 0.6353451    | 1.5866843        | torch.Size([2, 512, 128])        |
| 1697    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(4)  | output              | qint16        | 0.0000231 | 0.5241749    | 0.7339925     | 0.6353455    | 0.0034372        | torch.Size([2, 512, 1])          |
| 1698    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(4)              | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.2460938    | 0.6353451    | 1.5866843        | torch.Size([2, 512, 128])        |
| 1698    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(4)              | input_1             | qint16        | 0.0000231 | 0.5241749    | 0.7339925     | 0.6353455    | 0.0034372        | torch.Size([2, 512, 1])          |
| 1698    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(4)              | output              | qint16        | 0.0003154 | -0.7339528   | 9.7025595     | -0.0000091   | 1.5832639        | torch.Size([2, 512, 128])        |
| 1699    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(4)              | input_0             | qint16        | 0.0003154 | -0.7339528   | 9.7025595     | -0.0000091   | 1.5832639        | torch.Size([2, 512, 128])        |
| 1699    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(4)              | input_1             | qint16        | 0.0003154 | -0.7339528   | 9.7025595     | -0.0000091   | 1.5832639        | torch.Size([2, 512, 128])        |
| 1699    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(4)              | output              | qint16        | 0.0032599 | 0.0000000    | 94.1412582    | 1.5832539    | 25.7792397       | torch.Size([2, 512, 128])        |
| 1700    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(4)    | input_0             | qint16        | 0.0032599 | 0.0000000    | 94.1412582    | 1.5832539    | 25.7792397       | torch.Size([2, 512, 128])        |
| 1700    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(4)    | output              | qint16        | 0.0000598 | 1.0777850    | 1.9563167     | 1.5832527    | 0.0269414        | torch.Size([2, 512, 1])          |
| 1701    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(4)            | input               | qint16        | 0.0000598 | 1.0777850    | 1.9563167     | 1.5832527    | 0.0269414        | torch.Size([2, 512, 1])          |
| 1701    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(4)            | output              | qint16        | 0.0000315 | 0.7149585    | 0.9632238     | 0.7979497    | 0.0017220        | torch.Size([2, 512, 1])          |
| 1702    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(4)          | input_0             | qint16        | 0.0003154 | -0.7339528   | 9.7025595     | -0.0000091   | 1.5832639        | torch.Size([2, 512, 128])        |
| 1702    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(4)          | input_1             | qint16        | 0.0000315 | 0.7149585    | 0.9632238     | 0.7979497    | 0.0017220        | torch.Size([2, 512, 1])          |
| 1702    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(4)          | output              | qint16        | 0.0002431 | -0.6067135   | 7.3709860     | 0.0000064    | 0.9999874        | torch.Size([2, 512, 128])        |
| 1703    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(4)     | input               | torch.float32 |           | 0.7088336    | 1.4002132     | 0.9292046    | 0.0145085        | torch.Size([128])                |
| 1703    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(4)     | output              | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 1704    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(4)       | input_0             | qint16        | 0.0002431 | -0.6067135   | 7.3709860     | 0.0000064    | 0.9999874        | torch.Size([2, 512, 128])        |
| 1704    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(4)       | input_1             | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 1704    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(4)       | output              | qint16        | 0.0002455 | -0.8495573   | 7.4456577     | 0.0083698    | 0.9018336        | torch.Size([2, 512, 128])        |
| 1705    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(4)       | input               | torch.float32 |           | -0.0965041   | 0.2669707     | 0.0619903    | 0.0064956        | torch.Size([128])                |
| 1705    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(4)       | output              | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 1706    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(4)         | input_0             | qint16        | 0.0002455 | -0.8495573   | 7.4456577     | 0.0083698    | 0.9018336        | torch.Size([2, 512, 128])        |
| 1706    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(4)         | input_1             | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 1706    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(4)         | output              | qint8         | 0.0587279 | -0.8809187   | 7.3997173     | 0.0703954    | 0.8679019        | torch.Size([2, 512, 128])        |
| 1707    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1707    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017895 | -0.9752758   | 2.5876124     | 0.2333435    | 0.3873911        | torch.Size([2, 512, 3])          |
| 1708    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(4)                  | input               | qint16        | 0.0017895 | -0.9752758   | 2.5876124     | 0.2333435    | 0.3873911        | torch.Size([2, 512, 3])          |
| 1708    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(4)                  | weight              | torch.float32 |           | -0.8288664   | 0.6362330     | 0.0683853    | 0.1118651        | torch.Size([32, 3])              |
| 1708    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(4)                  | bias                | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1068659        | torch.Size([32])                 |
| 1708    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(4)                  | output              | torch.float32 |           | -1.8530767   | 2.1990559     | 0.1097110    | 0.2335261        | torch.Size([2, 512, 32])         |
| 1709    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(4)                  | input               | torch.float32 |           | -1.8530767   | 2.1990559     | 0.1097110    | 0.2335261        | torch.Size([2, 512, 32])         |
| 1709    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(4)                  | output              | qint8         | 0.0194126 | 0.0000000    | 2.1936238     | 0.2530534    | 0.0972925        | torch.Size([2, 512, 32])         |
| 1710    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(4)  | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.1936238     | 0.2530534    | 0.0972925        | torch.Size([2, 512, 32])         |
| 1710    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(4)  | output              | qint16        | 0.0000252 | 0.1613629    | 0.6363677     | 0.2530569    | 0.0126459        | torch.Size([2, 512, 1])          |
| 1711    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(4)              | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.1936238     | 0.2530534    | 0.0972925        | torch.Size([2, 512, 32])         |
| 1711    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(4)              | input_1             | qint16        | 0.0000252 | 0.1613629    | 0.6363677     | 0.2530569    | 0.0126459        | torch.Size([2, 512, 1])          |
| 1711    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(4)              | output              | qint16        | 0.0000639 | -0.6363481   | 1.5633223     | -0.0000049   | 0.0846583        | torch.Size([2, 512, 32])         |
| 1712    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(4)              | input_0             | qint16        | 0.0000639 | -0.6363481   | 1.5633223     | -0.0000049   | 0.0846583        | torch.Size([2, 512, 32])         |
| 1712    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(4)              | input_1             | qint16        | 0.0000639 | -0.6363481   | 1.5633223     | -0.0000049   | 0.0846583        | torch.Size([2, 512, 32])         |
| 1712    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(4)              | output              | qint16        | 0.0001394 | 0.0000000    | 2.4439368     | 0.0846539    | 0.0254054        | torch.Size([2, 512, 32])         |
| 1713    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(4)    | input_0             | qint16        | 0.0001394 | 0.0000000    | 2.4439368     | 0.0846539    | 0.0254054        | torch.Size([2, 512, 32])         |
| 1713    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(4)    | output              | qint16        | 0.0000212 | 0.0325433    | 0.3895862     | 0.0846561    | 0.0044098        | torch.Size([2, 512, 1])          |
| 1714    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(4)            | input               | qint16        | 0.0000212 | 0.0325433    | 0.3895862     | 0.0846561    | 0.0044098        | torch.Size([2, 512, 1])          |
| 1714    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(4)            | output              | qint16        | 0.0001649 | 1.6021245    | 5.4031301     | 4.0371137    | 1.2982420        | torch.Size([2, 512, 1])          |
| 1715    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(4)          | input_0             | qint16        | 0.0000639 | -0.6363481   | 1.5633223     | -0.0000049   | 0.0846583        | torch.Size([2, 512, 32])         |
| 1715    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(4)          | input_1             | qint16        | 0.0001649 | 1.6021245    | 5.4031301     | 4.0371137    | 1.2982420        | torch.Size([2, 512, 1])          |
| 1715    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(4)          | output              | qint16        | 0.0000919 | -1.0955541   | 3.0128427     | -0.0000387   | 0.9920491        | torch.Size([2, 512, 32])         |
| 1716    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(4)     | input               | torch.float32 |           | 0.8401937    | 1.1936733     | 0.9969203    | 0.0071658        | torch.Size([32])                 |
| 1716    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(4)     | output              | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 1717    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(4)       | input_0             | qint16        | 0.0000919 | -1.0955541   | 3.0128427     | -0.0000387   | 0.9920491        | torch.Size([2, 512, 32])         |
| 1717    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(4)       | input_1             | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 1717    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(4)       | output              | qint16        | 0.0001022 | -1.3076952   | 3.2300847     | 0.0079256    | 0.9819734        | torch.Size([2, 512, 32])         |
| 1718    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(4)       | input               | torch.float32 |           | -0.1003950   | 0.1085345     | 0.0035262    | 0.0030721        | torch.Size([32])                 |
| 1718    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(4)       | output              | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 1719    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(4)         | input_0             | qint16        | 0.0001022 | -1.3076952   | 3.2300847     | 0.0079256    | 0.9819734        | torch.Size([2, 512, 32])         |
| 1719    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(4)         | input_1             | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 1719    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(4)         | output              | qint8         | 0.0232598 | -1.2792890   | 2.9539945     | 0.0110762    | 0.9260956        | torch.Size([2, 512, 32])         |
| 1720    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(4)                  | input               | qint8         | 0.0232598 | -1.2792890   | 2.9539945     | 0.0110762    | 0.9260956        | torch.Size([2, 512, 32])         |
| 1720    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(4)                  | weight              | torch.float32 |           | -0.5793310   | 0.5422795     | -0.0032135   | 0.0176575        | torch.Size([32, 32])             |
| 1720    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(4)                  | bias                | torch.float32 |           | -0.1716317   | 0.2230143     | 0.0007250    | 0.0126328        | torch.Size([32])                 |
| 1720    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(4)                  | output              | torch.float32 |           | -4.2741585   | 2.1188934     | -0.2174622   | 1.4472200        | torch.Size([2, 512, 32])         |
| 1721    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(4)                  | input               | torch.float32 |           | -4.2741585   | 2.1188934     | -0.2174622   | 1.4472200        | torch.Size([2, 512, 32])         |
| 1721    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(4)                  | output              | qint8         | 0.0172935 | 0.0000000    | 2.1271040     | 0.3661681    | 0.2611728        | torch.Size([2, 512, 32])         |
| 1722    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(4)  | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1271040     | 0.3661681    | 0.2611728        | torch.Size([2, 512, 32])         |
| 1722    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(4)  | output              | qint16        | 0.0000141 | 0.2685968    | 0.4188303     | 0.3661674    | 0.0011172        | torch.Size([2, 512, 1])          |
| 1723    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(4)              | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1271040     | 0.3661681    | 0.2611728        | torch.Size([2, 512, 32])         |
| 1723    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(4)              | input_1             | qint16        | 0.0000141 | 0.2685968    | 0.4188303     | 0.3661674    | 0.0011172        | torch.Size([2, 512, 1])          |
| 1723    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(4)              | output              | qint16        | 0.0000617 | -0.4188201   | 1.8584870     | 0.0000017    | 0.2600554        | torch.Size([2, 512, 32])         |
| 1724    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(4)              | input_0             | qint16        | 0.0000617 | -0.4188201   | 1.8584870     | 0.0000017    | 0.2600554        | torch.Size([2, 512, 32])         |
| 1724    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(4)              | input_1             | qint16        | 0.0000617 | -0.4188201   | 1.8584870     | 0.0000017    | 0.2600554        | torch.Size([2, 512, 32])         |
| 1724    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(4)              | output              | qint16        | 0.0001252 | 0.0000000    | 3.4539537     | 0.2600440    | 0.1973726        | torch.Size([2, 512, 32])         |
| 1725    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(4)    | input_0             | qint16        | 0.0001252 | 0.0000000    | 3.4539537     | 0.2600440    | 0.1973726        | torch.Size([2, 512, 32])         |
| 1725    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(4)    | output              | qint16        | 0.0000132 | 0.1522504    | 0.3533944     | 0.2600439    | 0.0043267        | torch.Size([2, 512, 1])          |
| 1726    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(4)            | input               | qint16        | 0.0000132 | 0.1522504    | 0.3533944     | 0.2600439    | 0.0043267        | torch.Size([2, 512, 1])          |
| 1726    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(4)            | output              | qint16        | 0.0000777 | 1.6821437    | 2.5457854     | 2.0139813    | 0.0787305        | torch.Size([2, 512, 1])          |
| 1727    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(4)          | input_0             | qint16        | 0.0000617 | -0.4188201   | 1.8584870     | 0.0000017    | 0.2600554        | torch.Size([2, 512, 32])         |
| 1727    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(4)          | input_1             | qint16        | 0.0000777 | 1.6821437    | 2.5457854     | 2.0139813    | 0.0787305        | torch.Size([2, 512, 1])          |
| 1727    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(4)          | output              | qint16        | 0.0001125 | -0.9120530   | 3.6849864     | -0.0000140   | 0.9997780        | torch.Size([2, 512, 32])         |
| 1728    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(4)     | input               | torch.float32 |           | 0.8191299    | 1.0923718     | 0.9808199    | 0.0031231        | torch.Size([32])                 |
| 1728    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(4)     | output              | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 1729    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(4)       | input_0             | qint16        | 0.0001125 | -0.9120530   | 3.6849864     | -0.0000140   | 0.9997780        | torch.Size([2, 512, 32])         |
| 1729    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(4)       | input_1             | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 1729    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(4)       | output              | qint16        | 0.0001113 | -0.9202085   | 3.5213978     | 0.0100045    | 0.9959044        | torch.Size([2, 512, 32])         |
| 1730    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(4)       | input               | torch.float32 |           | -0.0704119   | 0.0788569     | 0.0097621    | 0.0015200        | torch.Size([32])                 |
| 1730    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(4)       | output              | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 1731    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(4)         | input_0             | qint16        | 0.0001113 | -0.9202085   | 3.5213978     | 0.0100045    | 0.9959044        | torch.Size([2, 512, 32])         |
| 1731    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(4)         | input_1             | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 1731    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(4)         | output              | qint8         | 0.0262611 | -0.9191371   | 3.3351545     | 0.0201558    | 0.9663262        | torch.Size([2, 512, 32])         |
| 1732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(4)                  | input               | qint8         | 0.0262611 | -0.9191371   | 3.3351545     | 0.0201558    | 0.9663262        | torch.Size([2, 512, 32])         |
| 1732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(4)                  | weight              | torch.float32 |           | -0.5712157   | 0.5219681     | -0.0062917   | 0.0166056        | torch.Size([32, 32])             |
| 1732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(4)                  | bias                | torch.float32 |           | -0.1649730   | 0.2318604     | 0.0253026    | 0.0136139        | torch.Size([32])                 |
| 1732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(4)                  | output              | torch.float32 |           | -4.3798795   | 2.5824568     | -0.1738899   | 1.3493233        | torch.Size([2, 512, 32])         |
| 1733    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(4)                  | input               | torch.float32 |           | -4.3798795   | 2.5824568     | -0.1738899   | 1.3493233        | torch.Size([2, 512, 32])         |
| 1733    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(4)                  | output              | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3705617    | 0.2774231        | torch.Size([2, 512, 32])         |
| 1734    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(4)  | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3705617    | 0.2774231        | torch.Size([2, 512, 32])         |
| 1734    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(4)  | output              | qint16        | 0.0000154 | 0.1866060    | 0.4871908     | 0.3705628    | 0.0090340        | torch.Size([2, 512, 1])          |
| 1735    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(4)              | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3705617    | 0.2774231        | torch.Size([2, 512, 32])         |
| 1735    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(4)              | input_1             | qint16        | 0.0000154 | 0.1866060    | 0.4871908     | 0.3705628    | 0.0090340        | torch.Size([2, 512, 1])          |
| 1735    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(4)              | output              | qint16        | 0.0000636 | -0.4871694   | 2.0190551     | -0.0000011   | 0.2683987        | torch.Size([2, 512, 32])         |
| 1736    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(4)              | input_0             | qint16        | 0.0000636 | -0.4871694   | 2.0190551     | -0.0000011   | 0.2683987        | torch.Size([2, 512, 32])         |
| 1736    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(4)              | input_1             | qint16        | 0.0000636 | -0.4871694   | 2.0190551     | -0.0000011   | 0.2683987        | torch.Size([2, 512, 32])         |
| 1736    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(4)              | output              | qint16        | 0.0001333 | 0.0000000    | 4.0765991     | 0.2683940    | 0.2921938        | torch.Size([2, 512, 32])         |
| 1737    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(4)    | input_0             | qint16        | 0.0001333 | 0.0000000    | 4.0765991     | 0.2683940    | 0.2921938        | torch.Size([2, 512, 32])         |
| 1737    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(4)    | output              | qint16        | 0.0000116 | 0.1232400    | 0.3784634     | 0.2683895    | 0.0055786        | torch.Size([2, 512, 1])          |
| 1738    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(4)            | input               | qint16        | 0.0000116 | 0.1232400    | 0.3784634     | 0.2683895    | 0.0055786        | torch.Size([2, 512, 1])          |
| 1738    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(4)            | output              | qint16        | 0.0000821 | 1.6254737    | 2.6913540     | 2.0039809    | 0.1206035        | torch.Size([2, 512, 1])          |
| 1739    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(4)          | input_0             | qint16        | 0.0000636 | -0.4871694   | 2.0190551     | -0.0000011   | 0.2683987        | torch.Size([2, 512, 32])         |
| 1739    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(4)          | input_1             | qint16        | 0.0000821 | 1.6254737    | 2.6913540     | 2.0039809    | 0.1206035        | torch.Size([2, 512, 1])          |
| 1739    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(4)          | output              | qint16        | 0.0001195 | -0.9495574   | 3.7948823     | -0.0000013   | 0.9997836        | torch.Size([2, 512, 32])         |
| 1740    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(4)     | input               | torch.float32 |           | 0.8903234    | 1.1315480     | 0.9912031    | 0.0026835        | torch.Size([32])                 |
| 1740    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(4)     | output              | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 1741    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(4)       | input_0             | qint16        | 0.0001195 | -0.9495574   | 3.7948823     | -0.0000013   | 0.9997836        | torch.Size([2, 512, 32])         |
| 1741    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(4)       | input_1             | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 1741    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(4)       | output              | qint16        | 0.0001226 | -1.0744114   | 3.9102151     | 0.0050413    | 1.0243331        | torch.Size([2, 512, 32])         |
| 1742    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(4)       | input               | torch.float32 |           | -0.0586081   | 0.0779655     | 0.0041962    | 0.0015323        | torch.Size([32])                 |
| 1742    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(4)       | output              | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 1743    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(4)         | input_0             | qint16        | 0.0001226 | -1.0744114   | 3.9102151     | 0.0050413    | 1.0243331        | torch.Size([2, 512, 32])         |
| 1743    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(4)         | input_1             | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 1743    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(4)         | output              | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0094972    | 1.0014392        | torch.Size([2, 512, 32])         |
| 1744    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(4)                  | input               | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0094972    | 1.0014392        | torch.Size([2, 512, 32])         |
| 1744    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(4)                  | weight              | torch.float32 |           | -0.3204980   | 0.3365203     | -0.0020388   | 0.0145364        | torch.Size([32, 32])             |
| 1744    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(4)                  | bias                | torch.float32 |           | -0.1559148   | 0.2119379     | 0.0091616    | 0.0105488        | torch.Size([32])                 |
| 1744    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(4)                  | output              | torch.float32 |           | -2.2583416   | 2.6651974     | 0.0196148    | 0.8059253        | torch.Size([2, 512, 32])         |
| 1745    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(4)                 | input               | torch.float32 |           | -2.2583416   | 2.6651974     | 0.0196148    | 0.8059253        | torch.Size([2, 512, 32])         |
| 1745    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(4)                 | output              | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3641348    | 0.2847253        | torch.Size([2, 512, 32])         |
| 1746    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(4) | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3641348    | 0.2847253        | torch.Size([2, 512, 32])         |
| 1746    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(4) | output              | qint16        | 0.0000157 | 0.2544906    | 0.5130996     | 0.3625120    | 0.0020225        | torch.Size([2, 512, 1])          |
| 1747    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(4)             | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3641348    | 0.2847253        | torch.Size([2, 512, 32])         |
| 1747    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(4)             | input_1             | qint16        | 0.0000157 | 0.2544906    | 0.5130996     | 0.3625120    | 0.0020225        | torch.Size([2, 512, 1])          |
| 1747    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(4)             | output              | qint16        | 0.0000689 | -0.5131254   | 2.1766636     | 0.0016207    | 0.2822165        | torch.Size([2, 512, 32])         |
| 1748    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(4)             | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1766636     | 0.0016207    | 0.2822165        | torch.Size([2, 512, 32])         |
| 1748    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(4)             | input_1             | qint16        | 0.0000689 | -0.5131254   | 2.1766636     | 0.0016207    | 0.2822165        | torch.Size([2, 512, 32])         |
| 1748    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(4)             | output              | qint16        | 0.0001557 | 0.0000000    | 4.7379065     | 0.2822119    | 0.3642026        | torch.Size([2, 512, 32])         |
| 1749    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(4)   | input_0             | qint16        | 0.0001557 | 0.0000000    | 4.7379065     | 0.2822119    | 0.3642026        | torch.Size([2, 512, 32])         |
| 1749    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(4)   | output              | qint16        | 0.0000123 | 0.1530242    | 0.3973589     | 0.2822119    | 0.0013832        | torch.Size([2, 512, 1])          |
| 1750    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(4)           | input               | qint16        | 0.0000123 | 0.1530242    | 0.3973589     | 0.2822119    | 0.0013832        | torch.Size([2, 512, 1])          |
| 1750    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(4)           | output              | qint16        | 0.0000803 | 1.5863448    | 2.5562866     | 1.8947262    | 0.0159742        | torch.Size([2, 512, 1])          |
| 1751    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(4)         | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1766636     | 0.0016207    | 0.2822165        | torch.Size([2, 512, 32])         |
| 1751    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(4)         | input_1             | qint16        | 0.0000803 | 1.5863448    | 2.5562866     | 1.8947262    | 0.0159742        | torch.Size([2, 512, 1])          |
| 1751    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(4)         | output              | qint16        | 0.0001207 | -1.2131712   | 3.9424446     | 0.0029900    | 0.9999793        | torch.Size([2, 512, 32])         |
| 1752    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(4)    | input               | torch.float32 |           | 0.8289159    | 1.6609058     | 1.2561316    | 0.0353652        | torch.Size([32])                 |
| 1752    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(4)    | output              | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 1753    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(4)      | input_0             | qint16        | 0.0001207 | -1.2131712   | 3.9424446     | 0.0029900    | 0.9999793        | torch.Size([2, 512, 32])         |
| 1753    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(4)      | input_1             | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 1753    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(4)      | output              | qint16        | 0.0001642 | -1.8387730   | 4.9674954     | -0.0298442   | 1.4355977        | torch.Size([2, 512, 32])         |
| 1754    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(4)      | input               | torch.float32 |           | -0.1194881   | 0.2576658     | 0.0445686    | 0.0113612        | torch.Size([32])                 |
| 1754    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(4)      | output              | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 1755    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(4)        | input_0             | qint16        | 0.0001642 | -1.8387730   | 4.9674954     | -0.0298442   | 1.4355977        | torch.Size([2, 512, 32])         |
| 1755    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(4)        | input_1             | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 1755    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(4)        | output              | qint8         | 0.0385920 | -1.6980467   | 4.8625884     | 0.0146545    | 1.3493775        | torch.Size([2, 512, 32])         |
| 1756    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1756    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017895 | -1.0325397   | 1.0325397     | -0.0348393   | 0.1149378        | torch.Size([2, 512, 2])          |
| 1757    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(4)                   | input               | qint16        | 0.0017895 | -1.0325397   | 1.0325397     | -0.0348393   | 0.1149378        | torch.Size([2, 512, 2])          |
| 1757    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(4)                   | weight              | torch.float32 |           | -0.7023237   | 0.7394427     | 0.0490668    | 0.1972211        | torch.Size([32, 2])              |
| 1757    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(4)                   | bias                | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1641774        | torch.Size([32])                 |
| 1757    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(4)                   | output              | torch.float32 |           | -1.5372586   | 1.1631101     | -0.1211208   | 0.2042286        | torch.Size([2, 512, 32])         |
| 1758    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(4)                   | input               | torch.float32 |           | -1.5372586   | 1.1631101     | -0.1211208   | 0.2042286        | torch.Size([2, 512, 32])         |
| 1758    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(4)                   | output              | qint8         | 0.0115854 | 0.0000000    | 1.1585438     | 0.1356293    | 0.0564905        | torch.Size([2, 512, 32])         |
| 1759    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(4)   | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.1585438     | 0.1356293    | 0.0564905        | torch.Size([2, 512, 32])         |
| 1759    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(4)   | output              | qint16        | 0.0000105 | 0.1082505    | 0.2317083     | 0.1356288    | 0.0006937        | torch.Size([2, 512, 1])          |
| 1760    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(4)               | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.1585438     | 0.1356293    | 0.0564905        | torch.Size([2, 512, 32])         |
| 1760    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(4)               | input_1             | qint16        | 0.0000105 | 0.1082505    | 0.2317083     | 0.1356288    | 0.0006937        | torch.Size([2, 512, 1])          |
| 1760    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(4)               | output              | qint16        | 0.0000395 | -0.2317158   | 0.9597676     | 0.0000012    | 0.0557978        | torch.Size([2, 512, 32])         |
| 1761    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(4)               | input_0             | qint16        | 0.0000395 | -0.2317158   | 0.9597676     | 0.0000012    | 0.0557978        | torch.Size([2, 512, 32])         |
| 1761    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(4)               | input_1             | qint16        | 0.0000395 | -0.2317158   | 0.9597676     | 0.0000012    | 0.0557978        | torch.Size([2, 512, 32])         |
| 1761    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(4)               | output              | qint16        | 0.0000524 | 0.0000000    | 0.9211483     | 0.0557959    | 0.0116168        | torch.Size([2, 512, 32])         |
| 1762    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(4)     | input_0             | qint16        | 0.0000524 | 0.0000000    | 0.9211483     | 0.0557959    | 0.0116168        | torch.Size([2, 512, 32])         |
| 1762    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(4)     | output              | qint16        | 0.0000071 | 0.0405323    | 0.1210567     | 0.0557955    | 0.0003376        | torch.Size([2, 512, 1])          |
| 1763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(4)             | input               | qint16        | 0.0000071 | 0.0405323    | 0.1210567     | 0.0557955    | 0.0003376        | torch.Size([2, 512, 1])          |
| 1763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(4)             | output              | qint16        | 0.0001514 | 2.8739457    | 4.9613075     | 4.3559690    | 0.2723953        | torch.Size([2, 512, 1])          |
| 1764    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(4)           | input_0             | qint16        | 0.0000395 | -0.2317158   | 0.9597676     | 0.0000012    | 0.0557978        | torch.Size([2, 512, 32])         |
| 1764    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(4)           | input_1             | qint16        | 0.0001514 | 2.8739457    | 4.9613075     | 4.3559690    | 0.2723953        | torch.Size([2, 512, 1])          |
| 1764    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(4)           | output              | qint16        | 0.0001206 | -0.7224153   | 3.9524767     | -0.0000043   | 0.9997038        | torch.Size([2, 512, 32])         |
| 1765    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(4)      | input               | torch.float32 |           | 0.8947600    | 1.1748335     | 0.9865216    | 0.0041537        | torch.Size([32])                 |
| 1765    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(4)      | output              | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 1766    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(4)        | input_0             | qint16        | 0.0001206 | -0.7224153   | 3.9524767     | -0.0000043   | 0.9997038        | torch.Size([2, 512, 32])         |
| 1766    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(4)        | input_1             | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 1766    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(4)        | output              | qint16        | 0.0001306 | -0.8245773   | 4.2798867     | 0.0036659    | 1.0093879        | torch.Size([2, 512, 32])         |
| 1767    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(4)        | input               | torch.float32 |           | -0.0879948   | 0.1319895     | 0.0285039    | 0.0034159        | torch.Size([32])                 |
| 1767    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(4)        | output              | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 1768    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(4)          | input_0             | qint16        | 0.0001306 | -0.8245773   | 4.2798867     | 0.0036659    | 1.0093879        | torch.Size([2, 512, 32])         |
| 1768    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(4)          | input_1             | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 1768    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(4)          | output              | qint8         | 0.0302674 | -0.7869512   | 3.8439538     | 0.0312991    | 0.9253613        | torch.Size([2, 512, 32])         |
| 1769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(4)                   | input               | qint8         | 0.0302674 | -0.7869512   | 3.8439538     | 0.0312991    | 0.9253613        | torch.Size([2, 512, 32])         |
| 1769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(4)                   | weight              | torch.float32 |           | -1.0547366   | 0.5812716     | 0.0070099    | 0.0187704        | torch.Size([32, 32])             |
| 1769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(4)                   | bias                | torch.float32 |           | -0.2183180   | 0.1396109     | -0.0140744   | 0.0103446        | torch.Size([32])                 |
| 1769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(4)                   | output              | torch.float32 |           | -4.9591098   | 1.6927794     | -0.5280743   | 1.4607052        | torch.Size([2, 512, 32])         |
| 1770    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(4)                   | input               | torch.float32 |           | -4.9591098   | 1.6927794     | -0.5280743   | 1.4607052        | torch.Size([2, 512, 32])         |
| 1770    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(4)                   | output              | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2281493    | 0.1241141        | torch.Size([2, 512, 32])         |
| 1771    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(4)   | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2281493    | 0.1241141        | torch.Size([2, 512, 32])         |
| 1771    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(4)   | output              | qint16        | 0.0000116 | 0.1696848    | 0.3375855     | 0.2281502    | 0.0007601        | torch.Size([2, 512, 1])          |
| 1772    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(4)               | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2281493    | 0.1241141        | torch.Size([2, 512, 32])         |
| 1772    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(4)               | input_1             | qint16        | 0.0000116 | 0.1696848    | 0.3375855     | 0.2281502    | 0.0007601        | torch.Size([2, 512, 1])          |
| 1772    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(4)               | output              | qint16        | 0.0000516 | -0.3375874   | 1.4303070     | -0.0000022   | 0.1233547        | torch.Size([2, 512, 32])         |
| 1773    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(4)               | input_0             | qint16        | 0.0000516 | -0.3375874   | 1.4303070     | -0.0000022   | 0.1233547        | torch.Size([2, 512, 32])         |
| 1773    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(4)               | input_1             | qint16        | 0.0000516 | -0.3375874   | 1.4303070     | -0.0000022   | 0.1233547        | torch.Size([2, 512, 32])         |
| 1773    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(4)               | output              | qint16        | 0.0000889 | 0.0000000    | 2.0457370     | 0.1233531    | 0.0493288        | torch.Size([2, 512, 32])         |
| 1774    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(4)     | input_0             | qint16        | 0.0000889 | 0.0000000    | 2.0457370     | 0.1233531    | 0.0493288        | torch.Size([2, 512, 32])         |
| 1774    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(4)     | output              | qint16        | 0.0000089 | 0.0769180    | 0.2012836     | 0.1233538    | 0.0004790        | torch.Size([2, 512, 1])          |
| 1775    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(4)             | input               | qint16        | 0.0000089 | 0.0769180    | 0.2012836     | 0.1233538    | 0.0004790        | torch.Size([2, 512, 1])          |
| 1775    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(4)             | output              | qint16        | 0.0001114 | 2.2289231    | 3.6054370     | 2.8778391    | 0.0564926        | torch.Size([2, 512, 1])          |
| 1776    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(4)           | input_0             | qint16        | 0.0000516 | -0.3375874   | 1.4303070     | -0.0000022   | 0.1233547        | torch.Size([2, 512, 32])         |
| 1776    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(4)           | input_1             | qint16        | 0.0001114 | 2.2289231    | 3.6054370     | 2.8778391    | 0.0564926        | torch.Size([2, 512, 1])          |
| 1776    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(4)           | output              | qint16        | 0.0001083 | -0.8040389   | 3.5501876     | -0.0000074   | 0.9998721        | torch.Size([2, 512, 32])         |
| 1777    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(4)      | input               | torch.float32 |           | 0.8550419    | 1.1198171     | 0.9805899    | 0.0036729        | torch.Size([32])                 |
| 1777    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(4)      | output              | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 1778    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(4)        | input_0             | qint16        | 0.0001083 | -0.8040389   | 3.5501876     | -0.0000074   | 0.9998721        | torch.Size([2, 512, 32])         |
| 1778    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(4)        | input_1             | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 1778    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(4)        | output              | qint16        | 0.0001106 | -0.8745015   | 3.6229506     | -0.0019489   | 0.9776510        | torch.Size([2, 512, 32])         |
| 1779    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(4)        | input               | torch.float32 |           | -0.0792132   | 0.1045145     | 0.0242442    | 0.0021608        | torch.Size([32])                 |
| 1779    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(4)        | output              | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 1780    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(4)          | input_0             | qint16        | 0.0001106 | -0.8745015   | 3.6229506     | -0.0019489   | 0.9776510        | torch.Size([2, 512, 32])         |
| 1780    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(4)          | input_1             | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 1780    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(4)          | output              | qint8         | 0.0268612 | -0.8326958   | 3.4113667     | 0.0221067    | 0.9201729        | torch.Size([2, 512, 32])         |
| 1781    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(4)                   | input               | qint8         | 0.0268612 | -0.8326958   | 3.4113667     | 0.0221067    | 0.9201729        | torch.Size([2, 512, 32])         |
| 1781    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(4)                   | weight              | torch.float32 |           | -0.4480607   | 0.3678726     | 0.0004879    | 0.0160908        | torch.Size([32, 32])             |
| 1781    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(4)                   | bias                | torch.float32 |           | -0.1861591   | 0.1739754     | 0.0155446    | 0.0137690        | torch.Size([32])                 |
| 1781    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(4)                   | output              | torch.float32 |           | -3.6681681   | 2.4229326     | -0.3057791   | 1.5328454        | torch.Size([2, 512, 32])         |
| 1782    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(4)                   | input               | torch.float32 |           | -3.6681681   | 2.4229326     | -0.3057791   | 1.5328454        | torch.Size([2, 512, 32])         |
| 1782    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(4)                   | output              | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3335989    | 0.1959013        | torch.Size([2, 512, 32])         |
| 1783    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(4)   | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3335989    | 0.1959013        | torch.Size([2, 512, 32])         |
| 1783    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(4)   | output              | qint16        | 0.0000156 | 0.2546772    | 0.4369152     | 0.3335989    | 0.0004850        | torch.Size([2, 512, 1])          |
| 1784    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(4)               | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3335989    | 0.1959013        | torch.Size([2, 512, 32])         |
| 1784    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(4)               | input_1             | qint16        | 0.0000156 | 0.2546772    | 0.4369152     | 0.3335989    | 0.0004850        | torch.Size([2, 512, 1])          |
| 1784    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(4)               | output              | qint16        | 0.0000645 | -0.4369223   | 2.0816886     | 0.0000019    | 0.1954150        | torch.Size([2, 512, 32])         |
| 1785    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(4)               | input_0             | qint16        | 0.0000645 | -0.4369223   | 2.0816886     | 0.0000019    | 0.1954150        | torch.Size([2, 512, 32])         |
| 1785    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(4)               | input_1             | qint16        | 0.0000645 | -0.4369223   | 2.0816886     | 0.0000019    | 0.1954150        | torch.Size([2, 512, 32])         |
| 1785    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(4)               | output              | qint16        | 0.0001365 | 0.0000000    | 4.3334532     | 0.1954160    | 0.1034551        | torch.Size([2, 512, 32])         |
| 1786    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(4)     | input_0             | qint16        | 0.0001365 | 0.0000000    | 4.3334532     | 0.1954160    | 0.1034551        | torch.Size([2, 512, 32])         |
| 1786    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(4)     | output              | qint16        | 0.0000123 | 0.1582704    | 0.2867241     | 0.1954157    | 0.0003833        | torch.Size([2, 512, 1])          |
| 1787    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(4)             | input               | qint16        | 0.0000123 | 0.1582704    | 0.2867241     | 0.1954157    | 0.0003833        | torch.Size([2, 512, 1])          |
| 1787    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(4)             | output              | qint16        | 0.0000749 | 1.8674875    | 2.4551423     | 2.2689631    | 0.0104226        | torch.Size([2, 512, 1])          |
| 1788    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(4)           | input_0             | qint16        | 0.0000645 | -0.4369223   | 2.0816886     | 0.0000019    | 0.1954150        | torch.Size([2, 512, 32])         |
| 1788    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(4)           | input_1             | qint16        | 0.0000749 | 1.8674875    | 2.4551423     | 2.2689631    | 0.0104226        | torch.Size([2, 512, 1])          |
| 1788    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(4)           | output              | qint16        | 0.0001267 | -0.8618963   | 4.1501474     | -0.0001454   | 0.9979989        | torch.Size([2, 512, 32])         |
| 1789    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(4)      | input               | torch.float32 |           | 0.8469434    | 1.1090456     | 0.9866461    | 0.0031007        | torch.Size([32])                 |
| 1789    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(4)      | output              | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 1790    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(4)        | input_0             | qint16        | 0.0001267 | -0.8618963   | 4.1501474     | -0.0001454   | 0.9979989        | torch.Size([2, 512, 32])         |
| 1790    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(4)        | input_1             | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 1790    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(4)        | output              | qint16        | 0.0001376 | -0.9558105   | 4.4246821     | -0.0036015   | 0.9935451        | torch.Size([2, 512, 32])         |
| 1791    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(4)        | input               | torch.float32 |           | -0.0626723   | 0.0887763     | 0.0071697    | 0.0011301        | torch.Size([32])                 |
| 1791    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(4)        | output              | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 1792    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(4)          | input_0             | qint16        | 0.0001376 | -0.9558105   | 4.4246821     | -0.0036015   | 0.9935451        | torch.Size([2, 512, 32])         |
| 1792    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(4)          | input_1             | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 1792    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(4)          | output              | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0038317    | 0.9673165        | torch.Size([2, 512, 32])         |
| 1793    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(4)                   | input               | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0038317    | 0.9673165        | torch.Size([2, 512, 32])         |
| 1793    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(4)                   | weight              | torch.float32 |           | -0.5597425   | 0.7001730     | 0.0015679    | 0.0160348        | torch.Size([32, 32])             |
| 1793    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(4)                   | bias                | torch.float32 |           | -0.1810580   | 0.1736723     | -0.0279047   | 0.0091159        | torch.Size([32])                 |
| 1793    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(4)                   | output              | torch.float32 |           | -4.2969317   | 3.0793865     | -0.2467091   | 1.2318938        | torch.Size([2, 512, 32])         |
| 1794    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(4)                  | input               | torch.float32 |           | -4.2969317   | 3.0793865     | -0.2467091   | 1.2318938        | torch.Size([2, 512, 32])         |
| 1794    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(4)                  | output              | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2848611    | 0.3364636        | torch.Size([2, 512, 32])         |
| 1795    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(4)  | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2848611    | 0.3364636        | torch.Size([2, 512, 32])         |
| 1795    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(4)  | output              | qint16        | 0.0000121 | 0.2175284    | 0.3942815     | 0.2848602    | 0.0012068        | torch.Size([2, 512, 1])          |
| 1796    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(4)              | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.0726585     | 0.2848611    | 0.3364636        | torch.Size([2, 512, 32])         |
| 1796    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(4)              | input_1             | qint16        | 0.0000121 | 0.2175284    | 0.3942815     | 0.2848602    | 0.0012068        | torch.Size([2, 512, 1])          |
| 1796    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(4)              | output              | qint16        | 0.0000976 | -0.3943166   | 2.7931085     | 0.0000035    | 0.3352561        | torch.Size([2, 512, 32])         |
| 1797    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(4)              | input_0             | qint16        | 0.0000976 | -0.3943166   | 2.7931085     | 0.0000035    | 0.3352561        | torch.Size([2, 512, 32])         |
| 1797    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(4)              | input_1             | qint16        | 0.0000976 | -0.3943166   | 2.7931085     | 0.0000035    | 0.3352561        | torch.Size([2, 512, 32])         |
| 1797    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(4)              | output              | qint16        | 0.0003122 | 0.0000000    | 7.8014235     | 0.3352385    | 1.1433848        | torch.Size([2, 512, 32])         |
| 1798    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(4)    | input_0             | qint16        | 0.0003122 | 0.0000000    | 7.8014235     | 0.3352385    | 1.1433848        | torch.Size([2, 512, 32])         |
| 1798    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(4)    | output              | qint16        | 0.0000136 | 0.1389676    | 0.4219047     | 0.3352383    | 0.0061654        | torch.Size([2, 512, 1])          |
| 1799    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(4)            | input               | qint16        | 0.0000136 | 0.1389676    | 0.4219047     | 0.3352383    | 0.0061654        | torch.Size([2, 512, 1])          |
| 1799    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(4)            | output              | qint16        | 0.0000802 | 1.5395263    | 2.6273782     | 1.7750173    | 0.0722676        | torch.Size([2, 512, 1])          |
| 1800    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(4)          | input_0             | qint16        | 0.0000976 | -0.3943166   | 2.7931085     | 0.0000035    | 0.3352561        | torch.Size([2, 512, 32])         |
| 1800    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(4)          | input_1             | qint16        | 0.0000802 | 1.5395263    | 2.6273782     | 1.7750173    | 0.0722676        | torch.Size([2, 512, 1])          |
| 1800    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(4)          | output              | qint16        | 0.0001482 | -0.7599204   | 4.7678375     | -0.0000035   | 0.9999455        | torch.Size([2, 512, 32])         |
| 1801    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(4)     | input               | torch.float32 |           | 0.8363900    | 1.4688344     | 1.0570920    | 0.0396277        | torch.Size([32])                 |
| 1801    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(4)     | output              | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 1802    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(4)       | input_0             | qint16        | 0.0001482 | -0.7599204   | 4.7678375     | -0.0000035   | 0.9999455        | torch.Size([2, 512, 32])         |
| 1802    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(4)       | input_1             | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 1802    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(4)       | output              | qint16        | 0.0001637 | -1.1162564   | 4.0499902     | -0.0628490   | 0.8711653        | torch.Size([2, 512, 32])         |
| 1803    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(4)       | input               | torch.float32 |           | -0.1492936   | 0.2842544     | 0.0803791    | 0.0109446        | torch.Size([32])                 |
| 1803    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(4)       | output              | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 1804    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(4)         | input_0             | qint16        | 0.0001637 | -1.1162564   | 4.0499902     | -0.0628490   | 0.8711653        | torch.Size([2, 512, 32])         |
| 1804    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(4)         | input_1             | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 1804    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(4)         | output              | qint8         | 0.0373904 | -0.9347606   | 3.9259944     | 0.0169288    | 0.7841282        | torch.Size([2, 512, 32])         |
| 1805    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1805    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017895 | -2.4695058   | 2.3030825     | -0.2422031   | 0.5219936        | torch.Size([2, 512, 3])          |
| 1806    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(4)                   | input               | qint16        | 0.0017895 | -2.4695058   | 2.3030825     | -0.2422031   | 0.5219936        | torch.Size([2, 512, 3])          |
| 1806    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(4)                   | weight              | torch.float32 |           | -1.0475703   | 0.9848034     | -0.0054673   | 0.2080412        | torch.Size([64, 3])              |
| 1806    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(4)                   | bias                | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1294928        | torch.Size([64])                 |
| 1806    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(4)                   | output              | torch.float32 |           | -2.4910517   | 2.2613220     | -0.0866622   | 0.3466904        | torch.Size([2, 512, 64])         |
| 1807    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(4)                   | input               | torch.float32 |           | -2.4910517   | 2.2613220     | -0.0866622   | 0.3466904        | torch.Size([2, 512, 64])         |
| 1807    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(4)                   | output              | qint8         | 0.0729980 | 0.0000000    | 2.2629383     | 0.1849055    | 0.0800136        | torch.Size([2, 512, 64])         |
| 1808    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(4)   | input_0             | qint8         | 0.0729980 | 0.0000000    | 2.2629383     | 0.1849055    | 0.0800136        | torch.Size([2, 512, 64])         |
| 1808    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(4)   | output              | qint16        | 0.0000685 | 0.1232008    | 0.5565962     | 0.1848990    | 0.0072613        | torch.Size([2, 512, 1])          |
| 1809    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(4)               | input_0             | qint8         | 0.0729980 | 0.0000000    | 2.2629383     | 0.1849055    | 0.0800136        | torch.Size([2, 512, 64])         |
| 1809    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(4)               | input_1             | qint16        | 0.0000685 | 0.1232008    | 0.5565962     | 0.1848990    | 0.0072613        | torch.Size([2, 512, 1])          |
| 1809    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(4)               | output              | qint16        | 0.0002902 | -0.5566201   | 1.7348671     | 0.0000120    | 0.0727562        | torch.Size([2, 512, 64])         |
| 1810    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(4)               | input_0             | qint16        | 0.0002902 | -0.5566201   | 1.7348671     | 0.0000120    | 0.0727562        | torch.Size([2, 512, 64])         |
| 1810    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(4)               | input_1             | qint16        | 0.0002902 | -0.5566201   | 1.7348671     | 0.0000120    | 0.0727562        | torch.Size([2, 512, 64])         |
| 1810    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(4)               | output              | qint16        | 0.0029551 | 0.0000000    | 3.0082872     | 0.0728222    | 0.0329208        | torch.Size([2, 512, 64])         |
| 1811    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(4)     | input_0             | qint16        | 0.0029551 | 0.0000000    | 3.0082872     | 0.0728222    | 0.0329208        | torch.Size([2, 512, 64])         |
| 1811    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(4)     | output              | qint16        | 0.0003723 | 0.0260613    | 0.3864525     | 0.0728296    | 0.0043603        | torch.Size([2, 512, 1])          |
| 1812    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(4)             | input               | qint16        | 0.0003723 | 0.0260613    | 0.3864525     | 0.0728296    | 0.0043603        | torch.Size([2, 512, 1])          |
| 1812    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(4)             | output              | qint16        | 0.0001859 | 1.6085832    | 6.0927577     | 4.6207180    | 1.9932632        | torch.Size([2, 512, 1])          |
| 1813    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(4)           | input_0             | qint16        | 0.0002902 | -0.5566201   | 1.7348671     | 0.0000120    | 0.0727562        | torch.Size([2, 512, 64])         |
| 1813    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(4)           | input_1             | qint16        | 0.0001859 | 1.6085832    | 6.0927577     | 4.6207180    | 1.9932632        | torch.Size([2, 512, 1])          |
| 1813    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(4)           | output              | qint16        | 0.0001160 | -0.8953730   | 3.4967320     | 0.0000388    | 0.9965850        | torch.Size([2, 512, 64])         |
| 1814    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(4)      | input               | torch.float32 |           | 0.8691067    | 1.1281288     | 0.9794419    | 0.0036082        | torch.Size([64])                 |
| 1814    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(4)      | output              | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 1815    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(4)        | input_0             | qint16        | 0.0001160 | -0.8953730   | 3.4967320     | 0.0000388    | 0.9965850        | torch.Size([2, 512, 64])         |
| 1815    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(4)        | input_1             | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 1815    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(4)        | output              | qint16        | 0.0001189 | -1.0100285   | 3.4019074     | 0.0114950    | 0.9428160        | torch.Size([2, 512, 64])         |
| 1816    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(4)        | input               | torch.float32 |           | -0.1133662   | 0.1493634     | 0.0304540    | 0.0046508        | torch.Size([64])                 |
| 1816    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(4)        | output              | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 1817    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(4)          | input_0             | qint16        | 0.0001189 | -1.0100285   | 3.4019074     | 0.0114950    | 0.9428160        | torch.Size([2, 512, 64])         |
| 1817    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(4)          | input_1             | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 1817    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(4)          | output              | qint8         | 0.0267452 | -1.0163175   | 3.3431499     | 0.0421228    | 0.8525499        | torch.Size([2, 512, 64])         |
| 1818    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(4)                   | input               | qint8         | 0.0267452 | -1.0163175   | 3.3431499     | 0.0421228    | 0.8525499        | torch.Size([2, 512, 64])         |
| 1818    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(4)                   | weight              | torch.float32 |           | -0.4523612   | 0.4813256     | -0.0014562   | 0.0096743        | torch.Size([64, 64])             |
| 1818    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(4)                   | bias                | torch.float32 |           | -0.1183558   | 0.2243176     | 0.0150283    | 0.0049289        | torch.Size([64])                 |
| 1818    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(4)                   | output              | torch.float32 |           | -5.4004717   | 2.9433472     | -0.4130918   | 2.0712807        | torch.Size([2, 512, 64])         |
| 1819    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(4)                   | input               | torch.float32 |           | -5.4004717   | 2.9433472     | -0.4130918   | 2.0712807        | torch.Size([2, 512, 64])         |
| 1819    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(4)                   | output              | qint8         | 0.0337689 | 0.0000000    | 2.9378939     | 0.3205098    | 0.2142777        | torch.Size([2, 512, 64])         |
| 1820    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(4)   | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.9378939     | 0.3205098    | 0.2142777        | torch.Size([2, 512, 64])         |
| 1820    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(4)   | output              | qint16        | 0.0000195 | 0.2184383    | 0.5999317     | 0.3205094    | 0.0074539        | torch.Size([2, 512, 1])          |
| 1821    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(4)               | input_0             | qint8         | 0.0337689 | 0.0000000    | 2.9378939     | 0.3205098    | 0.2142777        | torch.Size([2, 512, 64])         |
| 1821    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(4)               | input_1             | qint16        | 0.0000195 | 0.2184383    | 0.5999317     | 0.3205094    | 0.0074539        | torch.Size([2, 512, 1])          |
| 1821    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(4)               | output              | qint16        | 0.0001376 | -0.5999088   | 2.3506408     | 0.0000032    | 0.2068300        | torch.Size([2, 512, 64])         |
| 1822    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(4)               | input_0             | qint16        | 0.0001376 | -0.5999088   | 2.3506408     | 0.0000032    | 0.2068300        | torch.Size([2, 512, 64])         |
| 1822    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(4)               | input_1             | qint16        | 0.0001376 | -0.5999088   | 2.3506408     | 0.0000032    | 0.2068300        | torch.Size([2, 512, 64])         |
| 1822    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(4)               | output              | qint16        | 0.0006236 | 0.0000000    | 5.5254269     | 0.2068073    | 0.2126743        | torch.Size([2, 512, 64])         |
| 1823    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(4)     | input_0             | qint16        | 0.0006236 | 0.0000000    | 5.5254269     | 0.2068073    | 0.2126743        | torch.Size([2, 512, 64])         |
| 1823    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(4)     | output              | qint16        | 0.0000322 | 0.0847569    | 0.6167059     | 0.2068105    | 0.0099026        | torch.Size([2, 512, 1])          |
| 1824    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(4)             | input               | qint16        | 0.0000322 | 0.0847569    | 0.6167059     | 0.2068105    | 0.0099026        | torch.Size([2, 512, 1])          |
| 1824    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(4)             | output              | qint16        | 0.0001060 | 1.2733284    | 3.4346581     | 2.4093261    | 0.3990262        | torch.Size([2, 512, 1])          |
| 1825    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(4)           | input_0             | qint16        | 0.0001376 | -0.5999088   | 2.3506408     | 0.0000032    | 0.2068300        | torch.Size([2, 512, 64])         |
| 1825    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(4)           | input_1             | qint16        | 0.0001060 | 1.2733284    | 3.4346581     | 2.4093261    | 0.3990262        | torch.Size([2, 512, 1])          |
| 1825    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(4)           | output              | qint16        | 0.0001466 | -0.8866556   | 4.4626036     | 0.0000123    | 1.0001365        | torch.Size([2, 512, 64])         |
| 1826    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(4)      | input               | torch.float32 |           | 0.8333027    | 1.1388558     | 0.9778216    | 0.0042186        | torch.Size([64])                 |
| 1826    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(4)      | output              | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 1827    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(4)        | input_0             | qint16        | 0.0001466 | -0.8866556   | 4.4626036     | 0.0000123    | 1.0001365        | torch.Size([2, 512, 64])         |
| 1827    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(4)        | input_1             | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 1827    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(4)        | output              | qint16        | 0.0001474 | -0.9484284   | 4.3372946     | 0.0041997    | 0.9859117        | torch.Size([2, 512, 64])         |
| 1828    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(4)        | input               | torch.float32 |           | -0.0757831   | 0.1161729     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1828    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(4)        | output              | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1829    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(4)          | input_0             | qint16        | 0.0001474 | -0.9484284   | 4.3372946     | 0.0041997    | 0.9859117        | torch.Size([2, 512, 64])         |
| 1829    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(4)          | input_1             | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 1829    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(4)          | output              | qint8         | 0.0350382 | -0.9109923   | 4.3096943     | 0.0206205    | 0.9404460        | torch.Size([2, 512, 64])         |
| 1830    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(4)                   | input               | qint8         | 0.0350382 | -0.9109923   | 4.3096943     | 0.0206205    | 0.9404460        | torch.Size([2, 512, 64])         |
| 1830    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(4)                   | weight              | torch.float32 |           | -0.5707353   | 0.3620123     | -0.0010372   | 0.0088292        | torch.Size([64, 64])             |
| 1830    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(4)                   | bias                | torch.float32 |           | -0.1720246   | 0.1340137     | -0.0235144   | 0.0050507        | torch.Size([64])                 |
| 1830    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(4)                   | output              | torch.float32 |           | -5.3988538   | 3.7189701     | -0.3470180   | 2.1185658        | torch.Size([2, 512, 64])         |
| 1831    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(4)                   | input               | torch.float32 |           | -5.3988538   | 3.7189701     | -0.3470180   | 2.1185658        | torch.Size([2, 512, 64])         |
| 1831    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(4)                   | output              | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4453222    | 0.4922985        | torch.Size([2, 512, 64])         |
| 1832    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(4)   | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4453222    | 0.4922985        | torch.Size([2, 512, 64])         |
| 1832    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(4)   | output              | qint16        | 0.0000166 | 0.3592917    | 0.5175771     | 0.4453222    | 0.0027646        | torch.Size([2, 512, 1])          |
| 1833    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(4)               | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4453222    | 0.4922985        | torch.Size([2, 512, 64])         |
| 1833    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(4)               | input_1             | qint16        | 0.0000166 | 0.3592917    | 0.5175771     | 0.4453222    | 0.0027646        | torch.Size([2, 512, 1])          |
| 1833    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(4)               | output              | qint16        | 0.0000988 | -0.5176006   | 3.1881309     | 0.0000002    | 0.4895365        | torch.Size([2, 512, 64])         |
| 1834    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(4)               | input_0             | qint16        | 0.0000988 | -0.5176006   | 3.1881309     | 0.0000002    | 0.4895365        | torch.Size([2, 512, 64])         |
| 1834    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(4)               | input_1             | qint16        | 0.0000988 | -0.5176006   | 3.1881309     | 0.0000002    | 0.4895365        | torch.Size([2, 512, 64])         |
| 1834    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(4)               | output              | qint16        | 0.0003201 | 0.0000000    | 10.1640558    | 0.4895208    | 1.0278925        | torch.Size([2, 512, 64])         |
| 1835    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(4)     | input_0             | qint16        | 0.0003201 | 0.0000000    | 10.1640558    | 0.4895208    | 1.0278925        | torch.Size([2, 512, 64])         |
| 1835    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(4)     | output              | qint16        | 0.0000230 | 0.3041536    | 0.7285390     | 0.4895230    | 0.0130940        | torch.Size([2, 512, 1])          |
| 1836    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(4)             | input               | qint16        | 0.0000230 | 0.3041536    | 0.7285390     | 0.4895230    | 0.0130940        | torch.Size([2, 512, 1])          |
| 1836    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(4)             | output              | qint16        | 0.0000608 | 1.1715734    | 1.8131927     | 1.4635396    | 0.0377224        | torch.Size([2, 512, 1])          |
| 1837    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(4)           | input_0             | qint16        | 0.0000988 | -0.5176006   | 3.1881309     | 0.0000002    | 0.4895365        | torch.Size([2, 512, 64])         |
| 1837    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(4)           | input_1             | qint16        | 0.0000608 | 1.1715734    | 1.8131927     | 1.4635396    | 0.0377224        | torch.Size([2, 512, 1])          |
| 1837    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(4)           | output              | qint16        | 0.0001598 | -0.6970997   | 4.1899514     | 0.0000060    | 1.0000036        | torch.Size([2, 512, 64])         |
| 1838    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(4)      | input               | torch.float32 |           | 0.8006503    | 1.1495361     | 0.9818506    | 0.0032003        | torch.Size([64])                 |
| 1838    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(4)      | output              | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 1839    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(4)        | input_0             | qint16        | 0.0001598 | -0.6970997   | 4.1899514     | 0.0000060    | 1.0000036        | torch.Size([2, 512, 64])         |
| 1839    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(4)        | input_1             | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 1839    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(4)        | output              | qint16        | 0.0001633 | -0.7850167   | 4.3655939     | 0.0056174    | 1.0017412        | torch.Size([2, 512, 64])         |
| 1840    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(4)        | input               | torch.float32 |           | -0.0461140   | 0.1411197     | 0.0132828    | 0.0015701        | torch.Size([64])                 |
| 1840    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(4)        | output              | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 1841    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(4)          | input_0             | qint16        | 0.0001633 | -0.7850167   | 4.3655939     | 0.0056174    | 1.0017412        | torch.Size([2, 512, 64])         |
| 1841    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(4)          | input_1             | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 1841    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(4)          | output              | qint8         | 0.0387038 | -0.7740757   | 4.3348241     | 0.0190766    | 0.9818817        | torch.Size([2, 512, 64])         |
| 1842    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(4)                   | input               | qint8         | 0.0387038 | -0.7740757   | 4.3348241     | 0.0190766    | 0.9818817        | torch.Size([2, 512, 64])         |
| 1842    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(4)                   | weight              | torch.float32 |           | -0.5701389   | 0.3477888     | 0.0006721    | 0.0085883        | torch.Size([64, 64])             |
| 1842    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(4)                   | bias                | torch.float32 |           | -0.1677032   | 0.1709885     | -0.0237130   | 0.0070098        | torch.Size([64])                 |
| 1842    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(4)                   | output              | torch.float32 |           | -4.7830462   | 7.2191157     | -0.4885931   | 1.7646931        | torch.Size([2, 512, 64])         |
| 1843    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(4)                  | input               | torch.float32 |           | -4.7830462   | 7.2191157     | -0.4885931   | 1.7646931        | torch.Size([2, 512, 64])         |
| 1843    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(4)                  | output              | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2606275    | 0.6659026        | torch.Size([2, 512, 64])         |
| 1844    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(4)  | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2606275    | 0.6659026        | torch.Size([2, 512, 64])         |
| 1844    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(4)  | output              | qint16        | 0.0000138 | 0.2029517    | 0.3856899     | 0.2606305    | 0.0021775        | torch.Size([2, 512, 1])          |
| 1845    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(4)              | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2606275    | 0.6659026        | torch.Size([2, 512, 64])         |
| 1845    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(4)              | input_1             | qint16        | 0.0000138 | 0.2029517    | 0.3856899     | 0.2606305    | 0.0021775        | torch.Size([2, 512, 1])          |
| 1845    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(4)              | output              | qint16        | 0.0002137 | -0.3856650   | 6.9370551     | 0.0000095    | 0.6637211        | torch.Size([2, 512, 64])         |
| 1846    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(4)              | input_0             | qint16        | 0.0002137 | -0.3856650   | 6.9370551     | 0.0000095    | 0.6637211        | torch.Size([2, 512, 64])         |
| 1846    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(4)              | input_1             | qint16        | 0.0002137 | -0.3856650   | 6.9370551     | 0.0000095    | 0.6637211        | torch.Size([2, 512, 64])         |
| 1846    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(4)              | output              | qint16        | 0.0014959 | 0.0000000    | 48.1224632    | 0.6637198    | 18.7339592       | torch.Size([2, 512, 64])         |
| 1847    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(4)    | input_0             | qint16        | 0.0014959 | 0.0000000    | 48.1224632    | 0.6637198    | 18.7339592       | torch.Size([2, 512, 64])         |
| 1847    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(4)    | output              | qint16        | 0.0000253 | 0.3164783    | 0.8266518     | 0.6637200    | 0.0141979        | torch.Size([2, 512, 1])          |
| 1848    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(4)            | input               | qint16        | 0.0000253 | 0.3164783    | 0.8266518     | 0.6637200    | 0.0141979        | torch.Size([2, 512, 1])          |
| 1848    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(4)            | output              | qint16        | 0.0000680 | 1.0998724    | 1.7775645     | 1.2443045    | 0.0156557        | torch.Size([2, 512, 1])          |
| 1849    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(4)          | input_0             | qint16        | 0.0002137 | -0.3856650   | 6.9370551     | 0.0000095    | 0.6637211        | torch.Size([2, 512, 64])         |
| 1849    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(4)          | input_1             | qint16        | 0.0000680 | 1.0998724    | 1.7775645     | 1.2443045    | 0.0156557        | torch.Size([2, 512, 1])          |
| 1849    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(4)          | output              | qint16        | 0.0002366 | -0.6375600   | 7.7517352     | 0.0000170    | 0.9999601        | torch.Size([2, 512, 64])         |
| 1850    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(4)     | input               | torch.float32 |           | 0.7297163    | 1.2824999     | 1.0134131    | 0.0161719        | torch.Size([64])                 |
| 1850    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(4)     | output              | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 1851    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(4)       | input_0             | qint16        | 0.0002366 | -0.6375600   | 7.7517352     | 0.0000170    | 0.9999601        | torch.Size([2, 512, 64])         |
| 1851    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(4)       | input_1             | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 1851    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(4)       | output              | qint16        | 0.0001954 | -0.7954578   | 5.6565237     | -0.0321529   | 0.7129198        | torch.Size([2, 512, 64])         |
| 1852    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(4)       | input               | torch.float32 |           | -0.2385408   | 0.3192695     | 0.0900053    | 0.0129013        | torch.Size([64])                 |
| 1852    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(4)       | output              | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 1853    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(4)         | input_0             | qint16        | 0.0001954 | -0.7954578   | 5.6565237     | -0.0321529   | 0.7129198        | torch.Size([2, 512, 64])         |
| 1853    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(4)         | input_1             | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 1853    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(4)         | output              | qint8         | 0.0462055 | -0.7854942   | 5.4060483     | 0.0579107    | 0.6284634        | torch.Size([2, 512, 64])         |
| 1854    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_0             | qint8         | 0.0587279 | -0.8809187   | 7.3997173     | 0.0703954    | 0.8679019        | torch.Size([2, 512, 128])        |
| 1854    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_1             | qint8         | 0.0385920 | -1.6980467   | 4.8625884     | 0.0146545    | 1.3493775        | torch.Size([2, 512, 32])         |
| 1854    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_2             | qint8         | 0.0373904 | -0.9347606   | 3.9259944     | 0.0169288    | 0.7841282        | torch.Size([2, 512, 32])         |
| 1854    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_3             | qint8         | 0.0462055 | -0.7854942   | 5.4060483     | 0.0579107    | 0.6284634        | torch.Size([2, 512, 64])         |
| 1854    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | output              | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0568060    | 0.8548859        | torch.Size([2, 512, 256])        |
| 1855    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(6)                                 | input               | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 1855    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(6)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 1855    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(6)                                 | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 1856    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.21.query_cat                          | input_0             | qint8         | 0.0278524 | -3.5651109   | 3.5372584     | 0.0062697    | 0.7816463        | torch.Size([2, 512, 256])        |
| 1856    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.21.query_cat                          | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0568060    | 0.8548859        | torch.Size([2, 512, 256])        |
| 1856    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.21.query_cat                          | output              | qint8         | 0.0538260 | -3.5525172   | 6.8359046     | 0.0334632    | 0.8170608        | torch.Size([2, 512, 512])        |
| 1857    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.21.key_cat                            | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 1857    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.21.key_cat                            | input_1             | qint8         | 0.0569265 | -1.0246774   | 5.3510933     | 0.0736042    | 0.8488365        | torch.Size([2, 256, 256])        |
| 1857    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.21.key_cat                            | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 1858    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.0538260 | -3.5525172   | 6.8359046     | 0.0334632    | 0.8170608        | torch.Size([2, 512, 512])        |
| 1858    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.0538260 | -3.5525172   | 6.8359046     | 0.0334632    | 0.8170608        | torch.Size([512, 2, 512])        |
| 1859    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 1859    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1860    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 1860    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1861    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | input_0             | qint8         | 0.0538260 | -3.5525172   | 6.8359046     | 0.0334632    | 0.8170608        | torch.Size([512, 2, 512])        |
| 1861    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | output              | qint8         | 0.0538260 | -3.5525172   | 6.8359046     | 0.0334632    | 0.8170608        | torch.Size([512, 2, 512])        |
| 1862    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1862    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1863    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1863    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1864    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.q_proj                        | input               | qint8         | 0.0538260 | -3.5525172   | 6.8359046     | 0.0334632    | 0.8170608        | torch.Size([512, 2, 512])        |
| 1864    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.q_proj                        | weight              | torch.float32 |           | -0.2718778   | 0.2867957     | -0.0000759   | 0.0035608        | torch.Size([512, 512])           |
| 1864    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.q_proj                        | bias                | torch.float32 |           | -0.1191430   | 0.1196405     | 0.0007935    | 0.0012712        | torch.Size([512])                |
| 1864    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.q_proj                        | output              | qint8         | 0.1139357 | -14.5837736  | 14.3559017    | 0.0324891    | 12.1301947       | torch.Size([512, 2, 512])        |
| 1865    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.k_proj                        | input               | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 1865    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.k_proj                        | weight              | torch.float32 |           | -0.2869442   | 0.2633475     | 0.0000353    | 0.0036706        | torch.Size([512, 512])           |
| 1865    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.k_proj                        | bias                | torch.float32 |           | -0.0028050   | 0.0033431     | 0.0000168    | 0.0000008        | torch.Size([512])                |
| 1865    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.k_proj                        | output              | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([256, 2, 512])        |
| 1866    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.v_proj                        | input               | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 1866    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.v_proj                        | weight              | torch.float32 |           | -0.1508207   | 0.1581457     | -0.0000932   | 0.0012603        | torch.Size([512, 512])           |
| 1866    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.v_proj                        | bias                | torch.float32 |           | -0.0568344   | 0.0711433     | 0.0019992    | 0.0005089        | torch.Size([512])                |
| 1866    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.v_proj                        | output              | qint8         | 0.0064641 | -0.0581770   | 0.0711052     | 0.0021210    | 0.0005116        | torch.Size([256, 2, 512])        |
| 1867    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | input_0             | qint8         | 0.1139357 | -14.5837736  | 14.3559017    | 0.0324891    | 12.1301947       | torch.Size([512, 2, 512])        |
| 1867    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | output              | qint8         | 0.1139357 | -14.5837736  | 14.3559017    | 0.0324891    | 12.1301947       | torch.Size([512, 16, 64])        |
| 1868    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.1139357 | -14.5837736  | 14.3559017    | 0.0324891    | 12.1301947       | torch.Size([512, 16, 64])        |
| 1868    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.1139357 | -14.5837736  | 14.3559017    | 0.0324891    | 12.1301947       | torch.Size([16, 512, 64])        |
| 1869    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | input_0             | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([256, 2, 512])        |
| 1869    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | output              | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([256, 16, 64])        |
| 1870    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([256, 16, 64])        |
| 1870    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([16, 256, 64])        |
| 1871    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | input_0             | qint8         | 0.0064641 | -0.0581770   | 0.0711052     | 0.0021210    | 0.0005116        | torch.Size([256, 2, 512])        |
| 1871    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | output              | qint8         | 0.0064641 | -0.0581770   | 0.0711052     | 0.0021210    | 0.0005116        | torch.Size([256, 16, 64])        |
| 1872    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.0064641 | -0.0581770   | 0.0711052     | 0.0021210    | 0.0005116        | torch.Size([256, 16, 64])        |
| 1872    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.0064641 | -0.0581770   | 0.0711052     | 0.0021210    | 0.0005116        | torch.Size([16, 256, 64])        |
| 1873    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.21.attn.q_scale_mul                   | input_0             | qint8         | 0.1139357 | -14.5837736  | 14.3559017    | 0.0324891    | 12.1301947       | torch.Size([16, 512, 64])        |
| 1873    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.21.attn.q_scale_mul                   | output              | qint8         | 0.0142420 | -1.8229717   | 1.7944877     | 0.0040611    | 0.1895343        | torch.Size([16, 512, 64])        |
| 1874    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([16, 256, 64])        |
| 1874    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([16, 64, 256])        |
| 1875    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.21.attn.matmul                        | input_0             | qint8         | 0.0142420 | -1.8229717   | 1.7944877     | 0.0040611    | 0.1895343        | torch.Size([16, 512, 64])        |
| 1875    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.21.attn.matmul                        | input_1             | qint8         | 0.0788103 | -4.8862391   | 4.7286181     | 0.1026689    | 3.7830389        | torch.Size([16, 64, 256])        |
| 1875    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.21.attn.matmul                        | output              | qint8         | 1.9740638 | -88.8328705  | 78.9625473    | -4.9587746   | 682.1637573      | torch.Size([16, 512, 256])       |
| 1876    | torch.Tensor.max                                                            | head.layers.21.attn.softmax                       | input               | qint8         | 1.9740638 | -88.8328705  | 78.9625473    | -4.9587746   | 682.1637573      | torch.Size([16, 512, 256])       |
| 1876    | torch.Tensor.max                                                            | head.layers.21.attn.softmax                       | output_0            | qint8         | 1.9740638 | -88.8328705  | 78.9625473    | -4.9587746   | 682.2467041      | torch.Size([16, 512, 1])         |
| 1876    | torch.Tensor.max                                                            | head.layers.21.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1877    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.21.attn.softmax.sub                   | input_0             | qint8         | 1.9740638 | -88.8328705  | 78.9625473    | -4.9587746   | 682.1637573      | torch.Size([16, 512, 256])       |
| 1877    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.21.attn.softmax.sub                   | input_1             | qint8         | 1.9740638 | -88.8328705  | 78.9625473    | -4.9587746   | 682.2467041      | torch.Size([16, 512, 1])         |
| 1877    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.21.attn.softmax.sub                   | output              | qint16        | 0.0148660 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1878    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.21.attn.softmax.exp                   | input               | qint16        | 0.0148660 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1878    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.21.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1879    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.21.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1879    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.21.attn.softmax.sum                   | output              | qint16        | 0.0037042 | 121.3739700  | 121.3739700   | 121.3739700  | 0.0000000        | torch.Size([16, 512, 1])         |
| 1880    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.21.attn.softmax.reciprocal            | input               | qint16        | 0.0037042 | 121.3739700  | 121.3739700   | 121.3739700  | 0.0000000        | torch.Size([16, 512, 1])         |
| 1880    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.21.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0082399    | 0.0082399     | 0.0082399    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1881    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.21.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1881    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.21.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0082399    | 0.0082399     | 0.0082399    | 0.0000000        | torch.Size([16, 512, 1])         |
| 1881    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.21.attn.softmax.mul                   | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1882    | torch.nn.modules.dropout.Dropout                                            | head.layers.21.attn.attention_drop                | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1882    | torch.nn.modules.dropout.Dropout                                            | head.layers.21.attn.attention_drop                | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1883    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.21.attn.attn_matmul                   | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1883    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.21.attn.attn_matmul                   | input_1             | qint8         | 0.0064641 | -0.0581770   | 0.0711052     | 0.0021210    | 0.0005116        | torch.Size([16, 256, 64])        |
| 1883    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.21.attn.attn_matmul                   | output              | qint8         | 0.0061011 | -0.1159206   | 0.1403250     | 0.0041707    | 0.0020806        | torch.Size([16, 512, 64])        |
| 1884    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.0061011 | -0.1159206   | 0.1403250     | 0.0041707    | 0.0020806        | torch.Size([16, 512, 64])        |
| 1884    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.0061011 | -0.1159206   | 0.1403250     | 0.0041707    | 0.0020806        | torch.Size([512, 16, 64])        |
| 1885    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | input_0             | qint8         | 0.0061011 | -0.1159206   | 0.1403250     | 0.0041707    | 0.0020806        | torch.Size([512, 16, 64])        |
| 1885    | torch.Tensor.reshape                                                        | head.layers.21.attn                               | output              | qint8         | 0.0061011 | -0.1159206   | 0.1403250     | 0.0041707    | 0.0020806        | torch.Size([512, 2, 512])        |
| 1886    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.out_proj                      | input               | qint8         | 0.0061011 | -0.1159206   | 0.1403250     | 0.0041707    | 0.0020806        | torch.Size([512, 2, 512])        |
| 1886    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.out_proj                      | weight              | torch.float32 |           | -0.1928206   | 0.1779369     | -0.0001203   | 0.0022082        | torch.Size([512, 512])           |
| 1886    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.out_proj                      | bias                | torch.float32 |           | -0.2257318   | 0.2060668     | 0.0074249    | 0.0055845        | torch.Size([512])                |
| 1886    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.21.attn.out_proj                      | output              | qint8         | 0.0082327 | -0.4445677   | 0.4116368     | 0.0192633    | 0.0248392        | torch.Size([512, 2, 512])        |
| 1887    | torch.Tensor.view                                                           | head.layers.21.attn                               | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 1887    | torch.Tensor.view                                                           | head.layers.21.attn                               | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 1888    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.21.attn.attn_weights_mean             | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 1888    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.21.attn.attn_weights_mean             | output              | qint8         | 0.0030319 | 0.0090958    | 0.0090958     | 0.0090958    | 0.0000000        | torch.Size([2, 512, 256])        |
| 1889    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | input_0             | qint8         | 0.0082327 | -0.4445677   | 0.4116368     | 0.0192633    | 0.0248392        | torch.Size([512, 2, 512])        |
| 1889    | torch.Tensor.transpose                                                      | head.layers.21.attn                               | output              | qint8         | 0.0082327 | -0.4445677   | 0.4116368     | 0.0192633    | 0.0248392        | torch.Size([2, 512, 512])        |
| 1890    | torch.nn.modules.dropout.Dropout                                            | head.layers.21.dropout                            | input               | qint8         | 0.0082327 | -0.4445677   | 0.4116368     | 0.0192633    | 0.0248392        | torch.Size([2, 512, 512])        |
| 1890    | torch.nn.modules.dropout.Dropout                                            | head.layers.21.dropout                            | output              | qint8         | 0.0082327 | -0.4445677   | 0.4116368     | 0.0192633    | 0.0248392        | torch.Size([2, 512, 512])        |
| 1891    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.21.add                                | input_0             | qint8         | 0.0538260 | -3.5525172   | 6.8359046     | 0.0334632    | 0.8170608        | torch.Size([2, 512, 512])        |
| 1891    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.21.add                                | input_1             | qint8         | 0.0082327 | -0.4445677   | 0.4116368     | 0.0192633    | 0.0248392        | torch.Size([2, 512, 512])        |
| 1891    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.21.add                                | output              | qint8         | 0.0514059 | -3.5470095   | 6.5285535     | 0.0525308    | 0.7768616        | torch.Size([2, 512, 512])        |
| 1892    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(6)                                  | input               | qint8         | 0.0514059 | -3.5470095   | 6.5285535     | 0.0525308    | 0.7768616        | torch.Size([2, 512, 512])        |
| 1892    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(6)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 1892    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(6)                                  | output              | qint16        | 0.0015259 | -6.7321777   | 5.4504395     | 0.0375946    | 0.8819827        | torch.Size([2, 512, 256])        |
| 1893    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(7)                                 | input               | qint16        | 0.0015259 | -6.7321777   | 5.4504395     | 0.0375946    | 0.8819827        | torch.Size([2, 512, 256])        |
| 1893    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(7)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 1893    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(7)                                 | output              | qint16        | 0.0001526 | -3.1246948   | 3.1045532     | 0.0010160    | 0.0549861        | torch.Size([2, 512, 512])        |
| 1894    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.22.query_cat                          | input_0             | qint16        | 0.0015259 | -6.7321777   | 5.4504395     | 0.0375946    | 0.8819827        | torch.Size([2, 512, 256])        |
| 1894    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.22.query_cat                          | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0568060    | 0.8548859        | torch.Size([2, 512, 256])        |
| 1894    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.22.query_cat                          | output              | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([2, 512, 512])        |
| 1895    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.22.key_cat                            | input_0             | qint16        | 0.0015259 | -6.7321777   | 5.4504395     | 0.0375946    | 0.8819827        | torch.Size([2, 512, 256])        |
| 1895    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.22.key_cat                            | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0568060    | 0.8548859        | torch.Size([2, 512, 256])        |
| 1895    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.22.key_cat                            | output              | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([2, 512, 512])        |
| 1896    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([2, 512, 512])        |
| 1896    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1897    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([2, 512, 512])        |
| 1897    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1898    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint16        | 0.0001526 | -3.1246948   | 3.1045532     | 0.0010160    | 0.0549861        | torch.Size([2, 512, 512])        |
| 1898    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint16        | 0.0001526 | -3.1246948   | 3.1045532     | 0.0010160    | 0.0549861        | torch.Size([512, 2, 512])        |
| 1899    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | input_0             | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1899    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | output              | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1900    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | input_0             | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1900    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | output              | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1901    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | input_0             | qint16        | 0.0001526 | -3.1246948   | 3.1045532     | 0.0010160    | 0.0549861        | torch.Size([512, 2, 512])        |
| 1901    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | output              | qint16        | 0.0001526 | -3.1246948   | 3.1045532     | 0.0010160    | 0.0549861        | torch.Size([512, 2, 512])        |
| 1902    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.q_proj                        | input               | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1902    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.q_proj                        | weight              | torch.float32 |           | -0.2868485   | 0.3352289     | -0.0001518   | 0.0026820        | torch.Size([512, 512])           |
| 1902    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.q_proj                        | bias                | torch.float32 |           | -0.0801667   | 0.0727894     | 0.0005583    | 0.0005112        | torch.Size([512])                |
| 1902    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.q_proj                        | output              | qint8         | 0.0764212 | -9.7819099   | 9.7054892     | -0.0162208   | 4.8851247        | torch.Size([512, 2, 512])        |
| 1903    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.k_proj                        | input               | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([512, 2, 512])        |
| 1903    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.k_proj                        | weight              | torch.float32 |           | -0.5697392   | 0.5493896     | -0.0000795   | 0.0032088        | torch.Size([512, 512])           |
| 1903    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.k_proj                        | bias                | torch.float32 |           | -0.0280499   | 0.0381052     | -0.0003095   | 0.0000538        | torch.Size([512])                |
| 1903    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.k_proj                        | output              | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([512, 2, 512])        |
| 1904    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.v_proj                        | input               | qint16        | 0.0001526 | -3.1246948   | 3.1045532     | 0.0010160    | 0.0549861        | torch.Size([512, 2, 512])        |
| 1904    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.v_proj                        | weight              | torch.float32 |           | -0.2083604   | 0.2150452     | -0.0000953   | 0.0016115        | torch.Size([512, 512])           |
| 1904    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.v_proj                        | bias                | torch.float32 |           | -0.3051279   | 0.2680113     | 0.0025552    | 0.0078078        | torch.Size([512])                |
| 1904    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.v_proj                        | output              | qint8         | 0.0220039 | -2.6624706   | 2.7944939     | 0.0213901    | 0.1479354        | torch.Size([512, 2, 512])        |
| 1905    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | input_0             | qint8         | 0.0764212 | -9.7819099   | 9.7054892     | -0.0162208   | 4.8851247        | torch.Size([512, 2, 512])        |
| 1905    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | output              | qint8         | 0.0764212 | -9.7819099   | 9.7054892     | -0.0162208   | 4.8851247        | torch.Size([512, 16, 64])        |
| 1906    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.0764212 | -9.7819099   | 9.7054892     | -0.0162208   | 4.8851247        | torch.Size([512, 16, 64])        |
| 1906    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.0764212 | -9.7819099   | 9.7054892     | -0.0162208   | 4.8851247        | torch.Size([16, 512, 64])        |
| 1907    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | input_0             | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([512, 2, 512])        |
| 1907    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | output              | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([512, 16, 64])        |
| 1908    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([512, 16, 64])        |
| 1908    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([16, 512, 64])        |
| 1909    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | input_0             | qint8         | 0.0220039 | -2.6624706   | 2.7944939     | 0.0213901    | 0.1479354        | torch.Size([512, 2, 512])        |
| 1909    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | output              | qint8         | 0.0220039 | -2.6624706   | 2.7944939     | 0.0213901    | 0.1479354        | torch.Size([512, 16, 64])        |
| 1910    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.0220039 | -2.6624706   | 2.7944939     | 0.0213901    | 0.1479354        | torch.Size([512, 16, 64])        |
| 1910    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.0220039 | -2.6624706   | 2.7944939     | 0.0213901    | 0.1479354        | torch.Size([16, 512, 64])        |
| 1911    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.22.attn.q_scale_mul                   | input_0             | qint8         | 0.0764212 | -9.7819099   | 9.7054892     | -0.0162208   | 4.8851247        | torch.Size([16, 512, 64])        |
| 1911    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.22.attn.q_scale_mul                   | output              | qint8         | 0.0095526 | -1.2227387   | 1.2131861     | -0.0020276   | 0.0763301        | torch.Size([16, 512, 64])        |
| 1912    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([16, 512, 64])        |
| 1912    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([16, 64, 512])        |
| 1913    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.22.attn.matmul                        | input_0             | qint8         | 0.0095526 | -1.2227387   | 1.2131861     | -0.0020276   | 0.0763301        | torch.Size([16, 512, 64])        |
| 1913    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.22.attn.matmul                        | input_1             | qint8         | 0.1016686 | -13.0135822  | 12.9119139    | -0.0202659   | 6.3114295        | torch.Size([16, 64, 512])        |
| 1913    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.22.attn.matmul                        | output              | qint8         | 0.6952415 | -83.4289780  | 88.2956696    | -0.6234021   | 313.8110657      | torch.Size([16, 512, 512])       |
| 1914    | torch.Tensor.max                                                            | head.layers.22.attn.softmax                       | input               | qint8         | 0.6952415 | -83.4289780  | 88.2956696    | -0.6234021   | 313.8110657      | torch.Size([16, 512, 512])       |
| 1914    | torch.Tensor.max                                                            | head.layers.22.attn.softmax                       | output_0            | qint8         | 0.6952415 | 4.8666906    | 88.2956696    | 33.3341675   | 410.0737610      | torch.Size([16, 512, 1])         |
| 1914    | torch.Tensor.max                                                            | head.layers.22.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 511.0000000   | 267.3946533  | 14842.0966797    | torch.Size([16, 512, 1])         |
| 1915    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.22.attn.softmax.sub                   | input_0             | qint8         | 0.6952415 | -83.4289780  | 88.2956696    | -0.6234021   | 313.8110657      | torch.Size([16, 512, 512])       |
| 1915    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.22.attn.softmax.sub                   | input_1             | qint8         | 0.6952415 | 4.8666906    | 88.2956696    | 33.3341675   | 410.0737610      | torch.Size([16, 512, 1])         |
| 1915    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.22.attn.softmax.sub                   | output              | qint16        | 0.0056863 | -165.4650726 | 0.0000000     | -33.9576340  | 695.9110718      | torch.Size([16, 512, 512])       |
| 1916    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.22.attn.softmax.exp                   | input               | qint16        | 0.0056863 | -165.4650726 | 0.0000000     | -33.9576340  | 695.9110718      | torch.Size([16, 512, 512])       |
| 1916    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.22.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0244335    | 0.0191615        | torch.Size([16, 512, 512])       |
| 1917    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.22.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0244335    | 0.0191615        | torch.Size([16, 512, 512])       |
| 1917    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.22.attn.softmax.sum                   | output              | qint16        | 0.0010989 | 0.9999802    | 36.0069771    | 6.6223354    | 81.3551788       | torch.Size([16, 512, 1])         |
| 1918    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.22.attn.softmax.reciprocal            | input               | qint16        | 0.0010989 | 0.9999802    | 36.0069771    | 6.6223354    | 81.3551788       | torch.Size([16, 512, 1])         |
| 1918    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.22.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0277714    | 0.9999847     | 0.3596089    | 0.0716714        | torch.Size([16, 512, 1])         |
| 1919    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.22.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0244335    | 0.0191615        | torch.Size([16, 512, 512])       |
| 1919    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.22.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0277714    | 0.9999847     | 0.3596089    | 0.0716714        | torch.Size([16, 512, 1])         |
| 1919    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.22.attn.softmax.mul                   | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0023053    | 0.0005090        | torch.Size([16, 512, 512])       |
| 1920    | torch.nn.modules.dropout.Dropout                                            | head.layers.22.attn.attention_drop                | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0023053    | 0.0005090        | torch.Size([16, 512, 512])       |
| 1920    | torch.nn.modules.dropout.Dropout                                            | head.layers.22.attn.attention_drop                | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0023053    | 0.0005090        | torch.Size([16, 512, 512])       |
| 1921    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.22.attn.attn_matmul                   | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0023053    | 0.0005090        | torch.Size([16, 512, 512])       |
| 1921    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.22.attn.attn_matmul                   | input_1             | qint8         | 0.0220039 | -2.6624706   | 2.7944939     | 0.0213901    | 0.1479354        | torch.Size([16, 512, 64])        |
| 1921    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.22.attn.attn_matmul                   | output              | qint8         | 0.0212958 | -2.7258604   | 2.7045646     | 0.0180513    | 0.2783437        | torch.Size([16, 512, 64])        |
| 1922    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.0212958 | -2.7258604   | 2.7045646     | 0.0180513    | 0.2783437        | torch.Size([16, 512, 64])        |
| 1922    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.0212958 | -2.7258604   | 2.7045646     | 0.0180513    | 0.2783437        | torch.Size([512, 16, 64])        |
| 1923    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | input_0             | qint8         | 0.0212958 | -2.7258604   | 2.7045646     | 0.0180513    | 0.2783437        | torch.Size([512, 16, 64])        |
| 1923    | torch.Tensor.reshape                                                        | head.layers.22.attn                               | output              | qint8         | 0.0212958 | -2.7258604   | 2.7045646     | 0.0180513    | 0.2783438        | torch.Size([512, 2, 512])        |
| 1924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.out_proj                      | input               | qint8         | 0.0212958 | -2.7258604   | 2.7045646     | 0.0180513    | 0.2783438        | torch.Size([512, 2, 512])        |
| 1924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.out_proj                      | weight              | torch.float32 |           | -0.2679534   | 0.2460409     | 0.0001218    | 0.0026792        | torch.Size([512, 512])           |
| 1924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.out_proj                      | bias                | torch.float32 |           | -0.3912482   | 0.3744041     | -0.0041605   | 0.0237935        | torch.Size([512])                |
| 1924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.22.attn.out_proj                      | output              | qint8         | 0.0299964 | -3.8395350   | 3.8095386     | -0.0133996   | 1.3110965        | torch.Size([512, 2, 512])        |
| 1925    | torch.Tensor.view                                                           | head.layers.22.attn                               | input_0             | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0023053    | 0.0005090        | torch.Size([16, 512, 512])       |
| 1925    | torch.Tensor.view                                                           | head.layers.22.attn                               | output              | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0023053    | 0.0005090        | torch.Size([2, 8, 512, 512])     |
| 1926    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.22.attn.attn_weights_mean             | input               | qint8         | 0.0078431 | 0.0000000    | 0.9960785     | 0.0023053    | 0.0005090        | torch.Size([2, 8, 512, 512])     |
| 1926    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.22.attn.attn_weights_mean             | output              | qint8         | 0.0025059 | 0.0000000    | 0.2455802     | 0.0022982    | 0.0000799        | torch.Size([2, 512, 512])        |
| 1927    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | input_0             | qint8         | 0.0299964 | -3.8395350   | 3.8095386     | -0.0133996   | 1.3110965        | torch.Size([512, 2, 512])        |
| 1927    | torch.Tensor.transpose                                                      | head.layers.22.attn                               | output              | qint8         | 0.0299964 | -3.8395350   | 3.8095386     | -0.0133996   | 1.3110965        | torch.Size([2, 512, 512])        |
| 1928    | torch.nn.modules.dropout.Dropout                                            | head.layers.22.dropout                            | input               | qint8         | 0.0299964 | -3.8395350   | 3.8095386     | -0.0133996   | 1.3110965        | torch.Size([2, 512, 512])        |
| 1928    | torch.nn.modules.dropout.Dropout                                            | head.layers.22.dropout                            | output              | qint8         | 0.0299964 | -3.8395350   | 3.8095386     | -0.0133996   | 1.3110965        | torch.Size([2, 512, 512])        |
| 1929    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.22.add                                | input_0             | qint8         | 0.0537724 | -6.7215495   | 6.8290944     | 0.0495723    | 0.8672290        | torch.Size([2, 512, 512])        |
| 1929    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.22.add                                | input_1             | qint8         | 0.0299964 | -3.8395350   | 3.8095386     | -0.0133996   | 1.3110965        | torch.Size([2, 512, 512])        |
| 1929    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.22.add                                | output              | qint8         | 0.0608918 | -7.7941518   | 7.7332602     | 0.0360444    | 2.1232903        | torch.Size([2, 512, 512])        |
| 1930    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(7)                                  | input               | qint8         | 0.0608918 | -7.7941518   | 7.7332602     | 0.0360444    | 2.1232903        | torch.Size([2, 512, 512])        |
| 1930    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(7)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 1930    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(7)                                  | output              | qint16        | 0.0015259 | -50.0000000  | 46.0586548    | 0.0158080    | 30.5379810       | torch.Size([2, 512, 256])        |
| 1931    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.23.input_mean.mean                    | input_0             | qint16        | 0.0015259 | -50.0000000  | 46.0586548    | 0.0158080    | 30.5379810       | torch.Size([2, 512, 256])        |
| 1931    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.23.input_mean.mean                    | output              | qint16        | 0.0000056 | -0.1330856   | 0.1843663     | 0.0157563    | 0.0066431        | torch.Size([2, 512, 1])          |
| 1932    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.23.sub                                | input_0             | qint16        | 0.0015259 | -50.0000000  | 46.0586548    | 0.0158080    | 30.5379810       | torch.Size([2, 512, 256])        |
| 1932    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.23.sub                                | input_1             | qint16        | 0.0000056 | -0.1330856   | 0.1843663     | 0.0157563    | 0.0066431        | torch.Size([2, 512, 1])          |
| 1932    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.23.sub                                | output              | qint16        | 0.0017192 | -50.1698456  | 46.0231209    | 0.0000542    | 30.5312881       | torch.Size([2, 512, 256])        |
| 1933    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.mul                                | input_0             | qint16        | 0.0017192 | -50.1698456  | 46.0231209    | 0.0000542    | 30.5312881       | torch.Size([2, 512, 256])        |
| 1933    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.mul                                | input_1             | qint16        | 0.0017192 | -50.1698456  | 46.0231209    | 0.0000542    | 30.5312881       | torch.Size([2, 512, 256])        |
| 1933    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.mul                                | output              | qint16        | 0.0968515 | 0.0000000    | 2516.9760742  | 30.5301285   | 27300.7050781    | torch.Size([2, 512, 256])        |
| 1934    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.23.var_mean.mean                      | input_0             | qint16        | 0.0968515 | 0.0000000    | 2516.9760742  | 30.5301285   | 27300.7050781    | torch.Size([2, 512, 256])        |
| 1934    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.23.var_mean.mean                      | output              | qint16        | 0.0014796 | 14.2339497   | 48.4827271    | 30.4683990   | 46.6022682       | torch.Size([2, 512, 1])          |
| 1935    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.23.rsqrt                              | input               | qint16        | 0.0014796 | 14.2339497   | 48.4827271    | 30.4683990   | 46.6022682       | torch.Size([2, 512, 1])          |
| 1935    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.23.rsqrt                              | output              | qint16        | 0.0000104 | 0.1436182    | 0.2650562     | 0.1844771    | 0.0004137        | torch.Size([2, 512, 1])          |
| 1936    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.out_mul                            | input_0             | qint16        | 0.0017192 | -50.1698456  | 46.0231209    | 0.0000542    | 30.5312881       | torch.Size([2, 512, 256])        |
| 1936    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.out_mul                            | input_1             | qint16        | 0.0000104 | 0.1436182    | 0.2650562     | 0.1844771    | 0.0004137        | torch.Size([2, 512, 1])          |
| 1936    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.out_mul                            | output              | qint16        | 0.0002629 | -8.3293028   | 6.6096439     | 0.0000090    | 1.0013063        | torch.Size([2, 512, 256])        |
| 1937    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.23.weight_quant                       | input               | torch.float32 |           | 0.7844438    | 1.0446960     | 0.8969574    | 0.0022063        | torch.Size([256])                |
| 1937    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.23.weight_quant                       | output              | qint16        | 0.0000319 | 0.7844585    | 1.0446801     | 0.8969576    | 0.0022063        | torch.Size([256])                |
| 1938    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.weight_mul                         | input_0             | qint16        | 0.0002629 | -8.3293028   | 6.6096439     | 0.0000090    | 1.0013063        | torch.Size([2, 512, 256])        |
| 1938    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.weight_mul                         | input_1             | qint16        | 0.0000319 | 0.7844585    | 1.0446801     | 0.8969576    | 0.0022063        | torch.Size([256])                |
| 1938    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.23.weight_mul                         | output              | qint16        | 0.0002110 | -6.6830363   | 5.7815428     | 0.0024042    | 0.7470961        | torch.Size([2, 512, 256])        |
| 1939    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.23.bias_quant                         | input               | torch.float32 |           | -0.1350660   | 0.1619885     | 0.0027300    | 0.0011589        | torch.Size([256])                |
| 1939    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.23.bias_quant                         | output              | qint16        | 0.0000049 | -0.1350683   | 0.1619860     | 0.0027300    | 0.0011589        | torch.Size([256])                |
| 1940    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.23.bias_add                           | input_0             | qint16        | 0.0002110 | -6.6830363   | 5.7815428     | 0.0024042    | 0.7470961        | torch.Size([2, 512, 256])        |
| 1940    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.23.bias_add                           | input_1             | qint16        | 0.0000049 | -0.1350683   | 0.1619860     | 0.0027300    | 0.0011589        | torch.Size([256])                |
| 1940    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.23.bias_add                           | output              | qint8         | 0.0487494 | -6.2399182   | 5.7036753     | 0.0051542    | 0.7165613        | torch.Size([2, 512, 256])        |
| 1941    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.kps_generator.offset               | input               | qint8         | 0.0487494 | -6.2399182   | 5.7036753     | 0.0051542    | 0.7165613        | torch.Size([2, 512, 256])        |
| 1941    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.kps_generator.offset               | weight              | torch.float32 |           | -0.4079330   | 0.3764863     | -0.0009719   | 0.0062766        | torch.Size([24, 256])            |
| 1941    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.kps_generator.offset               | bias                | torch.float32 |           | -0.1728180   | 0.0862914     | -0.0105869   | 0.0040706        | torch.Size([24])                 |
| 1941    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.kps_generator.offset               | output              | qint16        | 0.0004372 | -14.2038527  | 7.9787102     | -0.4484337   | 10.5624657       | torch.Size([2, 512, 24])         |
| 1942    | torch.Tensor.view                                                           | head.layers.24.kps_generator                      | input_0             | qint16        | 0.0004372 | -14.2038527  | 7.9787102     | -0.4484337   | 10.5624657       | torch.Size([2, 512, 24])         |
| 1942    | torch.Tensor.view                                                           | head.layers.24.kps_generator                      | output              | qint16        | 0.0004372 | -14.2038527  | 7.9787102     | -0.4484337   | 10.5624657       | torch.Size([2, 512, 8, 3])       |
| 1943    | torch.Tensor.__getitem__                                                    | head.layers.24.kps_generator                      | input_0             | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 1943    | torch.Tensor.__getitem__                                                    | head.layers.24.kps_generator                      | output              | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.7499245    | 275.7551575      | torch.Size([2, 512, 1, 3])       |
| 1944    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.kps_generator.keypoints_add        | input_0             | qint16        | 0.0004372 | -14.2038527  | 7.9787102     | -0.4484337   | 10.5624657       | torch.Size([2, 512, 8, 3])       |
| 1944    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.kps_generator.keypoints_add        | input_1             | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.7499245    | 275.7551575      | torch.Size([2, 512, 1, 3])       |
| 1944    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.kps_generator.keypoints_add        | output              | qint16        | 0.0020101 | -59.0433807  | 59.6162453    | 0.3015099    | 285.9446106      | torch.Size([2, 512, 8, 3])       |
| 1945    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.weight_add                         | input_0             | qint8         | 0.0487494 | -6.2399182   | 5.7036753     | 0.0051542    | 0.7165613        | torch.Size([2, 512, 256])        |
| 1945    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.weight_add                         | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0568060    | 0.8548859        | torch.Size([2, 512, 256])        |
| 1945    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.weight_add                         | output              | qint8         | 0.0597862 | -6.6362710   | 7.5928507     | 0.0620062    | 1.5021193        | torch.Size([2, 512, 256])        |
| 1946    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 1946    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 1947    | torch.Tensor.reshape                                                        | head.layers.24                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 1947    | torch.Tensor.reshape                                                        | head.layers.24                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 1948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.0                   | input               | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 1948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.0                   | weight              | torch.float32 |           | -0.7857405   | 0.6352730     | 0.0006263    | 0.0174991        | torch.Size([256, 12])            |
| 1948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.0                   | bias                | torch.float32 |           | -0.3248905   | 0.3380931     | 0.0039869    | 0.0290271        | torch.Size([256])                |
| 1948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.0                   | output              | torch.float32 |           | -1.1114415   | 1.3127626     | -0.0376013   | 0.1960484        | torch.Size([2, 6, 256])          |
| 1949    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.24.camera_encoder.1                   | input               | torch.float32 |           | -1.1114415   | 1.3127626     | -0.0376013   | 0.1960484        | torch.Size([2, 6, 256])          |
| 1949    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.24.camera_encoder.1                   | output              | qint8         | 0.0101761 | 0.0000000    | 1.2923608     | 0.1688492    | 0.0610745        | torch.Size([2, 6, 256])          |
| 1950    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.input_mean.mean   | input_0             | qint8         | 0.0101761 | 0.0000000    | 1.2923608     | 0.1688492    | 0.0610745        | torch.Size([2, 6, 256])          |
| 1950    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.input_mean.mean   | output              | qint16        | 0.0000058 | 0.1171070    | 0.1851965     | 0.1688492    | 0.0005937        | torch.Size([2, 6, 1])            |
| 1951    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.2.sub               | input_0             | qint8         | 0.0101761 | 0.0000000    | 1.2923608     | 0.1688492    | 0.0610745        | torch.Size([2, 6, 256])          |
| 1951    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.2.sub               | input_1             | qint16        | 0.0000058 | 0.1171070    | 0.1851965     | 0.1688492    | 0.0005937        | torch.Size([2, 6, 1])            |
| 1951    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.2.sub               | output              | qint16        | 0.0000345 | -0.1851933   | 1.1117457     | -0.0000046   | 0.0605318        | torch.Size([2, 6, 256])          |
| 1952    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.mul               | input_0             | qint16        | 0.0000345 | -0.1851933   | 1.1117457     | -0.0000046   | 0.0605318        | torch.Size([2, 6, 256])          |
| 1952    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.mul               | input_1             | qint16        | 0.0000345 | -0.1851933   | 1.1117457     | -0.0000046   | 0.0605318        | torch.Size([2, 6, 256])          |
| 1952    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.mul               | output              | qint16        | 0.0000390 | 0.0000000    | 1.2359753     | 0.0605113    | 0.0142757        | torch.Size([2, 6, 256])          |
| 1953    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.var_mean.mean     | input_0             | qint16        | 0.0000390 | 0.0000000    | 1.2359753     | 0.0605113    | 0.0142757        | torch.Size([2, 6, 256])          |
| 1953    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.var_mean.mean     | output              | qint16        | 0.0000024 | 0.0220898    | 0.0730579     | 0.0605114    | 0.0003256        | torch.Size([2, 6, 1])            |
| 1954    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.24.camera_encoder.2.rsqrt             | input               | qint16        | 0.0000024 | 0.0220898    | 0.0730579     | 0.0605114    | 0.0003256        | torch.Size([2, 6, 1])            |
| 1954    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.24.camera_encoder.2.rsqrt             | output              | qint16        | 0.0002042 | 3.6993871    | 6.6919408     | 4.3076296    | 1.2388936        | torch.Size([2, 6, 1])            |
| 1955    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.out_mul           | input_0             | qint16        | 0.0000345 | -0.1851933   | 1.1117457     | -0.0000046   | 0.0605318        | torch.Size([2, 6, 256])          |
| 1955    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.out_mul           | input_1             | qint16        | 0.0002042 | 3.6993871    | 6.6919408     | 4.3076296    | 1.2388936        | torch.Size([2, 6, 1])            |
| 1955    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.out_mul           | output              | qint16        | 0.0001337 | -0.7878858   | 4.3148608     | -0.0000226   | 0.9992854        | torch.Size([2, 6, 256])          |
| 1956    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.2.weight_quant      | input               | torch.float32 |           | 0.7170500    | 1.1652156     | 0.9740722    | 0.0055252        | torch.Size([256])                |
| 1956    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.2.weight_quant      | output              | qint16        | 0.0000356 | 0.7170339    | 1.1651978     | 0.9740726    | 0.0055252        | torch.Size([256])                |
| 1957    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.weight_mul        | input_0             | qint16        | 0.0001337 | -0.7878858   | 4.3148608     | -0.0000226   | 0.9992854        | torch.Size([2, 6, 256])          |
| 1957    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.weight_mul        | input_1             | qint16        | 0.0000356 | 0.7170339    | 1.1651978     | 0.9740726    | 0.0055252        | torch.Size([256])                |
| 1957    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.weight_mul        | output              | qint16        | 0.0001329 | -0.8604422   | 4.2921066     | 0.0091794    | 0.9698855        | torch.Size([2, 6, 256])          |
| 1958    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.2.bias_quant        | input               | torch.float32 |           | -0.0844964   | 0.2250945     | 0.0129729    | 0.0024166        | torch.Size([256])                |
| 1958    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.2.bias_quant        | output              | qint16        | 0.0000069 | -0.0844942   | 0.2250911     | 0.0129730    | 0.0024166        | torch.Size([256])                |
| 1959    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.2.bias_add          | input_0             | qint16        | 0.0001329 | -0.8604422   | 4.2921066     | 0.0091794    | 0.9698855        | torch.Size([2, 6, 256])          |
| 1959    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.2.bias_add          | input_1             | qint16        | 0.0000069 | -0.0844942   | 0.2250911     | 0.0129730    | 0.0024166        | torch.Size([256])                |
| 1959    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.2.bias_add          | output              | qint8         | 0.0337904 | -0.9123418   | 4.2913857     | 0.0219110    | 0.9544103        | torch.Size([2, 6, 256])          |
| 1960    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.3                   | input               | qint8         | 0.0337904 | -0.9123418   | 4.2913857     | 0.0219110    | 0.9544103        | torch.Size([2, 6, 256])          |
| 1960    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.3                   | weight              | torch.float32 |           | -0.4547428   | 0.4697872     | 0.0003959    | 0.0051907        | torch.Size([256, 256])           |
| 1960    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.3                   | bias                | torch.float32 |           | -0.0825015   | 0.3699438     | -0.0037957   | 0.0022571        | torch.Size([256])                |
| 1960    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.camera_encoder.3                   | output              | torch.float32 |           | -13.0082130  | 59.5281105    | -0.5712168   | 26.2293663       | torch.Size([2, 6, 256])          |
| 1961    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.24.camera_encoder.4                   | input               | torch.float32 |           | -13.0082130  | 59.5281105    | -0.5712168   | 26.2293663       | torch.Size([2, 6, 256])          |
| 1961    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.24.camera_encoder.4                   | output              | qint8         | 0.4700390 | 0.0000000    | 59.6949501    | 1.0776317    | 21.1207008       | torch.Size([2, 6, 256])          |
| 1962    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.input_mean.mean   | input_0             | qint8         | 0.4700390 | 0.0000000    | 59.6949501    | 1.0776317    | 21.1207008       | torch.Size([2, 6, 256])          |
| 1962    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.input_mean.mean   | output              | qint16        | 0.0000338 | 1.0594095    | 1.1053196     | 1.0776273    | 0.0002111        | torch.Size([2, 6, 1])            |
| 1963    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.5.sub               | input_0             | qint8         | 0.4700390 | 0.0000000    | 59.6949501    | 1.0776317    | 21.1207008       | torch.Size([2, 6, 256])          |
| 1963    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.5.sub               | input_1             | qint16        | 0.0000338 | 1.0594095    | 1.1053196     | 1.0776273    | 0.0002111        | torch.Size([2, 6, 1])            |
| 1963    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.5.sub               | output              | qint16        | 0.0017959 | -1.1044511   | 58.6131325    | 0.0002175    | 21.1199951       | torch.Size([2, 6, 256])          |
| 1964    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.mul               | input_0             | qint16        | 0.0017959 | -1.1044511   | 58.6131325    | 0.0002175    | 21.1199951       | torch.Size([2, 6, 256])          |
| 1964    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.mul               | input_1             | qint16        | 0.0017959 | -1.1044511   | 58.6131325    | 0.0002175    | 21.1199951       | torch.Size([2, 6, 256])          |
| 1964    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.mul               | output              | qint16        | 0.1056784 | 0.0000000    | 3435.4987793  | 21.1216774   | 40957.5273438    | torch.Size([2, 6, 256])          |
| 1965    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.var_mean.mean     | input_0             | qint16        | 0.1056784 | 0.0000000    | 3435.4987793  | 21.1216774   | 40957.5273438    | torch.Size([2, 6, 256])          |
| 1965    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.var_mean.mean     | output              | qint16        | 0.0007177 | 19.7106152   | 23.2353153    | 21.1217041   | 1.1442288        | torch.Size([2, 6, 1])            |
| 1966    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.24.camera_encoder.5.rsqrt             | input               | qint16        | 0.0007177 | 19.7106152   | 23.2353153    | 21.1217041   | 1.1442288        | torch.Size([2, 6, 1])            |
| 1966    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.24.camera_encoder.5.rsqrt             | output              | qint16        | 0.0000070 | 0.2074536    | 0.2252415     | 0.2177742    | 0.0000288        | torch.Size([2, 6, 1])            |
| 1967    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.out_mul           | input_0             | qint16        | 0.0017959 | -1.1044511   | 58.6131325    | 0.0002175    | 21.1199951       | torch.Size([2, 6, 256])          |
| 1967    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.out_mul           | input_1             | qint16        | 0.0000070 | 0.2074536    | 0.2252415     | 0.2177742    | 0.0000288        | torch.Size([2, 6, 1])            |
| 1967    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.out_mul           | output              | qint16        | 0.0003768 | -0.2479336   | 12.3379021    | 0.0000980    | 0.9999023        | torch.Size([2, 6, 256])          |
| 1968    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.5.weight_quant      | input               | torch.float32 |           | 0.4739479    | 1.5194587     | 0.8861445    | 0.0227169        | torch.Size([256])                |
| 1968    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.5.weight_quant      | output              | qint16        | 0.0000464 | 0.4739570    | 1.5194354     | 0.8861456    | 0.0227165        | torch.Size([256])                |
| 1969    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.weight_mul        | input_0             | qint16        | 0.0003768 | -0.2479336   | 12.3379021    | 0.0000980    | 0.9999023        | torch.Size([2, 6, 256])          |
| 1969    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.weight_mul        | input_1             | qint16        | 0.0000464 | 0.4739570    | 1.5194354     | 0.8861456    | 0.0227165        | torch.Size([256])                |
| 1969    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.weight_mul        | output              | qint16        | 0.0002353 | -0.3767715   | 7.7058120     | -0.0192724   | 0.5534326        | torch.Size([2, 6, 256])          |
| 1970    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.5.bias_quant        | input               | torch.float32 |           | -0.5851686   | 0.4827383     | 0.0429210    | 0.0232055        | torch.Size([256])                |
| 1970    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.camera_encoder.5.bias_quant        | output              | qint16        | 0.0000179 | -0.5851597   | 0.4827429     | 0.0429209    | 0.0232055        | torch.Size([256])                |
| 1971    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.5.bias_add          | input_0             | qint16        | 0.0002353 | -0.3767715   | 7.7058120     | -0.0192724   | 0.5534326        | torch.Size([2, 6, 256])          |
| 1971    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.5.bias_add          | input_1             | qint16        | 0.0000179 | -0.5851597   | 0.4827429     | 0.0429209    | 0.0232055        | torch.Size([256])                |
| 1971    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.5.bias_add          | output              | qint8         | 0.0571271 | -0.9711612   | 7.2551460     | 0.0226500    | 0.5353527        | torch.Size([2, 6, 256])          |
| 1972    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint8         | 0.0597862 | -6.6362710   | 7.5928507     | 0.0620062    | 1.5021193        | torch.Size([2, 512, 256])        |
| 1972    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint8         | 0.0597862 | -6.6362710   | 7.5928507     | 0.0620062    | 1.5021193        | torch.Size([2, 512, 1, 256])     |
| 1973    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint8         | 0.0571271 | -0.9711612   | 7.2551460     | 0.0226500    | 0.5353527        | torch.Size([2, 6, 256])          |
| 1973    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint8         | 0.0571271 | -0.9711612   | 7.2551460     | 0.0226500    | 0.5353527        | torch.Size([2, 1, 6, 256])       |
| 1974    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.cam_add                            | input_0             | qint8         | 0.0597862 | -6.6362710   | 7.5928507     | 0.0620062    | 1.5021193        | torch.Size([2, 512, 1, 256])     |
| 1974    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.cam_add                            | input_1             | qint8         | 0.0571271 | -0.9711612   | 7.2551460     | 0.0226500    | 0.5353527        | torch.Size([2, 1, 6, 256])       |
| 1974    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.24.cam_add                            | output              | qint8         | 0.0674109 | -6.3366246   | 8.5611849     | 0.0843829    | 1.3861963        | torch.Size([2, 512, 6, 256])     |
| 1975    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.weights_fc                         | input               | qint8         | 0.0674109 | -6.3366246   | 8.5611849     | 0.0843829    | 1.3861963        | torch.Size([2, 512, 6, 256])     |
| 1975    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.weights_fc                         | weight              | torch.float32 |           | -0.3503168   | 0.2480071     | 0.0005745    | 0.0031640        | torch.Size([64, 256])            |
| 1975    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.weights_fc                         | bias                | torch.float32 |           | -0.1120743   | 0.0735845     | -0.0091236   | 0.0018223        | torch.Size([64])                 |
| 1975    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.weights_fc                         | output              | qint8         | 0.0744363 | -9.5278416   | 6.4015183     | -0.3734718   | 5.2784066        | torch.Size([2, 512, 6, 64])      |
| 1976    | torch.Tensor.reshape                                                        | head.layers.24                                    | input_0             | qint8         | 0.0744363 | -9.5278416   | 6.4015183     | -0.3734718   | 5.2784066        | torch.Size([2, 512, 6, 64])      |
| 1976    | torch.Tensor.reshape                                                        | head.layers.24                                    | output              | qint8         | 0.0744363 | -9.5278416   | 6.4015183     | -0.3734718   | 5.2784066        | torch.Size([2, 512, 48, 8])      |
| 1977    | torch.Tensor.max                                                            | head.layers.24.weight_softmax                     | input               | qint8         | 0.0744363 | -9.5278416   | 6.4015183     | -0.3734718   | 5.2784066        | torch.Size([2, 512, 48, 8])      |
| 1977    | torch.Tensor.max                                                            | head.layers.24.weight_softmax                     | output_0            | qint8         | 0.0744363 | 1.2654165    | 6.4015183     | 2.9497459    | 0.7711469        | torch.Size([2, 512, 1, 8])       |
| 1977    | torch.Tensor.max                                                            | head.layers.24.weight_softmax                     | output_1            | torch.int64   |           | 0.0000000    | 47.0000000    | 24.8862305   | 181.5028687      | torch.Size([2, 512, 1, 8])       |
| 1978    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.weight_softmax.sub                 | input_0             | qint8         | 0.0744363 | -9.5278416   | 6.4015183     | -0.3734718   | 5.2784066        | torch.Size([2, 512, 48, 8])      |
| 1978    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.weight_softmax.sub                 | input_1             | qint8         | 0.0744363 | 1.2654165    | 6.4015183     | 2.9497459    | 0.7711469        | torch.Size([2, 512, 1, 8])       |
| 1978    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.24.weight_softmax.sub                 | output              | qint16        | 0.0003908 | -11.9843769  | 0.0000000     | -3.3232141   | 5.3208656        | torch.Size([2, 512, 48, 8])      |
| 1979    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.24.weight_softmax.exp                 | input               | qint16        | 0.0003908 | -11.9843769  | 0.0000000     | -3.3232141   | 5.3208656        | torch.Size([2, 512, 48, 8])      |
| 1979    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.24.weight_softmax.exp                 | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1936768    | 0.0805507        | torch.Size([2, 512, 48, 8])      |
| 1980    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.24.weight_softmax.sum                 | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1936768    | 0.0805507        | torch.Size([2, 512, 48, 8])      |
| 1980    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.24.weight_softmax.sum                 | output              | qint16        | 0.0006535 | 1.6841737    | 19.5094891    | 9.2964802    | 11.4473276       | torch.Size([2, 512, 1, 8])       |
| 1981    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.24.weight_softmax.reciprocal          | input               | qint16        | 0.0006535 | 1.6841737    | 19.5094891    | 9.2964802    | 11.4473276       | torch.Size([2, 512, 1, 8])       |
| 1981    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.24.weight_softmax.reciprocal          | output              | qint16        | 0.0000180 | 0.0512522    | 0.5892569     | 0.1331204    | 0.0073940        | torch.Size([2, 512, 1, 8])       |
| 1982    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.weight_softmax.mul                 | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1936768    | 0.0805507        | torch.Size([2, 512, 48, 8])      |
| 1982    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.weight_softmax.mul                 | input_1             | qint16        | 0.0000180 | 0.0512522    | 0.5892569     | 0.1331204    | 0.0073940        | torch.Size([2, 512, 1, 8])       |
| 1982    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.weight_softmax.mul                 | output              | qint8         | 0.0036511 | 0.0000000    | 0.4636950     | 0.0206877    | 0.0011403        | torch.Size([2, 512, 48, 8])      |
| 1983    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint16        | 0.0020101 | -59.0433807  | 59.6162453    | 0.3015099    | 285.9446106      | torch.Size([2, 512, 8, 3])       |
| 1983    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint16        | 0.0020101 | -59.0433807  | 50.8704910    | -1.6255388   | 299.4944153      | torch.Size([2, 512, 8, 1])       |
| 1984    | torch.ones_like                                                             | head.layers.24                                    | input               | qint16        | 0.0020101 | -59.0433807  | 50.8704910    | -1.6255388   | 299.4944153      | torch.Size([2, 512, 8, 1])       |
| 1984    | torch.ones_like                                                             | head.layers.24                                    | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1985    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.point_quant_stub                   | input               | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1985    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.24.point_quant_stub                   | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1986    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.point_cat                          | input_0             | qint16        | 0.0020101 | -59.0433807  | 59.6162453    | 0.3015099    | 285.9446106      | torch.Size([2, 512, 8, 3])       |
| 1986    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.point_cat                          | input_1             | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 1986    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.point_cat                          | output              | qint16        | 0.0018311 | -59.0441895  | 59.6154785    | 0.4760618    | 214.5476685      | torch.Size([2, 512, 8, 4])       |
| 1987    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 1987    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1988    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint16        | 0.0018311 | -59.0441895  | 59.6154785    | 0.4760618    | 214.5476685      | torch.Size([2, 512, 8, 4])       |
| 1988    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint16        | 0.0018311 | -59.0441895  | 59.6154785    | 0.4760618    | 214.5476685      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1989    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.point_matmul                       | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1989    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.point_matmul                       | input_1             | qint16        | 0.0018311 | -59.0441895  | 59.6154785    | 0.4760618    | 214.5476685      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1989    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.point_matmul                       | output              | qint16        | 0.0031483 | -93.4632187  | 87.8498230    | 0.0954272    | 96.8347549       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1990    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.24.point_sum                          | input               | qint16        | 0.0031483 | -93.4632187  | 87.8498230    | 0.0954272    | 96.8347549       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1990    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.24.point_sum                          | output              | qint16        | 0.0031785 | -102.7067719 | 101.3749771   | 0.3817288    | 381.9493408      | torch.Size([2, 6, 512, 8, 4])    |
| 1991    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint16        | 0.0031785 | -102.7067719 | 101.3749771   | 0.3817288    | 381.9493408      | torch.Size([2, 6, 512, 8, 4])    |
| 1991    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint16        | 0.0031785 | -61.9044037  | 63.4364395    | -0.4525164   | 422.7819214      | torch.Size([2, 6, 512, 8, 1])    |
| 1992    | torch.clamp                                                                 | head.layers.24                                    | input               | qint16        | 0.0031785 | -61.9044037  | 63.4364395    | -0.4525164   | 422.7819214      | torch.Size([2, 6, 512, 8, 1])    |
| 1992    | torch.clamp                                                                 | head.layers.24                                    | output              | qint16        | 0.0031785 | 0.0000000    | 63.4364395    | 7.4249411    | 150.7652283      | torch.Size([2, 6, 512, 8, 1])    |
| 1993    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.24.reciprocal_op                      | input               | qint16        | 0.0031785 | 0.0000000    | 63.4364395    | 7.4249411    | 150.7652283      | torch.Size([2, 6, 512, 8, 1])    |
| 1993    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.24.reciprocal_op                      | output              | qint16        | 0.0003357 | 0.0157776    | 10.9996643    | 6.2122207    | 28.3262901       | torch.Size([2, 6, 512, 8, 1])    |
| 1994    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint16        | 0.0031785 | -102.7067719 | 101.3749771   | 0.3817288    | 381.9493408      | torch.Size([2, 6, 512, 8, 4])    |
| 1994    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint16        | 0.0031785 | -102.7067719 | 101.3749771   | 0.4891023    | 551.9623413      | torch.Size([2, 6, 512, 8, 2])    |
| 1995    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.point_mul                          | input_0             | qint16        | 0.0031785 | -102.7067719 | 101.3749771   | 0.4891023    | 551.9623413      | torch.Size([2, 6, 512, 8, 2])    |
| 1995    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.point_mul                          | input_1             | qint16        | 0.0003357 | 0.0157776    | 10.9996643    | 6.2122207    | 28.3262901       | torch.Size([2, 6, 512, 8, 1])    |
| 1995    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.point_mul                          | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1331789    | 0.9411140        | torch.Size([2, 6, 512, 8, 2])    |
| 1996    | torch.Tensor.flatten                                                        | head.layers.24                                    | input               | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1331789    | 0.9411140        | torch.Size([2, 6, 512, 8, 2])    |
| 1996    | torch.Tensor.flatten                                                        | head.layers.24                                    | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1331789    | 0.9411140        | torch.Size([12, 512, 8, 2])      |
| 1997    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.24                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 1997    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.24                                    | input_1             | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.1331789    | 0.9411140        | torch.Size([12, 512, 8, 2])      |
| 1997    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.24                                    | output              | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([12, 256, 512, 8])    |
| 1998    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.feat_cat                           | input               | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([12, 256, 512, 8])    |
| 1998    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.feat_cat                           | output              | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([12, 256, 512, 8])    |
| 1999    | torch.Tensor.view                                                           | head.layers.24                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([12, 256, 512, 8])    |
| 1999    | torch.Tensor.view                                                           | head.layers.24                                    | output              | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([2, 6, 256, 512, 8])  |
| 2000    | torch.Tensor.permute                                                        | head.layers.24                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([2, 6, 256, 512, 8])  |
| 2000    | torch.Tensor.permute                                                        | head.layers.24                                    | output              | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([2, 512, 6, 8, 256])  |
| 2001    | torch.Tensor.contiguous                                                     | head.layers.24                                    | input               | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769810        | torch.Size([2, 512, 6, 8, 256])  |
| 2001    | torch.Tensor.contiguous                                                     | head.layers.24                                    | output              | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769805        | torch.Size([2, 512, 6, 8, 256])  |
| 2002    | torch.Tensor.view                                                           | head.layers.24                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769805        | torch.Size([2, 512, 6, 8, 256])  |
| 2002    | torch.Tensor.view                                                           | head.layers.24                                    | output              | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769805        | torch.Size([2, 512, 48, 256])    |
| 2003    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | input_0             | qint8         | 0.0036511 | 0.0000000    | 0.4636950     | 0.0206877    | 0.0011403        | torch.Size([2, 512, 48, 8])      |
| 2003    | torch.Tensor.__getitem__                                                    | head.layers.24                                    | output              | qint8         | 0.0036511 | 0.0000000    | 0.4636950     | 0.0206877    | 0.0011403        | torch.Size([2, 512, 48, 8, 1])   |
| 2004    | torch.Tensor.reshape                                                        | head.layers.24                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769805        | torch.Size([2, 512, 48, 256])    |
| 2004    | torch.Tensor.reshape                                                        | head.layers.24                                    | output              | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769805        | torch.Size([2, 512, 48, 8, 32])  |
| 2005    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.feat_mul                           | input_0             | qint8         | 0.0036511 | 0.0000000    | 0.4636950     | 0.0206877    | 0.0011403        | torch.Size([2, 512, 48, 8, 1])   |
| 2005    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.feat_mul                           | input_1             | qint8         | 0.2235520 | -28.6146584  | 27.4968987    | 0.0262860    | 2.6769805        | torch.Size([2, 512, 48, 8, 32])  |
| 2005    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.24.feat_mul                           | output              | qint8         | 0.0151945 | -1.9448956   | 1.9297011     | 0.0002231    | 0.0026120        | torch.Size([2, 512, 48, 8, 32])  |
| 2006    | torch.Tensor.view                                                           | head.layers.24                                    | input_0             | qint8         | 0.0151945 | -1.9448956   | 1.9297011     | 0.0002231    | 0.0026120        | torch.Size([2, 512, 48, 8, 32])  |
| 2006    | torch.Tensor.view                                                           | head.layers.24                                    | output              | qint8         | 0.0151945 | -1.9448956   | 1.9297011     | 0.0002231    | 0.0026120        | torch.Size([2, 512, 48, 256])    |
| 2007    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.24.feat_sum                           | input               | qint8         | 0.0151945 | -1.9448956   | 1.9297011     | 0.0002231    | 0.0026120        | torch.Size([2, 512, 48, 256])    |
| 2007    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.24.feat_sum                           | output              | qint8         | 0.0319187 | -4.0855899   | 4.0536714     | 0.0107475    | 0.2894052        | torch.Size([2, 512, 256])        |
| 2008    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.output_proj                        | input               | qint8         | 0.0319187 | -4.0855899   | 4.0536714     | 0.0107475    | 0.2894052        | torch.Size([2, 512, 256])        |
| 2008    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.output_proj                        | weight              | torch.float32 |           | -0.3212579   | 0.3928832     | -0.0001007   | 0.0072132        | torch.Size([256, 256])           |
| 2008    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.output_proj                        | bias                | torch.float32 |           | -0.0801640   | 0.1065602     | -0.0009339   | 0.0011949        | torch.Size([256])                |
| 2008    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.24.output_proj                        | output              | qint8         | 0.0338453 | -4.3322029   | 4.2983575     | 0.0094636    | 0.6042999        | torch.Size([2, 512, 256])        |
| 2009    | torch.nn.modules.dropout.Dropout                                            | head.layers.24.proj_drop                          | input               | qint8         | 0.0338453 | -4.3322029   | 4.2983575     | 0.0094636    | 0.6042999        | torch.Size([2, 512, 256])        |
| 2009    | torch.nn.modules.dropout.Dropout                                            | head.layers.24.proj_drop                          | output              | qint8         | 0.0338453 | -4.3322029   | 4.2983575     | 0.0094636    | 0.6042999        | torch.Size([2, 512, 256])        |
| 2010    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.residual_op                        | input_0             | qint8         | 0.0338453 | -4.3322029   | 4.2983575     | 0.0094636    | 0.6042999        | torch.Size([2, 512, 256])        |
| 2010    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.residual_op                        | input_1             | qint8         | 0.0487494 | -6.2399182   | 5.7036753     | 0.0051542    | 0.7165613        | torch.Size([2, 512, 256])        |
| 2010    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.24.residual_op                        | output              | qint8         | 0.0477158 | -6.1076260   | 5.7258992     | 0.0070092    | 0.6587798        | torch.Size([2, 512, 512])        |
| 2011    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.input_mean.mean           | input_0             | qint8         | 0.0477158 | -6.1076260   | 5.7258992     | 0.0070092    | 0.6587798        | torch.Size([2, 512, 512])        |
| 2011    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.input_mean.mean           | output              | qint16        | 0.0000026 | -0.0467840   | 0.0697086     | 0.0070093    | 0.0002892        | torch.Size([2, 512, 1])          |
| 2012    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.25.pre_norm.sub                       | input_0             | qint8         | 0.0477158 | -6.1076260   | 5.7258992     | 0.0070092    | 0.6587798        | torch.Size([2, 512, 512])        |
| 2012    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.25.pre_norm.sub                       | input_1             | qint16        | 0.0000026 | -0.0467840   | 0.0697086     | 0.0070093    | 0.0002892        | torch.Size([2, 512, 1])          |
| 2012    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.25.pre_norm.sub                       | output              | qint16        | 0.0002096 | -6.1692433   | 5.7622085     | -0.0000003   | 0.6584894        | torch.Size([2, 512, 512])        |
| 2013    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.mul                       | input_0             | qint16        | 0.0002096 | -6.1692433   | 5.7622085     | -0.0000003   | 0.6584894        | torch.Size([2, 512, 512])        |
| 2013    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.mul                       | input_1             | qint16        | 0.0002096 | -6.1692433   | 5.7622085     | -0.0000003   | 0.6584894        | torch.Size([2, 512, 512])        |
| 2013    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.mul                       | output              | qint16        | 0.0014419 | 0.0000000    | 38.0594940    | 0.6584795    | 6.8067250        | torch.Size([2, 512, 512])        |
| 2014    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.var_mean.mean             | input_0             | qint16        | 0.0014419 | 0.0000000    | 38.0594940    | 0.6584795    | 6.8067250        | torch.Size([2, 512, 512])        |
| 2014    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.var_mean.mean             | output              | qint16        | 0.0000482 | 0.3684628    | 1.5778126     | 0.6551232    | 0.0489546        | torch.Size([2, 512, 1])          |
| 2015    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.25.pre_norm.rsqrt                     | input               | qint16        | 0.0000482 | 0.3684628    | 1.5778126     | 0.6551232    | 0.0489546        | torch.Size([2, 512, 1])          |
| 2015    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.25.pre_norm.rsqrt                     | output              | qint16        | 0.0000498 | 0.7961277    | 1.6324604     | 1.2848413    | 0.0424105        | torch.Size([2, 512, 1])          |
| 2016    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.out_mul                   | input_0             | qint16        | 0.0002096 | -6.1692433   | 5.7622085     | -0.0000003   | 0.6584894        | torch.Size([2, 512, 512])        |
| 2016    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.out_mul                   | input_1             | qint16        | 0.0000498 | 0.7961277    | 1.6324604     | 1.2848413    | 0.0424105        | torch.Size([2, 512, 1])          |
| 2016    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.out_mul                   | output              | qint16        | 0.0002919 | -9.5650911   | 8.1516953     | -0.0000023   | 1.0020869        | torch.Size([2, 512, 512])        |
| 2017    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.25.pre_norm.weight_quant              | input               | torch.float32 |           | 0.6055309    | 1.5414252     | 1.0381298    | 0.0553940        | torch.Size([512])                |
| 2017    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.25.pre_norm.weight_quant              | output              | qint16        | 0.0000470 | 0.6055154    | 1.5414016     | 1.0381287    | 0.0553939        | torch.Size([512])                |
| 2018    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.weight_mul                | input_0             | qint16        | 0.0002919 | -9.5650911   | 8.1516953     | -0.0000023   | 1.0020869        | torch.Size([2, 512, 512])        |
| 2018    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.weight_mul                | input_1             | qint16        | 0.0000470 | 0.6055154    | 1.5414016     | 1.0381287    | 0.0553939        | torch.Size([512])                |
| 2018    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.weight_mul                | output              | qint16        | 0.0001779 | -5.8298340   | 5.7638283     | 0.0056139    | 0.7102420        | torch.Size([2, 512, 512])        |
| 2019    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.25.pre_norm.bias_quant                | input               | torch.float32 |           | -0.1894612   | 0.2801258     | -0.0025418   | 0.0019453        | torch.Size([512])                |
| 2019    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.25.pre_norm.bias_quant                | output              | qint16        | 0.0000085 | -0.1894605   | 0.2801215     | -0.0025419   | 0.0019453        | torch.Size([512])                |
| 2020    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.25.pre_norm.bias_add                  | input_0             | qint16        | 0.0001779 | -5.8298340   | 5.7638283     | 0.0056139    | 0.7102420        | torch.Size([2, 512, 512])        |
| 2020    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.25.pre_norm.bias_add                  | input_1             | qint16        | 0.0000085 | -0.1894605   | 0.2801215     | -0.0025419   | 0.0019453        | torch.Size([512])                |
| 2020    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.25.pre_norm.bias_add                  | output              | qint8         | 0.0390223 | -4.9948592   | 4.9558368     | 0.0029361    | 0.6845329        | torch.Size([2, 512, 512])        |
| 2021    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.0.0                         | input               | qint8         | 0.0390223 | -4.9948592   | 4.9558368     | 0.0029361    | 0.6845329        | torch.Size([2, 512, 512])        |
| 2021    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.0.0                         | weight              | torch.float32 |           | -0.4742253   | 0.4443682     | -0.0002076   | 0.0065301        | torch.Size([1024, 512])          |
| 2021    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.0.0                         | bias                | torch.float32 |           | -0.1643867   | 0.0346350     | -0.0589554   | 0.0010728        | torch.Size([1024])               |
| 2021    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.0.0                         | output              | torch.float32 |           | -16.4231796  | 10.6462221    | -3.0797815   | 6.5674176        | torch.Size([2, 512, 1024])       |
| 2022    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.25.activate                           | input               | torch.float32 |           | -16.4231796  | 10.6462221    | -3.0797815   | 6.5674176        | torch.Size([2, 512, 1024])       |
| 2022    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.25.activate                           | output              | qint8         | 0.0913206 | 0.0000000    | 10.6845102    | 0.1564392    | 0.4019059        | torch.Size([2, 512, 1024])       |
| 2023    | torch.nn.modules.dropout.Dropout                                            | head.layers.25.layers.0.2                         | input               | qint8         | 0.0913206 | 0.0000000    | 10.6845102    | 0.1564392    | 0.4019059        | torch.Size([2, 512, 1024])       |
| 2023    | torch.nn.modules.dropout.Dropout                                            | head.layers.25.layers.0.2                         | output              | qint8         | 0.0913206 | 0.0000000    | 10.6845102    | 0.1564392    | 0.4019059        | torch.Size([2, 512, 1024])       |
| 2024    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.1                           | input               | qint8         | 0.0913206 | 0.0000000    | 10.6845102    | 0.1564392    | 0.4019059        | torch.Size([2, 512, 1024])       |
| 2024    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.1                           | weight              | torch.float32 |           | -0.4354753   | 0.4189465     | 0.0000335    | 0.0068930        | torch.Size([256, 1024])          |
| 2024    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.1                           | bias                | torch.float32 |           | -0.0726037   | 0.0860158     | -0.0004128   | 0.0009016        | torch.Size([256])                |
| 2024    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.layers.1                           | output              | qint8         | 0.1834919 | -17.2482414  | 15.0463381    | 0.0003731    | 7.5267949        | torch.Size([2, 512, 256])        |
| 2025    | torch.nn.modules.dropout.Dropout                                            | head.layers.25.layers.2                           | input               | qint8         | 0.1834919 | -17.2482414  | 15.0463381    | 0.0003731    | 7.5267949        | torch.Size([2, 512, 256])        |
| 2025    | torch.nn.modules.dropout.Dropout                                            | head.layers.25.layers.2                           | output              | qint8         | 0.1834919 | -17.2482414  | 15.0463381    | 0.0003731    | 7.5267949        | torch.Size([2, 512, 256])        |
| 2026    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.identity_fc                        | input               | qint8         | 0.0390223 | -4.9948592   | 4.9558368     | 0.0029361    | 0.6845329        | torch.Size([2, 512, 512])        |
| 2026    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.identity_fc                        | weight              | torch.float32 |           | -0.3958582   | 0.4033061     | 0.0002529    | 0.0075558        | torch.Size([256, 512])           |
| 2026    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.identity_fc                        | bias                | torch.float32 |           | -0.0905164   | 0.0738403     | -0.0010515   | 0.0010065        | torch.Size([256])                |
| 2026    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.25.identity_fc                        | output              | torch.float32 |           | -14.9043179  | 10.1933517    | -0.0266680   | 7.8802314        | torch.Size([2, 512, 256])        |
| 2027    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.25.short_add                          | input_0             | torch.float32 |           | -14.9043179  | 10.1933517    | -0.0266680   | 7.8802314        | torch.Size([2, 512, 256])        |
| 2027    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.25.short_add                          | input_1             | qint8         | 0.1834919 | -17.2482414  | 15.0463381    | 0.0003731    | 7.5267949        | torch.Size([2, 512, 256])        |
| 2027    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.25.short_add                          | output              | qint8         | 0.2049546 | -19.2657299  | 21.7251835    | -0.0255615   | 20.4136200       | torch.Size([2, 512, 256])        |
| 2028    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.26.input_mean.mean                    | input_0             | qint8         | 0.2049546 | -19.2657299  | 21.7251835    | -0.0255615   | 20.4136200       | torch.Size([2, 512, 256])        |
| 2028    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.26.input_mean.mean                    | output              | qint16        | 0.0000073 | -0.1184910   | 0.1321029     | -0.0255615   | 0.0034706        | torch.Size([2, 512, 1])          |
| 2029    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.26.sub                                | input_0             | qint8         | 0.2049546 | -19.2657299  | 21.7251835    | -0.0255615   | 20.4136200       | torch.Size([2, 512, 256])        |
| 2029    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.26.sub                                | input_1             | qint16        | 0.0000073 | -0.1184910   | 0.1321029     | -0.0255615   | 0.0034706        | torch.Size([2, 512, 1])          |
| 2029    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.26.sub                                | output              | qint16        | 0.0010802 | -19.2737961  | 21.7895546    | 0.0000030    | 20.4102154       | torch.Size([2, 512, 256])        |
| 2030    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.mul                                | input_0             | qint16        | 0.0010802 | -19.2737961  | 21.7895546    | 0.0000030    | 20.4102154       | torch.Size([2, 512, 256])        |
| 2030    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.mul                                | input_1             | qint16        | 0.0010802 | -19.2737961  | 21.7895546    | 0.0000030    | 20.4102154       | torch.Size([2, 512, 256])        |
| 2030    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.mul                                | output              | qint16        | 0.0383224 | 0.0000000    | 474.7763062   | 20.4101543   | 1332.5806885     | torch.Size([2, 512, 256])        |
| 2031    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.26.var_mean.mean                      | input_0             | qint16        | 0.0383224 | 0.0000000    | 474.7763062   | 20.4101543   | 1332.5806885     | torch.Size([2, 512, 256])        |
| 2031    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.26.var_mean.mean                      | output              | qint16        | 0.0034055 | 4.5736156    | 50.6639442    | 20.4102592   | 259.7369080      | torch.Size([2, 512, 1])          |
| 2032    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.26.rsqrt                              | input               | qint16        | 0.0034055 | 4.5736156    | 50.6639442    | 20.4102592   | 259.7369080      | torch.Size([2, 512, 1])          |
| 2032    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.26.rsqrt                              | output              | qint16        | 0.0000133 | 0.1404974    | 0.4362850     | 0.2710727    | 0.0078423        | torch.Size([2, 512, 1])          |
| 2033    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.out_mul                            | input_0             | qint16        | 0.0010802 | -19.2737961  | 21.7895546    | 0.0000030    | 20.4102154       | torch.Size([2, 512, 256])        |
| 2033    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.out_mul                            | input_1             | qint16        | 0.0000133 | 0.1404974    | 0.4362850     | 0.2710727    | 0.0078423        | torch.Size([2, 512, 1])          |
| 2033    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.out_mul                            | output              | qint16        | 0.0001502 | -4.8603225   | 3.6916523     | 0.0000005    | 0.9993892        | torch.Size([2, 512, 256])        |
| 2034    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.26.weight_quant                       | input               | torch.float32 |           | 0.7192894    | 1.0790963     | 0.9198906    | 0.0035739        | torch.Size([256])                |
| 2034    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.26.weight_quant                       | output              | qint16        | 0.0000329 | 0.7192987    | 1.0790799     | 0.9198903    | 0.0035740        | torch.Size([256])                |
| 2035    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.weight_mul                         | input_0             | qint16        | 0.0001502 | -4.8603225   | 3.6916523     | 0.0000005    | 0.9993892        | torch.Size([2, 512, 256])        |
| 2035    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.weight_mul                         | input_1             | qint16        | 0.0000329 | 0.7192987    | 1.0790799     | 0.9198903    | 0.0035740        | torch.Size([256])                |
| 2035    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.26.weight_mul                         | output              | qint16        | 0.0001437 | -4.7077036   | 3.5103769     | 0.0004617    | 0.8479880        | torch.Size([2, 512, 256])        |
| 2036    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.26.bias_quant                         | input               | torch.float32 |           | -0.0724428   | 0.1072301     | 0.0025768    | 0.0008598        | torch.Size([256])                |
| 2036    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.26.bias_quant                         | output              | qint16        | 0.0000033 | -0.0724423   | 0.1072284     | 0.0025768    | 0.0008598        | torch.Size([256])                |
| 2037    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.26.bias_add                           | input_0             | qint16        | 0.0001437 | -4.7077036   | 3.5103769     | 0.0004617    | 0.8479880        | torch.Size([2, 512, 256])        |
| 2037    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.26.bias_add                           | input_1             | qint16        | 0.0000033 | -0.0724423   | 0.1072284     | 0.0025768    | 0.0008598        | torch.Size([256])                |
| 2037    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.26.bias_add                           | output              | qint8         | 0.0285185 | -3.6503737   | 3.5077810     | 0.0031838    | 0.8414007        | torch.Size([2, 512, 256])        |
| 2038    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.add1                               | input_0             | qint8         | 0.0285185 | -3.6503737   | 3.5077810     | 0.0031838    | 0.8414007        | torch.Size([2, 512, 256])        |
| 2038    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.add1                               | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0568060    | 0.8548859        | torch.Size([2, 512, 256])        |
| 2038    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.add1                               | output              | qint8         | 0.0648260 | -4.1488614   | 8.2328968     | 0.0599128    | 1.4936931        | torch.Size([2, 512, 256])        |
| 2039    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.0                           | input               | qint8         | 0.0648260 | -4.1488614   | 8.2328968     | 0.0599128    | 1.4936931        | torch.Size([2, 512, 256])        |
| 2039    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.0                           | weight              | torch.float32 |           | -0.5372956   | 0.5631919     | 0.0002626    | 0.0057614        | torch.Size([256, 256])           |
| 2039    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.0                           | bias                | torch.float32 |           | -0.2096737   | 0.1025517     | -0.0419072   | 0.0025007        | torch.Size([256])                |
| 2039    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.0                           | output              | torch.float32 |           | -11.3530893  | 10.8383980    | -1.0283513   | 5.5716910        | torch.Size([2, 512, 256])        |
| 2040    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.1                           | input               | torch.float32 |           | -11.3530893  | 10.8383980    | -1.0283513   | 5.5716910        | torch.Size([2, 512, 256])        |
| 2040    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.1                           | output              | qint8         | 0.0667878 | 0.0000000    | 8.4820509     | 0.5160896    | 1.0524825        | torch.Size([2, 512, 256])        |
| 2041    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.2                           | input               | qint8         | 0.0667878 | 0.0000000    | 8.4820509     | 0.5160896    | 1.0524825        | torch.Size([2, 512, 256])        |
| 2041    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.2                           | weight              | torch.float32 |           | -0.6903200   | 0.4113263     | -0.0078947   | 0.0061080        | torch.Size([256, 256])           |
| 2041    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.2                           | bias                | torch.float32 |           | -0.1265819   | 0.1779750     | -0.0111210   | 0.0030116        | torch.Size([256])                |
| 2041    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.2                           | output              | torch.float32 |           | -11.9195547  | 7.3719306     | -1.0388592   | 5.1507092        | torch.Size([2, 512, 256])        |
| 2042    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.3                           | input               | torch.float32 |           | -11.9195547  | 7.3719306     | -1.0388592   | 5.1507092        | torch.Size([2, 512, 256])        |
| 2042    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.3                           | output              | qint8         | 0.0597887 | 0.0000000    | 7.3540096     | 0.4137855    | 0.6698072        | torch.Size([2, 512, 256])        |
| 2043    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.input_mean.mean           | input_0             | qint8         | 0.0597887 | 0.0000000    | 7.3540096     | 0.4137855    | 0.6698072        | torch.Size([2, 512, 256])        |
| 2043    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.input_mean.mean           | output              | qint16        | 0.0000254 | 0.2087947    | 0.6899082     | 0.4137857    | 0.0109592        | torch.Size([2, 512, 1])          |
| 2044    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.27.layers.4.sub                       | input_0             | qint8         | 0.0597887 | 0.0000000    | 7.3540096     | 0.4137855    | 0.6698072        | torch.Size([2, 512, 256])        |
| 2044    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.27.layers.4.sub                       | input_1             | qint16        | 0.0000254 | 0.2087947    | 0.6899082     | 0.4137857    | 0.0109592        | torch.Size([2, 512, 1])          |
| 2044    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.27.layers.4.sub                       | output              | qint16        | 0.0002655 | -0.6900253   | 6.8312240     | 0.0000110    | 0.6588498        | torch.Size([2, 512, 256])        |
| 2045    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.mul                       | input_0             | qint16        | 0.0002655 | -0.6900253   | 6.8312240     | 0.0000110    | 0.6588498        | torch.Size([2, 512, 256])        |
| 2045    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.mul                       | input_1             | qint16        | 0.0002655 | -0.6900253   | 6.8312240     | 0.0000110    | 0.6588498        | torch.Size([2, 512, 256])        |
| 2045    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.mul                       | output              | qint16        | 0.0023107 | 0.0000000    | 46.6659088    | 0.6588502    | 3.5709894        | torch.Size([2, 512, 256])        |
| 2046    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.var_mean.mean             | input_0             | qint16        | 0.0023107 | 0.0000000    | 46.6659088    | 0.6588502    | 3.5709894        | torch.Size([2, 512, 256])        |
| 2046    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.var_mean.mean             | output              | qint16        | 0.0000721 | 0.2049739    | 1.4103041     | 0.6588559    | 0.0960146        | torch.Size([2, 512, 1])          |
| 2047    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.27.layers.4.rsqrt                     | input               | qint16        | 0.0000721 | 0.2049739    | 1.4103041     | 0.6588559    | 0.0960146        | torch.Size([2, 512, 1])          |
| 2047    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.27.layers.4.rsqrt                     | output              | qint16        | 0.0001103 | 0.8420185    | 2.2087126     | 1.3320715    | 0.0911004        | torch.Size([2, 512, 1])          |
| 2048    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.out_mul                   | input_0             | qint16        | 0.0002655 | -0.6900253   | 6.8312240     | 0.0000110    | 0.6588498        | torch.Size([2, 512, 256])        |
| 2048    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.out_mul                   | input_1             | qint16        | 0.0001103 | 0.8420185    | 2.2087126     | 1.3320715    | 0.0911004        | torch.Size([2, 512, 1])          |
| 2048    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.out_mul                   | output              | qint16        | 0.0002633 | -0.6262248   | 7.5937004     | 0.0000108    | 1.0000315        | torch.Size([2, 512, 256])        |
| 2049    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.4.weight_quant              | input               | torch.float32 |           | 0.6927252    | 1.1722289     | 0.9681799    | 0.0055170        | torch.Size([256])                |
| 2049    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.4.weight_quant              | output              | qint16        | 0.0000358 | 0.6927303    | 1.1722111     | 0.9681793    | 0.0055169        | torch.Size([256])                |
| 2050    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.weight_mul                | input_0             | qint16        | 0.0002633 | -0.6262248   | 7.5937004     | 0.0000108    | 1.0000315        | torch.Size([2, 512, 256])        |
| 2050    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.weight_mul                | input_1             | qint16        | 0.0000358 | 0.6927303    | 1.1722111     | 0.9681793    | 0.0055169        | torch.Size([256])                |
| 2050    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.weight_mul                | output              | qint16        | 0.0002703 | -0.7339497   | 8.2318602     | 0.0061168    | 0.9704615        | torch.Size([2, 512, 256])        |
| 2051    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.4.bias_quant                | input               | torch.float32 |           | -0.1199606   | 0.2986090     | 0.0510115    | 0.0063596        | torch.Size([256])                |
| 2051    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.4.bias_quant                | output              | qint16        | 0.0000091 | -0.1199630   | 0.2986044     | 0.0510115    | 0.0063596        | torch.Size([256])                |
| 2052    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.layers.4.bias_add                  | input_0             | qint16        | 0.0002703 | -0.7339497   | 8.2318602     | 0.0061168    | 0.9704615        | torch.Size([2, 512, 256])        |
| 2052    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.layers.4.bias_add                  | input_1             | qint16        | 0.0000091 | -0.1199630   | 0.2986044     | 0.0510115    | 0.0063596        | torch.Size([256])                |
| 2052    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.layers.4.bias_add                  | output              | qint8         | 0.0588596 | -0.7651754   | 7.4751749     | 0.0570760    | 0.9301060        | torch.Size([2, 512, 256])        |
| 2053    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.5                           | input               | qint8         | 0.0588596 | -0.7651754   | 7.4751749     | 0.0570760    | 0.9301060        | torch.Size([2, 512, 256])        |
| 2053    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.5                           | weight              | torch.float32 |           | -0.4725817   | 0.4318931     | 0.0043037    | 0.0048818        | torch.Size([256, 256])           |
| 2053    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.5                           | bias                | torch.float32 |           | -0.1813288   | 0.0764300     | -0.0312060   | 0.0026632        | torch.Size([256])                |
| 2053    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.5                           | output              | torch.float32 |           | -8.0052032   | 10.4905748    | -0.9437712   | 3.8948185        | torch.Size([2, 512, 256])        |
| 2054    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.6                           | input               | torch.float32 |           | -8.0052032   | 10.4905748    | -0.9437712   | 3.8948185        | torch.Size([2, 512, 256])        |
| 2054    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.6                           | output              | qint8         | 0.0809315 | 0.0000000    | 10.2783012    | 0.4233187    | 0.8251505        | torch.Size([2, 512, 256])        |
| 2055    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.7                           | input               | qint8         | 0.0809315 | 0.0000000    | 10.2783012    | 0.4233187    | 0.8251505        | torch.Size([2, 512, 256])        |
| 2055    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.7                           | weight              | torch.float32 |           | -0.3544154   | 0.5146543     | -0.0073491   | 0.0036408        | torch.Size([256, 256])           |
| 2055    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.7                           | bias                | torch.float32 |           | -0.1227437   | 0.2899182     | -0.0230045   | 0.0021475        | torch.Size([256])                |
| 2055    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.7                           | output              | torch.float32 |           | -12.1811848  | 41.0031166    | -1.8232585   | 6.6178460        | torch.Size([2, 512, 256])        |
| 2056    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.8                           | input               | torch.float32 |           | -12.1811848  | 41.0031166    | -1.8232585   | 6.6178460        | torch.Size([2, 512, 256])        |
| 2056    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.27.layers.8                           | output              | qint8         | 0.3129860 | 0.0000000    | 39.7492218    | 0.3245326    | 2.3990240        | torch.Size([2, 512, 256])        |
| 2057    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.input_mean.mean           | input_0             | qint8         | 0.3129860 | 0.0000000    | 39.7492218    | 0.3245326    | 2.3990240        | torch.Size([2, 512, 256])        |
| 2057    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.input_mean.mean           | output              | qint16        | 0.0000348 | 0.1613925    | 1.1416987     | 0.3244459    | 0.0158558        | torch.Size([2, 512, 1])          |
| 2058    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.27.layers.9.sub                       | input_0             | qint8         | 0.3129860 | 0.0000000    | 39.7492218    | 0.3245326    | 2.3990240        | torch.Size([2, 512, 256])        |
| 2058    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.27.layers.9.sub                       | input_1             | qint16        | 0.0000348 | 0.1613925    | 1.1416987     | 0.3244459    | 0.0158558        | torch.Size([2, 512, 1])          |
| 2058    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.27.layers.9.sub                       | output              | qint16        | 0.0012917 | -1.1418639   | 39.4588928    | 0.0000838    | 2.3830526        | torch.Size([2, 512, 256])        |
| 2059    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.mul                       | input_0             | qint16        | 0.0012917 | -1.1418639   | 39.4588928    | 0.0000838    | 2.3830526        | torch.Size([2, 512, 256])        |
| 2059    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.mul                       | input_1             | qint16        | 0.0012917 | -1.1418639   | 39.4588928    | 0.0000838    | 2.3830526        | torch.Size([2, 512, 256])        |
| 2059    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.mul                       | output              | qint16        | 0.0547161 | 0.0000000    | 1557.0009766  | 2.3812795    | 1199.5019531     | torch.Size([2, 512, 256])        |
| 2060    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.var_mean.mean             | input_0             | qint16        | 0.0547161 | 0.0000000    | 1557.0009766  | 2.3812795    | 1199.5019531     | torch.Size([2, 512, 256])        |
| 2060    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.var_mean.mean             | output              | qint16        | 0.0002526 | 0.3627137    | 6.6546345     | 2.3812957    | 1.8174378        | torch.Size([2, 512, 1])          |
| 2061    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.27.layers.9.rsqrt                     | input               | qint16        | 0.0002526 | 0.3627137    | 6.6546345     | 2.3812957    | 1.8174378        | torch.Size([2, 512, 1])          |
| 2061    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.27.layers.9.rsqrt                     | output              | qint16        | 0.0000514 | 0.3876498    | 1.6603879     | 0.7237115    | 0.0369198        | torch.Size([2, 512, 1])          |
| 2062    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.out_mul                   | input_0             | qint16        | 0.0012917 | -1.1418639   | 39.4588928    | 0.0000838    | 2.3830526        | torch.Size([2, 512, 256])        |
| 2062    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.out_mul                   | input_1             | qint16        | 0.0000514 | 0.3876498    | 1.6603879     | 0.7237115    | 0.0369198        | torch.Size([2, 512, 1])          |
| 2062    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.out_mul                   | output              | qint16        | 0.0004841 | -0.4546096   | 15.7907839    | 0.0000029    | 1.0008590        | torch.Size([2, 512, 256])        |
| 2063    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.9.weight_quant              | input               | torch.float32 |           | 0.6694351    | 1.1924911     | 0.9463960    | 0.0051841        | torch.Size([256])                |
| 2063    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.9.weight_quant              | output              | qint16        | 0.0000364 | 0.6694400    | 1.1924729     | 0.9463953    | 0.0051841        | torch.Size([256])                |
| 2064    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.weight_mul                | input_0             | qint16        | 0.0004841 | -0.4546096   | 15.7907839    | 0.0000029    | 1.0008590        | torch.Size([2, 512, 256])        |
| 2064    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.weight_mul                | input_1             | qint16        | 0.0000364 | 0.6694400    | 1.1924729     | 0.9463953    | 0.0051841        | torch.Size([256])                |
| 2064    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.weight_mul                | output              | qint16        | 0.0003241 | -0.5422224   | 10.5709047    | -0.0136770   | 0.5991781        | torch.Size([2, 512, 256])        |
| 2065    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.9.bias_quant                | input               | torch.float32 |           | -0.3060245   | 0.0903289     | 0.0524159    | 0.0020540        | torch.Size([256])                |
| 2065    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.9.bias_quant                | output              | qint16        | 0.0000093 | -0.3060291   | 0.0903294     | 0.0524162    | 0.0020540        | torch.Size([256])                |
| 2066    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.layers.9.bias_add                  | input_0             | qint16        | 0.0003241 | -0.5422224   | 10.5709047    | -0.0136770   | 0.5991781        | torch.Size([2, 512, 256])        |
| 2066    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.layers.9.bias_add                  | input_1             | qint16        | 0.0000093 | -0.3060291   | 0.0903294     | 0.0524162    | 0.0020540        | torch.Size([256])                |
| 2066    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.layers.9.bias_add                  | output              | qint8         | 0.0800850 | -0.6406803   | 10.1707993    | 0.0388819    | 0.5604801        | torch.Size([2, 512, 256])        |
| 2067    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.10                          | input               | qint8         | 0.0800850 | -0.6406803   | 10.1707993    | 0.0388819    | 0.5604801        | torch.Size([2, 512, 256])        |
| 2067    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.10                          | weight              | torch.float32 |           | -0.3738195   | 0.3876365     | -0.0004279   | 0.0034504        | torch.Size([11, 256])            |
| 2067    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.10                          | bias                | torch.float32 |           | -0.0642515   | 0.0481880     | -0.0072857   | 0.0011733        | torch.Size([11])                 |
| 2067    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.27.layers.10                          | output              | qint16        | 0.0004770 | -13.7135134  | 11.9937792    | -0.0672176   | 1.9183995        | torch.Size([2, 512, 11])         |
| 2068    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.11.scale_quant_stub         | input               | torch.float32 |           | 0.1060089    | 0.8700237     | 0.3596667    | 0.0556504        | torch.Size([11])                 |
| 2068    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.27.layers.11.scale_quant_stub         | output              | qint16        | 0.0000266 | 0.1060198    | 0.8700105     | 0.3596680    | 0.0556480        | torch.Size([11])                 |
| 2069    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.11.mul                      | input_0             | qint16        | 0.0004770 | -13.7135134  | 11.9937792    | -0.0672176   | 1.9183995        | torch.Size([2, 512, 11])         |
| 2069    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.11.mul                      | input_1             | qint16        | 0.0000266 | 0.1060198    | 0.8700105     | 0.3596680    | 0.0556480        | torch.Size([11])                 |
| 2069    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.27.layers.11.mul                      | output              | qint16        | 0.0003746 | -11.9307728  | 7.2737746     | 0.0080421    | 0.5722189        | torch.Size([2, 512, 11])         |
| 2070    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.add2                               | input_0             | qint16        | 0.0003746 | -11.9307728  | 7.2737746     | 0.0080421    | 0.5722189        | torch.Size([2, 512, 11])         |
| 2070    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.add2                               | input_1             | qint16        | 0.0017895 | -53.6043777  | 53.3932190    | 0.1957742    | 75.6031265       | torch.Size([2, 512, 11])         |
| 2070    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.27.add2                               | output              | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2071    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(3)                                   | input               | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2071    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(3)                                   | output              | torch.float32 |           | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2072    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2072    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.8399352    | 277.1991577      | torch.Size([2, 512, 3])          |
| 2073    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(5)                   | input               | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.8399352    | 277.1991577      | torch.Size([2, 512, 3])          |
| 2073    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(5)                   | weight              | torch.float32 |           | -0.9216561   | 0.9167990     | -0.0046354   | 0.1373587        | torch.Size([128, 3])             |
| 2073    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(5)                   | bias                | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3650480        | torch.Size([128])                |
| 2073    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(5)                   | output              | torch.float32 |           | -33.0704269  | 34.8847580    | -0.1245608   | 67.7217026       | torch.Size([2, 512, 128])        |
| 2074    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(5)                   | input               | torch.float32 |           | -33.0704269  | 34.8847580    | -0.1245608   | 67.7217026       | torch.Size([2, 512, 128])        |
| 2074    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(5)                   | output              | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8146653    | 24.7988911       | torch.Size([2, 512, 128])        |
| 2075    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(5)   | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8146653    | 24.7988911       | torch.Size([2, 512, 128])        |
| 2075    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(5)   | output              | qint16        | 0.0002498 | 0.2692640    | 7.2706265     | 2.8146801    | 3.8786955        | torch.Size([2, 512, 1])          |
| 2076    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(5)               | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8146653    | 24.7988911       | torch.Size([2, 512, 128])        |
| 2076    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(5)               | input_1             | qint16        | 0.0002498 | 0.2692640    | 7.2706265     | 2.8146801    | 3.8786955        | torch.Size([2, 512, 1])          |
| 2076    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(5)               | output              | qint16        | 0.0008924 | -7.2705998   | 27.4769230    | 0.0000049    | 20.9238777       | torch.Size([2, 512, 128])        |
| 2077    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(5)               | input_0             | qint16        | 0.0008924 | -7.2705998   | 27.4769230    | 0.0000049    | 20.9238777       | torch.Size([2, 512, 128])        |
| 2077    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(5)               | input_1             | qint16        | 0.0008924 | -7.2705998   | 27.4769230    | 0.0000049    | 20.9238777       | torch.Size([2, 512, 128])        |
| 2077    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(5)               | output              | qint16        | 0.0261809 | 0.0000000    | 754.9771729   | 20.9236450   | 2433.6140137     | torch.Size([2, 512, 128])        |
| 2078    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(5)     | input_0             | qint16        | 0.0261809 | 0.0000000    | 754.9771729   | 20.9236450   | 2433.6140137     | torch.Size([2, 512, 128])        |
| 2078    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(5)     | output              | qint16        | 0.0029473 | 0.1886276    | 76.3205032    | 20.9237461   | 440.4988403      | torch.Size([2, 512, 1])          |
| 2079    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(5)             | input               | qint16        | 0.0029473 | 0.1886276    | 76.3205032    | 20.9237461   | 440.4988403      | torch.Size([2, 512, 1])          |
| 2079    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(5)             | output              | qint16        | 0.0000538 | 0.1144402    | 1.7621539     | 0.6435223    | 0.4517395        | torch.Size([2, 512, 1])          |
| 2080    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(5)           | input_0             | qint16        | 0.0008924 | -7.2705998   | 27.4769230    | 0.0000049    | 20.9238777       | torch.Size([2, 512, 128])        |
| 2080    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(5)           | input_1             | qint16        | 0.0000538 | 0.1144402    | 1.7621539     | 0.6435223    | 0.4517395        | torch.Size([2, 512, 1])          |
| 2080    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(5)           | output              | qint16        | 0.0001192 | -0.8837299   | 3.9062698     | 0.0000046    | 0.8977377        | torch.Size([2, 512, 128])        |
| 2081    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(5)      | input               | torch.float32 |           | 0.7278287    | 1.3287159     | 0.9627235    | 0.0086877        | torch.Size([128])                |
| 2081    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(5)      | output              | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 2082    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(5)        | input_0             | qint16        | 0.0001192 | -0.8837299   | 3.9062698     | 0.0000046    | 0.8977377        | torch.Size([2, 512, 128])        |
| 2082    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(5)        | input_1             | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 2082    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(5)        | output              | qint16        | 0.0001208 | -1.0485834   | 3.8095391     | -0.0024070   | 0.8350545        | torch.Size([2, 512, 128])        |
| 2083    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(5)        | input               | torch.float32 |           | -0.0562531   | 0.0804052     | 0.0088204    | 0.0005294        | torch.Size([128])                |
| 2083    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(5)        | output              | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 2084    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(5)          | input_0             | qint16        | 0.0001208 | -1.0485834   | 3.8095391     | -0.0024070   | 0.8350545        | torch.Size([2, 512, 128])        |
| 2084    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(5)          | input_1             | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 2084    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(5)          | output              | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0063076    | 0.8296071        | torch.Size([2, 512, 128])        |
| 2085    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(5)                   | input               | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0063076    | 0.8296071        | torch.Size([2, 512, 128])        |
| 2085    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(5)                   | weight              | torch.float32 |           | -0.3750711   | 0.3968706     | 0.0019093    | 0.0048458        | torch.Size([128, 128])           |
| 2085    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(5)                   | bias                | torch.float32 |           | -0.1863807   | 0.1385574     | -0.0156467   | 0.0047256        | torch.Size([128])                |
| 2085    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(5)                   | output              | torch.float32 |           | -5.4601564   | 6.3892884     | -0.1013955   | 2.0435767        | torch.Size([2, 512, 128])        |
| 2086    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(5)                   | input               | torch.float32 |           | -5.4601564   | 6.3892884     | -0.1013955   | 2.0435767        | torch.Size([2, 512, 128])        |
| 2086    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(5)                   | output              | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5137753    | 0.7009904        | torch.Size([2, 512, 128])        |
| 2087    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(5)   | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5137753    | 0.7009904        | torch.Size([2, 512, 128])        |
| 2087    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(5)   | output              | qint16        | 0.0000298 | 0.2877294    | 0.8862258     | 0.5137755    | 0.0369272        | torch.Size([2, 512, 1])          |
| 2088    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(5)               | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5137753    | 0.7009904        | torch.Size([2, 512, 128])        |
| 2088    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(5)               | input_1             | qint16        | 0.0000298 | 0.2877294    | 0.8862258     | 0.5137755    | 0.0369272        | torch.Size([2, 512, 1])          |
| 2088    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(5)               | output              | qint16        | 0.0001641 | -0.8862470   | 5.0713024     | -0.0000065   | 0.6641101        | torch.Size([2, 512, 128])        |
| 2089    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(5)               | input_0             | qint16        | 0.0001641 | -0.8862470   | 5.0713024     | -0.0000065   | 0.6641101        | torch.Size([2, 512, 128])        |
| 2089    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(5)               | input_1             | qint16        | 0.0001641 | -0.8862470   | 5.0713024     | -0.0000065   | 0.6641101        | torch.Size([2, 512, 128])        |
| 2089    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(5)               | output              | qint16        | 0.0008856 | 0.0000000    | 25.7179813    | 0.6640999    | 3.0442369        | torch.Size([2, 512, 128])        |
| 2090    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(5)     | input_0             | qint16        | 0.0008856 | 0.0000000    | 25.7179813    | 0.6640999    | 3.0442369        | torch.Size([2, 512, 128])        |
| 2090    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(5)     | output              | qint16        | 0.0000499 | 0.3040115    | 1.4275212     | 0.6641009    | 0.0955870        | torch.Size([2, 512, 1])          |
| 2091    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(5)             | input               | qint16        | 0.0000499 | 0.3040115    | 1.4275212     | 0.6641009    | 0.0955870        | torch.Size([2, 512, 1])          |
| 2091    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(5)             | output              | qint16        | 0.0000553 | 0.8369617    | 1.8121266     | 1.3206966    | 0.0764855        | torch.Size([2, 512, 1])          |
| 2092    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(5)           | input_0             | qint16        | 0.0001641 | -0.8862470   | 5.0713024     | -0.0000065   | 0.6641101        | torch.Size([2, 512, 128])        |
| 2092    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(5)           | input_1             | qint16        | 0.0000553 | 0.8369617    | 1.8121266     | 1.3206966    | 0.0764855        | torch.Size([2, 512, 1])          |
| 2092    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(5)           | output              | qint16        | 0.0002164 | -0.7792155   | 6.9832859     | -0.0000105   | 1.0000026        | torch.Size([2, 512, 128])        |
| 2093    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(5)      | input               | torch.float32 |           | 0.5925044    | 1.4726304     | 0.9182085    | 0.0175060        | torch.Size([128])                |
| 2093    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(5)      | output              | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 2094    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(5)        | input_0             | qint16        | 0.0002164 | -0.7792155   | 6.9832859     | -0.0000105   | 1.0000026        | torch.Size([2, 512, 128])        |
| 2094    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(5)        | input_1             | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 2094    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(5)        | output              | qint16        | 0.0002127 | -0.9115826   | 6.8617539     | 0.0330866    | 0.9376526        | torch.Size([2, 512, 128])        |
| 2095    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(5)        | input               | torch.float32 |           | -0.0644210   | 0.2426097     | 0.0318023    | 0.0030999        | torch.Size([128])                |
| 2095    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(5)        | output              | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 2096    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(5)          | input_0             | qint16        | 0.0002127 | -0.9115826   | 6.8617539     | 0.0330866    | 0.9376526        | torch.Size([2, 512, 128])        |
| 2096    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(5)          | input_1             | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 2096    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(5)          | output              | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0650968    | 0.9137359        | torch.Size([2, 512, 128])        |
| 2097    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(5)                   | input               | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0650968    | 0.9137359        | torch.Size([2, 512, 128])        |
| 2097    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(5)                   | weight              | torch.float32 |           | -0.7504157   | 0.4182976     | -0.0024651   | 0.0052447        | torch.Size([128, 128])           |
| 2097    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(5)                   | bias                | torch.float32 |           | -0.1397866   | 0.1210779     | 0.0064616    | 0.0040949        | torch.Size([128])                |
| 2097    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(5)                   | output              | torch.float32 |           | -8.1828995   | 6.9167275     | -0.0442811   | 4.1132836        | torch.Size([2, 512, 128])        |
| 2098    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(5)                   | input               | torch.float32 |           | -8.1828995   | 6.9167275     | -0.0442811   | 4.1132836        | torch.Size([2, 512, 128])        |
| 2098    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(5)                   | output              | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.7762265    | 1.3538462        | torch.Size([2, 512, 128])        |
| 2099    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(5)   | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.7762265    | 1.3538462        | torch.Size([2, 512, 128])        |
| 2099    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(5)   | output              | qint16        | 0.0000319 | 0.5492456    | 1.0447656     | 0.7660561    | 0.0273153        | torch.Size([2, 512, 1])          |
| 2100    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(5)               | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.7762265    | 1.3538462        | torch.Size([2, 512, 128])        |
| 2100    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(5)               | input_1             | qint16        | 0.0000319 | 0.5492456    | 1.0447656     | 0.7660561    | 0.0273153        | torch.Size([2, 512, 1])          |
| 2100    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(5)               | output              | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0101651    | 1.3208967        | torch.Size([2, 512, 128])        |
| 2101    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(5)               | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0101651    | 1.3208967        | torch.Size([2, 512, 128])        |
| 2101    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(5)               | input_1             | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0101651    | 1.3208967        | torch.Size([2, 512, 128])        |
| 2101    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(5)               | output              | qint16        | 0.0011151 | 0.0000000    | 31.5160542    | 1.3209623    | 7.3691616        | torch.Size([2, 512, 128])        |
| 2102    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(5)     | input_0             | qint16        | 0.0011151 | 0.0000000    | 31.5160542    | 1.3209623    | 7.3691616        | torch.Size([2, 512, 128])        |
| 2102    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(5)     | output              | qint16        | 0.0000656 | 0.8199427    | 2.1383848     | 1.3209622    | 0.1840288        | torch.Size([2, 512, 1])          |
| 2103    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(5)             | input               | qint16        | 0.0000656 | 0.8199427    | 2.1383848     | 1.3209622    | 0.1840288        | torch.Size([2, 512, 1])          |
| 2103    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(5)             | output              | qint16        | 0.0000338 | 0.6838499    | 1.1043351     | 0.9002239    | 0.0160382        | torch.Size([2, 512, 1])          |
| 2104    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(5)           | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6138892     | 0.0101651    | 1.3208967        | torch.Size([2, 512, 128])        |
| 2104    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(5)           | input_1             | qint16        | 0.0000338 | 0.6838499    | 1.1043351     | 0.9002239    | 0.0160382        | torch.Size([2, 512, 1])          |
| 2104    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(5)           | output              | qint16        | 0.0001537 | -0.7518747   | 4.9738693     | 0.0069482    | 0.9999781        | torch.Size([2, 512, 128])        |
| 2105    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(5)      | input               | torch.float32 |           | 0.7673740    | 1.1249810     | 0.9671495    | 0.0053221        | torch.Size([128])                |
| 2105    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(5)      | output              | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 2106    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(5)        | input_0             | qint16        | 0.0001537 | -0.7518747   | 4.9738693     | 0.0069482    | 0.9999781        | torch.Size([2, 512, 128])        |
| 2106    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(5)        | input_1             | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 2106    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(5)        | output              | qint16        | 0.0001601 | -0.8458350   | 5.1843414     | 0.0216795    | 0.9910248        | torch.Size([2, 512, 128])        |
| 2107    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(5)        | input               | torch.float32 |           | -0.0537279   | 0.1594015     | 0.0216380    | 0.0014148        | torch.Size([128])                |
| 2107    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(5)        | output              | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 2108    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(5)          | input_0             | qint16        | 0.0001601 | -0.8458350   | 5.1843414     | 0.0216795    | 0.9910248        | torch.Size([2, 512, 128])        |
| 2108    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(5)          | input_1             | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 2108    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(5)          | output              | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0433161    | 0.9785486        | torch.Size([2, 512, 128])        |
| 2109    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(5)                   | input               | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0433161    | 0.9785486        | torch.Size([2, 512, 128])        |
| 2109    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(5)                   | weight              | torch.float32 |           | -0.4264432   | 0.3183554     | 0.0005866    | 0.0053991        | torch.Size([128, 128])           |
| 2109    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(5)                   | bias                | torch.float32 |           | -0.1690418   | 0.1536980     | -0.0166056   | 0.0039884        | torch.Size([128])                |
| 2109    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(5)                   | output              | torch.float32 |           | -11.8618450  | 10.1022367    | -0.4229281   | 4.3405466        | torch.Size([2, 512, 128])        |
| 2110    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(5)                  | input               | torch.float32 |           | -11.8618450  | 10.1022367    | -0.4229281   | 4.3405466        | torch.Size([2, 512, 128])        |
| 2110    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(5)                  | output              | qint8         | 0.0826298 | 0.0000000    | 10.0808334    | 0.6198343    | 1.5397854        | torch.Size([2, 512, 128])        |
| 2111    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(5)  | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.0808334    | 0.6198343    | 1.5397854        | torch.Size([2, 512, 128])        |
| 2111    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(5)  | output              | qint16        | 0.0000231 | 0.5254661    | 0.7307645     | 0.6198336    | 0.0019051        | torch.Size([2, 512, 1])          |
| 2112    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(5)              | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.0808334    | 0.6198343    | 1.5397854        | torch.Size([2, 512, 128])        |
| 2112    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(5)              | input_1             | qint16        | 0.0000231 | 0.5254661    | 0.7307645     | 0.6198336    | 0.0019051        | torch.Size([2, 512, 1])          |
| 2112    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(5)              | output              | qint16        | 0.0003154 | -0.7307987   | 9.5398092     | -0.0000045   | 1.5378933        | torch.Size([2, 512, 128])        |
| 2113    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(5)              | input_0             | qint16        | 0.0003154 | -0.7307987   | 9.5398092     | -0.0000045   | 1.5378933        | torch.Size([2, 512, 128])        |
| 2113    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(5)              | input_1             | qint16        | 0.0003154 | -0.7307987   | 9.5398092     | -0.0000045   | 1.5378933        | torch.Size([2, 512, 128])        |
| 2113    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(5)              | output              | qint16        | 0.0032599 | 0.0000000    | 91.0085449    | 1.5378184    | 25.1603012       | torch.Size([2, 512, 128])        |
| 2114    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(5)    | input_0             | qint16        | 0.0032599 | 0.0000000    | 91.0085449    | 1.5378184    | 25.1603012       | torch.Size([2, 512, 128])        |
| 2114    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(5)    | output              | qint16        | 0.0000598 | 1.0738406    | 1.9413159     | 1.5378166    | 0.0457184        | torch.Size([2, 512, 1])          |
| 2115    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(5)            | input               | qint16        | 0.0000598 | 1.0738406    | 1.9413159     | 1.5378166    | 0.0457184        | torch.Size([2, 512, 1])          |
| 2115    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(5)            | output              | qint16        | 0.0000315 | 0.7177283    | 0.9649863     | 0.8124428    | 0.0033704        | torch.Size([2, 512, 1])          |
| 2116    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(5)          | input_0             | qint16        | 0.0003154 | -0.7307987   | 9.5398092     | -0.0000045   | 1.5378933        | torch.Size([2, 512, 128])        |
| 2116    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(5)          | input_1             | qint16        | 0.0000315 | 0.7177283    | 0.9649863     | 0.8124428    | 0.0033704        | torch.Size([2, 512, 1])          |
| 2116    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(5)          | output              | qint16        | 0.0002431 | -0.5984490   | 7.2795901     | -0.0000062   | 1.0000603        | torch.Size([2, 512, 128])        |
| 2117    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(5)     | input               | torch.float32 |           | 0.7088336    | 1.4002132     | 0.9292046    | 0.0145085        | torch.Size([128])                |
| 2117    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(5)     | output              | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 2118    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(5)       | input_0             | qint16        | 0.0002431 | -0.5984490   | 7.2795901     | -0.0000062   | 1.0000603        | torch.Size([2, 512, 128])        |
| 2118    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(5)       | input_1             | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 2118    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(5)       | output              | qint16        | 0.0002455 | -0.8380171   | 7.3533359     | 0.0089762    | 0.9048356        | torch.Size([2, 512, 128])        |
| 2119    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(5)       | input               | torch.float32 |           | -0.0965041   | 0.2669707     | 0.0619903    | 0.0064956        | torch.Size([128])                |
| 2119    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(5)       | output              | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 2120    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(5)         | input_0             | qint16        | 0.0002455 | -0.8380171   | 7.3533359     | 0.0089762    | 0.9048356        | torch.Size([2, 512, 128])        |
| 2120    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(5)         | input_1             | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 2120    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(5)         | output              | qint8         | 0.0587279 | -0.8221908   | 7.2822618     | 0.0712888    | 0.8683364        | torch.Size([2, 512, 128])        |
| 2121    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2121    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017897 | -0.9539077   | 2.8778305     | 0.1606543    | 0.4536631        | torch.Size([2, 512, 3])          |
| 2122    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(5)                  | input               | qint16        | 0.0017897 | -0.9539077   | 2.8778305     | 0.1606543    | 0.4536631        | torch.Size([2, 512, 3])          |
| 2122    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(5)                  | weight              | torch.float32 |           | -0.8288664   | 0.6362330     | 0.0683853    | 0.1118651        | torch.Size([32, 3])              |
| 2122    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(5)                  | bias                | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1068659        | torch.Size([32])                 |
| 2122    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(5)                  | output              | torch.float32 |           | -2.0930939   | 2.4429431     | 0.0933597    | 0.2482558        | torch.Size([2, 512, 32])         |
| 2123    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(5)                  | input               | torch.float32 |           | -2.0930939   | 2.4429431     | 0.0933597    | 0.2482558        | torch.Size([2, 512, 32])         |
| 2123    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(5)                  | output              | qint8         | 0.0194126 | 0.0000000    | 2.4459875     | 0.2492559    | 0.1010951        | torch.Size([2, 512, 32])         |
| 2124    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(5)  | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.4459875     | 0.2492559    | 0.1010951        | torch.Size([2, 512, 32])         |
| 2124    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(5)  | output              | qint16        | 0.0000252 | 0.1583350    | 0.7037137     | 0.2492562    | 0.0135226        | torch.Size([2, 512, 1])          |
| 2125    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(5)              | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.4459875     | 0.2492559    | 0.1010951        | torch.Size([2, 512, 32])         |
| 2125    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(5)              | input_1             | qint16        | 0.0000252 | 0.1583350    | 0.7037137     | 0.2492562    | 0.0135226        | torch.Size([2, 512, 1])          |
| 2125    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(5)              | output              | qint16        | 0.0000639 | -0.7037156   | 1.7422872     | 0.0000010    | 0.0875848        | torch.Size([2, 512, 32])         |
| 2126    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(5)              | input_0             | qint16        | 0.0000639 | -0.7037156   | 1.7422872     | 0.0000010    | 0.0875848        | torch.Size([2, 512, 32])         |
| 2126    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(5)              | input_1             | qint16        | 0.0000639 | -0.7037156   | 1.7422872     | 0.0000010    | 0.0875848        | torch.Size([2, 512, 32])         |
| 2126    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(5)              | output              | qint16        | 0.0001394 | 0.0000000    | 3.0355489     | 0.0875785    | 0.0268510        | torch.Size([2, 512, 32])         |
| 2127    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(5)    | input_0             | qint16        | 0.0001394 | 0.0000000    | 3.0355489     | 0.0875785    | 0.0268510        | torch.Size([2, 512, 32])         |
| 2127    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(5)    | output              | qint16        | 0.0000212 | 0.0353436    | 0.4788787     | 0.0875777    | 0.0045077        | torch.Size([2, 512, 1])          |
| 2128    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(5)            | input               | qint16        | 0.0000212 | 0.0353436    | 0.4788787     | 0.0875777    | 0.0045077        | torch.Size([2, 512, 1])          |
| 2128    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(5)            | output              | qint16        | 0.0001649 | 1.4449791    | 5.3183737     | 3.8938987    | 0.9822341        | torch.Size([2, 512, 1])          |
| 2129    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(5)          | input_0             | qint16        | 0.0000639 | -0.7037156   | 1.7422872     | 0.0000010    | 0.0875848        | torch.Size([2, 512, 32])         |
| 2129    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(5)          | input_1             | qint16        | 0.0001649 | 1.4449791    | 5.3183737     | 3.8938987    | 0.9822341        | torch.Size([2, 512, 1])          |
| 2129    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(5)          | output              | qint16        | 0.0000919 | -1.0773485   | 3.0128427     | -0.0000048   | 0.9998953        | torch.Size([2, 512, 32])         |
| 2130    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(5)     | input               | torch.float32 |           | 0.8401937    | 1.1936733     | 0.9969203    | 0.0071658        | torch.Size([32])                 |
| 2130    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(5)     | output              | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 2131    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(5)       | input_0             | qint16        | 0.0000919 | -1.0773485   | 3.0128427     | -0.0000048   | 0.9998953        | torch.Size([2, 512, 32])         |
| 2131    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(5)       | input_1             | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 2131    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(5)       | output              | qint16        | 0.0001022 | -1.2365447   | 3.2300847     | 0.0099356    | 0.9955858        | torch.Size([2, 512, 32])         |
| 2132    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(5)       | input               | torch.float32 |           | -0.1003950   | 0.1085345     | 0.0035262    | 0.0030721        | torch.Size([32])                 |
| 2132    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(5)       | output              | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 2133    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(5)         | input_0             | qint16        | 0.0001022 | -1.2365447   | 3.2300847     | 0.0099356    | 0.9955858        | torch.Size([2, 512, 32])         |
| 2133    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(5)         | input_1             | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 2133    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(5)         | output              | qint8         | 0.0232598 | -1.2095096   | 2.9539945     | 0.0132618    | 0.9480127        | torch.Size([2, 512, 32])         |
| 2134    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(5)                  | input               | qint8         | 0.0232598 | -1.2095096   | 2.9539945     | 0.0132618    | 0.9480127        | torch.Size([2, 512, 32])         |
| 2134    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(5)                  | weight              | torch.float32 |           | -0.5793310   | 0.5422795     | -0.0032135   | 0.0176575        | torch.Size([32, 32])             |
| 2134    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(5)                  | bias                | torch.float32 |           | -0.1716317   | 0.2230143     | 0.0007250    | 0.0126328        | torch.Size([32])                 |
| 2134    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(5)                  | output              | torch.float32 |           | -4.3975639   | 2.1393447     | -0.2385971   | 1.4081060        | torch.Size([2, 512, 32])         |
| 2135    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(5)                  | input               | torch.float32 |           | -4.3975639   | 2.1393447     | -0.2385971   | 1.4081060        | torch.Size([2, 512, 32])         |
| 2135    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(5)                  | output              | qint8         | 0.0172935 | 0.0000000    | 2.1443977     | 0.3560004    | 0.2458027        | torch.Size([2, 512, 32])         |
| 2136    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(5)  | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1443977     | 0.3560004    | 0.2458027        | torch.Size([2, 512, 32])         |
| 2136    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(5)  | output              | qint16        | 0.0000141 | 0.2680463    | 0.4182939     | 0.3560011    | 0.0006583        | torch.Size([2, 512, 1])          |
| 2137    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(5)              | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1443977     | 0.3560004    | 0.2458027        | torch.Size([2, 512, 32])         |
| 2137    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(5)              | input_1             | qint16        | 0.0000141 | 0.2680463    | 0.4182939     | 0.3560011    | 0.0006583        | torch.Size([2, 512, 1])          |
| 2137    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(5)              | output              | qint16        | 0.0000617 | -0.4182645   | 1.8417588     | 0.0000012    | 0.2451439        | torch.Size([2, 512, 32])         |
| 2138    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(5)              | input_0             | qint16        | 0.0000617 | -0.4182645   | 1.8417588     | 0.0000012    | 0.2451439        | torch.Size([2, 512, 32])         |
| 2138    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(5)              | input_1             | qint16        | 0.0000617 | -0.4182645   | 1.8417588     | 0.0000012    | 0.2451439        | torch.Size([2, 512, 32])         |
| 2138    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(5)              | output              | qint16        | 0.0001252 | 0.0000000    | 3.3920836     | 0.2451345    | 0.1506466        | torch.Size([2, 512, 32])         |
| 2139    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(5)    | input_0             | qint16        | 0.0001252 | 0.0000000    | 3.3920836     | 0.2451345    | 0.1506466        | torch.Size([2, 512, 32])         |
| 2139    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(5)    | output              | qint16        | 0.0000132 | 0.1517345    | 0.3461583     | 0.2451342    | 0.0027347        | torch.Size([2, 512, 1])          |
| 2140    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(5)            | input               | qint16        | 0.0000132 | 0.1517345    | 0.3461583     | 0.2451342    | 0.0027347        | torch.Size([2, 512, 1])          |
| 2140    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(5)            | output              | qint16        | 0.0000777 | 1.6996247    | 2.5457854     | 2.0593505    | 0.0612489        | torch.Size([2, 512, 1])          |
| 2141    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(5)          | input_0             | qint16        | 0.0000617 | -0.4182645   | 1.8417588     | 0.0000012    | 0.2451439        | torch.Size([2, 512, 32])         |
| 2141    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(5)          | input_1             | qint16        | 0.0000777 | 1.6996247    | 2.5457854     | 2.0593505    | 0.0612489        | torch.Size([2, 512, 1])          |
| 2141    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(5)          | output              | qint16        | 0.0001125 | -0.9120530   | 3.6849864     | -0.0000018   | 0.9998610        | torch.Size([2, 512, 32])         |
| 2142    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(5)     | input               | torch.float32 |           | 0.8191299    | 1.0923718     | 0.9808199    | 0.0031231        | torch.Size([32])                 |
| 2142    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(5)     | output              | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 2143    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(5)       | input_0             | qint16        | 0.0001125 | -0.9120530   | 3.6849864     | -0.0000018   | 0.9998610        | torch.Size([2, 512, 32])         |
| 2143    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(5)       | input_1             | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 2143    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(5)       | output              | qint16        | 0.0001113 | -0.9202085   | 3.5213978     | 0.0091194    | 0.9908087        | torch.Size([2, 512, 32])         |
| 2144    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(5)       | input               | torch.float32 |           | -0.0704119   | 0.0788569     | 0.0097621    | 0.0015200        | torch.Size([32])                 |
| 2144    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(5)       | output              | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 2145    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(5)         | input_0             | qint16        | 0.0001113 | -0.9202085   | 3.5213978     | 0.0091194    | 0.9908087        | torch.Size([2, 512, 32])         |
| 2145    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(5)         | input_1             | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 2145    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(5)         | output              | qint8         | 0.0262611 | -0.8928760   | 3.3351545     | 0.0187533    | 0.9641208        | torch.Size([2, 512, 32])         |
| 2146    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(5)                  | input               | qint8         | 0.0262611 | -0.8928760   | 3.3351545     | 0.0187533    | 0.9641208        | torch.Size([2, 512, 32])         |
| 2146    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(5)                  | weight              | torch.float32 |           | -0.5712157   | 0.5219681     | -0.0062917   | 0.0166056        | torch.Size([32, 32])             |
| 2146    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(5)                  | bias                | torch.float32 |           | -0.1649730   | 0.2318604     | 0.0253026    | 0.0136139        | torch.Size([32])                 |
| 2146    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(5)                  | output              | torch.float32 |           | -4.2679458   | 2.5603087     | -0.1425576   | 1.2321801        | torch.Size([2, 512, 32])         |
| 2147    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(5)                  | input               | torch.float32 |           | -4.2679458   | 2.5603087     | -0.1425576   | 1.2321801        | torch.Size([2, 512, 32])         |
| 2147    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(5)                  | output              | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3653507    | 0.2711799        | torch.Size([2, 512, 32])         |
| 2148    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(5)  | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3653507    | 0.2711799        | torch.Size([2, 512, 32])         |
| 2148    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(5)  | output              | qint16        | 0.0000154 | 0.1942825    | 0.4676995     | 0.3653513    | 0.0078798        | torch.Size([2, 512, 1])          |
| 2149    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(5)              | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3653507    | 0.2711799        | torch.Size([2, 512, 32])         |
| 2149    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(5)              | input_1             | qint16        | 0.0000154 | 0.1942825    | 0.4676995     | 0.3653513    | 0.0078798        | torch.Size([2, 512, 1])          |
| 2149    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(5)              | output              | qint16        | 0.0000636 | -0.4676979   | 2.0137100     | 0.0000000    | 0.2633089        | torch.Size([2, 512, 32])         |
| 2150    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(5)              | input_0             | qint16        | 0.0000636 | -0.4676979   | 2.0137100     | 0.0000000    | 0.2633089        | torch.Size([2, 512, 32])         |
| 2150    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(5)              | input_1             | qint16        | 0.0000636 | -0.4676979   | 2.0137100     | 0.0000000    | 0.2633089        | torch.Size([2, 512, 32])         |
| 2150    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(5)              | output              | qint16        | 0.0001333 | 0.0000000    | 4.0549994     | 0.2633027    | 0.3037859        | torch.Size([2, 512, 32])         |
| 2151    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(5)    | input_0             | qint16        | 0.0001333 | 0.0000000    | 4.0549994     | 0.2633027    | 0.3037859        | torch.Size([2, 512, 32])         |
| 2151    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(5)    | output              | qint16        | 0.0000116 | 0.1348479    | 0.3784634     | 0.2633015    | 0.0046384        | torch.Size([2, 512, 1])          |
| 2152    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(5)            | input               | qint16        | 0.0000116 | 0.1348479    | 0.3784634     | 0.2633015    | 0.0046384        | torch.Size([2, 512, 1])          |
| 2152    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(5)            | output              | qint16        | 0.0000821 | 1.6254737    | 2.6913540     | 2.0136735    | 0.1070609        | torch.Size([2, 512, 1])          |
| 2153    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(5)          | input_0             | qint16        | 0.0000636 | -0.4676979   | 2.0137100     | 0.0000000    | 0.2633089        | torch.Size([2, 512, 32])         |
| 2153    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(5)          | input_1             | qint16        | 0.0000821 | 1.6254737    | 2.6913540     | 2.0136735    | 0.1070609        | torch.Size([2, 512, 1])          |
| 2153    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(5)          | output              | qint16        | 0.0001195 | -0.9426237   | 3.7880681     | -0.0000057   | 0.9999589        | torch.Size([2, 512, 32])         |
| 2154    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(5)     | input               | torch.float32 |           | 0.8903234    | 1.1315480     | 0.9912031    | 0.0026835        | torch.Size([32])                 |
| 2154    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(5)     | output              | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 2155    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(5)       | input_0             | qint16        | 0.0001195 | -0.9426237   | 3.7880681     | -0.0000057   | 0.9999589        | torch.Size([2, 512, 32])         |
| 2155    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(5)       | input_1             | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 2155    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(5)       | output              | qint16        | 0.0001226 | -1.0665672   | 3.9031062     | 0.0039716    | 1.0264159        | torch.Size([2, 512, 32])         |
| 2156    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(5)       | input               | torch.float32 |           | -0.0586081   | 0.0779655     | 0.0041962    | 0.0015323        | torch.Size([32])                 |
| 2156    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(5)       | output              | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 2157    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(5)         | input_0             | qint16        | 0.0001226 | -1.0665672   | 3.9031062     | 0.0039716    | 1.0264159        | torch.Size([2, 512, 32])         |
| 2157    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(5)         | input_1             | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 2157    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(5)         | output              | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0079887    | 1.0063226        | torch.Size([2, 512, 32])         |
| 2158    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(5)                  | input               | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0079887    | 1.0063226        | torch.Size([2, 512, 32])         |
| 2158    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(5)                  | weight              | torch.float32 |           | -0.3204980   | 0.3365203     | -0.0020388   | 0.0145364        | torch.Size([32, 32])             |
| 2158    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(5)                  | bias                | torch.float32 |           | -0.1559148   | 0.2119379     | 0.0091616    | 0.0105488        | torch.Size([32])                 |
| 2158    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(5)                  | output              | torch.float32 |           | -2.4420834   | 2.6547031     | 0.0302387    | 0.7640640        | torch.Size([2, 512, 32])         |
| 2159    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(5)                 | input               | torch.float32 |           | -2.4420834   | 2.6547031     | 0.0302387    | 0.7640640        | torch.Size([2, 512, 32])         |
| 2159    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(5)                 | output              | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3564395    | 0.2745296        | torch.Size([2, 512, 32])         |
| 2160    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(5) | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3564395    | 0.2745296        | torch.Size([2, 512, 32])         |
| 2160    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(5) | output              | qint16        | 0.0000157 | 0.2569961    | 0.5130996     | 0.3550933    | 0.0023563        | torch.Size([2, 512, 1])          |
| 2161    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(5)             | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3564395    | 0.2745296        | torch.Size([2, 512, 32])         |
| 2161    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(5)             | input_1             | qint16        | 0.0000157 | 0.2569961    | 0.5130996     | 0.3550933    | 0.0023563        | torch.Size([2, 512, 1])          |
| 2161    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(5)             | output              | qint16        | 0.0000689 | -0.5131254   | 2.1691492     | 0.0013452    | 0.2717509        | torch.Size([2, 512, 32])         |
| 2162    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(5)             | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1691492     | 0.0013452    | 0.2717509        | torch.Size([2, 512, 32])         |
| 2162    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(5)             | input_1             | qint16        | 0.0000689 | -0.5131254   | 2.1691492     | 0.0013452    | 0.2717509        | torch.Size([2, 512, 32])         |
| 2162    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(5)             | output              | qint16        | 0.0001557 | 0.0000000    | 4.7052011     | 0.2717462    | 0.3466320        | torch.Size([2, 512, 32])         |
| 2163    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(5)   | input_0             | qint16        | 0.0001557 | 0.0000000    | 4.7052011     | 0.2717462    | 0.3466320        | torch.Size([2, 512, 32])         |
| 2163    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(5)   | output              | qint16        | 0.0000123 | 0.1902662    | 0.3948884     | 0.2717458    | 0.0015116        | torch.Size([2, 512, 1])          |
| 2164    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(5)           | input               | qint16        | 0.0000123 | 0.1902662    | 0.3948884     | 0.2717458    | 0.0015116        | torch.Size([2, 512, 1])          |
| 2164    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(5)           | output              | qint16        | 0.0000803 | 1.5913243    | 2.2924578     | 1.9323598    | 0.0176623        | torch.Size([2, 512, 1])          |
| 2165    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(5)         | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1691492     | 0.0013452    | 0.2717509        | torch.Size([2, 512, 32])         |
| 2165    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(5)         | input_1             | qint16        | 0.0000803 | 1.5913243    | 2.2924578     | 1.9323598    | 0.0176623        | torch.Size([2, 512, 1])          |
| 2165    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(5)         | output              | qint16        | 0.0001207 | -1.0553672   | 3.9562087     | 0.0025461    | 0.9999704        | torch.Size([2, 512, 32])         |
| 2166    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(5)    | input               | torch.float32 |           | 0.8289159    | 1.6609058     | 1.2561316    | 0.0353652        | torch.Size([32])                 |
| 2166    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(5)    | output              | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 2167    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(5)      | input_0             | qint16        | 0.0001207 | -1.0553672   | 3.9562087     | 0.0025461    | 0.9999704        | torch.Size([2, 512, 32])         |
| 2167    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(5)      | input_1             | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 2167    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(5)      | output              | qint16        | 0.0001642 | -1.7528859   | 4.9847383     | -0.0211847   | 1.4669164        | torch.Size([2, 512, 32])         |
| 2168    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(5)      | input               | torch.float32 |           | -0.1194881   | 0.2576658     | 0.0445686    | 0.0113612        | torch.Size([32])                 |
| 2168    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(5)      | output              | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 2169    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(5)        | input_0             | qint16        | 0.0001642 | -1.7528859   | 4.9847383     | -0.0211847   | 1.4669164        | torch.Size([2, 512, 32])         |
| 2169    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(5)        | input_1             | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 2169    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(5)        | output              | qint8         | 0.0385920 | -1.6980467   | 4.9011803     | 0.0238491    | 1.3918434        | torch.Size([2, 512, 32])         |
| 2170    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2170    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017897 | -1.2187827   | 1.2724736     | -0.0521930   | 0.1893093        | torch.Size([2, 512, 2])          |
| 2171    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(5)                   | input               | qint16        | 0.0017897 | -1.2187827   | 1.2724736     | -0.0521930   | 0.1893093        | torch.Size([2, 512, 2])          |
| 2171    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(5)                   | weight              | torch.float32 |           | -0.7023237   | 0.7394427     | 0.0490668    | 0.1972211        | torch.Size([32, 2])              |
| 2171    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(5)                   | bias                | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1641774        | torch.Size([32])                 |
| 2171    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(5)                   | output              | torch.float32 |           | -1.7378417   | 1.3024731     | -0.1231466   | 0.2337782        | torch.Size([2, 512, 32])         |
| 2172    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(5)                   | input               | torch.float32 |           | -1.7378417   | 1.3024731     | -0.1231466   | 0.2337782        | torch.Size([2, 512, 32])         |
| 2172    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(5)                   | output              | qint8         | 0.0115854 | 0.0000000    | 1.2975692     | 0.1456750    | 0.0641795        | torch.Size([2, 512, 32])         |
| 2173    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(5)   | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.2975692     | 0.1456750    | 0.0641795        | torch.Size([2, 512, 32])         |
| 2173    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(5)   | output              | qint16        | 0.0000105 | 0.1086165    | 0.2588598     | 0.1456744    | 0.0009591        | torch.Size([2, 512, 1])          |
| 2174    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(5)               | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.2975692     | 0.1456750    | 0.0641795        | torch.Size([2, 512, 32])         |
| 2174    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(5)               | input_1             | qint16        | 0.0000105 | 0.1086165    | 0.2588598     | 0.1456744    | 0.0009591        | torch.Size([2, 512, 1])          |
| 2174    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(5)               | output              | qint16        | 0.0000395 | -0.2588462   | 1.0387068     | -0.0000000   | 0.0632215        | torch.Size([2, 512, 32])         |
| 2175    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(5)               | input_0             | qint16        | 0.0000395 | -0.2588462   | 1.0387068     | -0.0000000   | 0.0632215        | torch.Size([2, 512, 32])         |
| 2175    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(5)               | input_1             | qint16        | 0.0000395 | -0.2588462   | 1.0387068     | -0.0000000   | 0.0632215        | torch.Size([2, 512, 32])         |
| 2175    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(5)               | output              | qint16        | 0.0000524 | 0.0000000    | 1.0788978     | 0.0632212    | 0.0147988        | torch.Size([2, 512, 32])         |
| 2176    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(5)     | input_0             | qint16        | 0.0000524 | 0.0000000    | 1.0788978     | 0.0632212    | 0.0147988        | torch.Size([2, 512, 32])         |
| 2176    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(5)     | output              | qint16        | 0.0000071 | 0.0404115    | 0.1350153     | 0.0632209    | 0.0004032        | torch.Size([2, 512, 1])          |
| 2177    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(5)             | input               | qint16        | 0.0000071 | 0.0404115    | 0.1350153     | 0.0632209    | 0.0004032        | torch.Size([2, 512, 1])          |
| 2177    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(5)             | output              | qint16        | 0.0001514 | 2.7214742    | 4.9613075     | 4.1025429    | 0.2994989        | torch.Size([2, 512, 1])          |
| 2178    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(5)           | input_0             | qint16        | 0.0000395 | -0.2588462   | 1.0387068     | -0.0000000   | 0.0632215        | torch.Size([2, 512, 32])         |
| 2178    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(5)           | input_1             | qint16        | 0.0001514 | 2.7214742    | 4.9613075     | 4.1025429    | 0.2994989        | torch.Size([2, 512, 1])          |
| 2178    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(5)           | output              | qint16        | 0.0001206 | -0.7652367   | 3.9524767     | -0.0000229   | 0.9996063        | torch.Size([2, 512, 32])         |
| 2179    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(5)      | input               | torch.float32 |           | 0.8947600    | 1.1748335     | 0.9865216    | 0.0041537        | torch.Size([32])                 |
| 2179    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(5)      | output              | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 2180    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(5)        | input_0             | qint16        | 0.0001206 | -0.7652367   | 3.9524767     | -0.0000229   | 0.9996063        | torch.Size([2, 512, 32])         |
| 2180    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(5)        | input_1             | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 2180    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(5)        | output              | qint16        | 0.0001306 | -0.8705541   | 4.2798867     | 0.0039402    | 1.0035650        | torch.Size([2, 512, 32])         |
| 2181    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(5)        | input               | torch.float32 |           | -0.0879948   | 0.1319895     | 0.0285039    | 0.0034159        | torch.Size([32])                 |
| 2181    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(5)        | output              | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 2182    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(5)          | input_0             | qint16        | 0.0001306 | -0.8705541   | 4.2798867     | 0.0039402    | 1.0035650        | torch.Size([2, 512, 32])         |
| 2182    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(5)          | input_1             | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 2182    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(5)          | output              | qint8         | 0.0302674 | -0.8172185   | 3.8439538     | 0.0321387    | 0.9219481        | torch.Size([2, 512, 32])         |
| 2183    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(5)                   | input               | qint8         | 0.0302674 | -0.8172185   | 3.8439538     | 0.0321387    | 0.9219481        | torch.Size([2, 512, 32])         |
| 2183    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(5)                   | weight              | torch.float32 |           | -1.0547366   | 0.5812716     | 0.0070099    | 0.0187704        | torch.Size([32, 32])             |
| 2183    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(5)                   | bias                | torch.float32 |           | -0.2183180   | 0.1396109     | -0.0140744   | 0.0103446        | torch.Size([32])                 |
| 2183    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(5)                   | output              | torch.float32 |           | -4.9405813   | 1.8046702     | -0.4950020   | 1.4116901        | torch.Size([2, 512, 32])         |
| 2184    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(5)                   | input               | torch.float32 |           | -4.9405813   | 1.8046702     | -0.4950020   | 1.4116901        | torch.Size([2, 512, 32])         |
| 2184    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(5)                   | output              | qint8         | 0.0142143 | 0.0000000    | 1.8052157     | 0.2243389    | 0.1245537        | torch.Size([2, 512, 32])         |
| 2185    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(5)   | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.8052157     | 0.2243389    | 0.1245537        | torch.Size([2, 512, 32])         |
| 2185    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(5)   | output              | qint16        | 0.0000116 | 0.1701251    | 0.3796301     | 0.2243319    | 0.0012099        | torch.Size([2, 512, 1])          |
| 2186    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(5)               | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.8052157     | 0.2243389    | 0.1245537        | torch.Size([2, 512, 32])         |
| 2186    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(5)               | input_1             | qint16        | 0.0000116 | 0.1701251    | 0.3796301     | 0.2243319    | 0.0012099        | torch.Size([2, 512, 1])          |
| 2186    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(5)               | output              | qint16        | 0.0000516 | -0.3796051   | 1.5271441     | 0.0000066    | 0.1233434        | torch.Size([2, 512, 32])         |
| 2187    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(5)               | input_0             | qint16        | 0.0000516 | -0.3796051   | 1.5271441     | 0.0000066    | 0.1233434        | torch.Size([2, 512, 32])         |
| 2187    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(5)               | input_1             | qint16        | 0.0000516 | -0.3796051   | 1.5271441     | 0.0000066    | 0.1233434        | torch.Size([2, 512, 32])         |
| 2187    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(5)               | output              | qint16        | 0.0000889 | 0.0000000    | 2.3321457     | 0.1233426    | 0.0521508        | torch.Size([2, 512, 32])         |
| 2188    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(5)     | input_0             | qint16        | 0.0000889 | 0.0000000    | 2.3321457     | 0.1233426    | 0.0521508        | torch.Size([2, 512, 32])         |
| 2188    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(5)     | output              | qint16        | 0.0000089 | 0.0736941    | 0.2693694     | 0.1233427    | 0.0008507        | torch.Size([2, 512, 1])          |
| 2189    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(5)             | input               | qint16        | 0.0000089 | 0.0736941    | 0.2693694     | 0.1233427    | 0.0008507        | torch.Size([2, 512, 1])          |
| 2189    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(5)             | output              | qint16        | 0.0001114 | 1.9266962    | 3.6515737     | 2.9010224    | 0.0979548        | torch.Size([2, 512, 1])          |
| 2190    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(5)           | input_0             | qint16        | 0.0000516 | -0.3796051   | 1.5271441     | 0.0000066    | 0.1233434        | torch.Size([2, 512, 32])         |
| 2190    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(5)           | input_1             | qint16        | 0.0001114 | 1.9266962    | 3.6515737     | 2.9010224    | 0.0979548        | torch.Size([2, 512, 1])          |
| 2190    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(5)           | output              | qint16        | 0.0001083 | -0.8466190   | 3.5501876     | 0.0000051    | 0.9998234        | torch.Size([2, 512, 32])         |
| 2191    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(5)      | input               | torch.float32 |           | 0.8550419    | 1.1198171     | 0.9805899    | 0.0036729        | torch.Size([32])                 |
| 2191    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(5)      | output              | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 2192    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(5)        | input_0             | qint16        | 0.0001083 | -0.8466190   | 3.5501876     | 0.0000051    | 0.9998234        | torch.Size([2, 512, 32])         |
| 2192    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(5)        | input_1             | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 2192    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(5)        | output              | qint16        | 0.0001106 | -0.9208305   | 3.6229506     | -0.0022752   | 0.9743939        | torch.Size([2, 512, 32])         |
| 2193    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(5)        | input               | torch.float32 |           | -0.0792132   | 0.1045145     | 0.0242442    | 0.0021608        | torch.Size([32])                 |
| 2193    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(5)        | output              | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 2194    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(5)          | input_0             | qint16        | 0.0001106 | -0.9208305   | 3.6229506     | -0.0022752   | 0.9743939        | torch.Size([2, 512, 32])         |
| 2194    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(5)          | input_1             | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 2194    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(5)          | output              | qint8         | 0.0268612 | -0.8595570   | 3.4113667     | 0.0219132    | 0.9233653        | torch.Size([2, 512, 32])         |
| 2195    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(5)                   | input               | qint8         | 0.0268612 | -0.8595570   | 3.4113667     | 0.0219132    | 0.9233653        | torch.Size([2, 512, 32])         |
| 2195    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(5)                   | weight              | torch.float32 |           | -0.4480607   | 0.3678726     | 0.0004879    | 0.0160908        | torch.Size([32, 32])             |
| 2195    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(5)                   | bias                | torch.float32 |           | -0.1861591   | 0.1739754     | 0.0155446    | 0.0137690        | torch.Size([32])                 |
| 2195    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(5)                   | output              | torch.float32 |           | -3.6878965   | 2.4170377     | -0.2582332   | 1.3679295        | torch.Size([2, 512, 32])         |
| 2196    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(5)                   | input               | torch.float32 |           | -3.6878965   | 2.4170377     | -0.2582332   | 1.3679295        | torch.Size([2, 512, 32])         |
| 2196    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(5)                   | output              | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3329420    | 0.2068340        | torch.Size([2, 512, 32])         |
| 2197    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(5)   | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3329420    | 0.2068340        | torch.Size([2, 512, 32])         |
| 2197    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(5)   | output              | qint16        | 0.0000156 | 0.2449042    | 0.5115557     | 0.3329072    | 0.0010742        | torch.Size([2, 512, 1])          |
| 2198    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(5)               | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3329420    | 0.2068340        | torch.Size([2, 512, 32])         |
| 2198    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(5)               | input_1             | qint16        | 0.0000156 | 0.2449042    | 0.5115557     | 0.3329072    | 0.0010742        | torch.Size([2, 512, 1])          |
| 2198    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(5)               | output              | qint16        | 0.0000645 | -0.5115817   | 2.0845923     | 0.0000384    | 0.2057466        | torch.Size([2, 512, 32])         |
| 2199    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(5)               | input_0             | qint16        | 0.0000645 | -0.5115817   | 2.0845923     | 0.0000384    | 0.2057466        | torch.Size([2, 512, 32])         |
| 2199    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(5)               | input_1             | qint16        | 0.0000645 | -0.5115817   | 2.0845923     | 0.0000384    | 0.2057466        | torch.Size([2, 512, 32])         |
| 2199    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(5)               | output              | qint16        | 0.0001365 | 0.0000000    | 4.3454652     | 0.2057410    | 0.1341521        | torch.Size([2, 512, 32])         |
| 2200    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(5)     | input_0             | qint16        | 0.0001365 | 0.0000000    | 4.3454652     | 0.2057410    | 0.1341521        | torch.Size([2, 512, 32])         |
| 2200    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(5)     | output              | qint16        | 0.0000123 | 0.1586774    | 0.4040551     | 0.2057137    | 0.0008710        | torch.Size([2, 512, 1])          |
| 2201    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(5)             | input               | qint16        | 0.0000123 | 0.1586774    | 0.4040551     | 0.2057137    | 0.0008710        | torch.Size([2, 512, 1])          |
| 2201    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(5)             | output              | qint16        | 0.0000749 | 1.5731732    | 2.4551423     | 2.2178214    | 0.0180624        | torch.Size([2, 512, 1])          |
| 2202    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(5)           | input_0             | qint16        | 0.0000645 | -0.5115817   | 2.0845923     | 0.0000384    | 0.2057466        | torch.Size([2, 512, 32])         |
| 2202    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(5)           | input_1             | qint16        | 0.0000749 | 1.5731732    | 2.4551423     | 2.2178214    | 0.0180624        | torch.Size([2, 512, 1])          |
| 2202    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(5)           | output              | qint16        | 0.0001267 | -0.8627828   | 4.1501474     | -0.0000990   | 0.9978194        | torch.Size([2, 512, 32])         |
| 2203    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(5)      | input               | torch.float32 |           | 0.8469434    | 1.1090456     | 0.9866461    | 0.0031007        | torch.Size([32])                 |
| 2203    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(5)      | output              | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 2204    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(5)        | input_0             | qint16        | 0.0001267 | -0.8627828   | 4.1501474     | -0.0000990   | 0.9978194        | torch.Size([2, 512, 32])         |
| 2204    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(5)        | input_1             | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 2204    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(5)        | output              | qint16        | 0.0001376 | -0.9569111   | 4.4246821     | -0.0016947   | 0.9925926        | torch.Size([2, 512, 32])         |
| 2205    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(5)        | input               | torch.float32 |           | -0.0626723   | 0.0887763     | 0.0071697    | 0.0011301        | torch.Size([32])                 |
| 2205    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(5)        | output              | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 2206    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(5)          | input_0             | qint16        | 0.0001376 | -0.9569111   | 4.4246821     | -0.0016947   | 0.9925926        | torch.Size([2, 512, 32])         |
| 2206    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(5)          | input_1             | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 2206    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(5)          | output              | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0051411    | 0.9688237        | torch.Size([2, 512, 32])         |
| 2207    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(5)                   | input               | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0051411    | 0.9688237        | torch.Size([2, 512, 32])         |
| 2207    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(5)                   | weight              | torch.float32 |           | -0.5597425   | 0.7001730     | 0.0015679    | 0.0160348        | torch.Size([32, 32])             |
| 2207    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(5)                   | bias                | torch.float32 |           | -0.1810580   | 0.1736723     | -0.0279047   | 0.0091159        | torch.Size([32])                 |
| 2207    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(5)                   | output              | torch.float32 |           | -4.3277512   | 3.1663642     | -0.2416214   | 1.1683766        | torch.Size([2, 512, 32])         |
| 2208    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(5)                  | input               | torch.float32 |           | -4.3277512   | 3.1663642     | -0.2416214   | 1.1683766        | torch.Size([2, 512, 32])         |
| 2208    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(5)                  | output              | qint8         | 0.0271917 | 0.0000000    | 3.1542335     | 0.2887098    | 0.3026666        | torch.Size([2, 512, 32])         |
| 2209    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(5)  | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.1542335     | 0.2887098    | 0.3026666        | torch.Size([2, 512, 32])         |
| 2209    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(5)  | output              | qint16        | 0.0000121 | 0.2081878    | 0.3959820     | 0.2887099    | 0.0016508        | torch.Size([2, 512, 1])          |
| 2210    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(5)              | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.1542335     | 0.2887098    | 0.3026666        | torch.Size([2, 512, 32])         |
| 2210    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(5)              | input_1             | qint16        | 0.0000121 | 0.2081878    | 0.3959820     | 0.2887099    | 0.0016508        | torch.Size([2, 512, 1])          |
| 2210    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(5)              | output              | qint16        | 0.0000976 | -0.3959758   | 2.8406410     | 0.0000024    | 0.3010166        | torch.Size([2, 512, 32])         |
| 2211    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(5)              | input_0             | qint16        | 0.0000976 | -0.3959758   | 2.8406410     | 0.0000024    | 0.3010166        | torch.Size([2, 512, 32])         |
| 2211    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(5)              | input_1             | qint16        | 0.0000976 | -0.3959758   | 2.8406410     | 0.0000024    | 0.3010166        | torch.Size([2, 512, 32])         |
| 2211    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(5)              | output              | qint16        | 0.0003122 | 0.0000000    | 8.0692539     | 0.3010077    | 0.8155602        | torch.Size([2, 512, 32])         |
| 2212    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(5)    | input_0             | qint16        | 0.0003122 | 0.0000000    | 8.0692539     | 0.3010077    | 0.8155602        | torch.Size([2, 512, 32])         |
| 2212    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(5)    | output              | qint16        | 0.0000136 | 0.1408076    | 0.4212233     | 0.3010088    | 0.0056254        | torch.Size([2, 512, 1])          |
| 2213    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(5)            | input               | qint16        | 0.0000136 | 0.1408076    | 0.4212233     | 0.3010088    | 0.0056254        | torch.Size([2, 512, 1])          |
| 2213    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(5)            | output              | qint16        | 0.0000802 | 1.5408093    | 2.6273782     | 1.8738623    | 0.0748497        | torch.Size([2, 512, 1])          |
| 2214    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(5)          | input_0             | qint16        | 0.0000976 | -0.3959758   | 2.8406410     | 0.0000024    | 0.3010166        | torch.Size([2, 512, 32])         |
| 2214    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(5)          | input_1             | qint16        | 0.0000802 | 1.5408093    | 2.6273782     | 1.8738623    | 0.0748497        | torch.Size([2, 512, 1])          |
| 2214    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(5)          | output              | qint16        | 0.0001482 | -0.7862931   | 4.7922845     | 0.0000065    | 0.9999160        | torch.Size([2, 512, 32])         |
| 2215    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(5)     | input               | torch.float32 |           | 0.8363900    | 1.4688344     | 1.0570920    | 0.0396277        | torch.Size([32])                 |
| 2215    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(5)     | output              | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 2216    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(5)       | input_0             | qint16        | 0.0001482 | -0.7862931   | 4.7922845     | 0.0000065    | 0.9999160        | torch.Size([2, 512, 32])         |
| 2216    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(5)       | input_1             | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 2216    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(5)       | output              | qint16        | 0.0001637 | -1.1548947   | 5.0084186     | -0.0580049   | 0.9164519        | torch.Size([2, 512, 32])         |
| 2217    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(5)       | input               | torch.float32 |           | -0.1492936   | 0.2842544     | 0.0803791    | 0.0109446        | torch.Size([32])                 |
| 2217    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(5)       | output              | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 2218    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(5)         | input_0             | qint16        | 0.0001637 | -1.1548947   | 5.0084186     | -0.0580049   | 0.9164519        | torch.Size([2, 512, 32])         |
| 2218    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(5)         | input_1             | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 2218    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(5)         | output              | qint8         | 0.0373904 | -0.9721510   | 4.7485838     | 0.0226422    | 0.8426588        | torch.Size([2, 512, 32])         |
| 2219    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2219    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017897 | -12.0285444  | 9.5766611     | -0.2184472   | 2.2790558        | torch.Size([2, 512, 3])          |
| 2220    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(5)                   | input               | qint16        | 0.0017897 | -12.0285444  | 9.5766611     | -0.2184472   | 2.2790558        | torch.Size([2, 512, 3])          |
| 2220    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(5)                   | weight              | torch.float32 |           | -1.0475703   | 0.9848034     | -0.0054673   | 0.2080412        | torch.Size([64, 3])              |
| 2220    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(5)                   | bias                | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1294928        | torch.Size([64])                 |
| 2220    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(5)                   | output              | torch.float32 |           | -11.1473722  | 12.9845095    | -0.0984260   | 1.6896001        | torch.Size([2, 512, 64])         |
| 2221    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(5)                   | input               | torch.float32 |           | -11.1473722  | 12.9845095    | -0.0984260   | 1.6896001        | torch.Size([2, 512, 64])         |
| 2221    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(5)                   | output              | qint8         | 0.0729980 | 0.0000000    | 9.2707472     | 0.2895650    | 0.6371219        | torch.Size([2, 512, 64])         |
| 2222    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(5)   | input_0             | qint8         | 0.0729980 | 0.0000000    | 9.2707472     | 0.2895650    | 0.6371219        | torch.Size([2, 512, 64])         |
| 2222    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(5)   | output              | qint16        | 0.0000685 | 0.1208711    | 2.2452281     | 0.2894680    | 0.1376961        | torch.Size([2, 512, 1])          |
| 2223    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(5)               | input_0             | qint8         | 0.0729980 | 0.0000000    | 9.2707472     | 0.2895650    | 0.6371219        | torch.Size([2, 512, 64])         |
| 2223    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(5)               | input_1             | qint16        | 0.0000685 | 0.1208711    | 2.2452281     | 0.2894680    | 0.1376961        | torch.Size([2, 512, 1])          |
| 2223    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(5)               | output              | qint16        | 0.0002902 | -2.2453439   | 7.8016782     | 0.0000899    | 0.4991982        | torch.Size([2, 512, 64])         |
| 2224    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(5)               | input_0             | qint16        | 0.0002902 | -2.2453439   | 7.8016782     | 0.0000899    | 0.4991982        | torch.Size([2, 512, 64])         |
| 2224    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(5)               | input_1             | qint16        | 0.0002902 | -2.2453439   | 7.8016782     | 0.0000899    | 0.4991982        | torch.Size([2, 512, 64])         |
| 2224    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(5)               | output              | qint16        | 0.0029551 | 0.0000000    | 60.8661003    | 0.4992411    | 10.0756750       | torch.Size([2, 512, 64])         |
| 2225    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(5)     | input_0             | qint16        | 0.0029551 | 0.0000000    | 60.8661003    | 0.4992411    | 10.0756750       | torch.Size([2, 512, 64])         |
| 2225    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(5)     | output              | qint16        | 0.0003723 | 0.0245721    | 11.1289387    | 0.4992787    | 2.8082719        | torch.Size([2, 512, 1])          |
| 2226    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(5)             | input               | qint16        | 0.0003723 | 0.0245721    | 11.1289387    | 0.4992787    | 2.8082719        | torch.Size([2, 512, 1])          |
| 2226    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(5)             | output              | qint16        | 0.0001859 | 0.2997383    | 6.0927577     | 4.1243968    | 2.8486910        | torch.Size([2, 512, 1])          |
| 2227    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(5)           | input_0             | qint16        | 0.0002902 | -2.2453439   | 7.8016782     | 0.0000899    | 0.4991982        | torch.Size([2, 512, 64])         |
| 2227    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(5)           | input_1             | qint16        | 0.0001859 | 0.2997383    | 6.0927577     | 4.1243968    | 2.8486910        | torch.Size([2, 512, 1])          |
| 2227    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(5)           | output              | qint16        | 0.0001160 | -0.9094031   | 3.7993641     | 0.0000082    | 0.9978359        | torch.Size([2, 512, 64])         |
| 2228    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(5)      | input               | torch.float32 |           | 0.8691067    | 1.1281288     | 0.9794419    | 0.0036082        | torch.Size([64])                 |
| 2228    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(5)      | output              | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 2229    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(5)        | input_0             | qint16        | 0.0001160 | -0.9094031   | 3.7993641     | 0.0000082    | 0.9978359        | torch.Size([2, 512, 64])         |
| 2229    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(5)        | input_1             | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 2229    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(5)        | output              | qint16        | 0.0001189 | -1.0259570   | 3.7777705     | 0.0115627    | 0.9564945        | torch.Size([2, 512, 64])         |
| 2230    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(5)        | input               | torch.float32 |           | -0.1133662   | 0.1493634     | 0.0304540    | 0.0046508        | torch.Size([64])                 |
| 2230    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(5)        | output              | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 2231    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(5)          | input_0             | qint16        | 0.0001189 | -1.0259570   | 3.7777705     | 0.0115627    | 0.9564945        | torch.Size([2, 512, 64])         |
| 2231    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(5)          | input_1             | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 2231    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(5)          | output              | qint8         | 0.0267452 | -1.0163175   | 3.3966403     | 0.0416184    | 0.8771896        | torch.Size([2, 512, 64])         |
| 2232    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(5)                   | input               | qint8         | 0.0267452 | -1.0163175   | 3.3966403     | 0.0416184    | 0.8771896        | torch.Size([2, 512, 64])         |
| 2232    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(5)                   | weight              | torch.float32 |           | -0.4523612   | 0.4813256     | -0.0014562   | 0.0096743        | torch.Size([64, 64])             |
| 2232    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(5)                   | bias                | torch.float32 |           | -0.1183558   | 0.2243176     | 0.0150283    | 0.0049289        | torch.Size([64])                 |
| 2232    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(5)                   | output              | torch.float32 |           | -5.4276304   | 4.3398428     | -0.3664048   | 2.1069281        | torch.Size([2, 512, 64])         |
| 2233    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(5)                   | input               | torch.float32 |           | -5.4276304   | 4.3398428     | -0.3664048   | 2.1069281        | torch.Size([2, 512, 64])         |
| 2233    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(5)                   | output              | qint8         | 0.0337689 | 0.0000000    | 4.2886496     | 0.3575265    | 0.2832046        | torch.Size([2, 512, 64])         |
| 2234    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(5)   | input_0             | qint8         | 0.0337689 | 0.0000000    | 4.2886496     | 0.3575265    | 0.2832046        | torch.Size([2, 512, 64])         |
| 2234    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(5)   | output              | qint16        | 0.0000195 | 0.2168615    | 0.6378726     | 0.3575000    | 0.0128839        | torch.Size([2, 512, 1])          |
| 2235    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(5)               | input_0             | qint8         | 0.0337689 | 0.0000000    | 4.2886496     | 0.3575265    | 0.2832046        | torch.Size([2, 512, 64])         |
| 2235    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(5)               | input_1             | qint16        | 0.0000195 | 0.2168615    | 0.6378726     | 0.3575000    | 0.0128839        | torch.Size([2, 512, 1])          |
| 2235    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(5)               | output              | qint16        | 0.0001376 | -0.6378934   | 3.6897352     | 0.0000291    | 0.2703185        | torch.Size([2, 512, 64])         |
| 2236    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(5)               | input_0             | qint16        | 0.0001376 | -0.6378934   | 3.6897352     | 0.0000291    | 0.2703185        | torch.Size([2, 512, 64])         |
| 2236    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(5)               | input_1             | qint16        | 0.0001376 | -0.6378934   | 3.6897352     | 0.0000291    | 0.2703185        | torch.Size([2, 512, 64])         |
| 2236    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(5)               | output              | qint16        | 0.0006236 | 0.0000000    | 13.6140032    | 0.2702859    | 0.3750892        | torch.Size([2, 512, 64])         |
| 2237    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(5)     | input_0             | qint16        | 0.0006236 | 0.0000000    | 13.6140032    | 0.2702859    | 0.3750892        | torch.Size([2, 512, 64])         |
| 2237    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(5)     | output              | qint16        | 0.0000322 | 0.0842103    | 0.8599478     | 0.2702842    | 0.0276069        | torch.Size([2, 512, 1])          |
| 2238    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(5)             | input               | qint16        | 0.0000322 | 0.0842103    | 0.8599478     | 0.2702842    | 0.0276069        | torch.Size([2, 512, 1])          |
| 2238    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(5)             | output              | qint16        | 0.0001060 | 1.0783857    | 3.4457886     | 2.2403049    | 0.5643308        | torch.Size([2, 512, 1])          |
| 2239    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(5)           | input_0             | qint16        | 0.0001376 | -0.6378934   | 3.6897352     | 0.0000291    | 0.2703185        | torch.Size([2, 512, 64])         |
| 2239    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(5)           | input_1             | qint16        | 0.0001060 | 1.0783857    | 3.4457886     | 2.2403049    | 0.5643308        | torch.Size([2, 512, 1])          |
| 2239    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(5)           | output              | qint16        | 0.0001466 | -0.8815237   | 4.4716945     | 0.0000368    | 1.0002877        | torch.Size([2, 512, 64])         |
| 2240    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(5)      | input               | torch.float32 |           | 0.8333027    | 1.1388558     | 0.9778216    | 0.0042186        | torch.Size([64])                 |
| 2240    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(5)      | output              | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 2241    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(5)        | input_0             | qint16        | 0.0001466 | -0.8815237   | 4.4716945     | 0.0000368    | 1.0002877        | torch.Size([2, 512, 64])         |
| 2241    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(5)        | input_1             | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 2241    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(5)        | output              | qint16        | 0.0001474 | -0.9429734   | 4.3549862     | 0.0041888    | 0.9861685        | torch.Size([2, 512, 64])         |
| 2242    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(5)        | input               | torch.float32 |           | -0.0757831   | 0.1161729     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 2242    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(5)        | output              | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 2243    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(5)          | input_0             | qint16        | 0.0001474 | -0.9429734   | 4.3549862     | 0.0041888    | 0.9861685        | torch.Size([2, 512, 64])         |
| 2243    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(5)          | input_1             | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 2243    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(5)          | output              | qint8         | 0.0350382 | -0.9460305   | 4.3447328     | 0.0204901    | 0.9497355        | torch.Size([2, 512, 64])         |
| 2244    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(5)                   | input               | qint8         | 0.0350382 | -0.9460305   | 4.3447328     | 0.0204901    | 0.9497355        | torch.Size([2, 512, 64])         |
| 2244    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(5)                   | weight              | torch.float32 |           | -0.5707353   | 0.3620123     | -0.0010372   | 0.0088292        | torch.Size([64, 64])             |
| 2244    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(5)                   | bias                | torch.float32 |           | -0.1720246   | 0.1340137     | -0.0235144   | 0.0050507        | torch.Size([64])                 |
| 2244    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(5)                   | output              | torch.float32 |           | -5.4768038   | 3.7020831     | -0.2897721   | 1.9625076        | torch.Size([2, 512, 64])         |
| 2245    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(5)                   | input               | torch.float32 |           | -5.4768038   | 3.7020831     | -0.2897721   | 1.9625076        | torch.Size([2, 512, 64])         |
| 2245    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(5)                   | output              | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4412844    | 0.4835328        | torch.Size([2, 512, 64])         |
| 2246    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(5)   | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4412844    | 0.4835328        | torch.Size([2, 512, 64])         |
| 2246    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(5)   | output              | qint16        | 0.0000166 | 0.3435480    | 0.5453199     | 0.4412594    | 0.0020066        | torch.Size([2, 512, 1])          |
| 2247    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(5)               | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4412844    | 0.4835328        | torch.Size([2, 512, 64])         |
| 2247    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(5)               | input_1             | qint16        | 0.0000166 | 0.3435480    | 0.5453199     | 0.4412594    | 0.0020066        | torch.Size([2, 512, 1])          |
| 2247    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(5)               | output              | qint16        | 0.0000988 | -0.5452744   | 3.1881309     | 0.0000284    | 0.4815213        | torch.Size([2, 512, 64])         |
| 2248    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(5)               | input_0             | qint16        | 0.0000988 | -0.5452744   | 3.1881309     | 0.0000284    | 0.4815213        | torch.Size([2, 512, 64])         |
| 2248    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(5)               | input_1             | qint16        | 0.0000988 | -0.5452744   | 3.1881309     | 0.0000284    | 0.4815213        | torch.Size([2, 512, 64])         |
| 2248    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(5)               | output              | qint16        | 0.0003201 | 0.0000000    | 10.1640558    | 0.4814986    | 0.9643359        | torch.Size([2, 512, 64])         |
| 2249    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(5)     | input_0             | qint16        | 0.0003201 | 0.0000000    | 10.1640558    | 0.4814986    | 0.9643359        | torch.Size([2, 512, 64])         |
| 2249    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(5)     | output              | qint16        | 0.0000230 | 0.2757448    | 0.7316419     | 0.4814996    | 0.0109268        | torch.Size([2, 512, 1])          |
| 2250    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(5)             | input               | qint16        | 0.0000230 | 0.2757448    | 0.7316419     | 0.4814996    | 0.0109268        | torch.Size([2, 512, 1])          |
| 2250    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(5)             | output              | qint16        | 0.0000608 | 1.1690825    | 1.9043232     | 1.4680409    | 0.0274447        | torch.Size([2, 512, 1])          |
| 2251    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(5)           | input_0             | qint16        | 0.0000988 | -0.5452744   | 3.1881309     | 0.0000284    | 0.4815213        | torch.Size([2, 512, 64])         |
| 2251    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(5)           | input_1             | qint16        | 0.0000608 | 1.1690825    | 1.9043232     | 1.4680409    | 0.0274447        | torch.Size([2, 512, 1])          |
| 2251    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(5)           | output              | qint16        | 0.0001598 | -0.7236346   | 4.2777085     | 0.0000359    | 1.0000418        | torch.Size([2, 512, 64])         |
| 2252    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(5)      | input               | torch.float32 |           | 0.8006503    | 1.1495361     | 0.9818506    | 0.0032003        | torch.Size([64])                 |
| 2252    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(5)      | output              | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 2253    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(5)        | input_0             | qint16        | 0.0001598 | -0.7236346   | 4.2777085     | 0.0000359    | 1.0000418        | torch.Size([2, 512, 64])         |
| 2253    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(5)        | input_1             | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 2253    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(5)        | output              | qint16        | 0.0001633 | -0.8044462   | 4.4689455     | 0.0056443    | 0.9979448        | torch.Size([2, 512, 64])         |
| 2254    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(5)        | input               | torch.float32 |           | -0.0461140   | 0.1411197     | 0.0132828    | 0.0015701        | torch.Size([64])                 |
| 2254    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(5)        | output              | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 2255    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(5)          | input_0             | qint16        | 0.0001633 | -0.8044462   | 4.4689455     | 0.0056443    | 0.9979448        | torch.Size([2, 512, 64])         |
| 2255    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(5)          | input_1             | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 2255    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(5)          | output              | qint8         | 0.0387038 | -0.8127795   | 4.4509358     | 0.0187306    | 0.9831260        | torch.Size([2, 512, 64])         |
| 2256    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(5)                   | input               | qint8         | 0.0387038 | -0.8127795   | 4.4509358     | 0.0187306    | 0.9831260        | torch.Size([2, 512, 64])         |
| 2256    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(5)                   | weight              | torch.float32 |           | -0.5701389   | 0.3477888     | 0.0006721    | 0.0085883        | torch.Size([64, 64])             |
| 2256    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(5)                   | bias                | torch.float32 |           | -0.1677032   | 0.1709885     | -0.0237130   | 0.0070098        | torch.Size([64])                 |
| 2256    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(5)                   | output              | torch.float32 |           | -4.7524137   | 7.1970291     | -0.4101194   | 1.6245574        | torch.Size([2, 512, 64])         |
| 2257    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(5)                  | input               | torch.float32 |           | -4.7524137   | 7.1970291     | -0.4101194   | 1.6245574        | torch.Size([2, 512, 64])         |
| 2257    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(5)                  | output              | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2798406    | 0.5850682        | torch.Size([2, 512, 64])         |
| 2258    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(5)  | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2798406    | 0.5850682        | torch.Size([2, 512, 64])         |
| 2258    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(5)  | output              | qint16        | 0.0000138 | 0.2038234    | 0.3918329     | 0.2798437    | 0.0022875        | torch.Size([2, 512, 1])          |
| 2259    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(5)              | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2798406    | 0.5850682        | torch.Size([2, 512, 64])         |
| 2259    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(5)              | input_1             | qint16        | 0.0000138 | 0.2038234    | 0.3918329     | 0.2798437    | 0.0022875        | torch.Size([2, 512, 1])          |
| 2259    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(5)              | output              | qint16        | 0.0002137 | -0.3918612   | 6.9370551     | -0.0000130   | 0.5827907        | torch.Size([2, 512, 64])         |
| 2260    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(5)              | input_0             | qint16        | 0.0002137 | -0.3918612   | 6.9370551     | -0.0000130   | 0.5827907        | torch.Size([2, 512, 64])         |
| 2260    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(5)              | input_1             | qint16        | 0.0002137 | -0.3918612   | 6.9370551     | -0.0000130   | 0.5827907        | torch.Size([2, 512, 64])         |
| 2260    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(5)              | output              | qint16        | 0.0014959 | 0.0000000    | 48.1224632    | 0.5827801    | 13.2765980       | torch.Size([2, 512, 64])         |
| 2261    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(5)    | input_0             | qint16        | 0.0014959 | 0.0000000    | 48.1224632    | 0.5827801    | 13.2765980       | torch.Size([2, 512, 64])         |
| 2261    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(5)    | output              | qint16        | 0.0000253 | 0.1908668    | 0.8216262     | 0.5827791    | 0.0303151        | torch.Size([2, 512, 1])          |
| 2262    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(5)            | input               | qint16        | 0.0000253 | 0.1908668    | 0.8216262     | 0.5827791    | 0.0303151        | torch.Size([2, 512, 1])          |
| 2262    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(5)            | output              | qint16        | 0.0000680 | 1.1032057    | 2.2290647     | 1.3745120    | 0.0839208        | torch.Size([2, 512, 1])          |
| 2263    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(5)          | input_0             | qint16        | 0.0002137 | -0.3918612   | 6.9370551     | -0.0000130   | 0.5827907        | torch.Size([2, 512, 64])         |
| 2263    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(5)          | input_1             | qint16        | 0.0000680 | 1.1032057    | 2.2290647     | 1.3745120    | 0.0839208        | torch.Size([2, 512, 1])          |
| 2263    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(5)          | output              | qint16        | 0.0002366 | -0.7000148   | 7.7517352     | -0.0000098   | 0.9993765        | torch.Size([2, 512, 64])         |
| 2264    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(5)     | input               | torch.float32 |           | 0.7297163    | 1.2824999     | 1.0134131    | 0.0161719        | torch.Size([64])                 |
| 2264    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(5)     | output              | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 2265    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(5)       | input_0             | qint16        | 0.0002366 | -0.7000148   | 7.7517352     | -0.0000098   | 0.9993765        | torch.Size([2, 512, 64])         |
| 2265    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(5)       | input_1             | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 2265    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(5)       | output              | qint16        | 0.0001954 | -0.8734208   | 5.6565237     | -0.0227906   | 0.7966160        | torch.Size([2, 512, 64])         |
| 2266    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(5)       | input               | torch.float32 |           | -0.2385408   | 0.3192695     | 0.0900053    | 0.0129013        | torch.Size([64])                 |
| 2266    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(5)       | output              | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 2267    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(5)         | input_0             | qint16        | 0.0001954 | -0.8734208   | 5.6565237     | -0.0227906   | 0.7966160        | torch.Size([2, 512, 64])         |
| 2267    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(5)         | input_1             | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 2267    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(5)         | output              | qint8         | 0.0462055 | -0.8316998   | 5.8218985     | 0.0674167    | 0.7319239        | torch.Size([2, 512, 64])         |
| 2268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_0             | qint8         | 0.0587279 | -0.8221908   | 7.2822618     | 0.0712888    | 0.8683364        | torch.Size([2, 512, 128])        |
| 2268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_1             | qint8         | 0.0385920 | -1.6980467   | 4.9011803     | 0.0238491    | 1.3918434        | torch.Size([2, 512, 32])         |
| 2268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_2             | qint8         | 0.0373904 | -0.9721510   | 4.7485838     | 0.0226422    | 0.8426588        | torch.Size([2, 512, 32])         |
| 2268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_3             | qint8         | 0.0462055 | -0.8316998   | 5.8218985     | 0.0674167    | 0.7319239        | torch.Size([2, 512, 64])         |
| 2268    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | output              | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0614475    | 0.8942024        | torch.Size([2, 512, 256])        |
| 2269    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(8)                                 | input               | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 2269    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(8)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 2269    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(8)                                 | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 2270    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.28.query_cat                          | input_0             | qint8         | 0.0285185 | -3.6503737   | 3.5077810     | 0.0031838    | 0.8414007        | torch.Size([2, 512, 256])        |
| 2270    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.28.query_cat                          | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0614475    | 0.8942024        | torch.Size([2, 512, 256])        |
| 2270    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.28.query_cat                          | output              | qint8         | 0.0541171 | -3.6258445   | 6.8728695     | 0.0351441    | 0.8668307        | torch.Size([2, 512, 512])        |
| 2271    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.28.key_cat                            | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 2271    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.28.key_cat                            | input_1             | qint8         | 0.0569265 | -1.0246774   | 5.3510933     | 0.0736042    | 0.8488365        | torch.Size([2, 256, 256])        |
| 2271    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.28.key_cat                            | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 2272    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0541171 | -3.6258445   | 6.8728695     | 0.0351441    | 0.8668307        | torch.Size([2, 512, 512])        |
| 2272    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0541171 | -3.6258445   | 6.8728695     | 0.0351441    | 0.8668307        | torch.Size([512, 2, 512])        |
| 2273    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 2273    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2274    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 2274    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2275    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | input_0             | qint8         | 0.0541171 | -3.6258445   | 6.8728695     | 0.0351441    | 0.8668307        | torch.Size([512, 2, 512])        |
| 2275    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | output              | qint8         | 0.0541171 | -3.6258445   | 6.8728695     | 0.0351441    | 0.8668307        | torch.Size([512, 2, 512])        |
| 2276    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2276    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2277    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2277    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2278    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.q_proj                        | input               | qint8         | 0.0541171 | -3.6258445   | 6.8728695     | 0.0351441    | 0.8668307        | torch.Size([512, 2, 512])        |
| 2278    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.q_proj                        | weight              | torch.float32 |           | -0.4073947   | 0.3189994     | 0.0001346    | 0.0033978        | torch.Size([512, 512])           |
| 2278    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.q_proj                        | bias                | torch.float32 |           | -0.0915100   | 0.0791734     | -0.0000095   | 0.0008503        | torch.Size([512])                |
| 2278    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.q_proj                        | output              | qint8         | 0.0920164 | -11.7780981  | 11.6860819    | 0.0530518    | 9.6438074        | torch.Size([512, 2, 512])        |
| 2279    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.k_proj                        | input               | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2279    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.k_proj                        | weight              | torch.float32 |           | -0.4692126   | 0.5299173     | -0.0000477   | 0.0036618        | torch.Size([512, 512])           |
| 2279    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.k_proj                        | bias                | torch.float32 |           | -0.0043523   | 0.0039338     | -0.0000140   | 0.0000007        | torch.Size([512])                |
| 2279    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.k_proj                        | output              | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([256, 2, 512])        |
| 2280    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.v_proj                        | input               | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2280    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.v_proj                        | weight              | torch.float32 |           | -0.3048484   | 0.3328977     | -0.0000697   | 0.0014966        | torch.Size([512, 512])           |
| 2280    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.v_proj                        | bias                | torch.float32 |           | -0.0813287   | 0.0743355     | -0.0004657   | 0.0005773        | torch.Size([512])                |
| 2280    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.v_proj                        | output              | qint8         | 0.0099533 | -0.0796268   | 0.0696734     | -0.0002722   | 0.0005986        | torch.Size([256, 2, 512])        |
| 2281    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | input_0             | qint8         | 0.0920164 | -11.7780981  | 11.6860819    | 0.0530518    | 9.6438074        | torch.Size([512, 2, 512])        |
| 2281    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | output              | qint8         | 0.0920164 | -11.7780981  | 11.6860819    | 0.0530518    | 9.6438074        | torch.Size([512, 16, 64])        |
| 2282    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0920164 | -11.7780981  | 11.6860819    | 0.0530518    | 9.6438074        | torch.Size([512, 16, 64])        |
| 2282    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0920164 | -11.7780981  | 11.6860819    | 0.0530518    | 9.6438074        | torch.Size([16, 512, 64])        |
| 2283    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | input_0             | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([256, 2, 512])        |
| 2283    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | output              | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([256, 16, 64])        |
| 2284    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([256, 16, 64])        |
| 2284    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([16, 256, 64])        |
| 2285    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | input_0             | qint8         | 0.0099533 | -0.0796268   | 0.0696734     | -0.0002722   | 0.0005986        | torch.Size([256, 2, 512])        |
| 2285    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | output              | qint8         | 0.0099533 | -0.0796268   | 0.0696734     | -0.0002722   | 0.0005986        | torch.Size([256, 16, 64])        |
| 2286    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0099533 | -0.0796268   | 0.0696734     | -0.0002722   | 0.0005986        | torch.Size([256, 16, 64])        |
| 2286    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0099533 | -0.0796268   | 0.0696734     | -0.0002722   | 0.0005986        | torch.Size([16, 256, 64])        |
| 2287    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.28.attn.q_scale_mul                   | input_0             | qint8         | 0.0920164 | -11.7780981  | 11.6860819    | 0.0530518    | 9.6438074        | torch.Size([16, 512, 64])        |
| 2287    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.28.attn.q_scale_mul                   | output              | qint8         | 0.0115020 | -1.4722623   | 1.4607602     | 0.0066315    | 0.1506845        | torch.Size([16, 512, 64])        |
| 2288    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([16, 256, 64])        |
| 2288    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([16, 64, 256])        |
| 2289    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.28.attn.matmul                        | input_0             | qint8         | 0.0115020 | -1.4722623   | 1.4607602     | 0.0066315    | 0.1506845        | torch.Size([16, 512, 64])        |
| 2289    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.28.attn.matmul                        | input_1             | qint8         | 0.0800077 | -6.3206100   | 5.1204944     | -0.0337533   | 3.9979730        | torch.Size([16, 64, 256])        |
| 2289    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.28.attn.matmul                        | output              | qint8         | 1.2229125 | -81.9351425  | 96.6100922    | 0.9213643    | 623.6292725      | torch.Size([16, 512, 256])       |
| 2290    | torch.Tensor.max                                                            | head.layers.28.attn.softmax                       | input               | qint8         | 1.2229125 | -81.9351425  | 96.6100922    | 0.9213643    | 623.6292725      | torch.Size([16, 512, 256])       |
| 2290    | torch.Tensor.max                                                            | head.layers.28.attn.softmax                       | output_0            | qint8         | 1.2229125 | -81.9351425  | 96.6100922    | 0.9213645    | 623.7052002      | torch.Size([16, 512, 1])         |
| 2290    | torch.Tensor.max                                                            | head.layers.28.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 1])         |
| 2291    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.28.attn.softmax.sub                   | input_0             | qint8         | 1.2229125 | -81.9351425  | 96.6100922    | 0.9213643    | 623.6292725      | torch.Size([16, 512, 256])       |
| 2291    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.28.attn.softmax.sub                   | input_1             | qint8         | 1.2229125 | -81.9351425  | 96.6100922    | 0.9213645    | 623.7052002      | torch.Size([16, 512, 1])         |
| 2291    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.28.attn.softmax.sub                   | output              | qint16        | 0.0097038 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2292    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.28.attn.softmax.exp                   | input               | qint16        | 0.0097038 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2292    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.28.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2293    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.28.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2293    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.28.attn.softmax.sum                   | output              | qint16        | 0.0037545 | 123.0234146  | 123.0234146   | 123.0234146  | 0.0000000        | torch.Size([16, 512, 1])         |
| 2294    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.28.attn.softmax.reciprocal            | input               | qint16        | 0.0037545 | 123.0234146  | 123.0234146   | 123.0234146  | 0.0000000        | torch.Size([16, 512, 1])         |
| 2294    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.28.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0081178    | 0.0081178     | 0.0081178    | 0.0000000        | torch.Size([16, 512, 1])         |
| 2295    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.28.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2295    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.28.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0081178    | 0.0081178     | 0.0081178    | 0.0000000        | torch.Size([16, 512, 1])         |
| 2295    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.28.attn.softmax.mul                   | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2296    | torch.nn.modules.dropout.Dropout                                            | head.layers.28.attn.attention_drop                | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2296    | torch.nn.modules.dropout.Dropout                                            | head.layers.28.attn.attention_drop                | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2297    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.28.attn.attn_matmul                   | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2297    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.28.attn.attn_matmul                   | input_1             | qint8         | 0.0099533 | -0.0796268   | 0.0696734     | -0.0002722   | 0.0005986        | torch.Size([16, 256, 64])        |
| 2297    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.28.attn.attn_matmul                   | output              | qint8         | 0.0106407 | -0.1596101   | 0.1383287     | -0.0006650   | 0.0025444        | torch.Size([16, 512, 64])        |
| 2298    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0106407 | -0.1596101   | 0.1383287     | -0.0006650   | 0.0025444        | torch.Size([16, 512, 64])        |
| 2298    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0106407 | -0.1596101   | 0.1383287     | -0.0006650   | 0.0025444        | torch.Size([512, 16, 64])        |
| 2299    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | input_0             | qint8         | 0.0106407 | -0.1596101   | 0.1383287     | -0.0006650   | 0.0025444        | torch.Size([512, 16, 64])        |
| 2299    | torch.Tensor.reshape                                                        | head.layers.28.attn                               | output              | qint8         | 0.0106407 | -0.1596101   | 0.1383287     | -0.0006650   | 0.0025444        | torch.Size([512, 2, 512])        |
| 2300    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.out_proj                      | input               | qint8         | 0.0106407 | -0.1596101   | 0.1383287     | -0.0006650   | 0.0025444        | torch.Size([512, 2, 512])        |
| 2300    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.out_proj                      | weight              | torch.float32 |           | -0.2395778   | 0.2118238     | -0.0001136   | 0.0023239        | torch.Size([512, 512])           |
| 2300    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.out_proj                      | bias                | torch.float32 |           | -0.2437576   | 0.2574523     | 0.0090795    | 0.0067918        | torch.Size([512])                |
| 2300    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.28.attn.out_proj                      | output              | qint8         | 0.0107470 | -0.6125816   | 0.4298818     | 0.0117546    | 0.0264618        | torch.Size([512, 2, 512])        |
| 2301    | torch.Tensor.view                                                           | head.layers.28.attn                               | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2301    | torch.Tensor.view                                                           | head.layers.28.attn                               | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 2302    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.28.attn.attn_weights_mean             | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 2302    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.28.attn.attn_weights_mean             | output              | qint8         | 0.0033470 | 0.0066940    | 0.0066940     | 0.0066940    | 0.0000000        | torch.Size([2, 512, 256])        |
| 2303    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | input_0             | qint8         | 0.0107470 | -0.6125816   | 0.4298818     | 0.0117546    | 0.0264618        | torch.Size([512, 2, 512])        |
| 2303    | torch.Tensor.transpose                                                      | head.layers.28.attn                               | output              | qint8         | 0.0107470 | -0.6125816   | 0.4298818     | 0.0117546    | 0.0264618        | torch.Size([2, 512, 512])        |
| 2304    | torch.nn.modules.dropout.Dropout                                            | head.layers.28.dropout                            | input               | qint8         | 0.0107470 | -0.6125816   | 0.4298818     | 0.0117546    | 0.0264618        | torch.Size([2, 512, 512])        |
| 2304    | torch.nn.modules.dropout.Dropout                                            | head.layers.28.dropout                            | output              | qint8         | 0.0107470 | -0.6125816   | 0.4298818     | 0.0117546    | 0.0264618        | torch.Size([2, 512, 512])        |
| 2305    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.28.add                                | input_0             | qint8         | 0.0541171 | -3.6258445   | 6.8728695     | 0.0351441    | 0.8668307        | torch.Size([2, 512, 512])        |
| 2305    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.28.add                                | input_1             | qint8         | 0.0107470 | -0.6125816   | 0.4298818     | 0.0117546    | 0.0264618        | torch.Size([2, 512, 512])        |
| 2305    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.28.add                                | output              | qint8         | 0.0500488 | -3.7036095   | 6.3561945     | 0.0463830    | 0.8290052        | torch.Size([2, 512, 512])        |
| 2306    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(8)                                  | input               | qint8         | 0.0500488 | -3.7036095   | 6.3561945     | 0.0463830    | 0.8290052        | torch.Size([2, 512, 512])        |
| 2306    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(8)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 2306    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(8)                                  | output              | qint16        | 0.0015259 | -5.6655884   | 5.2093506     | 0.0024307    | 0.8686829        | torch.Size([2, 512, 256])        |
| 2307    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(9)                                 | input               | qint16        | 0.0015259 | -5.6655884   | 5.2093506     | 0.0024307    | 0.8686829        | torch.Size([2, 512, 256])        |
| 2307    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(9)                                 | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 2307    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(9)                                 | output              | qint16        | 0.0001526 | -3.7730408   | 3.1639099     | 0.0004520    | 0.0620639        | torch.Size([2, 512, 512])        |
| 2308    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.29.query_cat                          | input_0             | qint16        | 0.0015259 | -5.6655884   | 5.2093506     | 0.0024307    | 0.8686829        | torch.Size([2, 512, 256])        |
| 2308    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.29.query_cat                          | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0614475    | 0.8942024        | torch.Size([2, 512, 256])        |
| 2308    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.29.query_cat                          | output              | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([2, 512, 512])        |
| 2309    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.29.key_cat                            | input_0             | qint16        | 0.0015259 | -5.6655884   | 5.2093506     | 0.0024307    | 0.8686829        | torch.Size([2, 512, 256])        |
| 2309    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.29.key_cat                            | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0614475    | 0.8942024        | torch.Size([2, 512, 256])        |
| 2309    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.29.key_cat                            | output              | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([2, 512, 512])        |
| 2310    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([2, 512, 512])        |
| 2310    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2311    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([2, 512, 512])        |
| 2311    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2312    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint16        | 0.0001526 | -3.7730408   | 3.1639099     | 0.0004520    | 0.0620639        | torch.Size([2, 512, 512])        |
| 2312    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint16        | 0.0001526 | -3.7730408   | 3.1639099     | 0.0004520    | 0.0620639        | torch.Size([512, 2, 512])        |
| 2313    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | input_0             | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2313    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | output              | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2314    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | input_0             | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2314    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | output              | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2315    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | input_0             | qint16        | 0.0001526 | -3.7730408   | 3.1639099     | 0.0004520    | 0.0620639        | torch.Size([512, 2, 512])        |
| 2315    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | output              | qint16        | 0.0001526 | -3.7730408   | 3.1639099     | 0.0004520    | 0.0620639        | torch.Size([512, 2, 512])        |
| 2316    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.q_proj                        | input               | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2316    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.q_proj                        | weight              | torch.float32 |           | -0.3925455   | 0.4585033     | 0.0001725    | 0.0026408        | torch.Size([512, 512])           |
| 2316    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.q_proj                        | bias                | torch.float32 |           | -0.0954414   | 0.0812263     | -0.0016288   | 0.0003734        | torch.Size([512])                |
| 2316    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.q_proj                        | output              | qint8         | 0.0613284 | -7.8500342   | 7.7887058     | -0.0506161   | 2.7215457        | torch.Size([512, 2, 512])        |
| 2317    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.k_proj                        | input               | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([512, 2, 512])        |
| 2317    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.k_proj                        | weight              | torch.float32 |           | -0.6571054   | 0.6037697     | -0.0000865   | 0.0031884        | torch.Size([512, 512])           |
| 2317    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.k_proj                        | bias                | torch.float32 |           | -0.1333090   | 0.1077095     | -0.0008078   | 0.0002287        | torch.Size([512])                |
| 2317    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.k_proj                        | output              | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([512, 2, 512])        |
| 2318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.v_proj                        | input               | qint16        | 0.0001526 | -3.7730408   | 3.1639099     | 0.0004520    | 0.0620639        | torch.Size([512, 2, 512])        |
| 2318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.v_proj                        | weight              | torch.float32 |           | -0.2302573   | 0.2758068     | -0.0000755   | 0.0018357        | torch.Size([512, 512])           |
| 2318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.v_proj                        | bias                | torch.float32 |           | -0.3465908   | 0.3370203     | -0.0008104   | 0.0041902        | torch.Size([512])                |
| 2318    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.v_proj                        | output              | qint8         | 0.0274525 | -3.5139227   | 3.4864702     | -0.0029727   | 0.1544293        | torch.Size([512, 2, 512])        |
| 2319    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | input_0             | qint8         | 0.0613284 | -7.8500342   | 7.7887058     | -0.0506161   | 2.7215457        | torch.Size([512, 2, 512])        |
| 2319    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | output              | qint8         | 0.0613284 | -7.8500342   | 7.7887058     | -0.0506161   | 2.7215457        | torch.Size([512, 16, 64])        |
| 2320    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0613284 | -7.8500342   | 7.7887058     | -0.0506161   | 2.7215457        | torch.Size([512, 16, 64])        |
| 2320    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0613284 | -7.8500342   | 7.7887058     | -0.0506161   | 2.7215457        | torch.Size([16, 512, 64])        |
| 2321    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | input_0             | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([512, 2, 512])        |
| 2321    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | output              | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([512, 16, 64])        |
| 2322    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([512, 16, 64])        |
| 2322    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([16, 512, 64])        |
| 2323    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | input_0             | qint8         | 0.0274525 | -3.5139227   | 3.4864702     | -0.0029727   | 0.1544293        | torch.Size([512, 2, 512])        |
| 2323    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | output              | qint8         | 0.0274525 | -3.5139227   | 3.4864702     | -0.0029727   | 0.1544293        | torch.Size([512, 16, 64])        |
| 2324    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0274525 | -3.5139227   | 3.4864702     | -0.0029727   | 0.1544293        | torch.Size([512, 16, 64])        |
| 2324    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0274525 | -3.5139227   | 3.4864702     | -0.0029727   | 0.1544293        | torch.Size([16, 512, 64])        |
| 2325    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.29.attn.q_scale_mul                   | input_0             | qint8         | 0.0613284 | -7.8500342   | 7.7887058     | -0.0506161   | 2.7215457        | torch.Size([16, 512, 64])        |
| 2325    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.29.attn.q_scale_mul                   | output              | qint8         | 0.0076660 | -0.9812543   | 0.9735882     | -0.0063270   | 0.0425242        | torch.Size([16, 512, 64])        |
| 2326    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([16, 512, 64])        |
| 2326    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([16, 64, 512])        |
| 2327    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.29.attn.matmul                        | input_0             | qint8         | 0.0076660 | -0.9812543   | 0.9735882     | -0.0063270   | 0.0425242        | torch.Size([16, 512, 64])        |
| 2327    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.29.attn.matmul                        | input_1             | qint8         | 0.0956884 | -12.2481117  | 12.1524229    | 0.0378619    | 4.5232253        | torch.Size([16, 64, 512])        |
| 2327    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.29.attn.matmul                        | output              | qint8         | 0.6018751 | -75.8362579  | 76.4381332    | -0.8519042   | 145.6926727      | torch.Size([16, 512, 512])       |
| 2328    | torch.Tensor.max                                                            | head.layers.29.attn.softmax                       | input               | qint8         | 0.6018751 | -75.8362579  | 76.4381332    | -0.8519042   | 145.6926727      | torch.Size([16, 512, 512])       |
| 2328    | torch.Tensor.max                                                            | head.layers.29.attn.softmax                       | output_0            | qint8         | 0.6018751 | 1.2037501    | 76.4381332    | 18.6121330   | 284.5298462      | torch.Size([16, 512, 1])         |
| 2328    | torch.Tensor.max                                                            | head.layers.29.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 511.0000000   | 258.3272705  | 11848.3242188    | torch.Size([16, 512, 1])         |
| 2329    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.29.attn.softmax.sub                   | input_0             | qint8         | 0.6018751 | -75.8362579  | 76.4381332    | -0.8519042   | 145.6926727      | torch.Size([16, 512, 512])       |
| 2329    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.29.attn.softmax.sub                   | input_1             | qint8         | 0.6018751 | 1.2037501    | 76.4381332    | 18.6121330   | 284.5298462      | torch.Size([16, 512, 1])         |
| 2329    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.29.attn.softmax.sub                   | output              | qint16        | 0.0046970 | -149.2659760 | 0.0000000     | -19.4641304  | 436.5587463      | torch.Size([16, 512, 512])       |
| 2330    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.29.attn.softmax.exp                   | input               | qint16        | 0.0046970 | -149.2659760 | 0.0000000     | -19.4641304  | 436.5587463      | torch.Size([16, 512, 512])       |
| 2330    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.29.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0284190    | 0.0133124        | torch.Size([16, 512, 512])       |
| 2331    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.29.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0284190    | 0.0133124        | torch.Size([16, 512, 512])       |
| 2331    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.29.attn.softmax.sum                   | output              | qint16        | 0.0039759 | 1.0019240    | 130.2779541   | 14.5490646   | 443.5452881      | torch.Size([16, 512, 1])         |
| 2332    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.29.attn.softmax.reciprocal            | input               | qint16        | 0.0039759 | 1.0019240    | 130.2779541   | 14.5490646   | 443.5452881      | torch.Size([16, 512, 1])         |
| 2332    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.29.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0076905    | 0.9980927     | 0.2127824    | 0.0510637        | torch.Size([16, 512, 1])         |
| 2333    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.29.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0284190    | 0.0133124        | torch.Size([16, 512, 512])       |
| 2333    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.29.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0076905    | 0.9980927     | 0.2127824    | 0.0510637        | torch.Size([16, 512, 1])         |
| 2333    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.29.attn.softmax.mul                   | output              | qint8         | 0.0076841 | 0.0000000    | 0.9758785     | 0.0017999    | 0.0002676        | torch.Size([16, 512, 512])       |
| 2334    | torch.nn.modules.dropout.Dropout                                            | head.layers.29.attn.attention_drop                | input               | qint8         | 0.0076841 | 0.0000000    | 0.9758785     | 0.0017999    | 0.0002676        | torch.Size([16, 512, 512])       |
| 2334    | torch.nn.modules.dropout.Dropout                                            | head.layers.29.attn.attention_drop                | output              | qint8         | 0.0076841 | 0.0000000    | 0.9758785     | 0.0017999    | 0.0002676        | torch.Size([16, 512, 512])       |
| 2335    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.29.attn.attn_matmul                   | input_0             | qint8         | 0.0076841 | 0.0000000    | 0.9758785     | 0.0017999    | 0.0002676        | torch.Size([16, 512, 512])       |
| 2335    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.29.attn.attn_matmul                   | input_1             | qint8         | 0.0274525 | -3.5139227   | 3.4864702     | -0.0029727   | 0.1544293        | torch.Size([16, 512, 64])        |
| 2335    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.29.attn.attn_matmul                   | output              | qint8         | 0.0181638 | -2.3249650   | 2.3068013     | -0.0049233   | 0.0945190        | torch.Size([16, 512, 64])        |
| 2336    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0181638 | -2.3249650   | 2.3068013     | -0.0049233   | 0.0945190        | torch.Size([16, 512, 64])        |
| 2336    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0181638 | -2.3249650   | 2.3068013     | -0.0049233   | 0.0945190        | torch.Size([512, 16, 64])        |
| 2337    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | input_0             | qint8         | 0.0181638 | -2.3249650   | 2.3068013     | -0.0049233   | 0.0945190        | torch.Size([512, 16, 64])        |
| 2337    | torch.Tensor.reshape                                                        | head.layers.29.attn                               | output              | qint8         | 0.0181638 | -2.3249650   | 2.3068013     | -0.0049233   | 0.0945190        | torch.Size([512, 2, 512])        |
| 2338    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.out_proj                      | input               | qint8         | 0.0181638 | -2.3249650   | 2.3068013     | -0.0049233   | 0.0945190        | torch.Size([512, 2, 512])        |
| 2338    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.out_proj                      | weight              | torch.float32 |           | -0.2557875   | 0.2624706     | -0.0000386   | 0.0028310        | torch.Size([512, 512])           |
| 2338    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.out_proj                      | bias                | torch.float32 |           | -0.4021156   | 0.3647011     | -0.0051460   | 0.0224833        | torch.Size([512])                |
| 2338    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.29.attn.out_proj                      | output              | qint8         | 0.0265246 | -3.3951490   | 3.0238047     | -0.0296479   | 0.3934342        | torch.Size([512, 2, 512])        |
| 2339    | torch.Tensor.view                                                           | head.layers.29.attn                               | input_0             | qint8         | 0.0076841 | 0.0000000    | 0.9758785     | 0.0017999    | 0.0002676        | torch.Size([16, 512, 512])       |
| 2339    | torch.Tensor.view                                                           | head.layers.29.attn                               | output              | qint8         | 0.0076841 | 0.0000000    | 0.9758785     | 0.0017999    | 0.0002676        | torch.Size([2, 8, 512, 512])     |
| 2340    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.29.attn.attn_weights_mean             | input               | qint8         | 0.0076841 | 0.0000000    | 0.9758785     | 0.0017999    | 0.0002676        | torch.Size([2, 8, 512, 512])     |
| 2340    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.29.attn.attn_weights_mean             | output              | qint8         | 0.0011718 | 0.0000000    | 0.1488153     | 0.0018457    | 0.0000361        | torch.Size([2, 512, 512])        |
| 2341    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | input_0             | qint8         | 0.0265246 | -3.3951490   | 3.0238047     | -0.0296479   | 0.3934342        | torch.Size([512, 2, 512])        |
| 2341    | torch.Tensor.transpose                                                      | head.layers.29.attn                               | output              | qint8         | 0.0265246 | -3.3951490   | 3.0238047     | -0.0296479   | 0.3934342        | torch.Size([2, 512, 512])        |
| 2342    | torch.nn.modules.dropout.Dropout                                            | head.layers.29.dropout                            | input               | qint8         | 0.0265246 | -3.3951490   | 3.0238047     | -0.0296479   | 0.3934342        | torch.Size([2, 512, 512])        |
| 2342    | torch.nn.modules.dropout.Dropout                                            | head.layers.29.dropout                            | output              | qint8         | 0.0265246 | -3.3951490   | 3.0238047     | -0.0296479   | 0.3934342        | torch.Size([2, 512, 512])        |
| 2343    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.29.add                                | input_0             | qint8         | 0.0545808 | -5.6764026   | 6.9317608     | 0.0354508    | 0.8796363        | torch.Size([2, 512, 512])        |
| 2343    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.29.add                                | input_1             | qint8         | 0.0265246 | -3.3951490   | 3.0238047     | -0.0296479   | 0.3934342        | torch.Size([2, 512, 512])        |
| 2343    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.29.add                                | output              | qint8         | 0.0596804 | -7.6390896   | 7.5794091     | 0.0058760    | 1.3042287        | torch.Size([2, 512, 512])        |
| 2344    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(9)                                  | input               | qint8         | 0.0596804 | -7.6390896   | 7.5794091     | 0.0058760    | 1.3042287        | torch.Size([2, 512, 512])        |
| 2344    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(9)                                  | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 2344    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(9)                                  | output              | qint16        | 0.0015259 | -50.0000000  | 36.6043091    | 0.0751695    | 15.2614174       | torch.Size([2, 512, 256])        |
| 2345    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.30.input_mean.mean                    | input_0             | qint16        | 0.0015259 | -50.0000000  | 36.6043091    | 0.0751695    | 15.2614174       | torch.Size([2, 512, 256])        |
| 2345    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.30.input_mean.mean                    | output              | qint16        | 0.0000065 | -0.0514595   | 0.2040529     | 0.0751694    | 0.0024847        | torch.Size([2, 512, 1])          |
| 2346    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.30.sub                                | input_0             | qint16        | 0.0015259 | -50.0000000  | 36.6043091    | 0.0751695    | 15.2614174       | torch.Size([2, 512, 256])        |
| 2346    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.30.sub                                | input_1             | qint16        | 0.0000065 | -0.0514595   | 0.2040529     | 0.0751694    | 0.0024847        | torch.Size([2, 512, 1])          |
| 2346    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.30.sub                                | output              | qint16        | 0.0016189 | -50.1129150  | 36.4927063    | -0.0000015   | 15.2589245       | torch.Size([2, 512, 256])        |
| 2347    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.mul                                | input_0             | qint16        | 0.0016189 | -50.1129150  | 36.4927063    | -0.0000015   | 15.2589245       | torch.Size([2, 512, 256])        |
| 2347    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.mul                                | input_1             | qint16        | 0.0016189 | -50.1129150  | 36.4927063    | -0.0000015   | 15.2589245       | torch.Size([2, 512, 256])        |
| 2347    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.mul                                | output              | qint16        | 0.0859441 | 0.0000000    | 2511.2858887  | 15.2583055   | 8429.9062500     | torch.Size([2, 512, 256])        |
| 2348    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.30.var_mean.mean                      | input_0             | qint16        | 0.0859441 | 0.0000000    | 2511.2858887  | 15.2583055   | 8429.9062500     | torch.Size([2, 512, 256])        |
| 2348    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.30.var_mean.mean                      | output              | qint16        | 0.0013111 | 5.8424115    | 39.4493866    | 15.2581921   | 52.3627052       | torch.Size([2, 512, 1])          |
| 2349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.30.rsqrt                              | input               | qint16        | 0.0013111 | 5.8424115    | 39.4493866    | 15.2581921   | 52.3627052       | torch.Size([2, 512, 1])          |
| 2349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.30.rsqrt                              | output              | qint16        | 0.0000153 | 0.1592162    | 0.4137197     | 0.2800365    | 0.0049642        | torch.Size([2, 512, 1])          |
| 2350    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.out_mul                            | input_0             | qint16        | 0.0016189 | -50.1129150  | 36.4927063    | -0.0000015   | 15.2589245       | torch.Size([2, 512, 256])        |
| 2350    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.out_mul                            | input_1             | qint16        | 0.0000153 | 0.1592162    | 0.4137197     | 0.2800365    | 0.0049642        | torch.Size([2, 512, 1])          |
| 2350    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.out_mul                            | output              | qint16        | 0.0002578 | -8.4482574   | 6.2230101     | 0.0000036    | 1.0000010        | torch.Size([2, 512, 256])        |
| 2351    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.30.weight_quant                       | input               | torch.float32 |           | 0.7288531    | 1.0363919     | 0.8788871    | 0.0022640        | torch.Size([256])                |
| 2351    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.30.weight_quant                       | output              | qint16        | 0.0000316 | 0.7288507    | 1.0363761     | 0.8788875    | 0.0022640        | torch.Size([256])                |
| 2352    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.weight_mul                         | input_0             | qint16        | 0.0002578 | -8.4482574   | 6.2230101     | 0.0000036    | 1.0000010        | torch.Size([2, 512, 256])        |
| 2352    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.weight_mul                         | input_1             | qint16        | 0.0000316 | 0.7288507    | 1.0363761     | 0.8788875    | 0.0022640        | torch.Size([256])                |
| 2352    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.30.weight_mul                         | output              | qint16        | 0.0001933 | -6.3343363   | 5.0745440     | 0.0025338    | 0.6637375        | torch.Size([2, 512, 256])        |
| 2353    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.30.bias_quant                         | input               | torch.float32 |           | -0.1932694   | 0.2182894     | -0.0024702   | 0.0023584        | torch.Size([256])                |
| 2353    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.30.bias_quant                         | output              | qint16        | 0.0000067 | -0.1932711   | 0.2182861     | -0.0024701   | 0.0023583        | torch.Size([256])                |
| 2354    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.30.bias_add                           | input_0             | qint16        | 0.0001933 | -6.3343363   | 5.0745440     | 0.0025338    | 0.6637375        | torch.Size([2, 512, 256])        |
| 2354    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.30.bias_add                           | input_1             | qint16        | 0.0000067 | -0.1932711   | 0.2182861     | -0.0024701   | 0.0023583        | torch.Size([256])                |
| 2354    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.30.bias_add                           | output              | qint8         | 0.0449247 | -5.7503662   | 4.9417210     | 0.0003337    | 0.6211386        | torch.Size([2, 512, 256])        |
| 2355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.kps_generator.offset               | input               | qint8         | 0.0449247 | -5.7503662   | 4.9417210     | 0.0003337    | 0.6211386        | torch.Size([2, 512, 256])        |
| 2355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.kps_generator.offset               | weight              | torch.float32 |           | -0.1990188   | 0.2361899     | -0.0012109   | 0.0039983        | torch.Size([24, 256])            |
| 2355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.kps_generator.offset               | bias                | torch.float32 |           | -0.0593897   | 0.0563206     | -0.0048383   | 0.0008348        | torch.Size([24])                 |
| 2355    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.kps_generator.offset               | output              | qint16        | 0.0001727 | -3.4499016   | 4.0751629     | -0.0218036   | 0.7588364        | torch.Size([2, 512, 24])         |
| 2356    | torch.Tensor.view                                                           | head.layers.31.kps_generator                      | input_0             | qint16        | 0.0001727 | -3.4499016   | 4.0751629     | -0.0218036   | 0.7588364        | torch.Size([2, 512, 24])         |
| 2356    | torch.Tensor.view                                                           | head.layers.31.kps_generator                      | output              | qint16        | 0.0001727 | -3.4499016   | 4.0751629     | -0.0218036   | 0.7588364        | torch.Size([2, 512, 8, 3])       |
| 2357    | torch.Tensor.__getitem__                                                    | head.layers.31.kps_generator                      | input_0             | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2357    | torch.Tensor.__getitem__                                                    | head.layers.31.kps_generator                      | output              | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.8399352    | 277.1991577      | torch.Size([2, 512, 1, 3])       |
| 2358    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.kps_generator.keypoints_add        | input_0             | qint16        | 0.0001727 | -3.4499016   | 4.0751629     | -0.0218036   | 0.7588364        | torch.Size([2, 512, 8, 3])       |
| 2358    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.kps_generator.keypoints_add        | input_1             | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.8399352    | 277.1991577      | torch.Size([2, 512, 1, 3])       |
| 2358    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.kps_generator.keypoints_add        | output              | qint16        | 0.0018274 | -55.6998672  | 55.8149948    | 0.8181009    | 278.3481140      | torch.Size([2, 512, 8, 3])       |
| 2359    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.weight_add                         | input_0             | qint8         | 0.0449247 | -5.7503662   | 4.9417210     | 0.0003337    | 0.6211386        | torch.Size([2, 512, 256])        |
| 2359    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.weight_add                         | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0614475    | 0.8942024        | torch.Size([2, 512, 256])        |
| 2359    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.weight_add                         | output              | qint8         | 0.0592737 | -6.2237420   | 7.3499432     | 0.0618453    | 1.4286745        | torch.Size([2, 512, 256])        |
| 2360    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 2360    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 2361    | torch.Tensor.reshape                                                        | head.layers.31                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 2361    | torch.Tensor.reshape                                                        | head.layers.31                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 2362    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.0                   | input               | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 2362    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.0                   | weight              | torch.float32 |           | -0.6011963   | 0.6129394     | 0.0069147    | 0.0157550        | torch.Size([256, 12])            |
| 2362    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.0                   | bias                | torch.float32 |           | -0.3291516   | 0.3449677     | 0.0006622    | 0.0283183        | torch.Size([256])                |
| 2362    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.0                   | output              | torch.float32 |           | -1.3132747   | 1.2834779     | -0.0509153   | 0.2073667        | torch.Size([2, 6, 256])          |
| 2363    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.31.camera_encoder.1                   | input               | torch.float32 |           | -1.3132747   | 1.2834779     | -0.0509153   | 0.2073667        | torch.Size([2, 6, 256])          |
| 2363    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.31.camera_encoder.1                   | output              | qint8         | 0.0097844 | 0.0000000    | 1.2426217     | 0.1706541    | 0.0640745        | torch.Size([2, 6, 256])          |
| 2364    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.input_mean.mean   | input_0             | qint8         | 0.0097844 | 0.0000000    | 1.2426217     | 0.1706541    | 0.0640745        | torch.Size([2, 6, 256])          |
| 2364    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.input_mean.mean   | output              | qint16        | 0.0000061 | 0.1111041    | 0.1959957     | 0.1706540    | 0.0008966        | torch.Size([2, 6, 1])            |
| 2365    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.2.sub               | input_0             | qint8         | 0.0097844 | 0.0000000    | 1.2426217     | 0.1706541    | 0.0640745        | torch.Size([2, 6, 256])          |
| 2365    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.2.sub               | input_1             | qint16        | 0.0000061 | 0.1111041    | 0.1959957     | 0.1706540    | 0.0008966        | torch.Size([2, 6, 1])            |
| 2365    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.2.sub               | output              | qint16        | 0.0000329 | -0.1959972   | 1.0491210     | 0.0000004    | 0.0632523        | torch.Size([2, 6, 256])          |
| 2366    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.mul               | input_0             | qint16        | 0.0000329 | -0.1959972   | 1.0491210     | 0.0000004    | 0.0632523        | torch.Size([2, 6, 256])          |
| 2366    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.mul               | input_1             | qint16        | 0.0000329 | -0.1959972   | 1.0491210     | 0.0000004    | 0.0632523        | torch.Size([2, 6, 256])          |
| 2366    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.mul               | output              | qint16        | 0.0000354 | 0.0000000    | 1.1006628     | 0.0632316    | 0.0133821        | torch.Size([2, 6, 256])          |
| 2367    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.var_mean.mean     | input_0             | qint16        | 0.0000354 | 0.0000000    | 1.1006628     | 0.0632316    | 0.0133821        | torch.Size([2, 6, 256])          |
| 2367    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.var_mean.mean     | output              | qint16        | 0.0000027 | 0.0263058    | 0.0874187     | 0.0632319    | 0.0004239        | torch.Size([2, 6, 1])            |
| 2368    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.31.camera_encoder.2.rsqrt             | input               | qint16        | 0.0000027 | 0.0263058    | 0.0874187     | 0.0632319    | 0.0004239        | torch.Size([2, 6, 1])            |
| 2368    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.31.camera_encoder.2.rsqrt             | output              | qint16        | 0.0001883 | 3.3820186    | 6.1644964     | 4.1951361    | 0.9251117        | torch.Size([2, 6, 1])            |
| 2369    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.out_mul           | input_0             | qint16        | 0.0000329 | -0.1959972   | 1.0491210     | 0.0000004    | 0.0632523        | torch.Size([2, 6, 256])          |
| 2369    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.out_mul           | input_1             | qint16        | 0.0001883 | 3.3820186    | 6.1644964     | 4.1951361    | 0.9251117        | torch.Size([2, 6, 1])            |
| 2369    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.out_mul           | output              | qint16        | 0.0001487 | -0.7114095   | 4.8716311     | 0.0000017    | 1.0000747        | torch.Size([2, 6, 256])          |
| 2370    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.2.weight_quant      | input               | torch.float32 |           | 0.7249505    | 1.2187127     | 0.9718287    | 0.0056881        | torch.Size([256])                |
| 2370    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.2.weight_quant      | output              | qint16        | 0.0000372 | 0.7249606    | 1.2186941     | 0.9718284    | 0.0056880        | torch.Size([256])                |
| 2371    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.weight_mul        | input_0             | qint16        | 0.0001487 | -0.7114095   | 4.8716311     | 0.0000017    | 1.0000747        | torch.Size([2, 6, 256])          |
| 2371    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.weight_mul        | input_1             | qint16        | 0.0000372 | 0.7249606    | 1.2186941     | 0.9718284    | 0.0056880        | torch.Size([256])                |
| 2371    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.weight_mul        | output              | qint16        | 0.0001559 | -0.8324039   | 5.1067924     | 0.0099348    | 0.9658081        | torch.Size([2, 6, 256])          |
| 2372    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.2.bias_quant        | input               | torch.float32 |           | -0.1110947   | 0.1897046     | 0.0142131    | 0.0028453        | torch.Size([256])                |
| 2372    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.2.bias_quant        | output              | qint16        | 0.0000058 | -0.1110930   | 0.1897017     | 0.0142134    | 0.0028453        | torch.Size([256])                |
| 2373    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.2.bias_add          | input_0             | qint16        | 0.0001559 | -0.8324039   | 5.1067924     | 0.0099348    | 0.9658081        | torch.Size([2, 6, 256])          |
| 2373    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.2.bias_add          | input_1             | qint16        | 0.0000058 | -0.1110930   | 0.1897017     | 0.0142134    | 0.0028453        | torch.Size([256])                |
| 2373    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.2.bias_add          | output              | qint8         | 0.0398160 | -0.9157673   | 5.0566282     | 0.0240555    | 0.9556040        | torch.Size([2, 6, 256])          |
| 2374    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.3                   | input               | qint8         | 0.0398160 | -0.9157673   | 5.0566282     | 0.0240555    | 0.9556040        | torch.Size([2, 6, 256])          |
| 2374    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.3                   | weight              | torch.float32 |           | -0.4575176   | 0.4520092     | 0.0014985    | 0.0050318        | torch.Size([256, 256])           |
| 2374    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.3                   | bias                | torch.float32 |           | -0.0873436   | 0.3426891     | -0.0051534   | 0.0021563        | torch.Size([256])                |
| 2374    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.camera_encoder.3                   | output              | torch.float32 |           | -7.9205356   | 50.0739250    | -0.6068247   | 27.7330265       | torch.Size([2, 6, 256])          |
| 2375    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.31.camera_encoder.4                   | input               | torch.float32 |           | -7.9205356   | 50.0739250    | -0.6068247   | 27.7330265       | torch.Size([2, 6, 256])          |
| 2375    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.31.camera_encoder.4                   | output              | qint8         | 0.3899494 | 0.0000000    | 49.5235748    | 1.0478621    | 22.9627953       | torch.Size([2, 6, 256])          |
| 2376    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.input_mean.mean   | input_0             | qint8         | 0.3899494 | 0.0000000    | 49.5235748    | 1.0478621    | 22.9627953       | torch.Size([2, 6, 256])          |
| 2376    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.input_mean.mean   | output              | qint16        | 0.0000340 | 0.9992556    | 1.1134728     | 1.0478635    | 0.0015029        | torch.Size([2, 6, 1])            |
| 2377    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.5.sub               | input_0             | qint8         | 0.3899494 | 0.0000000    | 49.5235748    | 1.0478621    | 22.9627953       | torch.Size([2, 6, 256])          |
| 2377    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.5.sub               | input_1             | qint16        | 0.0000340 | 0.9992556    | 1.1134728     | 1.0478635    | 0.0015029        | torch.Size([2, 6, 1])            |
| 2377    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.5.sub               | output              | qint16        | 0.0014858 | -1.1129003   | 48.4906540    | -0.0000261   | 22.9614315       | torch.Size([2, 6, 256])          |
| 2378    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.mul               | input_0             | qint16        | 0.0014858 | -1.1129003   | 48.4906540    | -0.0000261   | 22.9614315       | torch.Size([2, 6, 256])          |
| 2378    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.mul               | input_1             | qint16        | 0.0014858 | -1.1129003   | 48.4906540    | -0.0000261   | 22.9614315       | torch.Size([2, 6, 256])          |
| 2378    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.mul               | output              | qint16        | 0.0723423 | 0.0000000    | 2351.3405762  | 22.9622631   | 29982.4550781    | torch.Size([2, 6, 256])          |
| 2379    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.var_mean.mean     | input_0             | qint16        | 0.0723423 | 0.0000000    | 2351.3405762  | 22.9622631   | 29982.4550781    | torch.Size([2, 6, 256])          |
| 2379    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.var_mean.mean     | output              | qint16        | 0.0007483 | 20.4641228   | 24.5208778    | 22.9523544   | 2.1988008        | torch.Size([2, 6, 1])            |
| 2380    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.31.camera_encoder.5.rsqrt             | input               | qint16        | 0.0007483 | 20.4641228   | 24.5208778    | 22.9523544   | 2.1988008        | torch.Size([2, 6, 1])            |
| 2380    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.31.camera_encoder.5.rsqrt             | output              | qint16        | 0.0000068 | 0.2019450    | 0.2210562     | 0.2090348    | 0.0000468        | torch.Size([2, 6, 1])            |
| 2381    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.out_mul           | input_0             | qint16        | 0.0014858 | -1.1129003   | 48.4906540    | -0.0000261   | 22.9614315       | torch.Size([2, 6, 256])          |
| 2381    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.out_mul           | input_1             | qint16        | 0.0000068 | 0.2019450    | 0.2210562     | 0.2090348    | 0.0000468        | torch.Size([2, 6, 1])            |
| 2381    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.out_mul           | output              | qint16        | 0.0003011 | -0.2405602   | 9.8632708     | -0.0000613   | 1.0003927        | torch.Size([2, 6, 256])          |
| 2382    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.5.weight_quant      | input               | torch.float32 |           | 0.4651215    | 1.3983060     | 0.8868107    | 0.0178757        | torch.Size([256])                |
| 2382    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.5.weight_quant      | output              | qint16        | 0.0000427 | 0.4651419    | 1.3982847     | 0.8868114    | 0.0178754        | torch.Size([256])                |
| 2383    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.weight_mul        | input_0             | qint16        | 0.0003011 | -0.2405602   | 9.8632708     | -0.0000613   | 1.0003927        | torch.Size([2, 6, 256])          |
| 2383    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.weight_mul        | input_1             | qint16        | 0.0000427 | 0.4651419    | 1.3982847     | 0.8868114    | 0.0178754        | torch.Size([256])                |
| 2383    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.weight_mul        | output              | qint16        | 0.0002254 | -0.3363495   | 7.3850365     | -0.0268077   | 0.5149340        | torch.Size([2, 6, 256])          |
| 2384    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.5.bias_quant        | input               | torch.float32 |           | -0.4541008   | 0.5208398     | 0.0459723    | 0.0227529        | torch.Size([256])                |
| 2384    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.camera_encoder.5.bias_quant        | output              | qint16        | 0.0000159 | -0.4541045   | 0.5208318     | 0.0459722    | 0.0227529        | torch.Size([256])                |
| 2385    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.5.bias_add          | input_0             | qint16        | 0.0002254 | -0.3363495   | 7.3850365     | -0.0268077   | 0.5149340        | torch.Size([2, 6, 256])          |
| 2385    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.5.bias_add          | input_1             | qint16        | 0.0000159 | -0.4541045   | 0.5208318     | 0.0459722    | 0.0227529        | torch.Size([256])                |
| 2385    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.5.bias_add          | output              | qint8         | 0.0561615 | -0.7300997   | 7.1325130     | 0.0185011    | 0.4764807        | torch.Size([2, 6, 256])          |
| 2386    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint8         | 0.0592737 | -6.2237420   | 7.3499432     | 0.0618453    | 1.4286745        | torch.Size([2, 512, 256])        |
| 2386    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint8         | 0.0592737 | -6.2237420   | 7.3499432     | 0.0618453    | 1.4286745        | torch.Size([2, 512, 1, 256])     |
| 2387    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint8         | 0.0561615 | -0.7300997   | 7.1325130     | 0.0185011    | 0.4764807        | torch.Size([2, 6, 256])          |
| 2387    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint8         | 0.0561615 | -0.7300997   | 7.1325130     | 0.0185011    | 0.4764807        | torch.Size([2, 1, 6, 256])       |
| 2388    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.cam_add                            | input_0             | qint8         | 0.0592737 | -6.2237420   | 7.3499432     | 0.0618453    | 1.4286745        | torch.Size([2, 512, 1, 256])     |
| 2388    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.cam_add                            | input_1             | qint8         | 0.0561615 | -0.7300997   | 7.1325130     | 0.0185011    | 0.4764807        | torch.Size([2, 1, 6, 256])       |
| 2388    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.31.cam_add                            | output              | qint8         | 0.0688446 | -5.8517900   | 8.7432623     | 0.0799851    | 1.7946118        | torch.Size([2, 512, 6, 256])     |
| 2389    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.weights_fc                         | input               | qint8         | 0.0688446 | -5.8517900   | 8.7432623     | 0.0799851    | 1.7946118        | torch.Size([2, 512, 6, 256])     |
| 2389    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.weights_fc                         | weight              | torch.float32 |           | -0.4320964   | 0.3347851     | 0.0003806    | 0.0034810        | torch.Size([64, 256])            |
| 2389    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.weights_fc                         | bias                | torch.float32 |           | -0.0894180   | 0.0804906     | -0.0091073   | 0.0015407        | torch.Size([64])                 |
| 2389    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.weights_fc                         | output              | qint8         | 0.0787158 | -10.0756254  | 5.9036870     | -0.4398302   | 6.9722404        | torch.Size([2, 512, 6, 64])      |
| 2390    | torch.Tensor.reshape                                                        | head.layers.31                                    | input_0             | qint8         | 0.0787158 | -10.0756254  | 5.9036870     | -0.4398302   | 6.9722404        | torch.Size([2, 512, 6, 64])      |
| 2390    | torch.Tensor.reshape                                                        | head.layers.31                                    | output              | qint8         | 0.0787158 | -10.0756254  | 5.9036870     | -0.4398302   | 6.9722404        | torch.Size([2, 512, 48, 8])      |
| 2391    | torch.Tensor.max                                                            | head.layers.31.weight_softmax                     | input               | qint8         | 0.0787158 | -10.0756254  | 5.9036870     | -0.4398302   | 6.9722404        | torch.Size([2, 512, 48, 8])      |
| 2391    | torch.Tensor.max                                                            | head.layers.31.weight_softmax                     | output_0            | qint8         | 0.0787158 | 1.2594532    | 5.9036870     | 3.1360934    | 0.5656060        | torch.Size([2, 512, 1, 8])       |
| 2391    | torch.Tensor.max                                                            | head.layers.31.weight_softmax                     | output_1            | torch.int64   |           | 0.0000000    | 47.0000000    | 22.6743164   | 203.6746521      | torch.Size([2, 512, 1, 8])       |
| 2392    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.weight_softmax.sub                 | input_0             | qint8         | 0.0787158 | -10.0756254  | 5.9036870     | -0.4398302   | 6.9722404        | torch.Size([2, 512, 48, 8])      |
| 2392    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.weight_softmax.sub                 | input_1             | qint8         | 0.0787158 | 1.2594532    | 5.9036870     | 3.1360934    | 0.5656060        | torch.Size([2, 512, 1, 8])       |
| 2392    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.31.weight_softmax.sub                 | output              | qint16        | 0.0004805 | -14.7986059  | 0.0000000     | -3.5759368   | 7.1114717        | torch.Size([2, 512, 48, 8])      |
| 2393    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.31.weight_softmax.exp                 | input               | qint16        | 0.0004805 | -14.7986059  | 0.0000000     | -3.5759368   | 7.1114717        | torch.Size([2, 512, 48, 8])      |
| 2393    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.31.weight_softmax.exp                 | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2013276    | 0.0900925        | torch.Size([2, 512, 48, 8])      |
| 2394    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.31.weight_softmax.sum                 | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2013276    | 0.0900925        | torch.Size([2, 512, 48, 8])      |
| 2394    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.31.weight_softmax.sum                 | output              | qint16        | 0.0007485 | 3.8345413    | 24.5259457    | 9.6636438    | 6.5189357        | torch.Size([2, 512, 1, 8])       |
| 2395    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.31.weight_softmax.reciprocal          | input               | qint16        | 0.0007485 | 3.8345413    | 24.5259457    | 9.6636438    | 6.5189357        | torch.Size([2, 512, 1, 8])       |
| 2395    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.31.weight_softmax.reciprocal          | output              | qint16        | 0.0000122 | 0.0407704    | 0.2607886     | 0.1112158    | 0.0009448        | torch.Size([2, 512, 1, 8])       |
| 2396    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.weight_softmax.mul                 | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.2013276    | 0.0900925        | torch.Size([2, 512, 48, 8])      |
| 2396    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.weight_softmax.mul                 | input_1             | qint16        | 0.0000122 | 0.0407704    | 0.2607886     | 0.1112158    | 0.0009448        | torch.Size([2, 512, 1, 8])       |
| 2396    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.weight_softmax.mul                 | output              | qint8         | 0.0021977 | 0.0000000    | 0.2615295     | 0.0207768    | 0.0010738        | torch.Size([2, 512, 48, 8])      |
| 2397    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint16        | 0.0018274 | -55.6998672  | 55.8149948    | 0.8181009    | 278.3481140      | torch.Size([2, 512, 8, 3])       |
| 2397    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint16        | 0.0018274 | -45.9268074  | 51.0819664    | 1.2504388    | 285.3941956      | torch.Size([2, 512, 8, 1])       |
| 2398    | torch.ones_like                                                             | head.layers.31                                    | input               | qint16        | 0.0018274 | -45.9268074  | 51.0819664    | 1.2504388    | 285.3941956      | torch.Size([2, 512, 8, 1])       |
| 2398    | torch.ones_like                                                             | head.layers.31                                    | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2399    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.point_quant_stub                   | input               | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2399    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.31.point_quant_stub                   | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2400    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.point_cat                          | input_0             | qint16        | 0.0018274 | -55.6998672  | 55.8149948    | 0.8181009    | 278.3481140      | torch.Size([2, 512, 8, 3])       |
| 2400    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.point_cat                          | input_1             | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2400    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.point_cat                          | output              | qint16        | 0.0018311 | -55.7006836  | 55.8142090    | 0.8635311    | 208.7652588      | torch.Size([2, 512, 8, 4])       |
| 2401    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 2401    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2402    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint16        | 0.0018311 | -55.7006836  | 55.8142090    | 0.8635311    | 208.7652588      | torch.Size([2, 512, 8, 4])       |
| 2402    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint16        | 0.0018311 | -55.7006836  | 55.8142090    | 0.8635311    | 208.7652588      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2403    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.point_matmul                       | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2403    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.point_matmul                       | input_1             | qint16        | 0.0018311 | -55.7006836  | 55.8142090    | 0.8635311    | 208.7652588      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2403    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.point_matmul                       | output              | qint16        | 0.0027279 | -82.0810318  | 82.2474365    | 0.2612448    | 94.9332657       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2404    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.31.point_sum                          | input               | qint16        | 0.0027279 | -82.0810318  | 82.2474365    | 0.2612448    | 94.9332657       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2404    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.31.point_sum                          | output              | qint16        | 0.0029318 | -88.3353271  | 91.5075455    | 1.0453070    | 373.0012207      | torch.Size([2, 6, 512, 8, 4])    |
| 2405    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint16        | 0.0029318 | -88.3353271  | 91.5075455    | 1.0453070    | 373.0012207      | torch.Size([2, 6, 512, 8, 4])    |
| 2405    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint16        | 0.0029318 | -56.6307755  | 55.2000542    | -0.5188924   | 411.7864075      | torch.Size([2, 6, 512, 8, 1])    |
| 2406    | torch.clamp                                                                 | head.layers.31                                    | input               | qint16        | 0.0029318 | -56.6307755  | 55.2000542    | -0.5188924   | 411.7864075      | torch.Size([2, 6, 512, 8, 1])    |
| 2406    | torch.clamp                                                                 | head.layers.31                                    | output              | qint16        | 0.0029318 | 0.0000000    | 55.2000542    | 7.2163525    | 147.4134064      | torch.Size([2, 6, 512, 8, 1])    |
| 2407    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.31.reciprocal_op                      | input               | qint16        | 0.0029318 | 0.0000000    | 55.2000542    | 7.2163525    | 147.4134064      | torch.Size([2, 6, 512, 8, 1])    |
| 2407    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.31.reciprocal_op                      | output              | qint16        | 0.0003357 | 0.0181274    | 10.9996643    | 6.1727486    | 28.1891575       | torch.Size([2, 6, 512, 8, 1])    |
| 2408    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint16        | 0.0029318 | -88.3353271  | 91.5075455    | 1.0453070    | 373.0012207      | torch.Size([2, 6, 512, 8, 4])    |
| 2408    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint16        | 0.0029318 | -88.3353271  | 91.5075455    | 1.8501873    | 538.2429199      | torch.Size([2, 6, 512, 8, 2])    |
| 2409    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.point_mul                          | input_0             | qint16        | 0.0029318 | -88.3353271  | 91.5075455    | 1.8501873    | 538.2429199      | torch.Size([2, 6, 512, 8, 2])    |
| 2409    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.point_mul                          | input_1             | qint16        | 0.0003357 | 0.0181274    | 10.9996643    | 6.1727486    | 28.1891575       | torch.Size([2, 6, 512, 8, 1])    |
| 2409    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.point_mul                          | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.3063952    | 0.8541042        | torch.Size([2, 6, 512, 8, 2])    |
| 2410    | torch.Tensor.flatten                                                        | head.layers.31                                    | input               | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.3063952    | 0.8541042        | torch.Size([2, 6, 512, 8, 2])    |
| 2410    | torch.Tensor.flatten                                                        | head.layers.31                                    | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.3063952    | 0.8541042        | torch.Size([12, 512, 8, 2])      |
| 2411    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.31                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 2411    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.31                                    | input_1             | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.3063952    | 0.8541042        | torch.Size([12, 512, 8, 2])      |
| 2411    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.31                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([12, 256, 512, 8])    |
| 2412    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.feat_cat                           | input               | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([12, 256, 512, 8])    |
| 2412    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.feat_cat                           | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([12, 256, 512, 8])    |
| 2413    | torch.Tensor.view                                                           | head.layers.31                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([12, 256, 512, 8])    |
| 2413    | torch.Tensor.view                                                           | head.layers.31                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([2, 6, 256, 512, 8])  |
| 2414    | torch.Tensor.permute                                                        | head.layers.31                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([2, 6, 256, 512, 8])  |
| 2414    | torch.Tensor.permute                                                        | head.layers.31                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([2, 512, 6, 8, 256])  |
| 2415    | torch.Tensor.contiguous                                                     | head.layers.31                                    | input               | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165027        | torch.Size([2, 512, 6, 8, 256])  |
| 2415    | torch.Tensor.contiguous                                                     | head.layers.31                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165034        | torch.Size([2, 512, 6, 8, 256])  |
| 2416    | torch.Tensor.view                                                           | head.layers.31                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165034        | torch.Size([2, 512, 6, 8, 256])  |
| 2416    | torch.Tensor.view                                                           | head.layers.31                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165034        | torch.Size([2, 512, 48, 256])    |
| 2417    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | input_0             | qint8         | 0.0021977 | 0.0000000    | 0.2615295     | 0.0207768    | 0.0010738        | torch.Size([2, 512, 48, 8])      |
| 2417    | torch.Tensor.__getitem__                                                    | head.layers.31                                    | output              | qint8         | 0.0021977 | 0.0000000    | 0.2615295     | 0.0207768    | 0.0010738        | torch.Size([2, 512, 48, 8, 1])   |
| 2418    | torch.Tensor.reshape                                                        | head.layers.31                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165034        | torch.Size([2, 512, 48, 256])    |
| 2418    | torch.Tensor.reshape                                                        | head.layers.31                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165034        | torch.Size([2, 512, 48, 8, 32])  |
| 2419    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.feat_mul                           | input_0             | qint8         | 0.0021977 | 0.0000000    | 0.2615295     | 0.0207768    | 0.0010738        | torch.Size([2, 512, 48, 8, 1])   |
| 2419    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.feat_mul                           | input_1             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0242661    | 2.7165034        | torch.Size([2, 512, 48, 8, 32])  |
| 2419    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.31.feat_mul                           | output              | qint8         | 0.0189173 | -2.4214082   | 2.4024909     | 0.0005744    | 0.0041068        | torch.Size([2, 512, 48, 8, 32])  |
| 2420    | torch.Tensor.view                                                           | head.layers.31                                    | input_0             | qint8         | 0.0189173 | -2.4214082   | 2.4024909     | 0.0005744    | 0.0041068        | torch.Size([2, 512, 48, 8, 32])  |
| 2420    | torch.Tensor.view                                                           | head.layers.31                                    | output              | qint8         | 0.0189173 | -2.4214082   | 2.4024909     | 0.0005744    | 0.0041068        | torch.Size([2, 512, 48, 256])    |
| 2421    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.31.feat_sum                           | input               | qint8         | 0.0189173 | -2.4214082   | 2.4024909     | 0.0005744    | 0.0041068        | torch.Size([2, 512, 48, 256])    |
| 2421    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.31.feat_sum                           | output              | qint8         | 0.0341593 | -4.3723850   | 4.3382258     | 0.0276174    | 0.3969238        | torch.Size([2, 512, 256])        |
| 2422    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.output_proj                        | input               | qint8         | 0.0341593 | -4.3723850   | 4.3382258     | 0.0276174    | 0.3969238        | torch.Size([2, 512, 256])        |
| 2422    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.output_proj                        | weight              | torch.float32 |           | -0.3630883   | 0.3866604     | -0.0003614   | 0.0071088        | torch.Size([256, 256])           |
| 2422    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.output_proj                        | bias                | torch.float32 |           | -0.1024493   | 0.1036076     | 0.0021211    | 0.0015196        | torch.Size([256])                |
| 2422    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.31.output_proj                        | output              | qint8         | 0.0418014 | -5.3505855   | 5.3087840     | -0.0053445   | 0.6669006        | torch.Size([2, 512, 256])        |
| 2423    | torch.nn.modules.dropout.Dropout                                            | head.layers.31.proj_drop                          | input               | qint8         | 0.0418014 | -5.3505855   | 5.3087840     | -0.0053445   | 0.6669006        | torch.Size([2, 512, 256])        |
| 2423    | torch.nn.modules.dropout.Dropout                                            | head.layers.31.proj_drop                          | output              | qint8         | 0.0418014 | -5.3505855   | 5.3087840     | -0.0053445   | 0.6669006        | torch.Size([2, 512, 256])        |
| 2424    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.residual_op                        | input_0             | qint8         | 0.0418014 | -5.3505855   | 5.3087840     | -0.0053445   | 0.6669006        | torch.Size([2, 512, 256])        |
| 2424    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.residual_op                        | input_1             | qint8         | 0.0449247 | -5.7503662   | 4.9417210     | 0.0003337    | 0.6211386        | torch.Size([2, 512, 256])        |
| 2424    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.31.residual_op                        | output              | qint8         | 0.0443808 | -5.6807466   | 5.3256998     | -0.0025733   | 0.6412634        | torch.Size([2, 512, 512])        |
| 2425    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.input_mean.mean           | input_0             | qint8         | 0.0443808 | -5.6807466   | 5.3256998     | -0.0025733   | 0.6412634        | torch.Size([2, 512, 512])        |
| 2425    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.input_mean.mean           | output              | qint16        | 0.0000031 | -0.0985569   | 0.0484557     | -0.0025733   | 0.0003110        | torch.Size([2, 512, 1])          |
| 2426    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.32.pre_norm.sub                       | input_0             | qint8         | 0.0443808 | -5.6807466   | 5.3256998     | -0.0025733   | 0.6412634        | torch.Size([2, 512, 512])        |
| 2426    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.32.pre_norm.sub                       | input_1             | qint16        | 0.0000031 | -0.0985569   | 0.0484557     | -0.0025733   | 0.0003110        | torch.Size([2, 512, 1])          |
| 2426    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.32.pre_norm.sub                       | output              | qint16        | 0.0002213 | -5.7292356   | 5.4242721     | -0.0000021   | 0.6409525        | torch.Size([2, 512, 512])        |
| 2427    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.mul                       | input_0             | qint16        | 0.0002213 | -5.7292356   | 5.4242721     | -0.0000021   | 0.6409525        | torch.Size([2, 512, 512])        |
| 2427    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.mul                       | input_1             | qint16        | 0.0002213 | -5.7292356   | 5.4242721     | -0.0000021   | 0.6409525        | torch.Size([2, 512, 512])        |
| 2427    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.mul                       | output              | qint16        | 0.0016197 | 0.0000000    | 32.8249168    | 0.6409363    | 4.7931061        | torch.Size([2, 512, 512])        |
| 2428    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.var_mean.mean             | input_0             | qint16        | 0.0016197 | 0.0000000    | 32.8249168    | 0.6409363    | 4.7931061        | torch.Size([2, 512, 512])        |
| 2428    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.var_mean.mean             | output              | qint16        | 0.0000659 | 0.3296561    | 2.1590726     | 0.6386327    | 0.0719202        | torch.Size([2, 512, 1])          |
| 2429    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.32.pre_norm.rsqrt                     | input               | qint16        | 0.0000659 | 0.3296561    | 2.1590726     | 0.6386327    | 0.0719202        | torch.Size([2, 512, 1])          |
| 2429    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.32.pre_norm.rsqrt                     | output              | qint16        | 0.0000534 | 0.6805519    | 1.7416699     | 1.3157661    | 0.0531440        | torch.Size([2, 512, 1])          |
| 2430    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.out_mul                   | input_0             | qint16        | 0.0002213 | -5.7292356   | 5.4242721     | -0.0000021   | 0.6409525        | torch.Size([2, 512, 512])        |
| 2430    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.out_mul                   | input_1             | qint16        | 0.0000534 | 0.6805519    | 1.7416699     | 1.3157661    | 0.0531440        | torch.Size([2, 512, 1])          |
| 2430    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.out_mul                   | output              | qint16        | 0.0002837 | -9.0222149   | 8.0294247     | -0.0000070   | 1.0010949        | torch.Size([2, 512, 512])        |
| 2431    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.32.pre_norm.weight_quant              | input               | torch.float32 |           | 0.6255694    | 1.5848855     | 1.0149837    | 0.0841199        | torch.Size([512])                |
| 2431    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.32.pre_norm.weight_quant              | output              | qint16        | 0.0000484 | 0.6255866    | 1.5848613     | 1.0149839    | 0.0841192        | torch.Size([512])                |
| 2432    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.weight_mul                | input_0             | qint16        | 0.0002837 | -9.0222149   | 8.0294247     | -0.0000070   | 1.0010949        | torch.Size([2, 512, 512])        |
| 2432    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.weight_mul                | input_1             | qint16        | 0.0000484 | 0.6255866    | 1.5848613     | 1.0149839    | 0.0841192        | torch.Size([512])                |
| 2432    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.weight_mul                | output              | qint16        | 0.0002046 | -6.5042529   | 5.9166770     | 0.0090046    | 0.7774164        | torch.Size([2, 512, 512])        |
| 2433    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.32.pre_norm.bias_quant                | input               | torch.float32 |           | -0.1540265   | 0.1764562     | -0.0054709   | 0.0019368        | torch.Size([512])                |
| 2433    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.32.pre_norm.bias_quant                | output              | qint16        | 0.0000054 | -0.1540246   | 0.1764535     | -0.0054710   | 0.0019368        | torch.Size([512])                |
| 2434    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.32.pre_norm.bias_add                  | input_0             | qint16        | 0.0002046 | -6.5042529   | 5.9166770     | 0.0090046    | 0.7774164        | torch.Size([2, 512, 512])        |
| 2434    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.32.pre_norm.bias_add                  | input_1             | qint16        | 0.0000054 | -0.1540246   | 0.1764535     | -0.0054710   | 0.0019368        | torch.Size([512])                |
| 2434    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.32.pre_norm.bias_add                  | output              | qint8         | 0.0430471 | -5.5100269   | 5.4669800     | 0.0037804    | 0.7665974        | torch.Size([2, 512, 512])        |
| 2435    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.0.0                         | input               | qint8         | 0.0430471 | -5.5100269   | 5.4669800     | 0.0037804    | 0.7665974        | torch.Size([2, 512, 512])        |
| 2435    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.0.0                         | weight              | torch.float32 |           | -0.4811940   | 0.5423552     | -0.0007460   | 0.0070652        | torch.Size([1024, 512])          |
| 2435    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.0.0                         | bias                | torch.float32 |           | -0.2153661   | 0.0513395     | -0.0674493   | 0.0012690        | torch.Size([1024])               |
| 2435    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.0.0                         | output              | torch.float32 |           | -19.8684921  | 13.2842655    | -3.3996668   | 9.6362991        | torch.Size([2, 512, 1024])       |
| 2436    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.32.activate                           | input               | torch.float32 |           | -19.8684921  | 13.2842655    | -3.3996668   | 9.6362991        | torch.Size([2, 512, 1024])       |
| 2436    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.32.activate                           | output              | qint8         | 0.0875963 | 0.0000000    | 11.1247320    | 0.2218855    | 0.7046368        | torch.Size([2, 512, 1024])       |
| 2437    | torch.nn.modules.dropout.Dropout                                            | head.layers.32.layers.0.2                         | input               | qint8         | 0.0875963 | 0.0000000    | 11.1247320    | 0.2218855    | 0.7046368        | torch.Size([2, 512, 1024])       |
| 2437    | torch.nn.modules.dropout.Dropout                                            | head.layers.32.layers.0.2                         | output              | qint8         | 0.0875963 | 0.0000000    | 11.1247320    | 0.2218855    | 0.7046368        | torch.Size([2, 512, 1024])       |
| 2438    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.1                           | input               | qint8         | 0.0875963 | 0.0000000    | 11.1247320    | 0.2218855    | 0.7046368        | torch.Size([2, 512, 1024])       |
| 2438    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.1                           | weight              | torch.float32 |           | -0.5106656   | 0.5106861     | 0.0000796    | 0.0075136        | torch.Size([256, 1024])          |
| 2438    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.1                           | bias                | torch.float32 |           | -0.1172329   | 0.0823930     | -0.0002596   | 0.0010212        | torch.Size([256])                |
| 2438    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.layers.1                           | output              | qint8         | 0.1822734 | -22.2373600  | 22.6019077    | 0.0594650    | 16.9240665       | torch.Size([2, 512, 256])        |
| 2439    | torch.nn.modules.dropout.Dropout                                            | head.layers.32.layers.2                           | input               | qint8         | 0.1822734 | -22.2373600  | 22.6019077    | 0.0594650    | 16.9240665       | torch.Size([2, 512, 256])        |
| 2439    | torch.nn.modules.dropout.Dropout                                            | head.layers.32.layers.2                           | output              | qint8         | 0.1822734 | -22.2373600  | 22.6019077    | 0.0594650    | 16.9240665       | torch.Size([2, 512, 256])        |
| 2440    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.identity_fc                        | input               | qint8         | 0.0430471 | -5.5100269   | 5.4669800     | 0.0037804    | 0.7665974        | torch.Size([2, 512, 512])        |
| 2440    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.identity_fc                        | weight              | torch.float32 |           | -0.4469438   | 0.4948564     | -0.0002955   | 0.0082387        | torch.Size([256, 512])           |
| 2440    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.identity_fc                        | bias                | torch.float32 |           | -0.1482334   | 0.0840410     | -0.0011662   | 0.0011191        | torch.Size([256])                |
| 2440    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.32.identity_fc                        | output              | torch.float32 |           | -15.5063314  | 19.6893616    | 0.0019777    | 11.2585630       | torch.Size([2, 512, 256])        |
| 2441    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.32.short_add                          | input_0             | torch.float32 |           | -15.5063314  | 19.6893616    | 0.0019777    | 11.2585630       | torch.Size([2, 512, 256])        |
| 2441    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.32.short_add                          | input_1             | qint8         | 0.1822734 | -22.2373600  | 22.6019077    | 0.0594650    | 16.9240665       | torch.Size([2, 512, 256])        |
| 2441    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.32.short_add                          | output              | qint8         | 0.2371307 | -30.3527279  | 27.0328979    | 0.0624379    | 38.1356430       | torch.Size([2, 512, 256])        |
| 2442    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.33.input_mean.mean                    | input_0             | qint8         | 0.2371307 | -30.3527279  | 27.0328979    | 0.0624379    | 38.1356430       | torch.Size([2, 512, 256])        |
| 2442    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.33.input_mean.mean                    | output              | qint16        | 0.0000086 | -0.1333867   | 0.2130471     | 0.0624384    | 0.0089632        | torch.Size([2, 512, 1])          |
| 2443    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.33.sub                                | input_0             | qint8         | 0.2371307 | -30.3527279  | 27.0328979    | 0.0624379    | 38.1356430       | torch.Size([2, 512, 256])        |
| 2443    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.33.sub                                | input_1             | qint16        | 0.0000086 | -0.1333867   | 0.2130471     | 0.0624384    | 0.0089632        | torch.Size([2, 512, 1])          |
| 2443    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.33.sub                                | output              | qint16        | 0.0011569 | -30.5662575  | 27.0318165    | -0.0000021   | 38.1267052       | torch.Size([2, 512, 256])        |
| 2444    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.mul                                | input_0             | qint16        | 0.0011569 | -30.5662575  | 27.0318165    | -0.0000021   | 38.1267052       | torch.Size([2, 512, 256])        |
| 2444    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.mul                                | input_1             | qint16        | 0.0011569 | -30.5662575  | 27.0318165    | -0.0000021   | 38.1267052       | torch.Size([2, 512, 256])        |
| 2444    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.mul                                | output              | qint16        | 0.0441320 | 0.0000000    | 934.3178711   | 38.1262512   | 7198.4257812     | torch.Size([2, 512, 256])        |
| 2445    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.33.var_mean.mean                      | input_0             | qint16        | 0.0441320 | 0.0000000    | 934.3178711   | 38.1262512   | 7198.4257812     | torch.Size([2, 512, 256])        |
| 2445    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.33.var_mean.mean                      | output              | qint16        | 0.0058094 | 6.7620893    | 140.3191681   | 38.1260757   | 1850.6064453     | torch.Size([2, 512, 1])          |
| 2446    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.33.rsqrt                              | input               | qint16        | 0.0058094 | 6.7620893    | 140.3191681   | 38.1260757   | 1850.6064453     | torch.Size([2, 512, 1])          |
| 2446    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.33.rsqrt                              | output              | qint16        | 0.0000126 | 0.0844198    | 0.3845539     | 0.2291445    | 0.0077752        | torch.Size([2, 512, 1])          |
| 2447    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.out_mul                            | input_0             | qint16        | 0.0011569 | -30.5662575  | 27.0318165    | -0.0000021   | 38.1267052       | torch.Size([2, 512, 256])        |
| 2447    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.out_mul                            | input_1             | qint16        | 0.0000126 | 0.0844198    | 0.3845539     | 0.2291445    | 0.0077752        | torch.Size([2, 512, 1])          |
| 2447    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.out_mul                            | output              | qint16        | 0.0001837 | -5.0966163   | 5.8763728     | 0.0000015    | 1.0000141        | torch.Size([2, 512, 256])        |
| 2448    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.33.weight_quant                       | input               | torch.float32 |           | 0.5037270    | 1.1255741     | 0.9008017    | 0.0102990        | torch.Size([256])                |
| 2448    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.33.weight_quant                       | output              | qint16        | 0.0000344 | 0.5037131    | 1.1255569     | 0.9008025    | 0.0102991        | torch.Size([256])                |
| 2449    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.weight_mul                         | input_0             | qint16        | 0.0001837 | -5.0966163   | 5.8763728     | 0.0000015    | 1.0000141        | torch.Size([2, 512, 256])        |
| 2449    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.weight_mul                         | input_1             | qint16        | 0.0000344 | 0.5037131    | 1.1255569     | 0.9008025    | 0.0102991        | torch.Size([256])                |
| 2449    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.33.weight_mul                         | output              | qint16        | 0.0001696 | -4.3277793   | 5.4257507     | 0.0005664    | 0.8236117        | torch.Size([2, 512, 256])        |
| 2450    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.33.bias_quant                         | input               | torch.float32 |           | -0.0986191   | 0.1023723     | 0.0041659    | 0.0009013        | torch.Size([256])                |
| 2450    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.33.bias_quant                         | output              | qint16        | 0.0000031 | -0.0986186   | 0.1023708     | 0.0041660    | 0.0009013        | torch.Size([256])                |
| 2451    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.33.bias_add                           | input_0             | qint16        | 0.0001696 | -4.3277793   | 5.4257507     | 0.0005664    | 0.8236117        | torch.Size([2, 512, 256])        |
| 2451    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.33.bias_add                           | input_1             | qint16        | 0.0000031 | -0.0986186   | 0.1023708     | 0.0041660    | 0.0009013        | torch.Size([256])                |
| 2451    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.33.bias_add                           | output              | qint8         | 0.0356415 | -4.2769833   | 4.5264740     | 0.0048699    | 0.8194842        | torch.Size([2, 512, 256])        |
| 2452    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.add1                               | input_0             | qint8         | 0.0356415 | -4.2769833   | 4.5264740     | 0.0048699    | 0.8194842        | torch.Size([2, 512, 256])        |
| 2452    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.add1                               | input_1             | qint8         | 0.0569265 | -1.7077956   | 7.2296681     | 0.0614475    | 0.8942024        | torch.Size([2, 512, 256])        |
| 2452    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.add1                               | output              | qint8         | 0.0622399 | -4.4812737   | 7.7799888     | 0.0663657    | 1.4075856        | torch.Size([2, 512, 256])        |
| 2453    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.0                           | input               | qint8         | 0.0622399 | -4.4812737   | 7.7799888     | 0.0663657    | 1.4075856        | torch.Size([2, 512, 256])        |
| 2453    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.0                           | weight              | torch.float32 |           | -0.6512140   | 0.6423623     | 0.0001085    | 0.0063452        | torch.Size([256, 256])           |
| 2453    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.0                           | bias                | torch.float32 |           | -0.1916889   | 0.1006546     | -0.0401542   | 0.0026011        | torch.Size([256])                |
| 2453    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.0                           | output              | torch.float32 |           | -10.9032497  | 9.6915398     | -0.9962533   | 4.8280768        | torch.Size([2, 512, 256])        |
| 2454    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.1                           | input               | torch.float32 |           | -10.9032497  | 9.6915398     | -0.9962533   | 4.8280768        | torch.Size([2, 512, 256])        |
| 2454    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.1                           | output              | qint8         | 0.0643997 | 0.0000000    | 8.1787567     | 0.4507269    | 0.8386854        | torch.Size([2, 512, 256])        |
| 2455    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.2                           | input               | qint8         | 0.0643997 | 0.0000000    | 8.1787567     | 0.4507269    | 0.8386854        | torch.Size([2, 512, 256])        |
| 2455    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.2                           | weight              | torch.float32 |           | -0.5759249   | 0.3917674     | -0.0049621   | 0.0060694        | torch.Size([256, 256])           |
| 2455    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.2                           | bias                | torch.float32 |           | -0.1434172   | 0.2241302     | -0.0092912   | 0.0040547        | torch.Size([256])                |
| 2455    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.2                           | output              | torch.float32 |           | -13.3629532  | 8.9673262     | -0.5123870   | 3.2362652        | torch.Size([2, 512, 256])        |
| 2456    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.3                           | input               | torch.float32 |           | -13.3629532  | 8.9673262     | -0.5123870   | 3.2362652        | torch.Size([2, 512, 256])        |
| 2456    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.3                           | output              | qint8         | 0.0617841 | 0.0000000    | 7.8465800     | 0.4703948    | 0.6561675        | torch.Size([2, 512, 256])        |
| 2457    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.input_mean.mean           | input_0             | qint8         | 0.0617841 | 0.0000000    | 7.8465800     | 0.4703948    | 0.6561675        | torch.Size([2, 512, 256])        |
| 2457    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.input_mean.mean           | output              | qint16        | 0.0000253 | 0.2768161    | 0.8285772     | 0.4703920    | 0.0055241        | torch.Size([2, 512, 1])          |
| 2458    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.34.layers.4.sub                       | input_0             | qint8         | 0.0617841 | 0.0000000    | 7.8465800     | 0.4703948    | 0.6561675        | torch.Size([2, 512, 256])        |
| 2458    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.34.layers.4.sub                       | input_1             | qint16        | 0.0000253 | 0.2768161    | 0.8285772     | 0.4703920    | 0.0055241        | torch.Size([2, 512, 1])          |
| 2458    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.34.layers.4.sub                       | output              | qint16        | 0.0002776 | -0.8284562   | 7.1709771     | 0.0000016    | 0.6506515        | torch.Size([2, 512, 256])        |
| 2459    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.mul                       | input_0             | qint16        | 0.0002776 | -0.8284562   | 7.1709771     | 0.0000016    | 0.6506515        | torch.Size([2, 512, 256])        |
| 2459    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.mul                       | input_1             | qint16        | 0.0002776 | -0.8284562   | 7.1709771     | 0.0000016    | 0.6506515        | torch.Size([2, 512, 256])        |
| 2459    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.mul                       | output              | qint16        | 0.0025312 | 0.0000000    | 51.4222679    | 0.6506398    | 2.7684975        | torch.Size([2, 512, 256])        |
| 2460    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.var_mean.mean             | input_0             | qint16        | 0.0025312 | 0.0000000    | 51.4222679    | 0.6506398    | 2.7684975        | torch.Size([2, 512, 256])        |
| 2460    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.var_mean.mean             | output              | qint16        | 0.0000680 | 0.2008514    | 1.9374808     | 0.6506429    | 0.0479813        | torch.Size([2, 512, 1])          |
| 2461    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.34.layers.4.rsqrt                     | input               | qint16        | 0.0000680 | 0.2008514    | 1.9374808     | 0.6506429    | 0.0479813        | torch.Size([2, 512, 1])          |
| 2461    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.34.layers.4.rsqrt                     | output              | qint16        | 0.0000826 | 0.7184484    | 2.2312627     | 1.2876736    | 0.0411493        | torch.Size([2, 512, 1])          |
| 2462    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.out_mul                   | input_0             | qint16        | 0.0002776 | -0.8284562   | 7.1709771     | 0.0000016    | 0.6506515        | torch.Size([2, 512, 256])        |
| 2462    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.out_mul                   | input_1             | qint16        | 0.0000826 | 0.7184484    | 2.2312627     | 1.2876736    | 0.0411493        | torch.Size([2, 512, 1])          |
| 2462    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.out_mul                   | output              | qint16        | 0.0002588 | -0.6875275   | 7.0633941     | 0.0000040    | 0.9999818        | torch.Size([2, 512, 256])        |
| 2463    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.4.weight_quant              | input               | torch.float32 |           | 0.6686562    | 1.1948749     | 0.9568136    | 0.0086885        | torch.Size([256])                |
| 2463    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.4.weight_quant              | output              | qint16        | 0.0000365 | 0.6686632    | 1.1948566     | 0.9568136    | 0.0086886        | torch.Size([256])                |
| 2464    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.weight_mul                | input_0             | qint16        | 0.0002588 | -0.6875275   | 7.0633941     | 0.0000040    | 0.9999818        | torch.Size([2, 512, 256])        |
| 2464    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.weight_mul                | input_1             | qint16        | 0.0000365 | 0.6686632    | 1.1948566     | 0.9568136    | 0.0086886        | torch.Size([256])                |
| 2464    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.weight_mul                | output              | qint16        | 0.0002795 | -0.8213688   | 7.2799716     | 0.0154723    | 0.9717999        | torch.Size([2, 512, 256])        |
| 2465    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.4.bias_quant                | input               | torch.float32 |           | -0.1362740   | 0.3444038     | 0.0655811    | 0.0123684        | torch.Size([256])                |
| 2465    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.4.bias_quant                | output              | qint16        | 0.0000105 | -0.1362690   | 0.3443986     | 0.0655809    | 0.0123685        | torch.Size([256])                |
| 2466    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.layers.4.bias_add                  | input_0             | qint16        | 0.0002795 | -0.8213688   | 7.2799716     | 0.0154723    | 0.9717999        | torch.Size([2, 512, 256])        |
| 2466    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.layers.4.bias_add                  | input_1             | qint16        | 0.0000105 | -0.1362690   | 0.3443986     | 0.0655809    | 0.0123685        | torch.Size([256])                |
| 2466    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.layers.4.bias_add                  | output              | qint8         | 0.0554585 | -0.8318773   | 7.0432277     | 0.0810825    | 0.9156163        | torch.Size([2, 512, 256])        |
| 2467    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.5                           | input               | qint8         | 0.0554585 | -0.8318773   | 7.0432277     | 0.0810825    | 0.9156163        | torch.Size([2, 512, 256])        |
| 2467    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.5                           | weight              | torch.float32 |           | -0.5576459   | 0.4978588     | 0.0026662    | 0.0046605        | torch.Size([256, 256])           |
| 2467    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.5                           | bias                | torch.float32 |           | -0.1226624   | 0.0810974     | -0.0227243   | 0.0021841        | torch.Size([256])                |
| 2467    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.5                           | output              | torch.float32 |           | -8.4584246   | 9.3909311     | -0.6635692   | 3.7917478        | torch.Size([2, 512, 256])        |
| 2468    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.6                           | input               | torch.float32 |           | -8.4584246   | 9.3909311     | -0.6635692   | 3.7917478        | torch.Size([2, 512, 256])        |
| 2468    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.6                           | output              | qint8         | 0.0641787 | 0.0000000    | 8.1506948     | 0.5105233    | 1.0043293        | torch.Size([2, 512, 256])        |
| 2469    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.7                           | input               | qint8         | 0.0641787 | 0.0000000    | 8.1506948     | 0.5105233    | 1.0043293        | torch.Size([2, 512, 256])        |
| 2469    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.7                           | weight              | torch.float32 |           | -0.4486472   | 0.5366535     | -0.0039619   | 0.0033260        | torch.Size([256, 256])           |
| 2469    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.7                           | bias                | torch.float32 |           | -0.0953889   | 0.2466190     | -0.0158665   | 0.0018828        | torch.Size([256])                |
| 2469    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.7                           | output              | torch.float32 |           | -11.0460310  | 31.7619629    | -1.2967153   | 5.3164029        | torch.Size([2, 512, 256])        |
| 2470    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.8                           | input               | torch.float32 |           | -11.0460310  | 31.7619629    | -1.2967153   | 5.3164029        | torch.Size([2, 512, 256])        |
| 2470    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.34.layers.8                           | output              | qint8         | 0.2470643 | 0.0000000    | 31.3771667    | 0.4203695    | 2.0366032        | torch.Size([2, 512, 256])        |
| 2471    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.input_mean.mean           | input_0             | qint8         | 0.2470643 | 0.0000000    | 31.3771667    | 0.4203695    | 2.0366032        | torch.Size([2, 512, 256])        |
| 2471    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.input_mean.mean           | output              | qint16        | 0.0000266 | 0.2152114    | 0.8708116     | 0.4199045    | 0.0112429        | torch.Size([2, 512, 1])          |
| 2472    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.34.layers.9.sub                       | input_0             | qint8         | 0.2470643 | 0.0000000    | 31.3771667    | 0.4203695    | 2.0366032        | torch.Size([2, 512, 256])        |
| 2472    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.34.layers.9.sub                       | input_1             | qint16        | 0.0000266 | 0.2152114    | 0.8708116     | 0.4199045    | 0.0112429        | torch.Size([2, 512, 1])          |
| 2472    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.34.layers.9.sub                       | output              | qint16        | 0.0010558 | -0.8710079   | 31.0997334    | 0.0005126    | 2.0249245        | torch.Size([2, 512, 256])        |
| 2473    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.mul                       | input_0             | qint16        | 0.0010558 | -0.8710079   | 31.0997334    | 0.0005126    | 2.0249245        | torch.Size([2, 512, 256])        |
| 2473    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.mul                       | input_1             | qint16        | 0.0010558 | -0.8710079   | 31.0997334    | 0.0005126    | 2.0249245        | torch.Size([2, 512, 256])        |
| 2473    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.mul                       | output              | qint16        | 0.0365251 | 0.0000000    | 967.1840820   | 2.0225172    | 353.8808899      | torch.Size([2, 512, 256])        |
| 2474    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.var_mean.mean             | input_0             | qint16        | 0.0365251 | 0.0000000    | 967.1840820   | 2.0225172    | 353.8808899      | torch.Size([2, 512, 256])        |
| 2474    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.var_mean.mean             | output              | qint16        | 0.0001742 | 0.5353712    | 5.1502919     | 2.0225213    | 0.3699504        | torch.Size([2, 512, 1])          |
| 2475    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.34.layers.9.rsqrt                     | input               | qint16        | 0.0001742 | 0.5353712    | 5.1502919     | 2.0225213    | 0.3699504        | torch.Size([2, 512, 1])          |
| 2475    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.34.layers.9.rsqrt                     | output              | qint16        | 0.0000459 | 0.4406400    | 1.3666734     | 0.7279897    | 0.0132871        | torch.Size([2, 512, 1])          |
| 2476    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.out_mul                   | input_0             | qint16        | 0.0010558 | -0.8710079   | 31.0997334    | 0.0005126    | 2.0249245        | torch.Size([2, 512, 256])        |
| 2476    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.out_mul                   | input_1             | qint16        | 0.0000459 | 0.4406400    | 1.3666734     | 0.7279897    | 0.0132871        | torch.Size([2, 512, 1])          |
| 2476    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.out_mul                   | output              | qint16        | 0.0004624 | -0.5748253   | 15.0277739    | 0.0002580    | 1.0011923        | torch.Size([2, 512, 256])        |
| 2477    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.9.weight_quant              | input               | torch.float32 |           | 0.7519886    | 1.2372242     | 0.9132024    | 0.0028244        | torch.Size([256])                |
| 2477    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.9.weight_quant              | output              | qint16        | 0.0000378 | 0.7519816    | 1.2372053     | 0.9132028    | 0.0028244        | torch.Size([256])                |
| 2478    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.weight_mul                | input_0             | qint16        | 0.0004624 | -0.5748253   | 15.0277739    | 0.0002580    | 1.0011923        | torch.Size([2, 512, 256])        |
| 2478    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.weight_mul                | input_1             | qint16        | 0.0000378 | 0.7519816    | 1.2372053     | 0.9132028    | 0.0028244        | torch.Size([256])                |
| 2478    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.weight_mul                | output              | qint16        | 0.0003478 | -0.6367432   | 11.3007145    | -0.0004592   | 0.7323431        | torch.Size([2, 512, 256])        |
| 2479    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.9.bias_quant                | input               | torch.float32 |           | -0.2334981   | 0.1167177     | 0.0665926    | 0.0030043        | torch.Size([256])                |
| 2479    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.9.bias_quant                | output              | qint16        | 0.0000071 | -0.2335017   | 0.1167152     | 0.0665926    | 0.0030043        | torch.Size([256])                |
| 2480    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.layers.9.bias_add                  | input_0             | qint16        | 0.0003478 | -0.6367432   | 11.3007145    | -0.0004592   | 0.7323431        | torch.Size([2, 512, 256])        |
| 2480    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.layers.9.bias_add                  | input_1             | qint16        | 0.0000071 | -0.2335017   | 0.1167152     | 0.0665926    | 0.0030043        | torch.Size([256])                |
| 2480    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.layers.9.bias_add                  | output              | qint8         | 0.0832070 | -0.6656561   | 10.5672903    | 0.0667534    | 0.6860237        | torch.Size([2, 512, 256])        |
| 2481    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.10                          | input               | qint8         | 0.0832070 | -0.6656561   | 10.5672903    | 0.0667534    | 0.6860237        | torch.Size([2, 512, 256])        |
| 2481    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.10                          | weight              | torch.float32 |           | -0.4327374   | 0.5036364     | -0.0011054   | 0.0035315        | torch.Size([11, 256])            |
| 2481    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.10                          | bias                | torch.float32 |           | -0.0496347   | 0.0377057     | -0.0115086   | 0.0009391        | torch.Size([11])                 |
| 2481    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.34.layers.10                          | output              | qint16        | 0.0003798 | -9.7684908   | 12.4454346    | 0.2342919    | 2.5683842        | torch.Size([2, 512, 11])         |
| 2482    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.11.scale_quant_stub         | input               | torch.float32 |           | 0.0472322    | 0.3123411     | 0.1293254    | 0.0056835        | torch.Size([11])                 |
| 2482    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.34.layers.11.scale_quant_stub         | output              | qint16        | 0.0000095 | 0.0472313    | 0.3123363     | 0.1293246    | 0.0056833        | torch.Size([11])                 |
| 2483    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.11.mul                      | input_0             | qint16        | 0.0003798 | -9.7684908   | 12.4454346    | 0.2342919    | 2.5683842        | torch.Size([2, 512, 11])         |
| 2483    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.11.mul                      | input_1             | qint16        | 0.0000095 | 0.0472313    | 0.3123363     | 0.1293246    | 0.0056833        | torch.Size([11])                 |
| 2483    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.34.layers.11.mul                      | output              | qint16        | 0.0000534 | -1.2782286   | 1.7309225     | 0.0327125    | 0.0571759        | torch.Size([2, 512, 11])         |
| 2484    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.add2                               | input_0             | qint16        | 0.0000534 | -1.2782286   | 1.7309225     | 0.0327125    | 0.0571759        | torch.Size([2, 512, 11])         |
| 2484    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.add2                               | input_1             | qint16        | 0.0017897 | -53.5673790  | 53.3347168    | 0.2038219    | 76.5328140       | torch.Size([2, 512, 11])         |
| 2484    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.34.add2                               | output              | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2485    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(4)                                   | input               | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2485    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(4)                                   | output              | torch.float32 |           | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2486    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2486    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.9105684    | 276.4417419      | torch.Size([2, 512, 3])          |
| 2487    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(6)                   | input               | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.9105684    | 276.4417419      | torch.Size([2, 512, 3])          |
| 2487    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(6)                   | weight              | torch.float32 |           | -0.9216561   | 0.9167990     | -0.0046354   | 0.1373587        | torch.Size([128, 3])             |
| 2487    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(6)                   | bias                | torch.float32 |           | -1.0762298   | 1.0183468     | -0.0273298   | 0.3650480        | torch.Size([128])                |
| 2487    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.0(6)                   | output              | torch.float32 |           | -33.1019287  | 34.9107018    | -0.1292802   | 67.5485840       | torch.Size([2, 512, 128])        |
| 2488    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(6)                   | input               | torch.float32 |           | -33.1019287  | 34.9107018    | -0.1292802   | 67.5485840       | torch.Size([2, 512, 128])        |
| 2488    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.1(6)                   | output              | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8210282    | 24.6527119       | torch.Size([2, 512, 128])        |
| 2489    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(6)   | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8210282    | 24.6527119       | torch.Size([2, 512, 128])        |
| 2489    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(6)   | output              | qint16        | 0.0002498 | 0.2914945    | 7.2786193     | 2.8210406    | 3.7958786        | torch.Size([2, 512, 1])          |
| 2490    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(6)               | input_0             | qint8         | 0.2590872 | 0.0000000    | 32.9040718    | 2.8210282    | 24.6527119       | torch.Size([2, 512, 128])        |
| 2490    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(6)               | input_1             | qint16        | 0.0002498 | 0.2914945    | 7.2786193     | 2.8210406    | 3.7958786        | torch.Size([2, 512, 1])          |
| 2490    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(6)               | output              | qint16        | 0.0008924 | -7.2786317   | 27.3725109    | -0.0000065   | 20.8604259       | torch.Size([2, 512, 128])        |
| 2491    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(6)               | input_0             | qint16        | 0.0008924 | -7.2786317   | 27.3725109    | -0.0000065   | 20.8604259       | torch.Size([2, 512, 128])        |
| 2491    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(6)               | input_1             | qint16        | 0.0008924 | -7.2786317   | 27.3725109    | -0.0000065   | 20.8604259       | torch.Size([2, 512, 128])        |
| 2491    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(6)               | output              | qint16        | 0.0261809 | 0.0000000    | 749.2435913   | 20.8594246   | 2420.9887695     | torch.Size([2, 512, 128])        |
| 2492    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(6)     | input_0             | qint16        | 0.0261809 | 0.0000000    | 749.2435913   | 20.8594246   | 2420.9887695     | torch.Size([2, 512, 128])        |
| 2492    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(6)     | output              | qint16        | 0.0029473 | 0.2004168    | 76.5415497    | 20.8594971   | 436.9325562      | torch.Size([2, 512, 1])          |
| 2493    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(6)             | input               | qint16        | 0.0029473 | 0.2004168    | 76.5415497    | 20.8594971   | 436.9325562      | torch.Size([2, 512, 1])          |
| 2493    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.2.rsqrt(6)             | output              | qint16        | 0.0000538 | 0.1142789    | 1.7621539     | 0.6444378    | 0.4527944        | torch.Size([2, 512, 1])          |
| 2494    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(6)           | input_0             | qint16        | 0.0008924 | -7.2786317   | 27.3725109    | -0.0000065   | 20.8604259       | torch.Size([2, 512, 128])        |
| 2494    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(6)           | input_1             | qint16        | 0.0000538 | 0.1142789    | 1.7621539     | 0.6444378    | 0.4527944        | torch.Size([2, 512, 1])          |
| 2494    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(6)           | output              | qint16        | 0.0001192 | -0.8843260   | 3.9062698     | -0.0000197   | 0.9391420        | torch.Size([2, 512, 128])        |
| 2495    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(6)      | input               | torch.float32 |           | 0.7278287    | 1.3287159     | 0.9627235    | 0.0086877        | torch.Size([128])                |
| 2495    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(6)      | output              | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 2496    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(6)        | input_0             | qint16        | 0.0001192 | -0.8843260   | 3.9062698     | -0.0000197   | 0.9391420        | torch.Size([2, 512, 128])        |
| 2496    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(6)        | input_1             | qint16        | 0.0000405 | 0.7278286    | 1.3286957     | 0.9627234    | 0.0086873        | torch.Size([128])                |
| 2496    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(6)        | output              | qint16        | 0.0001208 | -1.0493081   | 3.9574904     | -0.0029744   | 0.8709553        | torch.Size([2, 512, 128])        |
| 2497    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(6)        | input               | torch.float32 |           | -0.0562531   | 0.0804052     | 0.0088204    | 0.0005294        | torch.Size([128])                |
| 2497    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(6)        | output              | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 2498    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(6)          | input_0             | qint16        | 0.0001208 | -1.0493081   | 3.9574904     | -0.0029744   | 0.8709553        | torch.Size([2, 512, 128])        |
| 2498    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(6)          | input_1             | qint16        | 0.0000025 | -0.0562536   | 0.0804040     | 0.0088203    | 0.0005294        | torch.Size([128])                |
| 2498    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(6)          | output              | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0057575    | 0.8660284        | torch.Size([2, 512, 128])        |
| 2499    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(6)                   | input               | qint8         | 0.0271288 | -1.0580239   | 3.4453597     | 0.0057575    | 0.8660284        | torch.Size([2, 512, 128])        |
| 2499    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(6)                   | weight              | torch.float32 |           | -0.3750711   | 0.3968706     | 0.0019093    | 0.0048458        | torch.Size([128, 128])           |
| 2499    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(6)                   | bias                | torch.float32 |           | -0.1863807   | 0.1385574     | -0.0156467   | 0.0047256        | torch.Size([128])                |
| 2499    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.3(6)                   | output              | torch.float32 |           | -5.6730514   | 6.3543305     | -0.1067219   | 2.1177521        | torch.Size([2, 512, 128])        |
| 2500    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(6)                   | input               | torch.float32 |           | -5.6730514   | 6.3543305     | -0.1067219   | 2.1177521        | torch.Size([2, 512, 128])        |
| 2500    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.4(6)                   | output              | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5187740    | 0.7384400        | torch.Size([2, 512, 128])        |
| 2501    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(6)   | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5187740    | 0.7384400        | torch.Size([2, 512, 128])        |
| 2501    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(6)   | output              | qint16        | 0.0000298 | 0.2860329    | 0.9167042     | 0.5187730    | 0.0401755        | torch.Size([2, 512, 1])          |
| 2502    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(6)               | input_0             | qint8         | 0.0433301 | 0.0000000    | 5.5029187     | 0.5187740    | 0.7384400        | torch.Size([2, 512, 128])        |
| 2502    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(6)               | input_1             | qint16        | 0.0000298 | 0.2860329    | 0.9167042     | 0.5187730    | 0.0401755        | torch.Size([2, 512, 1])          |
| 2502    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(6)               | output              | qint16        | 0.0001641 | -0.9167733   | 5.1057677     | 0.0000051    | 0.6982968        | torch.Size([2, 512, 128])        |
| 2503    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(6)               | input_0             | qint16        | 0.0001641 | -0.9167733   | 5.1057677     | 0.0000051    | 0.6982968        | torch.Size([2, 512, 128])        |
| 2503    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(6)               | input_1             | qint16        | 0.0001641 | -0.9167733   | 5.1057677     | 0.0000051    | 0.6982968        | torch.Size([2, 512, 128])        |
| 2503    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(6)               | output              | qint16        | 0.0008856 | 0.0000000    | 26.0686932    | 0.6983118    | 3.5903623        | torch.Size([2, 512, 128])        |
| 2504    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(6)     | input_0             | qint16        | 0.0008856 | 0.0000000    | 26.0686932    | 0.6983118    | 3.5903623        | torch.Size([2, 512, 128])        |
| 2504    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(6)     | output              | qint16        | 0.0000499 | 0.3053092    | 1.3932320     | 0.6983140    | 0.1334614        | torch.Size([2, 512, 1])          |
| 2505    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(6)             | input               | qint16        | 0.0000499 | 0.3053092    | 1.3932320     | 0.6983140    | 0.1334614        | torch.Size([2, 512, 1])          |
| 2505    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.5.rsqrt(6)             | output              | qint16        | 0.0000553 | 0.8471928    | 1.8097486     | 1.3082819    | 0.0869593        | torch.Size([2, 512, 1])          |
| 2506    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(6)           | input_0             | qint16        | 0.0001641 | -0.9167733   | 5.1057677     | 0.0000051    | 0.6982968        | torch.Size([2, 512, 128])        |
| 2506    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(6)           | input_1             | qint16        | 0.0000553 | 0.8471928    | 1.8097486     | 1.3082819    | 0.0869593        | torch.Size([2, 512, 1])          |
| 2506    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(6)           | output              | qint16        | 0.0002164 | -0.7822458   | 7.0122900     | 0.0000026    | 0.9999775        | torch.Size([2, 512, 128])        |
| 2507    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(6)      | input               | torch.float32 |           | 0.5925044    | 1.4726304     | 0.9182085    | 0.0175060        | torch.Size([128])                |
| 2507    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(6)      | output              | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 2508    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(6)        | input_0             | qint16        | 0.0002164 | -0.7822458   | 7.0122900     | 0.0000026    | 0.9999775        | torch.Size([2, 512, 128])        |
| 2508    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(6)        | input_1             | qint16        | 0.0000449 | 0.5925127    | 1.4726079     | 0.9182079    | 0.0175060        | torch.Size([128])                |
| 2508    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(6)        | output              | qint16        | 0.0002127 | -0.9436985   | 6.8902540     | 0.0339875    | 0.9386241        | torch.Size([2, 512, 128])        |
| 2509    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(6)        | input               | torch.float32 |           | -0.0644210   | 0.2426097     | 0.0318023    | 0.0030999        | torch.Size([128])                |
| 2509    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(6)        | output              | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 2510    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(6)          | input_0             | qint16        | 0.0002127 | -0.9436985   | 6.8902540     | 0.0339875    | 0.9386241        | torch.Size([2, 512, 128])        |
| 2510    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(6)          | input_1             | qint16        | 0.0000074 | -0.0644220   | 0.2426060     | 0.0318021    | 0.0030999        | torch.Size([128])                |
| 2510    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(6)          | output              | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0657645    | 0.9131092        | torch.Size([2, 512, 128])        |
| 2511    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(6)                   | input               | qint8         | 0.0521229 | -0.9382124   | 6.6196094     | 0.0657645    | 0.9131092        | torch.Size([2, 512, 128])        |
| 2511    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(6)                   | weight              | torch.float32 |           | -0.7504157   | 0.4182976     | -0.0024651   | 0.0052447        | torch.Size([128, 128])           |
| 2511    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(6)                   | bias                | torch.float32 |           | -0.1397866   | 0.1210779     | 0.0064616    | 0.0040949        | torch.Size([128])                |
| 2511    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.6(6)                   | output              | torch.float32 |           | -8.3129692   | 6.9601798     | -0.0391564   | 4.1170425        | torch.Size([2, 512, 128])        |
| 2512    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(6)                   | input               | torch.float32 |           | -8.3129692   | 6.9601798     | -0.0391564   | 4.1170425        | torch.Size([2, 512, 128])        |
| 2512    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.7(6)                   | output              | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.7780066    | 1.3594764        | torch.Size([2, 512, 128])        |
| 2513    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(6)   | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.7780066    | 1.3594764        | torch.Size([2, 512, 128])        |
| 2513    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(6)   | output              | qint16        | 0.0000319 | 0.5488311    | 1.0447656     | 0.7657390    | 0.0272445        | torch.Size([2, 512, 1])          |
| 2514    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(6)               | input_0             | qint8         | 0.0498948 | 0.0000000    | 6.3366351     | 0.7780066    | 1.3594764        | torch.Size([2, 512, 128])        |
| 2514    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(6)               | input_1             | qint16        | 0.0000319 | 0.5488311    | 1.0447656     | 0.7657390    | 0.0272445        | torch.Size([2, 512, 1])          |
| 2514    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(6)               | output              | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0122664    | 1.3254179        | torch.Size([2, 512, 128])        |
| 2515    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(6)               | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0122664    | 1.3254179        | torch.Size([2, 512, 128])        |
| 2515    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(6)               | input_1             | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0122664    | 1.3254179        | torch.Size([2, 512, 128])        |
| 2515    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(6)               | output              | qint16        | 0.0011151 | 0.0000000    | 31.5550842    | 1.3255467    | 7.5814304        | torch.Size([2, 512, 128])        |
| 2516    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(6)     | input_0             | qint16        | 0.0011151 | 0.0000000    | 31.5550842    | 1.3255467    | 7.5814304        | torch.Size([2, 512, 128])        |
| 2516    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(6)     | output              | qint16        | 0.0000656 | 0.8171875    | 2.1495371     | 1.3202052    | 0.1844285        | torch.Size([2, 512, 1])          |
| 2517    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(6)             | input               | qint16        | 0.0000656 | 0.8171875    | 2.1495371     | 1.3202052    | 0.1844285        | torch.Size([2, 512, 1])          |
| 2517    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.8.rsqrt(6)             | output              | qint16        | 0.0000338 | 0.6820595    | 1.1061931     | 0.9004964    | 0.0160294        | torch.Size([2, 512, 1])          |
| 2518    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(6)           | input_0             | qint16        | 0.0001844 | -1.0447190   | 5.6173935     | 0.0122664    | 1.3254179        | torch.Size([2, 512, 128])        |
| 2518    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(6)           | input_1             | qint16        | 0.0000338 | 0.6820595    | 1.1061931     | 0.9004964    | 0.0160294        | torch.Size([2, 512, 1])          |
| 2518    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(6)           | output              | qint16        | 0.0001537 | -0.7529505   | 4.9618812     | 0.0083662    | 1.0024242        | torch.Size([2, 512, 128])        |
| 2519    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(6)      | input               | torch.float32 |           | 0.7673740    | 1.1249810     | 0.9671495    | 0.0053221        | torch.Size([128])                |
| 2519    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(6)      | output              | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 2520    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(6)        | input_0             | qint16        | 0.0001537 | -0.7529505   | 4.9618812     | 0.0083662    | 1.0024242        | torch.Size([2, 512, 128])        |
| 2520    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(6)        | input_1             | qint16        | 0.0000343 | 0.7673595    | 1.1249639     | 0.9671496    | 0.0053219        | torch.Size([128])                |
| 2520    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(6)        | output              | qint16        | 0.0001601 | -0.8471156   | 5.1624112     | 0.0232322    | 0.9940879        | torch.Size([2, 512, 128])        |
| 2521    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(6)        | input               | torch.float32 |           | -0.0537279   | 0.1594015     | 0.0216380    | 0.0014148        | torch.Size([128])                |
| 2521    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(6)        | output              | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 2522    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(6)          | input_0             | qint16        | 0.0001601 | -0.8471156   | 5.1624112     | 0.0232322    | 0.9940879        | torch.Size([2, 512, 128])        |
| 2522    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(6)          | input_1             | qint16        | 0.0000049 | -0.0537297   | 0.1593991     | 0.0216380    | 0.0014147        | torch.Size([128])                |
| 2522    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(6)          | output              | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0449454    | 0.9808512        | torch.Size([2, 512, 128])        |
| 2523    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(6)                   | input               | qint8         | 0.0392422 | -0.8240871   | 4.9837651     | 0.0449454    | 0.9808512        | torch.Size([2, 512, 128])        |
| 2523    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(6)                   | weight              | torch.float32 |           | -0.4264432   | 0.3183554     | 0.0005866    | 0.0053991        | torch.Size([128, 128])           |
| 2523    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(6)                   | bias                | torch.float32 |           | -0.1690418   | 0.1536980     | -0.0166056   | 0.0039884        | torch.Size([128])                |
| 2523    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.pos_fc.9(6)                   | output              | torch.float32 |           | -12.0119963  | 10.0648241    | -0.4387294   | 4.3985019        | torch.Size([2, 512, 128])        |
| 2524    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(6)                  | input               | torch.float32 |           | -12.0119963  | 10.0648241    | -0.4387294   | 4.3985019        | torch.Size([2, 512, 128])        |
| 2524    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.pos_fc.10(6)                  | output              | qint8         | 0.0826298 | 0.0000000    | 10.0808334    | 0.6152682    | 1.5399230        | torch.Size([2, 512, 128])        |
| 2525    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(6)  | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.0808334    | 0.6152682    | 1.5399230        | torch.Size([2, 512, 128])        |
| 2525    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(6)  | output              | qint16        | 0.0000231 | 0.5248206    | 0.7333469     | 0.6152672    | 0.0020240        | torch.Size([2, 512, 1])          |
| 2526    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(6)              | input_0             | qint8         | 0.0826298 | 0.0000000    | 10.0808334    | 0.6152682    | 1.5399230        | torch.Size([2, 512, 128])        |
| 2526    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(6)              | input_1             | qint16        | 0.0000231 | 0.5248206    | 0.7333469     | 0.6152672    | 0.0020240        | torch.Size([2, 512, 1])          |
| 2526    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(6)              | output              | qint16        | 0.0003154 | -0.7333220   | 9.5379162     | 0.0000060    | 1.5378958        | torch.Size([2, 512, 128])        |
| 2527    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(6)              | input_0             | qint16        | 0.0003154 | -0.7333220   | 9.5379162     | 0.0000060    | 1.5378958        | torch.Size([2, 512, 128])        |
| 2527    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(6)              | input_1             | qint16        | 0.0003154 | -0.7333220   | 9.5379162     | 0.0000060    | 1.5378958        | torch.Size([2, 512, 128])        |
| 2527    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(6)              | output              | qint16        | 0.0032599 | 0.0000000    | 90.9726868    | 1.5380902    | 25.1975079       | torch.Size([2, 512, 128])        |
| 2528    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(6)    | input_0             | qint16        | 0.0032599 | 0.0000000    | 90.9726868    | 1.5380902    | 25.1975079       | torch.Size([2, 512, 128])        |
| 2528    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(6)    | output              | qint16        | 0.0000598 | 1.0476639    | 1.9514757     | 1.5380872    | 0.0464886        | torch.Size([2, 512, 1])          |
| 2529    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(6)            | input               | qint16        | 0.0000598 | 1.0476639    | 1.9514757     | 1.5380872    | 0.0464886        | torch.Size([2, 512, 1])          |
| 2529    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.pos_fc.11.rsqrt(6)            | output              | qint16        | 0.0000315 | 0.7158399    | 0.9769779     | 0.8124696    | 0.0034220        | torch.Size([2, 512, 1])          |
| 2530    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(6)          | input_0             | qint16        | 0.0003154 | -0.7333220   | 9.5379162     | 0.0000060    | 1.5378958        | torch.Size([2, 512, 128])        |
| 2530    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(6)          | input_1             | qint16        | 0.0000315 | 0.7158399    | 0.9769779     | 0.8124696    | 0.0034220        | torch.Size([2, 512, 1])          |
| 2530    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(6)          | output              | qint16        | 0.0002431 | -0.6003936   | 7.2764301     | 0.0000009    | 0.9998516        | torch.Size([2, 512, 128])        |
| 2531    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(6)     | input               | torch.float32 |           | 0.7088336    | 1.4002132     | 0.9292046    | 0.0145085        | torch.Size([128])                |
| 2531    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(6)     | output              | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 2532    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(6)       | input_0             | qint16        | 0.0002431 | -0.6003936   | 7.2764301     | 0.0000009    | 0.9998516        | torch.Size([2, 512, 128])        |
| 2532    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(6)       | input_1             | qint16        | 0.0000427 | 0.7088346    | 1.4001919     | 0.9292030    | 0.0145085        | torch.Size([128])                |
| 2532    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(6)       | output              | qint16        | 0.0002455 | -0.8407180   | 7.3501439     | 0.0095610    | 0.9032983        | torch.Size([2, 512, 128])        |
| 2533    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(6)       | input               | torch.float32 |           | -0.0965041   | 0.2669707     | 0.0619903    | 0.0064956        | torch.Size([128])                |
| 2533    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(6)       | output              | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 2534    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(6)         | input_0             | qint16        | 0.0002455 | -0.8407180   | 7.3501439     | 0.0095610    | 0.9032983        | torch.Size([2, 512, 128])        |
| 2534    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(6)         | input_1             | qint16        | 0.0000081 | -0.0965062   | 0.2669667     | 0.0619904    | 0.0064956        | torch.Size([128])                |
| 2534    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(6)         | output              | qint8         | 0.0587279 | -0.8221908   | 7.2822618     | 0.0720724    | 0.8648180        | torch.Size([2, 512, 128])        |
| 2535    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2535    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017906 | -1.1119591   | 2.8201861     | 0.1946432    | 0.4366399        | torch.Size([2, 512, 3])          |
| 2536    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(6)                  | input               | qint16        | 0.0017906 | -1.1119591   | 2.8201861     | 0.1946432    | 0.4366399        | torch.Size([2, 512, 3])          |
| 2536    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(6)                  | weight              | torch.float32 |           | -0.8288664   | 0.6362330     | 0.0683853    | 0.1118651        | torch.Size([32, 3])              |
| 2536    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(6)                  | bias                | torch.float32 |           | -0.5554879   | 0.5432062     | 0.0766153    | 0.1068659        | torch.Size([32])                 |
| 2536    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.0(6)                  | output              | torch.float32 |           | -2.0696445   | 2.4343228     | 0.0969878    | 0.2479164        | torch.Size([2, 512, 32])         |
| 2537    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(6)                  | input               | torch.float32 |           | -2.0696445   | 2.4343228     | 0.0969878    | 0.2479164        | torch.Size([2, 512, 32])         |
| 2537    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.1(6)                  | output              | qint8         | 0.0194126 | 0.0000000    | 2.4265749     | 0.2517358    | 0.0998401        | torch.Size([2, 512, 32])         |
| 2538    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(6)  | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.4265749     | 0.2517358    | 0.0998401        | torch.Size([2, 512, 32])         |
| 2538    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(6)  | output              | qint16        | 0.0000252 | 0.1571238    | 0.6964215     | 0.2517371    | 0.0130070        | torch.Size([2, 512, 1])          |
| 2539    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(6)              | input_0             | qint8         | 0.0194126 | 0.0000000    | 2.4265749     | 0.2517358    | 0.0998401        | torch.Size([2, 512, 32])         |
| 2539    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(6)              | input_1             | qint16        | 0.0000252 | 0.1571238    | 0.6964215     | 0.2517371    | 0.0130070        | torch.Size([2, 512, 1])          |
| 2539    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(6)              | output              | qint16        | 0.0000639 | -0.6964291   | 1.7301432     | -0.0000006   | 0.0868452        | torch.Size([2, 512, 32])         |
| 2540    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(6)              | input_0             | qint16        | 0.0000639 | -0.6964291   | 1.7301432     | -0.0000006   | 0.0868452        | torch.Size([2, 512, 32])         |
| 2540    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(6)              | input_1             | qint16        | 0.0000639 | -0.6964291   | 1.7301432     | -0.0000006   | 0.0868452        | torch.Size([2, 512, 32])         |
| 2540    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(6)              | output              | qint16        | 0.0001394 | 0.0000000    | 2.9934602     | 0.0868403    | 0.0267712        | torch.Size([2, 512, 32])         |
| 2541    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(6)    | input_0             | qint16        | 0.0001394 | 0.0000000    | 2.9934602     | 0.0868403    | 0.0267712        | torch.Size([2, 512, 32])         |
| 2541    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(6)    | output              | qint16        | 0.0000212 | 0.0379530    | 0.4677198     | 0.0868406    | 0.0045832        | torch.Size([2, 512, 1])          |
| 2542    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(6)            | input               | qint16        | 0.0000212 | 0.0379530    | 0.4677198     | 0.0868406    | 0.0045832        | torch.Size([2, 512, 1])          |
| 2542    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.2.rsqrt(6)            | output              | qint16        | 0.0001649 | 1.4621282    | 5.1323719     | 3.9337635    | 1.0432551        | torch.Size([2, 512, 1])          |
| 2543    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(6)          | input_0             | qint16        | 0.0000639 | -0.6964291   | 1.7301432     | -0.0000006   | 0.0868452        | torch.Size([2, 512, 32])         |
| 2543    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(6)          | input_1             | qint16        | 0.0001649 | 1.4621282    | 5.1323719     | 3.9337635    | 1.0432551        | torch.Size([2, 512, 1])          |
| 2543    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(6)          | output              | qint16        | 0.0000919 | -1.0439715   | 3.0128427     | -0.0000219   | 0.9997615        | torch.Size([2, 512, 32])         |
| 2544    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(6)     | input               | torch.float32 |           | 0.8401937    | 1.1936733     | 0.9969203    | 0.0071658        | torch.Size([32])                 |
| 2544    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(6)     | output              | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 2545    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(6)       | input_0             | qint16        | 0.0000919 | -1.0439715   | 3.0128427     | -0.0000219   | 0.9997615        | torch.Size([2, 512, 32])         |
| 2545    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(6)       | input_1             | qint16        | 0.0000364 | 0.8401886    | 1.1936550     | 0.9969214    | 0.0071652        | torch.Size([32])                 |
| 2545    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(6)       | output              | qint16        | 0.0001022 | -1.2179394   | 3.2300847     | 0.0072561    | 0.9921793        | torch.Size([2, 512, 32])         |
| 2546    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(6)       | input               | torch.float32 |           | -0.1003950   | 0.1085345     | 0.0035262    | 0.0030721        | torch.Size([32])                 |
| 2546    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(6)       | output              | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 2547    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(6)         | input_0             | qint16        | 0.0001022 | -1.2179394   | 3.2300847     | 0.0072561    | 0.9921793        | torch.Size([2, 512, 32])         |
| 2547    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(6)         | input_1             | qint16        | 0.0000033 | -0.1003946   | 0.1085328     | 0.0035266    | 0.0030721        | torch.Size([32])                 |
| 2547    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(6)         | output              | qint8         | 0.0232598 | -1.2095096   | 2.9539945     | 0.0106070    | 0.9398663        | torch.Size([2, 512, 32])         |
| 2548    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(6)                  | input               | qint8         | 0.0232598 | -1.2095096   | 2.9539945     | 0.0106070    | 0.9398663        | torch.Size([2, 512, 32])         |
| 2548    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(6)                  | weight              | torch.float32 |           | -0.5793310   | 0.5422795     | -0.0032135   | 0.0176575        | torch.Size([32, 32])             |
| 2548    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(6)                  | bias                | torch.float32 |           | -0.1716317   | 0.2230143     | 0.0007250    | 0.0126328        | torch.Size([32])                 |
| 2548    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.3(6)                  | output              | torch.float32 |           | -4.4134846   | 2.1661389     | -0.2344224   | 1.4551022        | torch.Size([2, 512, 32])         |
| 2549    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(6)                  | input               | torch.float32 |           | -4.4134846   | 2.1661389     | -0.2344224   | 1.4551022        | torch.Size([2, 512, 32])         |
| 2549    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.4(6)                  | output              | qint8         | 0.0172935 | 0.0000000    | 2.1616912     | 0.3626570    | 0.2527932        | torch.Size([2, 512, 32])         |
| 2550    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(6)  | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1616912     | 0.3626570    | 0.2527932        | torch.Size([2, 512, 32])         |
| 2550    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(6)  | output              | qint16        | 0.0000141 | 0.2637273    | 0.4318014     | 0.3626567    | 0.0009807        | torch.Size([2, 512, 1])          |
| 2551    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(6)              | input_0             | qint8         | 0.0172935 | 0.0000000    | 2.1616912     | 0.3626570    | 0.2527932        | torch.Size([2, 512, 32])         |
| 2551    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(6)              | input_1             | qint16        | 0.0000141 | 0.2637273    | 0.4318014     | 0.3626567    | 0.0009807        | torch.Size([2, 512, 1])          |
| 2551    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(6)              | output              | qint16        | 0.0000617 | -0.4317828   | 1.8866346     | -0.0000011   | 0.2518142        | torch.Size([2, 512, 32])         |
| 2552    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(6)              | input_0             | qint16        | 0.0000617 | -0.4317828   | 1.8866346     | -0.0000011   | 0.2518142        | torch.Size([2, 512, 32])         |
| 2552    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(6)              | input_1             | qint16        | 0.0000617 | -0.4317828   | 1.8866346     | -0.0000011   | 0.2518142        | torch.Size([2, 512, 32])         |
| 2552    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(6)              | output              | qint16        | 0.0001252 | 0.0000000    | 3.5594084     | 0.2518057    | 0.1794211        | torch.Size([2, 512, 32])         |
| 2553    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(6)    | input_0             | qint16        | 0.0001252 | 0.0000000    | 3.5594084     | 0.2518057    | 0.1794211        | torch.Size([2, 512, 32])         |
| 2553    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(6)    | output              | qint16        | 0.0000132 | 0.1529912    | 0.3471240     | 0.2518061    | 0.0033608        | torch.Size([2, 512, 1])          |
| 2554    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(6)            | input               | qint16        | 0.0000132 | 0.1529912    | 0.3471240     | 0.2518061    | 0.0033608        | torch.Size([2, 512, 1])          |
| 2554    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.5.rsqrt(6)            | output              | qint16        | 0.0000777 | 1.6972939    | 2.5457854     | 2.0378175    | 0.0684713        | torch.Size([2, 512, 1])          |
| 2555    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(6)          | input_0             | qint16        | 0.0000617 | -0.4317828   | 1.8866346     | -0.0000011   | 0.2518142        | torch.Size([2, 512, 32])         |
| 2555    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(6)          | input_1             | qint16        | 0.0000777 | 1.6972939    | 2.5457854     | 2.0378175    | 0.0684713        | torch.Size([2, 512, 1])          |
| 2555    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(6)          | output              | qint16        | 0.0001125 | -0.9112657   | 3.6849864     | -0.0000577   | 0.9995290        | torch.Size([2, 512, 32])         |
| 2556    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(6)     | input               | torch.float32 |           | 0.8191299    | 1.0923718     | 0.9808199    | 0.0031231        | torch.Size([32])                 |
| 2556    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(6)     | output              | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 2557    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(6)       | input_0             | qint16        | 0.0001125 | -0.9112657   | 3.6849864     | -0.0000577   | 0.9995290        | torch.Size([2, 512, 32])         |
| 2557    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(6)       | input_1             | qint16        | 0.0000333 | 0.8191247    | 1.0923551     | 0.9808187    | 0.0031228        | torch.Size([32])                 |
| 2557    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(6)       | output              | qint16        | 0.0001113 | -0.9194291   | 3.5213978     | 0.0093902    | 0.9944724        | torch.Size([2, 512, 32])         |
| 2558    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(6)       | input               | torch.float32 |           | -0.0704119   | 0.0788569     | 0.0097621    | 0.0015200        | torch.Size([32])                 |
| 2558    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(6)       | output              | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 2559    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(6)         | input_0             | qint16        | 0.0001113 | -0.9194291   | 3.5213978     | 0.0093902    | 0.9944724        | torch.Size([2, 512, 32])         |
| 2559    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(6)         | input_1             | qint16        | 0.0000024 | -0.0704110   | 0.0788556     | 0.0097622    | 0.0015200        | torch.Size([32])                 |
| 2559    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(6)         | output              | qint8         | 0.0262611 | -0.8928760   | 3.3351545     | 0.0190498    | 0.9671333        | torch.Size([2, 512, 32])         |
| 2560    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(6)                  | input               | qint8         | 0.0262611 | -0.8928760   | 3.3351545     | 0.0190498    | 0.9671333        | torch.Size([2, 512, 32])         |
| 2560    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(6)                  | weight              | torch.float32 |           | -0.5712157   | 0.5219681     | -0.0062917   | 0.0166056        | torch.Size([32, 32])             |
| 2560    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(6)                  | bias                | torch.float32 |           | -0.1649730   | 0.2318604     | 0.0253026    | 0.0136139        | torch.Size([32])                 |
| 2560    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.6(6)                  | output              | torch.float32 |           | -4.1177902   | 2.6330574     | -0.1538105   | 1.2691293        | torch.Size([2, 512, 32])         |
| 2561    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(6)                  | input               | torch.float32 |           | -4.1177902   | 2.6330574     | -0.1538105   | 1.2691293        | torch.Size([2, 512, 32])         |
| 2561    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.7(6)                  | output              | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3692503    | 0.2681906        | torch.Size([2, 512, 32])         |
| 2562    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(6)  | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3692503    | 0.2681906        | torch.Size([2, 512, 32])         |
| 2562    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(6)  | output              | qint16        | 0.0000154 | 0.1895597    | 0.4795143     | 0.3692498    | 0.0085100        | torch.Size([2, 512, 1])          |
| 2563    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(6)              | input_0             | qint8         | 0.0188970 | 0.0000000    | 2.3999181     | 0.3692503    | 0.2681906        | torch.Size([2, 512, 32])         |
| 2563    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(6)              | input_1             | qint16        | 0.0000154 | 0.1895597    | 0.4795143     | 0.3692498    | 0.0085100        | torch.Size([2, 512, 1])          |
| 2563    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(6)              | output              | qint16        | 0.0000636 | -0.4795335   | 2.0154917     | 0.0000010    | 0.2596888        | torch.Size([2, 512, 32])         |
| 2564    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(6)              | input_0             | qint16        | 0.0000636 | -0.4795335   | 2.0154917     | 0.0000010    | 0.2596888        | torch.Size([2, 512, 32])         |
| 2564    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(6)              | input_1             | qint16        | 0.0000636 | -0.4795335   | 2.0154917     | 0.0000010    | 0.2596888        | torch.Size([2, 512, 32])         |
| 2564    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(6)              | output              | qint16        | 0.0001333 | 0.0000000    | 4.0621991     | 0.2596871    | 0.2723300        | torch.Size([2, 512, 32])         |
| 2565    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(6)    | input_0             | qint16        | 0.0001333 | 0.0000000    | 4.0621991     | 0.2596871    | 0.2723300        | torch.Size([2, 512, 32])         |
| 2565    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(6)    | output              | qint16        | 0.0000116 | 0.1363609    | 0.3784634     | 0.2596568    | 0.0042143        | torch.Size([2, 512, 1])          |
| 2566    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(6)            | input               | qint16        | 0.0000116 | 0.1363609    | 0.3784634     | 0.2596568    | 0.0042143        | torch.Size([2, 512, 1])          |
| 2566    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.8.rsqrt(6)            | output              | qint16        | 0.0000821 | 1.6254737    | 2.6913540     | 2.0237556    | 0.1022225        | torch.Size([2, 512, 1])          |
| 2567    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(6)          | input_0             | qint16        | 0.0000636 | -0.4795335   | 2.0154917     | 0.0000010    | 0.2596888        | torch.Size([2, 512, 32])         |
| 2567    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(6)          | input_1             | qint16        | 0.0000821 | 1.6254737    | 2.6913540     | 2.0237556    | 0.1022225        | torch.Size([2, 512, 1])          |
| 2567    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(6)          | output              | qint16        | 0.0001195 | -0.9471664   | 3.7896223     | -0.0000014   | 1.0000218        | torch.Size([2, 512, 32])         |
| 2568    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(6)     | input               | torch.float32 |           | 0.8903234    | 1.1315480     | 0.9912031    | 0.0026835        | torch.Size([32])                 |
| 2568    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(6)     | output              | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 2569    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(6)       | input_0             | qint16        | 0.0001195 | -0.9471664   | 3.7896223     | -0.0000014   | 1.0000218        | torch.Size([2, 512, 32])         |
| 2569    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(6)       | input_1             | qint16        | 0.0000345 | 0.8903204    | 1.1315308     | 0.9912042    | 0.0026835        | torch.Size([32])                 |
| 2569    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(6)       | output              | qint16        | 0.0001226 | -1.0717149   | 3.9048221     | 0.0047552    | 1.0255283        | torch.Size([2, 512, 32])         |
| 2570    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(6)       | input               | torch.float32 |           | -0.0586081   | 0.0779655     | 0.0041962    | 0.0015323        | torch.Size([32])                 |
| 2570    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(6)       | output              | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 2571    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(6)         | input_0             | qint16        | 0.0001226 | -1.0717149   | 3.9048221     | 0.0047552    | 1.0255283        | torch.Size([2, 512, 32])         |
| 2571    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(6)         | input_1             | qint16        | 0.0000024 | -0.0586082   | 0.0779643     | 0.0041960    | 0.0015323        | torch.Size([32])                 |
| 2571    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(6)         | output              | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0092978    | 1.0047754        | torch.Size([2, 512, 32])         |
| 2572    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(6)                  | input               | qint8         | 0.0302522 | -1.0285763   | 3.8420348     | 0.0092978    | 1.0047754        | torch.Size([2, 512, 32])         |
| 2572    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(6)                  | weight              | torch.float32 |           | -0.3204980   | 0.3365203     | -0.0020388   | 0.0145364        | torch.Size([32, 32])             |
| 2572    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(6)                  | bias                | torch.float32 |           | -0.1559148   | 0.2119379     | 0.0091616    | 0.0105488        | torch.Size([32])                 |
| 2572    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.size_fc.9(6)                  | output              | torch.float32 |           | -2.2637768   | 2.6702924     | 0.0153041    | 0.7850285        | torch.Size([2, 512, 32])         |
| 2573    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(6)                 | input               | torch.float32 |           | -2.2637768   | 2.6702924     | 0.0153041    | 0.7850285        | torch.Size([2, 512, 32])         |
| 2573    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.size_fc.10(6)                 | output              | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3552548    | 0.2742819        | torch.Size([2, 512, 32])         |
| 2574    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(6) | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3552548    | 0.2742819        | torch.Size([2, 512, 32])         |
| 2574    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(6) | output              | qint16        | 0.0000157 | 0.2544906    | 0.5130996     | 0.3534452    | 0.0022568        | torch.Size([2, 512, 1])          |
| 2575    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(6)             | input_0             | qint8         | 0.0200096 | 0.0000000    | 2.5412204     | 0.3552548    | 0.2742819        | torch.Size([2, 512, 32])         |
| 2575    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(6)             | input_1             | qint16        | 0.0000157 | 0.2544906    | 0.5130996     | 0.3534452    | 0.0022568        | torch.Size([2, 512, 1])          |
| 2575    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(6)             | output              | qint16        | 0.0000689 | -0.5131254   | 2.1610141     | 0.0018078    | 0.2714504        | torch.Size([2, 512, 32])         |
| 2576    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(6)             | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1610141     | 0.0018078    | 0.2714504        | torch.Size([2, 512, 32])         |
| 2576    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(6)             | input_1             | qint16        | 0.0000689 | -0.5131254   | 2.1610141     | 0.0018078    | 0.2714504        | torch.Size([2, 512, 32])         |
| 2576    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(6)             | output              | qint16        | 0.0001557 | 0.0000000    | 4.6700044     | 0.2714503    | 0.3222677        | torch.Size([2, 512, 32])         |
| 2577    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(6)   | input_0             | qint16        | 0.0001557 | 0.0000000    | 4.6700044     | 0.2714503    | 0.3222677        | torch.Size([2, 512, 32])         |
| 2577    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(6)   | output              | qint16        | 0.0000123 | 0.1806791    | 0.3951588     | 0.2714501    | 0.0014414        | torch.Size([2, 512, 1])          |
| 2578    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(6)           | input               | qint16        | 0.0000123 | 0.1806791    | 0.3951588     | 0.2714501    | 0.0014414        | torch.Size([2, 512, 1])          |
| 2578    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.size_fc.11.rsqrt(6)           | output              | qint16        | 0.0000803 | 1.5907621    | 2.3525321     | 1.9330623    | 0.0176154        | torch.Size([2, 512, 1])          |
| 2579    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(6)         | input_0             | qint16        | 0.0000689 | -0.5131254   | 2.1610141     | 0.0018078    | 0.2714504        | torch.Size([2, 512, 32])         |
| 2579    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(6)         | input_1             | qint16        | 0.0000803 | 1.5907621    | 2.3525321     | 1.9330623    | 0.0176154        | torch.Size([2, 512, 1])          |
| 2579    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(6)         | output              | qint16        | 0.0001207 | -1.0354456   | 3.9475155     | 0.0033888    | 0.9999637        | torch.Size([2, 512, 32])         |
| 2580    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(6)    | input               | torch.float32 |           | 0.8289159    | 1.6609058     | 1.2561316    | 0.0353652        | torch.Size([32])                 |
| 2580    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(6)    | output              | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 2581    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(6)      | input_0             | qint16        | 0.0001207 | -1.0354456   | 3.9475155     | 0.0033888    | 0.9999637        | torch.Size([2, 512, 32])         |
| 2581    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(6)      | input_1             | qint16        | 0.0000507 | 0.8288943    | 1.6608806     | 1.2561259    | 0.0353652        | torch.Size([32])                 |
| 2581    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(6)      | output              | qint16        | 0.0001642 | -1.7197134   | 4.9914718     | -0.0227836   | 1.4572822        | torch.Size([2, 512, 32])         |
| 2582    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(6)      | input               | torch.float32 |           | -0.1194881   | 0.2576658     | 0.0445686    | 0.0113612        | torch.Size([32])                 |
| 2582    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(6)      | output              | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 2583    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(6)        | input_0             | qint16        | 0.0001642 | -1.7197134   | 4.9914718     | -0.0227836   | 1.4572822        | torch.Size([2, 512, 32])         |
| 2583    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(6)        | input_1             | qint16        | 0.0000079 | -0.1194852   | 0.2576619     | 0.0445689    | 0.0113611        | torch.Size([32])                 |
| 2583    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(6)        | output              | qint8         | 0.0385920 | -1.6594547   | 4.9011803     | 0.0218104    | 1.3772231        | torch.Size([2, 512, 32])         |
| 2584    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2584    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017906 | -1.0367541   | 1.1388180     | -0.0181245   | 0.2362428        | torch.Size([2, 512, 2])          |
| 2585    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(6)                   | input               | qint16        | 0.0017906 | -1.0367541   | 1.1388180     | -0.0181245   | 0.2362428        | torch.Size([2, 512, 2])          |
| 2585    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(6)                   | weight              | torch.float32 |           | -0.7023237   | 0.7394427     | 0.0490668    | 0.1972211        | torch.Size([32, 2])              |
| 2585    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(6)                   | bias                | torch.float32 |           | -0.7971504   | 0.6681666     | -0.1171320   | 0.1641774        | torch.Size([32])                 |
| 2585    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.0(6)                   | output              | torch.float32 |           | -1.6350799   | 1.3546352     | -0.1199704   | 0.2533004        | torch.Size([2, 512, 32])         |
| 2586    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(6)                   | input               | torch.float32 |           | -1.6350799   | 1.3546352     | -0.1199704   | 0.2533004        | torch.Size([2, 512, 32])         |
| 2586    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.1(6)                   | output              | qint8         | 0.0115854 | 0.0000000    | 1.3554963     | 0.1539960    | 0.0692163        | torch.Size([2, 512, 32])         |
| 2587    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(6)   | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.3554963     | 0.1539960    | 0.0692163        | torch.Size([2, 512, 32])         |
| 2587    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(6)   | output              | qint16        | 0.0000105 | 0.1082505    | 0.2378687     | 0.1539964    | 0.0011267        | torch.Size([2, 512, 1])          |
| 2588    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(6)               | input_0             | qint8         | 0.0115854 | 0.0000000    | 1.3554963     | 0.1539960    | 0.0692163        | torch.Size([2, 512, 32])         |
| 2588    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(6)               | input_1             | qint16        | 0.0000105 | 0.1082505    | 0.2378687     | 0.1539964    | 0.0011267        | torch.Size([2, 512, 1])          |
| 2588    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(6)               | output              | qint16        | 0.0000395 | -0.2378854   | 1.1190697     | 0.0000002    | 0.0680906        | torch.Size([2, 512, 32])         |
| 2589    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(6)               | input_0             | qint16        | 0.0000395 | -0.2378854   | 1.1190697     | 0.0000002    | 0.0680906        | torch.Size([2, 512, 32])         |
| 2589    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(6)               | input_1             | qint16        | 0.0000395 | -0.2378854   | 1.1190697     | 0.0000002    | 0.0680906        | torch.Size([2, 512, 32])         |
| 2589    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(6)               | output              | qint16        | 0.0000524 | 0.0000000    | 1.2523071     | 0.0680897    | 0.0172412        | torch.Size([2, 512, 32])         |
| 2590    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(6)     | input_0             | qint16        | 0.0000524 | 0.0000000    | 1.2523071     | 0.0680897    | 0.0172412        | torch.Size([2, 512, 32])         |
| 2590    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(6)     | output              | qint16        | 0.0000071 | 0.0402978    | 0.1286543     | 0.0680904    | 0.0004773        | torch.Size([2, 512, 1])          |
| 2591    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(6)             | input               | qint16        | 0.0000071 | 0.0402978    | 0.1286543     | 0.0680904    | 0.0004773        | torch.Size([2, 512, 1])          |
| 2591    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.2.rsqrt(6)             | output              | qint16        | 0.0001514 | 2.7877924    | 4.9613075     | 3.9632776    | 0.3168582        | torch.Size([2, 512, 1])          |
| 2592    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(6)           | input_0             | qint16        | 0.0000395 | -0.2378854   | 1.1190697     | 0.0000002    | 0.0680906        | torch.Size([2, 512, 32])         |
| 2592    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(6)           | input_1             | qint16        | 0.0001514 | 2.7877924    | 4.9613075     | 3.9632776    | 0.3168582        | torch.Size([2, 512, 1])          |
| 2592    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(6)           | output              | qint16        | 0.0001206 | -0.7543806   | 3.9524767     | -0.0000325   | 0.9995238        | torch.Size([2, 512, 32])         |
| 2593    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(6)      | input               | torch.float32 |           | 0.8947600    | 1.1748335     | 0.9865216    | 0.0041537        | torch.Size([32])                 |
| 2593    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(6)      | output              | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 2594    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(6)        | input_0             | qint16        | 0.0001206 | -0.7543806   | 3.9524767     | -0.0000325   | 0.9995238        | torch.Size([2, 512, 32])         |
| 2594    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(6)        | input_1             | qint16        | 0.0000359 | 0.8947629    | 1.1748155     | 0.9865247    | 0.0041531        | torch.Size([32])                 |
| 2594    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(6)        | output              | qint16        | 0.0001306 | -0.8581456   | 4.2798867     | 0.0034778    | 0.9985540        | torch.Size([2, 512, 32])         |
| 2595    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(6)        | input               | torch.float32 |           | -0.0879948   | 0.1319895     | 0.0285039    | 0.0034159        | torch.Size([32])                 |
| 2595    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(6)        | output              | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 2596    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(6)          | input_0             | qint16        | 0.0001306 | -0.8581456   | 4.2798867     | 0.0034778    | 0.9985540        | torch.Size([2, 512, 32])         |
| 2596    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(6)          | input_1             | qint16        | 0.0000040 | -0.0879930   | 0.1319875     | 0.0285044    | 0.0034159        | torch.Size([32])                 |
| 2596    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(6)          | output              | qint8         | 0.0302674 | -0.8172185   | 3.8439538     | 0.0318025    | 0.9195036        | torch.Size([2, 512, 32])         |
| 2597    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(6)                   | input               | qint8         | 0.0302674 | -0.8172185   | 3.8439538     | 0.0318025    | 0.9195036        | torch.Size([2, 512, 32])         |
| 2597    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(6)                   | weight              | torch.float32 |           | -1.0547366   | 0.5812716     | 0.0070099    | 0.0187704        | torch.Size([32, 32])             |
| 2597    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(6)                   | bias                | torch.float32 |           | -0.2183180   | 0.1396109     | -0.0140744   | 0.0103446        | torch.Size([32])                 |
| 2597    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.3(6)                   | output              | torch.float32 |           | -4.9187975   | 1.6897694     | -0.4795849   | 1.4007375        | torch.Size([2, 512, 32])         |
| 2598    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(6)                   | input               | torch.float32 |           | -4.9187975   | 1.6897694     | -0.4795849   | 1.4007375        | torch.Size([2, 512, 32])         |
| 2598    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.4(6)                   | output              | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2249792    | 0.1229561        | torch.Size([2, 512, 32])         |
| 2599    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(6)   | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2249792    | 0.1229561        | torch.Size([2, 512, 32])         |
| 2599    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(6)   | output              | qint16        | 0.0000116 | 0.1696848    | 0.3584630     | 0.2249790    | 0.0014333        | torch.Size([2, 512, 1])          |
| 2600    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(6)               | input_0             | qint8         | 0.0142143 | 0.0000000    | 1.6915014     | 0.2249792    | 0.1229561        | torch.Size([2, 512, 32])         |
| 2600    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(6)               | input_1             | qint16        | 0.0000116 | 0.1696848    | 0.3584630     | 0.2249790    | 0.0014333        | torch.Size([2, 512, 1])          |
| 2600    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(6)               | output              | qint16        | 0.0000516 | -0.3584414   | 1.4245257     | -0.0000008   | 0.1215236        | torch.Size([2, 512, 32])         |
| 2601    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(6)               | input_0             | qint16        | 0.0000516 | -0.3584414   | 1.4245257     | -0.0000008   | 0.1215236        | torch.Size([2, 512, 32])         |
| 2601    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(6)               | input_1             | qint16        | 0.0000516 | -0.3584414   | 1.4245257     | -0.0000008   | 0.1215236        | torch.Size([2, 512, 32])         |
| 2601    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(6)               | output              | qint16        | 0.0000889 | 0.0000000    | 2.0292974     | 0.1215207    | 0.0489045        | torch.Size([2, 512, 32])         |
| 2602    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(6)     | input_0             | qint16        | 0.0000889 | 0.0000000    | 2.0292974     | 0.1215207    | 0.0489045        | torch.Size([2, 512, 32])         |
| 2602    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(6)     | output              | qint16        | 0.0000089 | 0.0743818    | 0.2335940     | 0.1215208    | 0.0010882        | torch.Size([2, 512, 1])          |
| 2603    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(6)             | input               | qint16        | 0.0000089 | 0.0743818    | 0.2335940     | 0.1215208    | 0.0010882        | torch.Size([2, 512, 1])          |
| 2603    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.5.rsqrt(6)             | output              | qint16        | 0.0001114 | 2.0690060    | 3.6515737     | 2.9360633    | 0.1172174        | torch.Size([2, 512, 1])          |
| 2604    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(6)           | input_0             | qint16        | 0.0000516 | -0.3584414   | 1.4245257     | -0.0000008   | 0.1215236        | torch.Size([2, 512, 32])         |
| 2604    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(6)           | input_1             | qint16        | 0.0001114 | 2.0690060    | 3.6515737     | 2.9360633    | 0.1172174        | torch.Size([2, 512, 1])          |
| 2604    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(6)           | output              | qint16        | 0.0001083 | -0.8445604   | 3.5501876     | -0.0000105   | 0.9999009        | torch.Size([2, 512, 32])         |
| 2605    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(6)      | input               | torch.float32 |           | 0.8550419    | 1.1198171     | 0.9805899    | 0.0036729        | torch.Size([32])                 |
| 2605    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(6)      | output              | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 2606    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(6)        | input_0             | qint16        | 0.0001083 | -0.8445604   | 3.5501876     | -0.0000105   | 0.9999009        | torch.Size([2, 512, 32])         |
| 2606    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(6)        | input_1             | qint16        | 0.0000342 | 0.8550492    | 1.1198000     | 0.9805875    | 0.0036728        | torch.Size([32])                 |
| 2606    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(6)        | output              | qint16        | 0.0001106 | -0.9185085   | 3.6229506     | -0.0018153   | 0.9723719        | torch.Size([2, 512, 32])         |
| 2607    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(6)        | input               | torch.float32 |           | -0.0792132   | 0.1045145     | 0.0242442    | 0.0021608        | torch.Size([32])                 |
| 2607    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(6)        | output              | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 2608    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(6)          | input_0             | qint16        | 0.0001106 | -0.9185085   | 3.6229506     | -0.0018153   | 0.9723719        | torch.Size([2, 512, 32])         |
| 2608    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(6)          | input_1             | qint16        | 0.0000032 | -0.0792132   | 0.1045129     | 0.0242443    | 0.0021608        | torch.Size([32])                 |
| 2608    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(6)          | output              | qint8         | 0.0268612 | -0.8595570   | 3.4113667     | 0.0217468    | 0.9225408        | torch.Size([2, 512, 32])         |
| 2609    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(6)                   | input               | qint8         | 0.0268612 | -0.8595570   | 3.4113667     | 0.0217468    | 0.9225408        | torch.Size([2, 512, 32])         |
| 2609    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(6)                   | weight              | torch.float32 |           | -0.4480607   | 0.3678726     | 0.0004879    | 0.0160908        | torch.Size([32, 32])             |
| 2609    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(6)                   | bias                | torch.float32 |           | -0.1861591   | 0.1739754     | 0.0155446    | 0.0137690        | torch.Size([32])                 |
| 2609    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.6(6)                   | output              | torch.float32 |           | -3.6919556   | 2.4407692     | -0.2321382   | 1.2792883        | torch.Size([2, 512, 32])         |
| 2610    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(6)                   | input               | torch.float32 |           | -3.6919556   | 2.4407692     | -0.2321382   | 1.2792883        | torch.Size([2, 512, 32])         |
| 2610    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.7(6)                   | output              | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3304751    | 0.2102521        | torch.Size([2, 512, 32])         |
| 2611    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(6)   | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3304751    | 0.2102521        | torch.Size([2, 512, 32])         |
| 2611    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(6)   | output              | qint16        | 0.0000156 | 0.2402986    | 0.4880910     | 0.3304750    | 0.0013093        | torch.Size([2, 512, 1])          |
| 2612    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(6)               | input_0             | qint8         | 0.0183966 | 0.0000000    | 2.3363676     | 0.3304751    | 0.2102521        | torch.Size([2, 512, 32])         |
| 2612    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(6)               | input_1             | qint16        | 0.0000156 | 0.2402986    | 0.4880910     | 0.3304750    | 0.0013093        | torch.Size([2, 512, 1])          |
| 2612    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(6)               | output              | qint16        | 0.0000645 | -0.4880934   | 2.0799463     | -0.0000007   | 0.2089445        | torch.Size([2, 512, 32])         |
| 2613    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(6)               | input_0             | qint16        | 0.0000645 | -0.4880934   | 2.0799463     | -0.0000007   | 0.2089445        | torch.Size([2, 512, 32])         |
| 2613    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(6)               | input_1             | qint16        | 0.0000645 | -0.4880934   | 2.0799463     | -0.0000007   | 0.2089445        | torch.Size([2, 512, 32])         |
| 2613    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(6)               | output              | qint16        | 0.0001365 | 0.0000000    | 4.3262181     | 0.2089395    | 0.1508419        | torch.Size([2, 512, 32])         |
| 2614    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(6)     | input_0             | qint16        | 0.0001365 | 0.0000000    | 4.3262181     | 0.2089395    | 0.1508419        | torch.Size([2, 512, 32])         |
| 2614    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(6)     | output              | qint16        | 0.0000123 | 0.1602064    | 0.3574433     | 0.2089401    | 0.0007301        | torch.Size([2, 512, 1])          |
| 2615    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(6)             | input               | qint16        | 0.0000123 | 0.1602064    | 0.3574433     | 0.2089401    | 0.0007301        | torch.Size([2, 512, 1])          |
| 2615    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.8.rsqrt(6)             | output              | qint16        | 0.0000749 | 1.6726017    | 2.4551423     | 2.1999412    | 0.0178544        | torch.Size([2, 512, 1])          |
| 2616    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(6)           | input_0             | qint16        | 0.0000645 | -0.4880934   | 2.0799463     | -0.0000007   | 0.2089445        | torch.Size([2, 512, 32])         |
| 2616    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(6)           | input_1             | qint16        | 0.0000749 | 1.6726017    | 2.4551423     | 2.1999412    | 0.0178544        | torch.Size([2, 512, 1])          |
| 2616    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(6)           | output              | qint16        | 0.0001267 | -0.8639228   | 4.1501474     | -0.0002789   | 0.9971693        | torch.Size([2, 512, 32])         |
| 2617    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(6)      | input               | torch.float32 |           | 0.8469434    | 1.1090456     | 0.9866461    | 0.0031007        | torch.Size([32])                 |
| 2617    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(6)      | output              | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 2618    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(6)        | input_0             | qint16        | 0.0001267 | -0.8639228   | 4.1501474     | -0.0002789   | 0.9971693        | torch.Size([2, 512, 32])         |
| 2618    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(6)        | input_1             | qint16        | 0.0000338 | 0.8469599    | 1.1090287     | 0.9866493    | 0.0031003        | torch.Size([32])                 |
| 2618    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(6)        | output              | qint16        | 0.0001376 | -0.9427418   | 4.4246821     | -0.0005248   | 0.9946056        | torch.Size([2, 512, 32])         |
| 2619    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(6)        | input               | torch.float32 |           | -0.0626723   | 0.0887763     | 0.0071697    | 0.0011301        | torch.Size([32])                 |
| 2619    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(6)        | output              | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 2620    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(6)          | input_0             | qint16        | 0.0001376 | -0.9427418   | 4.4246821     | -0.0005248   | 0.9946056        | torch.Size([2, 512, 32])         |
| 2620    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(6)          | input_1             | qint16        | 0.0000027 | -0.0626711   | 0.0887750     | 0.0071699    | 0.0011301        | torch.Size([32])                 |
| 2620    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(6)          | output              | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0059945    | 0.9694970        | torch.Size([2, 512, 32])         |
| 2621    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(6)                   | input               | qint8         | 0.0326290 | -0.9462408   | 4.1438823     | 0.0059945    | 0.9694970        | torch.Size([2, 512, 32])         |
| 2621    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(6)                   | weight              | torch.float32 |           | -0.5597425   | 0.7001730     | 0.0015679    | 0.0160348        | torch.Size([32, 32])             |
| 2621    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(6)                   | bias                | torch.float32 |           | -0.1810580   | 0.1736723     | -0.0279047   | 0.0091159        | torch.Size([32])                 |
| 2621    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.yaw_fc.9(6)                   | output              | torch.float32 |           | -4.3381438   | 3.4833324     | -0.2435418   | 1.1335706        | torch.Size([2, 512, 32])         |
| 2622    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(6)                  | input               | torch.float32 |           | -4.3381438   | 3.4833324     | -0.2435418   | 1.1335706        | torch.Size([2, 512, 32])         |
| 2622    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.yaw_fc.10(6)                  | output              | qint8         | 0.0271917 | 0.0000000    | 3.4533420     | 0.2809062    | 0.2820806        | torch.Size([2, 512, 32])         |
| 2623    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(6)  | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.4533420     | 0.2809062    | 0.2820806        | torch.Size([2, 512, 32])         |
| 2623    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(6)  | output              | qint16        | 0.0000121 | 0.2149898    | 0.3979983     | 0.2809028    | 0.0017322        | torch.Size([2, 512, 1])          |
| 2624    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(6)              | input_0             | qint8         | 0.0271917 | 0.0000000    | 3.4533420     | 0.2809062    | 0.2820806        | torch.Size([2, 512, 32])         |
| 2624    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(6)              | input_1             | qint16        | 0.0000121 | 0.2149898    | 0.3979983     | 0.2809028    | 0.0017322        | torch.Size([2, 512, 1])          |
| 2624    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(6)              | output              | qint16        | 0.0000976 | -0.3980255   | 3.1643906     | -0.0000005   | 0.2803521        | torch.Size([2, 512, 32])         |
| 2625    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(6)              | input_0             | qint16        | 0.0000976 | -0.3980255   | 3.1643906     | -0.0000005   | 0.2803521        | torch.Size([2, 512, 32])         |
| 2625    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(6)              | input_1             | qint16        | 0.0000976 | -0.3980255   | 3.1643906     | -0.0000005   | 0.2803521        | torch.Size([2, 512, 32])         |
| 2625    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(6)              | output              | qint16        | 0.0003122 | 0.0000000    | 10.0133667    | 0.2803462    | 0.7126965        | torch.Size([2, 512, 32])         |
| 2626    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(6)    | input_0             | qint16        | 0.0003122 | 0.0000000    | 10.0133667    | 0.2803462    | 0.7126965        | torch.Size([2, 512, 32])         |
| 2626    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(6)    | output              | qint16        | 0.0000136 | 0.1375637    | 0.4466016     | 0.2803444    | 0.0059074        | torch.Size([2, 512, 1])          |
| 2627    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(6)            | input               | qint16        | 0.0000136 | 0.1375637    | 0.4466016     | 0.2803444    | 0.0059074        | torch.Size([2, 512, 1])          |
| 2627    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.yaw_fc.11.rsqrt(6)            | output              | qint16        | 0.0000802 | 1.4963876    | 2.6273782     | 1.9510026    | 0.0922954        | torch.Size([2, 512, 1])          |
| 2628    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(6)          | input_0             | qint16        | 0.0000976 | -0.3980255   | 3.1643906     | -0.0000005   | 0.2803521        | torch.Size([2, 512, 32])         |
| 2628    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(6)          | input_1             | qint16        | 0.0000802 | 1.4963876    | 2.6273782     | 1.9510026    | 0.0922954        | torch.Size([2, 512, 1])          |
| 2628    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(6)          | output              | qint16        | 0.0001482 | -0.7892564   | 4.8386588     | -0.0000059   | 0.9997259        | torch.Size([2, 512, 32])         |
| 2629    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(6)     | input               | torch.float32 |           | 0.8363900    | 1.4688344     | 1.0570920    | 0.0396277        | torch.Size([32])                 |
| 2629    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(6)     | output              | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 2630    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(6)       | input_0             | qint16        | 0.0001482 | -0.7892564   | 4.8386588     | -0.0000059   | 0.9997259        | torch.Size([2, 512, 32])         |
| 2630    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(6)       | input_1             | qint16        | 0.0000448 | 0.8364074    | 1.4688120     | 1.0570912    | 0.0396254        | torch.Size([32])                 |
| 2630    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(6)       | output              | qint16        | 0.0001637 | -1.1593152   | 4.8689275     | -0.0517939   | 0.9404231        | torch.Size([2, 512, 32])         |
| 2631    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(6)       | input               | torch.float32 |           | -0.1492936   | 0.2842544     | 0.0803791    | 0.0109446        | torch.Size([32])                 |
| 2631    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(6)       | output              | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 2632    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(6)         | input_0             | qint16        | 0.0001637 | -1.1593152   | 4.8689275     | -0.0517939   | 0.9404231        | torch.Size([2, 512, 32])         |
| 2632    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(6)         | input_1             | qint16        | 0.0000087 | -0.1492948   | 0.2842501     | 0.0803791    | 0.0109447        | torch.Size([32])                 |
| 2632    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(6)         | output              | qint8         | 0.0373904 | -0.9721510   | 4.7485838     | 0.0290435    | 0.8741193        | torch.Size([2, 512, 32])         |
| 2633    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | input_0             | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2633    | torch.Tensor.__getitem__                                                    | head.anchor_encoder                               | output              | qint16        | 0.0017906 | -11.8823843  | 9.4238977     | -0.2258043   | 2.2937431        | torch.Size([2, 512, 3])          |
| 2634    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(6)                   | input               | qint16        | 0.0017906 | -11.8823843  | 9.4238977     | -0.2258043   | 2.2937431        | torch.Size([2, 512, 3])          |
| 2634    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(6)                   | weight              | torch.float32 |           | -1.0475703   | 0.9848034     | -0.0054673   | 0.2080412        | torch.Size([64, 3])              |
| 2634    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(6)                   | bias                | torch.float32 |           | -0.8030427   | 0.5068271     | -0.0504076   | 0.1294928        | torch.Size([64])                 |
| 2634    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.0(6)                   | output              | torch.float32 |           | -11.3768530  | 12.9607649    | -0.0989956   | 1.6937283        | torch.Size([2, 512, 64])         |
| 2635    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(6)                   | input               | torch.float32 |           | -11.3768530  | 12.9607649    | -0.0989956   | 1.6937283        | torch.Size([2, 512, 64])         |
| 2635    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.1(6)                   | output              | qint8         | 0.0729980 | 0.0000000    | 9.2707472     | 0.2926181    | 0.6373982        | torch.Size([2, 512, 64])         |
| 2636    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(6)   | input_0             | qint8         | 0.0729980 | 0.0000000    | 9.2707472     | 0.2926181    | 0.6373982        | torch.Size([2, 512, 64])         |
| 2636    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(6)   | output              | qint16        | 0.0000685 | 0.1208711    | 2.2452281     | 0.2924904    | 0.1362088        | torch.Size([2, 512, 1])          |
| 2637    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(6)               | input_0             | qint8         | 0.0729980 | 0.0000000    | 9.2707472     | 0.2926181    | 0.6373982        | torch.Size([2, 512, 64])         |
| 2637    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(6)               | input_1             | qint16        | 0.0000685 | 0.1208711    | 2.2452281     | 0.2924904    | 0.1362088        | torch.Size([2, 512, 1])          |
| 2637    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(6)               | output              | qint16        | 0.0002902 | -2.2453439   | 7.8211222     | 0.0001273    | 0.5008095        | torch.Size([2, 512, 64])         |
| 2638    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(6)               | input_0             | qint16        | 0.0002902 | -2.2453439   | 7.8211222     | 0.0001273    | 0.5008095        | torch.Size([2, 512, 64])         |
| 2638    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(6)               | input_1             | qint16        | 0.0002902 | -2.2453439   | 7.8211222     | 0.0001273    | 0.5008095        | torch.Size([2, 512, 64])         |
| 2638    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(6)               | output              | qint16        | 0.0029551 | 0.0000000    | 61.1704750    | 0.5008872    | 9.9810562        | torch.Size([2, 512, 64])         |
| 2639    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(6)     | input_0             | qint16        | 0.0029551 | 0.0000000    | 61.1704750    | 0.5008872    | 9.9810562        | torch.Size([2, 512, 64])         |
| 2639    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(6)     | output              | qint16        | 0.0003723 | 0.0245721    | 10.9870911    | 0.5008999    | 2.7798445        | torch.Size([2, 512, 1])          |
| 2640    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(6)             | input               | qint16        | 0.0003723 | 0.0245721    | 10.9870911    | 0.5008999    | 2.7798445        | torch.Size([2, 512, 1])          |
| 2640    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.2.rsqrt(6)             | output              | qint16        | 0.0001859 | 0.3015977    | 6.0927577     | 4.0805206    | 2.9349506        | torch.Size([2, 512, 1])          |
| 2641    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(6)           | input_0             | qint16        | 0.0002902 | -2.2453439   | 7.8211222     | 0.0001273    | 0.5008095        | torch.Size([2, 512, 64])         |
| 2641    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(6)           | input_1             | qint16        | 0.0001859 | 0.3015977    | 6.0927577     | 4.0805206    | 2.9349506        | torch.Size([2, 512, 1])          |
| 2641    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(6)           | output              | qint16        | 0.0001160 | -0.9002430   | 3.7993641     | 0.0000418    | 0.9979109        | torch.Size([2, 512, 64])         |
| 2642    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(6)      | input               | torch.float32 |           | 0.8691067    | 1.1281288     | 0.9794419    | 0.0036082        | torch.Size([64])                 |
| 2642    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(6)      | output              | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 2643    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(6)        | input_0             | qint16        | 0.0001160 | -0.9002430   | 3.7993641     | 0.0000418    | 0.9979109        | torch.Size([2, 512, 64])         |
| 2643    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(6)        | input_1             | qint16        | 0.0000344 | 0.8691075    | 1.1281115     | 0.9794400    | 0.0036082        | torch.Size([64])                 |
| 2643    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(6)        | output              | qint16        | 0.0001189 | -1.0156153   | 3.7777705     | 0.0117992    | 0.9583972        | torch.Size([2, 512, 64])         |
| 2644    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(6)        | input               | torch.float32 |           | -0.1133662   | 0.1493634     | 0.0304540    | 0.0046508        | torch.Size([64])                 |
| 2644    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(6)        | output              | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 2645    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(6)          | input_0             | qint16        | 0.0001189 | -1.0156153   | 3.7777705     | 0.0117992    | 0.9583972        | torch.Size([2, 512, 64])         |
| 2645    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(6)          | input_1             | qint16        | 0.0000046 | -0.1133644   | 0.1493611     | 0.0304541    | 0.0046508        | torch.Size([64])                 |
| 2645    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(6)          | output              | qint8         | 0.0267452 | -1.0163175   | 3.3966403     | 0.0422975    | 0.8800303        | torch.Size([2, 512, 64])         |
| 2646    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(6)                   | input               | qint8         | 0.0267452 | -1.0163175   | 3.3966403     | 0.0422975    | 0.8800303        | torch.Size([2, 512, 64])         |
| 2646    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(6)                   | weight              | torch.float32 |           | -0.4523612   | 0.4813256     | -0.0014562   | 0.0096743        | torch.Size([64, 64])             |
| 2646    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(6)                   | bias                | torch.float32 |           | -0.1183558   | 0.2243176     | 0.0150283    | 0.0049289        | torch.Size([64])                 |
| 2646    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.3(6)                   | output              | torch.float32 |           | -5.4968929   | 4.5976205     | -0.3663588   | 2.1020625        | torch.Size([2, 512, 64])         |
| 2647    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(6)                   | input               | torch.float32 |           | -5.4968929   | 4.5976205     | -0.3663588   | 2.1020625        | torch.Size([2, 512, 64])         |
| 2647    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.4(6)                   | output              | qint8         | 0.0337689 | 0.0000000    | 4.2886496     | 0.3582777    | 0.2883950        | torch.Size([2, 512, 64])         |
| 2648    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(6)   | input_0             | qint8         | 0.0337689 | 0.0000000    | 4.2886496     | 0.3582777    | 0.2883950        | torch.Size([2, 512, 64])         |
| 2648    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(6)   | output              | qint16        | 0.0000195 | 0.2100092    | 0.6278861     | 0.3582780    | 0.0137265        | torch.Size([2, 512, 1])          |
| 2649    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(6)               | input_0             | qint8         | 0.0337689 | 0.0000000    | 4.2886496     | 0.3582777    | 0.2883950        | torch.Size([2, 512, 64])         |
| 2649    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(6)               | input_1             | qint16        | 0.0000195 | 0.2100092    | 0.6278861     | 0.3582780    | 0.0137265        | torch.Size([2, 512, 1])          |
| 2649    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(6)               | output              | qint16        | 0.0001376 | -0.6278468   | 3.6913867     | -0.0000022   | 0.2746826        | torch.Size([2, 512, 64])         |
| 2650    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(6)               | input_0             | qint16        | 0.0001376 | -0.6278468   | 3.6913867     | -0.0000022   | 0.2746826        | torch.Size([2, 512, 64])         |
| 2650    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(6)               | input_1             | qint16        | 0.0001376 | -0.6278468   | 3.6913867     | -0.0000022   | 0.2746826        | torch.Size([2, 512, 64])         |
| 2650    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(6)               | output              | qint16        | 0.0006236 | 0.0000000    | 13.6264763    | 0.2746768    | 0.3802993        | torch.Size([2, 512, 64])         |
| 2651    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(6)     | input_0             | qint16        | 0.0006236 | 0.0000000    | 13.6264763    | 0.2746768    | 0.3802993        | torch.Size([2, 512, 64])         |
| 2651    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(6)     | output              | qint16        | 0.0000322 | 0.0832135    | 0.8714588     | 0.2746789    | 0.0295356        | torch.Size([2, 512, 1])          |
| 2652    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(6)             | input               | qint16        | 0.0000322 | 0.0832135    | 0.8714588     | 0.2746789    | 0.0295356        | torch.Size([2, 512, 1])          |
| 2652    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.5.rsqrt(6)             | output              | qint16        | 0.0001060 | 1.0711774    | 3.4663534     | 2.2350755    | 0.5795557        | torch.Size([2, 512, 1])          |
| 2653    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(6)           | input_0             | qint16        | 0.0001376 | -0.6278468   | 3.6913867     | -0.0000022   | 0.2746826        | torch.Size([2, 512, 64])         |
| 2653    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(6)           | input_1             | qint16        | 0.0001060 | 1.0711774    | 3.4663534     | 2.2350755    | 0.5795557        | torch.Size([2, 512, 1])          |
| 2653    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(6)           | output              | qint16        | 0.0001466 | -0.8774181   | 4.4562984     | -0.0000122   | 0.9999300        | torch.Size([2, 512, 64])         |
| 2654    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(6)      | input               | torch.float32 |           | 0.8333027    | 1.1388558     | 0.9778216    | 0.0042186        | torch.Size([64])                 |
| 2654    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(6)      | output              | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 2655    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(6)        | input_0             | qint16        | 0.0001466 | -0.8774181   | 4.4562984     | -0.0000122   | 0.9999300        | torch.Size([2, 512, 64])         |
| 2655    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(6)        | input_1             | qint16        | 0.0000348 | 0.8333015    | 1.1388384     | 0.9778193    | 0.0042185        | torch.Size([64])                 |
| 2655    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(6)        | output              | qint16        | 0.0001474 | -0.9385505   | 4.3311024     | 0.0040128    | 0.9831162        | torch.Size([2, 512, 64])         |
| 2656    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(6)        | input               | torch.float32 |           | -0.0757831   | 0.1161729     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 2656    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(6)        | output              | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 2657    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(6)          | input_0             | qint16        | 0.0001474 | -0.9385505   | 4.3311024     | 0.0040128    | 0.9831162        | torch.Size([2, 512, 64])         |
| 2657    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(6)          | input_1             | qint16        | 0.0000035 | -0.0757823   | 0.1161711     | 0.0164943    | 0.0016283        | torch.Size([64])                 |
| 2657    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(6)          | output              | qint8         | 0.0350382 | -0.9109923   | 4.2746563     | 0.0210819    | 0.9465801        | torch.Size([2, 512, 64])         |
| 2658    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(6)                   | input               | qint8         | 0.0350382 | -0.9109923   | 4.2746563     | 0.0210819    | 0.9465801        | torch.Size([2, 512, 64])         |
| 2658    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(6)                   | weight              | torch.float32 |           | -0.5707353   | 0.3620123     | -0.0010372   | 0.0088292        | torch.Size([64, 64])             |
| 2658    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(6)                   | bias                | torch.float32 |           | -0.1720246   | 0.1340137     | -0.0235144   | 0.0050507        | torch.Size([64])                 |
| 2658    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.6(6)                   | output              | torch.float32 |           | -5.4164205   | 3.7209508     | -0.2854934   | 1.9211605        | torch.Size([2, 512, 64])         |
| 2659    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(6)                   | input               | torch.float32 |           | -5.4164205   | 3.7209508     | -0.2854934   | 1.9211605        | torch.Size([2, 512, 64])         |
| 2659    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.7(6)                   | output              | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4353574    | 0.4712659        | torch.Size([2, 512, 64])         |
| 2660    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(6)   | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4353574    | 0.4712659        | torch.Size([2, 512, 64])         |
| 2660    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(6)   | output              | qint16        | 0.0000166 | 0.3242096    | 0.5162124     | 0.4353584    | 0.0025838        | torch.Size([2, 512, 1])          |
| 2661    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(6)               | input_0             | qint8         | 0.0287789 | 0.0000000    | 3.6549141     | 0.4353574    | 0.4712659        | torch.Size([2, 512, 64])         |
| 2661    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(6)               | input_1             | qint16        | 0.0000166 | 0.3242096    | 0.5162124     | 0.4353584    | 0.0025838        | torch.Size([2, 512, 1])          |
| 2661    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(6)               | output              | qint16        | 0.0000988 | -0.5162169   | 3.1849682     | -0.0000005   | 0.4686852        | torch.Size([2, 512, 64])         |
| 2662    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(6)               | input_0             | qint16        | 0.0000988 | -0.5162169   | 3.1849682     | -0.0000005   | 0.4686852        | torch.Size([2, 512, 64])         |
| 2662    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(6)               | input_1             | qint16        | 0.0000988 | -0.5162169   | 3.1849682     | -0.0000005   | 0.4686852        | torch.Size([2, 512, 64])         |
| 2662    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(6)               | output              | qint16        | 0.0003201 | 0.0000000    | 10.1438894    | 0.4686633    | 0.9449810        | torch.Size([2, 512, 64])         |
| 2663    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(6)     | input_0             | qint16        | 0.0003201 | 0.0000000    | 10.1438894    | 0.4686633    | 0.9449810        | torch.Size([2, 512, 64])         |
| 2663    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(6)     | output              | qint16        | 0.0000230 | 0.2689644    | 0.7198279     | 0.4686648    | 0.0123541        | torch.Size([2, 512, 1])          |
| 2664    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(6)             | input               | qint16        | 0.0000230 | 0.2689644    | 0.7198279     | 0.4686648    | 0.0123541        | torch.Size([2, 512, 1])          |
| 2664    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.8.rsqrt(6)             | output              | qint16        | 0.0000608 | 1.1786208    | 1.9281387     | 1.4941351    | 0.0352663        | torch.Size([2, 512, 1])          |
| 2665    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(6)           | input_0             | qint16        | 0.0000988 | -0.5162169   | 3.1849682     | -0.0000005   | 0.4686852        | torch.Size([2, 512, 64])         |
| 2665    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(6)           | input_1             | qint16        | 0.0000608 | 1.1786208    | 1.9281387     | 1.4941351    | 0.0352663        | torch.Size([2, 512, 1])          |
| 2665    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(6)           | output              | qint16        | 0.0001598 | -0.7402589   | 4.2458982     | 0.0000004    | 1.0000288        | torch.Size([2, 512, 64])         |
| 2666    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(6)      | input               | torch.float32 |           | 0.8006503    | 1.1495361     | 0.9818506    | 0.0032003        | torch.Size([64])                 |
| 2666    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(6)      | output              | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 2667    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(6)        | input_0             | qint16        | 0.0001598 | -0.7402589   | 4.2458982     | 0.0000004    | 1.0000288        | torch.Size([2, 512, 64])         |
| 2667    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(6)        | input_1             | qint16        | 0.0000351 | 0.8006672    | 1.1495186     | 0.9818505    | 0.0032000        | torch.Size([64])                 |
| 2667    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(6)        | output              | qint16        | 0.0001633 | -0.8057523   | 4.4400463     | 0.0056764    | 0.9972416        | torch.Size([2, 512, 64])         |
| 2668    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(6)        | input               | torch.float32 |           | -0.0461140   | 0.1411197     | 0.0132828    | 0.0015701        | torch.Size([64])                 |
| 2668    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(6)        | output              | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 2669    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(6)          | input_0             | qint16        | 0.0001633 | -0.8057523   | 4.4400463     | 0.0056764    | 0.9972416        | torch.Size([2, 512, 64])         |
| 2669    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(6)          | input_1             | qint16        | 0.0000043 | -0.0461161   | 0.1411176     | 0.0132827    | 0.0015701        | torch.Size([64])                 |
| 2669    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(6)          | output              | qint8         | 0.0387038 | -0.8127795   | 4.4509358     | 0.0187979    | 0.9825269        | torch.Size([2, 512, 64])         |
| 2670    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(6)                   | input               | qint8         | 0.0387038 | -0.8127795   | 4.4509358     | 0.0187979    | 0.9825269        | torch.Size([2, 512, 64])         |
| 2670    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(6)                   | weight              | torch.float32 |           | -0.5701389   | 0.3477888     | 0.0006721    | 0.0085883        | torch.Size([64, 64])             |
| 2670    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(6)                   | bias                | torch.float32 |           | -0.1677032   | 0.1709885     | -0.0237130   | 0.0070098        | torch.Size([64])                 |
| 2670    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.anchor_encoder.vel_fc.9(6)                   | output              | torch.float32 |           | -4.7903147   | 7.2159615     | -0.4054872   | 1.6105318        | torch.Size([2, 512, 64])         |
| 2671    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(6)                  | input               | torch.float32 |           | -4.7903147   | 7.2159615     | -0.4054872   | 1.6105318        | torch.Size([2, 512, 64])         |
| 2671    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.anchor_encoder.vel_fc.10(6)                  | output              | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2755183    | 0.5784494        | torch.Size([2, 512, 64])         |
| 2672    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(6)  | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2755183    | 0.5784494        | torch.Size([2, 512, 64])         |
| 2672    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(6)  | output              | qint16        | 0.0000138 | 0.2038234    | 0.4050180     | 0.2755211    | 0.0021294        | torch.Size([2, 512, 1])          |
| 2673    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(6)              | input_0             | qint8         | 0.0562272 | 0.0000000    | 7.1408587     | 0.2755183    | 0.5784494        | torch.Size([2, 512, 64])         |
| 2673    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(6)              | input_1             | qint16        | 0.0000138 | 0.2038234    | 0.4050180     | 0.2755211    | 0.0021294        | torch.Size([2, 512, 1])          |
| 2673    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(6)              | output              | qint16        | 0.0002137 | -0.4051085   | 6.9370551     | -0.0000133   | 0.5763287        | torch.Size([2, 512, 64])         |
| 2674    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(6)              | input_0             | qint16        | 0.0002137 | -0.4051085   | 6.9370551     | -0.0000133   | 0.5763287        | torch.Size([2, 512, 64])         |
| 2674    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(6)              | input_1             | qint16        | 0.0002137 | -0.4051085   | 6.9370551     | -0.0000133   | 0.5763287        | torch.Size([2, 512, 64])         |
| 2674    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(6)              | output              | qint16        | 0.0014959 | 0.0000000    | 48.1224632    | 0.5763301    | 13.7751951       | torch.Size([2, 512, 64])         |
| 2675    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(6)    | input_0             | qint16        | 0.0014959 | 0.0000000    | 48.1224632    | 0.5763301    | 13.7751951       | torch.Size([2, 512, 64])         |
| 2675    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(6)    | output              | qint16        | 0.0000253 | 0.1888213    | 0.8200353     | 0.5763291    | 0.0327014        | torch.Size([2, 512, 1])          |
| 2676    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(6)            | input               | qint16        | 0.0000253 | 0.1888213    | 0.8200353     | 0.5763291    | 0.0327014        | torch.Size([2, 512, 1])          |
| 2676    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.anchor_encoder.vel_fc.11.rsqrt(6)            | output              | qint16        | 0.0000680 | 1.1042942    | 2.2290647     | 1.3864353    | 0.0881204        | torch.Size([2, 512, 1])          |
| 2677    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(6)          | input_0             | qint16        | 0.0002137 | -0.4051085   | 6.9370551     | -0.0000133   | 0.5763287        | torch.Size([2, 512, 64])         |
| 2677    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(6)          | input_1             | qint16        | 0.0000680 | 1.1042942    | 2.2290647     | 1.3864353    | 0.0881204        | torch.Size([2, 512, 1])          |
| 2677    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(6)          | output              | qint16        | 0.0002366 | -0.7071119   | 7.7517352     | -0.0000261   | 0.9992076        | torch.Size([2, 512, 64])         |
| 2678    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(6)     | input               | torch.float32 |           | 0.7297163    | 1.2824999     | 1.0134131    | 0.0161719        | torch.Size([64])                 |
| 2678    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(6)     | output              | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 2679    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(6)       | input_0             | qint16        | 0.0002366 | -0.7071119   | 7.7517352     | -0.0000261   | 0.9992076        | torch.Size([2, 512, 64])         |
| 2679    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(6)       | input_1             | qint16        | 0.0000391 | 0.7297148    | 1.2824804     | 1.0134130    | 0.0161716        | torch.Size([64])                 |
| 2679    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(6)       | output              | qint16        | 0.0001954 | -0.8820183   | 5.9984670     | -0.0208379   | 0.8106003        | torch.Size([2, 512, 64])         |
| 2680    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(6)       | input               | torch.float32 |           | -0.2385408   | 0.3192695     | 0.0900053    | 0.0129013        | torch.Size([64])                 |
| 2680    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(6)       | output              | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 2681    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(6)         | input_0             | qint16        | 0.0001954 | -0.8820183   | 5.9984670     | -0.0208379   | 0.8106003        | torch.Size([2, 512, 64])         |
| 2681    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(6)         | input_1             | qint16        | 0.0000097 | -0.2385399   | 0.3192646     | 0.0900051    | 0.0129013        | torch.Size([64])                 |
| 2681    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(6)         | output              | qint8         | 0.0462055 | -0.8316998   | 5.8681040     | 0.0690870    | 0.7474586        | torch.Size([2, 512, 64])         |
| 2682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_0             | qint8         | 0.0587279 | -0.8221908   | 7.2822618     | 0.0720724    | 0.8648180        | torch.Size([2, 512, 128])        |
| 2682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_1             | qint8         | 0.0385920 | -1.6594547   | 4.9011803     | 0.0218104    | 1.3772231        | torch.Size([2, 512, 32])         |
| 2682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_2             | qint8         | 0.0373904 | -0.9721510   | 4.7485838     | 0.0290435    | 0.8741193        | torch.Size([2, 512, 32])         |
| 2682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_3             | qint8         | 0.0462055 | -0.8316998   | 5.8681040     | 0.0690870    | 0.7474586        | torch.Size([2, 512, 64])         |
| 2682    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | output              | qint8         | 0.0569265 | -1.6508691   | 7.2296681     | 0.0627509    | 0.8982574        | torch.Size([2, 512, 256])        |
| 2683    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(10)                                | input               | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 2683    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(10)                                | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 2683    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(10)                                | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 2684    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.35.query_cat                          | input_0             | qint8         | 0.0356415 | -4.2769833   | 4.5264740     | 0.0048699    | 0.8194842        | torch.Size([2, 512, 256])        |
| 2684    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.35.query_cat                          | input_1             | qint8         | 0.0569265 | -1.6508691   | 7.2296681     | 0.0627509    | 0.8982574        | torch.Size([2, 512, 256])        |
| 2684    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.35.query_cat                          | output              | qint8         | 0.0540095 | -4.2667513   | 6.8592076     | 0.0365812    | 0.8575636        | torch.Size([2, 512, 512])        |
| 2685    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.35.key_cat                            | input_0             | qint8         | 0.0307486 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 256])        |
| 2685    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.35.key_cat                            | input_1             | qint8         | 0.0569265 | -1.0246774   | 5.3510933     | 0.0736042    | 0.8488365        | torch.Size([2, 256, 256])        |
| 2685    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.35.key_cat                            | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 2686    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0540095 | -4.2667513   | 6.8592076     | 0.0365812    | 0.8575636        | torch.Size([2, 512, 512])        |
| 2686    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0540095 | -4.2667513   | 6.8592076     | 0.0365812    | 0.8575636        | torch.Size([512, 2, 512])        |
| 2687    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([2, 256, 512])        |
| 2687    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2688    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([2, 256, 512])        |
| 2688    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2689    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | input_0             | qint8         | 0.0540095 | -4.2667513   | 6.8592076     | 0.0365812    | 0.8575636        | torch.Size([512, 2, 512])        |
| 2689    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | output              | qint8         | 0.0540095 | -4.2667513   | 6.8592076     | 0.0365812    | 0.8575636        | torch.Size([512, 2, 512])        |
| 2690    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | input_0             | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2690    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | output              | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2691    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | input_0             | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2691    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | output              | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2692    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.q_proj                        | input               | qint8         | 0.0540095 | -4.2667513   | 6.8592076     | 0.0365812    | 0.8575636        | torch.Size([512, 2, 512])        |
| 2692    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.q_proj                        | weight              | torch.float32 |           | -0.3235276   | 0.4215601     | -0.0001558   | 0.0035258        | torch.Size([512, 512])           |
| 2692    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.q_proj                        | bias                | torch.float32 |           | -0.0954634   | 0.0875029     | 0.0007627    | 0.0007613        | torch.Size([512])                |
| 2692    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.q_proj                        | output              | qint8         | 0.0901176 | -11.5350485  | 11.4449310    | 0.0076453    | 9.2402868        | torch.Size([512, 2, 512])        |
| 2693    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.k_proj                        | input               | qint8         | 0.0510911 | -1.0218226   | 5.3645682     | 0.0373205    | 0.4265048        | torch.Size([256, 2, 512])        |
| 2693    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.k_proj                        | weight              | torch.float32 |           | -0.5609900   | 0.5793647     | -0.0000159   | 0.0038509        | torch.Size([512, 512])           |
| 2693    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.k_proj                        | bias                | torch.float32 |           | -0.0054600   | 0.0029553     | -0.0000182   | 0.0000005        | torch.Size([512])                |
| 2693    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.k_proj                        | output              | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([256, 2, 512])        |
| 2694    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.v_proj                        | input               | qint16        | 0.0001526 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([256, 2, 512])        |
| 2694    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.v_proj                        | weight              | torch.float32 |           | -0.3044824   | 0.3430385     | -0.0000593   | 0.0016902        | torch.Size([512, 512])           |
| 2694    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.v_proj                        | bias                | torch.float32 |           | -0.0821221   | 0.0959587     | 0.0009844    | 0.0006061        | torch.Size([512])                |
| 2694    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.v_proj                        | output              | qint8         | 0.0110485 | -0.0773395   | 0.0994365     | 0.0008416    | 0.0006123        | torch.Size([256, 2, 512])        |
| 2695    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | input_0             | qint8         | 0.0901176 | -11.5350485  | 11.4449310    | 0.0076453    | 9.2402868        | torch.Size([512, 2, 512])        |
| 2695    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | output              | qint8         | 0.0901176 | -11.5350485  | 11.4449310    | 0.0076453    | 9.2402868        | torch.Size([512, 16, 64])        |
| 2696    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0901176 | -11.5350485  | 11.4449310    | 0.0076453    | 9.2402868        | torch.Size([512, 16, 64])        |
| 2696    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0901176 | -11.5350485  | 11.4449310    | 0.0076453    | 9.2402868        | torch.Size([16, 512, 64])        |
| 2697    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | input_0             | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([256, 2, 512])        |
| 2697    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | output              | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([256, 16, 64])        |
| 2698    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([256, 16, 64])        |
| 2698    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([16, 256, 64])        |
| 2699    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | input_0             | qint8         | 0.0110485 | -0.0773395   | 0.0994365     | 0.0008416    | 0.0006123        | torch.Size([256, 2, 512])        |
| 2699    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | output              | qint8         | 0.0110485 | -0.0773395   | 0.0994365     | 0.0008416    | 0.0006123        | torch.Size([256, 16, 64])        |
| 2700    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0110485 | -0.0773395   | 0.0994365     | 0.0008416    | 0.0006123        | torch.Size([256, 16, 64])        |
| 2700    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0110485 | -0.0773395   | 0.0994365     | 0.0008416    | 0.0006123        | torch.Size([16, 256, 64])        |
| 2701    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.35.attn.q_scale_mul                   | input_0             | qint8         | 0.0901176 | -11.5350485  | 11.4449310    | 0.0076453    | 9.2402868        | torch.Size([16, 512, 64])        |
| 2701    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.35.attn.q_scale_mul                   | output              | qint8         | 0.0112647 | -1.4418811   | 1.4306164     | 0.0009557    | 0.1443795        | torch.Size([16, 512, 64])        |
| 2702    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([16, 256, 64])        |
| 2702    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([16, 64, 256])        |
| 2703    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.35.attn.matmul                        | input_0             | qint8         | 0.0112647 | -1.4418811   | 1.4306164     | 0.0009557    | 0.1443795        | torch.Size([16, 512, 64])        |
| 2703    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.35.attn.matmul                        | input_1             | qint8         | 0.0860516 | -6.1096616   | 6.9701772     | -0.0052102   | 4.5042491        | torch.Size([16, 64, 256])        |
| 2703    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.35.attn.matmul                        | output              | qint8         | 1.5337846 | -104.2973557 | 58.2838173    | -4.4315367   | 364.9558716      | torch.Size([16, 512, 256])       |
| 2704    | torch.Tensor.max                                                            | head.layers.35.attn.softmax                       | input               | qint8         | 1.5337846 | -104.2973557 | 58.2838173    | -4.4315367   | 364.9558716      | torch.Size([16, 512, 256])       |
| 2704    | torch.Tensor.max                                                            | head.layers.35.attn.softmax                       | output_0            | qint8         | 1.5337846 | -104.2973557 | 58.2838173    | -4.4315367   | 365.0002441      | torch.Size([16, 512, 1])         |
| 2704    | torch.Tensor.max                                                            | head.layers.35.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 1])         |
| 2705    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.35.attn.softmax.sub                   | input_0             | qint8         | 1.5337846 | -104.2973557 | 58.2838173    | -4.4315367   | 364.9558716      | torch.Size([16, 512, 256])       |
| 2705    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.35.attn.softmax.sub                   | input_1             | qint8         | 1.5337846 | -104.2973557 | 58.2838173    | -4.4315367   | 365.0002441      | torch.Size([16, 512, 1])         |
| 2705    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.35.attn.softmax.sub                   | output              | qint16        | 0.0114400 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2706    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.35.attn.softmax.exp                   | input               | qint16        | 0.0114400 | 0.0000000    | 0.0000000     | 0.0000000    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2706    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.35.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2707    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.35.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2707    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.35.attn.softmax.sum                   | output              | qint16        | 0.0038134 | 124.9545517  | 124.9545517   | 124.9545517  | 0.0000000        | torch.Size([16, 512, 1])         |
| 2708    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.35.attn.softmax.reciprocal            | input               | qint16        | 0.0038134 | 124.9545517  | 124.9545517   | 124.9545517  | 0.0000000        | torch.Size([16, 512, 1])         |
| 2708    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.35.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0079957    | 0.0079957     | 0.0079957    | 0.0000000        | torch.Size([16, 512, 1])         |
| 2709    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.35.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.9999847    | 0.9999847     | 0.9999847    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2709    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.35.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0079957    | 0.0079957     | 0.0079957    | 0.0000000        | torch.Size([16, 512, 1])         |
| 2709    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.35.attn.softmax.mul                   | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2710    | torch.nn.modules.dropout.Dropout                                            | head.layers.35.attn.attention_drop                | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2710    | torch.nn.modules.dropout.Dropout                                            | head.layers.35.attn.attention_drop                | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2711    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.35.attn.attn_matmul                   | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2711    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.35.attn.attn_matmul                   | input_1             | qint8         | 0.0110485 | -0.0773395   | 0.0994365     | 0.0008416    | 0.0006123        | torch.Size([16, 256, 64])        |
| 2711    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.35.attn.attn_matmul                   | output              | qint8         | 0.0121273 | -0.1576548   | 0.1940367     | 0.0018001    | 0.0024430        | torch.Size([16, 512, 64])        |
| 2712    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0121273 | -0.1576548   | 0.1940367     | 0.0018001    | 0.0024430        | torch.Size([16, 512, 64])        |
| 2712    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0121273 | -0.1576548   | 0.1940367     | 0.0018001    | 0.0024430        | torch.Size([512, 16, 64])        |
| 2713    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | input_0             | qint8         | 0.0121273 | -0.1576548   | 0.1940367     | 0.0018001    | 0.0024430        | torch.Size([512, 16, 64])        |
| 2713    | torch.Tensor.reshape                                                        | head.layers.35.attn                               | output              | qint8         | 0.0121273 | -0.1576548   | 0.1940367     | 0.0018001    | 0.0024430        | torch.Size([512, 2, 512])        |
| 2714    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.out_proj                      | input               | qint8         | 0.0121273 | -0.1576548   | 0.1940367     | 0.0018001    | 0.0024430        | torch.Size([512, 2, 512])        |
| 2714    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.out_proj                      | weight              | torch.float32 |           | -0.2512448   | 0.2980582     | -0.0000690   | 0.0024223        | torch.Size([512, 512])           |
| 2714    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.out_proj                      | bias                | torch.float32 |           | -0.3283637   | 0.3022734     | 0.0070495    | 0.0084595        | torch.Size([512])                |
| 2714    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.35.attn.out_proj                      | output              | qint8         | 0.0125890 | -0.6042743   | 0.4532057     | 0.0143840    | 0.0278148        | torch.Size([512, 2, 512])        |
| 2715    | torch.Tensor.view                                                           | head.layers.35.attn                               | input_0             | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([16, 512, 256])       |
| 2715    | torch.Tensor.view                                                           | head.layers.35.attn                               | output              | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 2716    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.35.attn.attn_weights_mean             | input               | qint8         | 0.0078431 | 0.0078431    | 0.0078431     | 0.0078431    | 0.0000000        | torch.Size([2, 8, 512, 256])     |
| 2716    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.35.attn.attn_weights_mean             | output              | qint8         | 0.0029857 | 0.0089571    | 0.0089571     | 0.0089571    | 0.0000000        | torch.Size([2, 512, 256])        |
| 2717    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | input_0             | qint8         | 0.0125890 | -0.6042743   | 0.4532057     | 0.0143840    | 0.0278148        | torch.Size([512, 2, 512])        |
| 2717    | torch.Tensor.transpose                                                      | head.layers.35.attn                               | output              | qint8         | 0.0125890 | -0.6042743   | 0.4532057     | 0.0143840    | 0.0278148        | torch.Size([2, 512, 512])        |
| 2718    | torch.nn.modules.dropout.Dropout                                            | head.layers.35.dropout                            | input               | qint8         | 0.0125890 | -0.6042743   | 0.4532057     | 0.0143840    | 0.0278148        | torch.Size([2, 512, 512])        |
| 2718    | torch.nn.modules.dropout.Dropout                                            | head.layers.35.dropout                            | output              | qint8         | 0.0125890 | -0.6042743   | 0.4532057     | 0.0143840    | 0.0278148        | torch.Size([2, 512, 512])        |
| 2719    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.35.add                                | input_0             | qint8         | 0.0540095 | -4.2667513   | 6.8592076     | 0.0365812    | 0.8575636        | torch.Size([2, 512, 512])        |
| 2719    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.35.add                                | input_1             | qint8         | 0.0125890 | -0.6042743   | 0.4532057     | 0.0143840    | 0.0278148        | torch.Size([2, 512, 512])        |
| 2719    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.35.add                                | output              | qint8         | 0.0538740 | -4.4176698   | 6.5726304     | 0.0510069    | 0.8290636        | torch.Size([2, 512, 512])        |
| 2720    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(10)                                 | input               | qint8         | 0.0538740 | -4.4176698   | 6.5726304     | 0.0510069    | 0.8290636        | torch.Size([2, 512, 512])        |
| 2720    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(10)                                 | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 2720    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(10)                                 | output              | qint16        | 0.0015259 | -7.3455811   | 6.9488525     | 0.0200832    | 0.9414247        | torch.Size([2, 512, 256])        |
| 2721    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(11)                                | input               | qint16        | 0.0015259 | -7.3455811   | 6.9488525     | 0.0200832    | 0.9414247        | torch.Size([2, 512, 256])        |
| 2721    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(11)                                | weight              | torch.float32 |           | -0.1090298   | 0.1089591     | -0.0000406   | 0.0005908        | torch.Size([512, 256])           |
| 2721    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_before(11)                                | output              | qint16        | 0.0001526 | -3.5009766   | 3.3811951     | 0.0061508    | 0.0653493        | torch.Size([2, 512, 512])        |
| 2722    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.36.query_cat                          | input_0             | qint16        | 0.0015259 | -7.3455811   | 6.9488525     | 0.0200832    | 0.9414247        | torch.Size([2, 512, 256])        |
| 2722    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.36.query_cat                          | input_1             | qint8         | 0.0569265 | -1.6508691   | 7.2296681     | 0.0627509    | 0.8982574        | torch.Size([2, 512, 256])        |
| 2722    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.36.query_cat                          | output              | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([2, 512, 512])        |
| 2723    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.36.key_cat                            | input_0             | qint16        | 0.0015259 | -7.3455811   | 6.9488525     | 0.0200832    | 0.9414247        | torch.Size([2, 512, 256])        |
| 2723    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.36.key_cat                            | input_1             | qint8         | 0.0569265 | -1.6508691   | 7.2296681     | 0.0627509    | 0.8982574        | torch.Size([2, 512, 256])        |
| 2723    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.36.key_cat                            | output              | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([2, 512, 512])        |
| 2724    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([2, 512, 512])        |
| 2724    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2725    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([2, 512, 512])        |
| 2725    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2726    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint16        | 0.0001526 | -3.5009766   | 3.3811951     | 0.0061508    | 0.0653493        | torch.Size([2, 512, 512])        |
| 2726    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint16        | 0.0001526 | -3.5009766   | 3.3811951     | 0.0061508    | 0.0653493        | torch.Size([512, 2, 512])        |
| 2727    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | input_0             | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2727    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | output              | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2728    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | input_0             | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2728    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | output              | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2729    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | input_0             | qint16        | 0.0001526 | -3.5009766   | 3.3811951     | 0.0061508    | 0.0653493        | torch.Size([512, 2, 512])        |
| 2729    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | output              | qint16        | 0.0001526 | -3.5009766   | 3.3811951     | 0.0061508    | 0.0653493        | torch.Size([512, 2, 512])        |
| 2730    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.q_proj                        | input               | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2730    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.q_proj                        | weight              | torch.float32 |           | -0.3146838   | 0.3318836     | 0.0000977    | 0.0028868        | torch.Size([512, 512])           |
| 2730    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.q_proj                        | bias                | torch.float32 |           | -0.1396752   | 0.1003755     | -0.0017663   | 0.0008599        | torch.Size([512])                |
| 2730    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.q_proj                        | output              | qint8         | 0.0785457 | -10.0538483  | 9.9753027     | -0.0874685   | 4.5559444        | torch.Size([512, 2, 512])        |
| 2731    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.k_proj                        | input               | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([512, 2, 512])        |
| 2731    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.k_proj                        | weight              | torch.float32 |           | -0.9564776   | 0.9354519     | -0.0000881   | 0.0038703        | torch.Size([512, 512])           |
| 2731    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.k_proj                        | bias                | torch.float32 |           | -0.1178043   | 0.1006244     | -0.0005137   | 0.0002969        | torch.Size([512])                |
| 2731    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.k_proj                        | output              | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([512, 2, 512])        |
| 2732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.v_proj                        | input               | qint16        | 0.0001526 | -3.5009766   | 3.3811951     | 0.0061508    | 0.0653493        | torch.Size([512, 2, 512])        |
| 2732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.v_proj                        | weight              | torch.float32 |           | -0.2458883   | 0.2633308     | -0.0000698   | 0.0018804        | torch.Size([512, 512])           |
| 2732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.v_proj                        | bias                | torch.float32 |           | -0.1800991   | 0.2041788     | 0.0003858    | 0.0020850        | torch.Size([512])                |
| 2732    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.v_proj                        | output              | qint8         | 0.0266370 | -3.4095411   | 3.3829041     | -0.0003861   | 0.1602517        | torch.Size([512, 2, 512])        |
| 2733    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | input_0             | qint8         | 0.0785457 | -10.0538483  | 9.9753027     | -0.0874685   | 4.5559444        | torch.Size([512, 2, 512])        |
| 2733    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | output              | qint8         | 0.0785457 | -10.0538483  | 9.9753027     | -0.0874685   | 4.5559444        | torch.Size([512, 16, 64])        |
| 2734    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.0785457 | -10.0538483  | 9.9753027     | -0.0874685   | 4.5559444        | torch.Size([512, 16, 64])        |
| 2734    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.0785457 | -10.0538483  | 9.9753027     | -0.0874685   | 4.5559444        | torch.Size([16, 512, 64])        |
| 2735    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | input_0             | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([512, 2, 512])        |
| 2735    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | output              | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([512, 16, 64])        |
| 2736    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([512, 16, 64])        |
| 2736    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([16, 512, 64])        |
| 2737    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | input_0             | qint8         | 0.0266370 | -3.4095411   | 3.3829041     | -0.0003861   | 0.1602517        | torch.Size([512, 2, 512])        |
| 2737    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | output              | qint8         | 0.0266370 | -3.4095411   | 3.3829041     | -0.0003861   | 0.1602517        | torch.Size([512, 16, 64])        |
| 2738    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.0266370 | -3.4095411   | 3.3829041     | -0.0003861   | 0.1602517        | torch.Size([512, 16, 64])        |
| 2738    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.0266370 | -3.4095411   | 3.3829041     | -0.0003861   | 0.1602517        | torch.Size([16, 512, 64])        |
| 2739    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.36.attn.q_scale_mul                   | input_0             | qint8         | 0.0785457 | -10.0538483  | 9.9753027     | -0.0874685   | 4.5559444        | torch.Size([16, 512, 64])        |
| 2739    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul_scalar | head.layers.36.attn.q_scale_mul                   | output              | qint8         | 0.0098182 | -1.2567310   | 1.2469128     | -0.0109336   | 0.0711866        | torch.Size([16, 512, 64])        |
| 2740    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([16, 512, 64])        |
| 2740    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([16, 64, 512])        |
| 2741    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.36.attn.matmul                        | input_0             | qint8         | 0.0098182 | -1.2567310   | 1.2469128     | -0.0109336   | 0.0711866        | torch.Size([16, 512, 64])        |
| 2741    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.36.attn.matmul                        | input_1             | qint8         | 0.1281204 | -16.1431675  | 16.2712879    | 0.0174742    | 7.6075134        | torch.Size([16, 64, 512])        |
| 2741    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.36.attn.matmul                        | output              | qint8         | 0.8731236 | -97.7898407  | 110.8866959   | -2.0080848   | 373.3625183      | torch.Size([16, 512, 512])       |
| 2742    | torch.Tensor.max                                                            | head.layers.36.attn.softmax                       | input               | qint8         | 0.8731236 | -97.7898407  | 110.8866959   | -2.0080848   | 373.3625183      | torch.Size([16, 512, 512])       |
| 2742    | torch.Tensor.max                                                            | head.layers.36.attn.softmax                       | output_0            | qint8         | 0.8731236 | 0.0000000    | 110.8866959   | 31.5507584   | 585.6015015      | torch.Size([16, 512, 1])         |
| 2742    | torch.Tensor.max                                                            | head.layers.36.attn.softmax                       | output_1            | torch.int64   |           | 0.0000000    | 511.0000000   | 258.5056152  | 13371.7734375    | torch.Size([16, 512, 1])         |
| 2743    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.36.attn.softmax.sub                   | input_0             | qint8         | 0.8731236 | -97.7898407  | 110.8866959   | -2.0080848   | 373.3625183      | torch.Size([16, 512, 512])       |
| 2743    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.36.attn.softmax.sub                   | input_1             | qint8         | 0.8731236 | 0.0000000    | 110.8866959   | 31.5507584   | 585.6015015      | torch.Size([16, 512, 1])         |
| 2743    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.36.attn.softmax.sub                   | output              | qint16        | 0.0077178 | -208.6747437 | 0.0000000     | -33.5588455  | 1003.2863770     | torch.Size([16, 512, 512])       |
| 2744    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.36.attn.softmax.exp                   | input               | qint16        | 0.0077178 | -208.6747437 | 0.0000000     | -33.5588455  | 1003.2863770     | torch.Size([16, 512, 512])       |
| 2744    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.36.attn.softmax.exp                   | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0158327    | 0.0106071        | torch.Size([16, 512, 512])       |
| 2745    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.36.attn.softmax.sum                   | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0158327    | 0.0106071        | torch.Size([16, 512, 512])       |
| 2745    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.36.attn.softmax.sum                   | output              | qint16        | 0.0022208 | 0.9993768    | 72.7701797    | 6.9819698    | 149.9105988      | torch.Size([16, 512, 1])         |
| 2746    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.36.attn.softmax.reciprocal            | input               | qint16        | 0.0022208 | 0.9993768    | 72.7701797    | 6.9819698    | 149.9105988      | torch.Size([16, 512, 1])         |
| 2746    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.36.attn.softmax.reciprocal            | output              | qint16        | 0.0000305 | 0.0137331    | 0.9999847     | 0.3494116    | 0.0631092        | torch.Size([16, 512, 1])         |
| 2747    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.36.attn.softmax.mul                   | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.0158327    | 0.0106071        | torch.Size([16, 512, 512])       |
| 2747    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.36.attn.softmax.mul                   | input_1             | qint16        | 0.0000305 | 0.0137331    | 0.9999847     | 0.3494116    | 0.0631092        | torch.Size([16, 512, 1])         |
| 2747    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.36.attn.softmax.mul                   | output              | qint8         | 0.0077647 | 0.0000000    | 0.9861225     | 0.0019233    | 0.0004712        | torch.Size([16, 512, 512])       |
| 2748    | torch.nn.modules.dropout.Dropout                                            | head.layers.36.attn.attention_drop                | input               | qint8         | 0.0077647 | 0.0000000    | 0.9861225     | 0.0019233    | 0.0004712        | torch.Size([16, 512, 512])       |
| 2748    | torch.nn.modules.dropout.Dropout                                            | head.layers.36.attn.attention_drop                | output              | qint8         | 0.0077647 | 0.0000000    | 0.9861225     | 0.0019233    | 0.0004712        | torch.Size([16, 512, 512])       |
| 2749    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.36.attn.attn_matmul                   | input_0             | qint8         | 0.0077647 | 0.0000000    | 0.9861225     | 0.0019233    | 0.0004712        | torch.Size([16, 512, 512])       |
| 2749    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.36.attn.attn_matmul                   | input_1             | qint8         | 0.0266370 | -3.4095411   | 3.3829041     | -0.0003861   | 0.1602517        | torch.Size([16, 512, 64])        |
| 2749    | horizon_plugin_pytorch.nn.qat.matmul.Matmul                                 | head.layers.36.attn.attn_matmul                   | output              | qint8         | 0.0185151 | -2.3699298   | 2.3514147     | -0.0022482   | 0.1384135        | torch.Size([16, 512, 64])        |
| 2750    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.0185151 | -2.3699298   | 2.3514147     | -0.0022482   | 0.1384135        | torch.Size([16, 512, 64])        |
| 2750    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.0185151 | -2.3699298   | 2.3514147     | -0.0022482   | 0.1384135        | torch.Size([512, 16, 64])        |
| 2751    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | input_0             | qint8         | 0.0185151 | -2.3699298   | 2.3514147     | -0.0022482   | 0.1384135        | torch.Size([512, 16, 64])        |
| 2751    | torch.Tensor.reshape                                                        | head.layers.36.attn                               | output              | qint8         | 0.0185151 | -2.3699298   | 2.3514147     | -0.0022482   | 0.1384135        | torch.Size([512, 2, 512])        |
| 2752    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.out_proj                      | input               | qint8         | 0.0185151 | -2.3699298   | 2.3514147     | -0.0022482   | 0.1384135        | torch.Size([512, 2, 512])        |
| 2752    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.out_proj                      | weight              | torch.float32 |           | -0.2637568   | 0.2630204     | 0.0000084    | 0.0029881        | torch.Size([512, 512])           |
| 2752    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.out_proj                      | bias                | torch.float32 |           | -0.3658920   | 0.3991215     | 0.0013332    | 0.0151695        | torch.Size([512])                |
| 2752    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.36.attn.out_proj                      | output              | qint8         | 0.0297938 | -3.8136096   | 3.7838159     | -0.0210481   | 0.6746359        | torch.Size([512, 2, 512])        |
| 2753    | torch.Tensor.view                                                           | head.layers.36.attn                               | input_0             | qint8         | 0.0077647 | 0.0000000    | 0.9861225     | 0.0019233    | 0.0004712        | torch.Size([16, 512, 512])       |
| 2753    | torch.Tensor.view                                                           | head.layers.36.attn                               | output              | qint8         | 0.0077647 | 0.0000000    | 0.9861225     | 0.0019233    | 0.0004712        | torch.Size([2, 8, 512, 512])     |
| 2754    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.36.attn.attn_weights_mean             | input               | qint8         | 0.0077647 | 0.0000000    | 0.9861225     | 0.0019233    | 0.0004712        | torch.Size([2, 8, 512, 512])     |
| 2754    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.36.attn.attn_weights_mean             | output              | qint8         | 0.0017886 | 0.0000000    | 0.2271481     | 0.0020152    | 0.0000662        | torch.Size([2, 512, 512])        |
| 2755    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | input_0             | qint8         | 0.0297938 | -3.8136096   | 3.7838159     | -0.0210481   | 0.6746359        | torch.Size([512, 2, 512])        |
| 2755    | torch.Tensor.transpose                                                      | head.layers.36.attn                               | output              | qint8         | 0.0297938 | -3.8136096   | 3.7838159     | -0.0210481   | 0.6746359        | torch.Size([2, 512, 512])        |
| 2756    | torch.nn.modules.dropout.Dropout                                            | head.layers.36.dropout                            | input               | qint8         | 0.0297938 | -3.8136096   | 3.7838159     | -0.0210481   | 0.6746359        | torch.Size([2, 512, 512])        |
| 2756    | torch.nn.modules.dropout.Dropout                                            | head.layers.36.dropout                            | output              | qint8         | 0.0297938 | -3.8136096   | 3.7838159     | -0.0210481   | 0.6746359        | torch.Size([2, 512, 512])        |
| 2757    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.36.add                                | input_0             | qint8         | 0.0539790 | -6.9093070   | 6.8553281     | 0.0446345    | 0.9179909        | torch.Size([2, 512, 512])        |
| 2757    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.36.add                                | input_1             | qint8         | 0.0297938 | -3.8136096   | 3.7838159     | -0.0210481   | 0.6746359        | torch.Size([2, 512, 512])        |
| 2757    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.36.add                                | output              | qint8         | 0.0546933 | -6.4538107   | 6.9460506     | 0.0234514    | 1.6052015        | torch.Size([2, 512, 512])        |
| 2758    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(11)                                 | input               | qint8         | 0.0546933 | -6.4538107   | 6.9460506     | 0.0234514    | 1.6052015        | torch.Size([2, 512, 512])        |
| 2758    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(11)                                 | weight              | torch.float32 |           | -0.3694984   | 0.3971221     | -0.0001689   | 0.0017596        | torch.Size([256, 512])           |
| 2758    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.fc_after(11)                                 | output              | qint16        | 0.0015259 | -42.2088623  | 32.0190430    | -0.0250491   | 15.1507921       | torch.Size([2, 512, 256])        |
| 2759    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.37.input_mean.mean                    | input_0             | qint16        | 0.0015259 | -42.2088623  | 32.0190430    | -0.0250491   | 15.1507921       | torch.Size([2, 512, 256])        |
| 2759    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.37.input_mean.mean                    | output              | qint16        | 0.0000059 | -0.1753774   | 0.1944833     | -0.0255771   | 0.0065336        | torch.Size([2, 512, 1])          |
| 2760    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.37.sub                                | input_0             | qint16        | 0.0015259 | -42.2088623  | 32.0190430    | -0.0250491   | 15.1507921       | torch.Size([2, 512, 256])        |
| 2760    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.37.sub                                | input_1             | qint16        | 0.0000059 | -0.1753774   | 0.1944833     | -0.0255771   | 0.0065336        | torch.Size([2, 512, 1])          |
| 2760    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.37.sub                                | output              | qint16        | 0.0014179 | -42.3386078  | 31.8886566    | 0.0005384    | 15.1440382       | torch.Size([2, 512, 256])        |
| 2761    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.mul                                | input_0             | qint16        | 0.0014179 | -42.3386078  | 31.8886566    | 0.0005384    | 15.1440382       | torch.Size([2, 512, 256])        |
| 2761    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.mul                                | input_1             | qint16        | 0.0014179 | -42.3386078  | 31.8886566    | 0.0005384    | 15.1440382       | torch.Size([2, 512, 256])        |
| 2761    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.mul                                | output              | qint16        | 0.0658912 | 0.0000000    | 1792.5703125  | 15.1433201   | 6749.9018555     | torch.Size([2, 512, 256])        |
| 2762    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.37.var_mean.mean                      | input_0             | qint16        | 0.0658912 | 0.0000000    | 1792.5703125  | 15.1433201   | 6749.9018555     | torch.Size([2, 512, 256])        |
| 2762    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.37.var_mean.mean                      | output              | qint16        | 0.0010888 | 6.4655185    | 28.7507610    | 15.1431885   | 25.9706879       | torch.Size([2, 512, 1])          |
| 2763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.37.rsqrt                              | input               | qint16        | 0.0010888 | 6.4655185    | 28.7507610    | 15.1431885   | 25.9706879       | torch.Size([2, 512, 1])          |
| 2763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.37.rsqrt                              | output              | qint16        | 0.0000123 | 0.1864951    | 0.3932764     | 0.2683362    | 0.0021030        | torch.Size([2, 512, 1])          |
| 2764    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.out_mul                            | input_0             | qint16        | 0.0014179 | -42.3386078  | 31.8886566    | 0.0005384    | 15.1440382       | torch.Size([2, 512, 256])        |
| 2764    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.out_mul                            | input_1             | qint16        | 0.0000123 | 0.1864951    | 0.3932764     | 0.2683362    | 0.0021030        | torch.Size([2, 512, 1])          |
| 2764    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.out_mul                            | output              | qint16        | 0.0002569 | -8.3326416   | 6.2077279     | 0.0001112    | 1.0000720        | torch.Size([2, 512, 256])        |
| 2765    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.37.weight_quant                       | input               | torch.float32 |           | 0.7167655    | 1.1553942     | 0.9289461    | 0.0046820        | torch.Size([256])                |
| 2765    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.37.weight_quant                       | output              | qint16        | 0.0000353 | 0.7167728    | 1.1553766     | 0.9289459    | 0.0046819        | torch.Size([256])                |
| 2766    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.weight_mul                         | input_0             | qint16        | 0.0002569 | -8.3326416   | 6.2077279     | 0.0001112    | 1.0000720        | torch.Size([2, 512, 256])        |
| 2766    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.weight_mul                         | input_1             | qint16        | 0.0000353 | 0.7167728    | 1.1553766     | 0.9289459    | 0.0046819        | torch.Size([256])                |
| 2766    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.37.weight_mul                         | output              | qint16        | 0.0001885 | -6.1140208   | 4.6395745     | 0.0024560    | 0.6856432        | torch.Size([2, 512, 256])        |
| 2767    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.37.bias_quant                         | input               | torch.float32 |           | -0.2403839   | 0.2585355     | 0.0083271    | 0.0031905        | torch.Size([256])                |
| 2767    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.37.bias_quant                         | output              | qint16        | 0.0000079 | -0.2403846   | 0.2585316     | 0.0083271    | 0.0031905        | torch.Size([256])                |
| 2768    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.37.bias_add                           | input_0             | qint16        | 0.0001885 | -6.1140208   | 4.6395745     | 0.0024560    | 0.6856432        | torch.Size([2, 512, 256])        |
| 2768    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.37.bias_add                           | input_1             | qint16        | 0.0000079 | -0.2403846   | 0.2585316     | 0.0083271    | 0.0031905        | torch.Size([256])                |
| 2768    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.37.bias_add                           | output              | qint8         | 0.0427042 | -5.4661393   | 4.3985338     | 0.0109038    | 0.6492963        | torch.Size([2, 512, 256])        |
| 2769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.kps_generator.offset               | input               | qint8         | 0.0427042 | -5.4661393   | 4.3985338     | 0.0109038    | 0.6492963        | torch.Size([2, 512, 256])        |
| 2769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.kps_generator.offset               | weight              | torch.float32 |           | -0.2949824   | 0.2879395     | -0.0002231   | 0.0054715        | torch.Size([24, 256])            |
| 2769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.kps_generator.offset               | bias                | torch.float32 |           | -0.1117399   | 0.0869147     | -0.0169646   | 0.0027590        | torch.Size([24])                 |
| 2769    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.kps_generator.offset               | output              | qint16        | 0.0002762 | -7.8163490   | 7.7188454     | -0.3357290   | 2.6170950        | torch.Size([2, 512, 24])         |
| 2770    | torch.Tensor.view                                                           | head.layers.38.kps_generator                      | input_0             | qint16        | 0.0002762 | -7.8163490   | 7.7188454     | -0.3357290   | 2.6170950        | torch.Size([2, 512, 24])         |
| 2770    | torch.Tensor.view                                                           | head.layers.38.kps_generator                      | output              | qint16        | 0.0002762 | -7.8163490   | 7.7188454     | -0.3357290   | 2.6170950        | torch.Size([2, 512, 8, 3])       |
| 2771    | torch.Tensor.__getitem__                                                    | head.layers.38.kps_generator                      | input_0             | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2771    | torch.Tensor.__getitem__                                                    | head.layers.38.kps_generator                      | output              | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.9105684    | 276.4417419      | torch.Size([2, 512, 1, 3])       |
| 2772    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.kps_generator.keypoints_add        | input_0             | qint16        | 0.0002762 | -7.8163490   | 7.7188454     | -0.3357290   | 2.6170950        | torch.Size([2, 512, 8, 3])       |
| 2772    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.kps_generator.keypoints_add        | input_1             | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.9105684    | 276.4417419      | torch.Size([2, 512, 1, 3])       |
| 2772    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.kps_generator.keypoints_add        | output              | qint16        | 0.0019563 | -59.5524254  | 57.3653297    | 0.5748482    | 279.7022705      | torch.Size([2, 512, 8, 3])       |
| 2773    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.weight_add                         | input_0             | qint8         | 0.0427042 | -5.4661393   | 4.3985338     | 0.0109038    | 0.6492963        | torch.Size([2, 512, 256])        |
| 2773    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.weight_add                         | input_1             | qint8         | 0.0569265 | -1.6508691   | 7.2296681     | 0.0627509    | 0.8982574        | torch.Size([2, 512, 256])        |
| 2773    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.weight_add                         | output              | qint8         | 0.0589563 | -5.8956308   | 7.4874511     | 0.0735998    | 1.4726986        | torch.Size([2, 512, 256])        |
| 2774    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 2774    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 2775    | torch.Tensor.reshape                                                        | head.layers.38                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 3, 4])         |
| 2775    | torch.Tensor.reshape                                                        | head.layers.38                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 2776    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.0                   | input               | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.4411911   | 1.6379664        | torch.Size([2, 6, 12])           |
| 2776    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.0                   | weight              | torch.float32 |           | -0.5837476   | 0.6199124     | 0.0053515    | 0.0138439        | torch.Size([256, 12])            |
| 2776    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.0                   | bias                | torch.float32 |           | -0.3124255   | 0.3618607     | 0.0002249    | 0.0292400        | torch.Size([256])                |
| 2776    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.0                   | output              | torch.float32 |           | -1.2533853   | 1.0490028     | -0.1139899   | 0.1707718        | torch.Size([2, 6, 256])          |
| 2777    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.38.camera_encoder.1                   | input               | torch.float32 |           | -1.2533853   | 1.0490028     | -0.1139899   | 0.1707718        | torch.Size([2, 6, 256])          |
| 2777    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.38.camera_encoder.1                   | output              | qint8         | 0.0084702 | 0.0000000    | 1.0503051     | 0.1233253    | 0.0408647        | torch.Size([2, 6, 256])          |
| 2778    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.input_mean.mean   | input_0             | qint8         | 0.0084702 | 0.0000000    | 1.0503051     | 0.1233253    | 0.0408647        | torch.Size([2, 6, 256])          |
| 2778    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.input_mean.mean   | output              | qint16        | 0.0000042 | 0.1090552    | 0.1363179     | 0.1233251    | 0.0000874        | torch.Size([2, 6, 1])            |
| 2779    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.2.sub               | input_0             | qint8         | 0.0084702 | 0.0000000    | 1.0503051     | 0.1233253    | 0.0408647        | torch.Size([2, 6, 256])          |
| 2779    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.2.sub               | input_1             | qint16        | 0.0000042 | 0.1090552    | 0.1363179     | 0.1233251    | 0.0000874        | torch.Size([2, 6, 1])            |
| 2779    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.2.sub               | output              | qint16        | 0.0000293 | -0.1363228   | 0.9206253     | 0.0000005    | 0.0407843        | torch.Size([2, 6, 256])          |
| 2780    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.mul               | input_0             | qint16        | 0.0000293 | -0.1363228   | 0.9206253     | 0.0000005    | 0.0407843        | torch.Size([2, 6, 256])          |
| 2780    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.mul               | input_1             | qint16        | 0.0000293 | -0.1363228   | 0.9206253     | 0.0000005    | 0.0407843        | torch.Size([2, 6, 256])          |
| 2780    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.mul               | output              | qint16        | 0.0000281 | 0.0000000    | 0.8475609     | 0.0407724    | 0.0070412        | torch.Size([2, 6, 256])          |
| 2781    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.var_mean.mean     | input_0             | qint16        | 0.0000281 | 0.0000000    | 0.8475609     | 0.0407724    | 0.0070412        | torch.Size([2, 6, 256])          |
| 2781    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.var_mean.mean     | output              | qint16        | 0.0000016 | 0.0283441    | 0.0499293     | 0.0407722    | 0.0000438        | torch.Size([2, 6, 1])            |
| 2782    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.38.camera_encoder.2.rsqrt             | input               | qint16        | 0.0000016 | 0.0283441    | 0.0499293     | 0.0407722    | 0.0000438        | torch.Size([2, 6, 1])            |
| 2782    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.38.camera_encoder.2.rsqrt             | output              | qint16        | 0.0001811 | 4.4749341    | 5.9347620     | 5.0026426    | 0.2071000        | torch.Size([2, 6, 1])            |
| 2783    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.out_mul           | input_0             | qint16        | 0.0000293 | -0.1363228   | 0.9206253     | 0.0000005    | 0.0407843        | torch.Size([2, 6, 256])          |
| 2783    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.out_mul           | input_1             | qint16        | 0.0001811 | 4.4749341    | 5.9347620     | 5.0026426    | 0.2071000        | torch.Size([2, 6, 1])            |
| 2783    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.out_mul           | output              | qint16        | 0.0001394 | -0.6472265   | 4.4395580     | 0.0000096    | 0.9999228        | torch.Size([2, 6, 256])          |
| 2784    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.2.weight_quant      | input               | torch.float32 |           | 0.6364256    | 1.2354475     | 0.9619384    | 0.0091793        | torch.Size([256])                |
| 2784    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.2.weight_quant      | output              | qint16        | 0.0000377 | 0.6364341    | 1.2354287     | 0.9619380    | 0.0091793        | torch.Size([256])                |
| 2785    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.weight_mul        | input_0             | qint16        | 0.0001394 | -0.6472265   | 4.4395580     | 0.0000096    | 0.9999228        | torch.Size([2, 6, 256])          |
| 2785    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.weight_mul        | input_1             | qint16        | 0.0000377 | 0.6364341    | 1.2354287     | 0.9619380    | 0.0091793        | torch.Size([256])                |
| 2785    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.weight_mul        | output              | qint16        | 0.0001526 | -0.7995798   | 4.8615918     | 0.0230162    | 1.0097616        | torch.Size([2, 6, 256])          |
| 2786    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.2.bias_quant        | input               | torch.float32 |           | -0.0854455   | 0.2577538     | 0.0279319    | 0.0030540        | torch.Size([256])                |
| 2786    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.2.bias_quant        | output              | qint16        | 0.0000079 | -0.0854420   | 0.2577499     | 0.0279318    | 0.0030540        | torch.Size([256])                |
| 2787    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.2.bias_add          | input_0             | qint16        | 0.0001526 | -0.7995798   | 4.8615918     | 0.0230162    | 1.0097616        | torch.Size([2, 6, 256])          |
| 2787    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.2.bias_add          | input_1             | qint16        | 0.0000079 | -0.0854420   | 0.2577499     | 0.0279318    | 0.0030540        | torch.Size([256])                |
| 2787    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.2.bias_add          | output              | qint8         | 0.0388084 | -0.8149760   | 4.8510475     | 0.0507591    | 0.9891814        | torch.Size([2, 6, 256])          |
| 2788    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.3                   | input               | qint8         | 0.0388084 | -0.8149760   | 4.8510475     | 0.0507591    | 0.9891814        | torch.Size([2, 6, 256])          |
| 2788    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.3                   | weight              | torch.float32 |           | -0.4502119   | 0.5281727     | 0.0017226    | 0.0051280        | torch.Size([256, 256])           |
| 2788    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.3                   | bias                | torch.float32 |           | -0.0939403   | 0.2747428     | -0.0087818   | 0.0019428        | torch.Size([256])                |
| 2788    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.camera_encoder.3                   | output              | torch.float32 |           | -9.8099270   | 41.2162781    | -0.9717865   | 16.5492401       | torch.Size([2, 6, 256])          |
| 2789    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.38.camera_encoder.4                   | input               | torch.float32 |           | -9.8099270   | 41.2162781    | -0.9717865   | 16.5492401       | torch.Size([2, 6, 256])          |
| 2789    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.38.camera_encoder.4                   | output              | qint8         | 0.3216666 | 0.0000000    | 40.8516579    | 0.7856330    | 12.0403137       | torch.Size([2, 6, 256])          |
| 2790    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.input_mean.mean   | input_0             | qint8         | 0.3216666 | 0.0000000    | 40.8516579    | 0.7856330    | 12.0403137       | torch.Size([2, 6, 256])          |
| 2790    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.input_mean.mean   | output              | qint16        | 0.0000320 | 0.7099144    | 1.0265617     | 0.7856359    | 0.0132294        | torch.Size([2, 6, 1])            |
| 2791    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.5.sub               | input_0             | qint8         | 0.3216666 | 0.0000000    | 40.8516579    | 0.7856330    | 12.0403137       | torch.Size([2, 6, 256])          |
| 2791    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.5.sub               | input_1             | qint16        | 0.0000320 | 0.7099144    | 1.0265617     | 0.7856359    | 0.0132294        | torch.Size([2, 6, 1])            |
| 2791    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.5.sub               | output              | qint16        | 0.0012295 | -1.0266387   | 40.1286659    | -0.0001993   | 12.0284004       | torch.Size([2, 6, 256])          |
| 2792    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.mul               | input_0             | qint16        | 0.0012295 | -1.0266387   | 40.1286659    | -0.0001993   | 12.0284004       | torch.Size([2, 6, 256])          |
| 2792    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.mul               | input_1             | qint16        | 0.0012295 | -1.0266387   | 40.1286659    | -0.0001993   | 12.0284004       | torch.Size([2, 6, 256])          |
| 2792    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.mul               | output              | qint16        | 0.0495343 | 0.0000000    | 1610.3092041  | 12.0259895   | 10225.2792969    | torch.Size([2, 6, 256])          |
| 2793    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.var_mean.mean     | input_0             | qint16        | 0.0495343 | 0.0000000    | 1610.3092041  | 12.0259895   | 10225.2792969    | torch.Size([2, 6, 256])          |
| 2793    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.var_mean.mean     | output              | qint16        | 0.0004794 | 10.3112717   | 15.5280304    | 12.0259018   | 2.7838807        | torch.Size([2, 6, 1])            |
| 2794    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.38.camera_encoder.5.rsqrt             | input               | qint16        | 0.0004794 | 10.3112717   | 15.5280304    | 12.0259018   | 2.7838807        | torch.Size([2, 6, 1])            |
| 2794    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.38.camera_encoder.5.rsqrt             | output              | qint16        | 0.0000101 | 0.2537668    | 0.3114140     | 0.2900412    | 0.0003194        | torch.Size([2, 6, 1])            |
| 2795    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.out_mul           | input_0             | qint16        | 0.0012295 | -1.0266387   | 40.1286659    | -0.0001993   | 12.0284004       | torch.Size([2, 6, 256])          |
| 2795    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.out_mul           | input_1             | qint16        | 0.0000101 | 0.2537668    | 0.3114140     | 0.2900412    | 0.0003194        | torch.Size([2, 6, 1])            |
| 2795    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.out_mul           | output              | qint16        | 0.0003601 | -0.2618112   | 11.7577353    | -0.0000554   | 1.0001616        | torch.Size([2, 6, 256])          |
| 2796    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.5.weight_quant      | input               | torch.float32 |           | 0.4334703    | 1.5143329     | 0.8827897    | 0.0300007        | torch.Size([256])                |
| 2796    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.5.weight_quant      | output              | qint16        | 0.0000462 | 0.4334918    | 1.5143098     | 0.8827903    | 0.0300003        | torch.Size([256])                |
| 2797    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.weight_mul        | input_0             | qint16        | 0.0003601 | -0.2618112   | 11.7577353    | -0.0000554   | 1.0001616        | torch.Size([2, 6, 256])          |
| 2797    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.weight_mul        | input_1             | qint16        | 0.0000462 | 0.4334918    | 1.5143098     | 0.8827903    | 0.0300003        | torch.Size([256])                |
| 2797    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.weight_mul        | output              | qint16        | 0.0003089 | -0.3963486   | 10.0369196    | -0.0292926   | 0.5637336        | torch.Size([2, 6, 256])          |
| 2798    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.5.bias_quant        | input               | torch.float32 |           | -0.7513186   | 0.5755784     | 0.0355008    | 0.0327518        | torch.Size([256])                |
| 2798    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.camera_encoder.5.bias_quant        | output              | qint16        | 0.0000229 | -0.7513300   | 0.5755810     | 0.0355008    | 0.0327518        | torch.Size([256])                |
| 2799    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.5.bias_add          | input_0             | qint16        | 0.0003089 | -0.3963486   | 10.0369196    | -0.0292926   | 0.5637336        | torch.Size([2, 6, 256])          |
| 2799    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.5.bias_add          | input_1             | qint16        | 0.0000229 | -0.7513300   | 0.5755810     | 0.0355008    | 0.0327518        | torch.Size([256])                |
| 2799    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.5.bias_add          | output              | qint8         | 0.0780597 | -1.0928354   | 9.9135780     | 0.0056919    | 0.5515839        | torch.Size([2, 6, 256])          |
| 2800    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint8         | 0.0589563 | -5.8956308   | 7.4874511     | 0.0735998    | 1.4726986        | torch.Size([2, 512, 256])        |
| 2800    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint8         | 0.0589563 | -5.8956308   | 7.4874511     | 0.0735998    | 1.4726986        | torch.Size([2, 512, 1, 256])     |
| 2801    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint8         | 0.0780597 | -1.0928354   | 9.9135780     | 0.0056919    | 0.5515839        | torch.Size([2, 6, 256])          |
| 2801    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint8         | 0.0780597 | -1.0928354   | 9.9135780     | 0.0056919    | 0.5515839        | torch.Size([2, 1, 6, 256])       |
| 2802    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.cam_add                            | input_0             | qint8         | 0.0589563 | -5.8956308   | 7.4874511     | 0.0735998    | 1.4726986        | torch.Size([2, 512, 1, 256])     |
| 2802    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.cam_add                            | input_1             | qint8         | 0.0780597 | -1.0928354   | 9.9135780     | 0.0056919    | 0.5515839        | torch.Size([2, 1, 6, 256])       |
| 2802    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.38.cam_add                            | output              | qint8         | 0.0794382 | -5.3223619   | 10.0886564    | 0.0799577    | 1.5841163        | torch.Size([2, 512, 6, 256])     |
| 2803    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.weights_fc                         | input               | qint8         | 0.0794382 | -5.3223619   | 10.0886564    | 0.0799577    | 1.5841163        | torch.Size([2, 512, 6, 256])     |
| 2803    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.weights_fc                         | weight              | torch.float32 |           | -0.3664656   | 0.5587184     | 0.0007138    | 0.0033969        | torch.Size([64, 256])            |
| 2803    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.weights_fc                         | bias                | torch.float32 |           | -0.1132682   | 0.0694408     | -0.0024798   | 0.0018388        | torch.Size([64])                 |
| 2803    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.weights_fc                         | output              | qint8         | 0.0762606 | -9.7613525   | 8.1598806     | -0.0972451   | 6.0669417        | torch.Size([2, 512, 6, 64])      |
| 2804    | torch.Tensor.reshape                                                        | head.layers.38                                    | input_0             | qint8         | 0.0762606 | -9.7613525   | 8.1598806     | -0.0972451   | 6.0669417        | torch.Size([2, 512, 6, 64])      |
| 2804    | torch.Tensor.reshape                                                        | head.layers.38                                    | output              | qint8         | 0.0762606 | -9.7613525   | 8.1598806     | -0.0972451   | 6.0669417        | torch.Size([2, 512, 48, 8])      |
| 2805    | torch.Tensor.max                                                            | head.layers.38.weight_softmax                     | input               | qint8         | 0.0762606 | -9.7613525   | 8.1598806     | -0.0972451   | 6.0669417        | torch.Size([2, 512, 48, 8])      |
| 2805    | torch.Tensor.max                                                            | head.layers.38.weight_softmax                     | output_0            | qint8         | 0.0762606 | 1.7539930    | 8.1598806     | 3.6927729    | 1.2208368        | torch.Size([2, 512, 1, 8])       |
| 2805    | torch.Tensor.max                                                            | head.layers.38.weight_softmax                     | output_1            | torch.int64   |           | 1.0000000    | 47.0000000    | 26.3417969   | 244.1888580      | torch.Size([2, 512, 1, 8])       |
| 2806    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.weight_softmax.sub                 | input_0             | qint8         | 0.0762606 | -9.7613525   | 8.1598806     | -0.0972451   | 6.0669417        | torch.Size([2, 512, 48, 8])      |
| 2806    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.weight_softmax.sub                 | input_1             | qint8         | 0.0762606 | 1.7539930    | 8.1598806     | 3.6927729    | 1.2208368        | torch.Size([2, 512, 1, 8])       |
| 2806    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.38.weight_softmax.sub                 | output              | qint16        | 0.0005096 | -14.4130249  | 0.0000000     | -3.7900100   | 6.7149248        | torch.Size([2, 512, 48, 8])      |
| 2807    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.38.weight_softmax.exp                 | input               | qint16        | 0.0005096 | -14.4130249  | 0.0000000     | -3.7900100   | 6.7149248        | torch.Size([2, 512, 48, 8])      |
| 2807    | horizon_plugin_pytorch.nn.qat.segment_lut.SegmentLUT                        | head.layers.38.weight_softmax.exp                 | output              | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1576173    | 0.0631506        | torch.Size([2, 512, 48, 8])      |
| 2808    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.38.weight_softmax.sum                 | input               | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1576173    | 0.0631506        | torch.Size([2, 512, 48, 8])      |
| 2808    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.38.weight_softmax.sum                 | output              | qint16        | 0.0005971 | 1.4319152    | 19.0048218    | 7.5656214    | 11.7429695       | torch.Size([2, 512, 1, 8])       |
| 2809    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.38.weight_softmax.reciprocal          | input               | qint16        | 0.0005971 | 1.4319152    | 19.0048218    | 7.5656214    | 11.7429695       | torch.Size([2, 512, 1, 8])       |
| 2809    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.38.weight_softmax.reciprocal          | output              | qint16        | 0.0000238 | 0.0526089    | 0.6983672     | 0.1738128    | 0.0118339        | torch.Size([2, 512, 1, 8])       |
| 2810    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.weight_softmax.mul                 | input_0             | qint16        | 0.0000305 | 0.0000000    | 0.9999847     | 0.1576173    | 0.0631506        | torch.Size([2, 512, 48, 8])      |
| 2810    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.weight_softmax.mul                 | input_1             | qint16        | 0.0000238 | 0.0526089    | 0.6983672     | 0.1738128    | 0.0118339        | torch.Size([2, 512, 1, 8])       |
| 2810    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.weight_softmax.mul                 | output              | qint8         | 0.0043630 | 0.0000000    | 0.5541015     | 0.0206669    | 0.0015308        | torch.Size([2, 512, 48, 8])      |
| 2811    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint16        | 0.0019563 | -59.5524254  | 57.3653297    | 0.5748482    | 279.7022705      | torch.Size([2, 512, 8, 3])       |
| 2811    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint16        | 0.0019563 | -45.4145432  | 51.3107071    | 0.8480499    | 287.0471802      | torch.Size([2, 512, 8, 1])       |
| 2812    | torch.ones_like                                                             | head.layers.38                                    | input               | qint16        | 0.0019563 | -45.4145432  | 51.3107071    | 0.8480499    | 287.0471802      | torch.Size([2, 512, 8, 1])       |
| 2812    | torch.ones_like                                                             | head.layers.38                                    | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2813    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.point_quant_stub                   | input               | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2813    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.38.point_quant_stub                   | output              | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2814    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.point_cat                          | input_0             | qint16        | 0.0019563 | -59.5524254  | 57.3653297    | 0.5748482    | 279.7022705      | torch.Size([2, 512, 8, 3])       |
| 2814    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.point_cat                          | input_1             | qint16        | 1.0000000 | 1.0000000    | 1.0000000     | 1.0000000    | 0.0000000        | torch.Size([2, 512, 8, 1])       |
| 2814    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.point_cat                          | output              | qint16        | 0.0018311 | -59.5532227  | 57.3651123    | 0.6810637    | 209.8083954      | torch.Size([2, 512, 8, 4])       |
| 2815    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 4, 4])         |
| 2815    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2816    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint16        | 0.0018311 | -59.5532227  | 57.3651123    | 0.6810637    | 209.8083954      | torch.Size([2, 512, 8, 4])       |
| 2816    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint16        | 0.0018311 | -59.5532227  | 57.3651123    | 0.6810637    | 209.8083954      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2817    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.point_matmul                       | input_0             | qint16        | 0.0001336 | -4.3784013   | 1.5829122     | -0.2683968   | 1.3634887        | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2817    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.point_matmul                       | input_1             | qint16        | 0.0018311 | -59.5532227  | 57.3651123    | 0.6810637    | 209.8083954      | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2817    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.point_matmul                       | output              | qint16        | 0.0029204 | -88.8804321  | 87.7181015    | 0.2437895    | 95.9508057       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2818    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.38.point_sum                          | input               | qint16        | 0.0029204 | -88.8804321  | 87.7181015    | 0.2437895    | 95.9508057       | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2818    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.38.point_sum                          | output              | qint16        | 0.0030852 | -92.7923203  | 93.6283951    | 0.9753551    | 377.0231628      | torch.Size([2, 6, 512, 8, 4])    |
| 2819    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint16        | 0.0030852 | -92.7923203  | 93.6283951    | 0.9753551    | 377.0231628      | torch.Size([2, 6, 512, 8, 4])    |
| 2819    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint16        | 0.0030852 | -60.4104919  | 58.6396141    | -0.5440685   | 413.0735474      | torch.Size([2, 6, 512, 8, 1])    |
| 2820    | torch.clamp                                                                 | head.layers.38                                    | input               | qint16        | 0.0030852 | -60.4104919  | 58.6396141    | -0.5440685   | 413.0735474      | torch.Size([2, 6, 512, 8, 1])    |
| 2820    | torch.clamp                                                                 | head.layers.38                                    | output              | qint16        | 0.0030852 | 0.0000000    | 58.6396141    | 7.2913108    | 146.6139069      | torch.Size([2, 6, 512, 8, 1])    |
| 2821    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.38.reciprocal_op                      | input               | qint16        | 0.0030852 | 0.0000000    | 58.6396141    | 7.2913108    | 146.6139069      | torch.Size([2, 6, 512, 8, 1])    |
| 2821    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                             | head.layers.38.reciprocal_op                      | output              | qint16        | 0.0003357 | 0.0171204    | 10.9996643    | 6.1583996    | 28.4347496       | torch.Size([2, 6, 512, 8, 1])    |
| 2822    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint16        | 0.0030852 | -92.7923203  | 93.6283951    | 0.9753551    | 377.0231628      | torch.Size([2, 6, 512, 8, 4])    |
| 2822    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint16        | 0.0030852 | -92.7923203  | 93.6283951    | 1.7229488    | 545.8018799      | torch.Size([2, 6, 512, 8, 2])    |
| 2823    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.point_mul                          | input_0             | qint16        | 0.0030852 | -92.7923203  | 93.6283951    | 1.7229488    | 545.8018799      | torch.Size([2, 6, 512, 8, 2])    |
| 2823    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.point_mul                          | input_1             | qint16        | 0.0003357 | 0.0171204    | 10.9996643    | 6.1583996    | 28.4347496       | torch.Size([2, 6, 512, 8, 1])    |
| 2823    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.point_mul                          | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2271782    | 0.9036832        | torch.Size([2, 6, 512, 8, 2])    |
| 2824    | torch.Tensor.flatten                                                        | head.layers.38                                    | input               | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2271782    | 0.9036832        | torch.Size([2, 6, 512, 8, 2])    |
| 2824    | torch.Tensor.flatten                                                        | head.layers.38                                    | output              | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2271782    | 0.9036832        | torch.Size([12, 512, 8, 2])      |
| 2825    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.38                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.1459892    | 19.5724487       | torch.Size([12, 256, 16, 44])    |
| 2825    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.38                                    | input_1             | qint16        | 0.0000336 | -1.1000000   | 1.0999664     | 0.2271782    | 0.9036832        | torch.Size([12, 512, 8, 2])      |
| 2825    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                | head.layers.38                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([12, 256, 512, 8])    |
| 2826    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.feat_cat                           | input               | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([12, 256, 512, 8])    |
| 2826    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.feat_cat                           | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([12, 256, 512, 8])    |
| 2827    | torch.Tensor.view                                                           | head.layers.38                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([12, 256, 512, 8])    |
| 2827    | torch.Tensor.view                                                           | head.layers.38                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([2, 6, 256, 512, 8])  |
| 2828    | torch.Tensor.permute                                                        | head.layers.38                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([2, 6, 256, 512, 8])  |
| 2828    | torch.Tensor.permute                                                        | head.layers.38                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([2, 512, 6, 8, 256])  |
| 2829    | torch.Tensor.contiguous                                                     | head.layers.38                                    | input               | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665209        | torch.Size([2, 512, 6, 8, 256])  |
| 2829    | torch.Tensor.contiguous                                                     | head.layers.38                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665211        | torch.Size([2, 512, 6, 8, 256])  |
| 2830    | torch.Tensor.view                                                           | head.layers.38                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665211        | torch.Size([2, 512, 6, 8, 256])  |
| 2830    | torch.Tensor.view                                                           | head.layers.38                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665211        | torch.Size([2, 512, 48, 256])    |
| 2831    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | input_0             | qint8         | 0.0043630 | 0.0000000    | 0.5541015     | 0.0206669    | 0.0015308        | torch.Size([2, 512, 48, 8])      |
| 2831    | torch.Tensor.__getitem__                                                    | head.layers.38                                    | output              | qint8         | 0.0043630 | 0.0000000    | 0.5541015     | 0.0206669    | 0.0015308        | torch.Size([2, 512, 48, 8, 1])   |
| 2832    | torch.Tensor.reshape                                                        | head.layers.38                                    | input_0             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665211        | torch.Size([2, 512, 48, 256])    |
| 2832    | torch.Tensor.reshape                                                        | head.layers.38                                    | output              | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665211        | torch.Size([2, 512, 48, 8, 32])  |
| 2833    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.feat_mul                           | input_0             | qint8         | 0.0043630 | 0.0000000    | 0.5541015     | 0.0206669    | 0.0015308        | torch.Size([2, 512, 48, 8, 1])   |
| 2833    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.feat_mul                           | input_1             | qint8         | 0.2235520 | -28.6146584  | 28.3911057    | 0.0271789    | 2.8665211        | torch.Size([2, 512, 48, 8, 32])  |
| 2833    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.38.feat_mul                           | output              | qint8         | 0.0150244 | -1.9231185   | 1.9080942     | 0.0003569    | 0.0023469        | torch.Size([2, 512, 48, 8, 32])  |
| 2834    | torch.Tensor.view                                                           | head.layers.38                                    | input_0             | qint8         | 0.0150244 | -1.9231185   | 1.9080942     | 0.0003569    | 0.0023469        | torch.Size([2, 512, 48, 8, 32])  |
| 2834    | torch.Tensor.view                                                           | head.layers.38                                    | output              | qint8         | 0.0150244 | -1.9231185   | 1.9080942     | 0.0003569    | 0.0023469        | torch.Size([2, 512, 48, 256])    |
| 2835    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.38.feat_sum                           | input               | qint8         | 0.0150244 | -1.9231185   | 1.9080942     | 0.0003569    | 0.0023469        | torch.Size([2, 512, 48, 256])    |
| 2835    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sum        | head.layers.38.feat_sum                           | output              | qint8         | 0.0393799 | -5.0406322   | 4.5680728     | 0.0173349    | 0.2730421        | torch.Size([2, 512, 256])        |
| 2836    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.output_proj                        | input               | qint8         | 0.0393799 | -5.0406322   | 4.5680728     | 0.0173349    | 0.2730421        | torch.Size([2, 512, 256])        |
| 2836    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.output_proj                        | weight              | torch.float32 |           | -0.3224856   | 0.3687426     | 0.0000557    | 0.0083070        | torch.Size([256, 256])           |
| 2836    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.output_proj                        | bias                | torch.float32 |           | -0.0892059   | 0.1071169     | 0.0013445    | 0.0012537        | torch.Size([256])                |
| 2836    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.38.output_proj                        | output              | qint8         | 0.0481215 | -6.0151830   | 6.1114259     | 0.0504901    | 0.7089856        | torch.Size([2, 512, 256])        |
| 2837    | torch.nn.modules.dropout.Dropout                                            | head.layers.38.proj_drop                          | input               | qint8         | 0.0481215 | -6.0151830   | 6.1114259     | 0.0504901    | 0.7089856        | torch.Size([2, 512, 256])        |
| 2837    | torch.nn.modules.dropout.Dropout                                            | head.layers.38.proj_drop                          | output              | qint8         | 0.0481215 | -6.0151830   | 6.1114259     | 0.0504901    | 0.7089856        | torch.Size([2, 512, 256])        |
| 2838    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.residual_op                        | input_0             | qint8         | 0.0481215 | -6.0151830   | 6.1114259     | 0.0504901    | 0.7089856        | torch.Size([2, 512, 256])        |
| 2838    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.residual_op                        | input_1             | qint8         | 0.0427042 | -5.4661393   | 4.3985338     | 0.0109038    | 0.6492963        | torch.Size([2, 512, 256])        |
| 2838    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.cat        | head.layers.38.residual_op                        | output              | qint8         | 0.0450399 | -5.7651100   | 5.7200699     | 0.0307726    | 0.6797323        | torch.Size([2, 512, 512])        |
| 2839    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.input_mean.mean           | input_0             | qint8         | 0.0450399 | -5.7651100   | 5.7200699     | 0.0307726    | 0.6797323        | torch.Size([2, 512, 512])        |
| 2839    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.input_mean.mean           | output              | qint16        | 0.0000041 | -0.0143370   | 0.1300177     | 0.0307726    | 0.0004137        | torch.Size([2, 512, 1])          |
| 2840    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.39.pre_norm.sub                       | input_0             | qint8         | 0.0450399 | -5.7651100   | 5.7200699     | 0.0307726    | 0.6797323        | torch.Size([2, 512, 512])        |
| 2840    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.39.pre_norm.sub                       | input_1             | qint16        | 0.0000041 | -0.0143370   | 0.1300177     | 0.0307726    | 0.0004137        | torch.Size([2, 512, 1])          |
| 2840    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.39.pre_norm.sub                       | output              | qint16        | 0.0002463 | -5.8725648   | 5.6644750     | 0.0000005    | 0.6793184        | torch.Size([2, 512, 512])        |
| 2841    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.mul                       | input_0             | qint16        | 0.0002463 | -5.8725648   | 5.6644750     | 0.0000005    | 0.6793184        | torch.Size([2, 512, 512])        |
| 2841    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.mul                       | input_1             | qint16        | 0.0002463 | -5.8725648   | 5.6644750     | 0.0000005    | 0.6793184        | torch.Size([2, 512, 512])        |
| 2841    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.mul                       | output              | qint16        | 0.0019887 | 0.0000000    | 34.4866333    | 0.6793299    | 4.6114597        | torch.Size([2, 512, 512])        |
| 2842    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.var_mean.mean             | input_0             | qint16        | 0.0019887 | 0.0000000    | 34.4866333    | 0.6793299    | 4.6114597        | torch.Size([2, 512, 512])        |
| 2842    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.var_mean.mean             | output              | qint16        | 0.0000644 | 0.3502890    | 2.1114643     | 0.6781948    | 0.0700353        | torch.Size([2, 512, 1])          |
| 2843    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.39.pre_norm.rsqrt                     | input               | qint16        | 0.0000644 | 0.3502890    | 2.1114643     | 0.6781948    | 0.0700353        | torch.Size([2, 512, 1])          |
| 2843    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.39.pre_norm.rsqrt                     | output              | qint16        | 0.0000520 | 0.6882075    | 1.6895816     | 1.2739048    | 0.0495925        | torch.Size([2, 512, 1])          |
| 2844    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.out_mul                   | input_0             | qint16        | 0.0002463 | -5.8725648   | 5.6644750     | 0.0000005    | 0.6793184        | torch.Size([2, 512, 512])        |
| 2844    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.out_mul                   | input_1             | qint16        | 0.0000520 | 0.6882075    | 1.6895816     | 1.2739048    | 0.0495925        | torch.Size([2, 512, 1])          |
| 2844    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.out_mul                   | output              | qint16        | 0.0002782 | -8.8458633   | 7.0586061     | 0.0000014    | 1.0004807        | torch.Size([2, 512, 512])        |
| 2845    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.39.pre_norm.weight_quant              | input               | torch.float32 |           | 0.6608862    | 1.4900941     | 0.9789718    | 0.0452766        | torch.Size([512])                |
| 2845    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.39.pre_norm.weight_quant              | output              | qint16        | 0.0000455 | 0.6608846    | 1.4900714     | 0.9789717    | 0.0452770        | torch.Size([512])                |
| 2846    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.weight_mul                | input_0             | qint16        | 0.0002782 | -8.8458633   | 7.0586061     | 0.0000014    | 1.0004807        | torch.Size([2, 512, 512])        |
| 2846    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.weight_mul                | input_1             | qint16        | 0.0000455 | 0.6608846    | 1.4900714     | 0.9789717    | 0.0452770        | torch.Size([512])                |
| 2846    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.weight_mul                | output              | qint16        | 0.0002037 | -6.2391038   | 5.8999224     | 0.0018641    | 0.7747681        | torch.Size([2, 512, 512])        |
| 2847    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.39.pre_norm.bias_quant                | input               | torch.float32 |           | -0.1679264   | 0.1870694     | 0.0026524    | 0.0032695        | torch.Size([512])                |
| 2847    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.39.pre_norm.bias_quant                | output              | qint16        | 0.0000057 | -0.1679243   | 0.1870666     | 0.0026524    | 0.0032695        | torch.Size([512])                |
| 2848    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.39.pre_norm.bias_add                  | input_0             | qint16        | 0.0002037 | -6.2391038   | 5.8999224     | 0.0018641    | 0.7747681        | torch.Size([2, 512, 512])        |
| 2848    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.39.pre_norm.bias_add                  | input_1             | qint16        | 0.0000057 | -0.1679243   | 0.1870666     | 0.0026524    | 0.0032695        | torch.Size([512])                |
| 2848    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.39.pre_norm.bias_add                  | output              | qint8         | 0.0423501 | -5.4208083   | 5.3784580     | 0.0046135    | 0.7761624        | torch.Size([2, 512, 512])        |
| 2849    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.0.0                         | input               | qint8         | 0.0423501 | -5.4208083   | 5.3784580     | 0.0046135    | 0.7761624        | torch.Size([2, 512, 512])        |
| 2849    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.0.0                         | weight              | torch.float32 |           | -0.5392269   | 0.4812456     | -0.0005245   | 0.0077121        | torch.Size([1024, 512])          |
| 2849    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.0.0                         | bias                | torch.float32 |           | -0.1937473   | 0.0078548     | -0.0795463   | 0.0012755        | torch.Size([1024])               |
| 2849    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.0.0                         | output              | torch.float32 |           | -18.8387184  | 16.2749596    | -3.8539987   | 9.8128576        | torch.Size([2, 512, 1024])       |
| 2850    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.39.activate                           | input               | torch.float32 |           | -18.8387184  | 16.2749596    | -3.8539987   | 9.8128576        | torch.Size([2, 512, 1024])       |
| 2850    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.39.activate                           | output              | qint8         | 0.0964026 | 0.0000000    | 12.2431355    | 0.1797455    | 0.5091801        | torch.Size([2, 512, 1024])       |
| 2851    | torch.nn.modules.dropout.Dropout                                            | head.layers.39.layers.0.2                         | input               | qint8         | 0.0964026 | 0.0000000    | 12.2431355    | 0.1797455    | 0.5091801        | torch.Size([2, 512, 1024])       |
| 2851    | torch.nn.modules.dropout.Dropout                                            | head.layers.39.layers.0.2                         | output              | qint8         | 0.0964026 | 0.0000000    | 12.2431355    | 0.1797455    | 0.5091801        | torch.Size([2, 512, 1024])       |
| 2852    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.1                           | input               | qint8         | 0.0964026 | 0.0000000    | 12.2431355    | 0.1797455    | 0.5091801        | torch.Size([2, 512, 1024])       |
| 2852    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.1                           | weight              | torch.float32 |           | -0.5038874   | 0.5895149     | 0.0001352    | 0.0091717        | torch.Size([256, 1024])          |
| 2852    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.1                           | bias                | torch.float32 |           | -0.0698264   | 0.0842768     | -0.0005476   | 0.0007709        | torch.Size([256])                |
| 2852    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.layers.1                           | output              | qint8         | 0.1754952 | -22.4633827  | 21.9368973    | -0.0155154   | 16.2850819       | torch.Size([2, 512, 256])        |
| 2853    | torch.nn.modules.dropout.Dropout                                            | head.layers.39.layers.2                           | input               | qint8         | 0.1754952 | -22.4633827  | 21.9368973    | -0.0155154   | 16.2850819       | torch.Size([2, 512, 256])        |
| 2853    | torch.nn.modules.dropout.Dropout                                            | head.layers.39.layers.2                           | output              | qint8         | 0.1754952 | -22.4633827  | 21.9368973    | -0.0155154   | 16.2850819       | torch.Size([2, 512, 256])        |
| 2854    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.identity_fc                        | input               | qint8         | 0.0423501 | -5.4208083   | 5.3784580     | 0.0046135    | 0.7761624        | torch.Size([2, 512, 512])        |
| 2854    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.identity_fc                        | weight              | torch.float32 |           | -0.4967276   | 0.4735355     | -0.0000963   | 0.0086209        | torch.Size([256, 512])           |
| 2854    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.identity_fc                        | bias                | torch.float32 |           | -0.1381557   | 0.0822432     | -0.0011134   | 0.0011628        | torch.Size([256])                |
| 2854    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.39.identity_fc                        | output              | torch.float32 |           | -18.2817078  | 26.2105160    | -0.0139364   | 15.7980833       | torch.Size([2, 512, 256])        |
| 2855    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.39.short_add                          | input_0             | torch.float32 |           | -18.2817078  | 26.2105160    | -0.0139364   | 15.7980833       | torch.Size([2, 512, 256])        |
| 2855    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.39.short_add                          | input_1             | qint8         | 0.1754952 | -22.4633827  | 21.9368973    | -0.0155154   | 16.2850819       | torch.Size([2, 512, 256])        |
| 2855    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.39.short_add                          | output              | qint8         | 0.2482714 | -27.8064003  | 30.0408440    | -0.0296862   | 44.4664154       | torch.Size([2, 512, 256])        |
| 2856    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.40.input_mean.mean                    | input_0             | qint8         | 0.2482714 | -27.8064003  | 30.0408440    | -0.0296862   | 44.4664154       | torch.Size([2, 512, 256])        |
| 2856    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.40.input_mean.mean                    | output              | qint16        | 0.0000092 | -0.2667018   | 0.1988081     | -0.0296861   | 0.0058259        | torch.Size([2, 512, 1])          |
| 2857    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.40.sub                                | input_0             | qint8         | 0.2482714 | -27.8064003  | 30.0408440    | -0.0296862   | 44.4664154       | torch.Size([2, 512, 256])        |
| 2857    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.40.sub                                | input_1             | qint16        | 0.0000092 | -0.2667018   | 0.1988081     | -0.0296861   | 0.0058259        | torch.Size([2, 512, 1])          |
| 2857    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.40.sub                                | output              | qint16        | 0.0013535 | -27.8122063  | 30.2240696    | -0.0000040   | 44.4605789       | torch.Size([2, 512, 256])        |
| 2858    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.mul                                | input_0             | qint16        | 0.0013535 | -27.8122063  | 30.2240696    | -0.0000040   | 44.4605789       | torch.Size([2, 512, 256])        |
| 2858    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.mul                                | input_1             | qint16        | 0.0013535 | -27.8122063  | 30.2240696    | -0.0000040   | 44.4605789       | torch.Size([2, 512, 256])        |
| 2858    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.mul                                | output              | qint16        | 0.0600673 | 0.0000000    | 913.5042114   | 44.4601669   | 6537.6376953     | torch.Size([2, 512, 256])        |
| 2859    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.40.var_mean.mean                      | input_0             | qint16        | 0.0600673 | 0.0000000    | 913.5042114   | 44.4601669   | 6537.6376953     | torch.Size([2, 512, 256])        |
| 2859    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.40.var_mean.mean                      | output              | qint16        | 0.0070895 | 10.5846119   | 105.4561996   | 44.4608002   | 1207.9520264     | torch.Size([2, 512, 1])          |
| 2860    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.40.rsqrt                              | input               | qint16        | 0.0070895 | 10.5846119   | 105.4561996   | 44.4608002   | 1207.9520264     | torch.Size([2, 512, 1])          |
| 2860    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.40.rsqrt                              | output              | qint16        | 0.0000102 | 0.0973823    | 0.3073698     | 0.1848528    | 0.0038416        | torch.Size([2, 512, 1])          |
| 2861    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.out_mul                            | input_0             | qint16        | 0.0013535 | -27.8122063  | 30.2240696    | -0.0000040   | 44.4605789       | torch.Size([2, 512, 256])        |
| 2861    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.out_mul                            | input_1             | qint16        | 0.0000102 | 0.0973823    | 0.3073698     | 0.1848528    | 0.0038416        | torch.Size([2, 512, 1])          |
| 2861    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.out_mul                            | output              | qint16        | 0.0001724 | -4.8017368   | 5.6492949     | -0.0000005   | 0.9999980        | torch.Size([2, 512, 256])        |
| 2862    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.40.weight_quant                       | input               | torch.float32 |           | 0.3611936    | 1.1129279     | 0.8462322    | 0.0141788        | torch.Size([256])                |
| 2862    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.40.weight_quant                       | output              | qint16        | 0.0000340 | 0.3611772    | 1.1129109     | 0.8462322    | 0.0141787        | torch.Size([256])                |
| 2863    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.weight_mul                         | input_0             | qint16        | 0.0001724 | -4.8017368   | 5.6492949     | -0.0000005   | 0.9999980        | torch.Size([2, 512, 256])        |
| 2863    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.weight_mul                         | input_1             | qint16        | 0.0000340 | 0.3611772    | 1.1129109     | 0.8462322    | 0.0141787        | torch.Size([256])                |
| 2863    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.40.weight_mul                         | output              | qint16        | 0.0001429 | -4.3882594   | 4.6831064     | 0.0012515    | 0.7389750        | torch.Size([2, 512, 256])        |
| 2864    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.40.bias_quant                         | input               | torch.float32 |           | -0.1068868   | 0.1063567     | 0.0003906    | 0.0013848        | torch.Size([256])                |
| 2864    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.40.bias_quant                         | output              | qint16        | 0.0000033 | -0.1068852   | 0.1063568     | 0.0003905    | 0.0013848        | torch.Size([256])                |
| 2865    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.40.bias_add                           | input_0             | qint16        | 0.0001429 | -4.3882594   | 4.6831064     | 0.0012515    | 0.7389750        | torch.Size([2, 512, 256])        |
| 2865    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.40.bias_add                           | input_1             | qint16        | 0.0000033 | -0.1068852   | 0.1063568     | 0.0003905    | 0.0013848        | torch.Size([256])                |
| 2865    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.40.bias_add                           | output              | qint8         | 0.0307486 | -3.9358160   | 3.9050674     | 0.0017250    | 0.7349314        | torch.Size([2, 512, 256])        |
| 2866    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.add1                               | input_0             | qint8         | 0.0307486 | -3.9358160   | 3.9050674     | 0.0017250    | 0.7349314        | torch.Size([2, 512, 256])        |
| 2866    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.add1                               | input_1             | qint8         | 0.0569265 | -1.6508691   | 7.2296681     | 0.0627509    | 0.8982574        | torch.Size([2, 512, 256])        |
| 2866    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.add1                               | output              | qint8         | 0.0648109 | -4.5367646   | 8.2309875     | 0.0644774    | 1.4485302        | torch.Size([2, 512, 256])        |
| 2867    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.0                           | input               | qint8         | 0.0648109 | -4.5367646   | 8.2309875     | 0.0644774    | 1.4485302        | torch.Size([2, 512, 256])        |
| 2867    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.0                           | weight              | torch.float32 |           | -0.9671087   | 1.0510615     | 0.0000745    | 0.0080127        | torch.Size([256, 256])           |
| 2867    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.0                           | bias                | torch.float32 |           | -0.2240563   | 0.0783759     | -0.0502922   | 0.0026307        | torch.Size([256])                |
| 2867    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.0                           | output              | torch.float32 |           | -13.8775845  | 10.5064259    | -1.4548702   | 5.2595387        | torch.Size([2, 512, 256])        |
| 2868    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.1                           | input               | torch.float32 |           | -13.8775845  | 10.5064259    | -1.4548702   | 5.2595387        | torch.Size([2, 512, 256])        |
| 2868    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.1                           | output              | qint8         | 0.0711467 | 0.0000000    | 9.0356340     | 0.3298588    | 0.6281105        | torch.Size([2, 512, 256])        |
| 2869    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.2                           | input               | qint8         | 0.0711467 | 0.0000000    | 9.0356340     | 0.3298588    | 0.6281105        | torch.Size([2, 512, 256])        |
| 2869    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.2                           | weight              | torch.float32 |           | -0.7024922   | 0.4782098     | -0.0114104   | 0.0081320        | torch.Size([256, 256])           |
| 2869    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.2                           | bias                | torch.float32 |           | -0.1883502   | 0.2478070     | -0.0179733   | 0.0065595        | torch.Size([256])                |
| 2869    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.2                           | output              | torch.float32 |           | -14.0593300  | 8.3704205     | -0.9502421   | 2.9371805        | torch.Size([2, 512, 256])        |
| 2870    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.3                           | input               | torch.float32 |           | -14.0593300  | 8.3704205     | -0.9502421   | 2.9371805        | torch.Size([2, 512, 256])        |
| 2870    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.3                           | output              | qint8         | 0.0569740 | 0.0000000    | 7.2356949     | 0.2812284    | 0.3624919        | torch.Size([2, 512, 256])        |
| 2871    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.input_mean.mean           | input_0             | qint8         | 0.0569740 | 0.0000000    | 7.2356949     | 0.2812284    | 0.3624919        | torch.Size([2, 512, 256])        |
| 2871    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.input_mean.mean           | output              | qint16        | 0.0000196 | 0.1235233    | 0.5394667     | 0.2812281    | 0.0038086        | torch.Size([2, 512, 1])          |
| 2872    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.layers.4.sub                       | input_0             | qint8         | 0.0569740 | 0.0000000    | 7.2356949     | 0.2812284    | 0.3624919        | torch.Size([2, 512, 256])        |
| 2872    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.layers.4.sub                       | input_1             | qint16        | 0.0000196 | 0.1235233    | 0.5394667     | 0.2812281    | 0.0038086        | torch.Size([2, 512, 1])          |
| 2872    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.layers.4.sub                       | output              | qint16        | 0.0002722 | -0.5394104   | 6.7494340     | 0.0000122    | 0.3586797        | torch.Size([2, 512, 256])        |
| 2873    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.mul                       | input_0             | qint16        | 0.0002722 | -0.5394104   | 6.7494340     | 0.0000122    | 0.3586797        | torch.Size([2, 512, 256])        |
| 2873    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.mul                       | input_1             | qint16        | 0.0002722 | -0.5394104   | 6.7494340     | 0.0000122    | 0.3586797        | torch.Size([2, 512, 256])        |
| 2873    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.mul                       | output              | qint16        | 0.0024287 | 0.0000000    | 45.5556374    | 0.3587815    | 1.2962679        | torch.Size([2, 512, 256])        |
| 2874    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.var_mean.mean             | input_0             | qint16        | 0.0024287 | 0.0000000    | 45.5556374    | 0.3587815    | 1.2962679        | torch.Size([2, 512, 256])        |
| 2874    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.var_mean.mean             | output              | qint16        | 0.0000630 | 0.0741699    | 1.5886073     | 0.3587808    | 0.0221727        | torch.Size([2, 512, 1])          |
| 2875    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.layers.4.rsqrt                     | input               | qint16        | 0.0000630 | 0.0741699    | 1.5886073     | 0.3587808    | 0.0221727        | torch.Size([2, 512, 1])          |
| 2875    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.layers.4.rsqrt                     | output              | qint16        | 0.0001424 | 0.7934365    | 3.6716025     | 1.7786727    | 0.1542375        | torch.Size([2, 512, 1])          |
| 2876    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.out_mul                   | input_0             | qint16        | 0.0002722 | -0.5394104   | 6.7494340     | 0.0000122    | 0.3586797        | torch.Size([2, 512, 256])        |
| 2876    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.out_mul                   | input_1             | qint16        | 0.0001424 | 0.7934365    | 3.6716025     | 1.7786727    | 0.1542375        | torch.Size([2, 512, 1])          |
| 2876    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.out_mul                   | output              | qint16        | 0.0002603 | -0.5837738   | 8.0898085     | 0.0000174    | 0.9997542        | torch.Size([2, 512, 256])        |
| 2877    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.4.weight_quant              | input               | torch.float32 |           | 0.7249702    | 1.1691658     | 0.9793198    | 0.0052795        | torch.Size([256])                |
| 2877    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.4.weight_quant              | output              | qint16        | 0.0000357 | 0.7249596    | 1.1691481     | 0.9793198    | 0.0052796        | torch.Size([256])                |
| 2878    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.weight_mul                | input_0             | qint16        | 0.0002603 | -0.5837738   | 8.0898085     | 0.0000174    | 0.9997542        | torch.Size([2, 512, 256])        |
| 2878    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.weight_mul                | input_1             | qint16        | 0.0000357 | 0.7249596    | 1.1691481     | 0.9793198    | 0.0052796        | torch.Size([256])                |
| 2878    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.weight_mul                | output              | qint16        | 0.0002939 | -0.6768344   | 8.9875259     | -0.0025075   | 0.9558188        | torch.Size([2, 512, 256])        |
| 2879    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.4.bias_quant                | input               | torch.float32 |           | -0.1581516   | 0.2960921     | 0.0620406    | 0.0084647        | torch.Size([256])                |
| 2879    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.4.bias_quant                | output              | qint16        | 0.0000090 | -0.1581507   | 0.2960876     | 0.0620408    | 0.0084647        | torch.Size([256])                |
| 2880    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.layers.4.bias_add                  | input_0             | qint16        | 0.0002939 | -0.6768344   | 8.9875259     | -0.0025075   | 0.9558188        | torch.Size([2, 512, 256])        |
| 2880    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.layers.4.bias_add                  | input_1             | qint16        | 0.0000090 | -0.1581507   | 0.2960876     | 0.0620408    | 0.0084647        | torch.Size([256])                |
| 2880    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.layers.4.bias_add                  | output              | qint8         | 0.0578579 | -0.6942947   | 7.3479519     | 0.0594143    | 0.9150969        | torch.Size([2, 512, 256])        |
| 2881    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.5                           | input               | qint8         | 0.0578579 | -0.6942947   | 7.3479519     | 0.0594143    | 0.9150969        | torch.Size([2, 512, 256])        |
| 2881    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.5                           | weight              | torch.float32 |           | -0.5549148   | 0.5088162     | 0.0022494    | 0.0062838        | torch.Size([256, 256])           |
| 2881    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.5                           | bias                | torch.float32 |           | -0.1952680   | 0.0616404     | -0.0414291   | 0.0020574        | torch.Size([256])                |
| 2881    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.5                           | output              | torch.float32 |           | -8.6135702   | 10.9062824    | -0.9069549   | 3.1608193        | torch.Size([2, 512, 256])        |
| 2882    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.6                           | input               | torch.float32 |           | -8.6135702   | 10.9062824    | -0.9069549   | 3.1608193        | torch.Size([2, 512, 256])        |
| 2882    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.6                           | output              | qint8         | 0.0631569 | 0.0000000    | 8.0209284     | 0.3558684    | 0.6266572        | torch.Size([2, 512, 256])        |
| 2883    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.7                           | input               | qint8         | 0.0631569 | 0.0000000    | 8.0209284     | 0.3558684    | 0.6266572        | torch.Size([2, 512, 256])        |
| 2883    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.7                           | weight              | torch.float32 |           | -0.4746350   | 0.4722923     | -0.0083948   | 0.0048993        | torch.Size([256, 256])           |
| 2883    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.7                           | bias                | torch.float32 |           | -0.1257761   | 0.3425134     | -0.0276972   | 0.0021361        | torch.Size([256])                |
| 2883    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.7                           | output              | torch.float32 |           | -9.3402996   | 28.1359196    | -1.2677188   | 3.2203004        | torch.Size([2, 512, 256])        |
| 2884    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.8                           | input               | torch.float32 |           | -9.3402996   | 28.1359196    | -1.2677188   | 3.2203004        | torch.Size([2, 512, 256])        |
| 2884    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.layers.8                           | output              | qint8         | 0.2470197 | 0.0000000    | 28.1602421    | 0.2373205    | 1.1838517        | torch.Size([2, 512, 256])        |
| 2885    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.input_mean.mean           | input_0             | qint8         | 0.2470197 | 0.0000000    | 28.1602421    | 0.2373205    | 1.1838517        | torch.Size([2, 512, 256])        |
| 2885    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.input_mean.mean           | output              | qint16        | 0.0000332 | 0.0752729    | 0.8539674     | 0.2373197    | 0.0071377        | torch.Size([2, 512, 1])          |
| 2886    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.layers.9.sub                       | input_0             | qint8         | 0.2470197 | 0.0000000    | 28.1602421    | 0.2373205    | 1.1838517        | torch.Size([2, 512, 256])        |
| 2886    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.layers.9.sub                       | input_1             | qint16        | 0.0000332 | 0.0752729    | 0.8539674     | 0.2373197    | 0.0071377        | torch.Size([2, 512, 1])          |
| 2886    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.layers.9.sub                       | output              | qint16        | 0.0010366 | -0.8541243   | 27.9704990    | -0.0000591   | 1.1767631        | torch.Size([2, 512, 256])        |
| 2887    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.mul                       | input_0             | qint16        | 0.0010366 | -0.8541243   | 27.9704990    | -0.0000591   | 1.1767631        | torch.Size([2, 512, 256])        |
| 2887    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.mul                       | input_1             | qint16        | 0.0010366 | -0.8541243   | 27.9704990    | -0.0000591   | 1.1767631        | torch.Size([2, 512, 256])        |
| 2887    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.mul                       | output              | qint16        | 0.0352496 | 0.0000000    | 782.3639526   | 1.1793234    | 294.8939209      | torch.Size([2, 512, 256])        |
| 2888    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.var_mean.mean             | input_0             | qint16        | 0.0352496 | 0.0000000    | 782.3639526   | 1.1793234    | 294.8939209      | torch.Size([2, 512, 256])        |
| 2888    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.var_mean.mean             | output              | qint16        | 0.0001550 | 0.2048724    | 3.8542972     | 1.1793251    | 0.4405374        | torch.Size([2, 512, 1])          |
| 2889    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.layers.9.rsqrt                     | input               | qint16        | 0.0001550 | 0.2048724    | 3.8542972     | 1.1793251    | 0.4405374        | torch.Size([2, 512, 1])          |
| 2889    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.layers.9.rsqrt                     | output              | qint16        | 0.0000950 | 0.5093679    | 2.2092884     | 1.0328076    | 0.0798208        | torch.Size([2, 512, 1])          |
| 2890    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.out_mul                   | input_0             | qint16        | 0.0010366 | -0.8541243   | 27.9704990    | -0.0000591   | 1.1767631        | torch.Size([2, 512, 256])        |
| 2890    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.out_mul                   | input_1             | qint16        | 0.0000950 | 0.5093679    | 2.2092884     | 1.0328076    | 0.0798208        | torch.Size([2, 512, 1])          |
| 2890    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.out_mul                   | output              | qint16        | 0.0004850 | -0.5233418   | 15.8928080    | -0.0000914   | 0.9964820        | torch.Size([2, 512, 256])        |
| 2891    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.9.weight_quant              | input               | torch.float32 |           | 0.6879961    | 1.2603064     | 0.9672197    | 0.0079379        | torch.Size([256])                |
| 2891    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.9.weight_quant              | output              | qint16        | 0.0000385 | 0.6880097    | 1.2602870     | 0.9672204    | 0.0079378        | torch.Size([256])                |
| 2892    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.weight_mul                | input_0             | qint16        | 0.0004850 | -0.5233418   | 15.8928080    | -0.0000914   | 0.9964820        | torch.Size([2, 512, 256])        |
| 2892    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.weight_mul                | input_1             | qint16        | 0.0000385 | 0.6880097    | 1.2602870     | 0.9672204    | 0.0079378        | torch.Size([256])                |
| 2892    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.weight_mul                | output              | qint16        | 0.0003337 | -0.6597154   | 10.9341908    | -0.0130637   | 0.6535774        | torch.Size([2, 512, 256])        |
| 2893    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.9.bias_quant                | input               | torch.float32 |           | -0.2941498   | 0.1362485     | 0.0674987    | 0.0034837        | torch.Size([256])                |
| 2893    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.9.bias_quant                | output              | qint16        | 0.0000090 | -0.2941543   | 0.1362510     | 0.0674987    | 0.0034837        | torch.Size([256])                |
| 2894    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.layers.9.bias_add                  | input_0             | qint16        | 0.0003337 | -0.6597154   | 10.9341908    | -0.0130637   | 0.6535774        | torch.Size([2, 512, 256])        |
| 2894    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.layers.9.bias_add                  | input_1             | qint16        | 0.0000090 | -0.2941543   | 0.1362510     | 0.0674987    | 0.0034837        | torch.Size([256])                |
| 2894    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.layers.9.bias_add                  | output              | qint8         | 0.0810696 | -0.6485567   | 10.2958374    | 0.0559583    | 0.6106889        | torch.Size([2, 512, 256])        |
| 2895    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.10                          | input               | qint8         | 0.0810696 | -0.6485567   | 10.2958374    | 0.0559583    | 0.6106889        | torch.Size([2, 512, 256])        |
| 2895    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.10                          | weight              | torch.float32 |           | -0.5602010   | 0.3975652     | -0.0010181   | 0.0060089        | torch.Size([11, 256])            |
| 2895    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.10                          | bias                | torch.float32 |           | -0.0569350   | 0.0453742     | -0.0089781   | 0.0008329        | torch.Size([11])                 |
| 2895    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.layers.10                          | output              | qint8         | 0.1358676 | -17.3910503  | 16.0323753    | 0.1999536    | 3.7647698        | torch.Size([2, 512, 11])         |
| 2896    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.11.scale_quant_stub         | input               | torch.float32 |           | 0.0074676    | 0.9877831     | 0.1357567    | 0.0805249        | torch.Size([11])                 |
| 2896    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.layers.11.scale_quant_stub         | output              | qint16        | 0.0000301 | 0.0074760    | 0.9877680     | 0.1357576    | 0.0805218        | torch.Size([11])                 |
| 2897    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.11.mul                      | input_0             | qint8         | 0.1358676 | -17.3910503  | 16.0323753    | 0.1999536    | 3.7647698        | torch.Size([2, 512, 11])         |
| 2897    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.11.mul                      | input_1             | qint16        | 0.0000301 | 0.0074760    | 0.9877680     | 0.1357576    | 0.0805218        | torch.Size([11])                 |
| 2897    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.layers.11.mul                      | output              | qint16        | 0.0000522 | -1.5977910   | 1.3420755     | -0.0024444   | 0.0335251        | torch.Size([2, 512, 11])         |
| 2898    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.add2                               | input_0             | qint16        | 0.0000522 | -1.5977910   | 1.3420755     | -0.0024444   | 0.0335251        | torch.Size([2, 512, 11])         |
| 2898    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.add2                               | input_1             | qint16        | 0.0017906 | -53.4581947  | 53.4134293    | 0.2365430    | 76.3572617       | torch.Size([2, 512, 11])         |
| 2898    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.add2                               | output              | qint16        | 0.0017906 | -53.4534492  | 53.4158478    | 0.2340835    | 76.4504242       | torch.Size([2, 512, 11])         |
| 2899    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.0                       | input               | qint8         | 0.0307486 | -3.9358160   | 3.9050674     | 0.0017250    | 0.7349314        | torch.Size([2, 512, 256])        |
| 2899    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.0                       | weight              | torch.float32 |           | -0.3916217   | 0.4025688     | -0.0007721   | 0.0074816        | torch.Size([256, 256])           |
| 2899    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.0                       | bias                | torch.float32 |           | -0.2124989   | 0.1511600     | -0.0473562   | 0.0046897        | torch.Size([256])                |
| 2899    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.0                       | output              | torch.float32 |           | -11.5958710  | 14.5481672    | -0.6247739   | 10.2819099       | torch.Size([2, 512, 256])        |
| 2900    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.cls_layers.1                       | input               | torch.float32 |           | -11.5958710  | 14.5481672    | -0.6247739   | 10.2819099       | torch.Size([2, 512, 256])        |
| 2900    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.cls_layers.1                       | output              | qint8         | 0.1129420 | 0.0000000    | 14.3436356    | 1.0045137    | 3.4334288        | torch.Size([2, 512, 256])        |
| 2901    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.input_mean.mean       | input_0             | qint8         | 0.1129420 | 0.0000000    | 14.3436356    | 1.0045137    | 3.4334288        | torch.Size([2, 512, 256])        |
| 2901    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.input_mean.mean       | output              | qint16        | 0.0000497 | 0.4102948    | 1.4656152     | 1.0045102    | 0.0682868        | torch.Size([2, 512, 1])          |
| 2902    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.2.sub                   | input_0             | qint8         | 0.1129420 | 0.0000000    | 14.3436356    | 1.0045137    | 3.4334288        | torch.Size([2, 512, 256])        |
| 2902    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.2.sub                   | input_1             | qint16        | 0.0000497 | 0.4102948    | 1.4656152     | 1.0045102    | 0.0682868        | torch.Size([2, 512, 1])          |
| 2902    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.2.sub                   | output              | qint16        | 0.0004272 | -1.4656147   | 13.1379910    | 0.0000298    | 3.3651371        | torch.Size([2, 512, 256])        |
| 2903    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.mul                   | input_0             | qint16        | 0.0004272 | -1.4656147   | 13.1379910    | 0.0000298    | 3.3651371        | torch.Size([2, 512, 256])        |
| 2903    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.mul                   | input_1             | qint16        | 0.0004272 | -1.4656147   | 13.1379910    | 0.0000298    | 3.3651371        | torch.Size([2, 512, 256])        |
| 2903    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.mul                   | output              | qint16        | 0.0059792 | 0.0000000    | 172.6086578   | 3.3654134    | 79.9446640       | torch.Size([2, 512, 256])        |
| 2904    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.var_mean.mean         | input_0             | qint16        | 0.0059792 | 0.0000000    | 172.6086578   | 3.3654134    | 79.9446640       | torch.Size([2, 512, 256])        |
| 2904    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.var_mean.mean         | output              | qint16        | 0.0001856 | 1.0544636    | 5.3643656     | 3.3654122    | 1.0637059        | torch.Size([2, 512, 1])          |
| 2905    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.cls_layers.2.rsqrt                 | input               | qint16        | 0.0001856 | 1.0544636    | 5.3643656     | 3.3654122    | 1.0637059        | torch.Size([2, 512, 1])          |
| 2905    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.cls_layers.2.rsqrt                 | output              | qint16        | 0.0000279 | 0.4317482    | 0.9145448     | 0.5670786    | 0.0096214        | torch.Size([2, 512, 1])          |
| 2906    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.out_mul               | input_0             | qint16        | 0.0004272 | -1.4656147   | 13.1379910    | 0.0000298    | 3.3651371        | torch.Size([2, 512, 256])        |
| 2906    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.out_mul               | input_1             | qint16        | 0.0000279 | 0.4317482    | 0.9145448     | 0.5670786    | 0.0096214        | torch.Size([2, 512, 1])          |
| 2906    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.out_mul               | output              | qint16        | 0.0002366 | -0.6604574   | 7.7511492     | 0.0000126    | 0.9994133        | torch.Size([2, 512, 256])        |
| 2907    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.2.weight_quant          | input               | torch.float32 |           | 0.7428278    | 1.2361827     | 0.9719122    | 0.0050141        | torch.Size([256])                |
| 2907    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.2.weight_quant          | output              | qint16        | 0.0000377 | 0.7428225    | 1.2361639     | 0.9719127    | 0.0050141        | torch.Size([256])                |
| 2908    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.weight_mul            | input_0             | qint16        | 0.0002366 | -0.6604574   | 7.7511492     | 0.0000126    | 0.9994133        | torch.Size([2, 512, 256])        |
| 2908    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.weight_mul            | input_1             | qint16        | 0.0000377 | 0.7428225    | 1.2361639     | 0.9719127    | 0.0050141        | torch.Size([256])                |
| 2908    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.weight_mul            | output              | qint16        | 0.0002924 | -0.7890536   | 9.5794439     | 0.0059996    | 0.9922521        | torch.Size([2, 512, 256])        |
| 2909    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.2.bias_quant            | input               | torch.float32 |           | -0.0868656   | 0.2186394     | 0.0415796    | 0.0023078        | torch.Size([256])                |
| 2909    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.2.bias_quant            | output              | qint16        | 0.0000067 | -0.0868686   | 0.2186361     | 0.0415795    | 0.0023079        | torch.Size([256])                |
| 2910    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.2.bias_add              | input_0             | qint16        | 0.0002924 | -0.7890536   | 9.5794439     | 0.0059996    | 0.9922521        | torch.Size([2, 512, 256])        |
| 2910    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.2.bias_add              | input_1             | qint16        | 0.0000067 | -0.0868686   | 0.2186361     | 0.0415795    | 0.0023079        | torch.Size([256])                |
| 2910    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.2.bias_add              | output              | qint8         | 0.0627147 | -0.7525767   | 7.9647703     | 0.0474650    | 1.0066388        | torch.Size([2, 512, 256])        |
| 2911    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.3                       | input               | qint8         | 0.0627147 | -0.7525767   | 7.9647703     | 0.0474650    | 1.0066388        | torch.Size([2, 512, 256])        |
| 2911    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.3                       | weight              | torch.float32 |           | -0.6531906   | 0.4522330     | 0.0064459    | 0.0071903        | torch.Size([256, 256])           |
| 2911    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.3                       | bias                | torch.float32 |           | -0.1963050   | 0.2913345     | -0.0591058   | 0.0040117        | torch.Size([256])                |
| 2911    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.3                       | output              | torch.float32 |           | -16.9295044  | 27.6297131    | -2.1281080   | 9.4131775        | torch.Size([2, 512, 256])        |
| 2912    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.cls_layers.4                       | input               | torch.float32 |           | -16.9295044  | 27.6297131    | -2.1281080   | 9.4131775        | torch.Size([2, 512, 256])        |
| 2912    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.cls_layers.4                       | output              | qint8         | 0.2300008 | 0.0000000    | 27.6000919    | 0.4364148    | 2.5933018        | torch.Size([2, 512, 256])        |
| 2913    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.input_mean.mean       | input_0             | qint8         | 0.2300008 | 0.0000000    | 27.6000919    | 0.4364148    | 2.5933018        | torch.Size([2, 512, 256])        |
| 2913    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.input_mean.mean       | output              | qint16        | 0.0000361 | 0.2254937    | 1.1724520     | 0.4364116    | 0.0063595        | torch.Size([2, 512, 1])          |
| 2914    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.5.sub                   | input_0             | qint8         | 0.2300008 | 0.0000000    | 27.6000919    | 0.4364148    | 2.5933018        | torch.Size([2, 512, 256])        |
| 2914    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.5.sub                   | input_1             | qint16        | 0.0000361 | 0.2254937    | 1.1724520     | 0.4364116    | 0.0063595        | torch.Size([2, 512, 1])          |
| 2914    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.5.sub                   | output              | qint16        | 0.0009402 | -1.1724275   | 27.2563534    | 0.0000737    | 2.5869043        | torch.Size([2, 512, 256])        |
| 2915    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.mul                   | input_0             | qint16        | 0.0009402 | -1.1724275   | 27.2563534    | 0.0000737    | 2.5869043        | torch.Size([2, 512, 256])        |
| 2915    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.mul                   | input_1             | qint16        | 0.0009402 | -1.1724275   | 27.2563534    | 0.0000737    | 2.5869043        | torch.Size([2, 512, 256])        |
| 2915    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.mul                   | output              | qint16        | 0.0289670 | 0.0000000    | 742.9160156   | 2.5880065    | 629.3818970      | torch.Size([2, 512, 256])        |
| 2916    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.var_mean.mean         | input_0             | qint16        | 0.0289670 | 0.0000000    | 742.9160156   | 2.5880065    | 629.3818970      | torch.Size([2, 512, 256])        |
| 2916    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.var_mean.mean         | output              | qint16        | 0.0001386 | 0.3801037    | 3.9681556     | 2.5880067    | 0.3984727        | torch.Size([2, 512, 1])          |
| 2917    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.cls_layers.5.rsqrt                 | input               | qint16        | 0.0001386 | 0.3801037    | 3.9681556     | 2.5880067    | 0.3984727        | torch.Size([2, 512, 1])          |
| 2917    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.cls_layers.5.rsqrt                 | output              | qint16        | 0.0000517 | 0.5019889    | 1.6219811     | 0.6413465    | 0.0130706        | torch.Size([2, 512, 1])          |
| 2918    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.out_mul               | input_0             | qint16        | 0.0009402 | -1.1724275   | 27.2563534    | 0.0000737    | 2.5869043        | torch.Size([2, 512, 256])        |
| 2918    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.out_mul               | input_1             | qint16        | 0.0000517 | 0.5019889    | 1.6219811     | 0.6413465    | 0.0130706        | torch.Size([2, 512, 1])          |
| 2918    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.out_mul               | output              | qint16        | 0.0004616 | -0.7704371   | 14.4485807    | 0.0000616    | 0.9994718        | torch.Size([2, 512, 256])        |
| 2919    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.5.weight_quant          | input               | torch.float32 |           | 0.5720253    | 0.9521823     | 0.8364800    | 0.0042872        | torch.Size([256])                |
| 2919    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.5.weight_quant          | output              | qint16        | 0.0000291 | 0.5720213    | 0.9521677     | 0.8364807    | 0.0042872        | torch.Size([256])                |
| 2920    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.weight_mul            | input_0             | qint16        | 0.0004616 | -0.7704371   | 14.4485807    | 0.0000616    | 0.9994718        | torch.Size([2, 512, 256])        |
| 2920    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.weight_mul            | input_1             | qint16        | 0.0000291 | 0.5720213    | 0.9521677     | 0.8364807    | 0.0042872        | torch.Size([256])                |
| 2920    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.weight_mul            | output              | qint16        | 0.0004149 | -0.7322125   | 12.9848452    | 0.0106565    | 0.7921636        | torch.Size([2, 512, 256])        |
| 2921    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.5.bias_quant            | input               | torch.float32 |           | -0.1434759   | 0.2099707     | 0.0936137    | 0.0056069        | torch.Size([256])                |
| 2921    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.cls_layers.5.bias_quant            | output              | qint16        | 0.0000064 | -0.1434727   | 0.2099674     | 0.0936136    | 0.0056070        | torch.Size([256])                |
| 2922    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.5.bias_add              | input_0             | qint16        | 0.0004149 | -0.7322125   | 12.9848452    | 0.0106565    | 0.7921636        | torch.Size([2, 512, 256])        |
| 2922    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.5.bias_add              | input_1             | qint16        | 0.0000064 | -0.1434727   | 0.2099674     | 0.0936136    | 0.0056070        | torch.Size([256])                |
| 2922    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.5.bias_add              | output              | qint8         | 0.1015764 | -0.8126108   | 12.9001961    | 0.1046797    | 0.7522746        | torch.Size([2, 512, 256])        |
| 2923    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.6                       | input               | qint8         | 0.1015764 | -0.8126108   | 12.9001961    | 0.1046797    | 0.7522746        | torch.Size([2, 512, 256])        |
| 2923    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.6                       | weight              | torch.float32 |           | -0.3821189   | 0.1957047     | -0.0082432   | 0.0038872        | torch.Size([10, 256])            |
| 2923    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.6                       | bias                | torch.float32 |           | -4.5506554   | -4.5029793    | -4.5237875   | 0.0002058        | torch.Size([10])                 |
| 2923    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.cls_layers.6                       | output              | torch.float32 |           | -8.3864613   | 3.0943866     | -4.9181986   | 1.7579498        | torch.Size([2, 512, 10])         |
| 2924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.0                   | input               | qint8         | 0.0648109 | -4.5367646   | 8.2309875     | 0.0644774    | 1.4485302        | torch.Size([2, 512, 256])        |
| 2924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.0                   | weight              | torch.float32 |           | -0.5681219   | 0.4727457     | 0.0007156    | 0.0080122        | torch.Size([256, 256])           |
| 2924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.0                   | bias                | torch.float32 |           | -0.2011542   | 0.2002611     | -0.0506676   | 0.0076206        | torch.Size([256])                |
| 2924    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.0                   | output              | torch.float32 |           | -15.2000360  | 13.9074354    | -1.2385695   | 12.5961781       | torch.Size([2, 512, 256])        |
| 2925    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.quality_layers.1                   | input               | torch.float32 |           | -15.2000360  | 13.9074354    | -1.2385695   | 12.5961781       | torch.Size([2, 512, 256])        |
| 2925    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.quality_layers.1                   | output              | qint8         | 0.1115867 | 0.0000000    | 13.9483366    | 0.9106061    | 3.1633446        | torch.Size([2, 512, 256])        |
| 2926    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.input_mean.mean   | input_0             | qint8         | 0.1115867 | 0.0000000    | 13.9483366    | 0.9106061    | 3.1633446        | torch.Size([2, 512, 256])        |
| 2926    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.input_mean.mean   | output              | qint16        | 0.0000459 | 0.5086603    | 1.4759134     | 0.9106078    | 0.0231206        | torch.Size([2, 512, 1])          |
| 2927    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.2.sub               | input_0             | qint8         | 0.1115867 | 0.0000000    | 13.9483366    | 0.9106061    | 3.1633446        | torch.Size([2, 512, 256])        |
| 2927    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.2.sub               | input_1             | qint16        | 0.0000459 | 0.5086603    | 1.4759134     | 0.9106078    | 0.0231206        | torch.Size([2, 512, 1])          |
| 2927    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.2.sub               | output              | qint16        | 0.0004373 | -1.4760474   | 12.8597622    | -0.0000140   | 3.1402655        | torch.Size([2, 512, 256])        |
| 2928    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.mul               | input_0             | qint16        | 0.0004373 | -1.4760474   | 12.8597622    | -0.0000140   | 3.1402655        | torch.Size([2, 512, 256])        |
| 2928    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.mul               | input_1             | qint16        | 0.0004373 | -1.4760474   | 12.8597622    | -0.0000140   | 3.1402655        | torch.Size([2, 512, 256])        |
| 2928    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.mul               | output              | qint16        | 0.0062684 | 0.0000000    | 165.3722076   | 3.1401668    | 76.6553116       | torch.Size([2, 512, 256])        |
| 2929    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.var_mean.mean     | input_0             | qint16        | 0.0062684 | 0.0000000    | 165.3722076   | 3.1401668    | 76.6553116       | torch.Size([2, 512, 256])        |
| 2929    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.var_mean.mean     | output              | qint16        | 0.0002635 | 1.3273973    | 6.4994516     | 3.1401849    | 0.7371426        | torch.Size([2, 512, 1])          |
| 2930    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.quality_layers.2.rsqrt             | input               | qint16        | 0.0002635 | 1.3273973    | 6.4994516     | 3.1401849    | 0.7371426        | torch.Size([2, 512, 1])          |
| 2930    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.quality_layers.2.rsqrt             | output              | qint16        | 0.0000313 | 0.3922601    | 0.8679585     | 0.5786296    | 0.0052204        | torch.Size([2, 512, 1])          |
| 2931    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.out_mul           | input_0             | qint16        | 0.0004373 | -1.4760474   | 12.8597622    | -0.0000140   | 3.1402655        | torch.Size([2, 512, 256])        |
| 2931    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.out_mul           | input_1             | qint16        | 0.0000313 | 0.3922601    | 0.8679585     | 0.5786296    | 0.0052204        | torch.Size([2, 512, 1])          |
| 2931    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.out_mul           | output              | qint16        | 0.0002233 | -0.6197887   | 7.3157845     | -0.0000147   | 1.0000285        | torch.Size([2, 512, 256])        |
| 2932    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.2.weight_quant      | input               | torch.float32 |           | 0.7529514    | 1.2044538     | 0.9968498    | 0.0071440        | torch.Size([256])                |
| 2932    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.2.weight_quant      | output              | qint16        | 0.0000368 | 0.7529421    | 1.2044355     | 0.9968504    | 0.0071438        | torch.Size([256])                |
| 2933    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.weight_mul        | input_0             | qint16        | 0.0002233 | -0.6197887   | 7.3157845     | -0.0000147   | 1.0000285        | torch.Size([2, 512, 256])        |
| 2933    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.weight_mul        | input_1             | qint16        | 0.0000368 | 0.7529421    | 1.2044355     | 0.9968504    | 0.0071438        | torch.Size([256])                |
| 2933    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.weight_mul        | output              | qint16        | 0.0002407 | -0.7464323   | 7.7182689     | -0.0084309   | 0.9868575        | torch.Size([2, 512, 256])        |
| 2934    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.2.bias_quant        | input               | torch.float32 |           | -0.1380954   | 0.2172861     | 0.0049230    | 0.0046242        | torch.Size([256])                |
| 2934    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.2.bias_quant        | output              | qint16        | 0.0000066 | -0.1380936   | 0.2172828     | 0.0049230    | 0.0046242        | torch.Size([256])                |
| 2935    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.2.bias_add          | input_0             | qint16        | 0.0002407 | -0.7464323   | 7.7182689     | -0.0084309   | 0.9868575        | torch.Size([2, 512, 256])        |
| 2935    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.2.bias_add          | input_1             | qint16        | 0.0000066 | -0.1380936   | 0.2172828     | 0.0049230    | 0.0046242        | torch.Size([256])                |
| 2935    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.2.bias_add          | output              | qint8         | 0.0521971 | -0.8351532   | 6.6290283     | -0.0037030   | 1.0367030        | torch.Size([2, 512, 256])        |
| 2936    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.3                   | input               | qint8         | 0.0521971 | -0.8351532   | 6.6290283     | -0.0037030   | 1.0367030        | torch.Size([2, 512, 256])        |
| 2936    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.3                   | weight              | torch.float32 |           | -0.5449315   | 0.4749622     | 0.0150954    | 0.0048535        | torch.Size([256, 256])           |
| 2936    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.3                   | bias                | torch.float32 |           | -0.1342729   | 0.3925043     | -0.0479803   | 0.0025327        | torch.Size([256])                |
| 2936    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.3                   | output              | torch.float32 |           | -11.7092714  | 48.1567421    | -2.8045897   | 9.9707623        | torch.Size([2, 512, 256])        |
| 2937    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.quality_layers.4                   | input               | torch.float32 |           | -11.7092714  | 48.1567421    | -2.8045897   | 9.9707623        | torch.Size([2, 512, 256])        |
| 2937    | horizon_plugin_pytorch.nn.qat.relu.ReLU                                     | head.layers.41.quality_layers.4                   | output              | qint8         | 0.3741915 | 0.0000000    | 47.5223198    | 0.3020436    | 5.1916280        | torch.Size([2, 512, 256])        |
| 2938    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.input_mean.mean   | input_0             | qint8         | 0.3741915 | 0.0000000    | 47.5223198    | 0.3020436    | 5.1916280        | torch.Size([2, 512, 256])        |
| 2938    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.input_mean.mean   | output              | qint16        | 0.0000172 | 0.1943958    | 0.4516595     | 0.3020442    | 0.0030202        | torch.Size([2, 512, 1])          |
| 2939    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.5.sub               | input_0             | qint8         | 0.3741915 | 0.0000000    | 47.5223198    | 0.3020436    | 5.1916280        | torch.Size([2, 512, 256])        |
| 2939    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.5.sub               | input_1             | qint16        | 0.0000172 | 0.1943958    | 0.4516595     | 0.3020442    | 0.0030202        | torch.Size([2, 512, 1])          |
| 2939    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.5.sub               | output              | qint16        | 0.0015078 | -0.4523455   | 47.2218552    | -0.0000051   | 5.1886268        | torch.Size([2, 512, 256])        |
| 2940    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.mul               | input_0             | qint16        | 0.0015078 | -0.4523455   | 47.2218552    | -0.0000051   | 5.1886268        | torch.Size([2, 512, 256])        |
| 2940    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.mul               | input_1             | qint16        | 0.0015078 | -0.4523455   | 47.2218552    | -0.0000051   | 5.1886268        | torch.Size([2, 512, 256])        |
| 2940    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.mul               | output              | qint16        | 0.0744998 | 0.0000000    | 2229.9289551  | 5.1899548    | 5317.2915039     | torch.Size([2, 512, 256])        |
| 2941    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.var_mean.mean     | input_0             | qint16        | 0.0744998 | 0.0000000    | 2229.9289551  | 5.1899548    | 5317.2915039     | torch.Size([2, 512, 256])        |
| 2941    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.var_mean.mean     | output              | qint16        | 0.0003287 | 1.3937700    | 9.7563896     | 5.1899757    | 2.3815598        | torch.Size([2, 512, 1])          |
| 2942    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.quality_layers.5.rsqrt             | input               | qint16        | 0.0003287 | 1.3937700    | 9.7563896     | 5.1899757    | 2.3815598        | torch.Size([2, 512, 1])          |
| 2942    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                       | head.layers.41.quality_layers.5.rsqrt             | output              | qint16        | 0.0000276 | 0.3201473    | 0.8470258     | 0.4543195    | 0.0051414        | torch.Size([2, 512, 1])          |
| 2943    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.out_mul           | input_0             | qint16        | 0.0015078 | -0.4523455   | 47.2218552    | -0.0000051   | 5.1886268        | torch.Size([2, 512, 256])        |
| 2943    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.out_mul           | input_1             | qint16        | 0.0000276 | 0.3201473    | 0.8470258     | 0.4543195    | 0.0051414        | torch.Size([2, 512, 1])          |
| 2943    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.out_mul           | output              | qint16        | 0.0004683 | -0.3123307   | 15.3130989    | 0.0000008    | 0.9994488        | torch.Size([2, 512, 256])        |
| 2944    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.5.weight_quant      | input               | torch.float32 |           | 0.4071644    | 0.9784095     | 0.7547790    | 0.0145837        | torch.Size([256])                |
| 2944    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.5.weight_quant      | output              | qint16        | 0.0000299 | 0.4071593    | 0.9783945     | 0.7547793    | 0.0145835        | torch.Size([256])                |
| 2945    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.weight_mul        | input_0             | qint16        | 0.0004683 | -0.3123307   | 15.3130989    | 0.0000008    | 0.9994488        | torch.Size([2, 512, 256])        |
| 2945    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.weight_mul        | input_1             | qint16        | 0.0000299 | 0.4071593    | 0.9783945     | 0.7547793    | 0.0145835        | torch.Size([256])                |
| 2945    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.weight_mul        | output              | qint16        | 0.0002802 | -0.3054612   | 9.1643963     | -0.0040510   | 0.4064661        | torch.Size([2, 512, 256])        |
| 2946    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.5.bias_quant        | input               | torch.float32 |           | -0.5791797   | 0.1132794     | 0.0721813    | 0.0038805        | torch.Size([256])                |
| 2946    | horizon_plugin_pytorch.nn.qat.stubs.QuantStub                               | head.layers.41.quality_layers.5.bias_quant        | output              | qint16        | 0.0000177 | -0.5791885   | 0.1132818     | 0.0721815    | 0.0038806        | torch.Size([256])                |
| 2947    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.5.bias_add          | input_0             | qint16        | 0.0002802 | -0.3054612   | 9.1643963     | -0.0040510   | 0.4064661        | torch.Size([2, 512, 256])        |
| 2947    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.5.bias_add          | input_1             | qint16        | 0.0000177 | -0.5791885   | 0.1132818     | 0.0721815    | 0.0038806        | torch.Size([256])                |
| 2947    | horizon_plugin_pytorch.nn.qat.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.5.bias_add          | output              | qint8         | 0.0683446 | -0.6834456   | 8.6797590     | 0.0680463    | 0.3613338        | torch.Size([2, 512, 256])        |
| 2948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.6                   | input               | qint8         | 0.0683446 | -0.6834456   | 8.6797590     | 0.0680463    | 0.3613338        | torch.Size([2, 512, 256])        |
| 2948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.6                   | weight              | torch.float32 |           | -0.1633572   | 0.1557941     | -0.0001491   | 0.0013779        | torch.Size([2, 256])             |
| 2948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.6                   | bias                | torch.float32 |           | 0.0361053    | 0.0646671     | 0.0503862    | 0.0004079        | torch.Size([2])                  |
| 2948    | horizon_plugin_pytorch.nn.qat.linear.Linear                                 | head.layers.41.quality_layers.6                   | output              | torch.float32 |           | -2.4531858   | 5.0349183     | 0.1602004    | 0.9188936        | torch.Size([2, 512, 2])          |
| 2949    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(5)                                   | input               | qint16        | 0.0017906 | -53.4534492  | 53.4158478    | 0.2340835    | 76.4504242       | torch.Size([2, 512, 11])         |
| 2949    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(5)                                   | output              | torch.float32 |           | -53.4534492  | 53.4158478    | 0.2340835    | 76.4504242       | torch.Size([2, 512, 11])         |
| 2950    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(6)                                   | input               | torch.float32 |           | -8.3864613   | 3.0943866     | -4.9181986   | 1.7579498        | torch.Size([2, 512, 10])         |
| 2950    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(6)                                   | output              | torch.float32 |           | -8.3864613   | 3.0943866     | -4.9181986   | 1.7579498        | torch.Size([2, 512, 10])         |
| 2951    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(7)                                   | input               | torch.float32 |           | -2.4531858   | 5.0349183     | 0.1602004    | 0.9188936        | torch.Size([2, 512, 2])          |
| 2951    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(7)                                   | output              | torch.float32 |           | -2.4531858   | 5.0349183     | 0.1602004    | 0.9188936        | torch.Size([2, 512, 2])          |
| 2952    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(8)                                   | input               | torch.float32 |           | -8.3864613   | 3.0943866     | -4.9181986   | 1.7579498        | torch.Size([2, 512, 10])         |
| 2952    | horizon_plugin_pytorch.nn.qat.stubs.DeQuantStub                             | head.dequant(8)                                   | output              | torch.float32 |           | -8.3864613   | 3.0943866     | -4.9181986   | 1.7579498        | torch.Size([2, 512, 10])         |
+---------+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------+---------------+-----------+--------------+---------------+--------------+------------------+----------------------------------+
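
Rows at indices 2938-2947 spell out a normalization built from FloatFunctional primitives rather than a fused LayerNorm module: a per-token mean (input_mean.mean), centering (sub), squaring (mul of the centered tensor with itself), a second mean giving the variance (var_mean.mean), an inverse standard deviation (Rsqrt), and an affine tail applied through quantized parameters (weight_quant/weight_mul, bias_quant/bias_add). Notably, no epsilon add appears between var_mean and rsqrt; the rsqrt input is bounded well away from zero (Min 1.3937700 at index 2941), so no stabilizing step shows up in the trace. The sketch below is a float reference of that op sequence, not the plugin's quantized implementation; the shapes mirror the table, while the weight and bias values are placeholders drawn from the ranges the table records.

import torch

def decomposed_layer_norm(x: torch.Tensor,
                          weight: torch.Tensor,
                          bias: torch.Tensor) -> torch.Tensor:
    # Float mirror of the quantized op chain at indices 2938-2947.
    mu = x.mean(dim=-1, keepdim=True)                        # 2938: input_mean.mean -> [2, 512, 1]
    centered = x - mu                                        # 2939: sub
    var = (centered * centered).mean(dim=-1, keepdim=True)   # 2940 mul + 2941 var_mean.mean
    inv_std = torch.rsqrt(var)                               # 2942: Rsqrt (no eps step in the trace)
    normed = centered * inv_std                              # 2943: out_mul
    return normed * weight + bias                            # 2944-2947: weight_mul, bias_add

# Placeholder parameters; the real weight lies in [0.407, 0.978] per index 2944,
# and the bias in [-0.579, 0.113] per index 2946.
x = torch.randn(2, 512, 256)
weight = torch.empty(256).uniform_(0.407, 0.978)
bias = torch.empty(256).uniform_(-0.579, 0.113)
out = decomposed_layer_norm(x, weight, bias)
print(out.shape)  # torch.Size([2, 512, 256]), as in the bias_add output row

Running this reproduces the shapes in the table: the two mean outputs are [2, 512, 1] and the final output is [2, 512, 256], matching the bias_add row at index 2947.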
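Separately, the Scale column together with Dtype fixes the representable range of each fixed-point tensor, which makes the Min/Max columns easy to sanity-check. Assuming the usual two's-complement convention (qint8 spans [-128·s, 127·s], qint16 spans [-32768·s, 32767·s]; an assumption about the plugin's fixed-point layout, not something the table states), the ReLU output at index 2937 sits exactly at its ceiling: 127 × 0.3741915 ≈ 47.5223, matching the recorded Max. The representable_range helper below is hypothetical, written only for this check; it is not a horizon_plugin_pytorch API.

# Hypothetical helper (not a horizon_plugin_pytorch API): recover the
# representable range of a fixed-point tensor from the table's Dtype and
# Scale columns, assuming two's-complement integer ranges.
QRANGE = {"qint8": (-128, 127), "qint16": (-32768, 32767)}

def representable_range(dtype: str, scale: float) -> tuple:
    qmin, qmax = QRANGE[dtype]
    return qmin * scale, qmax * scale

# Index 2937, ReLU output: qint8 with scale 0.3741915.
lo, hi = representable_range("qint8", 0.3741915)
print(hi)  # ~47.5223, matching the recorded Max 47.5223198 (range saturated)

# Index 2938, mean output: qint16 with scale 0.0000172.
lo, hi = representable_range("qint16", 0.0000172)
print(hi)  # ~0.5636, above the observed Max 0.4516595 (headroom left)

When the observed Max sits at the quantizer ceiling, as at index 2937, values may be clipping; when it sits well below, as for the mean output at index 2938, the chosen scale leaves headroom.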